When using a coherent DMA API we must make sure to program
any aperture fields with the coherent aperture setting. To
do this the nvgpu_aperture_mask() function was modified to
take a third aperture mask argument for the coherent setting,
so that callers can use this function to generate coherent
aperture settings.
The aperture choice is somewhat tricky: the default version
of this function uses the state of the DMA API to determine
which aperture to use for SYSMEM: either coherent or
non-coherent internally. Thus a kernel user need only pass
the normal nvgpu_mem struct and the correct mask will be
chosen. Because many nvgpu_mem structs are not created
directly from the DMA API wrapper, it is easier to translate
SYSMEM to SYSMEM_COH after creation.
However, the GMMU mapping code will encounter buffers from
userspace with different coherency attributes than the DMA
API. Thus __nvgpu_aperture_mask() respects the aperture
setting passed in regardless of the DMA API state.
This aperture setting is derived from NVGPU_VM_MAP_IO_COHERENT,
since this is either passed in from userspace or set by the
kernel when using coherent DMA. The aperture field in attrs
is upgraded to the coherent setting if this flag is set.
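A minimal sketch of the selection logic described above follows; the
enum values and parameter names are illustrative, not the exact driver
definitions.

  #include <stdint.h>

  enum aperture {
      APERTURE_INVALID,
      APERTURE_SYSMEM,
      APERTURE_SYSMEM_COH,
      APERTURE_VIDMEM,
  };

  static uint32_t aperture_mask(enum aperture ap, uint32_t sysmem_mask,
                                uint32_t sysmem_coh_mask, uint32_t vidmem_mask)
  {
      /* Honor the aperture passed in, regardless of the DMA API state. */
      switch (ap) {
      case APERTURE_SYSMEM:
          return sysmem_mask;
      case APERTURE_SYSMEM_COH:
          return sysmem_coh_mask;
      case APERTURE_VIDMEM:
          return vidmem_mask;
      default:
          return 0;
      }
  }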
This change also adds a coherent sysmem mask everywhere that
it can. There are a couple of places that do not have a
coherent register field defined yet; these eventually need to
be defined and added.
Lastly, the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code, since this function has
no Linux dependencies.
Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
JIRA EVLR-2333
Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Miscellaneous fixes for the sched code:
1. Make sure get_addr() on an SGL respects the use phys flag since
the runlist needs physical addresses when NVLINK is in use.
2. Ensure the runlist is contiguous. Since the runlist memory is
not virtually addressed, the buffer must be physically contiguous.
3. Use all 64 bits of the runlist address in the runlist base addr
register (and related fields); see the sketch after this list.
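An illustrative sketch of using all 64 bits of the runlist base
address; the 4K-alignment shift and the lo/hi field split are
assumptions, not the exact register layout.

  #include <stdint.h>

  struct runlist_base_regs {
      uint32_t base_lo;
      uint32_t base_hi;
  };

  static void program_runlist_base(struct runlist_base_regs *regs,
                                   uint64_t runlist_addr)
  {
      uint64_t aligned = runlist_addr >> 12; /* assume 4K alignment */

      /* Program both halves so no address bits are dropped. */
      regs->base_lo = (uint32_t)(aligned & 0xffffffffULL);
      regs->base_hi = (uint32_t)(aligned >> 32);
  }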
JIRA EVLR-2333
Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654667
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch does a couple of things. First, it renames
NVGPU_DMA_COHERENT to NVGPU_USE_COHERENT_SYSMEM since the former
is somewhat ambiguous in meaning. The latter clearly states what
must happen: nvgpu needs to treat sysmem as coherent. This flag
simply follows the state of the DMA API, but there's no reason
to expect a casual reader of the code to know that when the DMA
API is coherent nvgpu must treat sysmem as coherent.
One thing to note though: when the dGPU is using PCIe and the
PCIe controller is coherent, it doesn't actually matter what we
do. However, we use this flag for determining how to make CPU
mappings in nvgpu_mem_begin(), so this flag is still relevant for
the CPU side of things.
Next, this patch adds a check in the core kernel GMMU mapping
routine to make sure that when the NVGPU_USE_COHERENT_SYSMEM flag
is set, the IO coherent flag is passed into the mapping code.
This is the primary fix that made NVLINK start working.
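A hedged sketch of that check; the flag value and field names follow
the commit text but are not the exact driver definitions.

  #include <stdbool.h>
  #include <stdint.h>

  #define NVGPU_VM_MAP_IO_COHERENT (1U << 0) /* illustrative bit value */

  struct gpu_state {
      bool use_coherent_sysmem; /* NVGPU_USE_COHERENT_SYSMEM */
  };

  static uint32_t fixup_map_flags(const struct gpu_state *g, uint32_t flags)
  {
      /* Coherent sysmem means the GMMU mapping must also be IO coherent. */
      if (g->use_coherent_sysmem)
          flags |= NVGPU_VM_MAP_IO_COHERENT;

      return flags;
  }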
Finally, the NVGPU_USE_COHERENT_SYSMEM flag and the
NVGPU_SUPPORT_IO_COHERENCE flag are now set both for PCIe and for
iGPUs. The iGPU must also correctly match its CPU mappings and
GPU mappings for proper operation.
JIRA EVLR-2333
Change-Id: Icd5f07167c9f48a0a2e8493e34c9cc6238e56907
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654519
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The value of NV_PERF_PMASYS_MEM_BUMP is different on Volta,
and NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH did not
behave correctly on GV11B because of that.
The patch adds a Volta instance of css_hw_set_handled_snapshots
to fix that.
The patch also renames css_hw_set_handled_snapshots
to gk20a_css_hw_set_handled_snapshots to make it clearer
that the function is arch-dependent.
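A rough sketch of the per-chip split: each arch provides its own
set_handled_snapshots variant selected through an ops-style function
pointer. The ops structure, function names, and stub bodies below are
placeholders, not the actual HAL layout.

  struct css_ops {
      void (*set_handled_snapshots)(unsigned int count);
  };

  /* pre-Volta and Volta variants (stub bodies for illustration only) */
  static void gk20a_css_hw_set_handled_snapshots(unsigned int count) { (void)count; }
  static void gv11b_css_hw_set_handled_snapshots(unsigned int count) { (void)count; }

  static void init_css_ops(struct css_ops *ops, int is_volta)
  {
      /* Volta needs its own variant due to the different MEM_BUMP value. */
      ops->set_handled_snapshots = is_volta ?
          gv11b_css_hw_set_handled_snapshots :
          gk20a_css_hw_set_handled_snapshots;
  }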
Bug 1960846
Change-Id: I92c35a862ecd7f918dd1458c086fc7ae42ca8fc5
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add support for user fence updates, i.e. increments added by user space
directly in the pushbuffer.
Add a submit IOCTL flag NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE to
indicate that the user has added increments in the pushbuffer.
If so, the number of increments is received in fence.value from the user.
If the user is adding increments in the pushbuffer then we don't need to
do any job tracking in the kernel, so fail the submit if need_job_tracking
evaluates to true while FLAGS_USER_FENCE_UPDATE is set.
The user is responsible for ensuring all prerequisites for a fast submit
and for preventing kernel job tracking.
Since user space adds increments in the pushbuffer, the kernel only
handles the threshold bookkeeping.
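A minimal sketch of the submit-time check described above; the flag bit
and helper name are illustrative, not the exact driver definitions.

  #include <errno.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE (1U << 5) /* assumed bit */

  static int check_user_fence_submit(uint32_t flags, bool need_job_tracking)
  {
      /*
       * User-managed fence increments require a fast submit path with
       * no kernel job tracking, so reject the submit otherwise.
       */
      if ((flags & NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE) &&
          need_job_tracking)
          return -EINVAL;

      return 0;
  }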
Bug 200326065
Jira NVGPU-179
Change-Id: Ic0f0b1aa69e3389a4c3305fb6a559c5113719e0f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1661854
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new user API NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT which exposes
the per-channel allocated syncpoint to user space.
The API also returns the current value of the syncpoint.
On supported platforms, this API also returns an RW semaphore address
(corresponding to the syncpoint shim) to user space.
Add a new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT to
indicate support for this new API.
Add a new flag NVGPU_SUPPORT_USER_SYNCPOINT for use by the core driver.
Set this flag for GV11B and GP10B for now.
Add a new API (*syncpt_address) in struct gk20a_channel_sync to get the
GPU_VA address of a syncpoint.
Add a new API nvgpu_nvhost_syncpt_read_maxval() which reads and returns
the MAX value of the syncpoint.
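A rough sketch of what the new ioctl could return, based on the
description above; the struct layout and field names are assumptions,
not the actual UAPI definition.

  #include <stdint.h>

  struct user_syncpoint_info {
      uint64_t sema_gpu_va;     /* RW semaphore (syncpoint shim) address, if supported */
      uint32_t syncpoint_id;    /* per-channel allocated syncpoint */
      uint32_t syncpoint_value; /* current value of the syncpoint */
  };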
Bug 200326065
Jira NVGPU-179
Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658009
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes implement the initial (as per bringup) nvlink driver:
(1) SW initialization of nvlink core driver structures
(2) Nvlink interrupt handling
(3) Device initialization (IOCTRL, pll and clocks, device level intr)
(4) Falcon support for minion
(5) Minion load and bootstrapping
(6) Link initialization and DL PROD settings
(7) Device Interface init (and switching HSHUB to nvlink)
(8) HS set/get mode for both link and sublink
(9) Topology discovery and VBIOS settings.
(10) Ensures we get physically contiguous memory when nvlink is enabled
This driver includes a hack for the current single dev/single link limitation.
JIRA: EVLR-2331
JIRA: EVLR-2330
JIRA: EVLR-2329
JIRA: EVLR-2328
Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are numerous places where variables are assigned to but then
never used. This patch cleans up all these unused variables and
in some cases simplifies surrounding logic.
Also delete unused header includes and add necessary header includes.
JIRA NVGPU-525
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ice9ec2a0e97f262d0dcfebe22f83208dbea569d9
Reviewed-on: https://git-master.nvidia.com/r/1662548
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The number of gr map_tiles is equal to the number of TPCs.
During gr_gv11b_setup_rop_mapping, check the validity of the
gr map_tiles before programming the gr_crstr_gpc_map_tile map.
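An illustrative sketch of that validity check; the gr state fields are
placeholders for the actual driver structures.

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  struct gr_state {
      uint32_t map_tile_count; /* equals the number of TPCs */
      uint8_t *map_tiles;
  };

  static bool map_tiles_valid(const struct gr_state *gr, uint32_t needed)
  {
      /* Only program gr_crstr_gpc_map_tile if enough map tiles exist. */
      return gr->map_tiles != NULL && gr->map_tile_count >= needed;
  }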
Bug 200389570
Bug 2051856
JIRA NVGPU-523
Change-Id: Iaeb13c6a433d76ad895f89909e3033f887665619
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1657727
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GV100 currently has the has_syncpoint flag set to false, and thus the
basic syncpoint SHIM requirements are not initialized for GV100.
Hence, disable the syncpoint address support flag for GV100 until
has_syncpoint is enabled.
Bug 200327559
Jira NVGPU-511
Change-Id: Iaf73efe0a2939cba802823d64fe7cb93e25bdbd8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658065
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Until the issue related to low frequencies is root-caused, limit the
minimum frequency to a known safe value: 216.75 MHz.
This change needs to be reverted once the original issue is
root-caused and fixed.
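A tiny sketch of the temporary clamp; the frequency value comes from
the text above, while the constant and helper names are illustrative.

  #define MIN_SAFE_FREQ_HZ 216750000UL /* 216.75 MHz */

  static unsigned long clamp_min_freq(unsigned long req_hz)
  {
      /* Never request below the known safe minimum. */
      return (req_hz < MIN_SAFE_FREQ_HZ) ? MIN_SAFE_FREQ_HZ : req_hz;
  }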
Bug 2051863
Bug 2056266
Change-Id: If6e56f59ee5fa06967fde1128b58a7fc97be74e9
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1657595
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The hw semas in a sema pool are stored in a list. All elements in this
list are freed in a loop when a semaphore pool is destroyed. However,
each hw sema is always owned by a channel, and each such channel frees
its hw sema during channel closure before putting a ref to the VM which
holds a ref to the sema pool. The lifetime of all the hw semas is
therefore shorter than that of the pool, and this list is always empty
when the pool is freed. Delete the list and this freeing loop.
Also delete the nr_incrs member in nvgpu_semaphore_int, which is never
accessed.
Jira NVGPU-512
Change-Id: Ie072029f9e7cc749141e9f02ef45fdf64358ad96
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1653540
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update subctx pdb info for all instblks at creation.
Earlier, subctx pdb info was updated during instblk commit, but some
instblks, such as the pmu instblk, are never committed. The missing
subctx pdb info in those instblks caused issues when accessing subctx
info. Filling the subctx pdb info during instblk creation fixes all of
these issues.
Also, as part of reorganizing gv11b_init_subcontext_pdb, the programming
of subctx info via ram_in_engine_wfi_veid_w() was moved to
channel_gv11b_setup_ramfc.
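A sketch of filling the subctx PDB info at inst block creation time;
the structure, constant, and helper below are placeholders rather than
the actual instance block layout.

  #include <stdint.h>

  #define MAX_SUBCTX 64 /* illustrative VEID count */

  struct inst_block {
      uint64_t subctx_pdb[MAX_SUBCTX]; /* per-subcontext PDB base */
  };

  /*
   * Written at creation, so inst blocks that are never committed (such
   * as the pmu inst block) still carry valid subctx PDB info.
   */
  static void init_subctx_pdb(struct inst_block *ib, uint64_t pdb_base)
  {
      for (unsigned int i = 0; i < MAX_SUBCTX; i++)
          ib->subctx_pdb[i] = pdb_base;
  }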
Bug 2051863
Change-Id: Ida96118e8f86b638fa6a8586d026ad2617ebbf64
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654678
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Log masks are bitmasks and are passed as u32 through the API calls.
They were still defined as enums, which caused unnecessary implicit
conversions.
Convert the log masks to be defined as u32.
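A sketch of the conversion: log masks defined directly as u32 bit
values rather than enum members. The names and bit positions here are
illustrative, not the driver's actual log mask definitions.

  #include <stdint.h>

  #define log_dbg_info ((uint32_t)(1U << 0))
  #define log_dbg_fn   ((uint32_t)(1U << 1))
  #define log_dbg_map  ((uint32_t)(1U << 2))

  static inline int log_enabled(uint32_t enabled_mask, uint32_t log_mask)
  {
      /* No implicit enum-to-u32 conversion needed anywhere. */
      return (enabled_mask & log_mask) != 0;
  }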
JIRA NVGPU-52
Change-Id: I4b20f0ad2a9f18056502940ea677b3ea8526d830
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649816
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>