We currently allocate an nvgpu-managed syncpoint in c->sync and share
that with user space. But to avoid conflicts between user space and
kernel space increments, allocate a separate "client managed" syncpoint
for user space in c->user_sync.
Add a new API nvgpu_nvhost_get_syncpt_client_managed() to request a
client managed syncpoint from nvhost. Note that nvhost/nvgpu do not
keep track of the MAX/threshold value of this syncpoint.
Update gk20a_channel_syncpt_create() to receive a flag indicating
whether a user space syncpoint is required.
Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to
allocate two syncpoints per channel on that platform. For gv11b,
support for c->sync will be dropped once we move to user space submits,
so we keep using only one syncpoint per channel.
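A minimal sketch of how the split might look at channel setup
(illustrative; gk20a_channel_syncpt_create() and
gk20a_channel_sync_destroy() are existing names, the surrounding
function and error handling are assumptions):

    /* Illustrative only, not the exact driver code. */
    static int channel_setup_syncpoints(struct channel_gk20a *c,
                                        bool user_syncpt)
    {
        /* Kernel-managed syncpoint: nvhost tracks MAX/threshold. */
        c->sync = gk20a_channel_syncpt_create(c, false);
        if (c->sync == NULL)
            return -ENOMEM;

        if (user_syncpt) {
            /*
             * Client-managed syncpoint for user space increments,
             * requested via nvgpu_nvhost_get_syncpt_client_managed()
             * inside the create path; nvhost/nvgpu do not track its
             * MAX/threshold value.
             */
            c->user_sync = gk20a_channel_syncpt_create(c, true);
            if (c->user_sync == NULL) {
                gk20a_channel_sync_destroy(c->sync);
                return -ENOMEM;
            }
        }

        return 0;
    }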
Bug 200326065
Jira NVGPU-179
Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl)
argument which is a void pointer. Change the type signatures to take
struct nvgpu_sgl * which is an opaque marker type that makes it more
difficult to pass around wrong arguments, as anything goes for void *.
Explicit types also make the code self-documenting.
For added safety, implementors of the nvgpu_sgt_ops interface now need
explicit type casts when converting between the general nvgpu_sgl type
and implementation-specific types. This is not purely a downside: the
casts show clearly where type conversions happen.
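The opaque-marker pattern is small enough to show; a sketch (the ops
subset and the Linux-side cast are illustrative):

    /*
     * Never defined anywhere: a pointer to it carries no operations,
     * so an unrelated pointer cannot be passed by accident the way it
     * could with void *.
     */
    struct nvgpu_sgl;

    struct nvgpu_sgt_ops {
        struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
        u64 (*sgl_length)(struct nvgpu_sgl *sgl);
        /* remaining ops elided */
    };

    /* A Linux implementation casts explicitly at the boundary: */
    static struct nvgpu_sgl *nvgpu_mem_linux_sgl_next(struct nvgpu_sgl *sgl)
    {
        return (struct nvgpu_sgl *)sg_next((struct scatterlist *)sgl);
    }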
Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305
Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.
Bug 2070609
Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
When using a coherent DMA API we must make sure to program any aperture
fields with the coherent aperture setting. To do this, the
nvgpu_aperture_mask() function was modified to take a third aperture
mask argument for the coherent setting, so that code can use this
function to generate coherent aperture settings.
The aperture choice is somewhat tricky: the default version of this
function uses the state of the DMA API to determine which aperture to
use for SYSMEM, either coherent or non-coherent. Thus a kernel user
need only pass the normal nvgpu_mem struct and the correct mask is
chosen. Since many nvgpu_mem structs are not created directly from the
DMA API wrapper, it is easier to translate SYSMEM to SYSMEM_COH after
creation.
However, the GMMU mapping code will encounter buffers from userspace
with different coherency attributes than the DMA API's. Thus
__nvgpu_aperture_mask() respects the aperture setting passed in
regardless of the DMA API state. This aperture setting is derived from
NVGPU_VM_MAP_IO_COHERENT, since that flag is either passed in from
userspace or set by the kernel when using coherent DMA. The aperture
field in attrs is upgraded to coherent if this flag is set.
This change also adds a coherent sysmem mask everywhere it can. A
couple of places do not yet have a coherent register field defined;
these eventually need to be defined and added.
Lastly, the aperture mask code has been moved from the Linux vm.c code
to the general vm.c code since this function has no Linux dependencies.
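A sketch of the three-mask selection described above (function shape
illustrative; the real mask values come from the generated hw headers):

    u32 __nvgpu_aperture_mask(struct gk20a *g, enum nvgpu_aperture ap,
                              u32 sysmem_mask, u32 sysmem_coh_mask,
                              u32 vidmem_mask)
    {
        /* Respects the aperture passed in, regardless of DMA API state. */
        switch (ap) {
        case APERTURE_SYSMEM_COH:
            return sysmem_coh_mask;
        case APERTURE_SYSMEM:
            return sysmem_mask;
        case APERTURE_VIDMEM:
            return vidmem_mask;
        default:
            return 0;
        }
    }

    u32 nvgpu_aperture_mask(struct gk20a *g, struct nvgpu_mem *mem,
                            u32 sysmem_mask, u32 sysmem_coh_mask,
                            u32 vidmem_mask)
    {
        enum nvgpu_aperture ap = mem->aperture;

        /* Kernel-allocated SYSMEM follows the DMA API state. */
        if (ap == APERTURE_SYSMEM &&
            nvgpu_is_enabled(g, NVGPU_USE_COHERENT_SYSMEM))
            ap = APERTURE_SYSMEM_COH;

        return __nvgpu_aperture_mask(g, ap, sysmem_mask,
                                     sysmem_coh_mask, vidmem_mask);
    }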
Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
JIRA EVLR-2333
Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch does a couple of things. First, it renames
NVGPU_DMA_COHERENT to NVGPU_USE_COHERENT_SYSMEM since the former is
somewhat ambiguous in meaning. The latter clearly states what must
happen: nvgpu needs to treat sysmem as coherent. The flag does simply
follow the state of the DMA API, but there's no reason to expect a
casual reader of the code to know that when the DMA API is coherent,
nvgpu must treat sysmem as coherent.
One thing to note, though: when the dGPU is using PCIe and the PCIe
controller is coherent, it doesn't actually matter what we do. However,
we use this flag to determine how to make CPU mappings in
nvgpu_mem_begin(), so it is still relevant for the CPU side of things.
Next, this patch adds a check in the core kernel GMMU mapping routine
to make sure that when the NVGPU_USE_COHERENT_SYSMEM flag is set, the
IO coherent flag is passed into the mapping code. This is the primary
fix that made NVLINK start working.
Finally, the NVGPU_USE_COHERENT_SYSMEM and NVGPU_SUPPORT_IO_COHERENCE
flags are now set both for PCIe and for iGPUs. The iGPU must also
correctly match its CPU mappings and GPU mappings for proper operation.
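The mapping-path check reduces to a couple of lines; roughly
(surrounding code illustrative):

    /* In the core GMMU map path (sketch): */
    if (nvgpu_is_enabled(g, NVGPU_USE_COHERENT_SYSMEM))
        flags |= NVGPU_VM_MAP_IO_COHERENT;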
JIRA EVLR-2333
Change-Id: Icd5f07167c9f48a0a2e8493e34c9cc6238e56907
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654519
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new user API NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT which exposes
the per-channel allocated syncpoint to user space. The API also returns
the current value of the syncpoint. On supported platforms, it
additionally returns a RW semaphore address (corresponding to the
syncpoint shim) to user space.
Add a new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT
to indicate support for this new API, and a new flag
NVGPU_SUPPORT_USER_SYNCPOINT for use by the core driver. Set this flag
for GV11B and GP10B for now.
Add a new API (*syncpt_address) in struct gk20a_channel_sync to get the
GPU_VA address of a syncpoint.
Add a new API nvgpu_nvhost_syncpt_read_maxval() which reads and returns
the MAX value of a syncpoint.
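Hypothetical user-space usage of the new ioctl (the args struct layout
here is illustrative; the real definition and the ioctl number live in
the nvgpu uapi header):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/types.h>

    /* Illustrative layout only. */
    struct nvgpu_get_user_syncpoint_args {
        __u64 gpu_va;           /* RW semaphore address, where supported */
        __u32 syncpoint_id;
        __u32 syncpoint_value;  /* current value at call time */
    };

    static void query_user_syncpt(int channel_fd)
    {
        struct nvgpu_get_user_syncpoint_args args = {0};

        if (ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT,
                  &args) == 0)
            printf("syncpt %u value %u gpu_va 0x%llx\n",
                   args.syncpoint_id, args.syncpoint_value,
                   (unsigned long long)args.gpu_va);
    }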
Bug 200326065
Jira NVGPU-179
Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658009
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes implement the initial (as per bringup) nvlink driver:
(1) SW initialization of nvlink core driver structures
(2) Nvlink interrupt handling
(3) Device initialization (IOCTRL, pll and clocks, device level intr)
(4) Falcon support for minion
(5) Minion load and bootstrapping
(6) Link initialization and DL PROD settings
(7) Device Interface init (and switching HSHUB to nvlink)
(8) HS set/get mode for both link and sublink
(9) Topology discovery and VBIOS settings.
(10) Ensuring we get physically contiguous memory when nvlink is enabled
This driver includes a hack for the current single dev/single link limitation.
JIRA: EVLR-2331
JIRA: EVLR-2330
JIRA: EVLR-2329
JIRA: EVLR-2328
Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The hw semas in a sema pool are stored in a list. All elements in this
list are freed in a loop when a semaphore pool is destroyed. However,
each hw sema is always owned by a channel, and each such channel frees
its hw sema during channel closure before putting a ref to the VM which
holds a ref to the sema pool, so the lifetime of all the hw semas is
shorter than that of the pool and this list is always empty when freeing
the pool. Delete the list and this freeing loop.
Also delete the nr_incrs member of nvgpu_semaphore_int, which is never
accessed.
Jira NVGPU-512
Change-Id: Ie072029f9e7cc749141e9f02ef45fdf64358ad96
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1653540
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Log masks are bitmasks, passed as u32 through the API calls. They were
still defined as enums, which causes unnecessary implicit conversions.
Convert the log masks to be defined as u32.
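The conversion is mechanical; an illustrative before/after (mask names
follow nvgpu's gpu_dbg_* convention, exact values are assumptions):

    /* Before: enum constants, implicitly converted wherever u32 is used. */
    enum nvgpu_log_mask {
        gpu_dbg_info = 1 << 0,
        gpu_dbg_fn   = 1 << 1,
    };

    /* After: plain u32 masks matching the u32 API parameter. */
    #define gpu_dbg_info    ((u32)(1U << 0))
    #define gpu_dbg_fn      ((u32)(1U << 1))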
JIRA NVGPU-52
Change-Id: I4b20f0ad2a9f18056502940ea677b3ea8526d830
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649816
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get a read-only
syncpoint address map in user space.
We already map the whole syncpoint shim into each address space, with
the base address being vm->syncpt_ro_map_gpu_va. This new API exposes
this base GPU_VA address of the syncpoint map, and the unit size of
each syncpoint, to user space. User space can then calculate the
address of each syncpoint as
syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)
Note that this syncpoint address is read-only and should only be used
for inserting semaphore acquires. Adding a semaphore release with this
address would result in an MMU_FAULT.
Define a new HAL g->ops.fifo.get_sync_ro_map and set it for all GPUs
supported on the Xavier SoC.
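Hypothetical user-space use of the map (args layout illustrative; the
real struct is in the nvgpu uapi header):

    #include <linux/types.h>

    /* Illustrative layout only. */
    struct nvgpu_as_get_sync_ro_map_args {
        __u64 base_gpu_va;   /* vm->syncpt_ro_map_gpu_va */
        __u32 sync_size;     /* bytes per syncpoint in the shim */
        __u32 padding;
    };

    /* Read-only: usable for semaphore acquires, never for releases. */
    static __u64 syncpt_gpu_va(const struct nvgpu_as_get_sync_ro_map_args *map,
                               __u32 syncpoint_id)
    {
        return map->base_gpu_va + (__u64)syncpoint_id * map->sync_size;
    }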
Bug 200327559
Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649365
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the characteristics flag NVGPU_GPU_FLAGS_SUPPORT_SYNCPOINT_ADDRESS
to indicate whether the platform supports a semaphore GPU_VA address
for a syncpoint. Define NVGPU_SUPPORT_SYNCPOINT_ADDRESS for core driver
bookkeeping.
Set this flag for both GV100 and GV11B, since the Xavier SoC supports a
semaphore GPU_VA address for a syncpoint through the syncpoint SHIM.
Bug 200327559
Change-Id: I1f31673c9fd59f493d0b35a80d23151fc063ae06
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649364
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The GPU has multiple different operating modes with respect to
IOMMU'ability, so there needs to be a clean way to tell the driver
whether it is IOMMU'able or not. This state also does not always
reflect what is possible: just because the GPU can generate IOMMU'ed
memory requests doesn't mean it wants to.
The nvgpu_iommuable() API has now existed for a little while and is a
useful way to convey whether nvgpu should consider the GPU as
IOMMU'able. However, there is also the g->mm.bypass_smmu flag, which
used to be able to override what the GPU decided it should do.
Typically it was assigned the same value as nvgpu_iommuable(), but that
was not necessarily a requirement.
This patch removes all usages of g->mm.bypass_smmu and replaces the
checks against it with nvgpu_iommuable(). The code should now be much
cleaner.
Subsequently, other checks can also be placed in the nvgpu_iommuable()
function. For example, when NVLINK comes online and the GPU should no
longer consider DMA addresses and instead use scatter-gather lists
directly, the nvgpu_iommuable() function will be able to check the
state of NVLINK and act accordingly.
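At a call site the change looks roughly like this (sketch; the
nvgpu_mem_phys_addr() helper is hypothetical):

    /* Before: cached flag, set up once at probe time. */
    if (g->mm.bypass_smmu)
        addr = nvgpu_mem_phys_addr(g, mem);

    /* After: one predicate, which can later also consult NVLINK state. */
    if (!nvgpu_iommuable(g))
        addr = nvgpu_mem_phys_addr(g, mem);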
Change-Id: I0da6262386de15709decac89d63d3eecfec20cd7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648332
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
VM and CDE code assumes that dma_buf_attachment is stored as a pointer
in the private dma_buf_drvdata, so it is not tracked. In Linux trees
without dma_buf_*_drvdata() support this is not true, so change the
code to explicitly track dma_buf_attachment.
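A sketch of the explicit tracking (struct and field names illustrative;
the dma_buf calls are the standard Linux API):

    #include <linux/dma-buf.h>
    #include <linux/err.h>

    struct nvgpu_dmabuf_priv {
        struct dma_buf_attachment *attachment;
        struct sg_table *sgt;
    };

    static int nvgpu_dmabuf_do_attach(struct device *dev,
                                      struct dma_buf *dmabuf,
                                      struct nvgpu_dmabuf_priv *priv)
    {
        /* Keep the attachment instead of relying on dma_buf_*_drvdata(). */
        priv->attachment = dma_buf_attach(dmabuf, dev);
        if (IS_ERR(priv->attachment))
            return PTR_ERR(priv->attachment);

        priv->sgt = dma_buf_map_attachment(priv->attachment,
                                           DMA_BIDIRECTIONAL);
        if (IS_ERR(priv->sgt)) {
            dma_buf_detach(dmabuf, priv->attachment);
            return PTR_ERR(priv->sgt);
        }

        return 0;
    }

The unmap path then uses the tracked pointer for
dma_buf_unmap_attachment() and dma_buf_detach() rather than looking it
up from the dma_buf's private data.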
JIRA NVGPU-4
Change-Id: I692f05a19a6469195d5444a7e5ff6e92f77ae272
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648004
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- pd, scc, ds, ssync, mme and sked exceptions are enabled. This will be
useful for debugging.
- Handle the enabled interrupts.
- Add a gr op to handle ssync hww. For legacy chips, the ssync hww_esr
register is gpcs_ppcs_ssync_hww_esr. Since ssync hww is not enabled on
legacy chips, ssync hww exception handling is added for Volta only.
Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644751
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Adds the skeleton and integration of the GV100 endpoint driver to NVGPU
(1) Adds an OS abstraction layer for the internal nvlink structure.
(2) Adds Linux-specific integration with the nvlink core driver.
(3) Adds function pointers for the nvlink API, initialization and ISR
processing.
(4) Adds initial support for minion.
(5) Adds new GPU enable properties to handle NVLINK presence
(6) Adds new GPU enable properties for SG_PHY bypass (required for NVLINK over
PCI)
(7) Adds parsing of nvlink vbios structures.
(8) Adds logging defines for NVGPU
JIRA: EVLR-2328
Change-Id: I0720a165a15c7187892c8c1a0662ec598354ac06
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644708
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
These '\n' were left over from the previous debugging macro usage,
which did not add the '\n' automatically. Once swapped over to the
nvgpu logging system, the '\n' is added and no longer needs to be
present in the code.
This did require one extra modification to keep things consistent: the
__alloc_pstat() macro, used for sending output either to a seq_file or
the terminal, needed to add the '\n' for seq_printf() calls, and the
'\n' had to be deleted in the C files.
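The resulting macro shape, roughly (illustrative; alloc_dbg() is the
nvgpu logging wrapper that appends '\n' itself):

    /* seq_printf() needs an explicit '\n'; the nvgpu log path adds its own. */
    #define __alloc_pstat(seq, allocator, fmt, arg...)          \
        do {                                                    \
            if (seq)                                            \
                seq_printf(seq, fmt "\n", ##arg);               \
            else                                                \
                alloc_dbg(allocator, fmt, ##arg);               \
        } while (0)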
Change-Id: I4d56317fe2a87bd00033cfe79d06ffc048d91049
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613641
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created nv_pmu_rpc_struct_acr_bootstrap_gr_falcons struct
- Added gv100_load_falcon_ucode() to bootstrap GR falcons using RPC;
wait for INIT_WPR_REGION before creating & executing the
BOOTSTRAP_GR_FALCONS RPC.
- Added code to handle BOOTSTRAP_GR_FALCONS ack in
RPC handler
Change-Id: If70dc75bb2789970382853fb001d970a346b2915
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created nv_pmu_rpc_struct_acr_init_wpr_region struct
- Function gv100_pmu_init_acr() to create & execute
INIT_WPR_REGION using RPC.
- Updated gv100 HAL .init_wpr_region to point
to gv100_pmu_init_acr()
- Added code to handle INIT_WPR_REGION ack in
RPC handler.
Change-Id: I699fa945790689e5f24ad5d3de022efb458662e0
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613290
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added a new version of the pmu init msg, "pmu_init_msg_pmu_v5".
- Created methods to read the new pmu init message parameters based on
f/w version for the ops below:
.get_pmu_msg_pmu_init_msg_ptr
.get_pmu_init_msg_pmu_sw_mg_off
.get_pmu_init_msg_pmu_sw_mg_size
- Corrected the PMU_DMEM_ALLOC_ALIGNMENT value to 32 bit to allocate
PMU DMEM space for nvgpu.
- Updated the PMU version of GV100/APP_VERSION_BIGGPU to 23440730; the
PMU ucode CL is
https://git-master.nvidia.com/r/#/c/1642432/
Change-Id: Ib1e0197b5f3a229a601e810c9c0d93f05b9d69e7
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642229
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Lots of code paths were split into T19x-specific code paths and structs
due to the split repository. Now that the repositories are merged, fold
them all back into the main code paths and structs and remove the
T19x-specific Kconfig flag.
Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added sw method support for SET_BES_CROP_DEBUG4. In this sw method,
CLAMP_FP_BLEND_TO_MAXVAL forces overflowing blend results to clamp to
FP maxval, and CLAMP_FP_BLEND_TO_INF forces them to clamp to INF.
Added support for this sw method in gp10b/gp106/gv11b and gv100.
Bug 2046636
Change-Id: I3a9e97587aca76718f7f504ea3b853f87409092a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1641529
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created nv_pmu_rpc_cmd & nv_pmu_rpc_msg structs, & added an rpc
member under pmu_cmd & pmu_msg.
- Created the RPC header interface.
- Created the RPC desc struct & added it as a member of pmu payload.
- Defined PMU_RPC_EXECUTE() to convert different RPC requests into a
generic RPC call.
- Added nvgpu_pmu_rpc_execute() to execute an RPC request by creating
the required RPC payload & sending the request to the PMU.
- nvgpu_pmu_rpc_execute() also serves as the default callback handler
for an RPC if the caller has not provided one.
- Modified nvgpu_pmu_rpc_execute() to include a check of the RPC
payload parameters.
- Modified nvgpu_pmu_cmd_post() to handle RPC payload requests.
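The macro and function fit together roughly like this (sketch; the
header field names and the exact nvgpu_pmu_rpc_execute() signature are
assumptions):

    /* Expands a unit/function pair into a generic RPC execution call. */
    #define PMU_RPC_EXECUTE(status, pmu, unit, func, rpc)               \
        do {                                                            \
            memset(&(rpc), 0, sizeof(rpc));                             \
            (rpc).hdr.unit_id  = PMU_UNIT_##unit;                       \
            (rpc).hdr.function = NV_PMU_RPC_ID_##unit##_##func;         \
            (status) = nvgpu_pmu_rpc_execute((pmu), &(rpc).hdr,         \
                            sizeof(rpc) - sizeof((rpc).hdr),            \
                            NULL, NULL);                                \
        } while (0)

    /* Used as, for example: */
    struct nv_pmu_rpc_struct_acr_init_wpr_region rpc;
    int err;

    PMU_RPC_EXECUTE(err, pmu, ACR, INIT_WPR_REGION, rpc);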
JIRA GPUT19X-137
Change-Id: Iac140eb6b98d6bae06a089e71c96f15068fe7e7b
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1613266
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- MMU_CTRL_USE_PDB_BIG_PAGE_SIZE is set to TRUE, and hence
RAMIN_BIG_PAGE_SIZE should be set to 64KB, i.e. value 1. By default it
is set to 128KB, i.e. value 0.
- This change also fixes an issue where perfbuffer_enable and
nvgpu_init_hwpm pass 0 as the big page size while initializing the
inst_block, due to which ramin_big_page_size does not get updated to
64KB and remains set to the unsupported 128KB value.
- Volta supports 64KB for big pages; selecting 128KB for big pages
results in an UNBOUND_INSTANCE fault.
Bug 200327596
Change-Id: Ie304e4e5ff7bedaead27e9380d64c59013dd64ca
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639540
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Upon receiving an MMU_FAULT error, the MMU will forward MMU_NACK to the
SM. If MMU_NACK is masked out, the SM will simply release the
semaphores, and if the semaphores are released before the MMU fault is
handled, user space could incorrectly see the operation as successful.
Fix this by handling the SM-reported MMU_NACK exception. Enable
MMU_NACK reporting in gv11b_gr_set_hww_esr_report_mask.
In the MMU_NACK handling path, we just set the error notifier and clear
the interrupt so that user space sees the error as soon as the
semaphores are released by the SM. The MMU_FAULT handling path will
take care of triggering RC recovery anyway.
Also add the necessary h/w accessors for mmu_nack.
Bug 2040594
Jira NVGPU-473
Change-Id: Ic925c2d3f3069016c57d177713066c29ab39dc3d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1631708
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>