The set_patch_addr parameter to nvgpu_gr_ctx_set_patch_ctx was redundant.
Remove it.
Add new functions: nvgpu_gr_ctx_set_hwpm_pm_mode to set the PM mode,
and nvgpu_gr_ctx_set_hwpm_ptr to set the PM pointer in gr_ctx. Rename
the subctx function to nvgpu_gr_subctx_set_hwpm_ptr.
This simplifies the logic in gr_gk20a_update_hwpm_ctxsw_mode that sets
the PM mode and PM pointer; the channel loop is now needed only for
subcontexts.
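A rough sketch of the resulting flow; the TSG/channel iteration helper
and the argument lists are illustrative, only the three function names
come from this change:

    /* Set the PM mode and PM pointer once on the TSG's gr_ctx. */
    nvgpu_gr_ctx_set_hwpm_pm_mode(g, gr_ctx);
    nvgpu_gr_ctx_set_hwpm_ptr(g, gr_ctx, pm_ctx_gpu_va);

    /* Loop over channels only to update subcontext headers. */
    for_each_channel_in_tsg(tsg, ch) {               /* illustrative */
            if (ch->subctx != NULL) {
                    nvgpu_gr_subctx_set_hwpm_ptr(g, ch->subctx,
                                                 pm_ctx_gpu_va);
            }
    }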
Bug 3677982
Change-Id: I44acb09f6296ba8d510e278910188864f39e7157
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2743724
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Added implementation for the following IOCTLs:
NVGPU_NVS_CTRL_FIFO_CREATE_QUEUE
NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE
The above ioctls are supported only for users with
R/W permissions.
1) NVGPU_NVS_CTRL_FIFO_CREATE_QUEUE constructs a memory region
via the nvgpu_dma_alloc_sys() API and creates the corresponding
GPU and kernel mappings. Upon successful creation, KMD exports
this buffer to userspace via a dmabuf fd that the UMD can use
to mmap it into its process address space (see the sketch after
this list).
2) Added plumbing to store VMAs corresponding to different users
of the event queue, for future use.
3) Added necessary validation checks for the IOCTLs.
4) NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE is used to clear the queues.
5) A global queue lock is used to protect access to the queues. This
could be made more fine-grained in the future when there is more
clarity on GSP's implementation and access of queues.
6) Added plumbing to enable user subscription to queues.
NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE is used to unsubscribe
the user from the queue. Once the last user is deleted,
all the queues are cleared. Users must ensure that any
mappings are removed before calling release queue.
7) Set the default queue_size for event queues to
PAGE_SIZE. This can be modified later. For event
queues, the UMD shall fetch the queue_size.
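A minimal sketch of the create-queue path. nvgpu_dma_alloc_sys() and
nvgpu_dma_free() are the driver's sysmem DMA helpers; the queue struct
and the map-and-export helper are illustrative assumptions:

    static int nvs_ctrl_fifo_create_queue(struct gk20a *g,
                                          struct nvs_queue *queue,
                                          size_t queue_size)
    {
            int err;

            /* Sysmem backing with a kernel mapping. */
            err = nvgpu_dma_alloc_sys(g, queue_size, &queue->mem);
            if (err != 0)
                    return err;

            /* Illustrative: map for GPU access, then export a dmabuf
             * fd so the UMD can mmap the queue into its address space. */
            err = nvs_queue_map_and_export(g, queue);
            if (err != 0)
                    nvgpu_dma_free(g, &queue->mem);
            return err;
    }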
Jira NVGPU-8129
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I31633174e960ec6feb77caede9d143b3b3c145d7
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2723198
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add a device node for management of nvs control fifo buffers for
scheduling domains. The current design consists of a master structure,
struct nvgpu_nvs_domain_sched_ctrl, for management of users as well
as control queues. Initially, all users are added as non-exclusive users.
Subsequent changes will add support for IOCTLs to manage opening of
Send/Receive and Event buffers, querying characteristics, etc.
In subsequent changes, a user that tries to open a Send/Receive queue
will first try to reserve itself as an exclusive user, and only if that
succeeds can it proceed with creating both Send/Receive queues.
Exclusive users are reset to non-exclusive users just before they
close their device node handle.
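A sketch of the exclusive-user reservation described above; the struct
members and lock are illustrative assumptions:

    /* Only one exclusive user at a time; others stay non-exclusive. */
    static int nvs_reserve_exclusive(struct nvs_sched_ctrl *sched_ctrl,
                                     struct nvs_user *user)
    {
            int err = 0;

            nvgpu_mutex_acquire(&sched_ctrl->lock);
            if (sched_ctrl->has_exclusive_user) {
                    err = -EBUSY;
            } else {
                    sched_ctrl->has_exclusive_user = true;
                    user->exclusive = true;
            }
            nvgpu_mutex_release(&sched_ctrl->lock);
            return err;
    }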
Jira NVGPU-8128
Change-Id: I15a83f70cd49c685510a9fd5ea4476ebb3544378
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2691404
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
BVEC changes for nvgpu_rc_pbdma_fault and nvgpu_rc_mmu_fault
started reporting the MISRA issues below.
kernel/nvgpu/drivers/gpu/nvgpu/common/fifo/tsg.c:522:
1. misra_c_2012_rule_10_4_violation: Essential type of the left hand
operand "error_notifier" (unsigned) is not the same as that of
the right operand "NVGPU_ERR_NOTIFIER_INVAL"(enum).
kernel/nvgpu/drivers/gpu/nvgpu/common/fifo/tsg.c:541:
1. misra_c_2012_rule_10_3_violation: Implicit conversion of
"NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT" from essential type
"anonymous enum" to different or narrower essential type
"unsigned 32-bit int".
Change the enum nvgpu_err_notif values to u32 values declared using
#define macros.
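Illustratively (the actual values are not shown here), the change has
this shape:

    /* Before: essential type is "anonymous enum". */
    enum nvgpu_err_notif {
            NVGPU_ERR_NOTIFIER_INVAL,
            NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT,
    };

    /* After: essential type is unsigned 32-bit, matching the u32
     * error_notifier operands (values illustrative). */
    #define NVGPU_ERR_NOTIFIER_INVAL                  0U
    #define NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT 1U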
JIRA NVGPU-6772
Change-Id: Icac7f567cea52cde07ca200b21eb3e7dd2b9e645
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2584153
(cherry picked from commit 2f073f341bd55242c857c6c6d35d6015495025e2)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2623634
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit
This patch primarily separates runlist modification from
runlist submits.
Instead of submitting the runlist (domain) immediately after
modification, a worker thread interface is now used to
synchronously schedule runlist submits. If the runlist being
scheduled is currently active, the submit happens instantly;
otherwise, it happens in the next iteration when the nvs
thread schedules the domain. This external interface uses
a condition variable to wait for the completion of the
synchronous submits.
A pending_update variable is used to synchronize domain memory
swaps just before submission.
To facilitate faster scheduling via the NVS thread, nvgpu_dom
itself contains an array of rl_domain pointers. This can then
be used to select the appropriate rl_domain directly for scheduling,
as opposed to the earlier approach of keeping nvs domains and rl
domains in sync every time.
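A sketch of the synchronous-submit handshake; the condition-variable
helpers and field names here are illustrative, only pending_update is
named by this change:

    /* Caller: request a submit, then wait for the worker to ack. */
    dom->pending_update = true;
    nvs_worker_wakeup(sched);                        /* illustrative */
    nvs_cond_wait(&dom->submit_cond, &dom->pending_update);

    /* Worker: swap domain memory, submit, signal completion. */
    if (dom->pending_update) {
            swap_domain_mem(dom);                    /* illustrative */
            submit_runlist(g, dom->rl_domain);
            dom->pending_update = false;
            nvs_cond_signal(&dom->submit_cond);
    }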
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I1725c7cf56407cca2e3d2589833d1c0b66a7ad7b
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2739795
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit
The HSI error injection utility is an on-bench debug and test utility
which can be used by customers and SQA to test the end-to-end error
detection and reporting path.
Implement a callback function to integrate with this utility and allow
injecting GPU HSI related errors.
As part of the callback function hsierrrpt_inj(), invoke the driver's
error-reporting logic, which uses the EPD MISC_EC APIs. In the future,
we can enhance the callback function to trigger the driver's error
handling logic incrementally for different errors.
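A minimal sketch of such a callback. Only the name hsierrrpt_inj()
comes from this change; the argument types and the reporting helper
are assumptions:

    /* Invoked by the HSI error injection utility for a given error. */
    static void hsierrrpt_inj(unsigned int unit, unsigned int err_id,
                              void *data)
    {
            struct gk20a *g = (struct gk20a *)data;

            /* Reuse the driver's normal reporting path (EPD MISC_EC). */
            nvgpu_report_hsi_err(g, unit, err_id);   /* illustrative */
    }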
Bug 3413214
Change-Id: I2d050b6c850d6151b40095f243a6733b4ba74f47
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2727198
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
While creating a new channel, ioctls are called in the following sequence:
1. GPU_IOCTL_OPEN_CHANNEL
2. AS_IOCTL_BIND_CHANNEL
3. TSG_IOCTL_BIND_CHANNEL_EX
4. CHANNEL_ALLOC_GPFIFO_EX
5. CHANNEL_ALLOC_OBJ_CTX.
Subctx PDBs and the valid mask are programmed in the channel instance
block in the ioctls AS_IOCTL_BIND_CHANNEL and CHANNEL_ALLOC_GPFIFO_EX.
Programming them in the ioctl AS_IOCTL_BIND_CHANNEL is redundant.
Remove the related HAL g->ops.mm.init_inst_block_for_subctxs.
The HAL init_inst_block programs the context PDB and big page size.
The HAL init_inst_block_core programs the context PDB, big page size
and the subctx 0 PDB. It is used by h/w units (fecs, pmu, hwpm, bar1,
bar2, sec2, gsp, perfbuf, etc.).
For user channels, subctx PDBs are programmed as part of ramfc setup.
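How the two HALs divide the work, sketched with illustrative helper
names and signatures:

    /* init_inst_block: context PDB + big page size. */
    static void init_inst_block(struct gk20a *g, struct nvgpu_mem *inst,
                                struct vm_gk20a *vm, u32 big_page_size)
    {
            program_pdb(g, inst, vm);                 /* illustrative */
            program_big_page_size(g, inst, big_page_size);
    }

    /* init_inst_block_core: same, plus the subctx 0 PDB, for h/w
     * units such as fecs, pmu, hwpm, bar1/2, sec2, gsp, perfbuf. */
    static void init_inst_block_core(struct gk20a *g,
                                     struct nvgpu_mem *inst,
                                     struct vm_gk20a *vm,
                                     u32 big_page_size)
    {
            init_inst_block(g, inst, vm, big_page_size);
            program_subctx_pdb(g, inst, vm, 0U);      /* illustrative */
    }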
Bug 3677982
Change-Id: I6656b002d513404c1fd7c3d349933e80cca7e604
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2680907
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gr ctx buffer is non-cacheable, hence there is no need to do an L2
cache flush when updating the buffer. Remove the flushes.
The pm ctx buffer is cacheable, hence add an L2 flush in the function
nvgpu_profiler_quiesce_hwpm_streamout_non_resident since it
updates the buffer.
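Sketch of where the added flush lands; the HAL hook and its signature
are assumptions:

    /* After CPU writes to the cacheable pm ctx buffer, flush L2 so
     * the GPU observes the update (second arg: also invalidate). */
    err = g->ops.mm.cache.l2_flush(g, true);
    if (err != 0) {
            nvgpu_err(g, "l2_flush failed");
    }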
Bug 3677982
Change-Id: I0c15ec7a7f8fa250af1d25891122acc24443a872
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2713916
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch defines a static ZBC table and configures it at the SW layer.
The existing API then reads this SW configuration and programs it to HW.
This is applicable only to the ga10b safety build; for other chips/
configurations it is supported in the legacy way.
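A sketch of what such a static table can look like; the struct layout
and entry values are illustrative, not the actual ga10b table:

    struct zbc_color_entry {
            u32 format;
            u32 color_l2[4];
    };

    /* SW-defined defaults, programmed to HW by the existing ZBC API. */
    static const struct zbc_color_entry sw_zbc_color_table[] = {
            { .format = 0x1U, .color_l2 = { 0U, 0U, 0U, 0U } },
            { .format = 0x1U, .color_l2 = { 0xffffffffU, 0xffffffffU,
                                            0xffffffffU, 0xffffffffU } },
    };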
Bug 3585766
Change-Id: I00d79162c0b096616e3f555da965e82e47c014d1
Signed-off-by: prsethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2713821
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Background: In case of a deferred suspend implemented by gk20a_idle,
the device waits for a delay before suspending and invoking
power gating callbacks. This helps minimize resume latency for any
resume calls (gk20a_busy) that occur before the delay expires.
Now, some APIs spread across the driver require that if the device
is powered on, they can proceed with register writes, but if it is
powered off, they must return. Examples of such APIs include
l2_flush, fb_flush and even nvs_thread. We have relied on
some hacks to keep the device powered on and prevent any such
delayed suspension from proceeding. However, this still raced for some
calls like the l2_flush ioctl, so gk20a_busy() was added (refer to
commit dd341e7ecbaf65843cb8059f9d57a8be58952f63).
The upstream Linux kernel has introduced the API pm_runtime_get_if_active
specifically to handle this corner case of locking the state during
a deferred suspend.
According to the Linux kernel docs, invoking the API with the
ign_usage_count parameter set to true prevents an incoming suspend
if the device has not already suspended.
With this, there is no longer a need to check
nvgpu_is_powered_off(). Changed the behavior of gk20a_busy_noresume()
to return bool: it returns true iff it managed to prevent
an imminent suspend, else it returns false. For cases where
PM runtime is disabled, the code follows the existing implementation.
Added missing gk20a_busy_noresume() calls to tlb_invalidate.
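A sketch of the new semantics, assuming the kernel's two-argument form
of pm_runtime_get_if_active (the non-runtime-PM branch is simplified):

    #include <linux/pm_runtime.h>

    bool gk20a_busy_noresume(struct device *dev)
    {
    #ifdef CONFIG_PM
            /* > 0 only if the device is RPM_ACTIVE: the usage count
             * is bumped and a deferred suspend can no longer proceed;
             * the caller must drop the reference when done. */
            return pm_runtime_get_if_active(dev, true) > 0;
    #else
            return true;    /* runtime PM disabled: always powered */
    #endif
    }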
Also, moved gk20a_pm_deinit to after nvgpu_quiesce() in
the module removal path. This is done to prevent register access
after registers are locked out at the end of nvgpu_quiesce. This
can happen as some free function calls post quiesce might still
have l2_flush or fb_flush deep inside their stack; hence invoke
gk20a_pm_deinit to disable pm_runtime immediately after quiesce.
Kept the legacy implementation the same for VGPU and
older kernels.
Jira NVGPU-8487
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I972f9afe577b670c44fc09e3177a5ce8a44ca338
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2715654
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
nvgpu_pmu_rpc_execute takes the PMU RPC header address and dereferences
memory past the header, based on the RPC struct that the header is
part of.
This usage of the pointer is not right and confuses the CERT checker.
Instead, pass the RPC struct address as a char pointer and use it
as the header or the RPC struct as needed.
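The shape of the fix, with illustrative types and bounds checks (not
the actual driver signatures):

    struct rpc_header {
            u32 function;
            u32 size;       /* total size of the enclosing RPC struct */
    };

    static int pmu_rpc_execute(u8 *rpc, size_t size)
    {
            struct rpc_header *hdr = (struct rpc_header *)rpc;

            if (size < sizeof(*hdr) || hdr->size > size)
                    return -EINVAL;
            /* All payload accesses stay within rpc[0..size), so no
             * pointer arithmetic past a header-typed object. */
            return 0;
    }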
CID 17141
CID 154223
CID 17557
CID 154226
CID 153904
CID 153926
CID 153929
CID 153925
CID 225346
CID 225355
CID 225356
CID 225360
CID 225361
CID 225365
CID 225367
CID 296735
CID 330244
Bug 3512546
Change-Id: I93b154d4321e75c0d2b41f43d7c2b701682962a3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2710224
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
GVS: Gerrit_Virtual_Submit
This sets evict_max_ways for the L2 cache to the maximum
supported value for safety.
In the normal build, L2 cache MAX_EVICT_LAST is configured via
KMD and RegOps. RegOps is enabled only on the standard build
with the CONFIG_DEBUGGER flag, so this method cannot be used for
the safety build. For safety, we can make use of the patch buffer
to patch the register while creating the context.
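A sketch of a patch-buffer write at context creation; the helper names
follow the driver's patch API, while the register accessor and value
are placeholders:

    /* Queue a register write into the context patch buffer so HW
     * applies it whenever the context is loaded. */
    nvgpu_gr_ctx_patch_write_begin(g, gr_ctx, false);
    nvgpu_gr_ctx_patch_write(g, gr_ctx,
                             ltc_evict_max_ways_reg(),  /* placeholder */
                             max_evict_last_ways,       /* placeholder */
                             true);
    nvgpu_gr_ctx_patch_write_end(g, gr_ctx, false);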
JIRA NVGPU-8227
Change-Id: Iec5d73197239b9cad31c6b593ca2b87c224aad5e
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2708702
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In its current form, the default domain acts like any schedulable
domain: TSGs are bound to it and it can be enumerated via the
public interfaces.
The expectation for the default domain now changes from the current
form to a pseudo domain that cannot act like an ordinary domain:
in particular, it must not be reachable via the domain management
API, it cannot be removed, it does not show up in lists, and TSGs
cannot be explicitly bound to it. It does not participate in
round-robin domain scheduling.
It is not really a domain, and acts like one only when activated in
the manual mode.
The following changes are made overall to support the above change in
definition (see the sketch after this list).
1) Domain creation and attaching the domain to the scheduler are now
split into two separate functions. The new default domain (having ID
= UINT64_MAX) is created separately from a static function without
linking it with other domains in the scheduler.
2) struct nvgpu_nvs_scheduler explicitly stores the default domain
to support direct lookups.
3) TSGs are initially not bound to the default domain/rl_domain.
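An illustrative layout for point 2; the member names other than the
struct tag are assumptions:

    struct nvgpu_nvs_scheduler {
            /* User-visible domains, reachable via the management API. */
            struct nvgpu_list_node domains;
            /* Pseudo domain (ID UINT64_MAX): held as a direct pointer,
             * never linked into the list above. */
            struct nvgpu_nvs_domain *default_domain;
    };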
Jira NVGPU-8165
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I916d11f4eea5124d8d64176dc77f3806c6139695
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2697477
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to support the concept of the default domain, a new
rl domain is created that shadows all the other domains, i.e.
all channels of all TSGs are replicated here. This is scheduled
by default during GPU boot (a selection sketch follows the list
below).
1) The shadow rl_domain is constructed during the poweron sequence via
nvgpu_runlist_alloc_shadow_rl_domain(). struct nvgpu_runlist
is extended to store this separately as 'shadow_rl_domain'.
This is scheduled in the background as long as no user-created
rl domains exist.
2) 'shadow_rl_domain' is scheduled out once user-created rl domains
exist. At this point, any updates in the user-created rl domains
are synchronized with the 'shadow_rl_domain', i.e. 'shadow_rl_domain'
is also reconstructed to contain the active channels and TSGs from the
rl domain.
3) 'shadow_rl_domain' is scheduled back in when the last user-created
rl domain is removed.
4) In the future, for manual mode, the driver shall support explicitly
switching to 'shadow_rl_domain'. Also, we will move to an
implementation where 'shadow_rl_domain' is switched out only when
other domains are actively scheduled. These changes will be
implemented later.
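The selection logic, sketched under assumed names (only
shadow_rl_domain and nvgpu_runlist_alloc_shadow_rl_domain come from
this change):

    /* Fall back to the shadow domain whenever no user-created rl
     * domains exist; otherwise pick among the user domains. */
    static struct nvgpu_runlist_domain *
    pick_rl_domain(struct nvgpu_runlist *runlist)
    {
            if (nvgpu_list_empty(&runlist->user_rl_domains))
                    return runlist->shadow_rl_domain;
            return next_user_rl_domain(runlist);     /* illustrative */
    }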
Jira NVGPU-8165
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Ia6a07d6bfe90e7f6c9e04a867f58c01b9243c3b0
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2704702
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
BVEC changes for nvgpu_rc_pbdma_fault and nvgpu_rc_mmu_fault
started reporting the MISRA issues below.
kernel/nvgpu/drivers/gpu/nvgpu/common/fifo/tsg.c:321:
1. misra_c_2012_rule_5_1_violation: Declaration with identifier
"nvgpu_tsg_unbind_channel_check_hw_state", which is ambiguous.
kernel/nvgpu/drivers/gpu/nvgpu/common/fifo/tsg.c:349:
2. other_declaration: The first 31 characters of identifiers
"nvgpu_tsg_unbind_channel_check_ctx_reload" and
"nvgpu_tsg_unbind_channel_check_hw_state" are identical.
Do the renames below to fix the issue; both are renamed for consistency.
s/nvgpu_tsg_unbind_channel_check_hw_state/nvgpu_tsg_unbind_channel_hw_state_check
s/nvgpu_tsg_unbind_channel_check_ctx_reload/nvgpu_tsg_unbind_channel_ctx_reload_check
JIRA NVGPU-6772
Change-Id: Ib92cabe11c486621351bf15ddb86e20d16d514c4
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2584152
(cherry picked from commit a619f259c6a4ffccb05550767212989af60c2a90)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2706551
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit
nvgpu_device_get can return NULL if an invalid ID or instance
ID is supplied. We expect the GR device struct to be non-NULL there,
hence just assert that it is indeed non-NULL in gr_reset_engine and
ga10b_grmgr_init_gr_manager.
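Shape of the fix; the device-type constant and surrounding code are
illustrative:

    const struct nvgpu_device *dev =
            nvgpu_device_get(g, NVGPU_DEVTYPE_GRAPHICS, inst_id);

    /* A valid GR instance must exist here; NULL would be a driver
     * bug, so assert instead of carrying dead error-handling code. */
    nvgpu_assert(dev != NULL);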
CID 224133
CID 250232
Bug 3512546
Change-Id: Id09a1c436a8e49b921111b940d3d013bd66bff7a
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2707018
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The emulate mode support is determined after chip detect and is flagged
using the NVGPU_SUPPORT_EMULATE_MODE flag. The present logic prevents the
user from configuring the emulate mode sysfs knobs if this flag is not
set; however, the emulate mode usecase requires the user to configure the
sysfs knob prior to power-on, hence defer the emulate mode check to a
later stage after chip detect.
Bug 3621460
Change-Id: If522527542fa8d7e95ccbcff43b74adbb9e976e6
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2703953
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mayur Poojary <mpoojary@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: David Li <davli@nvidia.com>
Fixed the following Coverity defects:
ioctl_as.c : Bad bit shift operation
mc_tu104.c : Bad bit shift operation
vm.c : Bad bit shift operation
vm_remap.c : Bad bit shift operation
A new Linux header file for ilog2 is created.
The files that used the old ilog2 function
have been changed to use the new nvgpu_ilog2
function.
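A plausible shape for such a wrapper; the guard shown is an assumption
about how the bad-shift defect is avoided:

    #include <linux/log2.h>
    #include <nvgpu/bug.h>

    /* ilog2() on 0 would shift out of range; reject it up front so
     * the operand is provably non-zero. */
    static inline u32 nvgpu_ilog2(u64 x)
    {
            nvgpu_assert(x != 0ULL);
            return (u32)ilog2(x);
    }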
CID 9847922
CID 9869507
CID 9859508
CID 10112314
CID 10127813
CID 10127899
CID 10128004
Signed-off-by: Jinesh Parakh <jparakh@nvidia.com>
Change-Id: Ia201eea7cc426c3d6581e1e5ae3b882dbab3b490
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2700994
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix the following Coverity defect:
nvgpu_init.c : Unused value
The ret variable was being reassigned the error code from
nvgpu_cic_mon_deinit(g) without taking the previous ret value into
account. We need to propagate whether there was an error
(the last known error is returned); the temp_ret
variable helps verify this.
A similar coding style is followed in the entire function.
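The pattern, in brief (nvgpu_cic_mon_deinit is named by this change;
the surrounding code is illustrative):

    /* Keep going on error, but remember the last failure in ret. */
    temp_ret = nvgpu_cic_mon_deinit(g);
    if (temp_ret != 0) {
            nvgpu_err(g, "CIC mon deinit failed");
            ret = temp_ret;
    }
    /* ... further deinit steps follow the same pattern ... */
    return ret;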
CID 10127863
Bug 3460991
Signed-off-by: Jinesh Parakh <jparakh@nvidia.com>
Change-Id: I732ba5269ebbbe68f113e53229df40ae49ccc13c
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2697104
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function nvgpu_profiler_unbind_pm_resources is responsible for
destroying the regops allowlist object, but unfortunately does so
prior to any of the failure-prone operations. Because this function
can be called multiple times, in rare cases the
object can be deallocated twice.
This patch fixes the issue by moving the free operations after the
failure-prone operations.
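The ordering fix, sketched with illustrative helper names:

    static int unbind_pm_resources(struct profiler *prof)
    {
            int err;

            /* Failure-prone steps first; on error the allowlist is
             * left intact, so a retried call cannot double-free it. */
            err = release_hw_pm_resources(prof);
            if (err != 0)
                    return err;

            /* Free only after everything that can fail has succeeded,
             * and clear the pointer so a repeat call is harmless. */
            free_regops_allowlist(prof);
            prof->regops_allowlist = NULL;
            return 0;
    }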
Bug 3591603
Change-Id: I3415712da561ccf162c9fb7f3ebb942faa9d9420
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2693803
(cherry picked from commit I3415712da561ccf162c9fb7f3ebb942faa9d9420)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2693799
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Fix a CERT issue in nvgpu_gr_falcon_bind_fecs_elpg where nvgpu_pmu_pg_buf
could return NULL. nvgpu_pmu_pg_buf is called from a context where PG
will be enabled, hence remove the NULL return logic as it is dead
code.
Replace the nvgpu_pmu_pg_buf and nvgpu_pmu_pg_buf_get_cpu_va functions
with a new function, nvgpu_pmu_pg_buf_alloc.
CID 17860
Bug 3512546
Change-Id: I09820a966dadeb258167ce1433ca256f94845896
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2692466
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Disable the interrupts below on the safety build, as they do not report
any error condition and are not used by CUDA and Graphics (VKSC) on the
safety build (a masking sketch follows the list).
Signoff from CUDA and VKSC is on Bug https://nvbugs/3588603.
1. NV_PGRAPH_INTR_NOTIFY: This intr is set when the Notification
style is WRITE_THEN_AWAKEN.
2. NV_PGRAPH_INTR_SEMAPHORE: This is set when a 3d class semaphore is
released as the result of a SetSemaphoreD method, when the
AwakenEnable field is TRUE.
3. NV_PGRAPH_INTR_BUFFER_NOTIFY: This bit is set when a Mem2mem DMA
completes and the LaunchDma method specifies the interrupt type
as INTERRUPT.
4. NV_PGRAPH_INTR_DEBUG_METHODS: This is a debug feature and is not
used on QNX safety.
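A sketch of the masking; the enable-register and field accessors are
placeholders for the generated hw headers:

    /* Drop the four sources from the PGRAPH interrupt enable mask. */
    u32 mask = nvgpu_readl(g, gr_intr_en_r());       /* placeholder */

    mask &= ~(gr_intr_notify_pending_f() |
              gr_intr_semaphore_pending_f() |
              gr_intr_buffer_notify_pending_f() |
              gr_intr_debug_method_pending_f());
    nvgpu_writel(g, gr_intr_en_r(), mask);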
Bug 3588603
JIRA NVGPU-8166
Change-Id: I6d07dfd2857ac047fac4599421600d364251df76
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2694363
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>