Use CONFIG_KMD_SCHEDULING_WORKER_THREAD instead of
CONFIG_NVS_KMD_BACKEND to remove confusion about the CPU-based
KMD scheduling worker thread.
The KMD-based scheduling worker thread caters to both the Manual Mode
CPU-based scheduler and the Automatic Round Robin CPU-based
scheduler.
For the traditional submit path, add correct handling of
CONFIG_NVS_PRESENT: the CPU-based worker thread should be guarded by
CONFIG_NVS_PRESENT. Eventually, when CONFIG_KMD_SCHEDULING_WORKER_THREAD
is removed, the application must switch to GSP.
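A minimal sketch, assuming hypothetical helper names, of how the config guards might nest: the CPU-based worker thread sits under CONFIG_NVS_PRESENT and is further gated by CONFIG_KMD_SCHEDULING_WORKER_THREAD, so dropping the latter leaves only the GSP path.

  /* Sketch only: the helpers below are illustrative, not the real nvgpu API. */
  struct gk20a;

  int nvs_kmd_worker_enqueue(struct gk20a *g);   /* hypothetical CPU worker path */
  int nvs_gsp_submit(struct gk20a *g);           /* hypothetical GSP path */

  #ifdef CONFIG_NVS_PRESENT
  int nvs_submit_runlist_update(struct gk20a *g)
  {
  #ifdef CONFIG_KMD_SCHEDULING_WORKER_THREAD
          /* CPU-based KMD scheduling worker thread (manual or round-robin mode). */
          return nvs_kmd_worker_enqueue(g);
  #else
          /* Once the worker-thread config goes away, callers must use GSP. */
          return nvs_gsp_submit(g);
  #endif
  }
  #endif /* CONFIG_NVS_PRESENT */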
Jira NVGPU-8619
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I0886ef3b2e0124b6fe22c2bf0bf7d1fa98039d00
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2810217
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any corrected or uncorrected error reported by the GPU HW
will be seen by nvgpu-mon. nvgpu-mon will raise a devctl call
to notify nvgpu-rm if it is a fatal error interrupt.
nvgpu_cic_mon_handle_fatal_intr is the corresponding handler, which
walks through the entire tree structure of interrupts for all the
subunits and enters the quiesce state.
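A rough sketch, with hypothetical types and helpers, of the shape such a handler could take: walk each subunit's error tree and enter quiesce as soon as a fatal entry is found.

  /* Illustrative sketch only; the types and helpers are hypothetical. */
  #include <stdbool.h>

  struct err_node {
          bool is_fatal;
          unsigned int nr_children;
          struct err_node *children;
  };

  void enter_sw_quiesce(void);    /* stand-in for the driver's quiesce entry */

  /* Depth-first walk of one subunit's interrupt tree. */
  static bool err_tree_has_fatal(const struct err_node *node)
  {
          unsigned int i;

          if (node->is_fatal)
                  return true;
          for (i = 0; i < node->nr_children; i++) {
                  if (err_tree_has_fatal(&node->children[i]))
                          return true;
          }
          return false;
  }

  void handle_fatal_intr(const struct err_node *subunits, unsigned int nr_subunits)
  {
          unsigned int i;

          for (i = 0; i < nr_subunits; i++) {
                  if (err_tree_has_fatal(&subunits[i])) {
                          enter_sw_quiesce();
                          return;
                  }
          }
  }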
Change-Id: I3c00c61a7f2c52ae1920f84ee7dfb65cba6b683d
Signed-off-by: Kishan <kpalankar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2801693
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
- When we enable AELPG, we send the maximum idle threshold value
as one of the parameters.
- This was set to 70 msec whereas the ELPG IDLE threshold
was set to 15 msec.
- Thus, when AELPG is enabled, the threshold is modified to a much
higher value, so ELPG sometimes does not engage, which leads to a
timeout in the ELPG MODS 101 test on GVS.
Bug 3737783
Change-Id: Iac94f053e19cea5898b962c3d02369d556b8518f
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2786749
(cherry picked from commit 5010eaffda2ecaf6c9cc5f0fed68498cb2947071)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2809154
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
In order to share the TSG across different devices securely, device
instance IDs are to be exchanged for endpoint identification. Add a
device instance ID field to gk20a_ctrl_priv, generated from the
gk20a-level device instance ID value.
Share this ID with userspace via the GPU characteristics.
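A minimal sketch of the plumbing described above; the struct layouts and the simple copy used for the derivation are assumptions, only the idea of carrying a per-device instance ID into each control fd and reporting it through the GPU characteristics comes from this change.

  /* Sketch only: field names and structs are illustrative. */
  struct gk20a_sketch {
          unsigned long long device_instance_id;  /* generated once per device */
  };

  struct gk20a_ctrl_priv_sketch {
          unsigned long long device_instance_id;  /* per-fd value for endpoint identification */
  };

  struct gpu_characteristics_sketch {
          unsigned long long device_instance_id;  /* exposed to userspace */
  };

  void ctrl_open_sketch(struct gk20a_sketch *g, struct gk20a_ctrl_priv_sketch *priv)
  {
          /* Derivation is simplified here to a plain copy. */
          priv->device_instance_id = g->device_instance_id;
  }

  void fill_characteristics_sketch(struct gk20a_sketch *g,
                                   struct gpu_characteristics_sketch *ch)
  {
          ch->device_instance_id = g->device_instance_id;
  }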
Bug 3677982
JIRA NVGPU-8681
Change-Id: I79d92a81c02272c52e24f5b12c452c8993137037
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2792079
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Currently, the registration with the error injection utility is done
only for GA10b using a HAL. But HALs are not initialized during the
probe stage, when we try to register the error injection utility.
So the callback registration does not happen since the HAL is still NULL.
Move the callback registration from the probe stage to the poweron stage,
when the HAL is initialized.
Update the nvgpu_cic_mon_init_lut() API name as it no longer
does only LUT initialization.
Bug 3828050
Change-Id: Ide718029e9317124749b4a51c423ae70dc8227c8
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2790269
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Implement the ioctls NVGPU_TSG_IOCTL_CREATE_SUBCONTEXT and
NVGPU_TSG_IOCTL_DELETE_SUBCONTEXT. These will allocate and
free the VEID numbers.
Address space association with the VEIDs is verified to
ensure that the channels' association with VEIDs and address
spaces remains consistent.
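A hedged userspace usage sketch; the argument struct layout below is a guess, only the ioctl names come from this change (their definitions live in the nvgpu uapi headers).

  /* Illustrative usage; the args struct is hypothetical. */
  #include <stdio.h>
  #include <sys/ioctl.h>

  struct tsg_subcontext_args_sketch {
          unsigned int veid;      /* VEID allocated on create, passed back on delete */
          int as_fd;              /* address space to associate (assumed field) */
  };

  int subctx_roundtrip(int tsg_fd, int as_fd)
  {
          struct tsg_subcontext_args_sketch args = { .as_fd = as_fd };

          if (ioctl(tsg_fd, NVGPU_TSG_IOCTL_CREATE_SUBCONTEXT, &args) != 0)
                  return -1;

          printf("allocated VEID %u\n", args.veid);

          /* Free the VEID once the channels bound to it are gone. */
          return ioctl(tsg_fd, NVGPU_TSG_IOCTL_DELETE_SUBCONTEXT, &args);
  }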
Bug 3677982
JIRA NVGPU-8681
Change-Id: I2d913baf61a6bdeec412c58270c0024b80ca15c6
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2766765
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Added infrastructure for parsing the Control-Fifo's
ring buffers (i.e. send/receive).
Initialization of these buffers is handled as part of the
nvgpu_nvs_buffer_alloc() call itself.
A follow-up change shall implement the methods defined here
as part of the existing NVS worker thread.
The changes adhere to the design laid out in the header
nvsched/include/nvs/nvs-control-interface.h.
Jira NVGPU-8619
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I2050e6fb681eba80e01cf547ada37a955e58315a
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2764518
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use CONFIG_NVS_KMD_BACKEND to enclose all NVS KMD based scheduling
code.
The current configuration keeps all the scheduling code within
CONFIG_NVS_PRESENT. Eventually, the scheduling code shall only use GSP.
Hence, isolate the KMD-based scheduling code under the config
CONFIG_NVS_KMD_BACKEND. This shall make it easier to remove this code
later.
Jira NVGPU-8619
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I9dc668e0fa3e7706c111fda7a5e2415e1fc0dd03
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2769465
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The error injection code was enabled only when CONFIG_NVGPU_DGPU = n
so that the dGPUs do not attempt any error injection callback
function registration. But this introduced a dependency on the dGPU
config, which needs to be explicitly set to n for error injection to
be enabled.
Remove the dependency by moving the error injection callback
registration and deregistration to a HAL which is enabled only
on GA10b.
Bug 3819160
Change-Id: I4f4eb99189b1af3502d719536a91cc5e5d866bce
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2787202
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changes:
- Initialize virtual memory for GSP. This space is used for creating
queues for the ctrl fifo. It can also be used to map a sync-pt read-only
into this instance so that the GSP firmware can poll the sync-pt by
sync-pt ID.
- Enable the GSP context interface and write the instance block pointer
to the nxtctx register so that the GSP firmware can access the created
virtual memory.
- Add the required GSP registers for this feature.
NVGPU-8730
Bug 3770916
Change-Id: If538f615eca3f9b7840ffe2787826528b4808886
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2764649
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Perfmon cmds are non-blocking calls, and a response
may or may not come during the railgate sequence for the
perfmon command sent as part of the nvgpu_pmu_destroy call.
- If the response is missed, the allocated payload is not
freed and the allocation info remains in the seq data
structure.
- This is carried forward across multiple railgate/
rail-ungate sequences and causes a memory leak
when a new allocation request is made for the same seq-id.
- Clean up the sequence data struct as part of the nvgpu_pmu_destroy
call by freeing the memory if cb_params is not NULL (see the sketch
below).
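A minimal sketch of the cleanup idea, assuming hypothetical structure and helper names: on PMU destroy, walk the sequence table and free any payload whose response callback never ran.

  /* Sketch only: the structures and free helper are illustrative. */
  #include <stddef.h>

  #define PMU_MAX_SEQ_SKETCH 64U

  struct pmu_seq_sketch {
          void *cb_params;        /* payload allocated when the perfmon cmd was sent */
          size_t cb_params_size;
  };

  void pmu_free_payload_sketch(void *buf, size_t size);

  /* Called from the destroy path so that a missed perfmon response does not
   * leak the payload across railgate/rail-ungate cycles. */
  void pmu_sequences_cleanup_sketch(struct pmu_seq_sketch *seqs)
  {
          unsigned int i;

          for (i = 0U; i < PMU_MAX_SEQ_SKETCH; i++) {
                  if (seqs[i].cb_params != NULL) {
                          pmu_free_payload_sketch(seqs[i].cb_params,
                                                  seqs[i].cb_params_size);
                          seqs[i].cb_params = NULL;
                          seqs[i].cb_params_size = 0U;
                  }
          }
  }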
Bug 3747586
Bug 3722721
Change-Id: I1a0f192197769acec12993ae575277e38c9ca9ca
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2763054
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Tested-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Only certain combinations of channels of GFX/Compute object classes can
be assigned to a particular PBDMA and/or VEID. CILP can be enabled only
in certain configs. Implement checks for the configurations verified
during alloc_obj_ctx and/or when setting the preemption mode.
Bug 3677982
Change-Id: Ie7026cbb240819c1727b3736ed34044d7138d3cd
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2719995
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Subcontext PDBs and the valid mask in the instance blocks of the channels
in various subcontexts have to be updated when a new subcontext is
created or a subcontext is removed.
The replayable fault state is cached in the channel structure. The
replayable fault state for a subcontext is set based on the first
channel's bind parameter. It was earlier programmed in the function
channel_setup_ramfc.
init_inst_block_core is updated to set up the TSG-level PDB map and mask.
Added a new HAL gv11b_channel_bind to enable the subcontext on channel
bind.
Bug 3677982
Change-Id: I58156c5b3ab6309b6a4b8e72b0e798d6a39c1bee
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2719994
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
This patch introduces the following relationships among various nvgpu
objects to support multiple address spaces with subcontexts.
The IOCTLs setting up the relationships are shown in parentheses; a
condensed struct sketch follows the list.
nvgpu_tsg 1<---->n nvgpu_tsg_subctx (TSG_BIND_CHANNEL_EX)
nvgpu_tsg 1<---->n nvgpu_gr_ctx_mappings (ALLOC_OBJ_CTX)
nvgpu_tsg_subctx 1<---->1 nvgpu_gr_subctx (ALLOC_OBJ_CTX)
nvgpu_tsg_subctx 1<---->n nvgpu_channel (TSG_BIND_CHANNEL_EX)
nvgpu_gr_ctx_mappings 1<---->n nvgpu_gr_subctx (ALLOC_OBJ_CTX)
nvgpu_gr_ctx_mappings 1<---->1 vm_gk20a (ALLOC_OBJ_CTX)
On unbinding the channel, objects are deleted according
to dependencies.
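A condensed sketch of how the ownership links above could be expressed as C structures; every field here is illustrative, only the object names and cardinalities come from the list above.

  /* Illustrative layout only; the real nvgpu structs differ. */
  struct vm_gk20a;
  struct nvgpu_channel;
  struct nvgpu_gr_subctx;                         /* created by ALLOC_OBJ_CTX */

  struct nvgpu_gr_ctx_mappings_sketch {           /* one per VM used by the TSG */
          struct vm_gk20a *vm;                    /* 1:1 with a VM */
          struct nvgpu_gr_subctx **gr_subctxs;    /* 1:n gr_subctx sharing these maps */
  };

  struct nvgpu_tsg_subctx_sketch {                /* created by TSG_BIND_CHANNEL_EX */
          struct nvgpu_gr_subctx *gr_subctx;      /* 1:1 */
          struct nvgpu_channel **channels;        /* 1:n channels in this subctx */
  };

  struct nvgpu_tsg_sketch {
          struct nvgpu_tsg_subctx_sketch **subctxs;          /* 1:n */
          struct nvgpu_gr_ctx_mappings_sketch **mappings;    /* 1:n, one per VM */
  };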
Without subcontexts, the gr_ctx buffer mappings are maintained in the
struct nvgpu_gr_ctx. For subcontexts, they are maintained in the
struct nvgpu_gr_subctx.
Preemption buffer with index NVGPU_GR_CTX_PREEMPT_CTXSW and PM
buffer with index NVGPU_GR_CTX_PM_CTX are to be mapped in all
subcontexts when they are programmed from respective ioctls.
Global GR context buffers are to be programmed only for VEID0.
Based on the channel object class, the state is patched into
the patch buffer in every ALLOC_OBJ_CTX call, unlike before when
it was set only for the first channel.
PM and preemption buffer programming is protected under the TSG
ctx_init_lock.
tsg->vm is now removed. VM reference for gr_ctx buffers mappings
is managed through gr_ctx or gr_subctx mappings object.
For vGPU, gr_subctx and mappings objects are created to reference
VMs for the gr_ctx lifetime.
The functions nvgpu_tsg_subctx_alloc_gr_subctx and
nvgpu_tsg_subctx_setup_subctx_header set up the subcontext struct
header for the native driver.
The function nvgpu_tsg_subctx_alloc_gr_subctx is called from
vgpu to manage the gr ctx mapping references.
free_subctx is now done when unbinding the channel, taking into
account references to the subcontext held by other channels. It
unmaps the buffers in the native driver case and just releases the
VM reference in the vGPU case.
Note that the TEGRA_VGPU_CMD_FREE_CTX_HEADER ioctl is no longer called
by vgpu as it is taken care of by the native driver.
Bug 3677982
Change-Id: Ia439b251ff452a49f8514498832e24d04db86d2f
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2718760
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
With subcontext support added, nvgpu has to allocate the VEID0 channel
itself to initialize the golden context image. Allocate the channel
and init the golden context image at the beginning of the alloc_obj_ctx
call for the first user channel.
It can't be initialized at the end of probe as the TPC PG settings need
to be updated before the golden context image is initialized.
Bug 3677982
Change-Id: Ia82f6ad6e088c2bc1578a6bd32b7c7a707a17224
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2756289
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
The wait_pending HAL is now modified to simply
check the pending status of a given runlist.
The while loop is removed from this HAL.
A new function nvgpu_runlist_wait_pending_legacy() is
added that emulates the older wait_pending() HAL.
nvgpu_runlist_tick() is modified to accept a 64-bit
"preempt_grace_ns" value.
These changes prepare for upcoming control-fifo parser
changes.
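A hedged sketch of the split: the HAL now performs a single pending check, and a legacy-style wrapper reproduces the old polling loop on top of it. Apart from the behaviour described above, the names and signatures are assumptions.

  /* Sketch only: timeout helpers and signatures are illustrative. */
  #include <stdbool.h>

  struct gk20a;
  struct nvgpu_runlist;

  /* New-style HAL: a single, non-looping check of the pending status. */
  bool runlist_hal_is_pending(struct gk20a *g, struct nvgpu_runlist *rl);

  /* Sleep briefly and report whether the timeout has not yet expired. */
  bool poll_delay_not_expired(unsigned long long *elapsed_ns,
                              unsigned long long timeout_ns);

  /* Emulates the older wait_pending() HAL by polling the new check. */
  int runlist_wait_pending_legacy_sketch(struct gk20a *g,
                                         struct nvgpu_runlist *rl,
                                         unsigned long long timeout_ns)
  {
          unsigned long long elapsed_ns = 0ULL;

          while (runlist_hal_is_pending(g, rl)) {
                  if (!poll_delay_not_expired(&elapsed_ns, timeout_ns))
                          return -1;      /* timed out while still pending */
          }
          return 0;
  }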
Jira NVGPU-8619
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: If3f288eb6f2181743c53b657219b3b30d56d26bc
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2766100
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are added here to simplify the overall
sequence.
1) Remove the deferred update for runlists. The NVS worker thread
shall submit the updated runlist.
2) Move the runlist mem swap inside the update itself. Protect
the swap() and hw_submit() path with a spinlock (see the sketch
below). This is temporary until GSP takes over.
3) Enable Control-Fifo mode from the nvgpu driver.
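A rough sketch of item 2 under assumed names: the buffer swap and the hardware submit happen under one spinlock so that a runlist update cannot interleave with another submit. The pthread spinlock stands in for the nvgpu spinlock.

  /* Sketch only: structures and helpers are illustrative. */
  #include <pthread.h>

  struct runlist_mem_sketch {
          void *mem;
  };

  struct runlist_sketch {
          pthread_spinlock_t submit_lock;         /* stand-in for the nvgpu spinlock */
          struct runlist_mem_sketch *active;
          struct runlist_mem_sketch *shadow;
  };

  void hw_submit_sketch(struct runlist_mem_sketch *mem);

  /* Swap the freshly built shadow buffer in and submit it, atomically
   * with respect to any other updater. */
  void runlist_update_and_submit_sketch(struct runlist_sketch *rl)
  {
          struct runlist_mem_sketch *tmp;

          pthread_spin_lock(&rl->submit_lock);
          tmp = rl->active;
          rl->active = rl->shadow;
          rl->shadow = tmp;
          hw_submit_sketch(rl->active);
          pthread_spin_unlock(&rl->submit_lock);
  }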
Jira NVGPU-8609
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Icc52e5d8ccec9d3653c9bc1cf40400fc01a08fde
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2757406
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In Linux, threaded interrupts run with a real-time priority
of 50. This bumps up the priority of bottom-half handlers
over regular kernel/user threads even in process
context.
In the current implementation, the scheduler thread still
runs at normal kernel thread priority. In order to
allow a seamless scheduling experience, the worker
thread is now created with a real-time priority of 1.
This allows the worker thread to run at a priority
lower than the interrupt handlers but higher than the regular
kernel threads.
The Linux kernel allows setting the priority with the help of
the sched_set_fifo() API. Only two modes are supported,
i.e. sched_set_fifo() and sched_set_fifo_low().
For more reference, refer to this article:
https://lwn.net/Articles/818388/.
Added an implementation of nvgpu_thread_create_priority()
for Linux threads using the above two APIs.
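A hedged Linux-side sketch of what such a helper could look like, combining kthread_create() with sched_set_fifo_low() (SCHED_FIFO, priority 1); the wrapper name and error handling are illustrative, not the exact nvgpu implementation.

  /* Sketch only: not the exact nvgpu implementation. */
  #include <linux/kthread.h>
  #include <linux/sched.h>
  #include <linux/err.h>

  /* Create a kernel thread at the lowest RT priority (1), i.e. below
   * threaded-interrupt handlers (priority 50) but above normal kernel
   * threads. */
  static struct task_struct *create_rt_worker_sketch(int (*fn)(void *),
                                                     void *data,
                                                     const char *name)
  {
          struct task_struct *task;

          task = kthread_create(fn, data, "%s", name);
          if (IS_ERR(task))
                  return task;

          sched_set_fifo_low(task);       /* SCHED_FIFO, priority 1 */
          wake_up_process(task);

          return task;
  }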
Jira NVGPU-860
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I0a5a611bf0e0a5b9bb51354c6ff0a99e42e76e2f
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2751736
Reviewed-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
A previous commit, ID 44b6bfbc1, added a hack to prevent
the worker thread from calling nvgpu_runlist_tick()
in post_process if the next domain matches the previous one.
This could still face issues with multiple domains
in the future.
A better way is to suspend/resume the thread
along with the device's common OS-agnostic suspend/resume
operations. This emulates the GSP behaviour as well.
This also takes care of the power constraints,
i.e. the worker thread can be expected to always run
with power enabled, and thus we can get rid of the complex
gk20a_busy() lock here for good.
Implemented a state-machine based approach for suspending/
resuming the NVS worker thread from the existing callbacks.
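A condensed sketch of the shape such a state machine could take; the states and the helper are assumptions, not the actual nvgpu implementation.

  /* Illustrative state machine; names are hypothetical. */
  #include <stdbool.h>

  enum nvs_worker_state_sketch {
          NVS_WORKER_RUNNING,
          NVS_WORKER_SHOULD_SUSPEND,      /* requested by the device suspend callback */
          NVS_WORKER_SUSPENDED,
          NVS_WORKER_SHOULD_RESUME,       /* requested by the device resume callback */
  };

  struct nvs_worker_sketch {
          enum nvs_worker_state_sketch state;
  };

  /* Called from the worker loop: the thread only ticks runlists while the
   * device is powered on, so gk20a_busy() is no longer needed here. */
  static bool nvs_worker_can_tick_sketch(struct nvs_worker_sketch *w)
  {
          if (w->state == NVS_WORKER_SHOULD_SUSPEND)
                  w->state = NVS_WORKER_SUSPENDED;
          else if (w->state == NVS_WORKER_SHOULD_RESUME)
                  w->state = NVS_WORKER_RUNNING;

          return w->state == NVS_WORKER_RUNNING;
  }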
Remove support for NVS worker thread creation for vGPU.
The hw_submit method is currently set to NULL for vGPU; vGPU
instead submits its updates via the runlist.reload() method.
Jira NVGPU-8609
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I51a20669e02bf6328dfe5baa122d5bfb75862ea2
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2750403
Reviewed-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Add a mutex lock to synchronize the stall ISR threads.
Orin (t234) has three interrupt lines and three ISR
threads to handle the bottom half of the ISR. The threads
share the same data between them without proper synchronization.
When multiple interrupts trigger simultaneously, the
threads run in parallel, as in the traces below:
#0 nvgpu_cic_mon_intr_stall_isr (g=g@entry=0x5ed62a9318)
at /home/dt/automotive-dev-main-20220802T015100095/kernel/nvgpu/drivers/gpu/nvgpu/common/cic/mon/mon_intr.c:158
#1 0x00000013758cae30 in nvgpu_intr_stall (arg=0x5ed62a9120)
at /home/dt/automotive-dev-main-20220802T015100095/qnx/src/resmgrs/nvrm/nvgpu_rmos/os/intr.c:140
#2 0x00000013758ec090 in nvgpu_posix_thread_wrapper (data=<optimized out>)
at /home/dt/automotive-dev-main-20220802T015100095/kernel/nvgpu/drivers/gpu/nvgpu/os/posix/thread.c:77
#3 0x0000001375b01000 in pthread_attr_setdetachstate ()
from /home/dt/automotive-dev-main-20220802T015100095/out/embedded-qnx-t186ref-debug-none/target_rootfs/lib/libc.so.5
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
This causes races in shared data access, leading to
multiple issues.
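A minimal sketch of the fix under assumed names: each bottom-half thread takes a common lock before touching the shared stall-interrupt state, serializing the three ISR threads. The pthread mutex stands in for the nvgpu mutex.

  /* Sketch only: names are illustrative. */
  #include <pthread.h>

  struct stall_isr_state_sketch {
          pthread_mutex_t isr_lock;       /* stand-in for the nvgpu mutex */
          unsigned int pending_units;     /* shared between the three ISR threads */
  };

  void service_pending_units_sketch(struct stall_isr_state_sketch *s);

  /* Bottom half of the stall interrupt: serialized so that concurrent
   * interrupt lines cannot race on the shared data. */
  void stall_isr_bottom_half_sketch(struct stall_isr_state_sketch *s)
  {
          pthread_mutex_lock(&s->isr_lock);
          service_pending_units_sketch(s);
          pthread_mutex_unlock(&s->isr_lock);
  }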
Bug 3647988
Change-Id: If40e581635b52cce288d8f4b00af6a040f7f9a6e
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2755874
Reviewed-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit
This change reads the live PES mask from the register
"gr_gpc0_gpm_pd_live_physical_pes_r" and sets it in
"gr_gpc0_swdx_pes_mask_r".
Every PES needs at least one TPC to work. If any of the TPCs
are floorswept, the live PES mask is read from
"gr_gpc0_gpm_pd_live_physical_pes_r" and the corresponding
active PES mask is updated in "gr_gpc0_swdx_pes_mask_r".
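A hedged sketch of the register flow; only the register names come from this change, while the accessors, offsets and the absence of any field masking are assumptions.

  /* Sketch only: read/write helpers and offsets are illustrative. */
  struct gk20a;

  unsigned int gpu_readl_sketch(struct gk20a *g, unsigned int reg);
  void gpu_writel_sketch(struct gk20a *g, unsigned int reg, unsigned int val);

  #define GR_GPC0_GPM_PD_LIVE_PHYSICAL_PES_R_SKETCH  0x0U  /* placeholder offset */
  #define GR_GPC0_SWDX_PES_MASK_R_SKETCH             0x0U  /* placeholder offset */

  /* When TPC floorsweeping leaves PES units without TPCs, mirror the live
   * PES mask into the active PES mask register. */
  void program_active_pes_mask_sketch(struct gk20a *g)
  {
          unsigned int live_pes_mask;

          live_pes_mask = gpu_readl_sketch(g, GR_GPC0_GPM_PD_LIVE_PHYSICAL_PES_R_SKETCH);
          gpu_writel_sketch(g, GR_GPC0_SWDX_PES_MASK_R_SKETCH, live_pes_mask);
  }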
Bug 3677421
Change-Id: I899ac41c4a82beb3ce75c84ad57dcad262a49ba1
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2736560
(cherry picked from commit 85f2ceb3db6eeef925b49553f445d8cc31ec39da)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2759135
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
On tsg.unbind_channel HAL failure, the channel teardown was being
repeated even though it had already been done as part of the function
nvgpu_tsg_unbind_channel_common.
Just abort the TSG and return the error in that case. Also, decrement
the TSG ch_count in the failure path.
Bug 3677982
Change-Id: I466f2b2c693d43ed64dc531b08bf740bf00f28a6
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2749970
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Add support for accessing the Event Queue for non-exclusive
users. This allows non-exclusive users to open Event Queues
before exclusive users. Non-exclusive users can only
use the Event Queue in read-only mode.
Add VM_SHARED for Event Queues across all users instead of just
read-only users. Event Queues are shared with multiple processes
and as such require VM_SHARED across all users (exclusive and
observers).
Jira NVGPU-8608
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Id9733c2511ded6f06dd9feea880005bdc92e51a0
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2745083
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
- Add AP_INIT RPC to initialize the AELPG feature.
- Add AP_CTRL_INIT_AND_ENABLE RPC to program some
APCTRL values for Adaptive ELPG.
- Add AP_CTRL_ENABLE and AP_CTRL_DISABLE RPCs to
send AELPG enable/disable request to PMU via sysfs
node.
- Re-structure the rpc handler based on PG_LOADING
and PG unit id. This is needed to handle different
types of new RPCs from PMU.
JIRA NVGPU-7182
Change-Id: If00b00730507f17ff1883a67094f7e16da5b81ea
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2728286
(cherry picked from commit fffb58703bd718600e8c983dcd1c81d9abe83802)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2603161
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
KMD needs to send the domain ID and the GPU_VA corresponding
to the runlist domains to GSP. In the current
implementation, struct nvgpu_runlist_domain contains
the domain name instead of the domain ID. This requires
an additional search by name every time an update
needs to be submitted to the GSP.
Modify struct nvgpu_runlist_domain to store the domain ID
instead of the domain name. This simplifies the flow and avoids
the unnecessary search.
Removed the conditional check for the existence of the shadow domain
as it is dead code. The shadow domain is not searchable in the list
of domains inside struct nvgpu_runlist.
Jira NVGPU-8610
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I0d67cfa93d89186240290e933aa750702b14f4f0
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2744890
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Some of the functions with no traceability to unit tests are already
covered by callee API functions. Skip these functions in SWVR by
skipping doxygen for them.
Some of the functions are non-fusa, like those in profile.h and
bsearch.h. Those were included because the header was included in the
Doxygen sources. Mark them non-safe.
Some of the nvgpu functions were not added to the Targets entries for
the respective tests. Fix those.
JIRA NVGPU-7211
Change-Id: Iacf22dccdd9340100cf93814566d3979734c455d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2612982
(cherry picked from commit a40f62654747102cc8ef53ddbd9f953c21c2b745)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2737672
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to maintain separate mappings of GR TSG and global context
buffers for different subcontexts, we need to separate the memory
struct and the mapping struct for the buffers. This patch moves
the mappings of all GR ctx buffers to the new structure
nvgpu_gr_ctx_mappings.
This will be instantiated per subcontext in the upcoming patches.
Summary of changes:
1. Various context buffers were allocated and mapped separately.
All TSG context buffers are now stored in gr_ctx->mem[] array
since allocation and mapping is unified for them.
2. Mapping/unmapping and querying the GPU VA of the context
buffers is now handled in the ctx_mappings unit. Structure
nvgpu_gr_ctx_mappings in nvgpu_gr_ctx holds the maps.
This struct is instantiated on ALLOC_OBJ_CTX and deleted
on free_gr_ctx.
3. Introduce mapping flags for TSG and global context buffers.
This is to map different buffers with different caching
attributes. Map all buffers as cacheable except
PRIV_ACCESS_MAP, RTV_CIRCULAR_BUFFER, FECS_TRACE, GR CTX
and PATCH ctx buffers. Map all buffers as privileged.
4. Wherever VM or GPU VA is passed in the obj_ctx allocation
functions, they are now replaced by nvgpu_gr_ctx_mappings.
5. The free_gr_ctx API need not accept the VM as the mappings struct
will hold the VM. The mappings struct is kept in gr_ctx.
6. Move preemption buffers allocation logic out of
nvgpu_gr_obj_ctx_set_graphics_preemption_mode.
7. set_preemption_mode and gr_gk20a_update_hwpm_ctxsw_mode
functions need updates to ensure the buffers are allocated
and mapped.
8. Keep the unit tests and documentation updated.
With these changes there is a clear segregation of the allocation and
mapping of GR context buffers. This will simplify the further change
to add multiple address space support. With multiple address
spaces in a TSG, subcontexts created after the first subcontext
just need to map the buffers.
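A condensed sketch of the separation described above; the members are illustrative, only the idea of keeping the buffer memory in gr_ctx and the per-VM GPU VA maps in a separate mappings object comes from this change.

  /* Illustrative only; the real nvgpu structs differ. */
  struct vm_gk20a;

  struct nvgpu_mem_sketch {
          unsigned long long size;
  };

  #define GR_CTX_BUF_COUNT_SKETCH 8U      /* placeholder for the gr_ctx->mem[] entries */

  /* Backing memory of the TSG context buffers, allocated once per TSG. */
  struct gr_ctx_sketch {
          struct nvgpu_mem_sketch mem[GR_CTX_BUF_COUNT_SKETCH];
  };

  /* Per-VM mapping state: one instance per address space (subcontext), so
   * later subcontexts only map the already-allocated buffers. */
  struct gr_ctx_mappings_sketch {
          struct vm_gk20a *vm;
          unsigned long long gpu_va[GR_CTX_BUF_COUNT_SKETCH];
  };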
Bug 3677982
Change-Id: I3cd5f1311dd85aad1cf547da8fa45293fb7a7cb3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2712222
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
set_patch_addr parameter to nvgpu_gr_ctx_set_patch_ctx was redundant.
Remove it.
Prepare new functions nvgpu_gr_ctx_set_hwpm_pm_mode to set PM mode,
nvgpu_gr_ctx_set_hwpm_ptr to set PM ptr in gr_ctx. Rename subctx
function to nvgpu_gr_subctx_set_hwpm_ptr.
This simplifies the logic in gr_gk20a_update_hwpm_ctxsw_mode to set
the PM mode and PM ptr. Channel loop is needed only for subcontexts.
Bug 3677982
Change-Id: I44acb09f6296ba8d510e278910188864f39e7157
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2743724
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit