Add these bits to the gpu characteristics flags:
NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_NO_JOBTRACKING - fast
submits with no in-kernel job tracking are supported.
NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_FULL - deterministic
submits are supported even with job tracking and num_inflight_jobs
set.
Either of these may be disabled if the particular channel or submit
requires features that preclude them.
Make gk20a_channel_sync_needs_sync_framework() take a gk20a pointer
instead of a channel pointer so that it can be called without a channel.
It does not need any per-channel data.
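As an illustration, a userspace client could probe for these
capabilities roughly as follows. This is a minimal sketch: the ioctl
and struct names follow the nvgpu UAPI header as I understand it, and
pick_submit_mode() itself is a hypothetical helper.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/nvgpu.h>

    /* Returns 2 for full deterministic submit, 1 for the no-jobtracking
     * fast path, 0 if neither flag is advertised, -1 on ioctl failure. */
    static int pick_submit_mode(int ctrl_fd)
    {
            struct nvgpu_gpu_characteristics chars = { 0 };
            struct nvgpu_gpu_get_characteristics req = {
                    .gpu_characteristics_buf_size = sizeof(chars),
                    .gpu_characteristics_buf_addr = (uintptr_t)&chars,
            };

            if (ioctl(ctrl_fd, NVGPU_GPU_IOCTL_GET_CHARACTERISTICS, &req))
                    return -1;

            if (chars.flags & NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_FULL)
                    return 2;
            if (chars.flags & NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_NO_JOBTRACKING)
                    return 1;
            return 0;
    }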
Bug 20029130
Bug 200274674
Change-Id: I5f82510b6d39b53bcf6f1006dd83bdd9053963a0
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1456845
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit ee9733e587 in
dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/1558993
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
If the platform probe fails with EPROBE_DEFER, it should be
reported with dev_info instead of dev_err.
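In spirit, the reporting pattern is the following sketch; the wrapper
name and messages are illustrative, not the actual probe code.

    #include <linux/device.h>
    #include <linux/errno.h>

    /* nvgpu_report_probe_result() is a hypothetical wrapper for illustration. */
    static int nvgpu_report_probe_result(struct device *dev, int err)
    {
            if (err == -EPROBE_DEFER)
                    dev_info(dev, "probe deferred\n");
            else if (err)
                    dev_err(dev, "probe failed: %d\n", err);
            return err;
    }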
Bug 1926777
Change-Id: Iba4392abdd6089da9678695b8ee7f2c92bea1505
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: http://git-master/r/1492711
(cherry picked from commit 896dc2b1b979107f968ba91582210216ab8800b7 in
dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/1550437
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Add a new sysfs node, pd_max_batches, for setting the max batches
value in the NV_PGRAPH_PRI_PD_AB_DIST_CONFIG_1_MAX_BATCHES register,
which controls the maximum number of batches per alpha-beta transition
stored in PD.
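A sketch of the store side of such a node is below; get_gk20a(), the
cached field name and the value limit are assumptions for illustration
only.

    static ssize_t pd_max_batches_store(struct device *dev,
                                        struct device_attribute *attr,
                                        const char *buf, size_t count)
    {
            struct gk20a *g = get_gk20a(dev);
            unsigned long val;

            if (kstrtoul(buf, 10, &val) || val > 255)
                    return -EINVAL;

            /* cached here; programmed into ..._MAX_BATCHES when gr is set up */
            g->pd_max_batches = val;
            return count;
    }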
Bug 1927124
Change-Id: I2817f2d70dab348d8b0b8ba19bf1e9b9d23ca907
Signed-off-by: Sandeep Shinde <sashinde@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1544104
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
GVS: Gerrit_Virtual_Submit
NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST causes the host to expire
the current timeslice and reschedule from the front of the runlist.
This can be used with NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH to make a
channel start sooner after submit rather than waiting for natural
timeslice expiration or for the currently running channel to block or
finish.
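A minimal userspace sketch of passing the flag on submit; the struct
and ioctl names follow the nvgpu UAPI, while channel setup and gpfifo
construction are assumed to exist already.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/nvgpu.h>

    static int submit_with_reschedule(int channel_fd,
                                      struct nvgpu_gpfifo *entries,
                                      uint32_t num_entries)
    {
            struct nvgpu_submit_gpfifo_args args = { 0 };

            args.gpfifo = (uintptr_t)entries;
            args.num_entries = num_entries;
            /* ask host to expire the current timeslice and reschedule from
             * the front of the runlist */
            args.flags = NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST;

            return ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_SUBMIT_GPFIFO, &args);
    }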
Bug 1968813
Change-Id: I632e87c5f583a09ec8bf521dc73f595150abebb0
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1537218
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
When compiling the kernel with an O= option (which stores built files
outside the source tree), the kernel adds various extra source paths to
the system include path. However, this doesn't happen when building
in-tree.
Adjust the main nvgpu Makefile to ensure all required paths are part of
the system include path so that all headers can be found.
Bug 1978395
Change-Id: I51ffc78b3863b89ebb5f051c963a8016258534a3
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1544320
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The new SET_BES_CROP_DEBUG3 sw method is used to flip two fields
in the NV_PGRAPH_PRI_BES_CROP_DEBUG3 register. The sw method is
used by the user space driver to disable enough ROP optimizations
to maintain ZBC state of target tiles.
Bug 1942454
Change-Id: Id4e4d9d06c6c66080d06b6d4694546fe5cba8436
Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1516202
(cherry picked from commit d3415f27c4)
Reviewed-on: https://git-master.nvidia.com/r/1520447
Reviewed-by: Donghan Ryu <dryu@nvidia.com>
Reviewed-by: Mathias Heyer <mheyer@nvidia.com>
Tested-by: Mathias Heyer <mheyer@nvidia.com>
msleep is not recommended for short delays (1 ms - 20 ms), so use
usleep_range instead to get a more deterministic sleep time.
Also fix the print for target_ref_count, which can be either 2 or
1 depending on whether GPU rail-gating is enabled.
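The pattern being applied, as a sketch; the polling condition and the
sleep bounds are illustrative, not the exact hunk.

    #include <linux/delay.h>
    #include <linux/jiffies.h>

    /* poll_until_railgated() is a hypothetical example of the pattern. */
    static void poll_until_railgated(bool (*is_railgated)(void),
                                     unsigned long timeout_jiffies)
    {
            unsigned long end = jiffies + timeout_jiffies;

            while (!is_railgated() && time_before(jiffies, end))
                    usleep_range(1000, 2000);   /* 1-2 ms instead of msleep(1) */
    }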
Bug 200294536
Change-Id: I26c9ed8a1badc84db5efa89347a227e6b46f603c
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/1500409
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
In gr_gk20a_handle_sm_exception(), we disable all SM exceptions
if SM debug mode is set, irrespective of the exception type.
But we should not disable SM exceptions if the only pending
exception is BPT_INT.
Fix this by checking whether BPT_INT is the only pending
interrupt and, if so, not disabling SM exceptions.
Note that for the rest of the exceptions we still need to
disable SM exceptions.
Also, remove redundant checks of sm_debugger_attached, since
we bail out early anyway if this flag is not set.
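The check, roughly; the accessor name follows the generated gr
hardware headers, so treat this as a sketch rather than the exact
hunk.

    /* True when BPT_INT is the only bit pending in the SM global ESR;
     * assumes the hw_gr header providing the _bpt_int_pending_f() field. */
    static bool sm_exception_is_only_bpt_int(u32 global_esr)
    {
            return (global_esr &
                    ~gr_gpc0_tpc0_sm_hww_global_esr_bpt_int_pending_f()) == 0;
    }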
Bug 200264850
Change-Id: I7732567273fc88f6c98f25372fd8619d92339734
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1487040
(cherry picked from commit f76febb962)
Reviewed-on: http://git-master/r/1496601
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
In order to perform timestamp correlation for FECS traces, we need
to collect CPU / GPU timestamp samples. In the virtualization case,
a guest can get GPU timestamps by using read_ptimer. However, if the
CPU timestamp is read on the guest side and the GPU timestamp is
read on the vm-server side, the extra latency creates an artificial
offset in the GPU timestamps (~2 us on average). For better
CPU / GPU timestamp correlation, add a command to collect all
timestamps on the vm-server side.
Bug 1900475
Change-Id: Idfdc6ae4c16c501dc5e00053a5b75932c55148d6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1472447
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The gk20a_pm_resume() API is called during system resume.
While unrailgating in this path, we request MAX EMC, but we
never request a lower frequency after boot completes.
To fix this, call gk20a_scale_notify_busy() in gk20a_pm_resume()
after boot completes so that we request a lower EMC frequency
corresponding to the current clock value.
Also, add a corresponding gk20a_scale_notify_idle() call to
gk20a_pm_suspend().
Bug 200308543
Change-Id: I3859222abdf67805673467f383981be519a1cece
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1484660
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Before this commit, the call to EMC BWMGR in the postscale procedure
was skipped when the GPU rail is ON (i.e. the GPU is running). It
should be the other way around: the call should be skipped when the
GPU is rail-gated.
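The corrected condition, sketched; the railgate query, the BWMGR
handle field and the rate plumbing are assumptions for illustration.

    /* gk20a_postscale_sketch() is illustrative; the real code lives in the
     * platform postscale callback. */
    static void gk20a_postscale_sketch(struct device *dev,
                                       struct gk20a_platform *platform,
                                       unsigned long emc_rate)
    {
            /* skip the EMC BWMGR call while the GPU is rail-gated */
            if (platform->is_railgated && platform->is_railgated(dev))
                    return;

            tegra_bwmgr_set_emc(platform->bwmgr_handle, emc_rate,
                                TEGRA_BWMGR_SET_EMC_FLOOR);
    }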
Bug 200267304
Change-Id: Id4da84b3d0ed0606017cc53a58e2917d486fa13e
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/1479769
(cherry picked from commit ab22d66386)
Reviewed-on: http://git-master/r/1482812
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
Tested-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
We use g->busy_lock in gk20a_do_idle() to prevent submitting
more jobs to h/w and to wait for currently running jobs to
finish.
But requesting this lock in gk20a_idle() prevents the runtime
counter from being decremented, and hence gk20a_do_idle() can
time out with the prints below:
[ 148.904739] gk20a 17000000.gp10b: Timeout detected @
gk20a_do_idle+0x30/0x38
[ 148.912185] gk20a 17000000.gp10b: __gk20a_do_idle: failed to idle -
refcount 4 != 1
Hence, skip requesting this lock in gk20a_idle().
Bug 200294536
Change-Id: I060075fdee1b68e1b5fa11baa44a3f5ce4917d94
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1480777
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This change adds a new sysfs node to allow configuring CZF_BYPASS, to
enable platforms with low context-switching latency requirements.
/sys/devices/17000000.gp10b/czf_bypass
Values:
0 - always
1 - lateZ (default)
2 - single pass
3 - never
The specified value will apply only to newly allocated contexts.
Bug 1914014
Change-Id: Ibb9a8e86089acaadaa7260b00eedec5c80762d6f
Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-on: http://git-master/r/1478567
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Donghan Ryu <dryu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
When CONFIG_DEBUG_FS is not set, there is a lot of gpu_dbg_info spew.
To reduce the debug logs, set GK20A_DEFAULT_DBG_MASK to 0.
Bug 1885240
Change-Id: I3f60ce1b205b316641228a34fa791df7fab48c9e
Signed-off-by: Jake Park <jakep@nvidia.com>
Reviewed-on: http://git-master/r/1474997
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Kamal Balagopalan <kbalagopalan@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
dev_info() is used in place of dev_err() when secure buffer
allocation fails.
Bug 200302186
Change-Id: Ia43925985b0a0a79e03863b91e475068ee8449c9
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: http://git-master/r/1474476
Tested-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
It is currently possible to set GPCCLK lower than the
minimum allowed frequency.
Clip the target GPCCLK/MCLK to the valid min/max range in the
arbiter. We could do this before submitting the request to the
arbiter, but then we would lose information about the requested
target frequency. Instead, we cache the clock range in the
arbiter context and check the target frequency when running the
arbiter.
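The clamping itself is just a range clip when the arbiter runs, e.g.
the sketch below; the names of the cached-range fields that feed it
are assumptions.

    /* Clip a requested frequency to the [min, max] range cached in the
     * arbiter context. */
    static u16 nvgpu_clk_arb_clamp_mhz(u16 target_mhz, u16 min_mhz, u16 max_mhz)
    {
            if (target_mhz < min_mhz)
                    return min_mhz;
            if (target_mhz > max_mhz)
                    return max_mhz;
            return target_mhz;
    }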
Bug 200288036
Change-Id: I29f5176e6365a926d1041430c05a63f0c8447e2b
Reviewed-on: http://git-master/r/1460834
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
(cherry picked from commit eb626903e4fc046fe1f0eaee703c857e9a0f2b4d)
Reviewed-on: http://git-master/r/1461882
(cherry picked from commit 3615061045276c8913057e1c0e8cc443b70d2ad9)
Reviewed-on: http://git-master/r/1467627
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
In gk20a_pm_shutdown(), we do not check the return value of
gk20a_pm_prepare_poweroff().
In some cases gk20a_pm_prepare_poweroff() can return -EBUSY
(for example, if engines are busy); we then fail to clean up
the s/w state and directly trigger GPU railgate.
If some interrupt is triggered simultaneously, we try to access
a register while the GPU is already railgated.
This leads to a hard hang in the nvgpu shutdown path.
Make the following changes in the shutdown sequence to fix this:
- check the return value of gk20a_wait_for_idle()
- disable activity on all engines with
gk20a_fifo_disable_all_engine_activity()
- ensure engines are idle with gk20a_fifo_wait_engine_idle()
- check the return value of gk20a_pm_prepare_poweroff()
- check the return value of gk20a_pm_railgate()
Add a print when we bail out early because the GPU is already
railgated.
Also, skip shutdown in the VGPU case since we do not need to
clean up the virtual GPU, and the RM server will take care of
cleaning up the h/w resources.
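A simplified sketch of the resulting ordering; the exact helper
signatures and error handling are condensed for illustration.

    static void gk20a_pm_shutdown_sketch(struct device *dev, struct gk20a *g)
    {
            if (gk20a_gpu_is_virtual(dev))
                    return;         /* vgpu: RM server cleans up h/w resources */

            if (gk20a_wait_for_idle(dev))
                    goto bail;
            if (gk20a_fifo_disable_all_engine_activity(g, true))
                    goto bail;
            if (gk20a_fifo_wait_engine_idle(g))
                    goto bail;
            if (gk20a_pm_prepare_poweroff(dev))
                    goto bail;
            if (gk20a_pm_railgate(dev))
                    goto bail;
            return;

    bail:
            dev_warn(dev, "shutdown aborted: GPU not idle or already railgated\n");
    }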
Bug 200281010
Change-Id: I2856f9be6cd2de9b0d3ae12955cb1f0a2b6c29be
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1454658
(cherry picked from commit 6de456f840)
Reviewed-on: http://git-master/r/1461150
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
For dGPU, the instance block is in vidmem, and context_ptr was not
properly computed, leading to pid=0 being reported in FECS traces.
Use gk20a_mm_inst_block_addr(), which handles all cases, to determine
the instance block physical address.
Bug 1899195
Change-Id: If003d9f00aff66d808e66c06baf6ded38699981a
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1461646
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This register is context_reset, and the value is context-switched
with GPU contexts.
Bug 1855307
Bug 1871606
Change-Id: I4f871eaf94b4bb589ae179fc0ea972943acd0881
Signed-off-by: Jon McCaffrey <jmccaffrey@nvidia.com>
Reviewed-on: http://git-master/r/1452947
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Donghan Ryu <dryu@nvidia.com>
(cherry picked from commit f02591a23ab575d8592d3b2bebf94163c1323f17)
Reviewed-on: http://git-master/r/1459948
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Getting a timeout on the kernel's own CE channels is unrecoverable.
Vidmem freeing also depends on CE clearing pages that have been
used so that they can be reused.
Disable the watchdog on the kernel's CE channels.
Bug 200287270
Change-Id: I87e0aa925d6d20485a5a19d2a6bfd050de34e968
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1454208
(cherry picked from commit 2ff3a9f374)
Reviewed-on: http://git-master/r/1457402
GVS: Gerrit_Virtual_Submit
The nvgpu timer API prints a message when the timer expires, but
expiration does not necessarily mean here that the job has actually
timed out; that is tested by comparing gp_get. Change the expiration
check to just peek instead of using the default, which prints to the
log on expiration.
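In other words, roughly the sketch below; the gp_get bookkeeping is
shown only schematically and the helper itself is hypothetical.

    /* Only a real lack of gp_get progress counts as a job timeout; the
     * timer expiry is checked quietly via peek, so no log spew. */
    static bool job_timed_out(struct nvgpu_timeout *timeout,
                              u32 prev_gp_get, u32 new_gp_get)
    {
            return nvgpu_timeout_peek_expired(timeout) &&
                   new_gp_get == prev_gp_get;
    }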
Bug 1887569
Bug 200291842
Jira NVGPU-21
Change-Id: Ifde34cff701eaed2f3ea727dba3ec8affeef26b9
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1329731
(cherry picked from commit 580c8112f0)
Reviewed-on: http://git-master/r/1452969
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
On unbind we need to check that interrupts are complete before tearing
down the interrupt threads, but on vgpu those structures are not
initialized as they are managed by the server.
This change makes sure we do not try to free those resources on vgpu
shutdown.
Bug 200293510
JIRA: EASS-1753
Change-Id: I77cb8594e1ad2c53f632e18b0dfc88f784a815e4
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
(cherry picked from commit 1a640fa6a3b41c3de7d63e14ee6770679e2c82af)
Reviewed-on: http://git-master/r/1330770
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Jinyoung Park <jinyoungp@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
This is a quick fix. Eventually, the common probe code should be put
in a common function shared between vgpu and native gpu.
Bug 200293437
Jira EVLR-1152
Change-Id: I55f0d179d7adba556e0cb404766e14405b3e27e5
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1330229
(cherry picked from commit 7691902fec8abdd621ee17561607efeef615499f)
Reviewed-on: http://git-master/r/1331606
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
In gk20a_suspend_all_sms(), we currently loop over all GPCs
and then, in the inner loop, over all TPCs.
But this is incorrect and leads to SMs with invalid GPC,TPC
ids.
Fix this by looping over the number of TPCs in each GPC in the
inner loop (see the sketch after this list).
Also, fix gk20a_gr_wait_for_sm_lock_down() as follows:
- we currently wait infinitely for the SM to lock down
- restrict this wait with a timeout on silicon platforms
- return ETIMEDOUT instead of EAGAIN
- add more debug prints with additional data
for SM lock down failures
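The corrected loop shape, as a sketch; gpc_count and gpc_tpc_count
follow struct gr_gk20a, and the per-SM body is elided.

    static void suspend_all_sms_sketch(struct gr_gk20a *gr)
    {
            u32 gpc, tpc;

            for (gpc = 0; gpc < gr->gpc_count; gpc++) {
                    /* iterate only over the TPCs present in this GPC */
                    for (tpc = 0; tpc < gr->gpc_tpc_count[gpc]; tpc++) {
                            /* suspend / lock down SM (gpc, tpc) here */
                    }
            }
    }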
Bug 200258704
Change-Id: Id6fe32e579647fd8ac287a4b2ec80cbf98791e0d
Signed-off-by: Cory Perry <cperry@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1316471
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-on: http://git-master/r/1322373
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This change refactors the teardown in remove to ensure that it is
possible to unload the driver while leaving fds open. This is achieved
by keeping the SW state alive until all fds are closed and by making
subsequent ioctl calls fail after the teardown.
Normally, this would be achieved by calls into gk20a_busy(), but in
kickoff we don't call into that in order to reduce latency, so we need
to check the driver status directly, and also in some other functions,
as we need to make sure the ioctl does not dereference the device or
platform struct.
bug 200277762
JIRA: EVLR-1023
Change-Id: I163e47a08c29d4d5b3ab79f0eb531ef234f40bde
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1320219
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
(cherry picked from commit e0f2afe5eb)
Reviewed-on: http://git-master/r/1327755
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com>
After driver remove, the device structure passed to gk20a_busy() can
be invalid. To solve this, the prototype of the function is modified
to take the gk20a struct instead of the device pointer.
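The shape of the change, as a sketch; the gk20a_idle() counterpart is
assumed to follow the same convention.

    /* Prototype after the change: */
    int gk20a_busy(struct gk20a *g);    /* was: int gk20a_busy(struct device *dev); */
    void gk20a_idle(struct gk20a *g);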
bug 200277762
JIRA: EVLR-1023
Change-Id: I08eb74bd3578834d45115098ed9936ebbb436fdf
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1320194
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
(cherry picked from commit 2a502bdd5f)
Reviewed-on: http://git-master/r/1327754
Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com>
In the past, we had separate calls for platform busy and channel
busy, but those got removed. The result is that in the debugger code
we essentially have a double busy call in the powergating
enable/disable path.
This change removes it.
bug 200277762
JIRA: EVLR-1023
Change-Id: Iba70b81700f27b847e1d0222fb69ed1a7a883342
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1323220
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-on: http://git-master/r/1327753
Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com>
The fifo interrupt path was reading the PBDMA interrupt status
after clearing interrupts and this could lead to a situation in
which the host may have advanced to another channel, leading to
the recovery code resetting the wrong channel.
Bug 200278729
JIRA: EVLR-1036
Change-Id: I392423d1eaa8d23acf88454bf113c015e649e13d
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1326461
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
(cherry picked from commit ab401c7068)
Reviewed-on: http://git-master/r/1327292
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Support for hwpm reservations in the virtual case:
- Add session ops for checking and setting global and context reservations, and
releasing reservations
- in the native case, these just update reservation counts and flags
- in the vgpu case, when the reservation count is 0, check with the RM server
that a reservation is possible: for global reservations, no other guest
can have a reservation; for context reservations, no other guest can have
a global reservation
- in the vgpu case, when the reservation count is decremented to 0, notify
the RM server that the guest no longer has any reservations
Bug 1775465
JIRA VFND-3428
Change-Id: Idf115b730e465e35d0745c96a8f8ab6b645c7cae
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1326166
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The main driver structure is not refcounted properly, so when the
driver unloads, file descriptors associated with the driver are kept
open with dangling references to the main object.
This change adds refcounting to the gk20a structure.
bug 200277762
JIRA: EVLR-1023
Change-Id: Id892e9e1677a344789e99bf649088c076f0bf8de
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1317420
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
(cherry picked from commit 74fe1caa2b)
Reviewed-on: http://git-master/r/1324637
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com>
The driver is not properly tearing down the arbiter on the PCI driver
unload. This change makes sure that the workqueues are drained before
tearing down the driver.
bug 200277762
JIRA: EVLR-1023
Change-Id: If98fd00e27949ba1569dd26e2af02b75897231a7
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1320147
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
(cherry picked from commit 469308beca)
Reviewed-on: http://git-master/r/1324636
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com>
We have a gk20a_busy() call in gk20a_buffer_convert_gpu_to_cde(),
and we call gk20a_busy() again in gk20a_submit_channel_gpfifo().
If gk20a_do_idle() is triggered in between these two calls, this
leads to a deadlock and results in an idle failure.
Hence, to avoid this, take the power refcount in a more fine-grained
way, i.e. in gk20a_cde_convert() instead of in
gk20a_buffer_convert_gpu_to_cde().
Keep gk20a_cde_execute_buffer() outside the gk20a_busy()/
gk20a_idle() pair since we take the power refcount in the submit
path anyway.
Add a correct error handling path in gk20a_cde_convert().
Bug 200287073
Change-Id: Iffea2d4c03f42b47dbf05e7fe8fe2994f9c6b37c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1324329
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
In gk20a_cde_convert(), we hold cde_app.mutex for almost
everything, which is unnecessary.
This also causes a deadlock scenario when gk20a_do_idle() is
called.
In some cases it is possible that Thread 1 holds the lock and is
trying to power on the GPU, while Thread 2 is trying to power off
the GPU and then grab cde_app.mutex to clean up the GPU state.
To fix this, grab the mutex only for gk20a_cde_get_context() and
then release it.
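The narrowed critical section, roughly; the lock calls and helper
wrapper are illustrative rather than the exact hunk.

    /* cde_get_context_locked() is a hypothetical wrapper for illustration. */
    static struct gk20a_cde_ctx *cde_get_context_locked(struct gk20a *g)
    {
            struct gk20a_cde_ctx *cde_ctx;

            /* hold cde_app.mutex only while picking a context */
            mutex_lock(&g->cde_app.mutex);
            cde_ctx = gk20a_cde_get_context(g);
            mutex_unlock(&g->cde_app.mutex);

            return cde_ctx;
    }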
Bug 200287073
Change-Id: Ic4856604652d0d4024abdeb5c2f8f03910c601a5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1324328
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
JIRA: EVLR-1004
(*) Refactor the non-stalling interrupt path to execute the clear in
the top half, so that in the dGPU case processing of stalling
interrupts does not block the non-stalling ones.
(*) Use a worker thread to do semaphore wakeups and allow batching of
the non-stalling operations.
(*) Fix a bug where some GPUs do not properly track the completion
of interrupts, preventing safe driver unloads.
Change-Id: Icc90a3acba544c97ec6a9285ab235d337ab9eefa
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1312796
Reviewed-on: http://git-master/r/1320848
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sumeet Gupta <sumeetg@nvidia.com>
Currently, callbacks from the PM_QOS framework (for thermal events)
result in an RPC call to set the GPU frequency.
Since the governor will now be responsible for setting the desired
rate, the max PM_QOS callback will now cap the possible GPU
frequency with a new RPC call to the server. The server is
responsible for setting the ultimate frequency based on the cap and
desired rates.
Jira VFND-3699
Change-Id: I806e309c40abc2f1381b6a23f2d898cfe26f9794
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1295543
(cherry picked from commit e81693c6e087f8f10a985be83715042fc590d6db)
Reviewed-on: http://git-master/r/1282467
(cherry picked from commit 7b4e0db647572e82a8d53e823c36b465781f4942)
Reviewed-on: http://git-master/r/1321836
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add devfreq governor support in order to allow frequency scaling in
the virtualization config. GPU clock frequency operations are
redirected to the server over RPC.
Bug 200237433
Change-Id: I1c8e565a4fff36d3456dc72ebb20795b7822650e
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1295542
(cherry picked from commit d5c956fc06697eda3829c67cb22987e538213b29)
Reviewed-on: http://git-master/r/1280968
(cherry picked from commit 25e2b3cf7cb5559a6849c0024d42c157564a9be2)
Reviewed-on: http://git-master/r/1321835
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a_channel_clean_up_jobs(), move the removal of the job from the
channel's job list to before the fences are cleaned up; this prevents
gk20a_channel_abort() from asynchronously trying to dereference an
already-freed job.
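The reordering, schematically; the lock helpers and list field name
are illustrative, the point being that the job leaves the list before
its fences are released.

    static void clean_up_job_sketch(struct channel_gk20a *c,
                                    struct channel_gk20a_job *job)
    {
            /* 1) unlink the job from the channel's job list first... */
            channel_gk20a_joblist_lock(c);          /* illustrative lock helper */
            list_del_init(&job->list);
            channel_gk20a_joblist_unlock(c);

            /* 2) ...and only then release the fences, so a concurrent
             *    gk20a_channel_abort() can no longer find this job. */
            gk20a_fence_put(job->pre_fence);
            gk20a_fence_put(job->post_fence);
    }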
Bug 1844305
JIRA EVLR-849
Change-Id: I1ba05237aa74be1350007630bfa5eba9988f859a
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
(cherry picked from commit 2a9ce58b1b318b95ecfcdf78462f918d090eab99)
Reviewed-on: http://git-master/r/1319026
(cherry picked from commit 990f070b0a363159ce1b21f936b7512f469018ca)
Reviewed-on: http://git-master/r/1322332
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Query the RM server to retrieve the guest's GPU preemption policy
based on the PCT configuration. If the guest is not allowed to
request wfi preemption mode, then set the context with either gfxp
or cta preemption mode only.
Jira VFND-3079
Jira VFND-3081
Change-Id: I60cbf121d6f0e2373568cf40b3dfdb4df76fe02d
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: http://git-master/r/1280903
(cherry picked from commit 16ee09bb59)
Reviewed-on: http://git-master/r/1321661
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add support for clearing a single SM's error state for the CUDA
debugger. In addition to clearing the local copy of the SM error
state, vgpu_gr_clear_sm_error_state now sends a command to the RM
server (TEGRA_VGPU_CMD_CLEAR_SM_ERROR_STATE) to clear the global
ESR and warp ESR.
Bug 1791111
Change-Id: I3a1f0644787fd900ec59a0e7974037d46a603487
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1296311
(cherry picked from commit fd07e03c3d086f396e4d65575c576a4dd68c920a)
Reviewed-on: http://git-master/r/1299060
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Cory Perry <cperry@nvidia.com>
Tested-by: Cory Perry <cperry@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add the ability to suspend/resume contexts for a debug session
(NVGPU_DBG_GPU_IOCTL_SUSPEND_RESUME_CONTEXTS) in the virtualized
case:
- added hal function to resume contexts.
- added vgpu support for suspend contexts, i.e. build a list
of channel ids, and send TEGRA_VGPU_CMD_SUSPEND_CONTEXTS
- added vgpu support for resume contexts, i.e. build a list
of channel ids, and send TEGRA_VGPU_CMD_RESUME_CONTEXTS
Bug 1791111
Change-Id: Icc1c00d94a94dab6384ac263fb811c00fa4b07bf
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1294761
(cherry picked from commit d17a38eda312ffa92ce92e5bafc30727a8b76c4e)
Reviewed-on: http://git-master/r/1299059
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Cory Perry <cperry@nvidia.com>
Tested-by: Cory Perry <cperry@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>