Multiple threads could be unbinding different channels from
the same tsg at the same time. At the point where we
remove the channel from the tsg's channel list, call
disable_channel again, in case another thread had
re-enabled the channel after we had disabled it.
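A simplified sketch of the resulting unbind path, using generic kernel
locking/list primitives; the struct layout and the disable_channel()
helper are stand-ins for the nvgpu ones, not the exact definitions:

#include <linux/list.h>
#include <linux/rwsem.h>

struct channel_gk20a {
        struct list_head ch_entry;
};

struct tsg_gk20a {
        struct rw_semaphore ch_list_lock;       /* protects ch_list */
        struct list_head ch_list;
};

/* Stand-in stub; the real helper disables the channel in hardware. */
static void disable_channel(struct channel_gk20a *ch)
{
        (void)ch;
}

static void unbind_channel_from_tsg(struct tsg_gk20a *tsg,
                                    struct channel_gk20a *ch)
{
        /* First disable, done early in the unbind sequence. */
        disable_channel(ch);

        down_write(&tsg->ch_list_lock);
        list_del_init(&ch->ch_entry);
        /*
         * Disable again while holding the list lock: another thread
         * unbinding a different channel from the same tsg may have
         * re-enabled this channel after our first disable.
         */
        disable_channel(ch);
        up_write(&tsg->ch_list_lock);
}
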
Bug 200404549
Change-Id: I9abbc08dc11fe1f7a0abada88376c0ef96b56610
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083337
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Satish Arora <satisha@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The channel timeout ends up in a strange state during timeout handling
for a brief moment; it can become stopped and started again, and the
timeout lock is released in the middle. Add a more explicit rewind
function that resets the timeout to its start if it's active. The
active check allows it to be used from
gk20a_channel_timeout_restart_all_channels(), so that's also modified.
Also replace the return statements with more readable control flow in
gk20a_channel_timeout_handler().
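A minimal sketch of what such a rewind helper can look like; the struct
and function names here are illustrative, not the exact nvgpu ones:

#include <linux/jiffies.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Illustrative per-channel timeout bookkeeping. */
struct ch_timeout_state {
        spinlock_t lock;
        bool running;                   /* is the timeout currently armed? */
        unsigned long start_jiffies;    /* when the current period began */
};

/*
 * Rewind the timeout back to "now", but only if it is running. Because the
 * active check is done under the lock, the same helper can be called from
 * both the timeout handler and a restart-all-channels loop without races.
 */
static void ch_timeout_rewind(struct ch_timeout_state *t)
{
        spin_lock(&t->lock);
        if (t->running)
                t->start_jiffies = jiffies;
        spin_unlock(&t->lock);
}
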
Bug 200484795
Change-Id: Ia7d67242dfc149ace1f4f841a837e90b6c985308
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989327
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
(cherry picked from commit 8979a97af3
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2017922
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Tested-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the has_timedout variable is protected by a wmb where it is
set, but there is no corresponding rmb where it is read, which makes
concurrent execution error-prone. This change fixes that.
Rename the has_timedout field of the channel struct to ch_timedout.
Also, to avoid an rmb on every read of ch_timedout, add
ch_timedout_spinlock to protect the ch_timedout variable against
concurrent access.
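A sketch of the access pattern after the change; the struct and helper
names are simplified for illustration and are not the exact nvgpu code:

#include <linux/spinlock.h>
#include <linux/types.h>

/* Only the fields relevant to this change are shown. */
struct channel_gk20a {
        bool ch_timedout;                 /* was: has_timedout + bare wmb() */
        spinlock_t ch_timedout_spinlock;  /* serializes readers and writers */
};

/* Writers take the lock instead of issuing a write barrier. */
static void channel_set_timedout(struct channel_gk20a *ch, bool timedout)
{
        spin_lock(&ch->ch_timedout_spinlock);
        ch->ch_timedout = timedout;
        spin_unlock(&ch->ch_timedout_spinlock);
}

/* Readers no longer need a matching rmb(); the lock orders the accesses. */
static bool channel_check_timedout(struct channel_gk20a *ch)
{
        bool timedout;

        spin_lock(&ch->ch_timedout_spinlock);
        timedout = ch->ch_timedout;
        spin_unlock(&ch->ch_timedout_spinlock);
        return timedout;
}
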
Bug 2404865
Bug 2092051
Change-Id: I0bee9f50af0a48720aa8b54cbc3af97ef9f6df00
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1930935
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 1f54ea09e3
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2016975
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
preempt_channel needs to use the channel to pass it to other
public functions, get access to a tsg, etc. This qualifies it to take a
pointer to a channel as an input parameter instead of a chid.
Increment the channel ref counter using the function
gk20a_channel_from_id in functions where we get the chid from the h/w
registers directly. Once the preempt_channel function call is done,
call gk20a_channel_put on the referenced channel.
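A hedged sketch of a typical call site after this change;
gk20a_channel_from_id() and gk20a_channel_put() are the helpers named in
the commit, while the wrapper name and the exact preempt signature are
assumptions made for illustration:

#include <linux/types.h>

struct gk20a;
struct channel_gk20a;

/* Signatures assumed for illustration. */
struct channel_gk20a *gk20a_channel_from_id(struct gk20a *g, u32 chid);
void gk20a_channel_put(struct channel_gk20a *ch);
int gk20a_fifo_preempt_channel(struct gk20a *g, struct channel_gk20a *ch);

static void preempt_channel_from_hw_chid(struct gk20a *g, u32 chid)
{
        /* chid was read straight from a h/w register: take a reference. */
        struct channel_gk20a *ch = gk20a_channel_from_id(g, chid);

        if (ch == NULL)
                return; /* invalid chid or channel already dead */

        /* preempt_channel now takes the channel pointer, not the chid. */
        gk20a_fifo_preempt_channel(g, ch);

        /* Drop the reference once the preempt call is done. */
        gk20a_channel_put(ch);
}
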
Jira NVGPU-1461
Change-Id: I6c87c8104cfcb418d468c8c590087fd4aeabf4bd
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1963200
(cherry picked from commit 9abe9fe062
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2013728
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function gk20a_fifo_recover_tsg has to pass a valid struct tsg to
the functions it calls. This qualifies it to take a pointer to
struct tsg_gk20a as an input parameter.
The TSG-specific parts of gk20a_fifo_preempt_timeout_rc are now moved
into a new function, gk20a_fifo_preempt_timeout_rc_tsg, that takes a
tsg as input and passes it to gk20a_fifo_recover_tsg. The tsg pointer
is also used to enumerate the TSG's channels. The function
gk20a_fifo_preempt_timeout_rc now contains only channel-specific code.
Jira NVGPU-1461
Change-Id: Ice0a9921567841fb5586a7e4e010c442ca6cf172
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1961675
(cherry picked from commit e19cea7ab3
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2013726
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add gk20a_channel_from_id() to retrieve a channel, given a raw channel
ID, with a reference taken (or NULL if the channel was dead). This makes
it harder to mistakenly use a channel that's dead and thus uncovers bugs
sooner. Convert code to use the new lookup when applicable; work remains
to convert complex uses where a ref should have been taken but hasn't.
The channel ID is also validated against FIFO_INVAL_CHANNEL_ID; NULL is
returned for such IDs. This is often useful and does not hurt when
unnecessary.
However, this does not prevent the case where a channel is closed and
reopened again while someone holds a stale channel number. In all such
conditions the caller should already hold a reference.
The only conditions where a channel can be safely looked up by an id and
used without taking a ref are when initializing or deinitializing the
list of channels.
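Roughly, the new lookup boils down to the following sketch; the FIFO
bookkeeping types, the FIFO_INVAL_CHANNEL_ID value and the
gk20a_channel_get() signature below are simplified assumptions, with
gk20a_channel_get() standing for the existing per-channel ref-taking
helper that returns NULL for a dead channel:

#include <linux/types.h>

#define FIFO_INVAL_CHANNEL_ID   (~0U)   /* value assumed for illustration */

/* Simplified stand-ins for the FIFO bookkeeping. */
struct channel_gk20a {
        u32 chid;       /* placeholder field for this sketch */
};

struct fifo_gk20a {
        struct channel_gk20a *channel;  /* array of num_channels entries */
        u32 num_channels;
};

struct gk20a {
        struct fifo_gk20a fifo;
};

/* Existing helper: takes a reference, or returns NULL if the channel is dead. */
struct channel_gk20a *gk20a_channel_get(struct channel_gk20a *ch);

struct channel_gk20a *gk20a_channel_from_id(struct gk20a *g, u32 chid)
{
        /* Reject the "no channel" sentinel up front. */
        if (chid == FIFO_INVAL_CHANNEL_ID)
                return NULL;

        /* Reference taken here keeps callers from using a freed channel. */
        return gk20a_channel_get(&g->fifo.channel[chid]);
}
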
Jira NVGPU-1460
Change-Id: I0a30968d17c1e0784d315a676bbe69c03a73481c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955400
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 7df3d58750
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2008515
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The context switch timeout works by triggering a hardware timeout at 10
Hz. When handling these, we check whether a channel has actually timed
out. Currently the timeout limit can be shorter than the 10 Hz interval,
which always causes us to recover a channel but would also detect
progress if there was any in the interval.
Handling both situations at the same time would reuse the channel
pointer local to the function after a loop has finished and would cause
memory corruption. Fix this by making the two branches mutually
exclusive, and move the recover case to happen first because that's how
our tests assume things to work.
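The shape of the fix, paraphrased; all helper names below are
illustrative stand-ins rather than the actual nvgpu functions:

#include <linux/types.h>

struct gk20a;
struct tsg_gk20a;
struct channel_gk20a;

/* Illustrative stand-ins, not the actual nvgpu helpers. */
bool tsg_has_timedout_channel(struct gk20a *g, struct tsg_gk20a *tsg,
                              struct channel_gk20a **ch);
bool tsg_made_progress(struct gk20a *g, struct tsg_gk20a *tsg);
void recover_tsg_for_ctxsw_timeout(struct gk20a *g, struct tsg_gk20a *tsg,
                                   struct channel_gk20a *ch);
void restart_ctxsw_timeout(struct gk20a *g, struct tsg_gk20a *tsg);

static void handle_ctxsw_timeout(struct gk20a *g, struct tsg_gk20a *tsg)
{
        struct channel_gk20a *ch = NULL;

        if (tsg_has_timedout_channel(g, tsg, &ch)) {
                /* Recover first; this is the ordering the tests assume. */
                recover_tsg_for_ctxsw_timeout(g, tsg, ch);
        } else if (tsg_made_progress(g, tsg)) {
                /*
                 * The progress branch is now mutually exclusive with the
                 * recovery branch, so the channel pointer from the scan
                 * above is never reused after a later scan could have
                 * invalidated it.
                 */
                restart_ctxsw_timeout(g, tsg);
        }
}
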
Jira NVGPU-967
Bug 2502074
Change-Id: I26aa0fa7fd80ab42a9a1a93a6cca2cd29c9d3f3f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932449
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 8ac9a53d816a3d012a6948a9a96ac6db699c662d
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/1997597
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Tested-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- In the vGPU code path, vgpu_channel_abort_cleanup() does not obtain
a channel reference before using the channel structure
(channel_gk20a).
- vgpu_channel_abort_cleanup() is called by vgpu_intr_thread(), which
runs commands obtained from the RM server's interrupt queue.
- If gk20a_channel_release() runs before the guest receives the RM
server's notification to abort channel cleanup, the channel gets
freed before vgpu_channel_abort_cleanup() runs.
- However, because vgpu_channel_abort_cleanup() does not take an
explicit reference to the channel, it ends up accessing fields
(such as ch->g) that have been set to NULL, and thus we end up in a
crash.
- This patch explicitly takes a reference to the channel before
vgpu_channel_abort_cleanup() is called.
- If gk20a_channel_release() runs before vgpu_channel_abort_cleanup()
and ends up freeing the channel, we don't get a reference to the freed
channel in vgpu_channel_abort_cleanup(), and thus we return from the
function rather than continuing with a freed channel as was the case
previously.
Bug 200453473
JIRA EVLR-3411
Change-Id: I311043b2231336616b28246531cf8a0dc151b0cd
Signed-off-by: Deepak Bhosale <dbhosale@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932028
(cherry picked from commit b91228e506)
Reviewed-on: https://git-master.nvidia.com/r/1970807
Reviewed-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Karl Ding <kding@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This change adds RDMA support for Tegra iGPU.
1. The CUDA process allocates the memory and passes
the VA and size to the custom kernel driver.
2. The custom kernel driver maps the user-allocated
buf and does the DMA to/from it.
3. Only supports iGPU + cudaHostAlloc sysmem.
4. Works only for a given process.
5. The address should be sysmem page aligned and the size should
be a multiple of the sysmem page size.
6. The custom kernel driver must register a free_callback when the
get_page() function is called.
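An outline of what the custom kernel driver side can look like under the
flow above. The page-table type, nvidia_p2p_get_pages() and its exact
signature are assumptions made for illustration, not a verified export
list of this change:

#include <linux/types.h>

/* Assumed interface, for illustration only. */
struct nvidia_p2p_page_table;

extern int nvidia_p2p_get_pages(u64 vaddr, u64 size,
                                struct nvidia_p2p_page_table **page_table,
                                void (*free_callback)(void *data), void *data);

static struct nvidia_p2p_page_table *page_table;

/* Called back if the pinned pages go away (e.g. the owning process exits). */
static void rdma_free_callback(void *data)
{
        /* Tear down any DMA mappings built on top of page_table here. */
        page_table = NULL;
}

static int rdma_map_user_buf(u64 vaddr, u64 size)
{
        /*
         * vaddr/size come from the CUDA process (cudaHostAlloc'd sysmem),
         * must be sysmem-page aligned and a multiple of the sysmem page
         * size, and are only valid within that process.
         */
        return nvidia_p2p_get_pages(vaddr, size, &page_table,
                                    rdma_free_callback, NULL);
}
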
Bug 200438879
Signed-off-by: Preetham Chandru R <pchandru@nvidia.com>
Change-Id: I43ec45734eb46d30341d0701550206c16e051106
Reviewed-on: https://git-master.nvidia.com/r/1953780
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Instead of using only num_inflight_jobs to determine whether to
pre-allocate resources for a channel, use both the c->deterministic
flag and the number of in-flight jobs. Non-deterministic channels do
not require pre-allocated resources, and neither do deterministic
channels with 0 in-flight jobs (i.e. no kernel job tracking, a.k.a.
fast-path submits).
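The gist of the new check, paraphrased as a helper; the helper name and
the stand-in struct are made up for illustration, while c->deterministic
and the in-flight job count are the fields named above:

#include <linux/types.h>

/* Only the field relevant here is shown. */
struct channel_gk20a {
        bool deterministic;
};

static bool channel_needs_prealloc_resources(struct channel_gk20a *c,
                                             u32 num_inflight_jobs)
{
        /*
         * Only deterministic channels with kernel job tracking (non-zero
         * in-flight jobs) need pre-allocated resources. Non-deterministic
         * channels, and deterministic channels doing fast-path submits
         * (num_inflight_jobs == 0), do not.
         */
        return c->deterministic && (num_inflight_jobs != 0U);
}
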
Bug 2327792
Change-Id: I7e8eb0478c22e005ca2c46c555415afa0ded0be1
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850123
(cherry picked from commit 05ec7b80eb)
Reviewed-on: https://git-master.nvidia.com/r/1949219
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: James Norton <jnorton@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a reboot handler to make sure that nvgpu does not try to busy
the GPU if the system is going down; in that case, any number of
subsystems nvgpu depends on may already have been deinitialized.
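A minimal sketch of such a reboot handler using the standard kernel
notifier API; the real nvgpu handler additionally has to mark its
per-device state so later attempts to busy the GPU fail, which is elided
here, and the function names are made up for illustration:

#include <linux/init.h>
#include <linux/notifier.h>
#include <linux/reboot.h>

static int nvgpu_reboot_notify(struct notifier_block *nb,
                               unsigned long action, void *unused)
{
        /* Flag the driver as shutting down so nothing tries to busy the GPU. */
        return NOTIFY_DONE;
}

static struct notifier_block nvgpu_reboot_nb = {
        .notifier_call = nvgpu_reboot_notify,
};

static int __init reboot_handler_example_init(void)
{
        return register_reboot_notifier(&nvgpu_reboot_nb);
}

static void __exit reboot_handler_example_exit(void)
{
        unregister_reboot_notifier(&nvgpu_reboot_nb);
}
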
Bug 200333709
Bug 200454316
Change-Id: I2ceaf7ca4fb88643310874b5b26937ef44c6e3dd
Signed-off-by: Kary Jin <karyj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1927018
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinayak Pane <vpane@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When turning off CONFIG_DEBUG_FS, there are build errors:
drivers/gpu/nvgpu/os/linux/os_ops_gp106.o: In function `nvgpu_fecs_trace_init_debugfs':
os_ops_gp106.c:(.text+0x8): multiple definition of `nvgpu_fecs_trace_init_debugfs'
drivers/gpu/nvgpu/os/linux/os_ops_gp10b.o:os_ops_gp10b.c:(.text+0x0): first defined here
drivers/gpu/nvgpu/os/linux/os_ops_gv100.o: In function `gp106_therm_init_debugfs':
os_ops_gv100.c:(.text+0x0): multiple definition of `gp106_therm_init_debugfs'
drivers/gpu/nvgpu/os/linux/os_ops_gp106.o:os_ops_gp106.c:(.text+0x0): first defined here
drivers/gpu/nvgpu/os/linux/os_ops_tu104.o: In function `gv100_clk_init_debugfs':
os_ops_tu104.c:(.text+0x0): multiple definition of `gv100_clk_init_debugfs'
drivers/gpu/nvgpu/os/linux/os_ops_gv100.o:os_ops_gv100.c:(.text+0x10): first defined here
This is because those functions aren't marked as static.
So this patch simply marks them static.
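For reference, the failure mode and the fix in miniature; the parameter
type and the stub body are illustrative, what matters is the missing
static:

struct gk20a;   /* incomplete type is enough for the illustration */

/*
 * Before (simplified): a non-static definition compiled into several
 * os_ops_*.o files defines the same global symbol in each of them, and
 * the final link fails with "multiple definition of ...":
 *
 *      void nvgpu_fecs_trace_init_debugfs(struct gk20a *g) { ... }
 *
 * After: marking it static keeps the definition private to each
 * translation unit, which is all this patch does.
 */
static void nvgpu_fecs_trace_init_debugfs(struct gk20a *g)
{
        (void)g;        /* body irrelevant to the link error */
}
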
Bug 2284925
Change-Id: I1da39345c653dfb50c509adb0c822b4657646c56
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929355
(cherry picked from commit 0fd9c84f87)
Reviewed-on: https://git-master.nvidia.com/r/1933889
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinayak Pane <vpane@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMM type-specific broadcast->unicast expansion calculation
was using incorrect values, which caused invalid register accesses
to be generated.
This change HAL-ifies the values so that the expansion is performed
correctly.
Bug 200454109
Change-Id: I96c15de27b5e16e4db2e788fd98e6bf7d6e7d564
Signed-off-by: Matthew Braun <matthewb@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921717
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gops.fb.dump_vpr_wpr_info() accesses both VPR and WPR registers.
Split this into two different HALs: gops.fb.dump_vpr_info() and
gops.fb.dump_wpr_info().
Also unset the HALs accessing VPR registers on dGPUs, since we don't
support VPR on dGPUs.
Remove the fb_mmu_vpr_info_r() register and all its accessors from
the dGPU headers.
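A sketch of the resulting ops-table shape; the reduced gops.fb structure
and the per-chip function names below are illustrative assumptions, not
the exact nvgpu definitions:

struct gk20a;

/* Only the two HALs relevant to this change are shown. */
struct gpu_ops_fb {
        void (*dump_vpr_info)(struct gk20a *g);
        void (*dump_wpr_info)(struct gk20a *g);
};

/* Assumed common implementations, split out of dump_vpr_wpr_info(). */
void gm20b_fb_dump_vpr_info(struct gk20a *g);
void gm20b_fb_dump_wpr_info(struct gk20a *g);

/* iGPU: VPR is supported, so both HALs are populated. */
static const struct gpu_ops_fb igpu_fb_ops = {
        .dump_vpr_info = gm20b_fb_dump_vpr_info,
        .dump_wpr_info = gm20b_fb_dump_wpr_info,
};

/* dGPU: VPR is not supported, so only the WPR dump is hooked up. */
static const struct gpu_ops_fb dgpu_fb_ops = {
        .dump_vpr_info = NULL,
        .dump_wpr_info = gm20b_fb_dump_wpr_info,
};
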
Bug 2173122
Change-Id: I5b2712f8c5389e422a84c375a7e836add48bfd1c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850947
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We don't support big page size beginning with Pascal, so set the HAL
gops.fb.set_mmu_page_size() to NULL on all those platforms.
Also remove these accessors from the corresponding platforms:
fb_mmu_ctrl_use_pdb_big_page_size_v()
fb_mmu_ctrl_use_pdb_big_page_size_true_f()
fb_mmu_ctrl_use_pdb_big_page_size_false_f()
Bug 2173122
Change-Id: I7353412860a7a6f8a993ca9184a0dc3ca9d749af
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850946
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>