Now that GR buffers always have a kernel mapping, remove the unnecessary
calls to nvgpu_mem_begin() and nvgpu_mem_end() on these buffers:
- global ctx buffer mem in gr
- gr ctx mem in a tsg
- patch ctx mem in a gr ctx
- pm ctx mem in a gr ctx
- ctx_header mem in a channel (subctx header)
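An illustrative before/after at a typical write site (the call
site is hypothetical; nvgpu_mem_wr() is the existing accessor):

    /* before: CPU access was bracketed in case the buffer had
     * no kernel mapping */
    if (nvgpu_mem_begin(g, mem))
            return -ENOMEM;
    nvgpu_mem_wr(g, mem, 0, data);
    nvgpu_mem_end(g, mem);

    /* after: GR buffers always have a kernel mapping, so the
     * bracketing calls go away */
    nvgpu_mem_wr(g, mem, 0, data);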
Change-Id: Id2a8ad108aef8db8b16dce5bae8003bbcd3b23e4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1760599
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Separate HALs added in gv11b and gv100 for the init_gpc_mmu
function.
In the gv11b HAL, RMW mode is enabled for GPU atomics by
default.
In the gv100 HAL, the GPC atomic capability mode is set based
on the FB MMU capability: if the GPU is connected through
NVLINK, the mmu is set to RMW mode, else it is in L2 mode.
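A minimal sketch of the gv100 policy (the helpers and mode
values here are illustrative placeholders, not the real
register fields):

    static void gv100_gr_set_gpc_atomic_mode(struct gk20a *g)
    {
            /* NVLINK-attached GPUs can honor atomics end to
             * end, so use RMW; otherwise resolve them in L2 */
            if (gpu_connected_via_nvlink(g))   /* hypothetical */
                    set_gpc_atomic_mode(g, ATOMIC_MODE_RMW);
            else
                    set_gpc_atomic_mode(g, ATOMIC_MODE_L2);
    }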
Bug 200390336
Change-Id: I224934f83d1762ec864ef8da7265dd01d86893a0
Signed-off-by: Ashish Srivastava <assrivastava@nvidia.com>
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1735137
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
membar.sys synchronizes with the whole system (GPU and CPU),
while membar.gl synchronizes only within the GPU.
On gv11b, fb flush generates a membar.gl instead of a
membar.sys, which is an issue. To fix this, the following WAR
is used:
1. Use the bar1 engine id and bind it to a particular pdb,
2. then, instead of an fb_flush, issue a tlb invalidate of the
bar1 pdb.
Allocation of the vm for the bar1 instance block and the bar1
binding are now done without checking for bar1 support. Only
the bar1 register mapping is done based on bar1 support being
enabled.
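Sketch of the WAR's flush path (simplified; the wrapper name is
assumed, gops.fb.tlb_invalidate() is the existing HAL):

    static int gv11b_mm_fb_flush_war(struct gk20a *g)
    {
            /* a tlb invalidate of the bar1 pdb gives membar.sys
             * semantics, unlike fb_flush which only generates a
             * membar.gl on gv11b */
            return g->ops.fb.tlb_invalidate(g,
                            g->mm.bar1.vm->pdb.mem);
    }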
Bug 2112790
Change-Id: I76f43f1178a68f10823d48bc9da55d2bd686dd52
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1750257
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Add support for aborting runlist(s). Aborting a runlist aborts
all active tsgs, and the active channels within those tsgs.
-Bare channels are no longer supported, so remove recovery
support for bare channels. In case bare channels do exist,
recovery triggers a runlist abort.
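A minimal sketch of the abort walk (simplified, locking and
error handling omitted; structures per gk20a fifo):

    static void abort_runlist_active_tsgs(struct gk20a *g,
                    u32 runlist_id)
    {
            struct fifo_runlist_info_gk20a *runlist =
                    &g->fifo.runlist_info[runlist_id];
            unsigned long tsgid;

            /* aborting a tsg also aborts its active channels */
            for_each_set_bit(tsgid, runlist->active_tsgs,
                            g->fifo.num_channels) {
                    gk20a_fifo_abort_tsg(g, (u32)tsgid, false);
            }
    }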
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: I6bec8a0004508cf65ea128bf641a26bf4c2f236d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640567
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Even if a tsg preempt times out, the teardown sequence by
design requires s/w to issue another preempt.
If recovery includes an ENGINE_RESET, then to avoid race
conditions, use RUNLIST_PREEMPT to kick all work off and cancel
any context load that may be pending. This is also needed to
make sure that no PBDMA serving the engine is loaded when the
engine is reset.
-Add max retries on pre-si platforms for the
runlist-preempt-done polling loop.
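Bounded-poll sketch for pre-si (the retry bound is assumed;
fifo_runlist_preempt_r() is the existing register accessor):

    #define RUNLIST_PREEMPT_MAX_RETRIES_PRE_SI 200U /* assumed */

    static int poll_runlist_preempt_done(struct gk20a *g,
                    u32 runlists_mask)
    {
            u32 retries = RUNLIST_PREEMPT_MAX_RETRIES_PRE_SI;

            do {
                    /* bits self-clear once the preempt is done */
                    if ((gk20a_readl(g, fifo_runlist_preempt_r()) &
                                    runlists_mask) == 0U)
                            return 0;
                    nvgpu_usleep_range(100, 200);
            } while (--retries != 0U);

            return -ETIMEDOUT;
    }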
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: If9d1731fc17e7e7281b24a696ea0917cd269498c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1709902
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-The is_preempt_pending hal does not need the timeout_rc_type
input param: on volta, the reset_eng_bitmask is saved if a
preempt times out, while on legacy chips recovery triggers an
mmu fault and the mmu fault handler takes care of resetting
engines.
-On volta, no special input param is needed to differentiate
between preempt polling in the normal scenario and preempt
polling during recovery. The recovery path uses the
preempt_ch_tsg hal to issue the preempt, and this hal does not
issue recovery if the preempt times out.
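Resulting hal shape (sketch; prototype details assumed):

    /* before */
    int (*is_preempt_pending)(struct gk20a *g, u32 id,
                    unsigned int id_type,
                    unsigned int timeout_rc_type);

    /* after: timeout_rc_type dropped */
    int (*is_preempt_pending)(struct gk20a *g, u32 id,
                    unsigned int id_type);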
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: Ie76a18ae0be880cfbeee615859a08179fb974fa8
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1709799
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For silicon platforms, gk20a_get_gr_idle_timeout() returns
3000 ms, i.e. 3 s. Currently this time is used for polling
each pbdma, eng and runlist, and it conflicts with the channel
timeout if a preempt fails.
Use a 1000 ms timeout for polling preempt completion when
timeouts are enabled; otherwise use gk20a_get_gr_idle_timeout().
For non-silicon platforms, the polling loop depends on a max
number of retries.
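Selection sketch (the wrapper name is hypothetical; the timeout
helpers are the existing ones):

    static u32 preempt_poll_timeout_ms(struct gk20a *g)
    {
            /* 1000 ms keeps preempt polling well under the
             * 3000 ms gr idle timeout, so the channel timeout
             * still fires first if a preempt fails */
            if (nvgpu_is_timeouts_enabled(g))
                    return 1000U;

            return gk20a_get_gr_idle_timeout(g);
    }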
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: Icdb69b7b7d17292f2b6a43f1d8e9d75ff545d0ae
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1739543
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-While polling for eng preempt completion, reset the eng only
if an eng stall intr is pending. Also stop polling for eng
preempt completion if an eng intr is pending.
-Add max retries on pre-si platforms for the pbdma and eng
preempt-done polling loops.
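Adjusted loop-body sketch (the *_pending helpers are
hypothetical placeholders for the engine status/intr reads):

    static int poll_eng_preempt_done(struct gk20a *g,
                    u32 act_eng_id, u32 *reset_eng_bitmask)
    {
            u32 retries = PREEMPT_MAX_RETRIES_PRE_SI; /* assumed */

            do {
                    if (!eng_ctx_preempt_pending(g, act_eng_id))
                            return 0;

                    if (eng_intr_pending(g, act_eng_id)) {
                            /* reset only on a stall intr; stop
                             * polling either way so the intr
                             * can be serviced */
                            if (eng_stall_intr_pending(g, act_eng_id))
                                    *reset_eng_bitmask |=
                                            BIT(act_eng_id);
                            return -EBUSY;
                    }
                    nvgpu_usleep_range(100, 200);
            } while (--retries != 0U);

            return -ETIMEDOUT;
    }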
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: I66b07be9647f141bd03801f83e3cda797e88272f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694137
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In case of an mmu nack error, the interrupt is received twice,
via the SM-reported mmu nack interrupt and via the mmu fault,
in undetermined order. Recover on the first interrupt received,
to avoid the semaphore release, and skip doing a second
recovery.
Also fix a NULL pointer dereference in
gv11b_fifo_reset_pbdma_and_eng_faulted() when the channel
reference is invalid in the teardown path.
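Once-only recovery sketch (the flag name is assumed; the real
change tracks equivalent state on the faulting channel):

    /* whichever interrupt arrives first (SM mmu nack or mmu
     * fault) does the recovery and skips the semaphore release;
     * the second notification becomes a no-op */
    if (!ch->mmu_nack_handled) {
            ch->mmu_nack_handled = true;
            do_recovery(g, ch);     /* hypothetical */
    }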
Bug 200382235
Change-Id: I361a5725d7b6355ebf02b2870727f647fbd7a37e
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1739804
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a remove_gr_sys() op to gpu_ops to reverse the steps done
in create_gr_sysfs().
Make gv11b_tegra_remove() specific to gv11b so that it properly
removes the sysfs nodes; this also allows gv11b-specific remove
steps.
Also, update the platform remove function of the dGPU, i.e.
nvgpu_pci_tegra_remove(), to remove sysfs nodes. This adds
parity with the iGPU platform remove.
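Shape of the gv11b-specific remove (sketch; the op's placement
in gpu_ops and the common fallthrough are assumed):

    static void gv11b_tegra_remove(struct device *dev)
    {
            struct gk20a *g = get_gk20a(dev);

            /* reverse create_gr_sysfs() */
            if (g->ops.gr.remove_gr_sys != NULL)
                    g->ops.gr.remove_gr_sys(g);

            /* common tegra remove steps */
            gk20a_tegra_remove(dev);
    }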
Bug 1987855
Change-Id: Ibbaffac5c24346709347f86444a951461894354d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1735987
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- removed inclusion of linux includes, replacing them with
  nvgpu/*.h's.
- changed the function signatures of
  css_hw_get_pending_snapshot and css_hw_get_overflow_status
  to be global instead of static.
- added get_pending_snapshot and get_overflow_status to
  ops->css.
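Sketch of the visibility change and wiring (return types and
init location assumed):

    /* previously static in css_gr_gk20a.c, now declared in the
     * header for the ops assignment */
    u32 css_hw_get_pending_snapshot(struct gk20a *g);
    u32 css_hw_get_overflow_status(struct gk20a *g);

    /* in the per-chip hal init */
    gops->css.get_pending_snapshot = css_hw_get_pending_snapshot;
    gops->css.get_overflow_status = css_hw_get_overflow_status;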
JIRA: VQRM-3699
Change-Id: I177904c263e143b414924c2c28ad6fd3cfd00132
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1732783
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below HALs to set up the mmu_fault configuration
registers and to read the information registers, and set them
on Volta:
gops.fb.write_mmu_fault_buffer_lo_hi()
gops.fb.write_mmu_fault_buffer_get()
gops.fb.write_mmu_fault_buffer_size()
gops.fb.write_mmu_fault_status()
gops.fb.read_mmu_fault_buffer_get()
gops.fb.read_mmu_fault_buffer_put()
gops.fb.read_mmu_fault_buffer_size()
gops.fb.read_mmu_fault_addr_lo_hi()
gops.fb.read_mmu_fault_inst_lo_hi()
gops.fb.read_mmu_fault_info()
gops.fb.read_mmu_fault_status()
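Shape of one read/write pair (sketch; fb_mmu_fault_status_r()
comes from the gv11b hw headers):

    static u32 gv11b_fb_read_mmu_fault_status(struct gk20a *g)
    {
            return gk20a_readl(g, fb_mmu_fault_status_r());
    }

    static void gv11b_fb_write_mmu_fault_status(struct gk20a *g,
                    u32 reg_val)
    {
            gk20a_writel(g, fb_mmu_fault_status_r(), reg_val);
    }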
Jira NVGPUT-13
Change-Id: Ia99568ff905ada3c035efb4565613576012f5bef
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1744063
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- On t186, the ucode expects a physical address to be
  programmed for the FECS trace buffer.
- On t194, the ucode expects a GPU VA to be programmed for
  the FECS trace buffer. This patch adds extra support to
  handle this change for linux native.
- Increase the size of the FECS trace buffer (a few entries
  were getting dropped due to overflow of the FECS trace
  buffer).
- Move FECS trace buffer handling into the global context
  buffer.
- Add an extra check for the update of the mailbox1 register.
  (Bug 200417403)
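Address-selection sketch (the flag and the va variable are
assumed; nvgpu_mem_get_phys_addr() is the existing helper):

    u64 addr;

    if (nvgpu_is_enabled(g, NVGPU_FECS_TRACE_VA)) /* assumed */
            addr = trace_buf_gpu_va;        /* t194: GPU VA */
    else
            addr = nvgpu_mem_get_phys_addr(g, trace_mem); /* t186 */

    /* addr is then programmed into the context so the ucode
     * can reach the trace buffer */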
EVLR-2077
Change-Id: I7c3324ce9341976a1375e0afe6c53c424a053723
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1536028
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Starting with Volta, one TPC can have more than one SM, so
.record_sm_error_state needs to take the sm number as a
parameter.
The logical tpc id should be read from gr_gpc0_gpm_pd_sm_id_r.
Let the function return the logical sm_id; the RM server needs
it to notify the client.
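Resulting hal shape (sketch; the parameter list is inferred
from the description):

    /* takes the sm index and returns the logical sm_id, which
     * the RM server uses to notify the client */
    u32 (*record_sm_error_state)(struct gk20a *g, u32 gpc,
                    u32 tpc, u32 sm,
                    struct channel_gk20a *fault_ch);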
Jira EVLR-2643
Bug 200405202
Change-Id: Iffaff05b89b1c5058616b8a6bf50dd73bd4e52f6
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1742165
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below new HALs to allocate/map/commit global context
buffers:
gops.gr.alloc_global_ctx_buffers()
gops.gr.map_global_ctx_buffers()
gops.gr.commit_global_ctx_buffers()
Set these HALs for all the supported GPUs.
For now, we re-use the below APIs to set these HALs:
gr_gk20a_alloc_global_ctx_buffers()
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_commit_global_ctx_buffers()
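Wiring sketch (functions and ops straight from the lists above;
the init location is per-chip):

    gops->gr.alloc_global_ctx_buffers =
            gr_gk20a_alloc_global_ctx_buffers;
    gops->gr.map_global_ctx_buffers =
            gr_gk20a_map_global_ctx_buffers;
    gops->gr.commit_global_ctx_buffers =
            gr_gk20a_commit_global_ctx_buffers;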
Jira NVGPUT-27
Change-Id: I975a54e8d1716af057f982d543787748d35a256e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1743362
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The WPR will be divided into several sub-WPRs, one for each
Falcon and one common sub-WPR for sharing between the Falcons
that bootstrap falcons.
- Defined & used the flag NVGPU_SUPPORT_MULTIPLE_WPR to
  indicate multiple-WPR (M-WPR) support.
- Added struct lsfm_sub_wpr to hold the subWPR header info.
- Added struct lsf_shared_sub_wpr_header to hold the subWPR
  info; it is copied to the WPR blob after LSF_WPR_HEADER.
- Set NVGPU_SUPPORT_MULTIPLE_WPR to false for gp106, gv100 &
  gv11b.
- Added methods supporting multiple WPR, called after checking
  the NVGPU_SUPPORT_MULTIPLE_WPR flag in the ucode blob
  preparation flow.
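Gating sketch in the blob preparation flow (the helper and its
argument are hypothetical):

    if (nvgpu_is_enabled(g, NVGPU_SUPPORT_MULTIPLE_WPR))
            lsfm_fill_shared_sub_wpr_headers(g, plsfm);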
JIRA NVGPUTU10X / NVGPUT-99
Change-Id: I81d0490158390e79b6841374158805f7a84ee6cb
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1725369
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add g->fifo_eng_timeout_us to define the engine timeout in
microseconds.
It is initialized with GRFIFO_TIMEOUT_CHECK_PERIOD_US. In the
RM server case, it can be overridden with the value defined in
the device tree.
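Init sketch (the device-tree property name is assumed):

    u32 val;

    g->fifo_eng_timeout_us = GRFIFO_TIMEOUT_CHECK_PERIOD_US;

    /* RM server case: optional device-tree override */
    if (of_property_read_u32(dev->of_node,
                    "fifo-eng-timeout-us", &val) == 0)
            g->fifo_eng_timeout_us = val;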
Jira EVLR-2674
Change-Id: I69ac2ce779fe575566c8ba48e8cd2d0e6b2d93cf
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1728391
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below two new HALs:
gops.fb.enable_hub_intr() to enable hub interrupts
gops.fb.disable_hub_intr() to disable hub interrupts
Set the existing APIs gv11b_fb_enable/disable_hub_intr() as
these HALs, and call the HALs everywhere instead of calling the
APIs directly.
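Wiring plus a typical call site (per the description above):

    /* gv11b hal init */
    gops->fb.enable_hub_intr = gv11b_fb_enable_hub_intr;
    gops->fb.disable_hub_intr = gv11b_fb_disable_hub_intr;

    /* call sites now go through the hal */
    g->ops.fb.enable_hub_intr(g);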
Jira NVGPUT-44
Change-Id: Id299c6d228733ed365a71be6b180186776cc1306
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1725977
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
As part of the MISRA fixes, move all the gating_reglist files
to the common/clock_gating dir, following the suggested new
directory structure.
Also remove the unused gating_reglist files for gk20a.
JIRA NVGPU-646
Change-Id: I388855befcf991ee68eeffed10fe9ac456210649
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1722330
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Code related to the MC module is updated to handle MISRA
violations:
Rule 10.1: Operands shall not be of an inappropriate essential
type.
Rule 10.3: The value of an expression shall not be assigned to
an object with a narrower essential type.
Rule 10.4: Both operands of an operator shall have the same
essential type.
Rule 14.4: The controlling expression of an if statement shall
have essentially Boolean type.
Rule 15.6: Enclose if() sequences with braces.
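A representative 10.x/14.4/15.6 fix in mc code (illustrative;
mc_intr_0_r() and mc_intr_pfifo_pending_f() are existing
register helpers):

    u32 mc_intr_0 = gk20a_readl(g, mc_intr_0_r());

    /* before, not essentially Boolean and unbraced:
     *   if (mc_intr_0 & mc_intr_pfifo_pending_f())
     * after, explicit comparison plus braces: */
    if ((mc_intr_0 & mc_intr_pfifo_pending_f()) != 0U) {
            /* handle the pending fifo interrupt */
    }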
JIRA NVGPU-646
JIRA NVGPU-659
JIRA NVGPU-671
Change-Id: Ia7ada40068eab5c164b8bad99bf8103b37a2fbc9
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1720926
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the NVGPU_IOCTL_CHANNEL_RESCHEDULE_RUNLIST ioctl to
reschedule the runlist, and optionally check host and FECS
status to preempt a pending load, during context switch on the
GR engine, of a context not belonging to the calling channel.
This should be called immediately after a submit to decrease
the worst-case submit-to-start latency for a high-interleave
channel.
There is a less than 0.002% chance that the ioctl blocks for up
to a couple of milliseconds, due to a race condition where the
FECS status changes while being read.
For GV11B it will always preempt a pending load of an unwanted
context, since there is no chance that the ioctl blocks on that
race condition.
Also fix a bug with host reschedule for multiple runlists,
which needs to write both runlist registers.
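Userspace usage sketch (the args struct and flag name are
assumed from this change, not verified against the uapi
header):

    #include <sys/ioctl.h>
    #include <linux/nvgpu.h>

    /* right after a submit, to cut worst-case submit-to-start
     * latency for a high-interleave channel */
    struct nvgpu_reschedule_runlist_args args = {
            .flags = NVGPU_RESCHEDULE_RUNLIST_PREEMPT_NEXT,
    };
    ioctl(ch_fd, NVGPU_IOCTL_CHANNEL_RESCHEDULE_RUNLIST, &args);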
Bug 1987640
Bug 1924808
Change-Id: I0b7e2f91bd18b0b20928e5a3311b9426b1bf1848
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Release the runlist_lock and only then initiate recovery if a
preempt times out. Also do not issue a preempt if the ch, tsg
or runlist id is invalid. The tsgid can be invalid in the call
trace below:
gk20a_prepare_poweroff->gk20a_channel_suspend->
*_fifo_preempt_channel->*_fifo_preempt_tsg
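Ordering sketch (simplified; __locked_fifo_preempt() and the
recover helper are the existing gk20a fifo routines):

    if (tsgid == FIFO_INVAL_TSG_ID)
            return 0;  /* e.g. the poweroff path above */

    nvgpu_mutex_acquire(&runlist->runlist_lock);
    ret = __locked_fifo_preempt(g, tsgid, true);
    nvgpu_mutex_release(&runlist->runlist_lock);

    /* recovery only after the runlist_lock is dropped */
    if (ret != 0)
            gk20a_fifo_recover_tsg(g, tsgid, true);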
Bug 2065990
Bug 2043838
Change-Id: Ia1e3c134f06743e1258254a4a6f7256831706185
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662656
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The semaphore methods currently used on Volta are deprecated
for future chips, and on Volta both the old and the new methods
are supported. So replace the old methods with the new methods
on Volta itself, so that the new methods get tested on silicon.
Implement the below HALs for Volta with the new semaphore
methods:
gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
gops.fifo.get_sema_wait_cmd_size() to get size of acquire command buffer
gops.fifo.get_sema_incr_cmd_size() to get size of release command buffer
Also use the new methods in these APIs:
gv11b_fifo_add_syncpt_wait_cmd()
gv11b_fifo_add_syncpt_incr_cmd()
and change the corresponding APIs to reflect the correct
command buffer sizes:
gv11b_fifo_get_syncpt_wait_cmd_size()
gv11b_fifo_get_syncpt_incr_cmd_size()
Jira NVGPUT-16
Change-Id: Ia3a37cd0560ddb54761dfea9bd28c4384cd8a11c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704518
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below new HALs:
gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
gops.fifo.get_sema_wait_cmd_size() to get size of acquire command buffer
gops.fifo.get_sema_incr_cmd_size() to get size of release command buffer
Separate out a new API, gk20a_fifo_add_sema_cmd(), to implement
the semaphore acquire/release sequence, and set it as
gops.fifo.add_sema_cmd().
Add gk20a_fifo_get_sema_wait_cmd_size() and
gk20a_fifo_get_sema_incr_cmd_size() to return the respective
command buffer sizes.
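Caller-side sketch (gk20a_channel_alloc_priv_cmdbuf() is the
existing allocator; the add_sema_cmd() parameter list is
abridged):

    /* size the priv cmdbuf via the new hals, then emit the
     * acquire or release methods */
    u32 size = acquire ?
            g->ops.fifo.get_sema_wait_cmd_size() :
            g->ops.fifo.get_sema_incr_cmd_size();

    err = gk20a_channel_alloc_priv_cmdbuf(c, size, cmd);
    if (err != 0)
            return err;

    g->ops.fifo.add_sema_cmd(g, sema, va, cmd, 0, acquire, wfi);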
Jira NVGPUT-16
Change-Id: Ia81a50921a6a56ebc237f2f90b137268aaa2d749
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704490
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>