-Enable pd, scc, ds, ssync, mme and sked exceptions.
This will be useful for debugging.
-Handle the enabled interrupts.
-Add a gr op to handle the ssync hww. For legacy
chips, the ssync hww_esr register is gpcs_ppcs_ssync_hww_esr.
Since the ssync hww is not enabled on legacy chips, ssync
hww exception handling is added for Volta only.
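For illustration, a minimal sketch of the kind of gr-op hookup this
describes; the op name, handler shape and the ssync esr accessor are
assumptions, not the actual change:

  /* Sketch: per-chip gr op for ssync hww handling; the register
   * accessor names here are assumed for illustration. */
  static int gr_gv11b_handle_ssync_hww(struct gk20a *g)
  {
      u32 esr = gk20a_readl(g, gr_ssync_hww_esr_r()); /* assumed */

      nvgpu_err(g, "ssync hww esr 0x%08x", esr);
      /* Write back to clear the reported error (write-to-clear). */
      gk20a_writel(g, gr_ssync_hww_esr_r(), esr);
      return 0;
  }

  /* Volta only; legacy chips (esr at gpcs_ppcs_ssync_hww_esr) leave
   * the hook NULL since ssync hww is not enabled there. */
  gops->gr.handle_ssync_hww = gr_gv11b_handle_ssync_hww;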
Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644751
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The message telling the RM server to unbind a channel has to be sent
after the client unbinds the channel and before the client calls tsg
release. The channel has to belong to a tsg on the RM server before
the client submits a runlist to remove the channel; otherwise there's
a bare channel problem.
By moving .tsg_unbind_channel one layer lower, gk20a_tsg_unbind_channel()
becomes a common function for all chips, and it calls tsg release after
calling .tsg_unbind_channel. So vgpu won't need to worry about the tsg
being released before the message is sent to the RM server.
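A rough sketch of the resulting common path; the common unbind work
and locking are elided, so this shows only the ordering:

  /* Sketch: common unbind for all chips. The per-chip hook (vgpu
   * messages the RM server from it) runs before the tsg release. */
  int gk20a_tsg_unbind_channel(struct channel_gk20a *ch)
  {
      struct gk20a *g = ch->g;
      struct tsg_gk20a *tsg = &g->fifo.tsg[ch->tsgid];
      int err = 0;

      /* Client-side unbind: remove the channel from the tsg and
       * runlist (details elided in this sketch). */

      /* Chip-specific hook, e.g. vgpu tells the RM server to unbind. */
      if (g->ops.fifo.tsg_unbind_channel)
          err = g->ops.fifo.tsg_unbind_channel(ch);

      /* Only now drop the tsg reference. */
      nvgpu_ref_put(&tsg->refcount, gk20a_tsg_release);
      return err;
  }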
Bug 200382695
Bug 200382785
Change-Id: I32acc122f3f9d5d0628049ccf673225f9e90c87a
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1645383
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Dump client type/id and protected mode as error prints.
This will help figure out which client is causing the mmu fault.
-Removed the extra print for the unbound instance block fault, as
it is already printed as the fault type.
-Changed a few extra info prints into prints protected by
gpu_dbg_intr.
Change-Id: I9e87e2a701372b47200f85149e040176365bd71c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643817
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Pascal has support for more comptags than Maxwell, but we were using
the gm20b definition of cbc_ctrl on all chips. Specifically, the field
clear_upper_bound is one bit wider on Pascal.
Implement a gp10b version of cbc_ctrl and take it into use on Pascal
and Volta.
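The HAL switch this implies looks roughly like the sketch below; the
function body is elided and only the ops wiring is shown:

  /* Sketch: gp10b variant built against the gp10b hw headers, where
   * the clear_upper_bound field is one bit wider than on gm20b. */
  static int gp10b_ltc_cbc_ctrl(struct gk20a *g, enum gk20a_cbc_op op,
                                u32 min, u32 max)
  {
      /* same sequence as the gm20b version, gp10b field accessors */
      return 0;
  }

  /* Pascal and Volta take the wider variant into use. */
  gops->ltc.cbc_ctrl = gp10b_ltc_cbc_ctrl;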
Bug 200381317
Change-Id: I7d3cb9e92498e08f8704f156e2afb34404ce587e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642574
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gv11b and gm20b gr status reg dumps can get printed so early that
the gpc_tpc_count array is still NULL, so don't access it in that case.
Commit 946f1e6359 ("gpu: nvgpu: don't read
missing gpc_tpc_count in dump") fixed this for gp10b only.
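The guard presumably looks like this sketch (names taken from the
referenced gp10b fix, loop details assumed):

  /* Skip the per-GPC TPC counts if gr init hasn't populated them. */
  if (gr->gpc_tpc_count != NULL) {
      for (gpc = 0; gpc < gr->gpc_count; gpc++)
          gk20a_debug_output(o, "gpc %d tpc count: %d\n",
                             gpc, gr->gpc_tpc_count[gpc]);
  }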
Bug 2049965
Change-Id: I9739fd63b5a153f43000d719a5c509e3be5135cf
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643692
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Lots of code paths were split into T19x-specific code paths and structs
due to the split repository. Now that the repositories are merged, fold
all of them back into the main code paths and structs and remove the
T19x-specific Kconfig flag.
Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added sw method support for SET_BES_CROP_DEBUG4.
In this sw method:
CLAMP_FP_BLEND_TO_MAXVAL forces overflow blend results to clamp
to FP maxval, and CLAMP_FP_BLEND_TO_INF forces overflow blend
results to clamp to INF.
Added support for this sw method in gp10b/gp106/gv11b
and gv100.
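A hedged sketch of the dispatch shape for such a sw method, patterned
on nvgpu's existing sw-method handling; the method define and the op
name are assumptions:

  /* Sketch: sw methods trap into the kernel, which applies the debug
   * register write on behalf of userspace. */
  static int gr_gp10b_handle_sw_method(struct gk20a *g, u32 addr,
                                       u32 class_num, u32 offset, u32 data)
  {
      switch (offset << 2) {
      case NVC097_SET_BES_CROP_DEBUG4:              /* assumed define */
          g->ops.gr.set_bes_crop_debug4(g, data);   /* assumed op */
          return 0;
      }
      return -EINVAL;
  }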
Bug 2046636
Change-Id: I3a9e97587aca76718f7f504ea3b853f87409092a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1641529
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following fb iso register is not valid for gv11b, but the
hw headers still have it. So, remove it manually from the
gating register list:
0x00100D1C
The following sm blcg register is not hooked up correctly
on gv11b. So, remove it manually from the gating register
list as well:
0x00419c84
Once the hw headers are updated, the gating register tool will
automatically remove them from the kernel code.
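For context, the generated gating register lists are arrays of entries
shaped like the sketch below (struct layout assumed from the reglist
files, values illustrative); removing a register means deleting its row:

  /* blcg/slcg gating lists pair each register with its prod and
   * disable values; the two registers above are dropped. */
  struct gating_desc {
      u32 addr;
      u32 prod;
      u32 disable;
  };

  static const struct gating_desc gv11b_blcg_gr[] = {
      /* { .addr = 0x00419c84, ... } removed: not hooked up on gv11b */
      { .addr = 0x00419c8c, .prod = 0x0, .disable = 0x0 }, /* example */
  };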
Bug 2042775
Change-Id: I4839b857656220566e53b66d3aead676893aaa59
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1636787
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a HAL for dumping ctxsw statistics. The statistics are
architecture-dependent, and the function that calls this operation
needs to be moved to gk20a.
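A sketch of the HAL shape; the op field name is inferred from the
description, and the caller is simplified:

  /* Common code calls through the HAL so each arch can dump its own
   * ctxsw statistics registers. */
  void gk20a_gr_dump_ctxsw_stats(struct gk20a *g) /* sketch */
  {
      if (g->ops.gr.dump_ctxsw_stats)
          g->ops.gr.dump_ctxsw_stats(g);
  }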
Bug 1842197
Change-Id: I285c74b8ddc8c7854c85b3fef4cbfc582098919e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1632681
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Upon receiving an MMU_FAULT error, the MMU forwards MMU_NACK to the SM.
If MMU_NACK is masked out, the SM simply releases its semaphores, and
if the semaphores are released before the MMU fault is handled, user
space could incorrectly see that operation as successful.
Fix this by handling the SM-reported MMU_NACK exception:
Enable MMU_NACK reporting in gv11b_gr_set_hww_esr_report_mask.
In the MMU_NACK handling path, we just set the error notifier and clear
the interrupt so that user space sees the error as soon as the
semaphores are released by the SM.
The MMU_FAULT handling path will take care of triggering RC recovery
anyway.
Also add the necessary h/w accessors for mmu_nack.
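A hedged sketch of that handling path; nvgpu_set_error_notifier is a
real helper, while the warp esr accessor names are assumptions:

  /* Sketch: on an SM-reported MMU_NACK, flag the error and clear the
   * interrupt; RC recovery stays in the MMU_FAULT handling path. */
  static void gv11b_gr_handle_mmu_nack(struct gk20a *g,
                                       struct channel_gk20a *fault_ch)
  {
      /* User space sees the error as soon as the SM releases its
       * semaphores. */
      if (fault_ch)
          nvgpu_set_error_notifier(fault_ch,
                  NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT);

      /* Clear the warp esr (write of *_error_none, accessors assumed). */
      gk20a_writel(g, gr_gpc0_tpc0_sm0_hww_warp_esr_r(),
                   gr_gpc0_tpc0_sm0_hww_warp_esr_error_none_f());
  }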
Bug 2040594
Jira NVGPU-473
Change-Id: Ic925c2d3f3069016c57d177713066c29ab39dc3d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1631708
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The runlist event interrupt does not need to be enabled, as
s/w polls for preemption completion for preempts issued via
RUNLIST_PREEMPT. Even though the interrupt is not enabled, its
bit will still get set in the fifo_intr_0 status register whenever
a RUNLIST_PREEMPT completes successfully. Since the interrupt is
disabled it will not trigger a fifo interrupt by itself, but it
will be handled along with the other fifo interrupts whenever a
fifo interrupt is triggered.
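A sketch of the polling side this relies on; the timeout helpers are
nvgpu's, the register accessor is patterned on the gv11b fifo headers,
and runlist_id is assumed to be in scope:

  /* Poll for RUNLIST_PREEMPT completion instead of taking an intr. */
  struct nvgpu_timeout timeout;
  u32 delay = GR_IDLE_CHECK_DEFAULT;
  int ret = -ETIMEDOUT;

  nvgpu_timeout_init(g, &timeout, gk20a_get_gr_idle_timeout(g),
                     NVGPU_TIMER_CPU_TIMER);
  do {
      if ((gk20a_readl(g, fifo_runlist_preempt_r()) &
           BIT(runlist_id)) == 0U) {
          ret = 0; /* preempt bit cleared: preemption done */
          break;
      }
      nvgpu_usleep_range(delay, delay * 2);
  } while (!nvgpu_timeout_expired(&timeout));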
Bug 2039371
Change-Id: I0817c2b6e9f3f14958ca7c738392bc67875be5d5
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1630283
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Used the chip-specific attrib_cb_gfxp_default_size and
attrib_cb_gfxp_size buffer sizes when committing the
global callback buffer when gfx preemption is requested.
These sizes are different for gv11b than for gp10b.
For gp10b, buffer sizes smaller than the values specified in
the hw manuals are used, as per sw requirements.
Also used the gv11b specific preemption related functions
(a sketch of the hookup follows below):
gr_gv11b_set_ctxsw_preemption_mode
gr_gv11b_update_ctxsw_preemption_mode
This is required because the preemption related buffer
sizes are different for gv11b than for gp10b. More optimization
will be done as part of NVGPU-484.
Another issue fixed: the gpu va for preemption buffers
still needs to be 8-bit aligned, even though 49 bits of va are
available now. This is done because of the legacy implementation
of the fecs ucode.
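A hedged sketch of that per-chip hookup; the size getter at the end is
an assumed name, not the actual interface:

  /* gv11b overrides the preemption-mode functions and supplies its
   * own gfxp buffer sizes. */
  gops->gr.set_ctxsw_preemption_mode = gr_gv11b_set_ctxsw_preemption_mode;
  gops->gr.update_ctxsw_preemption_mode =
                  gr_gv11b_update_ctxsw_preemption_mode;

  /* Committing the global callback buffer then reads the sizes
   * through the HAL instead of using gp10b constants. */
  size = g->ops.gr.get_attrib_cb_gfxp_default_size(g); /* assumed */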
Bug 1976694
Change-Id: I2dc923340d34d0dc5fe45419200d0cf4f53cdb23
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1635027
GVS: Gerrit_Virtual_Submit
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit caf168e33e.
It might be causing an intermittent failure in quill-c03 graphics
submit. This is quite odd, since the only change that seems like it
could affect it is the header file update, and that seems rather safe.
Bug 2044830
Change-Id: I14809d4945744193b9c2d7729ae8a516eb3e0b21
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1634349
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Timo Alho <talho@nvidia.com>
Tested-by: Timo Alho <talho@nvidia.com>
Used the chip-specific attrib_cb_gfxp_default_size and
attrib_cb_gfxp_size buffer sizes when committing the
global callback buffer when gfx preemption is requested.
These sizes are different for gv11b than for gp10b.
Also used the gv11b specific preemption related functions:
gr_gv11b_set_ctxsw_preemption_mode
gr_gv11b_update_ctxsw_preemption_mode
This is required because the preemption related buffer
sizes are different for gv11b than for gp10b. More optimization
will be done as part of NVGPU-484.
Another issue fixed: the gpu va for preemption buffers
still needs to be 8-bit aligned, even though 49 bits of va are
available now. This is done because of the legacy implementation
of the fecs ucode.
Bug 1976694
Change-Id: I284e29e0815d205c150998b07d0757b5089d3267
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1630520
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Tested-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
During bringup, and before nvlink is up, GV100 on the DDPX platform
operates with a very, very slow sysmem link. In order to get the
sysmem tests to pass it is necessary to significantly increase most
timeouts by an order of magnitude.
Bug 2040544
Change-Id: I26858afde4ae80c70f86b47cfff674b6b00b5bf8
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1627417
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We send a broadcast request to invoke scrubbing on all TPCs, but we
check only TPC0 for scrubbing to finish. This likely produces correct
results, because each TPC should take exactly the same number of cycles
for scrubbing, but it's not certain.
Change the polling loop to check all TPCs to make sure there are no
timing glitches.
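A sketch of the all-TPC polling loop described above; the scrub-done
check and the timeout constant are assumed names:

  /* Wait for every TPC to report scrub complete, not just TPC0. */
  static int wait_for_scrub_done(struct gk20a *g) /* sketch */
  {
      struct nvgpu_timeout timeout;
      u32 gpc, tpc;

      nvgpu_timeout_init(g, &timeout, ECC_SCRUB_TIMEOUT_MS, /* assumed */
                         NVGPU_TIMER_CPU_TIMER);
      for (gpc = 0; gpc < g->gr.gpc_count; gpc++) {
          for (tpc = 0; tpc < g->gr.gpc_tpc_count[gpc]; tpc++) {
              while (!scrub_done(g, gpc, tpc)) { /* assumed helper */
                  if (nvgpu_timeout_expired(&timeout))
                      return -ETIMEDOUT;
                  nvgpu_udelay(10);
              }
          }
      }
      return 0;
  }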
Change-Id: Id3add77069743890379099a44aec8994f59d9a5e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1625349
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gv11b_set_gpc_tpc_mask(), we calculate tpc_count_mask based on
the number of TPCs.
But since the number of TPCs can be changed at runtime, we could end
up calculating an incorrect tpc_count_mask.
Hence, instead of calculating tpc_count_mask, just hard-code it to
the width of the fuse register, i.e. a 4-bit value.
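In other words (one-line sketch; the old calculation shown in the
comment is inferred from the description):

  /* The fuse register is one bit per TPC and fixed at 4 bits wide,
   * regardless of how many TPCs are currently enabled. */
  u32 tpc_count_mask = 0xf; /* was derived from the runtime tpc count */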
Bug 2031635
Change-Id: Ia6f74d39d066775a5d133897305554df1e54157e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1617917
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Pavan Kunapuli <pkunapuli@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Convert the tpc number from a pes-aware to a non-pes-aware
number. The tpc id is converted to one that is numbered
in order, starting from the active tpcs within PES0,
followed by the active tpcs in subsequent PESs.
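A sketch of that conversion using the per-PES TPC masks; the function
name is hypothetical, the mask fields follow the gr struct:

  /* Renumber a tpc: count active tpcs in all earlier PESs, then the
   * active tpcs that precede this one within its own PES. */
  static u32 tpc_pes_to_flat(struct gr_gk20a *gr, u32 gpc, u32 pes, u32 tpc)
  {
      u32 i, new_id = 0;

      for (i = 0; i < pes; i++)
          new_id += hweight32(gr->pes_tpc_mask[i][gpc]);

      /* Bits below 'tpc' in this PES's mask are earlier active tpcs. */
      new_id += hweight32(gr->pes_tpc_mask[pes][gpc] & (BIT(tpc) - 1));
      return new_id;
  }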
Bug 1842197
Change-Id: I18d4b20ee4998e5a2ca5439793fe2479b4326c1a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1615419
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gv11b_init_fs_state() calls gr_gm20b_init_fs_state(), which
disables vdc_4to2. This should no longer be done on gv11b, so
instead of calling gr_gm20b_init_fs_state(), copy the relevant
lines into gr_gv11b_init_fs_state() and drop the vdc_4to2 disable.
gv11b_ltc_init_fs_state() also disables it to match that state.
Remove that disable, too.
Change-Id: I3a3fd87a3e8836e495cb818570c971b3d29a6dd1
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1619966
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Wei Sun <wsun@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
For gv11b, update the thermal settings as per hw POR:
1. Created a gv11b-specific HAL for init_therm_setup_hw.
2. Updated the gradual slowdown steps to 1x, 1.5x, 2x, 4x, 8x, 16x, 32x.
3. Modified the gradual step duration to 4 cycles.
Bug 200365110
Change-Id: I93c28a3394857aacdf3d304103c9e7c25d4ad344
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1616600
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Check the availability of ecc units by checking the
relevant ecc fuse and the fuse overrides.
During gpu boot, initialize the ecc units by scrubbing each
available ecc unit individually. ECC initialization
should be done before gr initialization.
The following ecc units are scrubbed (a sketch of the scrub
sequence follows the list):
SM LRF
SM L1 DATA
SM L1 TAG
SM CBU
SM ICACHE
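A hedged sketch of a single unit's scrub, assuming a trigger-then-poll
register protocol; every register and field accessor below is an
illustrative assumption:

  /* Scrub one ecc unit: trigger the scrub, then poll for done. */
  static int ecc_scrub_unit(struct gk20a *g, u32 scrub_trigger_f,
                            u32 scrub_done_f)
  {
      struct nvgpu_timeout timeout;
      u32 val;

      val = gk20a_readl(g, gr_pri_gpcs_tpcs_sm_ecc_control_r()); /* assumed */
      gk20a_writel(g, gr_pri_gpcs_tpcs_sm_ecc_control_r(),
                   val | scrub_trigger_f);

      nvgpu_timeout_init(g, &timeout, ECC_SCRUB_TIMEOUT_MS, /* assumed */
                         NVGPU_TIMER_CPU_TIMER);
      do {
          val = gk20a_readl(g, gr_pri_gpcs_tpcs_sm_ecc_status_r());
          if (val & scrub_done_f)
              return 0;
      } while (!nvgpu_timeout_expired(&timeout));

      return -ETIMEDOUT;
  }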
Bug 200339497
Change-Id: I54bf8cc1fce639a9993bf80984dafc28dca0dba3
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1612734
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Pre-gv11b we had only 2 TPCs per GPC, but on gv11b we have 4 TPCs per
GPC. Hence, update gr_gv11b_set_gpc_tpc_mask() for the new
configuration and allow setting bits based on the number of TPCs.
Bug 2031635
Change-Id: I44f5f6ce5f3e2501c229c9fcda36fb330ebf8bd0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1614044
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use NVGPU_DMA_FORCE_CONTIGUOUS for the non-wpr blob alloc.
The CPU writes some data to the non-WPR blob (sysmem). The ACR binary
executing from the PMU first copies that data to DMEM and then copies
it into the WPR.
Without NVGPU_DMA_FORCE_CONTIGUOUS, secure boot fails due to the ACR
writing wrong bootloader data to PMU DMEM.
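The allocation presumably changes along these lines
(nvgpu_dma_alloc_flags_sys is nvgpu's sysmem allocator taking DMA
flags; the size variable and blob target are illustrative):

  /* Force a physically contiguous sysmem allocation so the ACR's DMA
   * from the non-wpr blob sees the data the CPU wrote. */
  err = nvgpu_dma_alloc_flags_sys(g, NVGPU_DMA_FORCE_CONTIGUOUS,
                                  nonwpr_blob_size, &acr->ucode_blob);
  if (err)
      return err;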
Bug 200355756
Change-Id: I18982caff62b2e7cbe64ea98c1bb935496cfe91c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1610491
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add an alignment check for compressible-kind fixed-address
mappings. If we're using a page size smaller than the comptag-line
coverage window, the GPU VA and the physical buffer offset must be
aligned with respect to that window.
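A sketch of such a check, under the assumption that "aligned with
respect to the window" means both values are multiples of the window
size; the window computation and variable names are illustrative:

  /* Reject fixed-address compressible mappings whose GPU VA or
   * physical offset is misaligned w.r.t. the comptag coverage window. */
  u64 window = (u64)g->gr.comptags_per_cacheline *
               g->gr.cacheline_size; /* sketch of the window size */

  if (page_size < window &&
      ((map_addr & (window - 1ULL)) || (phys_offset & (window - 1ULL)))) {
      nvgpu_err(g, "misaligned fixed-address compressible mapping");
      return -EINVAL;
  }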
Bug 1995897
Bug 2011640
Bug 2011668
Change-Id: If68043ee2828d54b9398d77553d10d35cc319236
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1606439
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>