The GV100 ucode has been changed so that it expects the LIST_nv_perf_pma_ctx_reg
list in the ctxsw buffer to be 256-byte aligned, but the same change has not
been applied to the ucodes of other chips.
Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register list, and
define a common implementation gr_gk20a_add_ctxsw_reg_perf_pma() for all chips
except GV100.
Define a separate GV100 implementation, gr_gv100_add_ctxsw_reg_perf_pma(), and
apply the required alignment in this function.
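A minimal sketch of the GV100-specific alignment step, assuming a simple
running-offset layout; the function name, signature, and rounding helper are
illustrative only and not taken from the driver:

    #include <stdint.h>

    #define PMA_LIST_ALIGN 256U   /* alignment the GV100 ucode expects */

    /* Round the current ctxsw-buffer offset up before laying out the
     * LIST_nv_perf_pma_ctx_reg entries, then account for the entries. */
    static uint32_t gv100_add_ctxsw_reg_perf_pma_sketch(uint32_t offset,
                                                        uint32_t num_entries)
    {
        offset = (offset + PMA_LIST_ALIGN - 1U) & ~(PMA_LIST_ALIGN - 1U);
        return offset + num_entries * (uint32_t)sizeof(uint32_t);
    }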
Bug 1998067
Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676655
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For LIST_nv_pm_fbpa_ctx_regs, we currently call
add_ctxsw_buffer_map_entries_subunits() to add registers corresponding
to all the FBPAs.
But while configuring the total number of registers, we do not account for
floorswept FBPAs, and that causes misalignment of subsequent lists on GV100.
Fix this by reading the disabled/floorswept FBPAs from fuse and considering
only the active FBPAs on GV100.
Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this, and define a common
implementation gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips except GV100.
Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the
implementation described above, which takes floorsweeping into account.
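A hedged sketch of the counting step, assuming the fuse exposes a bitmask of
disabled FBPAs; the helper name and mask convention are assumptions, not the
fuse API used by the driver:

    #include <stdint.h>

    /* A set bit in the floorsweep mask marks a disabled FBPA; only the
     * remaining units contribute registers to LIST_nv_pm_fbpa_ctx_regs. */
    static uint32_t count_active_fbpas_sketch(uint32_t max_fbpas,
                                              uint32_t floorswept_mask)
    {
        uint32_t i, active = 0;

        for (i = 0; i < max_fbpas; i++) {
            if ((floorswept_mask & (1U << i)) == 0U)
                active++;
        }
        return active;   /* registers added = regs_per_fbpa * active */
    }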
Bug 1998067
Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
/sys/devices/gpu.0/gfxp_wfi_timeout_unit
usec   - microseconds
sysclk - GPU clock count
Treat gr_fe_gfxp_wfi_timeout_r as a context-switched register on gv11b.
Set the default gfxp_wfi_timeout to 100 usec to match gp10b at 1 GHz.
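For reference, a small illustration (not taken from the driver) of how the two
units relate: a microsecond value maps to clock cycles through the current
sysclk rate, so 100 usec at a 1 GHz sysclk is 100000 cycles.

    #include <stdint.h>

    /* usec -> sysclk cycles at a given clock rate. */
    static uint32_t gfxp_wfi_usec_to_sysclk_sketch(uint32_t usec,
                                                   uint64_t sysclk_hz)
    {
        return (uint32_t)(((uint64_t)usec * sysclk_hz) / 1000000ULL);
    }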
bug 1888344
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Change-Id: I7fa64ce6912ae861244856807543b17bd7a26bed
Reviewed-on: https://git-master.nvidia.com/r/1651517
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Many code paths were split into T19x-specific code paths and structs
due to the split repository. Now that the repositories are merged, fold all of
them back into the main code paths and structs, and remove the T19x-specific
Kconfig flag.
Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On gv11b we can have multiple SMs per TPC. Add sm_per_tpc to the
vgpu constants to properly dimension the virtual SM to TPC/GPC
mapping in the virtualization case.
Use TEGRA_VGPU_CMD_GET_SMS_MAPPING to query the current mapping.
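A rough sketch of the dimensioning concern, with an illustrative mapping entry
type; the struct layout and allocation helper are assumptions rather than the
vgpu code:

    #include <stdint.h>
    #include <stdlib.h>

    struct vgpu_sm_map_entry_sketch {
        uint32_t gpc_index;
        uint32_t tpc_index;
        uint32_t sm_index;
    };

    /* With sm_per_tpc > 1 the table needs one entry per SM, not one per
     * TPC, so sm_per_tpc must be part of the sizing. */
    static struct vgpu_sm_map_entry_sketch *
    alloc_sm_map_sketch(uint32_t gpc_count, uint32_t max_tpc_per_gpc,
                        uint32_t sm_per_tpc)
    {
        size_t entries = (size_t)gpc_count * max_tpc_per_gpc * sm_per_tpc;

        return calloc(entries, sizeof(struct vgpu_sm_map_entry_sketch));
    }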
Bug 2039676
Change-Id: I817be18f9a28cfb9bd8af207d7d6341a2ec3994b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1631203
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Used chip-specific attrib_cb_gfxp_default_size and
attrib_cb_gfxp_size buffer sizes when committing the
global callback buffer when gfx preemption is requested.
These sizes are different for gv11b than for gp10b.
For gp10b, used smaller buffer sizes than the value specified
in the hw manuals, as per the sw requirement.
Also used gv11b-specific preemption-related functions:
gr_gv11b_set_ctxsw_preemption_mode
gr_gv11b_update_ctxsw_preemption_mode
This is required because the preemption-related buffer
sizes are different for gv11b than for gp10b. More optimization
will be done as part of NVGPU-484.
Another issue fixed: the gpu va for preemption buffers
still needs to be 8-bit aligned, even though 49 bits are
available now. This is done because of the legacy implementation
of the fecs ucode.
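A minimal sketch of routing the two sizes through per-chip function pointers
instead of hard-coding gp10b values; the ops struct and field names are
placeholders, not the nvgpu HAL definitions:

    #include <stdint.h>

    struct gr_gfxp_sizes_ops_sketch {
        uint32_t (*attrib_cb_gfxp_default_size)(void);
        uint32_t (*attrib_cb_gfxp_size)(void);
    };

    /* When gfx preemption is requested, fetch both sizes from the
     * per-chip ops so gv11b and gp10b can return different values. */
    static void get_attrib_cb_gfxp_sizes_sketch(
            const struct gr_gfxp_sizes_ops_sketch *ops,
            uint32_t *default_size, uint32_t *gfxp_size)
    {
        *default_size = ops->attrib_cb_gfxp_default_size();
        *gfxp_size = ops->attrib_cb_gfxp_size();
    }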
Bug 1976694
Change-Id: I2dc923340d34d0dc5fe45419200d0cf4f53cdb23
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1635027
GVS: Gerrit_Virtual_Submit
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit caf168e33e.
It might be causing intermittent failures in quill-c03 graphics submit. This is
odd, since the only change that seems like it could affect it is the
header file update, and that seems rather safe.
Bug 2044830
Change-Id: I14809d4945744193b9c2d7729ae8a516eb3e0b21
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1634349
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Timo Alho <talho@nvidia.com>
Tested-by: Timo Alho <talho@nvidia.com>
Used chip-specific attrib_cb_gfxp_default_size and
attrib_cb_gfxp_size buffer sizes when committing the
global callback buffer when gfx preemption is requested.
These sizes are different for gv11b than for gp10b.
Also used gv11b-specific preemption-related functions:
gr_gv11b_set_ctxsw_preemption_mode
gr_gv11b_update_ctxsw_preemption_mode
This is required because the preemption-related buffer
sizes are different for gv11b than for gp10b. More optimization
will be done as part of NVGPU-484.
Another issue fixed: the gpu va for preemption buffers
still needs to be 8-bit aligned, even though 49 bits are
available now. This is done because of the legacy implementation
of the fecs ucode.
Bug 1976694
Change-Id: I284e29e0815d205c150998b07d0757b5089d3267
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1630520
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Tested-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The variable g->gr.ctx_vars.regs_base_index is declared as "int", but it is
assigned a value from an unsigned int pointer.
Since we expect it to be unsigned everywhere, declare it as "u32" instead
of "int".
Jira NVGPU-449
Change-Id: I2a5b35698c655fa0caa3e38e37ed4d84569c996a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1612446
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently store the dmabuf fd and dma_buf pointer in gk20a_cs_snapshot_client.
But since dma_buf and all related APIs are Linux specific, we need to remove
them from common code and move them to Linux-specific code.
Add a new Linux-specific structure, gk20a_cs_snapshot_client_linux, which
includes struct gk20a_cs_snapshot_client and the Linux-specific dma_buf pointer.
In gk20a_attach_cycle_stats_snapshot(), first handle all dma_buf-related
operations and then call gr_gk20a_css_attach().
Move gk20a_channel_free_cycle_stats_snapshot() to ioctl_channel.c.
In gk20a_channel_free_cycle_stats_snapshot(), call gr_gk20a_css_detach()
and then free the dma_buf in Linux-specific code.
We also need to call gk20a_channel_free_cycle_stats_snapshot() while closing
the channel, so call it from the Linux-specific nvgpu_channel_close_linux().
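A sketch of the wrapping pattern described above, assuming the common struct
is embedded first so the Linux code can recover its wrapper; field names and
the placement of the fd are illustrative:

    struct dma_buf;                        /* opaque Linux type */

    struct gk20a_cs_snapshot_client_sketch {
        unsigned int perfmon_start;        /* placeholder common state */
        unsigned int perfmon_count;
    };

    struct gk20a_cs_snapshot_client_linux_sketch {
        /* common part first, so the Linux layer can convert between
         * the two views with a simple cast/container_of */
        struct gk20a_cs_snapshot_client_sketch cs_client;

        unsigned int dmabuf_fd;            /* Linux-only state */
        struct dma_buf *dma_handler;
    };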
Jira NVGPU-397
Jira NVGPU-415
Change-Id: Ida27240541f6adf31f28d7d7ee4f51651c6d3de2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1603908
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch removes the dependency on the header file "uapi/linux/nvgpu.h"
for regops_gk20a.c. The original structure and definitions in
uapi/linux/nvgpu.h are maintained for userspace (libnvrm_gpu.h). The
following changes are made in this patch:
1) Defined common versions of the NVGPU_DBG_GPU_REG_OP* definitions inside
regops_gk20a.h.
2) Defined a common version of struct nvgpu_dbg_gpu_reg_op inside
regops_gk20a.h, naming it struct nvgpu_dbg_reg_op.
3) Constructed APIs to convert the NVGPU_DBG_GPU_REG_OP* definitions from
the Linux versions to the common versions and vice versa.
4) Constructed APIs to convert from struct nvgpu_dbg_gpu_reg_op to
struct nvgpu_dbg_reg_op and vice versa (see the sketch after this list).
5) The ioctl handler nvgpu_ioctl_channel_reg_ops first copies from
userspace into local storage based on struct nvgpu_dbg_gpu_reg_op, converts
that into struct nvgpu_dbg_reg_op using the APIs above, and after executing
the regops handler passes the data back to userspace by copying it from
struct nvgpu_dbg_reg_op back to struct nvgpu_dbg_gpu_reg_op.
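A hedged sketch of the conversion in (3)/(4); the field list is abbreviated
and the layouts are illustrative, not copied from the driver headers:

    #include <stdint.h>

    struct nvgpu_dbg_gpu_reg_op_sketch {   /* Linux/uapi-facing layout */
        uint8_t  op;
        uint8_t  status;
        uint32_t offset;
        uint32_t value_lo;
    };

    struct nvgpu_dbg_reg_op_sketch {       /* common-code layout */
        uint8_t  op;
        uint8_t  status;
        uint32_t offset;
        uint32_t value_lo;
    };

    /* The real converters would also remap the NVGPU_DBG_GPU_REG_OP*
     * values to their common counterparts; a plain field copy stands in
     * for that translation here. */
    static void reg_op_linux_to_common_sketch(
            const struct nvgpu_dbg_gpu_reg_op_sketch *in,
            struct nvgpu_dbg_reg_op_sketch *out)
    {
        out->op = in->op;
        out->status = in->status;
        out->offset = in->offset;
        out->value_lo = in->value_lo;
    }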
JIRA NVGPU-417
Change-Id: I23bad48d2967a629a6308c7484f3741a89db6537
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596972
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We use the Linux-specific graphics/compute preemption modes defined in the
uapi header (of the form below) all over common code:
NVGPU_GRAPHICS_PREEMPTION_MODE_*
NVGPU_COMPUTE_PREEMPTION_MODE_*
Since common code should be independent of Linux-specific code, define new
modes of the following form in common code and use them everywhere:
NVGPU_PREEMPTION_MODE_GRAPHICS_*
NVGPU_PREEMPTION_MODE_COMPUTE_*
Add the required parser functions to convert the two sets of modes into each
other.
For the Linux IOCTL NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, we need to convert
the Linux-specific modes into common modes before passing them to common code.
And to pass GPU characteristics to user space, we need to first convert the
common modes into Linux-specific modes and then pass them to user space.
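A sketch of one direction of the translation; the flag values and the exact
mode name are assumptions, only the two naming schemes come from the message
above:

    #include <stdint.h>

    #define NVGPU_GRAPHICS_PREEMPTION_MODE_GFXP_SKETCH  (1U << 1)  /* uapi */
    #define NVGPU_PREEMPTION_MODE_GRAPHICS_GFXP_SKETCH  (1U << 1)  /* common */

    /* Linux -> common; a similar helper would exist for the reverse
     * direction used when filling GPU characteristics. */
    static uint32_t graphics_preempt_mode_to_common_sketch(uint32_t linux_mode)
    {
        switch (linux_mode) {
        case NVGPU_GRAPHICS_PREEMPTION_MODE_GFXP_SKETCH:
            return NVGPU_PREEMPTION_MODE_GRAPHICS_GFXP_SKETCH;
        default:
            return 0U;   /* unknown mode: reject in the ioctl handler */
        }
    }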
Jira NVGPU-392
Change-Id: I8c62c6859bdc1baa5b44eb31c7020e42d2462c8c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596930
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Add an update_patch_count parameter to the ctx_patch_write_begin/end functions.
If true, the main_image_patch_count register will be updated. Previously,
the patch count would be updated if the cpu_va for the graphics context
was non-NULL, but this only works for sysmem (cpu_va is always 0 for vidmem).
- Remove the unused patch parameter from the commit_global_timeslice functions.
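A tiny sketch of the behavioural point, using a stand-in context struct; the
field names are placeholders and not the gr_gk20a definitions:

    #include <stdbool.h>
    #include <stdint.h>

    struct gr_ctx_sketch {
        uint32_t data_count;               /* queued patch entries */
        uint32_t main_image_patch_count;   /* stand-in for the ctx header field */
    };

    /* The explicit flag replaces the old "cpu_va != NULL" test, which is
     * never true for vidmem-backed contexts. */
    static void ctx_patch_write_end_sketch(struct gr_ctx_sketch *ctx,
                                           bool update_patch_count)
    {
        if (update_patch_count)
            ctx->main_image_patch_count = ctx->data_count;
    }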
JIRA ESRM-74
Bug 2012077
Change-Id: I35d0a9eb48669a227833bba1d2e63e9fe8fd8aa9
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
max_css_buffer_size was accessed directly from the GPU characteristics,
which added a dependency on Linux. Move the field to gr_gk20a and
copy it to the GPU characteristics at query time.
JIRA NVGPU-259
Change-Id: Ied19e33bf1a79a9ce45e33df57fe5bbe3a3c4f9d
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593689
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
All the NVGPU_IOCTL_CHANNEL_EVENT_ID_* event ids are defined in the
Linux-specific user header uapi/linux/nvgpu.h and can't be used in common
code.
Hence, add new definitions of the form NVGPU_EVENT_ID_* for all the events
in common code and use them wherever required in common code.
For future additions of event ids, we need to update both the
NVGPU_IOCTL_CHANNEL_EVENT_ID_* and NVGPU_EVENT_ID_* fields.
Also add a new API, nvgpu_event_id_to_ioctl_channel_event_id(), to convert a
common event_id of the form NVGPU_EVENT_ID_* to the Linux-specific event_id
of the form NVGPU_IOCTL_CHANNEL_EVENT_ID_*.
Use this API in gk20a_channel/tsg_event_id_post_event() to get the correct
event_id.
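A hedged sketch of the conversion helper; the single event shown and its
numeric value are placeholders, and the real mapping covers every event id:

    #include <stdint.h>

    enum { NVGPU_EVENT_ID_BPT_INT_SKETCH = 0 };                /* common */
    enum { NVGPU_IOCTL_CHANNEL_EVENT_ID_BPT_INT_SKETCH = 0 };  /* uapi */

    /* Common -> Linux ioctl event id, used just before posting the event
     * to user space. */
    static int32_t event_id_to_ioctl_event_id_sketch(uint32_t event_id)
    {
        switch (event_id) {
        case NVGPU_EVENT_ID_BPT_INT_SKETCH:
            return NVGPU_IOCTL_CHANNEL_EVENT_ID_BPT_INT_SKETCH;
        default:
            return -1;   /* unknown id: do not post */
        }
    }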
Jira NVGPU-259
Change-Id: I15a7f41181fdbb8f1876f88bbcd044447d88325f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1591434
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use the nvgpu_alloc_obj_ctx_args structure in Linux-specific code only.
Pass the fields of the structure as separate arguments to all common
functions.
gr_ops_gp10b.h referred to the struct, but the file is not used anywhere,
so delete it.
JIRA NVGPU-259
Change-Id: Idba78d48de1c30f205a42da2fe47a9f8c03735f1
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1586563
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a sysfs node to allow the root user to set PRI_FE_GFXP_WFI_TIMEOUT, for
gp10b only, in units of sysclk cycles. Store the set value in a variable, and
write it to the register after the GPU is un-railgated.
NV_PGRAPH_PRI_FE_GFXP_WFI_TIMEOUT is engine_reset after Bug 1623341.
Change the default value to be specified in cycles rather than time. This value
is approximately the value in cycles that was previously calculated on each boot.
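A sketch of the store-then-reapply pattern, assuming a register write callback;
the variable, the accessor, and the exact un-railgate hook are placeholders:

    #include <stdint.h>

    /* Value last written through the sysfs node, in sysclk cycles. */
    static uint32_t gfxp_wfi_timeout_count_sketch;

    /* Called from the un-railgate path: the register loses its contents
     * across engine reset / railgating, so the cached value is replayed. */
    static void reapply_gfxp_wfi_timeout_sketch(
            void (*write_timeout_reg)(uint32_t cycles))
    {
        write_timeout_reg(gfxp_wfi_timeout_count_sketch);
    }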
Bug 1932782
Change-Id: I0a4207e637cd1413a1be95abe2bcce3adccf76fa
Reviewed-on: https://git-master.nvidia.com/r/1540939
Signed-off-by: Jonathan McCaffrey <jmccaffrey@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1580999
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the functions that query preemption type names to their user,
ioctl_channel.c. This removes a dependency on
<uapi/linux/nvgpu.h> from gr_gk20a.h.
JIRA NVGPU-259
Change-Id: I6cafda986eb4659fcfc1b19eac77e43aaaeaec76
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1577248
The patch buffer can hold 128 u32 entries. Each patch write
takes a total of 2 u32 entries: 1 u32 for the address and 1 u32
for the data. Ideally, 64 patch writes could be issued before the buffer
overflows. The driver patches some things when creating the channel,
and later when the context switch type is changed after the channel is loaded.
Reset patch_ctx.data_count before beginning a patch
write; otherwise the system might not be in a state to accept all
patch writes, even if the patch buffer has valid entries.
If the patch buffer has non-zero entries, the patch buffer
is read and all pri writes are sent out. Once done,
the ucode updates the main header patch buffer count to 0.
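The arithmetic and the reset, sketched with a stand-in patch context; the
constants restate the numbers above and the struct is illustrative:

    #include <stdint.h>

    #define PATCH_BUF_U32_ENTRIES 128U                                  /* capacity */
    #define PATCH_WRITE_U32S      2U                                    /* addr + data */
    #define MAX_PATCH_WRITES (PATCH_BUF_U32_ENTRIES / PATCH_WRITE_U32S) /* 64 */

    struct patch_ctx_sketch {
        uint32_t data_count;   /* patch writes queued since the last begin */
    };

    /* Per the fix above: start each patch sequence with a clean count. */
    static void patch_write_begin_sketch(struct patch_ctx_sketch *p)
    {
        p->data_count = 0U;
    }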
Without this fix, the priv errors below are seen on t186 platforms:
SYS Write error for ADR 0, INFO 0d000200 and CODE badf1100
Error info decodes as:
NV_PPRIV_SYS_PRIV_ERROR_INFO R[0x00122128]
SUBID [29:24] 13 (?)
LOCAL_ORDERING [22:22] 0 (I)
PRIV_LEVEL [21:20] 0 (I)
SENDING_RS [17:12] 0 (I)
PENDING [ 9: 9] 1 (?)
ORPHAN [ 8: 8] 0 (I)
PRIV_MASTER [ 5: 0] 0 (I)
The ctxsw ucode (subid 13, i.e. 0xd) makes only a few pri transactions
at priv level 0; patch buffer pri writes are among them.
Bug 200350539
Change-Id: If9e71b5fef4d85600d72a8a633a082d9261c3e1b
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1581591
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move most of the dma_buf usage present in the mm_gk20a.c code
out to Linux-specific code and some common/mm code. There are
two primary groups of code:
1. dma_buf priv field code (for holding comptag data)
2. Comptag usage that relies on dma_buf pointers
For (1), the dma_buf code was simply moved to common/linux/dmabuf.c
since most of this code is clearly Linux specific.
The comptag code was a bit more complicated since there are two
parts to it. First, there is the code that manages
the comptag memory. This is essentially a simple allocator. It
was moved to common/mm/comptags.c since it can be shared across
all chips. The second set of code was moved to
common/linux/comptags.c since it is the interface between dma_bufs
and the comptag memory.
Two other fixes were done as well:
- Add struct gk20a to the comptag allocator init so that
the proper nvgpu_vzalloc() function could be used.
- Add necessary includes to common/linux/vm_priv.h.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566628
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Initialize the following counters in the context header
for all legacy chips:
ctxsw_prog_main_image_num_save_ops
ctxsw_prog_main_image_num_restore_ops
This was already present in the code, but move it to a function,
gk20a_gr_init_ctxsw_hdr_data(), so that it can be re-used across
chips.
Additionally, initialize the following preemption-related counters
in the context header for gp10b onwards:
ctxsw_prog_main_image_num_wfi_save_ops
ctxsw_prog_main_image_num_cta_save_ops
ctxsw_prog_main_image_num_gfxp_save_ops
ctxsw_prog_main_image_num_cilp_save_ops
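A sketch of the shared-helper idea with a stand-in header struct; the real
code writes these fields into the ctxsw image through mem accessors, which
are omitted here:

    #include <stdint.h>

    struct ctxsw_hdr_counters_sketch {
        uint32_t num_save_ops;
        uint32_t num_restore_ops;
    };

    /* Common for legacy chips; a gp10b+ variant would call this first and
     * then also clear the wfi/cta/gfxp/cilp save-op counters. */
    static void init_ctxsw_hdr_data_sketch(struct ctxsw_hdr_counters_sketch *hdr)
    {
        hdr->num_save_ops = 0U;
        hdr->num_restore_ops = 0U;
    }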
Bug 1958308
Change-Id: I0e45ec718a8f9ddb951b52c92137051b4f6a8c60
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1562654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
We currently remove a channel from the TSG list and disable all the channels
in the TSG while removing a channel from the TSG.
With this sequence, if any one channel in a TSG is closed, the rest of the
channels are marked as timed out and cannot be used anymore.
We need to fix this sequence as below to allow removing a channel from an
active TSG so that the rest of the channels can still be used:
- disable all channels of the TSG
- preempt the TSG
- check if CTX_RELOAD is set, if support is available;
  if CTX_RELOAD is set on the channel, it should be moved to some other channel
- check if FAULTED is set, if support is available
- if NEXT is set on the channel, it means the channel is still active;
  print an error in this case for the time being, until it is properly handled
- remove the channel from the runlist
- remove the channel from the TSG list
- re-enable the rest of the channels in the TSG
- clean up the channel (same as for regular channels)
Add the fifo operations below to support checking channel status:
g->ops.fifo.tsg_verify_status_ctx_reload
g->ops.fifo.tsg_verify_status_faulted
Define the ops.fifo.tsg_verify_status_ctx_reload operation for gm20b/gp10b/gp106
as gm20b_fifo_tsg_verify_status_ctx_reload().
This API checks whether the channel being released has CTX_RELOAD set; if yes,
CTX_RELOAD needs to be moved to some other channel in the TSG (see the sketch
below).
Remove static from channel_gk20a_update_runlist() and export it.
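A hedged sketch of the CTX_RELOAD hand-off; the status bit, the channel type,
and the reload-forcing callback are placeholders for the real channel status
read and hardware programming:

    #include <stdbool.h>
    #include <stddef.h>

    struct channel_sketch {
        int chid;
        bool ctx_reload_set;   /* stand-in for the HW channel status bit */
    };

    /* If the channel leaving the TSG owns CTX_RELOAD, hand it to a
     * channel that stays bound so the TSG context can still be reloaded
     * on the next context switch. */
    static void tsg_verify_status_ctx_reload_sketch(
            struct channel_sketch *leaving, struct channel_sketch *remaining,
            void (*force_ctx_reload)(struct channel_sketch *ch))
    {
        if (leaving->ctx_reload_set && remaining != NULL)
            force_ctx_reload(remaining);
    }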
Bug 200327095
Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560637
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add lock_down_sm and wait_for_sm_lock_down gr ops.
These are required to support multiple SMs and the t19x SM register
address changes.
JIRA GPUT19X-75
Change-Id: I529babde51d9b2143fe3740a4f67c582b7eb404b
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master/r/1514042
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This is required to take care of the t19x changes to support
multiple SMs.
JIRA GPUT19X-75
Change-Id: Ifd2cb28ae442462fef1d2c4439baa817f00c2c9e
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master/r/1514041
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This is required to support multiple SMs and the t19x
SM register address changes.
JIRA GPUT19X-75
Change-Id: I844b5cf02a75ba397891a1100d917875e5a3e181
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master/r/1512217
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
This is required to support multiple SMs and the t19x
SM register address changes.
JIRA GPUT19X-75
Change-Id: If8805bcc042c75ea70c1689306feb3c8bf011655
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master/r/1512216
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
This is required to support multiple SMs and the t19x
SM register address changes.
JIRA GPUT19X-75
Change-Id: Icdae3b6ed67a3d3deeb17f29528184b2d7a70af5
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master/r/1512215
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
This is required to support multiple SMs and the t19x
SM register address changes.
JIRA GPUT19X-75
Change-Id: Id104f611736535874cdaa5a2f768f692d799c2c5
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master/r/1512214
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>