- Created "mem_unlock" HAL under fb to support memory
unlock
- Called as part of gk20a_finalize_poweron() if memory unlock
support needed by checking HAL
- Assigned "mem_unlock" HAL to NULL for chips which don't
need memory unlocks.
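A minimal sketch of the conditional call, assuming the HAL pointer is
named g->ops.fb.mem_unlock:

    /* in gk20a_finalize_poweron(); HAL pointer name assumed */
    if (g->ops.fb.mem_unlock != NULL) {
        err = g->ops.fb.mem_unlock(g);
        if (err != 0)
            return err;
    }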
Change-Id: I68d0910f15d293feaacfcbf6bd17ecccd3b5219d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
(cherry picked from commit 586894eb84860bbbe4c75dae4715bdf27432a480)
Reviewed-on: https://git-master.nvidia.com/r/1564703
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added "nvgpu_flacon nvdec_flcn" member to gk20a
- Added base address & flacon id of NVDEC falcon
- Included nvdec falcon to access common falcon code
- Enabled nvdec falcon support for GP106
- Disabled nvdec falcon support for iGPU
- Made call to enable nvdec falcon support if supported
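A sketch of the support hook, assuming the common falcon init helper
nvgpu_flcn_sw_init() and the FALCON_ID_NVDEC define:

    /* during poweron; helper and define names assumed */
    err = nvgpu_flcn_sw_init(g, FALCON_ID_NVDEC);
    if (err != 0)
        nvgpu_err(g, "failed to init NVDEC falcon sw state");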
Change-Id: Ia928d082275a720e4e8c6852384e489c8ec444f8
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
(cherry picked from commit 3d80aeff295bad8365af6022555ad151f1a32cf6)
Reviewed-on: https://git-master.nvidia.com/r/1564305
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the sched parameter APIs to a Linux-specific implementation. At
the same time, move the sched_ctrl fields to nvgpu_os_linux.
JIRA NVGPU-259
Change-Id: I2397e2602e1c4783f2bebf3aec462634b7f86d4a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1580649
GVS: Gerrit_Virtual_Submit
Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This
removes some usages of dma_buf from the mm_gk20a.c code, too, which
helps make mm_gk20a.c less Linux specific.
Also drop some Linux-specific header includes that are no longer
necessary in gk20a/mm_gk20a.c. The mm_gk20a.c code is now quite close
to being Linux free.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566629
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
linux/debugfs.h was included in gk20a.h because of the debugfs entry
bios_blob, which can be used for checking the contents of the VBIOS.
That entry has never been used, so instead of abstracting it, this
patch removes the feature altogether.
Two files were using debugfs but did not #include <linux/debugfs.h>.
They failed to build once gk20a.h no longer #includes it, so the
explicit #include was added to them.
JIRA NVGPU-259
Change-Id: Ie1ea9be1a8920441b1616f34e64e505e6e10e38c
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1570404
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a.h we use the multichar constants 'DONE', '0R32' and '0W32',
but these constants fail to compile on some cross-OS compilers.
Remove them by building the values through a new MULTICHAR_TAG() macro
instead.
All the user-space components also form these constants in the same
fashion.
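A minimal sketch of such a macro; the exact definition and tag names in
the tree may differ:

    /* compose a 32-bit tag from four characters, e.g. 'D','O','N','E' */
    #define MULTICHAR_TAG(a, b, c, d) \
        (((u32)(a) << 24U) | ((u32)(b) << 16U) | ((u32)(c) << 8U) | (u32)(d))

    #define NVGPU_TAG_DONE MULTICHAR_TAG('D', 'O', 'N', 'E')  /* hypothetical user */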
Jira NVGPU-259
Change-Id: Ibe83617a8d15b657e450fdd59dbc171b3d822842
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1576933
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an abstraction of IO aperture accessors. Add new functions
gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
aperture fields from common code.
Implement Linux version of the abstraction by moving gk20a_readl()
and gk20a_writel() to new Linux specific io.c. Move the fields
defining IO aperture to nvgpu_os_linux.
Add t19x specific IO aperture initialization functions and add t19x
specific section to nvgpu_os_linux.
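A minimal sketch of the Linux implementation of one accessor, assuming
the aperture pointer (regs) now lives in nvgpu_os_linux:

    #include "os_linux.h"   /* struct nvgpu_os_linux; include path assumed */

    bool gk20a_io_exists(struct gk20a *g)
    {
        struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

        return l->regs != NULL;
    }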
JIRA NVGPU-259
Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569698
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For the sync-point read map:
1. Added an nvgpu_mem buffer to the gk20a struct, allocated it in
   gk20a_finalize_poweron() and freed it in gk20a_remove().
2. Added "u64 syncpt_ro_map_gpu_va" to the vm_gk20a struct for the
   read map in the vm.
Also added nvgpu_quiesce() in nvgpu_remove() before freeing the
sync-point read map to ensure that nvgpu is idle.
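A sketch of the allocation and free, assuming the member is named
syncpt_mem and sized to one page:

    /* in gk20a_finalize_poweron(); member name and size assumed */
    if (!nvgpu_mem_is_valid(&g->syncpt_mem))
        err = nvgpu_dma_alloc_sys(g, PAGE_SIZE, &g->syncpt_mem);

    /* in the remove path, after nvgpu_quiesce() */
    nvgpu_dma_free(g, &g->syncpt_mem);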
JIRA GPUT19X-2
Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1514338
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Pre-OS firmware takes care of, among other things, fan control until
the driver takes over. On some GPUs not enabling this firmware can
lead to physical board damage, hence it needs to be run.
JIRA: NVGPUGV100-9
Change-Id: I18d54cfd5eb64ecec79c5dae67ac8d5bb1facf36
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549035
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
We currently call gk20a_fifo_tsg_unbind_channel_verify_status() to
verify channel status while unbinding a channel from a TSG during
channel close.
Add support to do this verification per-platform and keep it disabled
for vgpu platforms.
Bug 200327095
Change-Id: I19fab41c74d10d528d22bd9b3982a4ed73c3b4ca
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1572368
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename get_physical_addr_bits and related functions to something that
more clearly conveys what they are doing. The basic idea of these
functions is to translate from a physical GPU address to an IOMMU GPU
address. To do that, a particular bit (which varies from chip to chip)
is added to the physical address.
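A minimal sketch of the translation, with the bit position treated as a
per-chip parameter (helper name illustrative):

    #include <linux/types.h>

    static inline u64 phys_to_iommu_gpu_addr(u64 phys, u32 iommu_bit)
    {
        return phys | (1ULL << iommu_bit);
    }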
JIRA NVGPU-68
Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542966
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added new defines for the following litter values:
GPU_LIT_SMPC_PRI_BASE
GPU_LIT_SMPC_PRI_SHARED_BASE
GPU_LIT_SMPC_PRI_UNIQUE_BASE
GPU_LIT_SMPC_PRI_STRIDE
Calculate offsets for ctx operations taking SMs per TPC into account
(see the sketch after this list). The following functions are modified
for this:
gr_gk20a_get_ctx_buffer_offsets
gr_gk20a_get_pm_ctx_buffer_offsets
__gr_gk20a_exec_ctx_ops
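A sketch of how a per-SM offset can be derived from the new litter
values (variable names illustrative):

    u32 smpc_base   = nvgpu_get_litter_value(g, GPU_LIT_SMPC_PRI_BASE);
    u32 smpc_stride = nvgpu_get_litter_value(g, GPU_LIT_SMPC_PRI_STRIDE);
    u32 offset      = smpc_base + sm_id * smpc_stride;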
Bug 200337994
Change-Id: I3a4ca470a4107d3078b708f38601762626ce1bf1
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1539069
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
kthread_run() can fail if SIGKILL is delivered to an application during
driver load.
In this change we defer the channel worker init to the first enqueue to
avoid this condition during driver power-on, which would otherwise
corrupt the driver state and leave subsequent attempts to load the
driver unsuccessful.
Because this code now runs later, the task structure must be protected
with a mutex.
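A condensed sketch of the lazy, mutex-protected worker start (field and
function names assumed):

    nvgpu_mutex_acquire(&g->channel_worker.start_lock);
    if (g->channel_worker.poll_task == NULL) {
        struct task_struct *task = kthread_run(nvgpu_channel_poll_worker,
                                               g, "nvgpu_channel_poll_%s",
                                               g->name);
        if (!IS_ERR(task))
            g->channel_worker.poll_task = task;
    }
    nvgpu_mutex_release(&g->channel_worker.start_lock);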
JIRA: EVLR-956
Bug 1816515
Change-Id: I3a159de2d1f03e70b2a3969730a927532ede2d6e
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1462490
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1460689
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This change solves two problems:
(*) the possibility of a crash due to interrupting the GPU
initialization following a bind
(*) an IOVA memory leak that could prevent the GPU from binding after
about 200 bind/unbind cycles
A detailed list of fixes:
- check that the arbiter is initialized before freeing it.
- do not re-enable interrupts on unbind when MSI is enabled.
- free the semaphore sea on unbind.
- ensure we don't double-load the VBIOS.
- check the return value of nvgpu_mutex_init for semaphores.
- add the corresponding nvgpu_mutex_destroy calls.
bug 1816516
Change-Id: Ia8af73019e0e1183998855d55bb3eea09672a8b7
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1465302
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: David Jarrett <djarrett@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1563019
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The basic nvgpu_mem_sgl implementation provides support
for OS specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.
This patch implements an nvgpu_mem_sgt struct to act as a header which
is inserted at the front of any scatter-gather list implementation.
This labels every such list with a set of ops which can be used to
interact with the attached scatter-gather list.
Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
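A structural sketch of the header and its ops (field names and
signatures assumed):

    struct nvgpu_mem_sgt_ops {
        void *(*sgl_next)(void *sgl);
        u64 (*sgl_phys)(void *sgl);
        u64 (*sgl_dma)(void *sgl);
        u64 (*sgl_length)(void *sgl);
        void (*sgt_free)(struct gk20a *g, struct nvgpu_mem_sgt *sgt);
    };

    struct nvgpu_mem_sgt {
        const struct nvgpu_mem_sgt_ops *ops;
        void *sgl;   /* opaque, OS-specific scatter-gather list */
    };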
Jira NVGPU-186
Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scattergather tables and scattergather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and also overloaded to represent
VIDMEM allocs.
The notion of a scatter gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has a MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense for the
GPU driver to use some sort of scatter-gather concept to maximize
memory usage efficiency.
To that end this patch keeps the notion of a scatter gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks - each representing a chunk of memory. To map or
use a DMA allocation SW must iterate over each block in the
SGL.
This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However, this
patch's goal is simply to divest the core MM code of any last
Linuxisms. Speed and efficiency come next.
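A sketch of the block-list form of the nvgpu SGL and a typical
iteration over it (member names assumed; map_chunk() is hypothetical):

    struct nvgpu_mem_sgl {
        struct nvgpu_mem_sgl *next;   /* singly linked */
        u64 phys;
        u64 dma;
        u64 length;
    };

    /* map each chunk of a discontiguous DMA allocation */
    for (sgl = sgt_sgl; sgl != NULL; sgl = sgl->next)
        map_chunk(vm, sgl->phys, sgl->length);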
Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added a function to read WPR info from the FB MMU registers.
- Added a HAL pointer for the WPR info read function.
- Replaced the WPR info read from MC with the HAL.
- Removed the debugfs header include from the ACR files.
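An illustrative hook-up of the new HAL (function and struct names
assumed):

    /* in the gm20b HAL init */
    gops->fb.read_wpr_info = gm20b_fb_read_wpr_info;

    /* ACR code then queries WPR info through the FB HAL */
    struct wpr_carveout_info wpr_inf;
    g->ops.fb.read_wpr_info(g, &wpr_inf);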
JIRA NVGPU-128
Change-Id: I5ebec46bfe03b9200f2aa569f2e5a780a715616d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1564683
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Initialize the following counters in the context header for all legacy
chips:
ctxsw_prog_main_image_num_save_ops
ctxsw_prog_main_image_num_restore_ops
This was already present in the code, but it is moved to a function,
gk20a_gr_init_ctxsw_hdr_data(), so that it can be re-used across chips
(a sketch follows the lists below).
Additionally, initialize the following preemption-related counters in
the context header for gp10b onwards:
ctxsw_prog_main_image_num_wfi_save_ops
ctxsw_prog_main_image_num_cta_save_ops
ctxsw_prog_main_image_num_gfxp_save_ops
ctxsw_prog_main_image_num_cilp_save_ops
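A minimal sketch of the shared helper, assuming the hw header accessor
names for the counter offsets:

    static void gk20a_gr_init_ctxsw_hdr_data(struct gk20a *g,
                                             struct nvgpu_mem *mem)
    {
        /* zero the save/restore counters in the context header */
        nvgpu_mem_wr(g, mem,
                     ctxsw_prog_main_image_num_save_ops_o(), 0);
        nvgpu_mem_wr(g, mem,
                     ctxsw_prog_main_image_num_restore_ops_o(), 0);
    }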
Bug 1958308
Change-Id: I0e45ec718a8f9ddb951b52c92137051b4f6a8c60
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1562654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
We currently remove a channel from the TSG list and disable all the
channels in the TSG while removing a channel from the TSG.
With this sequence, if any one channel in a TSG is closed, the rest of
the channels are marked as timed out and cannot be used anymore.
We need to fix this sequence as below so that a channel can be removed
from an active TSG while the rest of the channels remain usable:
- disable all channels of TSG
- preempt TSG
- check if CTX_RELOAD is set, if support is available;
  if CTX_RELOAD is set on the channel, it should be moved to some other channel
- check if FAULTED is set, if support is available
- if NEXT is set on the channel then the channel is still active;
  print an error in this case for the time being, until it is properly handled
- remove the channel from runlist
- remove channel from TSG list
- re-enable rest of the channels in TSG
- clean up the channel (same as regular channels)
Add the below fifo operations to support checking channel status:
g->ops.fifo.tsg_verify_status_ctx_reload
g->ops.fifo.tsg_verify_status_faulted
Define the ops.fifo.tsg_verify_status_ctx_reload operation for
gm20b/gp10b/gp106 as gm20b_fifo_tsg_verify_status_ctx_reload().
This API checks whether the channel being released has CTX_RELOAD set;
if yes, CTX_RELOAD needs to be moved to some other channel in the TSG.
Remove static from channel_gk20a_update_runlist() and export it.
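A condensed sketch of the new unbind ordering (helper names assumed;
error handling omitted):

    g->ops.fifo.disable_tsg(tsg);
    g->ops.fifo.preempt_tsg(g, tsg->tsgid);

    if (g->ops.fifo.tsg_verify_status_ctx_reload != NULL)
        g->ops.fifo.tsg_verify_status_ctx_reload(ch);
    if (g->ops.fifo.tsg_verify_status_faulted != NULL)
        g->ops.fifo.tsg_verify_status_faulted(ch);

    channel_gk20a_update_runlist(ch, false);   /* drop from runlist */
    nvgpu_list_del(&ch->ch_entry);             /* drop from TSG list */
    g->ops.fifo.enable_tsg(tsg);               /* re-enable the rest */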
Bug 200327095
Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560637
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add platform-specific operations to enable/disable a TSG and use them
instead of directly calling the enable/disable APIs.
For gm20b/gp106/gp10b we continue to use gk20a_enable_tsg() and
gk20a_disable_tsg() as the platform-specific operations.
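A minimal sketch of the per-chip assignment (op field names assumed):

    /* in the gm20b/gp10b/gp106 HAL init */
    gops->fifo.enable_tsg = gk20a_enable_tsg;
    gops->fifo.disable_tsg = gk20a_disable_tsg;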
Bug 1739362
Change-Id: I2dd0f38c8303757e8c7a47d8da0e30a790e514f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560635
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST causes host to expire the
current timeslice and reschedule from the front of the runlist.
This can be used together with NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH to
make a channel start sooner after submit rather than waiting for
natural timeslice expiration or block/finish of the currently running
channel.
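A sketch of the submit-path check; the reschedule helper name here is
hypothetical:

    /* after the gpfifo entries have been written */
    if (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST)
        gk20a_fifo_reschedule_runlist(c, true /* preempt current */);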
Bug 1968813
Change-Id: I632e87c5f583a09ec8bf521dc73f595150abebb0
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: http://git-master/r/#/c/1537198
Reviewed-on: https://git-master.nvidia.com/r/1537198
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Prints the addresses of device-specific HAL functions to the debugfs
file hal/gops.
The list of functions is produced by dumping the contents of the
gpu_ops substruct of the gk20a struct. This interface assumes that
gpu_ops contains only function pointers.
The companion Python script nvgpu_debug_hal.py analyzes gk20a.h to
determine operation counts and prettify the debugfs interface's
output.
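A condensed sketch of the dump, relying on the stated assumption that
gpu_ops holds only function pointers (function name assumed; lives in
the Linux debugfs code):

    static int nvgpu_hal_gops_show(struct seq_file *s, void *unused)
    {
        struct gk20a *g = s->private;
        void **ops = (void **)&g->ops;
        size_t i, n = sizeof(g->ops) / sizeof(void *);

        for (i = 0; i < n; i++)
            seq_printf(s, "%pS\n", ops[i]);
        return 0;
    }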
Jira NVGPU-107
Change-Id: I0910e86638d144979e8630bbc5b330bccfd3ad94
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542990
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move non-function-pointer members out of the pmu and pmu_ver
substructs of gpu_ops. Ideally gpu_ops will contain only function
pointers, better matching its intended purpose and improving
readability.
- g.ops.pmu_ver.cmd_id_zbc_table_update has been changed to
g.pmu_ver_cmd_id_zbc_table_update
- g.ops.pmu.lspmuwprinitdone has been changed to
g.pmu_lsf_pmu_wpr_init_done
- g.ops.pmu.lsfloadedfalconid has been changed to
g.pmu_lsf_loaded_falcon_id
Boolean flags have been implemented using the enabled.h API (see the
sketch after this list):
- g.ops.pmu_ver.is_pmu_zbc_save_supported moved to
common flag NVGPU_PMU_ZBC_SAVE
- g.ops.pmu.fecsbootstrapdone moved to
common flag NVGPU_PMU_FECS_BOOTSTRAP_DONE
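A sketch of the enabled.h usage replacing the old booleans (the flag
name is from this change; the setter/getter are nvgpu's enabled API;
the caller below is hypothetical):

    /* set once ZBC save support is known for the chip */
    __nvgpu_set_enabled(g, NVGPU_PMU_ZBC_SAVE, true);

    /* callers then test the flag instead of a gpu_ops boolean */
    if (nvgpu_is_enabled(g, NVGPU_PMU_ZBC_SAVE))
        save_zbc_table(g);   /* hypothetical ZBC save path */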
Jira NVGPU-74
Change-Id: I08fb20f8f382277f2c579f06d561914c000ea6e0
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530981
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reorganize HAL initialization to remove inheritance and construct
the gpu_ops struct at compile time. This patch only covers the
mm sub-module of the gpu_ops struct.
Perform HAL function assignments in hal_gxxxx.c through the
population of a chip-specific copy of gpu_ops.
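An abbreviated sketch of the compile-time construction (entries and
chip shown are illustrative):

    static const struct gpu_ops gm20b_ops = {
        .mm = {
            .support_sparse = gm20b_mm_support_sparse,
            .gmmu_map = gk20a_locked_gmmu_map,
            .gmmu_unmap = gk20a_locked_gmmu_unmap,
        },
    };

    int gm20b_init_hal(struct gk20a *g)
    {
        g->ops.mm = gm20b_ops.mm;   /* copy the chip-specific mm ops */
        return 0;
    }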
Jira NVGPU-74
Change-Id: Ieb87a62f047510e51c52e6563d8e3fd5a65b5f28
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1537753
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reorganize HAL initialization to remove inheritance and construct
the gpu_ops struct at compile time. This patch only covers the
mm sub-module of the gpu_ops struct.
Perform HAL function assignments in hal_gxxxx.c through the
population of a chip-specific copy of gpu_ops.
Jira NVGPU-74
Change-Id: I289284e6e528fc7951c959c8765ccf9349eec33b
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1533351
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the mm.get_iova_addr() HAL and replace it with a new HAL
called mm.gpu_phys_addr(). This new HAL provides the real phys
address that should be passed to the GPU from a physical address
obtained from a scatter list. It also provides a mechanism by
which the HAL code can add extra bits to a GPU physical address
based on the attributes passed in. This is necessary during GMMU
page table programming.
Also remove the flags argument from the various address functions.
This flag was used for adding an IO coherence bit to the GPU
physical address which is not supported.
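The new HAL's shape, as a sketch (argument types assumed):

    /* in struct gpu_ops.mm */
    u64 (*gpu_phys_addr)(struct gk20a *g,
                         struct nvgpu_gmmu_attrs *attrs, u64 phys);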
JIRA NVGPU-30
Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530864
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Pass struct gk20a pointer instead of struct device to
gk20a_wait_for_idle(). The code is not Linux specific and does not
need a pointer to struct device.
Change-Id: I2cafd6c7db019c9de76b6e68a1ae73f0b4cea37d
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1533173
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Refactor the sync_debugfs LTC HAL op so that the logic to enable
or disable LTC goes to common code nvgpu_ltc_sync_enabled() and
the LTC HAL set_enabled only performs the hardware register access.
Create a new common function nvgpu_init_ltc_support() to initialize
the LTC software variable, and move hardware initialization of LTC to
be called from it.
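A condensed sketch of the split between common logic and the HAL
(field names assumed):

    void nvgpu_ltc_sync_enabled(struct gk20a *g)
    {
        if (g->ops.ltc.set_enabled == NULL)
            return;

        /* common code decides; the HAL only touches registers */
        if (g->mm.ltc_enabled_current != g->mm.ltc_enabled_target) {
            g->ops.ltc.set_enabled(g, g->mm.ltc_enabled_target);
            g->mm.ltc_enabled_current = g->mm.ltc_enabled_target;
        }
    }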
JIRA NVGPU-62
Change-Id: Ib1cf4f5b83ca3dac08407464ed56a732e0a33923
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1528262
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move fields in struct gk20a related to interrupt handling into the
Linux-specific nvgpu_os_linux. At the same time move the counter logic
from a HAL function into Linux-specific code, and move two Linux
specific power management functions from the generic gk20a.c to the
Linux-specific module.c.
JIRA NVGPU-123
Change-Id: I0a08fd2e81297c8dff7a85c263ded928496c4de0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1528177
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
GVS: Gerrit_Virtual_Submit