KMD needs to send the domain id and the GPU_VA corresponding
to the struct runlist_domains to GSP. In the current
implementation, struct nvgpu_runlist_domain contains
the domain name instead of the domain id. This requires
an additional search by name every time an update
needs to be submitted to the GSP.
Modify struct nvgpu_runlist_domain to store the domain id
instead of the domain name. This simplifies the flow and avoids
the unnecessary search.
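A minimal sketch of the struct change; all fields other than the
id are illustrative assumptions:

  /* Before: the domain was tracked by name, forcing a by-name
   * search on every update submitted to GSP. After: store the id
   * directly so it can be passed straight to GSP. */
  struct nvgpu_runlist_domain {
          u64 domain_id;  /* sent to GSP along with the GPU_VA */
          /* ... runlist buffers, GPU_VA, etc. (illustrative) ... */
  };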
Remove the conditional check for the existence of the shadow
domain, as it is dead code. The shadow domain is not searchable
in the list of domains inside struct nvgpu_runlist.
Jira NVGPU-8610
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I0d67cfa93d89186240290e933aa750702b14f4f0
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2744890
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Some of the functions with no traceability to unit tests are
already covered by callee API functions. Skip these functions in
SWVR by skipping doxygen for them.
Some of the functions are non-FuSa, like those in profile.h and
bsearch.h. Those were included because the header was included in
the Doxygen sources. Mark them non-safe.
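For reference, one common way to skip Doxygen processing for a
function is a @cond/@endcond pair; the condition name and function
below are placeholders, not necessarily what the SWVR build uses:

  /** @cond SWVR_SKIP (placeholder condition name) */
  /* Covered indirectly through its callee API functions. */
  int nvgpu_example_helper(struct gk20a *g);
  /** @endcond */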
Some of the nvgpu functions were not added to the Targets entries
for their respective tests. Fix those.
JIRA NVGPU-7211
Change-Id: Iacf22dccdd9340100cf93814566d3979734c455d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2612982
(cherry picked from commit a40f62654747102cc8ef53ddbd9f953c21c2b745)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2737672
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to maintain separate mappings of GR TSG and global
context buffers for different subcontexts, we need to separate
the memory struct and the mapping struct for the buffers. This
patch moves the mappings of all GR ctx buffers to a new
structure, nvgpu_gr_ctx_mappings.
This will be instantiated per subcontext in upcoming patches.
Summary of changes:
1. Various context buffers were allocated and mapped separately.
   All TSG context buffers are now stored in the gr_ctx->mem[]
   array, since allocation and mapping are unified for them.
2. Mapping/unmapping and querying the GPU VA of the context
   buffers are now handled in the ctx_mappings unit. The
   structure nvgpu_gr_ctx_mappings in nvgpu_gr_ctx holds the
   maps. This struct is instantiated on ALLOC_OBJ_CTX and deleted
   on free_gr_ctx.
3. Introduce mapping flags for TSG and global context buffers so
   that different buffers can be mapped with different caching
   attributes. Map all buffers as cacheable except the
   PRIV_ACCESS_MAP, RTV_CIRCULAR_BUFFER, FECS_TRACE, GR CTX
   and PATCH ctx buffers. Map all buffers as privileged.
4. Wherever a VM or GPU VA was passed into the obj_ctx allocation
   functions, it is now replaced by nvgpu_gr_ctx_mappings.
5. The free_gr_ctx API need not accept the VM, as the mappings
   struct holds the VM. The mappings struct is kept in gr_ctx.
6. Move the preemption buffer allocation logic out of
   nvgpu_gr_obj_ctx_set_graphics_preemption_mode.
7. Update set_preemption_mode and gr_gk20a_update_hwpm_ctxsw_mode
   to ensure buffers are allocated and mapped.
8. Keep the unit tests and documentation updated.
With these changes there is a clear segregation of allocation and
mapping of GR context buffers. This will simplify the further
change to add multiple address spaces support. With multiple
address spaces in a TSG, subcontexts created after the first
subcontext just need to map the buffers.
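A rough sketch of the resulting split; apart from the identifiers
named above, field and constant names are assumptions:

  /* Allocation: TSG context buffer memory lives in gr_ctx. */
  struct nvgpu_gr_ctx {
          struct nvgpu_mem mem[NVGPU_GR_CTX_BUF_COUNT]; /* name assumed */
          struct nvgpu_gr_ctx_mappings *mappings; /* created on
                          ALLOC_OBJ_CTX, deleted on free_gr_ctx */
  };

  /* Mapping: GPU VAs per address space. A future subcontext can
   * hold its own instance against the same memory. Holding the VM
   * here is why free_gr_ctx no longer needs a VM argument. */
  struct nvgpu_gr_ctx_mappings {
          struct vm_gk20a *vm;
          u64 gpu_va[NVGPU_GR_CTX_BUF_COUNT];
  };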
Bug 3677982
Change-Id: I3cd5f1311dd85aad1cf547da8fa45293fb7a7cb3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2712222
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The set_patch_addr parameter to nvgpu_gr_ctx_set_patch_ctx was
redundant. Remove it.
Add new functions nvgpu_gr_ctx_set_hwpm_pm_mode to set the PM
mode and nvgpu_gr_ctx_set_hwpm_ptr to set the PM ptr in gr_ctx.
Rename the subctx function to nvgpu_gr_subctx_set_hwpm_ptr.
This simplifies the logic in gr_gk20a_update_hwpm_ctxsw_mode for
setting the PM mode and PM ptr; the channel loop is needed only
for subcontexts.
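Plausible shapes of the new/renamed functions; the exact
parameter lists are assumptions:

  /* Set the PM mode in gr_ctx (patch address no longer needed). */
  void nvgpu_gr_ctx_set_hwpm_pm_mode(struct gk20a *g,
                  struct nvgpu_gr_ctx *gr_ctx, u32 pm_mode);

  /* Write the PM buffer pointer into the gr_ctx image. */
  void nvgpu_gr_ctx_set_hwpm_ptr(struct gk20a *g,
                  struct nvgpu_gr_ctx *gr_ctx, u64 pm_ptr);

  /* Renamed subctx variant; the channel loop applies only here. */
  void nvgpu_gr_subctx_set_hwpm_ptr(struct gk20a *g,
                  struct nvgpu_gr_subctx *subctx, u64 pm_ptr);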
Bug 3677982
Change-Id: I44acb09f6296ba8d510e278910188864f39e7157
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2743724
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Added implementations for the following IOCTLs:
NVGPU_NVS_CTRL_FIFO_CREATE_QUEUE
NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE
The above ioctls are supported only for users with R/W
permissions.
1) NVGPU_NVS_CTRL_FIFO_CREATE_QUEUE constructs a memory region
   via the nvgpu_dma_alloc_sys() API and creates the corresponding
   GPU and kernel mappings. Upon successful creation, KMD exports
   this buffer to userspace via a dmabuf fd that the UMD can use
   to mmap it into its process address space (see the sketch
   after this list).
2) Added plumbing to store the VMAs corresponding to different
   users of the event queue in the future.
3) Added the necessary validation checks for the IOCTLs.
4) NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE is used to clear the queues.
5) Use a global queue lock to protect access to the queues. This
   could be made more fine-grained in the future when there is
   more clarity on GSP's implementation and access of the queues.
6) Added plumbing to enable user subscription to queues.
   NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE is used to unsubscribe the
   user from the queue. Once the last user is deleted, all the
   queues are cleared. Users must ensure that any mappings are
   removed before calling release queue.
7) Set the default queue_size for event queues to PAGE_SIZE. This
   can be modified later. For event queues, the UMD shall fetch
   the queue_size.
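A condensed sketch of the create path from item 1; error handling
is trimmed, and the dmabuf-export helper name is hypothetical
(nvgpu_dma_alloc_sys is the API named above):

  static int nvs_ctrl_fifo_create_queue(struct gk20a *g,
                  struct nvgpu_nvs_queue *q, size_t size)
  {
          /* Sysmem backing; the GPU and kernel mappings are set
           * up alongside (trimmed here). */
          int err = nvgpu_dma_alloc_sys(g, size, &q->mem);

          if (err != 0)
                  return err;

          /* Export as a dmabuf fd for the UMD to mmap; this
           * helper name is hypothetical. */
          q->dmabuf_fd = nvs_queue_export_dmabuf(g, &q->mem);
          if (q->dmabuf_fd < 0) {
                  nvgpu_dma_free(g, &q->mem);
                  return q->dmabuf_fd;
          }
          return 0;
  }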
Jira NVGPU-8129
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I31633174e960ec6feb77caede9d143b3b3c145d7
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2723198
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add a device node for management of nvs control fifo buffers for
scheduling domains. The current design consists of a master
structure, struct nvgpu_nvs_domain_sched_ctrl, for management of
users as well as control queues. Initially, all users are added
as non-exclusive users.
Subsequent changes will add support for IOCTLs to manage opening
of Send/Receive and Event buffers, querying characteristics, etc.
In subsequent changes, a user that tries to open a Send/Receive
queue will first try to reserve itself as an exclusive user, and
only if that succeeds can it proceed with creating both
Send/Receive queues. Exclusive users are reset to non-exclusive
just before they close their device node handle.
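A sketch of the exclusive-user reservation described above;
structure and field names beyond nvgpu_nvs_domain_sched_ctrl are
assumptions:

  static int nvs_reserve_exclusive_user(
                  struct nvgpu_nvs_domain_sched_ctrl *s,
                  struct nvs_user *user)
  {
          int err = 0;

          nvgpu_mutex_acquire(&s->lock);
          /* Only one exclusive user at a time; Send/Receive queue
           * creation proceeds only if this reservation succeeds. */
          if (s->exclusive_user != NULL && s->exclusive_user != user)
                  err = -EBUSY;
          else
                  s->exclusive_user = user;
          nvgpu_mutex_release(&s->lock);

          return err;
  }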
Jira NVGPU-8128
Change-Id: I15a83f70cd49c685510a9fd5ea4476ebb3544378
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2691404
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
On a host copy engine PBDMA interrupt, the channel is aborted as
part of recovery and its syncpoint value is set to the max
threshold.
The syncpoint may then get incremented by the PBDMA (an incr cmd
gets processed) after this interrupt is handled, leading to the
syncpoint value becoming greater than the max threshold.
Later, while unbinding the channel, the syncpoint value is
incremented until it reaches the max threshold. Since the value
is already greater than the max threshold, the host1x version of
nvgpu_nvhost_syncpt_set_minval will loop over the entire u32
range until it wraps back around to the max threshold, and this
hangs the channel unbind.
nvgpu_nvhost_syncpt_set_minval can ensure the syncpoint value is
greater than or equal to the max threshold. Hence, update the
check on the syncpoint value from "not equal" to "less than".
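In sketch form (the helper names only approximate the host1x
path):

  u32 val = nvgpu_nvhost_syncpt_read(sp, id); /* name assumed */

  /* Before: "!=" spins through the whole u32 range when the
   * value has already passed the threshold. After: "<" stops
   * immediately once the value reaches or has passed it. */
  while (val < max_threshold) {   /* was: val != max_threshold */
          nvgpu_nvhost_syncpt_incr(sp, id);       /* name assumed */
          val = nvgpu_nvhost_syncpt_read(sp, id);
  }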
Bug 3681100
Change-Id: I96e7a1f53d4037e9ed858a2e90dd5a8d17ed6bb0
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2742604
(cherry picked from commit f246facd01)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2742603
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
BVEC changes for nvgpu_rc_pbdma_fault and nvgpu_rc_mmu_fault
started reporting the MISRA issues below.
kernel/nvgpu/drivers/gpu/nvgpu/common/fifo/tsg.c:522:
1. misra_c_2012_rule_10_4_violation: Essential type of the left hand
operand "error_notifier" (unsigned) is not the same as that of
the right operand "NVGPU_ERR_NOTIFIER_INVAL"(enum).
kernel/nvgpu/drivers/gpu/nvgpu/common/fifo/tsg.c:541:
1. misra_c_2012_rule_10_3_violation: Implicit conversion of
"NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT" from essential type
"anonymous enum" to different or narrower essential type
"unsigned 32-bit int".
Change the enum nvgpu_err_notif values to u32 values declared
via #define macros.
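The shape of the change (the constant values are illustrative):

  /* Before: "anonymous enum" essential type trips MISRA 10.3 and
   * 10.4 when mixed with u32 operands. */
  enum nvgpu_err_notif {
          NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT,
          NVGPU_ERR_NOTIFIER_INVAL,
          /* ... */
  };

  /* After: plain u32 constants match the essential type of the
   * variables they are assigned to or compared with. */
  #define NVGPU_ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT  0U
  #define NVGPU_ERR_NOTIFIER_INVAL                   1U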
JIRA NVGPU-6772
Change-Id: Icac7f567cea52cde07ca200b21eb3e7dd2b9e645
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2584153
(cherry picked from commit 2f073f341bd55242c857c6c6d35d6015495025e2)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2623634
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit
This patch primarily separates runlist modification from runlist
submits.
Instead of submitting the runlist (domain) immediately after
modification, a worker-thread interface is now used to
synchronously schedule runlist submits. If the runlist being
scheduled is currently active, the submit happens instantly;
otherwise, it happens in the next iteration when the nvs thread
schedules the domain. This external interface uses a condition
variable to wait for the completion of the synchronous submits.
A pending_update variable is used to synchronize domain memory
swaps just before submission.
To facilitate faster scheduling via the NVS thread, nvgpu_dom
itself contains an array of rl_domain pointers. This can be used
to select the appropriate rl_domain directly for scheduling,
instead of the earlier approach of keeping nvs domains and rl
domains in sync every time.
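A sketch of the synchronous handoff; apart from pending_update,
the names below are assumptions:

  /* Worker side: swap the domain memory and submit under the
   * lock, then wake the external caller waiting on the condition
   * variable. */
  static void nvs_worker_submit(struct nvgpu_runlist *rl)
  {
          nvgpu_mutex_acquire(&rl->submit_lock);
          if (rl->pending_update) {
                  swap_domain_mem(rl);    /* hypothetical helpers */
                  submit_runlist_hw(rl);
                  rl->pending_update = false;
                  nvgpu_cond_signal(&rl->submit_cond);
          }
          nvgpu_mutex_release(&rl->submit_lock);
  }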
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I1725c7cf56407cca2e3d2589833d1c0b66a7ad7b
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2739795
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit
bpmp will floorsweep GPCs as per the parameters passed to the
tpc_pg_mask sysfs node. While doing that, the corresponding GPC
clocks are also disabled. nvgpu should re-initialize the clocks
every time the GPC/TPC pg_masks are passed to the bpmp mrq.
Also print an error when clk_prepare_enable fails.
Introduce platform->clks_lock to protect access to
platform->clks and platform->num_clks from unrailgate/railgate
and the bpmp mrq set calls from sysfs.
Acquire static_pg_lock in the railgate path to synchronize
railgate with sysfs.
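The clock re-init under the new lock might look like this
(a sketch; platform->clks and platform->num_clks are named above,
the rest is assumed):

  nvgpu_mutex_acquire(&platform->clks_lock);
  for (i = 0; i < platform->num_clks; i++) {
          err = clk_prepare_enable(platform->clks[i]);
          if (err != 0)
                  dev_err(dev, "clk_prepare_enable failed: %d\n",
                          err);
  }
  nvgpu_mutex_release(&platform->clks_lock);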
Bug 3688506
Change-Id: I3203d78b87289e7a847d78b3117e2d3119be3425
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2738920
(cherry picked from commit 28ddb0996f)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2741029
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Enable/disable the LSPMU interrupt in MC as required. LSPMU
interrupts are configured as part of LSPMU ucode init and don't
need any additional PMU IRQ register set/clear as part of the
GPU power-on/off sequence.
Bug 3681561
Change-Id: Ifb47bc9cc83e16e46649b0eef5f257acb02f302c
Signed-off-by: mkumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2739476
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
The HSI error injection utility is an on-bench debug and test
utility which can be used by customers and SQA to test the
end-to-end error detection and reporting path.
Implement a callback function to integrate with this utility and
allow injecting GPU HSI related errors.
In the callback function hsierrrpt_inj(), invoke the driver's
error-reporting logic, which uses the EPD MISC_EC APIs. In the
future, we can enhance the callback function to trigger the
driver's error handling logic incrementally for different errors.
Bug 3413214
Change-Id: I2d050b6c850d6151b40095f243a6733b4ba74f47
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2727198
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
While creating a new channel, ioctls are called in the sequence
below:
1. GPU_IOCTL_OPEN_CHANNEL
2. AS_IOCTL_BIND_CHANNEL
3. TSG_IOCTL_BIND_CHANNEL_EX
4. CHANNEL_ALLOC_GPFIFO_EX
5. CHANNEL_ALLOC_OBJ_CTX
The subctx PDBs and valid mask are programmed into the channel
instance block in the AS_IOCTL_BIND_CHANNEL and
CHANNEL_ALLOC_GPFIFO_EX ioctls. Programming them in
AS_IOCTL_BIND_CHANNEL is redundant.
Remove the related HAL g->ops.mm.init_inst_block_for_subctxs.
The init_inst_block HAL programs the context PDB and big page
size. The init_inst_block_core HAL programs the context PDB, big
page size and subctx 0 PDB; it is used by h/w units (fecs, pmu,
hwpm, bar1, bar2, sec2, gsp, perfbuf, etc.).
For user channels, subctx PDBs are programmed as part of ramfc
setup.
Bug 3677982
Change-Id: I6656b002d513404c1fd7c3d349933e80cca7e604
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2680907
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gr ctx buffer is non-cacheable, hence there is no need to do
an L2 cache flush when updating the buffer. Remove the flushes.
The pm ctx buffer is cacheable, hence add an L2 flush in
nvgpu_profiler_quiesce_hwpm_streamout_non_resident since it
updates the buffer.
Bug 3677982
Change-Id: I0c15ec7a7f8fa250af1d25891122acc24443a872
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2713916
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Enable DEVFREQ for the OOT module unconditionally, with podgov as
the governor module.
linux/pm_qos is only used for downstream-supported modifications,
which is currently determined by CONFIG_GK20A_PM_QOS.
struct devfreq_dev_status doesn't have a 'busy' field in the
upstream driver, hence enable that code only when the downstream
driver is in use, as activated by CONFIG_GK20A_PM_QOS.
governor.h is only needed for Android platforms, which depend on
the 4.9 kernel in downstream builds. Hence, add a compile-time
check to exclude it for kernel versions greater than 4.9.
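The version guard might look like this (a sketch; the exact
header path is assumed):

  #include <linux/version.h>

  /* governor.h is only needed on 4.9-based Android downstream
   * builds; exclude it on anything newer than 4.9. */
  #if LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0)
  #include "governor.h"
  #endif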
Jira LS-418
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Id242bd28e66ed187208f0d7975ee0bc508730a88
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2705766
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Fix the following Coverity defects:
clk_mon_tu104.c : Out-of-bounds write
clk_mon_tu104.c : Out-of-bounds read
clk_mon_tu104.c : Out-of-bounds access
Fix the following CERT-C defect:
clk_mon_tu104.c : CERT STR31-C
To fix an older Coverity defect, we had changed the datatype of
the domain mask from u32 to unsigned long. This introduced
another issue: the bit_pos range changed from [0,32) to [0,64).
Changing CLK_CLOCK_MON_DOMAIN_COUNT from 0x32U to 0x40U solves
the issue.
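The root cause in sketch form: iterating the set bits of an
unsigned long walks bit positions up to 64, so any table indexed
by bit_pos needs at least 64 entries (struct and helper names are
assumed):

  #define CLK_CLOCK_MON_DOMAIN_COUNT 0x40U /* was 0x32U: too
                                              small for bit 63 */

  unsigned long domain_mask = get_domain_mask();  /* was u32 */
  u32 bit_pos;
  struct clk_domain_mon_status status[CLK_CLOCK_MON_DOMAIN_COUNT];

  /* bit_pos now ranges over [0, 64) instead of [0, 32). */
  for_each_set_bit(bit_pos, &domain_mask, BITS_PER_LONG) {
          /* in bounds only if COUNT >= 64 */
          status[bit_pos].domain = bit_pos;
  }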
CID 10138023
CID 10138024
CID 10138025
CID 518885
CID 518887
CID 518890
Bug 3460991
Bug 3512546
Signed-off-by: Jinesh Parakh <jparakh@nvidia.com>
Change-Id: I2a4853d87d7bb316db3de56ef34a039bf02486d7
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2728545
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- This patch updates the ZBC table values as per the POR values
for the safety build.
- Fix the color table default value initialization for the
standard build, which was being done in floating-point format for
CROP while it should be in FB format. As per the documentation,
the "CROP ZBC table should be programmed exactly the way the L2
table is programmed".
Bug 3585766
Change-Id: I47d11b6a230189ee0c818f850d36b93c0aea0e54
Signed-off-by: prsethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2724935
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit
Some configs are set for the stable kernel, and this is currently
identified from NV_BUILD_KERNEL_OPTIONS. The stable kernel builds
nvgpu as an out-of-tree module and passes the environment config
CONFIG_TEGRA_OOT_MODULE during the build.
Hence, it is not required to use NV_BUILD_KERNEL_OPTIONS to
identify the kstable build. Use CONFIG_TEGRA_OOT_MODULE for
setting the configs when building as a module.
Bug 3652905
Change-Id: I6570760e91ca98a4c83d7691fad517b2c772e629
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2720729
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
vidmem buffers use fds as buffer handles, and we need to allocate
more than 1024 fds. tegra_alloc_fd was exported by the TEGRA_MC
driver and allowed allocating more than 1024 fds; however, that
function is to be removed from that driver.
Hence, use the kernel-exported function __alloc_fd directly from
nvgpu. This is currently to be used only for dGPU on the
downstream 5.10 kernel.
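On the 5.10 kernel the call might look like the following sketch
(the range start and flags are illustrative, not the exact nvgpu
change):

  #include <linux/fdtable.h>
  #include <linux/file.h>

  /* __alloc_fd() takes the files table plus a [start, end)
   * range, so fds beyond the first 1024 can be handed out for
   * vidmem buffers. */
  fd = __alloc_fd(current->files, 1024, sysctl_nr_open, O_CLOEXEC);
  if (fd < 0)
          return fd;
  fd_install(fd, file);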
Bug 3535321
Change-Id: I10cfc41a6439f07309cda9eb2f22746f3fbac996
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2702794
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Tested-by: Sagar Kamble <skamble@nvidia.com>
GVS: Gerrit_Virtual_Submit
This patch creates a sysfs node which is easier to parse than
the existing mig_mode_config_list node. This new node outputs
in the following format:
active: -1
num_configs: 2
num_instances: 2
id:000000000001 gr:000000000000 gpc:0003
id:000000000002 gr:000000000004 gpc:0003
num_instances: 3
id:000000000001 gr:000000000000 gpc:0003
id:000000000005 gr:000000000004 gpc:0002
id:000000000013 gr:000000000006 gpc:0001
Bug 200740852
Change-Id: I8a3d4425ccb88dd4e58bbe1908e0f7cc577ff191
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2704349
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
To be able to access the full physical memory range, the GPU's
dma_mask needs to be set to the max value of the H/W-compatible
range. For example, in order to support from 2 GB to 66 GB,
GV11B's dma_mask needs to be at least 37 bits. Set GV11B's
dma_mask to 38 bits and T23X's dma_mask to 39 bits. These values
are supported by the H/W.
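With the standard DMA API this amounts to the following (a
sketch):

  #include <linux/dma-mapping.h>

  /* 37 bits would already cover 66 GB (2^36 = 64 GB is not
   * enough); the H/W supports 38 bits on GV11B and 39 on T23X,
   * so set the maximum it allows. */
  err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(38));
  if (err != 0)
          dev_err(dev, "failed to set dma_mask: %d\n", err);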
Bug 3656729
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Icfff3c36a8c9cf074a254fa773c42e18020ae5de
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2723640
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Brad Griffis <bgriffis@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Brad Griffis <bgriffis@nvidia.com>
When an MMU fault happens and id_type = 1, the fault happened in
a TSG. In that path we set the error notifier and let userspace
know about the faulty channel.
During this, we check whether a debugger is attached by reading
the gr_gpc0_tpc0_sm0_dbgr_control0_r() register. At this time
ELPG is enabled, and this read causes an IDLE SNAP error for
ELPG.
To resolve this, move the CG/PG disable function call earlier in
the fifo recovery code path. This ensures that ELPG is disabled
before any GR register is read.
Bug 3660592
Change-Id: Ie5d01b7ccf00167b58f260e9142aa5deb2a08be4
Signed-off-by: Divya <dsinghatwari@nvidia.com>
(cherry picked from commit f09e429f2d142c20529bedc05acf193805e1bb25)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2720655
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
GVS: Gerrit_Virtual_Submit