The following changes are made:
1) nvgpu_gr_config->gpc_tpc_mask_physical is now indexed by physical
gpc id instead of logical id.
2) Removed the conversion of logical fbp ids and replaced them with
physical ids.
3) nvgpu_gpu_instance->fbp_en_mask now contains the mask of physical fbp ids.
4) gk20a_ctrl_ioctl_gpu_characteristics() now returns gpu.gpc_mask as a
mask of physical ids.
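A minimal sketch of the new indexing, assuming a standalone mask array and a
helper name (get_tpc_mask_for_physical_gpc) that are illustrative only, not
the real nvgpu_gr_config accessors:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_GPC_COUNT 8U

    /*
     * Illustrative model: gpc_tpc_mask_physical is indexed by the physical
     * GPC id, so no logical-to-physical translation is needed when a
     * physical id is already in hand.
     */
    static uint32_t gpc_tpc_mask_physical[MAX_GPC_COUNT];

    static uint32_t get_tpc_mask_for_physical_gpc(uint32_t gpc_phys_id)
    {
        if (gpc_phys_id >= MAX_GPC_COUNT)
            return 0U;
        /* Direct lookup by physical id; previously the caller had to
         * convert a logical id to a physical one first. */
        return gpc_tpc_mask_physical[gpc_phys_id];
    }

    int main(void)
    {
        gpc_tpc_mask_physical[2] = 0xfU; /* e.g. 4 TPCs on physical GPC 2 */
        printf("tpc mask for physical gpc 2: 0x%x\n",
               (unsigned)get_tpc_mask_for_physical_gpc(2U));
        return 0;
    }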
Bug 200712091
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I0e066df76e07203ff4a5be5bfff2cef8566b425d
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2648831
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
This patch updates the interaction between the VAB
packet polling code and the VAB_ERROR MMU fault handling
code. A shared atomic flag is used to determine if a
VAB_ERROR MMU fault has happened while polling, which results
in polling being terminated immediately instead of waiting for
a timeout. This allows testing VAB_ERROR MMU fault handling in
environments where a timeout may never happen or may take very
long to happen.
The sequence for this to work is the following:
1) before requesting a VAB dump, which may trigger a fault,
the atomic flag is atomically reset to 0.
2) polling eventually starts and atomically checks the flag
in the loop. If the flag is set, polling exits because the VAB
result will never become available.
3) If a VAB_ERROR MMU fault is raised, this sets the flag to 1
atomically.
Note that while there could be a race in this sequence if the
VAB_ERROR MMU fault handling is somehow delayed, the chance is
extremely slim because:
1) the race could only happen if the VAB dump code is re-entered
while the earlier VAB_ERROR MMU fault is still pending.
2) the polling code has a large timeout
3) re-entering means a new ioctl/devctl
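A self-contained model of the flag protocol using plain C11 atomics; the real
code uses nvgpu's atomic wrappers and register polling, so the names below
(vab_error_flag, vab_request_dump, vab_poll_for_result) are illustrative only:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int vab_error_flag;

    /* Step 1: reset the flag before requesting a VAB dump. */
    static void vab_request_dump(void)
    {
        atomic_store(&vab_error_flag, 0);
        /* ... program the dump, which may raise a VAB_ERROR MMU fault ... */
    }

    /* Step 3: the fault handler sets the flag. */
    static void vab_error_fault_isr(void)
    {
        atomic_store(&vab_error_flag, 1);
    }

    /* Step 2: polling checks the flag each iteration and bails out early. */
    static int vab_poll_for_result(int max_iters)
    {
        for (int i = 0; i < max_iters; i++) {
            if (atomic_load(&vab_error_flag) != 0)
                return -1; /* fault seen: the result will never arrive */
            /* ... check the valid bit; return 0 once it is set ... */
        }
        return -1; /* plain timeout */
    }

    int main(void)
    {
        vab_request_dump();
        vab_error_fault_isr(); /* simulate the fault racing the poll */
        printf("poll result: %d\n", vab_poll_for_result(1000));
        return 0;
    }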
Bug 3425981
Change-Id: I422b15b581b0c3417abd4c66fbcdde9a0ff8cd9b
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2664103
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
In DRIVE 6.0, NvGPU is allowed to report only 32-bit metadata to
Safety_Services. So, there is no need to have distinct APIs for
reporting errors from units like GR, MM, and FIFO to the SDL unit.
All these error reporting APIs will be replaced with a single API.
To meet this objective, this patch makes the following changes:
- Replaces nvgpu_report_*_err with nvgpu_report_err_to_sdl.
- Removes the reporting of error messages.
- Replaces nvgpu_log() with nvgpu_err(), for error reporting.
- Removes error reporting to Safety_Services from nvgpu_report_*_err.
However, nvgpu_report_*_err APIs and their related files are not
removed. During the creation of nvgpu-mon, they will be moved under
nvgpu-rm, in debug builds.
Note:
- There will be a follow-up patch to fix error IDs.
- As discussed in https://nvbugs/3491596 (comment #12), the high
level expectation is to report only errors.
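A hedged sketch of what a call site looks like after the consolidation;
nvgpu_report_err_to_sdl is named in this patch but its exact signature is
not, so the prototype and the error id below are assumptions for
illustration:

    #include <stdint.h>
    #include <stdio.h>

    struct gk20a; /* opaque driver context, stubbed for this sketch */

    /* Assumed shape: one API, one 32-bit error id, no message payload. */
    static void nvgpu_report_err_to_sdl(struct gk20a *g, uint32_t err_id)
    {
        (void)g;
        /* The real code forwards the 32-bit id to Safety_Services. */
        printf("reporting error id 0x%x to SDL\n", (unsigned)err_id);
    }

    int main(void)
    {
        /*
         * Before: per-unit helpers (nvgpu_report_*_err) carried
         * unit-specific arguments and error messages.
         * After: every unit calls the single API with just an error id.
         */
        nvgpu_report_err_to_sdl(NULL, 0x1234U); /* hypothetical error id */
        return 0;
    }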
JIRA NVGPU-7450
Change-Id: I428f2a9043086462754ac36a15edf6094985316f
Signed-off-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2662590
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- implemented device info cmd to send device info to the gsp for
runlist submission. Currently the GSP scheduler supports only the GR
engine '0' instance.
- implemented runlist submit cmd. GSP firmware will submit the
corresponding runlist by writing into submit registers. This
command is a direct replacement of the hw_submit ga10b HAL for the GR engine.
NVGPU-6790
Signed-off-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Change-Id: I5dc573a6ad698fe20b49a3466a8e50b94cae74df
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2608923
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Separated the gsp unit into three units:
- GSP unit, which holds the core functionality of the GSP RISCV core:
bootstrap, interrupts, etc.
- GSP Scheduler to hold the cmd/msg management, IPC, etc.
- GSP Test to hold stress test ucode specific support.
NVGPU-7492
Signed-off-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Change-Id: I12340dc776d610502f28c8574843afc7481c0871
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2660619
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the VAB implementation uses a fixed number of access bits. This
value can be computed using the fb_mmu_vidmem_access_bit_size_f() value.
- Modify VAB implementation to compute number of access bits.
- Modify nvgpu_vab structure to hold the VAB entry size corresponding to
the number of access bits.
- The information given by the nvgpu_vab structure is more related to the GPU
than to the nvgpu_mm structure. Move the nvgpu_vab struct element to the gk20a struct.
- Add fb.set_vab_buffer_address to update vab buffer address in hw
registers.
- Rename gr.vab_init HAL to gr.vab_reserve to avoid any confusion about
when this HAL should be used.
- Replace gr.vab_release and gr.vab_recover with the gr.vab_configure HAL.
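A small sketch of the intent behind the first two items above: derive the
number of access bits and the VAB entry size from the queried field instead
of hard-coding them. The helper name and the packing math are assumptions,
not the actual register encoding:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Stand-in for reading fb_mmu_vidmem_access_bit_size_f(); the value and
     * the derivation below are purely illustrative.
     */
    static uint32_t vidmem_access_bit_size(void)
    {
        return 7U; /* demo value only */
    }

    int main(void)
    {
        uint32_t num_access_bits = 1U << vidmem_access_bit_size();
        uint32_t entry_size_bytes = num_access_bits / 8U;

        printf("access bits: %u, entry size: %u bytes\n",
               (unsigned)num_access_bits, (unsigned)entry_size_bytes);
        return 0;
    }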
Bug 3465734
Change-Id: I1b67bfa9be7728be5bda978c6bb87b196d55ab65
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2659467
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Martin Radev <mradev@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
At present, for each resume cycle the driver sends the
"nvgpu_cbc_op_clear" command to the L2 cache controller. This causes the
contents of the compression bit backing store to be cleared, and results
in corrupting the metadata for all the compressible surfaces already allocated.
Fix this by updating the cbc.init function to be aware of the resume state and
not clear the compression bit backing store; instead, issue the
"nvgpu_cbc_op_invalidate" command. This should leave the backing store in a
consistent state across suspend/resume cycles.
The updated cbc.init HAL for gv11b is reusable across multiple chips, hence
remove the unnecessary chip specific cbc.init HALs.
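A sketch of the resume-aware decision; the boolean parameter and the stubbed
op dispatch are assumptions, while the op names mirror the ones mentioned
above:

    #include <stdbool.h>
    #include <stdio.h>

    enum cbc_op { CBC_OP_CLEAR, CBC_OP_INVALIDATE };

    static void cbc_ctrl(enum cbc_op op)
    {
        printf("issuing %s\n",
               op == CBC_OP_CLEAR ? "nvgpu_cbc_op_clear"
                                  : "nvgpu_cbc_op_invalidate");
    }

    /*
     * Illustrative cbc.init: clear the backing store only on cold boot; on
     * resume, invalidate so already-allocated compressible surfaces keep
     * consistent metadata.
     */
    static void cbc_init(bool is_resume)
    {
        if (is_resume)
            cbc_ctrl(CBC_OP_INVALIDATE);
        else
            cbc_ctrl(CBC_OP_CLEAR);
    }

    int main(void)
    {
        cbc_init(false); /* cold boot */
        cbc_init(true);  /* resume */
        return 0;
    }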
Bug 3483688
Change-Id: I2de848a083436bc085ee98e438874214cb61261f
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2660075
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In DRIVE 6.0, NvGPU needs to support error reporting in QNX-Safety,
QNX-Standard, and Linux. To support error reporting in all these
platform variants, SDL unit will be moved from QNX to common code.
As part of this refactoring activity, this patch removes the ops assignment
for error reporting. It also removes the API calls used to take a
time-stamp for the stall interrupt thread. These time-stamp APIs will be
brought back later, if required to support periodic diagnostics.
JIRA NVGPU-7353
Change-Id: I38536019dc7165e6a97674863b37d009854af948
Signed-off-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2655958
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The gops.gr.init.set_default_compute_regs() HAL configures compute-specific
settings in the safety build, which eliminates the need to use SW methods.
Define this HAL for the Orin safety build and configure the sked check related
registers from the HAL. Other settings done on gv11b are no longer
applicable for ga10b safety.
Bug 3456240
Change-Id: Ic125cdf414a5402511949015e3424b8cb2dab1e0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2646284
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch performs the following improvements for VAB:
1) It avoids an infinite loop when collecting VAB information.
Previously, nvgpu incorrectly assumed that the valid bit would
be eventually set for the checker when polling. It may not be set
if a VAB-related fault has occurred.
2) It handles the VAB_ERROR mmu fault, which may be raised for various
   reasons: invalid vab buffer address, tracking in protected mode,
   etc. The recovery sequence is to set the vab buffer size to 0 and
   then to the original size. This clears the VAB_ERROR bit. After
   resetting, the old register values are set again in the recovery
   code sequence.
3) Use the correct number of VAB buffers. There is only one VAB buffer on
   ga10b, not two.
4) Simplify the logic.
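A compact model of the recovery described in 2); the register write is
stubbed, and in the real sequence the remaining saved register values are
also restored:

    #include <stdint.h>
    #include <stdio.h>

    /* Stubbed register write for the sketch. */
    static void write_vab_buffer_size(uint32_t size)
    {
        printf("vab buffer size <- %u\n", (unsigned)size);
    }

    /*
     * Illustrative recovery for a VAB_ERROR fault: writing size 0 clears the
     * error bit, then the original size (and, in the real code, the other
     * saved register values) is restored.
     */
    static void vab_recover(uint32_t original_size)
    {
        write_vab_buffer_size(0U);
        write_vab_buffer_size(original_size);
    }

    int main(void)
    {
        vab_recover(64U); /* hypothetical original size */
        return 0;
    }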
Bug 3374805
Bug 3465734
Bug 3473147
Change-Id: I716f460ef37cb848ddc56a64c6f83024c4bb9811
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2621290
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The flag pmu->pg->golden_image_initialized is set to
true during initial GPU context creation and is not
cleared while the GPU goes into pm_suspend (during railgate).
Hence, when the GPU resumes after un-railgate, it retains
the previous value, which can cause ELPG to kick in immediately.
Due to this, when ELPG and railgating are enabled, IDLE_SNAP
is seen for read access of the gr_gpc0_tpc0_sm_arch_r register.
To resolve this, if the golden image is ready, set
pmu->pg->golden_image_initialized to the suspend state during railgate
to delay the early enabling of ELPG. Add a new
pmu_init_golden_img_state HAL in the NVGPU_INIT_TABLE_ENTRY.
This is called after all the GR access is done and the GPU resumes
completely after un-railgate. This HAL then checks if the
golden_image_initialized flag is in the suspend state; if so, it sets it
to the ready state and then re-enables ELPG.
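A minimal model of the intended state handling, assuming a three-state flag
(not ready / suspended / ready); the state and helper names are illustrative,
not the actual nvgpu definitions:

    #include <stdbool.h>
    #include <stdio.h>

    enum golden_img_state { GOLDEN_IMG_NOT_READY, GOLDEN_IMG_SUSPENDED,
                            GOLDEN_IMG_READY };

    static enum golden_img_state golden_image_state;
    static bool elpg_enabled;

    /* On railgate: park a ready golden image in the suspend state so ELPG
     * does not kick in immediately on resume. */
    static void on_railgate(void)
    {
        if (golden_image_state == GOLDEN_IMG_READY)
            golden_image_state = GOLDEN_IMG_SUSPENDED;
        elpg_enabled = false;
    }

    /* Late in the resume init table (after all GR access): restore the
     * ready state and only then re-enable ELPG. */
    static void pmu_init_golden_img_state(void)
    {
        if (golden_image_state == GOLDEN_IMG_SUSPENDED) {
            golden_image_state = GOLDEN_IMG_READY;
            elpg_enabled = true;
        }
    }

    int main(void)
    {
        golden_image_state = GOLDEN_IMG_READY; /* after first ctx creation */
        on_railgate();
        pmu_init_golden_img_state();
        printf("elpg enabled: %d\n", elpg_enabled);
        return 0;
    }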
Bug 3431798
Change-Id: I1fee83e66e09b6b78d385bbe60529d0724f79e79
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2639188
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
With the existing implementation, ELPG was disabled and
enabled once for handling the stall isr, and then ELPG was
disabled and enabled again for writing to the gr retrigger register.
This increased the number of ELPG cycles and degraded the perf of various
graphics tests with ELPG enabled.
This change now disables ELPG, then handles the stall isr and
the write to the gr retrigger register, and then enables ELPG.
Thus, the number of ELPG cycles is reduced.
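A sketch of the before/after ordering; elpg_disable/enable and the two
handlers are stand-ins for the real PG and GR interrupt paths:

    #include <stdio.h>

    static void elpg_disable(void) { printf("elpg disable\n"); }
    static void elpg_enable(void)  { printf("elpg enable\n"); }
    static void handle_stall_isr(void) { printf("handle stall isr\n"); }
    static void write_gr_retrigger(void) { printf("write gr retrigger\n"); }

    /* Before: two disable/enable pairs, i.e. two extra ELPG cycles. */
    static void stall_path_old(void)
    {
        elpg_disable();
        handle_stall_isr();
        elpg_enable();

        elpg_disable();
        write_gr_retrigger();
        elpg_enable();
    }

    /* After: one disable/enable pair wraps both steps. */
    static void stall_path_new(void)
    {
        elpg_disable();
        handle_stall_isr();
        write_gr_retrigger();
        elpg_enable();
    }

    int main(void)
    {
        stall_path_old();
        stall_path_new();
        return 0;
    }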
Bug 3451615
Change-Id: Iadac0c7b01eb711878280cd1503ba0f26000937c
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2638175
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Most of the Orin chip specific code is compiled out of safety build
with CONFIG_NVGPU_NON_FUSA and CONFIG_NVGPU_HAL_NON_FUSA. Remove the
config protection from Orin/GA10B specific code. Currently all code
is enabled. Code not required in safety will be compiled out later
in a separate activity.
Other noteworthy changes in this patch related to safety build:
- In ga10b_ce_request_idle(), add a log print to dump num_pce so that
compiler does not complain about unused variable num_pce.
- In ga10b_fifo_ctxsw_timeout_isr(), protect variables active_eng_id and
recover under CONFIG_NVGPU_KERNEL_MODE_SUBMIT to fix compilation
errors of unused variables.
- Compile out HAL gops.pbdma.force_ce_split() from safety since this HAL
is GA100 specific and not required for GA10B.
- Compile out gr_ga100_process_context_buffer_priv_segment() with
CONFIG_NVGPU_DEBUGGER.
- Compile out VAB support with CONFIG_NVGPU_HAL_NON_FUSA.
- In ga10b_gr_intr_handle_sw_method(), protect left_shift_by_2 variable
with appropriate configs to fix unused variable compilation error.
- In ga10b_intr_isr_stall_host2soc_3(), compile ELPG function calls
with CONFIG_NVGPU_POWER_PG.
- In ga10b_pmu_handle_swgen1_irq(), move whole function body under
CONFIG_NVGPU_FALCON_DEBUG to fix unused variable compilation errors.
- Add the below TU104 specific files to the safety build since some of the code
in those files is required for GA10B. Unnecessary code will be
compiled out later on.
hal/gr/init/gr_init_tu104.c
hal/class/class_tu104.c
hal/mc/mc_tu104.c
hal/fifo/usermode_tu104.c
hal/gr/falcon/gr_falcon_tu104.c
- Compile out GA10B specific debugger/profiler related files from
safety build.
- Disable CONFIG_NVGPU_FALCON_DEBUG from the safety debug build temporarily
  to work around compilation errors seen when this config is kept
  enabled. The config will be re-enabled in the safety debug build later.
Jira NVGPU-7276
Change-Id: I35f2489830ac083d52504ca411c3f1d96e72fc48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2627048
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The below sequence leads to the condition where nvgpu logs an error
about a spurious stall intr occurrence:
1. PMU interrupt is set after ELPG command is sent to PMU.
2. Stalling irqhandler sees the interrupt and schedules
the stalling thread.
3. PMU isr gets executed from nvgpu_pmu_wait_fw_ack_status
just before the stalling irq thread is run. Due to this,
top level interrupt gets cleared.
4. When the stalling irq thread gets executed, it sees no
interrupt and logs it as a spurious interrupt.
This condition is not actually a "spurious interrupt",
hence change the error log to the gpu_dbg_intr log type and
rephrase it.
Bug 200780211
Change-Id: Idab62f61007012f7022a836473562795c24821ef
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2628275
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
On certain platforms, not all copy engine instances are usable. The user
shouldn't submit any work to these engines. To enforce this, remove
these engines from the active/host_engines lists; this ensures that these
engines do not get advertised to userspace. To accomplish this,
introduce the following:
- nvgpu_engine_remove_one_dev: This function removes the specified device
entry from the following device lists: fifo->host_engines, fifo->active_engines,
runlist->rl_dev_list, runlist->eng_bitmask.
Replace the iteration over LCE device type entries using
nvgpu_device_for_each(g, dev, NVGPU_DEVTYPE_LCE); along with this, introduce
the macro nvgpu_device_for_each_safe.
Introduce gpu_dbg_ce flag for CE debugging.
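A generic model of the safe-iteration pattern being introduced: removing
entries while iterating requires caching the next pointer, which is what a
*_for_each_safe style macro provides. The list below is a plain singly linked
list, not nvgpu's device list:

    #include <stdio.h>
    #include <stdlib.h>

    struct dev { int usable; struct dev *next; };

    /* Safe iteration: 'n' caches the next node so 'd' may be unlinked/freed. */
    #define dev_for_each_safe(d, n, head) \
        for ((d) = (head), (n) = (d) ? (d)->next : NULL; (d) != NULL; \
             (d) = (n), (n) = (d) ? (d)->next : NULL)

    static void remove_unusable(struct dev **head)
    {
        struct dev *d, *n, **link = head;

        dev_for_each_safe(d, n, *head) {
            if (!d->usable) {
                *link = d->next; /* unlink so it is never advertised */
                free(d);
            } else {
                link = &d->next;
            }
        }
    }

    int main(void)
    {
        struct dev *head = NULL;

        for (int i = 0; i < 4; i++) {
            struct dev *d = calloc(1, sizeof(*d));
            d->usable = (i != 2); /* pretend instance 2 is not usable */
            d->next = head;
            head = d;
        }
        remove_unusable(&head);
        for (struct dev *d = head; d != NULL; d = d->next)
            printf("kept a usable dev\n");
        while (head != NULL) {
            struct dev *d = head;
            head = head->next;
            free(d);
        }
        return 0;
    }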
Bug 3370462
Change-Id: I2e21f18363c6e53630d129da241c8fece106cd33
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2616711
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The following error message is printed even when there are no fecs
ecc errors:
nvgpu: 17000000.ga10x gv11b_gr_intr_handle_fecs_ecc_error:114
[ERR] error count corrected: 0, uncorrected 0
To avoid confusion, print error messages only when fecs errors
are reported.
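The fix reduces to a guard around the print; a minimal sketch with fprintf
standing in for the driver's error print:

    #include <stdio.h>

    static void report_fecs_ecc(unsigned int corrected, unsigned int uncorrected)
    {
        /* Only log when an error was actually counted. */
        if (corrected == 0U && uncorrected == 0U)
            return;
        fprintf(stderr, "error count corrected: %u, uncorrected %u\n",
                corrected, uncorrected);
    }

    int main(void)
    {
        report_fecs_ecc(0U, 0U); /* silent now */
        report_fecs_ecc(1U, 0U); /* still reported */
        return 0;
    }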
Bug 3417834
Change-Id: I96317555b11e1976f33add4b1dc8d84c936c26fb
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2625723
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Introduce the HAL function gops.mssnvlink.get_links. This function retrieves
the number of nvlinks supported by the chip along with their base
addresses.
Update ga10b_mssnvlink_init_soc_credits to call mssnvlink.get_links.
Jira NVGPU-6641
Change-Id: I4ff857925f126bf41dc83eebc5723403244f66b0
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618368
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make ga10b_init_nvlink_soc_credits OS agnostic by replacing OS
specific functions with the corresponding nvgpu wrappers. This function is now
assigned to the gops.mssnvlink.init_soc_credits HAL.
Introduce the nvgpu wrappers nvgpu_io_map/unmap to map/unmap a specified
physical address range.
Jira NVGPU-6641
Change-Id: I337bc75b8ec36552fe471bf5e42f62c19f67ed4a
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618237
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Earlier, buffer metadata support was made dependent on compression.
However, that is not required.
Update the enabled flag NVGPU_SUPPORT_BUFFER_METADATA setup for the
various HALs. Enable it for all from the Linux characteristics init.
Update the REGISTER_BUFFER and GET_BUFFER_INFO ioctls to segregate
the compile/runtime compression functionality.
If compression is disabled, return an error in case comptags are
required; otherwise don't fail the REGISTER_BUFFER ioctl.
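A sketch of the resulting REGISTER_BUFFER decision, with hypothetical boolean
arguments standing in for the ioctl flags and the compile/runtime compression
checks:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Illustrative policy: buffer metadata is always supported; only the
     * comptags portion depends on compression being available.
     */
    static int register_buffer(bool compression_enabled, bool comptags_required)
    {
        if (comptags_required && !compression_enabled)
            return -EINVAL; /* caller asked for what we cannot provide */

        /* ... register metadata (and comptags if applicable) ... */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", register_buffer(false, false)); /* ok, metadata only */
        printf("%d\n", register_buffer(false, true));  /* fails */
        return 0;
    }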
Bug 200767700
Change-Id: I3850ccc879f180c97b830fb3d652c094b9d28a5b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2614378
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Start transitioning from the assumption of a single runlist buffer to the
domain based approach, where a TSG is a participant of a scheduling
domain that in turn owns a runlist buffer used for hardware scheduling.
Concretely, move the concept of a runlist domain up to the users of the
runlist code. Modifications to a runlist need to specify which domain is
modified.
There is still only the default domain that is created at boot.
Jira NVGPU-6425
Change-Id: Id9a29cff35c94e0d7e195db382d643e16025282d
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2621213
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Move the active_channels and active_tsgs bitmaps from struct
nvgpu_runlist to struct nvgpu_runlist_domain. A TSG and its channels are
currently active as part of a runlist; in the future, a runlist may be
switched between multiple domains that are each a collection of TSGs.
The changes are still internal to the runlist code. Users of runlists
need no modifications.
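A hedged sketch of the data layout after the move: the per-domain struct
carries the active bitmaps and the runlist points at the currently selected
domain. The type and field names below loosely follow the commit text and are
not the actual nvgpu structs:

    #include <stdint.h>
    #include <stdio.h>

    struct runlist_domain_sketch {
        /* Active TSG/channel tracking now lives with the domain ... */
        uint64_t active_tsgs;
        uint64_t active_channels;
    };

    struct runlist_sketch {
        /* ... and the runlist points at the domain currently scheduled. */
        struct runlist_domain_sketch *domain;
    };

    int main(void)
    {
        struct runlist_domain_sketch default_domain = {0};
        struct runlist_sketch rl = { .domain = &default_domain };

        rl.domain->active_tsgs |= 1ULL << 3; /* mark TSG 3 active */
        printf("active_tsgs: 0x%llx\n",
               (unsigned long long)rl.domain->active_tsgs);
        return 0;
    }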
Jira NVGPU-6425
Change-Id: I2d0e98e97f04b9716bc3f4890cf881735d0ab664
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618387
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The current runlist code assumes a single runlist buffer to hold all TSG
and channel entries. Create separate RL domain and domain memory types
to hold data that is related to only a scheduling domain and not
directly to the runlist hardware; in the future, more than one domain
may exist and one of them is enabled at a time.
The domain is used only internally by the runlist code at this point and
is functionally equivalent to the current runlist memory that houses the
round robin entries.
The double buffering is still kept, although more domains might benefit
from some cleverness. Although any number of created domains may be
edited at runtime, only one runlist memory is accessed by the hardware at
a time. To spare some contiguous memory, this should be considered an
opportunity for optimization in the future.
Jira NVGPU-6425
Change-Id: Id99c55f058ad56daa48b732240f05b3195debfb1
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618386
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>