Bugs in the current version are listed below:
1. Functions alloc() and alloc_pte() allocate memory for len=0.
2. Functions alloc() and alloc_pte() do not unlock the nvgpu_allocator if
pte_size is invalid.
3. Functions alloc_pte() and alloc_fixed() set the alloc_made flag
unconditionally.
4. Function release_carveout() tries to acquire the nvgpu_allocator lock
twice, causing an unresponsive state.
5. If buddy allocation fails in balloc_do_alloc() or
balloc_do_alloc_fixed(), previously allocated buddies are not
merged. This causes a seg fault in ops->fini().
6. With gva_space enabled and base=0, the base updated by the buddy
allocator is not checked for pde alignment.
7. In balloc_do_alloc_fixed(), align_order computed using __fls()
results in one order higher than requested.
8. Initializing the buddy allocator with size=0 initializes a very large
memory range and will trigger a seg fault with the changes in this patch.
Size is set to 1G so that further execution is successful.
This patch fixes the bugs listed above and makes the following updates:
1. With gva_space enabled, BALLOC_PTE_SIZE_ANY is treated as
BALLOC_PTE_SIZE_SMALL, which allows alloc() to be used.
2. GPU_BALLOC_MAX_ORDER is changed to 63U. A condition is added to check that
max_order is never greater than GPU_BALLOC_MAX_ORDER.
3. BUG() changed to nvgpu_do_assert().
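For illustration, a minimal standalone sketch of the locking pattern behind
fixes 1, 2 and 4 follows. This is not the nvgpu buddy allocator code: a plain
pthread mutex stands in for the nvgpu_allocator lock and all names here are
placeholders.

/* Sketch only: a pthread mutex stands in for the nvgpu_allocator lock. */
#include <pthread.h>
#include <stdint.h>

struct fake_allocator {
        pthread_mutex_t lock;
        /* ... buddy allocator state would live here ... */
};

static int pte_size_is_valid(uint32_t pte_size)
{
        return pte_size <= 2U; /* placeholder validity check */
}

static uint64_t fake_alloc_pte(struct fake_allocator *a, uint64_t len,
                               uint32_t pte_size)
{
        uint64_t addr = 0ULL;

        if (len == 0ULL) {      /* fix 1: reject zero-length requests */
                return 0ULL;
        }

        pthread_mutex_lock(&a->lock);
        if (!pte_size_is_valid(pte_size)) {
                /* fix 2: do not leave the allocator locked on the error path */
                pthread_mutex_unlock(&a->lock);
                return 0ULL;
        }
        /* ... perform the buddy allocation and set addr ... */
        pthread_mutex_unlock(&a->lock);
        return addr;
}

/* fix 4: take the lock exactly once; helpers called from here must not
 * try to re-acquire it, otherwise the allocator hangs. */
static void fake_release_carveout(struct fake_allocator *a)
{
        pthread_mutex_lock(&a->lock);
        /* ... release the carveout without re-locking ... */
        pthread_mutex_unlock(&a->lock);
}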
JIRA NVGPU-3005
Change-Id: I20c28e20aa3404976d67f7884b4f8cbd5c908ba7
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075646
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Created a PMU super surface unit and moved structs/functions related to
the super surface under that unit, separated super surface structs into
private/public based on their usage/access, and made changes to super
surface dependent files to reflect the super surface changes
for the unit.
JIRA NVGPU-3045
Change-Id: I6ac426052eb60f00b432d9533460aa0afd939fe3
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088405
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
lsfm - LS falcon manager
Created the lsfm unit under common/pmu/lsfm, moved functions and
variables related to lsfm functionality under the lsfm unit,
created separate files within the lsfm unit based on init (which
does chip specific s/w init), and separated private/public
functionality.
JIRA NVGPU-3021
Change-Id: Iad4a4e5533122fb2387a4980581a0d7bcdb37d67
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2080546
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the following functions from gr_gk20a.c to common.gr.init:
gk20a_init_gr_support ---> nvgpu_gr_init_support
gk20a_gr_reset ---> nvgpu_gr_reset
gk20a_enable_gr_hw ---> nvgpu_gr_enable_hw
Move all static functions called from those functions into gr.c in
common.gr.init.
JIRA NVGPU-1885
Change-Id: I695235f97738654e7c686a345d3f84d1daaacd72
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2082363
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch fixes the two issues below.
1. Currently clk arb exit is called after the GPU registers are released.
This causes a crash when the clk arb WQ accesses a GPU HW register for status.
The ideal way is to exit the clk_arb, which removes the WQ from running,
before locking out the registers.
2. Check if the dGPU is dying during processing of PMU commands.
This prevents a race condition when the PMU is waiting for a response and the
device is shut down.
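A rough sketch of the intended ordering and shutdown check follows; the
function and flag names (clk_arb_exit(), lockout_registers(), driver_is_dying)
are placeholders, not the actual nvgpu symbols.

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool driver_is_dying; /* placeholder "device going away" flag */

static void clk_arb_exit(void)      { /* stop and flush the clk arb WQ */ }
static void lockout_registers(void) { /* revoke GPU register access */ }
static bool response_received(void) { return false; /* placeholder */ }

/* Issue 1: tear down the clk arb (and its workqueue) before the GPU
 * registers are locked out, so the worker can never touch dead HW. */
static void shutdown_sequence(void)
{
        atomic_store(&driver_is_dying, true);
        clk_arb_exit();
        lockout_registers();
}

/* Issue 2: bail out of the PMU command wait if the device is dying,
 * instead of waiting forever for a response that will never come. */
static int wait_for_pmu_response(void)
{
        while (!response_received()) {
                if (atomic_load(&driver_is_dying)) {
                        return -1; /* e.g. -ENODEV in the real driver */
                }
                /* ... sleep/poll ... */
        }
        return 0;
}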
Bug 200488054
Change-Id: I812b07af7db4494d5ea2ed6197742ceb23d30a4b
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081916
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Enable the reporting of PFIFO related errors, such as engine syncpoint error,
memop timeout error and lb error, to the 3LSS framework.
- Remove the reporting of bind_error from gk20a since we already report it
from the gv11b related fifo hal file.
Jira NVGPU-3087
Change-Id: Ic002be3a12a049010165870b861cdfb13a7f33d8
Signed-off-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088579
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new hal handle_exceptions in hal.gr.intr.
This hal handles all the gr exceptions which involve register reads and
writes. To keep the code simple, gpc_exception is handled outside this hal,
as the gpc exception involves common intr function calls and variables
not needed by other exceptions.
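A schematic sketch of this split is shown below; the ops-table layout and
names are illustrative only, not the real g->ops definition.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative ops table, not the real g->ops.gr.intr layout. */
struct gr_intr_ops_sketch {
        void (*handle_exceptions)(uint32_t exception, bool *post_event);
};

static void handle_gpc_exception_common(bool *post_event)
{
        (void)post_event; /* would use common intr helpers/state here */
}

/* Common interrupt path: register-heavy exceptions go through the new
 * handle_exceptions hal, while gpc_exception stays in common code. */
static void gr_exception_isr_sketch(const struct gr_intr_ops_sketch *ops,
                                    uint32_t exception, bool has_gpc_exception)
{
        bool post_event = false;

        ops->handle_exceptions(exception, &post_event);
        if (has_gpc_exception) {
                handle_gpc_exception_common(&post_event);
        }
}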
JIRA NVGPU-3016
Change-Id: Ie1fb60e46419ee20a10ac9cfb4874cb6eb3739b9
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090406
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the gk20a_gr_handle_gpc_exception function from gr_gk20a.c
to gr_intr.c as nvgpu_gr_intr_handle_gpc_exception.
Move the static function gk20a_gr_handle_tpc_exception to
gr_intr.c as gr_intr_handle_tpc_exception.
JIRA NVGPU-3016
Change-Id: I42862b00d1946e029673d8f95e0262a44244a87a
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090405
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added the following HALs
- ramin.base_shift
- ramin.alloc_base
Use the above HALs in mm instead of using hw definitions.
Defined nvgpu_inst_block_ptr to
- get the inst_block address,
- shift it by base_shift,
- assert the upper 32 bits are 0,
- return the lower 32 bits.
Added missing #include for <nvgpu/mm.h>
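A minimal sketch of that pointer computation follows, with base_shift passed
in as a parameter rather than read from the new ramin.base_shift HAL; the
names are illustrative.

#include <assert.h>
#include <stdint.h>

/* Sketch: shift the inst block address and return the lower 32 bits,
 * asserting that nothing is lost in the upper half. */
static uint32_t inst_block_ptr_sketch(uint64_t inst_block_addr,
                                      uint32_t base_shift)
{
        uint64_t ptr = inst_block_addr >> base_shift;

        assert((ptr >> 32) == 0ULL); /* upper 32 bits must be 0 */

        return (uint32_t)(ptr & 0xffffffffULL);
}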
Jira NVGPU-3015
Change-Id: I558a6f4c9fbc6873a5b71f1557ea9ad8eae2778f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077840
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the following HALs
- fifo.alloc_inst
- fifo.free_inst
To channel HALs:
- channel.alloc_inst
- channel.free_inst
Moved the following fifo code:
- gk20a_fifo_alloc_inst
- gk20a_fifo_free_inst
To common channel code:
- nvgpu_channel_alloc_inst
- nvgpu_channel_free_inst
vgpu already implements
- vgpu_channel_alloc_inst
- vgpu_channel_free_inst
Jira NVGPU-3015
Change-Id: Id01cb34958281f43e3064d2754c0ab896809548d
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2089107
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The g->ops.gr.commit_inst() HAL is used to commit the gr context to the engine.
There is nothing h/w specific in the HAL implementation anymore, and the
sequence can be unified by checking support for the subcontext feature.
Remove gr_gv11b_commit_inst() and gr_gk20a_commit_inst() and unify
the sequence in the nvgpu_gr_obj_ctx_commit_inst() API in the common.gr.obj_ctx
unit. Use this API instead of the hal.
The channel subcontext is now directly allocated in gk20a_alloc_obj_ctx().
vGPU code will directly call the vGPU implementation vgpu_gr_commit_inst().
Delete the hal APIs since they are no longer needed.
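A minimal sketch of the unified sequence, assuming a feature-flag query for
subcontext support; the types and helper names below are placeholders, not
the common.gr.obj_ctx code.

#include <stdbool.h>
#include <stdint.h>

/* Placeholder channel type and helpers. */
struct chan_sketch { uint64_t inst_block_addr; };

static bool subctx_supported(void)
{
        return true; /* would query a feature flag in the real driver */
}

static void setup_subctx_header(struct chan_sketch *c, uint64_t gr_ctx_va)
{
        (void)c; (void)gr_ctx_va; /* subcontext path */
}

static void commit_va_to_inst_block(struct chan_sketch *c, uint64_t gr_ctx_va)
{
        (void)c; (void)gr_ctx_va; /* legacy path: write VA into inst block */
}

/* One common sequence instead of per-chip gv11b/gk20a variants: branch on
 * subcontext support rather than on the chip. */
static void commit_inst_sketch(struct chan_sketch *c, uint64_t gr_ctx_va)
{
        if (subctx_supported()) {
                setup_subctx_header(c, gr_ctx_va);
        } else {
                commit_va_to_inst_block(c, gr_ctx_va);
        }
}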
Jira NVGPU-1887
Change-Id: Iae1f6be4ab52e3e8628f979f477a300e65c92200
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090497
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_gr_init_fs_state is right now defined in the common.gr.gr unit.
This API also needs to be called from the common.gr.obj_ctx unit, so the
obj_ctx unit depends on the gr unit for this.
The common.gr.gr unit already depends on common.gr.obj_ctx for context
initialization, so this causes a circular dependency.
Fix this by moving the API to a new standalone unit, common.gr.fs_state,
and rename it to nvgpu_gr_fs_state_init.
Jira NVGPU-1887
Change-Id: I88ca8e1a7bc3c544459462493116f95d92b9ab01
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090496
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved fecs_host_int_enable ops from gr to gr falcon.
Created required hals in gm20b and gv11b gr falcon units.
gr_gk20a_fecs_host_int_enable -> gm20b_gr_falcon_fecs_host_int_enable
gr_gv11b_fecs_host_int_enable -> gv11b_gr_falcon_fecs_host_int_enable
JIRA NVGPU-1881
Change-Id: Ice9d5170928068b0447cc4644e6668f7ff75b8d6
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2089316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the following functionality from gr to gr falcon common:
gr_gk20a_init_ctxsw -> nvgpu_gr_falcon_init_ctxsw
gr_gk20a_init_ctx_state -> nvgpu_gr_falcon_init_ctx_state
gk20a_init_gr_bind_fecs_elpg -> nvgpu_gr_falcon_bind_fecs_elpg
Replaced code in gr_gk20a.c with calls to the corresponding gr falcon common
functions and moved all relevant code to the gr falcon unit.
Moved the following gr ops from gr to gr falcon:
int (*init_ctx_state)(struct gk20a *g);
Moved functionality from gr to relevant gr falcon hals:
gr_gk20a_init_ctx_state -> gm20b_gr_falcon_init_ctx_state
gr_gp10b_init_ctx_state -> gp10b_gr_falcon_init_ctx_state
JIRA NVGPU-1881
Change-Id: I027e1972a7747275311df99679235804dc0e16fe
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084391
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move all fecs method related code to the gr falcon unit and handle it
through the generic gr.falcon.ctrl_ctxsw hal.
The following methods are moved from gr_gk20a.c to the gr falcon unit.
Each fecs method and its corresponding new fecs method define in gr_falcon.h:
gr_fecs_method_push_adr_discover_image_size_v ->
NVGPU_GR_FALCON_METHOD_CTXSW_DISCOVER_IMAGE_SIZE
gr_fecs_method_push_adr_discover_pm_image_size_v ->
NVGPU_GR_FALCON_METHOD_CTXSW_DISCOVER_PM_IMAGE_SIZE
gr_fecs_method_push_adr_discover_reglist_image_size_v ->
NVGPU_GR_FALCON_METHOD_REGLIST_DISCOVER_IMAGE_SIZE
gr_fecs_method_push_adr_set_reglist_bind_instance_v ->
NVGPU_GR_FALCON_METHOD_REGLIST_BIND_INSTANCE
gr_fecs_method_push_adr_set_reglist_virtual_address_v ->
NVGPU_GR_FALCON_METHOD_REGLIST_SET_VIRTUAL_ADDRESS
Following fecs methods are moved from obj_ctx.c to gr falcon unit.
gr_fecs_method_push_adr_bind_pointer_v ->
NVGPU_GR_FALCON_METHOD_ADDRESS_BIND_PTR
gr_fecs_method_push_adr_wfi_golden_save_v ->
NVGPU_GR_FALCON_METHOD_GOLDEN_IMAGE_SAVE
Following fecs methods are moved from gr_gp10b.c to gr falcon unit.
gr_fecs_method_push_adr_discover_preemption_image_size_v ->
NVGPU_GR_FALCON_METHOD_PREEMPT_IMAGE_SIZE
gr_fecs_method_push_adr_configure_interrupt_completion_option_v ->
NVGPU_GR_FALCON_METHOD_CONFIGURE_CTXSW_INTR
Following fecs method is moved from zcull_gm20b.c:
gr_fecs_method_push_adr_discover_zcull_image_size_v ->
NVGPU_GR_FALCON_METHOD_CTXSW_DISCOVER_ZCULL_IMAGE_SIZE
Following fecs method is moved from fecs_trace_gp10b.c:
gr_fecs_method_push_adr_write_timestamp_record_v
-> NVGPU_GR_FALCON_METHOD_FECS_TRACE_FLUSH
Added new HAL in gr falcon for moving fecs_current_ctx_data from
gr_gk20a.c to gr_falcon_gm20b.c.
u32 (*get_fecs_current_ctx_data)(struct gk20a *g,
struct nvgpu_mem *inst_block);
Added an overlay for gm20b_gr_falcon_ctrl_ctxsw in the newly added
gr_falcon_gp10b.c for handling gp10b+ specific fecs methods:
gp10b_gr_falcon_ctrl_ctxsw
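A schematic sketch of the generic-method dispatch follows; the enum mirrors a
few of the defines listed above, but the numeric HW method values and the
function names are fake placeholders, not the gm20b/gp10b HAL code.

#include <stdint.h>

/* A few of the generic method IDs (subset, for illustration only). */
enum gr_falcon_method_sketch {
        METHOD_CTXSW_DISCOVER_IMAGE_SIZE,
        METHOD_ADDRESS_BIND_PTR,
        METHOD_GOLDEN_IMAGE_SAVE,
};

static int submit_fecs_method(uint32_t hw_method, uint32_t data)
{
        (void)hw_method; (void)data;
        return 0; /* would program the FECS method registers in the driver */
}

/* Chip-specific ctrl_ctxsw hal: translate the generic ID into the chip's
 * HW method value so common code never touches gr_fecs_method_* registers.
 * The hex values below are made up. */
static int ctrl_ctxsw_sketch(enum gr_falcon_method_sketch method, uint32_t data)
{
        switch (method) {
        case METHOD_CTXSW_DISCOVER_IMAGE_SIZE:
                return submit_fecs_method(0x10U, data);
        case METHOD_ADDRESS_BIND_PTR:
                return submit_fecs_method(0x30U, data);
        case METHOD_GOLDEN_IMAGE_SAVE:
                return submit_fecs_method(0x40U, data);
        default:
                return -1;
        }
}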
JIRA NVGPU-1881
Change-Id: I662d06f5176b29e6837d63c25e42de67505d48f5
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2087148
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The os_sched unit was recently unified with QNX and as a result some
new code was added to the POSIX build. This code works fine with the
Tmake compiler, but on x86 local builds of the POSIX code it triggers
a fmt-security warning (which is subsequently treated as an error).
Thus the build breaks.
The fix is to explicitly define a format of "%s" instead of passing
in a format from a locally defined char array.
Also fix the MISRA issues due to lack of curly braces.
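The pattern behind the fix, as a standalone example (not the os_sched code
itself):

#include <stdio.h>

int main(void)
{
        char name[32] = "worker-0"; /* stands in for the locally built string */

        /* printf(name) uses a non-literal format and trips -Wformat-security;
         * passing an explicit "%s" format avoids the warning. */
        printf("%s\n", name);
        return 0;
}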
Change-Id: Ia5bfda39e486acde22f16e338ef0d390e5b50e3c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2089081
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Sagar Kadamati <skadamati@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move handle_semaphore_pending to hal.gr.intr.
The gr_gk20a_handle_semaphore_pending function is moved from
gr_gk20a.c to common.gr.intr as nvgpu_gr_handle_semaphore_pending.
JIRA NVGPU-3016
JIRA NVGPU-1891
Change-Id: Id731bb4169de9dcfff012e401165ad5a7f43bffa
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2089173
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move handle_notify_pending hal to hal.gr.intr.
Move gk20a_gr_handle_notify_pending code from gr_gk20a.c to
common.gr.intr as the nvgpu_gr_intr_handle_notify_pending function.
JIRA NVGPU-1891
JIRA NVGPU-3016
Change-Id: Ib3284a83253b03e5708674fce683331ee20b8213
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2089172
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
pmu_ipc.c had functionality for rpc response handling and message cond
checks. This patch moves that to the msg unit and prepares cmd.h to group
together structs and functions related to PMU commands.
JIRA NVGPU-1970
Change-Id: Iec5d72d02ab3ee51963631c828b301c56af8dc48
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2079146
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU message handling unit can't be part of the command handling unit, as
that creates circular dependencies with the PMU tasks (clk, therm, etc.).
The PMU allocator unit shall encompass the DMEM allocator and other allocators
used by the PMU.
JIRA NVGPU-1970
Change-Id: I6ae3fa189d553eb9f79adf1abc753e1bb536241b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2079144
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
PMU mutexes used by FIFO and runlists are functionality independent of
PMU command and message management.
Remove the related functionality from pmu_ipc.c and prepare pmu_mutex.c.
Prepare a PMU HAL unit that contains gk20a specific PMU mutex
handling.
JIRA NVGPU-1970
Change-Id: I0204be2ef9d2c000004667af3c18dc527d7ac25f
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2079142
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
PMU command and message management is based on sharing data
through sequences. Functions for sending commands or allocating
payloads update the sequence data and acquire the lock, while those for
working on received messages read/free the sequence data and
release the lock.
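A toy sketch of this shared-sequence pattern follows; the table, lock and
field names are invented for illustration and do not match the nvgpu PMU
sequence structures.

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_SEQS_SKETCH 16U

struct seq_sketch { bool in_use; uint32_t id; void *payload; };

static pthread_mutex_t seq_lock = PTHREAD_MUTEX_INITIALIZER;
static struct seq_sketch seqs[NUM_SEQS_SKETCH];

/* Command path: grab a free sequence and record the payload under the lock. */
static struct seq_sketch *seq_acquire(void *payload)
{
        struct seq_sketch *s = NULL;

        pthread_mutex_lock(&seq_lock);
        for (uint32_t i = 0U; i < NUM_SEQS_SKETCH; i++) {
                if (!seqs[i].in_use) {
                        seqs[i].in_use = true;
                        seqs[i].id = i;
                        seqs[i].payload = payload;
                        s = &seqs[i];
                        break;
                }
        }
        pthread_mutex_unlock(&seq_lock);
        return s;
}

/* Message path: read back the payload for the matching sequence, free it. */
static void *seq_release(uint32_t id)
{
        void *payload;

        pthread_mutex_lock(&seq_lock);
        payload = seqs[id].payload;
        seqs[id].in_use = false;
        pthread_mutex_unlock(&seq_lock);
        return payload;
}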
JIRA NVGPU-1970
Change-Id: I4204dbfbf6f57b0f5a7016aed74ffea6e91ab06c
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2079141
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With minimal hw headers, a lot of unwanted hw registers are stripped.
SW needed a few updates to use the minimal headers:
1. Use the stride value to get a non-zero instance offset:
gr_pri_gpc0_tpc1_tpccs_tpc_activity_0_r() =
gr_pri_gpc0_tpc0_tpccs_tpc_activity_0_r() +
nvgpu_get_litter_value(g, GPU_LIT_TPC_IN_GPC_STRIDE);
gr_pri_be1_becs_be_activity0_r() = gr_pri_be0_becs_be_activity0_r() +
nvgpu_get_litter_value(g, GPU_LIT_ROP_STRIDE);
2. Broadcast registers should not be used for reading status; they
should be used only for broadcast register writes. Removed the
following register reads from the gm20b register dump:
NV_PGRAPH_PRI_GPCS_TPC0_TPCCS_TPC_ACTIVITY0
NV_PGRAPH_PRI_GPCS_TPC1_TPCCS_TPC_ACTIVITY0
The above optimizations are done for gm20b, gp10b and gv11b.
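The per-instance offset computation, as a standalone sketch; the base offset
and stride values below are made up and stand in for the hw header define and
nvgpu_get_litter_value(g, GPU_LIT_TPC_IN_GPC_STRIDE).

#include <stdint.h>

#define GPC0_TPC0_ACTIVITY0_SKETCH 0x00504500U /* fake register offset */
#define TPC_IN_GPC_STRIDE_SKETCH   0x00000800U /* fake stride */

/* Instance N's register = instance 0's register + N * stride, so only the
 * instance-0 define is needed from the minimal headers. */
static uint32_t tpc_activity0_r_sketch(uint32_t tpc)
{
        return GPC0_TPC0_ACTIVITY0_SKETCH + tpc * TPC_IN_GPC_STRIDE_SKETCH;
}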
JIRA NVGPU-2917
JIRA NVGPU-2918
JIRA NVGPU-2919
Change-Id: Ia8c736639f7cada0cf9f0d227dac372bdf09e55b
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088128
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- delete vgpu_is_reduced_bar1(). The current implementation maps only
the portion of BAR1 that is reserved for the guest in case of
reduced BAR1. However, this code is obsolete and the reduced BAR1
check is always false. Delete the related function vgpu_is_reduced_bar1()
and the conditional mapping.
- move the vgpu_mm_bar1_map_userd() declaration from vgpu.h
to mm_vgpu.h
- move the vgpu_gp10b_init_hal() and vgpu_gv11b_init_hal()
declarations from vgpu.h to the new header files
vgpu/gp10b/vgpu_hal_gp10b.h and vgpu/gv11b/vgpu_hal_gv11b.h
respectively.
Jira GVSCI-334
Change-Id: I11a297a0aba1afd8b0ad022169ba7f734bcd952c
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081152
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create the vgpu init unit. Move init related functions from
vgpu.c to init_vgpu.c under the common/vgpu/init path and
create a corresponding header file.
Create the vgpu init hal child unit. Move the functions
vgpu_init_hal() and vgpu_detect_chip() to a new
file, init_hal_vgpu.c, under the common/vgpu/init path and
create a corresponding header file.
Also move the os specific vgpu hal init function declaration
vgpu_init_hal_osi() to a new file,
include/nvgpu/vgpu/os_init_hal_vgpu.h, separating it from
the generic vgpu.h.
Jira GVSCI-334
Change-Id: I07290e3be5061a2349689228265c8b28ebadab88
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081153
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new tpc_exception_sm_disable hal to disable, and a new
tpc_exception_sm_enable hal to enable, the sm bit in the tpc_exception
register.
These hals are added to avoid register access in common gr code.
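A minimal sketch of the enable/disable pair, with a made-up bit position and
an in-memory variable standing in for the tpc_exception enable register:

#include <stdint.h>

#define TPC_EXCEPTION_EN_SM_SKETCH (1U << 1) /* fake bit position */

static uint32_t tpc_exception_en_reg; /* stands in for the HW register */

static void tpc_exception_sm_enable_sketch(void)
{
        tpc_exception_en_reg |= TPC_EXCEPTION_EN_SM_SKETCH;
}

static void tpc_exception_sm_disable_sketch(void)
{
        tpc_exception_en_reg &= ~TPC_EXCEPTION_EN_SM_SKETCH;
}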
JIRA NVGPU-3016
Change-Id: I21634e2cd3b2b8007081e6f7608ec2da9c74813f
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088311
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_gr_obj_ctx_alloc_golden_ctx_image() right now uses the global variable
g->gr.ctx_vars.golden_image_size to get the size of the golden image, which is
then used to initialize the local golden image.
Use the nvgpu_gr_obj_ctx_get_golden_image_size() API to get the size instead
of using the global variable.
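The accessor pattern in miniature; the struct and function names here are
placeholders, not the obj_ctx API.

#include <stddef.h>

/* Toy descriptor standing in for the obj_ctx golden-image state. */
struct golden_img_sketch { size_t size; };

/* Callers use the accessor instead of reading a global like
 * g->gr.ctx_vars.golden_image_size directly. */
static size_t golden_image_size_sketch(const struct golden_img_sketch *d)
{
        return d->size;
}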
Jira NVGPU-1887
Change-Id: I39b0cfe8f051c828e2b279c1836a259962c3d3bd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2089581
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>