Execute gr_init_setup_sw() for each GR instance.
Update all of the functions called from it to receive the nvgpu_gr
pointer explicitly.
Separate out the nvgpu_gr_zbc_init() call into gr_init_setup_sw(), and
rename gr_init_ctx_and_map_zbc() to gr_init_ctx_bufs() for clarity.
Call gr_init_ecc_init() from nvgpu_gr_init_support() since this does not
need to be executed per GR instance.
Initialize mutexes etc. in nvgpu_gr_alloc() for consistency.
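As a rough illustration of the per-instance flow described above; the
struct layout and the loop helper here are assumptions for the sketch,
not the actual nvgpu code:

    #include <stddef.h>

    struct nvgpu_gr {
        unsigned int instance_id;
        /* per-instance SW state elided */
    };

    /* receives the nvgpu_gr pointer explicitly instead of deriving it */
    static int gr_init_setup_sw(struct nvgpu_gr *gr)
    {
        (void)gr;   /* ZBC init, ctx buffer setup, etc. would go here */
        return 0;
    }

    static int gr_init_setup_sw_all(struct nvgpu_gr *instances, size_t num)
    {
        size_t i;

        for (i = 0; i < num; i++) {
            int err = gr_init_setup_sw(&instances[i]);

            if (err != 0) {
                return err;
            }
        }
        return 0;
    }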
Jira NVGPU-5648
Change-Id: I8e990e11458c05c1b53a4d6710cc2ec3545762a8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2410701
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Ctxsw state will be lost after gr reset. After gr reset in the
recovery sequence, re-initialize ctxsw state by sending the below
fecs methods:
gr_fecs_method_push_adr_discover_image_size_v()
gr_fecs_method_push_adr_discover_pm_image_size_v()
gr_fecs_method_push_adr_discover_zcull_image_size_v()
gr_fecs_method_push_adr_discover_preemption_image_size_v()
Without these methods sent to ctxsw, fecs will generate host error
interrupts indicating mismatches in the ctxsw image. The above fecs
methods need to be sent even if they were already sent during golden
context creation.
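Illustrative only: a sketch of re-sending the discover methods after
reset. The method tokens and the submit helper below are stand-ins,
not the actual FECS method interface (the real code uses the
gr_fecs_method_push_adr_*_v() values listed above):

    #include <stddef.h>

    enum fecs_discover_method {      /* stand-ins for the hw method ids */
        DISCOVER_IMAGE_SIZE,
        DISCOVER_PM_IMAGE_SIZE,
        DISCOVER_ZCULL_IMAGE_SIZE,
        DISCOVER_PREEMPTION_IMAGE_SIZE,
    };

    /* would push one method to FECS and wait for the ack */
    static int submit_fecs_method(enum fecs_discover_method method)
    {
        (void)method;
        return 0;
    }

    /* re-initialize ctxsw state after gr reset in the recovery path */
    static int gr_reinit_ctxsw_state(void)
    {
        static const enum fecs_discover_method methods[] = {
            DISCOVER_IMAGE_SIZE,
            DISCOVER_PM_IMAGE_SIZE,
            DISCOVER_ZCULL_IMAGE_SIZE,
            DISCOVER_PREEMPTION_IMAGE_SIZE,
        };
        size_t i;

        for (i = 0; i < sizeof(methods) / sizeof(methods[0]); i++) {
            int err = submit_fecs_method(methods[i]);

            if (err != 0) {
                return err;
            }
        }
        return 0;
    }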
Bug 3109773
Change-Id: I2aeb92da8fa1961903ab95ef90f47906a1bb32c4
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406685
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Expose the below two new APIs from the common.grmgr unit:
nvgpu_grmgr_get_gr_num_gpcs() - get the per-instance number of GPCs
nvgpu_grmgr_get_gr_gpc_phys_id() - get the physical GPC id for a MIG
engine local id in the corresponding instance
Execute gr_init_config() for each GR instance.
Add gr_config_init_mig_gpcs() to initialize GPC data in case MIG is
enabled. Separate out gr_config_init_gpcs() for legacy GPC data
initialization.
These functions will initialize the below data in struct nvgpu_gr_config:
max_gpc_count
gpc_count
gpc_mask
gpc_tpc_mask[gpc_count]
max_tpc_per_gpc_count
Rest of the values in struct nvgpu_gr_config are either based on above
values, or read from HW after setting GPC PRI window.
In gr_config_alloc_struct_mem(), rename total_gpc_cnt to total_tpc_cnt
since it represents the total TPC count, not the GPC count. Remove the
temp3 variable since its name gives no indication of its purpose.
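A minimal sketch of the split described above; the struct fields mirror
the list in this message, while the helper signatures and the MIG query
are assumptions:

    #include <stdbool.h>

    #define MAX_GPCS 8U   /* arbitrary bound for the sketch */

    struct nvgpu_gr_config {
        unsigned int max_gpc_count;
        unsigned int gpc_count;
        unsigned int gpc_mask;
        unsigned int gpc_tpc_mask[MAX_GPCS];
        unsigned int max_tpc_per_gpc_count;
    };

    /* legacy path: read the GPC data from floorsweeping state */
    static int gr_config_init_gpcs(struct nvgpu_gr_config *config)
    {
        (void)config;
        return 0;
    }

    /* MIG path: take the GPC data from the GR instance description */
    static int gr_config_init_mig_gpcs(struct nvgpu_gr_config *config)
    {
        (void)config;
        return 0;
    }

    static int gr_init_config(struct nvgpu_gr_config *config, bool mig_enabled)
    {
        return mig_enabled ? gr_config_init_mig_gpcs(config)
                           : gr_config_init_gpcs(config);
    }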
Jira NVGPU-5648
Change-Id: I646cac2ddc312e72b241b1b2a0e51a5cce141535
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2406390
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_gr_engine_interrupt_mask() earlier returned the mask of all GR
engine instance interrupts. During the device refactor series, this got
changed to return the interrupt mask of only the first instance.
Change this back to return the interrupt mask of all GR engine
instances, since the common.mc unit does not yet support APIs to enable
interrupts for an individual GR instance.
Update nvgpu_gr_get_syspipe_id() API to take gr_instance_id as parameter
instead of struct nvgpu_gr pointer. Definition of struct nvgpu_gr is not
available outside of common.gr unit.
Jira NVGPU-5648
Change-Id: I5320d1515eea6054150dc14706a16475bd650da7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2405409
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Rename gr_init_reset_enable_hw() to gr_init_prepare_hw() since this
function does not actually do reset, but just prepares the HW
after reset for other SW/HW initialization.
Add a new function gr_init_prepare_hw_impl() that executes per-instance
sequence to prepare GR hardware. Execute this inside
nvgpu_gr_exec_with_ret_for_each_instance().
Note that enabling GR engine interrupts in MC is still expected to
be done in one shot, hence keep that code outside of
gr_init_prepare_hw_impl().
Remove redundant calls to gops.gr.init.fifo_access() and
enable_gr_interrupts() from gr_init_setup_hw().
gr_init_prepare_hw() already does this and executes before
gr_init_setup_hw().
Jira NVGPU-5648
Change-Id: If0b7207f80c2fb00d894afebce04b06b7b61d432
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2405408
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add a new header file <nvgpu/gr/gr_instances.h> that provides the
below macros to execute various functions for GR instances:
1) nvgpu_gr_exec_for_each_instance
Execute a function for each GR instance by configuring GR remap
window for that instance. Function being executed returns void.
2) nvgpu_gr_exec_with_ret_for_each_instance
Execute a function for each GR instance by configuring GR remap
window for that instance. Function being executed returns an error.
3) nvgpu_gr_exec_for_all_instances
Execute a function for all GR instances at once. For this GR remap
window needs to be disabled temporarily.
If CONFIG_NVGPU_MIG is disabled, or if the runtime flag
NVGPU_SUPPORT_MIG is disabled, all of the above macros turn into simple
function calls that configure the single GR instance.
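To make the intent concrete, a simplified sketch of how such a macro
could look; the remap-window helpers and the instance bookkeeping below
are assumptions, not the actual <nvgpu/gr/gr_instances.h> contents:

    struct gk20a;   /* opaque for the sketch */

    /* stand-in helpers for the remap window programming */
    static unsigned int gr_num_instances(struct gk20a *g)
    {
        (void)g;
        return 1U;
    }

    static void gr_remap_window_config(struct gk20a *g, unsigned int inst)
    {
        (void)g; (void)inst;
    }

    static void gr_remap_window_reset(struct gk20a *g)
    {
        (void)g;
    }

    /*
     * Simplified model: run the call once per GR instance under that
     * instance's remap window. With MIG compiled out or disabled this
     * reduces to a plain single call on instance 0.
     */
    #define nvgpu_gr_exec_for_each_instance(g, func_call)              \
        do {                                                           \
            unsigned int inst_;                                        \
            for (inst_ = 0U; inst_ < gr_num_instances(g); inst_++) {   \
                gr_remap_window_config((g), inst_);                    \
                func_call;                                             \
                gr_remap_window_reset((g));                            \
            }                                                          \
        } while (0)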
Separate out GR engine reset code into new API gr_reset_engine() and
execute it with nvgpu_gr_exec_with_ret_for_each_instance().
PROD values need to be loaded in legacy mode, hence call
nvgpu_cg_init_gr_load_gating_prod() inside
nvgpu_gr_exec_for_all_instances().
Rename gr_init_prepare_hw() to the more appropriate
gr_reset_hw_and_load_prod().
Move the gops.gr.init.fifo_access() call to gr_init_reset_enable_hw().
Add new API nvgpu_grmgr_get_gr_syspipe_id() to query GR instance syspipe
id from common.grmgr unit. Add nvgpu_gr_get_syspipe_id() that returns
same value stored in nvgpu_gr struct.
Add cur_gr_instance field to struct nvgpu_gr to track current GR
instance being programmed under remap window.
Jira NVGPU-5648
Change-Id: I86920303427a6e6547ebf195daa37438365bb38e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2403550
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GPC reset is right now triggered from common.mc unit for NVGPU_NEXT.
Move the triggers to common code in the common.gr unit. This way it is
much cleaner to handle multiple GR instances (added in a subsequent
patch).
Hardcode the GR engine instance to 0 for now since by default there is
only one GR engine instance.
Jira NVGPU-5648
Change-Id: I3fd4d0a50db5a8c4b3decf1df881af323cea50c1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2403549
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Following changes are made in this patch.
1) Change unnamed structs within gpu_ops to named structs
with the prefix gops_*.
2) Each named struct gops_* is moved into a separate gops-specific file
under include/nvgpu/gops/
3) struct gpu_ops is moved into a separate file include/nvgpu/gpu_ops.h
and all other dependent struct gops_* are included in this header.
4) Direct references to include/nvgpu/gops are removed from files as
it's enough to include gk20a.h.
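For illustration, the shape of the refactor; the ops member below is a
made-up example, not an actual HAL entry:

    struct gk20a;   /* opaque for the sketch */

    /* Before: an anonymous struct nested directly inside gpu_ops. */
    struct gpu_ops_before {
        struct {
            int (*init_support)(struct gk20a *g);   /* example member */
        } gr;
    };

    /* After: a named struct (e.g. in include/nvgpu/gops/gr.h) ... */
    struct gops_gr {
        int (*init_support)(struct gk20a *g);       /* example member */
    };

    /* ... aggregated by gpu_ops in include/nvgpu/gpu_ops.h. */
    struct gpu_ops_after {
        struct gops_gr gr;
    };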
Change-Id: Ieb22cb853be567e3bef14f5f8a04674eebd902ea
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2398776
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
common.gr unit exports a separate API nvgpu_gr_prepare_sw to
initialize some SW pieces required for nvgpu_gr_enable_hw().
A separate API is really unnecessary since the same initialization
can be performed in nvgpu_gr_alloc().
Remove nvgpu_gr_prepare_sw() and HAL gops.gr.gr_prepare_sw().
Initialize falcon and interrupt structures in loop from
nvgpu_gr_alloc().
Move nvgpu_netlist_init_ctx_vars() from nvgpu_gr_prepare_sw() to
common init path since netlist parsing need not be done from
common.gr unit. It just needs to happen before nvgpu_gr_enable_hw().
Also, trigger nvgpu_gr_free() from gr_remove_support() instead of
OS-specific paths, and remove nvgpu_gr_free() calls from probe error
paths since nvgpu_gr_alloc() is no longer called in the probe path.
Move interrupt and falcon data structure free calls to nvgpu_gr_free().
Also remove corresponding unit testing code that tests
nvgpu_gr_prepare_sw() specifically.
Update some unit tests to initialize ecc counters and netlist.
Disable some unit tests that fail for reasons unknown.
Jira NVGPU-5648
Change-Id: I82ec8160f76530bc40e0c11a9f26ba1c8f9cf643
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2400166
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new API nvgpu_grmgr_get_num_gr_instances() that returns the
number of GR instances enumerated by the GR manager. This just returns
the number of enabled sys pipes since it is the same as the number of
GR instances.
For consistency until common.gr supports multiple GR instances
completely, add a temporary macro NVGPU_GR_NUM_INSTANCES and set it
to 1. If this macro is changed to 0 (for local MIG testing), fall back
to using nvgpu_grmgr_get_num_gr_instances() to get the enumerated
number of GR instances.
Use a for loop to initialize other variables of struct nvgpu_gr.
Remove unnecessary NULL check in nvgpu_gr_alloc() since struct gk20a
pointer can never be NULL in this path. Also remove corresponding unit
test code.
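A sketch of the fallback logic described above; the wrapper helper and
the stubbed grmgr query are assumptions for the sketch:

    struct gk20a;   /* opaque for the sketch */

    #define NVGPU_GR_NUM_INSTANCES 1U   /* temporary, forced to 1 */

    /* stub for the sketch; the real API lives in the common.grmgr unit */
    static unsigned int nvgpu_grmgr_get_num_gr_instances(struct gk20a *g)
    {
        (void)g;
        return 1U;
    }

    static unsigned int gr_get_num_instances(struct gk20a *g)
    {
        /* setting the macro to 0 falls back to the enumerated count */
        if (NVGPU_GR_NUM_INSTANCES != 0U) {
            return NVGPU_GR_NUM_INSTANCES;
        }
        return nvgpu_grmgr_get_num_gr_instances(g);
    }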
Jira NVGPU-5648
Change-Id: Id151d634a23235381229044f2a9af89e390886f2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2400151
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Delete the struct nvgpu_engine_info as it's essentially identical to
struct nvgpu_device. Duplicating data structures is not ideal as it's
terribly confusing what does what.
Update all uses of nvgpu_engine_info to use struct nvgpu_device. This
is often a fairly straight forward replacement. Couple of places though
where things got interesting:
- The enum_type that engine_info uses is defined in engines.h and
has a bit of SW abstraction - in particular the GRCE type. In the only
place this seemed to be actually relevant (the IOCTL providing device
info to userspace), the GRCE engines can be worked out by comparing
runlist IDs.
- Addition of masks based on intr_id and reset_id; those can be
computed easily enough using BIT32() but this is an area that
could be improved on.
This reaches into a lot of extraneous code that traverses the fifo
active engines list and dramatically simplifies it. Now, instead of
having to go through a table of engine IDs that point to the list of
all host engines, the active engine list is just a list of pointers to
valid engines. It's now trivial to do a for-all-active-engines type
loop. This could even be turned into a generic macro or otherwise
abstracted in the future.
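Roughly, the simplification looks like the sketch below; the fifo
container and the device fields shown are pared down for illustration
and are not the real struct layouts:

    struct nvgpu_device {     /* only the fields mentioned here */
        unsigned int engine_id;
        unsigned int runlist_id;
        unsigned int intr_id;
        unsigned int reset_id;
    };

    struct nvgpu_fifo_sketch {
        const struct nvgpu_device **active_engines;  /* valid devices only */
        unsigned int num_engines;
    };

    /* a for-all-active-engines loop: no engine-id table indirection */
    static unsigned int engines_intr_mask(const struct nvgpu_fifo_sketch *f)
    {
        unsigned int mask = 0U;
        unsigned int i;

        for (i = 0U; i < f->num_engines; i++) {
            mask |= 1U << f->active_engines[i]->intr_id;
        }
        return mask;
    }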
JIRA NVGPU-5421
Change-Id: I3a810deb55a7dd8c09836fd2dae85d3e28eb23cf
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2319895
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
struct nvgpu_gr is right now initialized during probe and from OS
specific code. To support multiple instances of graphics engine,
nvgpu needs to initialize nvgpu_gr after number of engine instances
have been enumerated in poweron path.
Hence move nvgpu_gr_alloc() to poweron path and after gr manager has
been initialized.
Some of the members of nvgpu_gr are initialized in the probe path, and
they too are in OS-specific code. Move them to common code in
nvgpu_gr_alloc().
Add field fecs_feature_override_ecc_val to struct gk20a to store the
override flag read from device tree. This flag is later copied to
nvgpu_gr in poweron path.
Update tpc_pg_mask_store() to check for g->gr being NULL before
accessing golden image pointer.
Update tpc_fs_mask_store() to return an error if g->gr is not
initialized, since this path needs the nvgpu_gr struct. Also fix the
incorrect NULL pointer check in tpc_fs_mask_store() which breaks the
write path to this sysfs node.
Jira NVGPU-5648
Change-Id: Ifa2f66f3663dc2f7c8891cb03b25e997e148ab06
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2397259
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, NVGPU_SUPPORT_FECS_CTXSW_TRACE enabled flag is set to true
when fecs_trace s/w setup is executed successfully. Sometimes,
fecs_trace is required to be disabled for debugging. This change will
help disable/enable fecs_trace feature by modifying one of the enabled
flags.
Enable NVGPU_SUPPORT_FECS_CTXSW_TRACE during chip specific hal init.
Control fecs_trace init and ctxsw dev open depending on
NVGPU_SUPPORT_FECS_CTXSW_TRACE flag status.
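In effect, the init and dev-open paths become conditional on the flag,
along the lines of this sketch; the flag id, the enabled-flag query and
the init helper below are stand-in stubs, not the real interfaces:

    #include <stdbool.h>

    struct gk20a;   /* opaque for the sketch */

    #define NVGPU_SUPPORT_FECS_CTXSW_TRACE 1   /* stand-in flag id */

    /* stand-in stub; the real query consults the per-GPU enabled flags */
    static bool nvgpu_is_enabled(struct gk20a *g, int flag)
    {
        (void)g; (void)flag;
        return true;
    }

    /* stand-in for the fecs_trace s/w setup */
    static int gr_fecs_trace_init(struct gk20a *g)
    {
        (void)g;
        return 0;
    }

    static int gr_fecs_trace_setup(struct gk20a *g)
    {
        /* both init and ctxsw dev open are gated on the same flag */
        if (!nvgpu_is_enabled(g, NVGPU_SUPPORT_FECS_CTXSW_TRACE)) {
            return 0;
        }
        return gr_fecs_trace_init(g);
    }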
JIRA NVGPU-5616
Change-Id: Id0754a5af7cd95a67a1f0ae5de36115d44e1111b
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2357501
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The driver configures the sm hww global, warp ESR report masks during poweron
as part of gops_gr.gr_init_support. However, during golden context init, these
are overwritten with default entries from sw_ctx_load list; this leaves the
report masks in a state inconsistent with the driver expectation.
The driver should configure the sm hww warp, global ESR report masks
during golden context init and not before it; hence, move
set_hww_esr_report_mask from the power-on path to golden context init.
In addition, update set_hww_esr_report_mask to do RMW, so as to retain the
values loaded from sw_ctx_load list.
Update global ESR report mask to enable all exceptions.
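The RMW change amounts to something like the following sketch; the
register accessors, offset and mask handling are placeholders, not the
actual gk20a register interface:

    typedef unsigned int u32;

    static u32 regs[16];   /* fake register file for the sketch */

    static u32 reg_read(u32 offset)
    {
        return regs[offset];
    }

    static void reg_write(u32 offset, u32 val)
    {
        regs[offset] = val;
    }

    #define SM_HWW_WARP_ESR_REPORT_MASK_REG 0U   /* placeholder offset */

    /*
     * Before: a blind write clobbered the values loaded by sw_ctx_load.
     * After: read-modify-write keeps those bits and ORs in the driver mask.
     */
    static void set_hww_warp_esr_report_mask(u32 mask)
    {
        u32 val = reg_read(SM_HWW_WARP_ESR_REPORT_MASK_REG);

        reg_write(SM_HWW_WARP_ESR_REPORT_MASK_REG, val | mask);
    }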
Bug 3029888
Bug 2997718
Change-Id: Id7ad4cff5409982143f49695c95c5e1d1c9fdec9
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2367466
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
While booting LS falcons, gr.falcon.bind_instblk gops is
used to bind WPR VA to the gr falcon. Only FECS_METHOD must be used to
bind instblks, but at this point the FECS falcon is not loaded and
running, so FECS_METHOD cannot be used to bind this instblk.
Besides that, this code is not required
for successful falcon boot and functioning of chips other
than gm20b.
JIRA NVGPU-5323
Change-Id: I148ccc77d65d5f01adbba6261369e7a292dccfc3
Signed-off-by: smadhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2369736
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the following gr gops functions:
- enable_gpc_crop_hww
- enable_gpc_zrop_hww
- handle_gpc_crop_hww
- handle_gpc_zrop_hww
- handle_gpc_rrh_hww
These gr gops will be used in nvgpu-next.
Add function: nvgpu_gr_rop_offset to compute rop pri offsets.
Jira: NVGPU-5237
Change-Id: I9e2437c1d2893238b16ec7a134543e20c81b49f7
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2335687
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, we perform obj ctx allocation for the below classes:
1. VOLTA_COMPUTE_A
2. VOLTA_DMA_COPY_A
3. VOLTA_CHANNEL_GPFIFO_A
In safety builds, we use Async CE but not GRCE.
So allocate an obj context only for VOLTA_COMPUTE_A and return success
(0) for all other valid classes, after setting the class in the channel
struct.
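Schematically, the class handling becomes the following; the class
tokens are the ones named above but their values, the channel struct
and the alloc helper are placeholders for the sketch:

    #include <errno.h>

    #define VOLTA_COMPUTE_A        0xc3c0U   /* placeholder values */
    #define VOLTA_DMA_COPY_A       0xc3b5U
    #define VOLTA_CHANNEL_GPFIFO_A 0xc36fU

    struct channel_sketch {
        unsigned int obj_class;
    };

    /* stand-in for the compute obj ctx allocation */
    static int alloc_compute_obj_ctx(struct channel_sketch *ch)
    {
        (void)ch;
        return 0;
    }

    static int setup_bind(struct channel_sketch *ch, unsigned int class_num)
    {
        ch->obj_class = class_num;   /* record the class on the channel */

        switch (class_num) {
        case VOLTA_COMPUTE_A:
            return alloc_compute_obj_ctx(ch);
        case VOLTA_DMA_COPY_A:
        case VOLTA_CHANNEL_GPFIFO_A:
            return 0;   /* valid classes, no obj ctx needed in safety */
        default:
            return -EINVAL;
        }
    }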
Jira NVGPU-4378
Change-Id: Ie99872e062cc66f9ddf699397a13df85c3d8d59e
Signed-off-by: sagar <skadamati@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2287486
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Currently, the size of the zbc index table is defined as a macro. This
macro is independent of the number of address bits in the ltc zbc index
register. Adding the below HAL will update the zbc index table size as
per the number of address bits.
Add hal to get gr_zbc_index_table_size:
u32 (*zbc_table_size)(struct gk20a *g);
ZBC index table address 0 is reserved. Logic to start zbc table index
from 1 is moved to corresponding hals.
JIRA NVGPU-4838
Change-Id: I700cadfdd1f3dc5f323055b8f44d769d6627920a
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2288479
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The below functions in the common.gr.obj_ctx subunit include
unnecessary asserts to ensure a value is not truncated when cast to
u32:
nvgpu_gr_obj_ctx_gr_ctx_alloc()
Make use of nvgpu_safe_cast_u64_to_u32() and remove the unnecessary
asserts.
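The simplification amounts to replacing the assert-then-cast pattern
with the checked cast helper, roughly as below; the helper here is a
stand-in for nvgpu_safe_cast_u64_to_u32(), not its real implementation:

    #include <assert.h>
    #include <stdint.h>

    /* stand-in for the checked cast helper: traps on truncation */
    static uint32_t safe_cast_u64_to_u32(uint64_t val)
    {
        assert(val <= UINT32_MAX);
        return (uint32_t)val;
    }

    static uint32_t ctx_size_in_u32(uint64_t size)
    {
        /* before: a hand-rolled assert followed by a plain cast;
         * after: the checked cast helper does both in one step */
        return safe_cast_u64_to_u32(size);
    }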
Jira NVGPU-4778
Change-Id: Ic06c2f4131b3bba35222f7de5441f82ecee6d83d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2277158
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To achieve permanent fault coverage, the CTAs launched by
each kernel in the mission and redundant contexts must execute on
different hardware resources. This feature proposes modifications
in the software to modify the virtual SM id to TPC mapping across
the mission and redundant contexts. The virtual SM identifier to TPC
mapping is done by nvgpu when setting up the patch context.
The recommendation for the redundant setting is to offset the
assignment by one TPC, and not by one GPC. This will ensure both GPC
and TPC diversity. The SM and Quadrant diversity will happen
naturally. For kernels with few CTAs, the diversity is guaranteed
to be 100%. In case of completely random CTA allocation,
e.g. large number of CTAs in the waiting queue, the diversity is
1 - 1/#SM, or 87.5% for GV11B, 97.9% for TU104.
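For reference, back-solving the quoted figures against the 1 - 1/#SM
formula: 1 - 1/8 = 87.5% corresponds to 8 SMs on GV11B, and
1 - 1/48 ~= 97.9% corresponds to 48 SMs on TU104.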
Added an NvGpu CFLAG "CONFIG_NVGPU_SM_DIVERSITY" to enable/disable the
SM diversity support.
This support is only enabled on the gv11b and tu104 QNX non-safety
builds.
JIRA NVGPU-4685
Change-Id: I8e3eaa72d8cf7aff97f61e4c2abd10b2afe0fe8b
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit test to cover the gops.gr.init.ecc_scrub_reg HAL
function.
The gops.gr.init.ecc_scrub_reg HAL can generate TIMEOUT errors which
are currently not returned to the caller. Update this HAL to return an
int value for error propagation.
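The error-propagation change looks roughly like this sketch; the
scrub-completion poll and retry loop are stubbed placeholders, not the
actual HAL implementation:

    #include <errno.h>
    #include <stdbool.h>

    struct gk20a;   /* opaque for the sketch */

    /* stand-in for the scrub-completion poll */
    static bool ecc_scrub_done(struct gk20a *g)
    {
        (void)g;
        return true;
    }

    /*
     * Before: a void HAL that swallowed the timeout.
     * After: return int so -ETIMEDOUT propagates to the caller.
     */
    static int ecc_scrub_reg(struct gk20a *g)
    {
        int retries = 1000;

        while (!ecc_scrub_done(g)) {
            if (--retries == 0) {
                return -ETIMEDOUT;
            }
        }
        return 0;
    }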
Jira NVGPU-4458
Change-Id: I98f4d5af2ef17cc4301951fec4d660638c8ef72c
Signed-off-by: dnibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2265456
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>