Most of the Orin chip-specific code is compiled out of the safety build
with CONFIG_NVGPU_NON_FUSA and CONFIG_NVGPU_HAL_NON_FUSA. Remove the
config protection from Orin/GA10B specific code so that, for now, all of
this code is enabled. Code not required in safety will be compiled out
later in a separate activity.
Other noteworthy changes in this patch related to the safety build:
- In ga10b_ce_request_idle(), add a log print to dump num_pce so that the
  compiler does not complain about the unused variable num_pce.
- In ga10b_fifo_ctxsw_timeout_isr(), protect the variables active_eng_id and
  recover under CONFIG_NVGPU_KERNEL_MODE_SUBMIT to fix unused-variable
  compilation errors (see the sketch after this list).
- Compile out the HAL gops.pbdma.force_ce_split() from the safety build since
  this HAL is GA100-specific and not required for GA10B.
- Compile out gr_ga100_process_context_buffer_priv_segment() with
CONFIG_NVGPU_DEBUGGER.
- Compile out VAB support with CONFIG_NVGPU_HAL_NON_FUSA.
- In ga10b_gr_intr_handle_sw_method(), protect the left_shift_by_2 variable
  with appropriate configs to fix an unused-variable compilation error.
- In ga10b_intr_isr_stall_host2soc_3(), compile ELPG function calls
with CONFIG_NVGPU_POWER_PG.
- In ga10b_pmu_handle_swgen1_irq(), move the whole function body under
  CONFIG_NVGPU_FALCON_DEBUG to fix unused-variable compilation errors.
- Add the below TU104-specific files to the safety build since some of the
  code in those files is required for GA10B. Unnecessary code will be
  compiled out later on.
hal/gr/init/gr_init_tu104.c
hal/class/class_tu104.c
hal/mc/mc_tu104.c
hal/fifo/usermode_tu104.c
hal/gr/falcon/gr_falcon_tu104.c
- Compile out GA10B-specific debugger/profiler related files from the
  safety build.
- Disable CONFIG_NVGPU_FALCON_DEBUG in the safety debug build temporarily
  to work around compilation errors seen when this config is kept enabled.
  The config will be re-enabled in the safety debug build later.
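The unused-variable fixes above follow this rough pattern (a minimal
sketch only; the surrounding code in the actual functions differs):

    #ifdef CONFIG_NVGPU_KERNEL_MODE_SUBMIT
            u32 active_eng_id = 0U;
            bool recover = false;
    #endif
            /* ... handling that is always compiled ... */
    #ifdef CONFIG_NVGPU_KERNEL_MODE_SUBMIT
            /* recovery path that actually consumes the variables */
    #endif

This keeps the declarations and their only users under the same config,
so builds without kernel-mode submit see neither.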
Jira NVGPU-7276
Change-Id: I35f2489830ac083d52504ca411c3f1d96e72fc48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2627048
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The below sequence leads to a condition where nvgpu logs an error
about a spurious stall interrupt occurrence:
1. A PMU interrupt is set after an ELPG command is sent to the PMU.
2. The stalling irq handler sees the interrupt and schedules
   the stalling thread.
3. The PMU ISR gets executed from nvgpu_pmu_wait_fw_ack_status
   just before the stalling irq thread runs. Due to this, the
   top-level interrupt gets cleared.
4. When the stalling irq thread runs, it sees no interrupt
   and logs it as a spurious interrupt.
This condition is not actually a spurious interrupt, hence change the
error log to the gpu_dbg_intr log type and rephrase it.
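The log change itself is roughly of this shape (sketch; the exact
message text differs):

    /* before: reported as an error */
    nvgpu_err(g, "spurious stall intr");

    /* after: debug-level, since the interrupt was already serviced */
    nvgpu_log(g, gpu_dbg_intr,
              "stall intr already handled before stalling thread ran");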
Bug 200780211
Change-Id: Idab62f61007012f7022a836473562795c24821ef
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2628275
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
On certain platforms, not all copy engine instances are usable. The user
shouldn't submit any work to these engines. To enforce this, remove
these engines from the active/host engine lists; this ensures that these
engines do not get advertised to userspace. To accomplish this,
introduce the following function:
- nvgpu_engine_remove_one_dev: removes the specified device entry from the
  following device lists: fifo->host_engines, fifo->active_engines,
  runlist->rl_dev_list, runlist->eng_bitmask.
Replace iteration over LCE device type entries with
nvgpu_device_for_each(g, dev, NVGPU_DEVTYPE_LCE); along with this, introduce
the macro nvgpu_device_for_each_safe (see the sketch below).
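A usage sketch of the new iteration (hypothetical call shapes; the
actual parameter lists of nvgpu_device_for_each_safe and
nvgpu_engine_remove_one_dev may differ):

    const struct nvgpu_device *dev, *tmp;

    /* the _safe variant allows removing the current entry while iterating */
    nvgpu_device_for_each_safe(g, dev, tmp, NVGPU_DEVTYPE_LCE) {
            if (!lce_is_usable(g, dev)) {   /* hypothetical predicate */
                    nvgpu_log(g, gpu_dbg_ce, "dropping unusable LCE");
                    nvgpu_engine_remove_one_dev(&g->fifo, dev);
            }
    }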
Introduce gpu_dbg_ce flag for CE debugging.
Bug 3370462
Change-Id: I2e21f18363c6e53630d129da241c8fece106cd33
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2616711
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The following error message is printed even when there are no FECS
ECC errors:
nvgpu: 17000000.ga10x gv11b_gr_intr_handle_fecs_ecc_error:114
[ERR] error count corrected: 0, uncorrected 0
To avoid confusion, print error messages only when FECS errors are
actually reported.
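The fix is essentially a guard of this shape (illustrative variable
names):

    if ((corrected_delta != 0U) || (uncorrected_delta != 0U)) {
            nvgpu_err(g, "error count corrected: %d, uncorrected %d",
                      corrected_delta, uncorrected_delta);
    }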
Bug 3417834
Change-Id: I96317555b11e1976f33add4b1dc8d84c936c26fb
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2625723
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Introduce the HAL function gops.mssnvlink.get_links; it retrieves
the number of nvlinks supported by the chip along with their base
addresses.
Update ga10b_mssnvlink_init_soc_credits to call mssnvlink.get_links.
Jira NVGPU-6641
Change-Id: I4ff857925f126bf41dc83eebc5723403244f66b0
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618368
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make ga10b_init_nvlink_soc_credits OS-agnostic by replacing OS-specific
functions with corresponding nvgpu wrappers. This function is now
assigned to the gops.mssnvlink.init_soc_credits HAL.
Introduce the nvgpu wrappers nvgpu_io_map/unmap to map/unmap a specified
physical address range (usage sketch below).
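A usage sketch under the assumption that nvgpu_io_map(g, pa, size)
returns a CPU-accessible mapping (or NULL on failure) and
nvgpu_io_unmap(g, addr, size) releases it; the real signatures may
differ:

    void *regs = nvgpu_io_map(g, mssnvlink_base, mapping_size);

    if (regs == NULL) {
            return -ENOMEM;
    }
    /* ... program the MSS NVLINK SOC credits ... */
    nvgpu_io_unmap(g, regs, mapping_size);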
Jira NVGPU-6641
Change-Id: I337bc75b8ec36552fe471bf5e42f62c19f67ed4a
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618237
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Earlier, buffer metadata support was made dependent on compression.
However, that is not required.
Update the setup of the NVGPU_SUPPORT_BUFFER_METADATA enabled flag for
the various HALs. Enable it for all chips from the Linux characteristics
init.
Update the REGISTER_BUFFER and GET_BUFFER_INFO ioctls to segregate
the compile-time/runtime compression functionality.
If compression is disabled, return an error in case comptags are
required; otherwise don't fail the REGISTER_BUFFER ioctl (see the
sketch below).
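The REGISTER_BUFFER decision is, in essence (illustrative only, not the
actual ioctl code):

    if (!nvgpu_is_enabled(g, NVGPU_SUPPORT_COMPRESSION)) {
            if (comptags_requested) {
                    return -EINVAL; /* comptags need compression */
            }
            /* otherwise register the buffer metadata without comptags */
    }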
Bug 200767700
Change-Id: I3850ccc879f180c97b830fb3d652c094b9d28a5b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2614378
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Start transitioning from the assumption of a single runlist buffer to a
domain-based approach where a TSG is a participant of a scheduling
domain, which in turn owns a runlist buffer used for hardware scheduling.
Concretely, move the concept of a runlist domain up to the users of the
runlist code. Modifications to a runlist now need to specify which domain
is modified.
There is still only the default domain that is created at boot.
Jira NVGPU-6425
Change-Id: Id9a29cff35c94e0d7e195db382d643e16025282d
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2621213
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Move the active_channels and active_tsgs bitmaps from struct
nvgpu_runlist to struct nvgpu_runlist_domain. A TSG and its channels are
currently active as part of a runlist; in the future, a runlist may be
switched between multiple domains, each of which is a collection of TSGs.
The changes are still internal to the runlist code. Users of runlists
need no modifications.
Jira NVGPU-6425
Change-Id: I2d0e98e97f04b9716bc3f4890cf881735d0ab664
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618387
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The current runlist code assumes a single runlist buffer holding all TSG
and channel entries. Create separate RL domain and domain memory types
to hold data that is related only to a scheduling domain and not
directly to the runlist hardware; in the future, more than one domain
may exist, with one of them enabled at a time.
The domain is used only internally by the runlist code at this point and
is functionally equivalent to the current runlist memory that houses the
round-robin entries.
The double buffering is still kept, although more domains might benefit
from some cleverness. Although any number of created domains may be
edited at runtime, only one runlist memory is accessed by the hardware at
a time. To spare some contiguous memory, this should be considered an
opportunity for optimization in the future.
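A rough sketch of the split (field names illustrative, not the exact
layout):

    struct nvgpu_runlist_domain {
            /* per-scheduling-domain data: double-buffered entry memory */
            struct nvgpu_mem mem[2];
    };

    struct nvgpu_runlist {
            /* hardware-facing state */
            u32 id;
            /* only the default domain exists for now */
            struct nvgpu_runlist_domain *domain;
    };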
Jira NVGPU-6425
Change-Id: Id99c55f058ad56daa48b732240f05b3195debfb1
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2618386
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The macros defined within the C file in the form
(\
fb_mmu_l2tlb_ecc_status_corrected_err_l2tlb_sa_data_m() |\
fb_mmu_l2tlb_ecc_status_corrected_err_l2tlb1_sa_data_m() \
)
are difficult for libclang-based static analyzers to detect correctly.
As a consequence, HAL Checker might be missing some coverage.
Such masks are converted into a static function format to help
mitigate this issue; a sketch of the converted form follows.
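Converted form (the function name here is illustrative):

    static u32 fb_mmu_l2tlb_ecc_corrected_err_mask(void)
    {
            return fb_mmu_l2tlb_ecc_status_corrected_err_l2tlb_sa_data_m() |
                   fb_mmu_l2tlb_ecc_status_corrected_err_l2tlb1_sa_data_m();
    }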
Change-Id: Id43e25abda8db4c79f7f6fc604eb6e76e9f6282c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2598063
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
nvgpu_timeout_init() returns an error code only when the flags parameter
is invalid. There are very few possible values for flags, so extract the
two most common cases - a CPU-clock-based and a retry-based timeout - into
functions that cannot fail and thus return nothing. Adjust all callers
to use those, simplifying error handling quite a bit.
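Callers change roughly as follows (the new helper names are assumed from
the description above and may differ):

    /* before: error handling for an error that could only mean bad flags */
    err = nvgpu_timeout_init(g, &timeout, duration_ms, NVGPU_TIMER_CPU_TIMER);
    if (err != 0) {
            return err;
    }

    /* after: void initializers for the two common cases */
    nvgpu_timeout_init_cpu_timer(g, &timeout, duration_ms);
    nvgpu_timeout_init_retry(g, &timeout, max_retries);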
Change-Id: I985fe7fa988ebbae25601d15cf57fd48eda0c677
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2613833
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix PMM litter values for ROP and LTC units.
The ROP unit has been moved from FBP to GPC; hence, introduce new litter
constants:
- GPU_LIT_PERFMON_PMMGPC_ROP_DOMAIN_START
- GPU_LIT_PERFMON_PMMGPC_ROP_DOMAIN_COUNT
Previous PMMFBP_ROP litter constants are removed.
Update GPU_LIT_PERFMON_PMMFBP_LTC_DOMAIN_COUNT to 4.
Jira NVGPU-7204
Change-Id: If3b5e278099ac0d503a3535f1b9b328dc105488b
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2607544
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GPU HW expects physically contiguous addresses when clearing
the compression bit store in memory. Currently, on hypervisor setups,
the DMA_ATTR_FORCE_CONTIGUOUS flag ensures a contiguous IPA, but it
is not possible to ensure contiguous physical memory. Disable
compression in virtualized environments until physically contiguous
memory is feasible.
Buffer metadata support is dependent on compression support.
Move the initialization of the NVGPU_SUPPORT_BUFFER_METADATA flag to
common code where NVGPU_SUPPORT_COMPRESSION is initialized.
Bug 200780546
Change-Id: Id94bffc878e275a80948880f0475162d0bb4ddae
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2607830
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- For older chips, nvgpu used to set some registers for ELPG
  sequencer settings.
- These writes are no longer required post-Turing, as this
  programming has been folded into the HW default values and the
  register definitions have been changed to help improve security
  as well.
- Set the pmu_setup_elpg HAL to NULL.
Bug 200766930
Bug 3389932
Change-Id: I3820b14c8491f8180b2feb28cb38e23462546655
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2607599
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reduce the debug logs printed when gpu_dbg_info or gpu_dbg_fn is set
(a usage sketch follows the list below).
- Add a gpu_dbg_verbose flag for more verbose debug prints. Update prints
  in ga10b_gr_init_wait_idle(), gm20b_gr_init_wait_fe_idle(),
  gv11b_gr_init_write_bundle_veid_state() and
  gv11b_gr_init_load_sw_veid_bundle().
- Add gpu_dbg_hwpm flag for hwpm specific debug prints. Update print in
nvgpu_gr_hwpm_map_create().
- Add gpu_dbg_mm for MM specific debug prints. Update prints in
gm20b_fb_tlb_invalidate(), gk20a_mm_fb_flush(),
gk20a_mm_l2_invalidate_locked(), gk20a_mm_l2_flush() and
gv11b_mm_l2_flush().
- Remove gpu_dbg_fn mask print in gr_ga10b_create_priv_addr_table(),
gr_gk20a_get_pm_ctx_buffer_offsets(), gr_gv11b_decode_priv_addr() and
gr_gv11b_create_priv_addr_table().
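Typical use of the new masks (sketch; message text illustrative):

    nvgpu_log(g, gpu_dbg_verbose, "waiting for gr idle");
    nvgpu_log(g, gpu_dbg_hwpm, "hwpm map created");
    nvgpu_log(g, gpu_dbg_mm, "l2 flush done");

Each message is emitted only when the corresponding bit is set in the
log mask, so the default gpu_dbg_info/gpu_dbg_fn output shrinks.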
Jira NVGPU-7183
Change-Id: I9842d567047cb95a42e23b5907ae324214eed606
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2602797
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Removing SW methods completely requires CUDA changes in Perforce.
Because of the stage-main and module codelines infrastructure, this cannot
be done in a single series. Hence, keep a couple of SW methods alive for
CUDA compatibility until SW methods are removed from the CUDA code. These
SW methods do not perform any action.
The next step is for CUDA to stop using SW methods. Once that is
integrated, this patch can be reverted to completely remove SW method
support in the safety build.
Bug 200748548
Change-Id: I2c70a647bec126bab2d8cac4fe10393be8ba0fb9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2604749
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The improved SDL heartbeat mechanism detects the interrupts triggered by
SW methods and treats them as errors. Hence, remove SW method support
completely from the safety build. Registers set by SW methods are now set
by default for all contexts.
Implement a new HAL gops.gr.init.set_default_compute_regs() to set the
registers in the patch context. Call this HAL while creating each context.
Update gv11b_gr_intr_handle_sw_method() to treat all compute SW methods
as invalid (see the sketch below).
Update the unit test test_gr_intr_sw_exceptions() so that it now expects
failure for any method/data.
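The handler change amounts to the following (sketch only; the real
function keeps its existing prototype and logging):

    static int gv11b_gr_intr_handle_sw_method_sketch(struct gk20a *g,
                    u32 addr, u32 class_num, u32 offset, u32 data)
    {
            (void)addr; (void)class_num; (void)offset; (void)data;
            /* registers formerly set through SW methods are now programmed
             * by gops.gr.init.set_default_compute_regs() at context
             * creation, so every method/data combination is invalid here */
            return -EINVAL;
    }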
Bug 200748548
Change-Id: I614f6411bbe7000c22f1891bbaf06982e8bd7f0b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2527249
(cherry picked from commit bb6e0f9aa1404f79bcfbdd308b8c174a4fc83250)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2602638
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Introduce nvgpu_gmmu_map_partial() to map a specific size of a buffer
represented by an nvgpu_mem, which is what nvgpu_gmmu_map() used to do.
Delete the size parameter from nvgpu_gmmu_map() so that it now maps the
entire buffer. The separate size parameter is a historical artifact from
when nvgpu_mem did not exist yet; the typical use is to map the entire
buffer.
Mapping at a certain address with nvgpu_gmmu_map_fixed() still takes the
size parameter.
The returned address still has to be stored somewhere, typically to
mem.gpu_va by the caller so that the matching unmap variant finds the
right address.
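Abbreviated call shapes (trailing arguments such as cacheability, rw
flag and aperture are omitted here and unchanged by this patch):

    /* maps all of mem, returns the GPU VA */
    mem->gpu_va = nvgpu_gmmu_map(vm, mem, ...);

    /* maps only the first 'size' bytes of mem */
    mem->gpu_va = nvgpu_gmmu_map_partial(vm, mem, size, ...);

    /* mapping at a fixed address still takes an explicit size */
    gpu_va = nvgpu_gmmu_map_fixed(vm, mem, fixed_addr, size, ...);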
Change-Id: I7d67a0b15d741c6bcee1aecff1678e3216cc28d2
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601788
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Introduce nvgpu_gmmu_unmap_addr() to unmap an nvgpu_mem that was mapped
at an address other than mem.gpu_va, which can be the case for buffers
that are shared across different address spaces. Delete the address
parameter from nvgpu_gmmu_unmap(), as the common case is to store the
address in mem.gpu_va when mapping the buffer.
Modify some instances of consecutive unmap + free calls to call just
nvgpu_dma_unmap_free().
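Abbreviated call shapes for the unmap side (as described above):

    /* unmaps the mapping recorded in mem->gpu_va */
    nvgpu_gmmu_unmap(vm, mem);

    /* unmaps a mapping that was made at some other address */
    nvgpu_gmmu_unmap_addr(vm, mem, gpu_va);

    /* unmap followed by free, collapsed into one call */
    nvgpu_dma_unmap_free(vm, mem);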
Change-Id: Iecd7c9aa41d04e9f48e055f6bc0c9227cd759c69
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601787
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix MISRA rule 13.2 violations of the below type in the common.gr unit:
nvgpu/drivers/gpu/nvgpu/common/gr/gr_intr.c:108
Type: MISRA C-2012 Side Effects (MISRA C-2012 Rule 13.2, Required)
nvgpu/drivers/gpu/nvgpu/common/gr/gr_intr.c:108:
1. misra_c_2012_rule_13_2_violation:
In "nvgpu_safe_add_u32(nvgpu_gr_gpc_offset(g, gpc), nvgpu_gr_tpc_offset(g, tpc))",
there are 2 function calls in the arguments for which the order of
evaluation is undefined.
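One compliant shape for the quoted expression is to evaluate each call
into a local first, so that the order of evaluation no longer matters:

    u32 gpc_offset = nvgpu_gr_gpc_offset(g, gpc);
    u32 tpc_offset = nvgpu_gr_tpc_offset(g, tpc);
    u32 offset = nvgpu_safe_add_u32(gpc_offset, tpc_offset);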
Jira NVGPU-7127
Change-Id: Ie867fb62098eed3a45ec01b941eda93b94220b4b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2598696
(cherry picked from commit 15483df6ca1017e5b9d6f2dff35f7e57094a2b4d)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601976
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: V M S Seeta Rama Raju Mudundi <srajum@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
- Accessing any PGRAPH registers in the GR intr retrigger
  ISR routine when ELPG is engaged causes an idle snap.
- This idle snap is caught when the nvgpu_submit_illegal_class
  test is run.
- To avoid access to PGRAPH registers when ELPG is engaged,
  add an ELPG protected call around the GR intr retrigger and the
  CE ISR and retrigger HALs (sketch below).
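The wrapped access is roughly of this shape (sketch; the helper and HAL
names here are assumed and the actual call sites differ):

    /* disengage ELPG around the PGRAPH register access */
    err = nvgpu_pg_elpg_protected_call(g,
                    g->ops.gr.intr.retrigger(g));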
Bug 200777033
Change-Id: Ieef4a423faf79f09476d696c3078b113750548bb
Signed-off-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2586449
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- On pre-silicon platforms, static PG will be done by the
  nvgpu driver. For this, retain the structs and HALs of static PG.
- Add the static PG support under pre-silicon code.
- On silicon, static PG will be done by BPMP.
- Rename variables used in static PG for better readability
  and consistency.
Bug 200768322
JIRA NVGPU-6433
Change-Id: Ib31c0f83b751c2b1563a36bd51af78a0bd12a117
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2594801
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
common.fbp has two interfaces to initialize FBP:
1. Public API nvgpu_fbp_init_support
2. HAL fbp.fbp_init_support
nvgpu_fbp_init_support() is only used to initialize the HAL
fbp.fbp_init_support. Remove the HAL and use the API directly.
JIRA NVGPU-6644
Change-Id: I2c455e09dbcf5e4fb1dc370b284e4f0d5c678b40
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2592047
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add setter and getter methods for accessing tsg->sm_error_states.
The getter returns a constant pointer to struct nvgpu_tsg_sm_error_state.
This makes it unnecessary to add BVEC for the above fields of the struct
in multiple locations. The current design ensures that only a constant
pointer is obtained from the owner unit, i.e. FIFO.
The following new methods are added (prototype sketch below). Both unit
tests and BVEC tests are added for them as well.
nvgpu_tsg_store_sm_error_state
nvgpu_tsg_get_sm_error_state
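Prototype sketch of the accessors (parameter lists assumed from the
description above, not the exact signatures):

    void nvgpu_tsg_store_sm_error_state(struct nvgpu_tsg *tsg, u32 sm_id,
                    const struct nvgpu_tsg_sm_error_state *state);

    /* read-only view; callers cannot modify the FIFO-owned state */
    const struct nvgpu_tsg_sm_error_state *
    nvgpu_tsg_get_sm_error_state(struct nvgpu_tsg *tsg, u32 sm_id);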
Jira NVGPU-6947
Change-Id: I82c22a2774862c8579baa41b6fb8292fa164704a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 79574638671a0c6efe41cd3423668fcd1bd96826)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2556938
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc_kernel_abi <svc_kernel_abi@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit