To avoid the other HAL calls from the gr_gv11b_init_fs_state and
gr_gm20b_init_fs_state HALs, move the load_tpc_mask and
load_smid_config HALs into the common GR function nvgpu_gr_init_fs_state.
The bes_zrop_setting and bes_crop_setting writes for active_ltcs are
moved out of those HALs to before the nvgpu_gr_init_fs_state call.
Replace gk20a_writel and gk20a_readl in the modified HAL functions with
nvgpu_writel and nvgpu_readl.
JIRA NVGPU-1885
Change-Id: Ic0bf4a4bfa4da032f33bbe4af89031bbbdd9cd94
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072414
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL operation g->ops.gr.init.fe_go_idle_timeout() in the
hal.gr.init unit to enable/disable the fe_go_idle timeout.
Use this HAL in gr_gk20a_init_golden_ctx_image() instead of direct
register access.
Remove the timeout disable/enable code in gk20a_init_sw_bundle() since
the parent API gr_gk20a_init_golden_ctx_image() already takes care of
that.
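Below is a minimal sketch of the enable/disable pattern such a HAL op
follows; the struct layouts and golden_ctx_image_sketch() are simplified
stand-ins, not the real nvgpu definitions:

    #include <stdbool.h>

    struct gk20a;   /* stand-in; the real struct gk20a is far larger */

    struct gops_gr_init {
        /* enable == false disables the fe_go_idle timeout,
         * enable == true restores it */
        void (*fe_go_idle_timeout)(struct gk20a *g, bool enable);
    };

    struct gk20a {
        struct {
            struct { struct gops_gr_init init; } gr;
        } ops;
    };

    /* Caller side, mirroring gr_gk20a_init_golden_ctx_image(): the
     * timeout is disabled around the golden context setup and then
     * re-enabled, so gk20a_init_sw_bundle() no longer needs to. */
    static void golden_ctx_image_sketch(struct gk20a *g)
    {
        g->ops.gr.init.fe_go_idle_timeout(g, false);
        /* ... save golden context image ... */
        g->ops.gr.init.fe_go_idle_timeout(g, true);
    }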
Jira NVGPU-2961
Change-Id: Ice72699059f031ca0b1994fa57661716a6c66cd2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072550
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Earlier, the falcon HAL ops were embedded in the falcon structure. For a
clear separation of common code and HAL, these ops are now accessed
through the g->ops.falcon interface.
With these changes the nvgpu_falcon_* functions directly call the
falcon GPU ops functions. Falcon registers and HAL functions are
exported from falcon_gk20a.h. The per-platform HAL files are now
updated with the base falcon functions.
Falcon software state such as is_falcon_supported, is_interrupt_enabled
and flcn_base is set from software init functions defined per chip.
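A rough sketch of the resulting call flow, with simplified stand-in
types (the real nvgpu signatures differ, and the reset op shown here is
only an assumed example of a falcon GPU op):

    #include <stdbool.h>

    struct nvgpu_falcon;

    struct gops_falcon {
        int (*reset)(struct nvgpu_falcon *flcn); /* chip HAL function */
    };

    struct gk20a {
        struct { struct gops_falcon falcon; } ops;
    };

    /* Falcon software state, set by the per-chip sw init function. */
    struct nvgpu_falcon {
        struct gk20a *g;
        unsigned int flcn_base;
        bool is_falcon_supported;
        bool is_interrupt_enabled;
    };

    /* Common code: no ops embedded in the falcon struct any more,
     * everything dispatches through g->ops.falcon. */
    static int nvgpu_falcon_reset_sketch(struct nvgpu_falcon *flcn)
    {
        if (!flcn->is_falcon_supported) {
            return -1;
        }
        return flcn->g->ops.falcon.reset(flcn);
    }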
JIRA NVGPU-2038
Change-Id: Ib1729d2833cd2c6c7b2c8ed7cbc17d4d6daeba73
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2023077
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add fifo sub-unit to common.fifo to handle init/deinit code
and global support functions.
Split init into:
- nvgpu_channel_setup_sw
- nvgpu_tsg_setup_sw
- nvgpu_fifo_setup_sw
- nvgpu_runlist_setup_sw
- nvgpu_engine_setup_sw
- nvgpu_userd_setup_sw
- nvgpu_pbdma_setup_sw
Split de-init into:
- nvgpu_channel_cleanup_sw
- nvgpu_tsg_cleanup_sw
- nvgpu_fifo_cleanup_sw
- nvgpu_runlist_cleanup_sw
- nvgpu_engine_cleanup_sw
- nvgpu_userd_cleanup_sw
- nvgpu_pbdma_cleanup_sw
Added the following HALs:
- runlist.length_max
- fifo.init_pbdma_info
- fifo.userd_entry_size
The last two HALs should be moved to the pbdma and userd sub-units
respectively, once those are available.
Added vgpu implementations of the above HALs:
- vgpu_runlist_length_max
- vgpu_userd_entry_size
- vgpu_channel_count
Use these HALs in vgpu_fifo_setup_sw.
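A rough sketch of how the split setup sequence with matching cleanup can
be ordered; the helper names and the error handling below are
illustrative only, not the exact nvgpu code:

    struct gk20a;

    /* Illustrative stubs standing in for per-sub-unit setup/cleanup. */
    static int channel_setup_sw(struct gk20a *g)    { (void)g; return 0; }
    static int runlist_setup_sw(struct gk20a *g)    { (void)g; return 0; }
    static void channel_cleanup_sw(struct gk20a *g) { (void)g; }

    /* Top-level fifo setup: initialize each sub-unit in order and
     * unwind the already-initialized ones on failure. */
    static int fifo_setup_sw_sketch(struct gk20a *g)
    {
        int err;

        err = channel_setup_sw(g);
        if (err != 0) {
            return err;
        }

        err = runlist_setup_sw(g);
        if (err != 0) {
            channel_cleanup_sw(g);
            return err;
        }

        /* ... tsg, engine, userd and pbdma setup follow the same
         * pattern, with cleanup running in reverse order ... */
        return 0;
    }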
Jira NVGPU-1306
Change-Id: I954f56be724eee280d7b5f171b1790d33c810470
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029620
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Register writes from the gr_gk20a_init_fs_state function are moved to
HALs. New HALs are added for setting pd_tpc_per_gpc, pd_skip_table_gpc
and cwd_gpcs_tpcs_num.
pd_tpc_per_gpc describes the number of TPCs in each logical GPC.
pd_skip_table helps to skip certain TPCs during distribution.
cwd_gpcs_tpcs_num sets the number of TPCs and GPCs in CWD.
Remove the writes for the deprecated
NV_PBE_PRI_ZROP_SETTING_NUM_ACTIVE_FBPS and
NV_PBE_PRI_CROP_SETTINGS_NUM_ACTIVE_FBPS fields of the
BES_ZROP_SETTINGS and BES_CROP_SETTINGS registers. Both fields changed
to NUM_ACTIVE_LTCS from gm20b onwards and those are already being set
in existing HAL functions.
JIRA NVGPU-2951
Change-Id: I905b98356e8eadaf7e2481850de841c050ea50c5
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072249
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create new HALs for wait_idle and wait_fe_idle under gr.init.
Rename the functions to the following HALs and use the same HALs for
all chips:
gr_gk20a_wait_idle -> gm20b_gr_init_wait_idle
gr_gk20a_wait_fe_idle -> gm20b_gr_init_wait_fe_idle
JIRA NVGPU-2951
Change-Id: Ie60675a08cba12e31557711b6f05f06879de8965
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072051
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gk20a_init_golden_ctx_image() currently resets the sys/gpc/be units
by directly accessing the gr_fecs_ctxsw_reset_ctl_r() register.
Move this register write/read sequence to the common.hal.gr.init unit
through the HAL operation g->ops.gr.init.override_context_reset().
Use the new HAL in gr_gk20a_init_golden_ctx_image().
Also fix the delay() operations: delay() should come before we read
back the gr_fecs_ctxsw_reset_ctl_r() register, not after.
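A sketch of the corrected write/delay/read-back ordering; the register
offset, the delay length and the accessor stubs below are placeholders,
not the real nvgpu definitions:

    #include <stdint.h>

    struct gk20a;

    /* Placeholder accessors standing in for nvgpu_writel()/nvgpu_readl()
     * and the delay helper. */
    static void wr32(struct gk20a *g, uint32_t r, uint32_t v) { (void)g; (void)r; (void)v; }
    static uint32_t rd32(struct gk20a *g, uint32_t r) { (void)g; (void)r; return 0; }
    static void delay_us(unsigned int us) { (void)us; }

    #define CTXSW_RESET_CTL_R 0x409614u  /* placeholder offset */

    static void override_context_reset_sketch(struct gk20a *g, uint32_t val)
    {
        wr32(g, CTXSW_RESET_CTL_R, val);

        /* Delay BEFORE reading back, so the reset value has time to
         * settle; the read-back then confirms/flushes the write. */
        delay_us(10);
        (void)rd32(g, CTXSW_RESET_CTL_R);
    }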
Jira NVGPU-2961
Change-Id: I70d3a61b5aa60846815dee52ecac544066542695
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070608
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL unit common.hal.gr.init with the below source files:
hal/gr/init/gr_init_gm20b.c
hal/gr/init/gr_init_gm20b.h
In gr_gk20a_init_golden_ctx_image() we force FE power mode on and later
disable it. Extract this sequence into the new unit and expose a new
HAL operation that takes a boolean flag to enable/disable the forced
power mode:
g->ops.gr.init.fe_pwr_mode_force_on()
Use the new HAL operation in gr_gk20a_init_golden_ctx_image().
Set this HAL for all the chips.
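A sketch of how such a boolean-flag HAL can be declared, implemented per
chip and wired into an ops table; the names and the empty body below are
placeholders, not the real gm20b implementation:

    #include <stdbool.h>

    struct gk20a;

    struct gops_gr_init_sketch {
        /* force_on == true forces FE power mode on,
         * force_on == false releases the force (auto mode) */
        int (*fe_pwr_mode_force_on)(struct gk20a *g, bool force_on);
    };

    /* gm20b-style implementation: write the FE power mode request and
     * poll for completion (register access omitted in this sketch). */
    static int gm20b_fe_pwr_mode_force_on_sketch(struct gk20a *g, bool force_on)
    {
        (void)g;
        (void)force_on;
        return 0;
    }

    /* The same HAL is set for every chip that uses this sequence. */
    static const struct gops_gr_init_sketch gm20b_gr_init_ops = {
        .fe_pwr_mode_force_on = gm20b_fe_pwr_mode_force_on_sketch,
    };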
Jira NVGPU-2961
Change-Id: I1dd35d94fda5e5296af67c0abc944e200fb752ea
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070607
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The functions gk20a_dump_eng_status and gv11b_dump_eng_status belong
to the engine_status HAL unit.
1) The corresponding declarations and definitions of the above functions
are moved from the fifo_{arch} files to the engine_status_{arch} files.
2) The corresponding HAL pointer .dump_eng_status is moved from the
fifo to the engine_status HAL unit.
3) gv11b_dump_eng_status is now based on gv100b_dump_eng_status.
4) Small changes in the ENGINE_STATUS files, such as corrections to the
header defines, etc.
Jira NVGPU-1315
Change-Id: I7fc06eab97206bc3b78c6f5c7aa30fa2c034961c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033632
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The corresponding HAL pointer for gk20a_fifo_wait_engine_idle is not
invoked anywhere and hence it is removed from the code.
The function gk20a_fifo_wait_engine_idle belongs to the engine unit and
is only called in a non-safe build, hence it is moved to the engine
unit and restricted by the non-safe build flag NVGPU_ENGINE.
Also, gk20a_fifo_wait_engine_idle is renamed to
nvgpu_engine_wait_for_idle.
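A sketch of fencing off such a non-safe-build-only helper; whether the
real build uses a preprocessor define or a Makefile switch for
NVGPU_ENGINE is not shown here, this only illustrates the idea:

    struct gk20a;

    #ifdef NVGPU_ENGINE
    /* Compiled only into non-safety builds. */
    static int nvgpu_engine_wait_for_idle_sketch(struct gk20a *g)
    {
        (void)g;
        /* poll engine status until idle or timeout */
        return 0;
    }
    #endif /* NVGPU_ENGINE */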
Jira NVGPU-1315
Change-Id: Ie550c7e46a4284dfe368859d828b1994df34185f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033631
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The is_fault_engine_subid_gpc HAL pointer belongs to the engine HAL
unit rather than fifo. This patch moves the HAL pointer to a newly
constructed engine HAL unit.
The following new files are added under hal/fifo/:
engines_gm20b.h
engines_gm20b.c
engines_gv11b.h
engines_gv11b.c
Jira NVGPU-1315
Change-Id: If28686bf7350563b06b13348a9fe3ef0099c35b2
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2031659
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a Compression Bit Cache (CBC) unit to have the comptags cache
related functionality in one place. In this patch the following GPU
ops are moved from ltc to cbc and renamed accordingly:
void (*init)(struct gk20a *g, struct gr_gk20a *gr);
u64 (*get_base_divisor)(struct gk20a *g);
int (*alloc_comptags)(struct gk20a *g, struct gr_gk20a *gr);
int (*ctrl)(struct gk20a *g, enum gk20a_cbc_op op,
u32 min, u32 max);
u32 (*fix_config)(struct gk20a *g, int base);
To avoid ambiguity, the init_comptags function pointer is renamed to
alloc_comptags.
The following function is moved from ltc.h to cbc.h:
nvgpu_ltc_alloc_cbc -> nvgpu_cbc_alloc
Also renamed the file implementing the nvgpu_cbc_alloc functionality
from os/ltc.c -> os/linux-cbc.c.
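Grouped as a gpu-ops struct this looks roughly as below; the surrounding
types and the enum values are simplified stand-ins, only the five op
signatures come from the list above:

    #include <stdint.h>
    typedef uint32_t u32;
    typedef uint64_t u64;

    struct gk20a;
    struct gr_gk20a;

    /* Placeholder enumerators; the real enum gk20a_cbc_op differs. */
    enum gk20a_cbc_op { CBC_OP_CLEAR, CBC_OP_CLEAN, CBC_OP_INVALIDATE };

    struct gops_cbc {
        void (*init)(struct gk20a *g, struct gr_gk20a *gr);
        u64 (*get_base_divisor)(struct gk20a *g);
        int (*alloc_comptags)(struct gk20a *g, struct gr_gk20a *gr); /* was init_comptags */
        int (*ctrl)(struct gk20a *g, enum gk20a_cbc_op op, u32 min, u32 max);
        u32 (*fix_config)(struct gk20a *g, int base);
    };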
JIRA NVGPU-2897
Change-Id: Ide32a98567e9a3f0a784d62221a6f484f8343e53
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030194
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the case of FBPA we need to consider the mask of active FBPAs on
dGPUs. For that we have the GR unit HAL g->ops.gr.add_ctxsw_reg_pm_fbpa().
Generic support for considering the active mask of a unit need not be
in a HAL; move it to common code in
add_ctxsw_buffer_map_entries_subunits() itself.
This API now takes active_unit_mask as a parameter.
When the unit mask does not need to be considered, the caller simply
passes ~U32(0U) to indicate that all units are active.
For FBPA, add a new HAL g->ops.gr.hwpm_map.get_active_fbpa_mask()
which gets the mask of active FBPAs, and pass this value to the common
API add_ctxsw_buffer_map_entries_subunits().
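A sketch of the masking pattern described above; the helper name and
parameters are illustrative, not the exact
add_ctxsw_buffer_map_entries_subunits() signature:

    #include <stdint.h>
    typedef uint32_t u32;
    #define U32(x) ((u32)(x))

    /* Common helper: skip sub-units whose bit is clear in the mask. */
    static void map_subunits_sketch(u32 num_units, u32 active_unit_mask)
    {
        u32 unit;

        for (unit = 0U; unit < num_units; unit++) {
            if ((active_unit_mask & (U32(1) << unit)) == 0U) {
                continue; /* e.g. floorswept FBPA */
            }
            /* ... add ctxsw buffer map entries for this unit ... */
        }
    }

    static void callers_sketch(void)
    {
        /* Non-FBPA callers: all units active. */
        map_subunits_sketch(8U, ~U32(0U));

        /* An FBPA caller would instead pass the value returned by
         * g->ops.gr.hwpm_map.get_active_fbpa_mask(). */
        map_subunits_sketch(8U, 0x3U);
    }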
Jira NVGPU-2895
Change-Id: I0d208ce53abcd36929c25a4d248868d6eaa5c70d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069472
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new HAL unit hal.gr.hwpm_map that provides chip specific
support to the common.gr.hwpm_map unit.
We currently have the common.gr HAL g->ops.gr.add_ctxsw_reg_perf_pma()
to handle chip specific alignment of the perf_pma list.
Only the offset of the list is adjusted; the remaining code is the
same. Hence delete the above HAL and add a new HAL under hal.gr.hwpm_map,
g->ops.gr.hwpm_map.align_regs_perf_pma(), which returns the correct
alignment if the HAL is defined.
Remove the gr_gv100_add_ctxsw_reg_perf_pma() and
gr_gk20a_add_ctxsw_reg_perf_pma() APIs since they are no longer used.
Simplify perf_pma parsing by fixing the alignment with the new HAL and
then directly calling add_ctxsw_buffer_map_entries().
Jira NVGPU-2895
Change-Id: I1852db846e1f5441e482028c79a3f39c5142b0c2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069471
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the ACR chip specific properties are set using HAL ops, but
they need to move out of HAL ops since the ACR unit doesn't access H/W
directly and uses other engines to execute ACR on the chip.
To fix this, use the GPUID to init the ACR chip specific properties.
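A sketch of selecting the chip specific ACR properties by GPU ID; the
ID values, struct fields and return codes below are placeholders for
illustration only:

    #include <stdint.h>
    typedef uint32_t u32;

    struct acr_props_sketch {
        u32 bootstrap_owner; /* placeholder chip specific property */
    };

    /* Placeholder IDs, not the real NVGPU GPU ID values. */
    #define GPUID_A_SKETCH 0x1u
    #define GPUID_B_SKETCH 0x2u

    static int acr_init_props_sketch(u32 gpuid, struct acr_props_sketch *acr)
    {
        switch (gpuid) {
        case GPUID_A_SKETCH:
            acr->bootstrap_owner = 0U; /* chip A specific setup */
            break;
        case GPUID_B_SKETCH:
            acr->bootstrap_owner = 1U; /* chip B specific setup */
            break;
        default:
            return -1; /* unsupported chip */
        }
        return 0;
    }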
JIRA NVGPU-2909
Change-Id: I8fa1abcace6f7870bd116d39f94430497d80840b
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032666
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are made in this patch.
1) The nvgpu driver incorrectly uses u32 to store enum values in some
functions. Replace these with the correct type, enum nvgpu_fifo_engine.
2) Change the parameter type in nvgpu_engine_get_ids from engine_id[]
to *engine_ids.
3) Rename some functions to remove redundant characters and make the
names shorter.
4) Remove the initialization of enum nvgpu_fifo_engine variables in
functions where a value is assigned before they are read.
Jira NVGPU-1315
Change-Id: Ic65b40c9cb1e90ad278cb36a00e1c9de51724f27
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2020230
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added support to skip loading the LS PMU based on the PMU support flag.
When the LS PMU is skipped, only basic PMU engine ops are needed for
the HS ACR to load and execute on the PMU engine falcon.
Cold and recovery bootstrap of the GR LS falcons will be taken care of
by ACR, since the HS ACR is loaded in both cases and exits by halting
in non-secure mode.
JIRA NVGPU-173
Change-Id: I7288c185a9ca2e18b2689aa8a7e0c27a61dd12f5
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2019927
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gk20a_init_gr_config() we currently read the FBP count by
directly accessing a register from the hw_pri_ringmaster_*.h h/w header.
Add a new HAL operation to the PRIV_RING unit and start using it in
the GR code instead of directly accessing the register:
g->ops.priv_ring.get_fbp_count()
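A small sketch of the call-site change with stand-in types; the HAL
body that actually reads the hw_pri_ringmaster register is omitted:

    #include <stdint.h>
    typedef uint32_t u32;

    struct gk20a;

    struct gops_priv_ring {
        u32 (*get_fbp_count)(struct gk20a *g); /* reads the FBP count register */
    };

    struct gk20a {
        struct { struct gops_priv_ring priv_ring; } ops;
    };

    /* GR config code no longer touches hw_pri_ringmaster_*.h directly. */
    static u32 gr_config_fbp_count_sketch(struct gk20a *g)
    {
        return g->ops.priv_ring.get_fbp_count(g);
    }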
Jira NVGPU-2894
Change-Id: I8a7b5423e28ef40612f55cb2915d7a2cff2f7435
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030673
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The below HALs to get the max FBP count, max LTCs per FBP and max LTSes
per LTC are currently defined by the GR unit:
g->ops.gr.get_max_fbps_count()
g->ops.gr.get_max_ltc_per_fbp()
g->ops.gr.get_max_lts_per_ltc()
These HALs only read registers from the hw_top_*.h h/w header and as
such belong to the TOP unit. Move them appropriately as below:
g->ops.top.get_max_fbps_count()
g->ops.top.get_max_ltc_per_fbp()
g->ops.top.get_max_lts_per_ltc()
Remove hw_top_*.h h/w header include from gr_gk20a.c and gr_gm20b.c
Jira NVGPU-2894
Change-Id: I995d9f56edb65c9de98d2d15d34ecb72920a65c6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030672
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- rename vgpu_gr_gm20b_init_cyclestats() to vgpu_gr_init_cyclestats()
moving to gr_vgpu.c common to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctxsw_preemption_mode() to
vgpu_gr_init_ctxsw_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_ctxsw_preemption_mode() to
vgpu_gr_set_ctxsw_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_preemption_mode() to
vgpu_gr_set_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctx_state() to vgpu_gr_init_ctx_state()
moving to ctx_vgpu.c common to all vgpu chips.
- combine vgpu_gr_gv11b_commit_inst() into vgpu_gr_commit_inst(),
executing the alloc/free subctx header code only if the chip supports
subctx.
- remove inclusion of hw header files from vgpu gr code by
introducing hal ops for the following:
- alloc_global_ctx_buffers:
- hal op for getting global ctx cb buffer
- hal op for getting global ctx pagepool buffer size
- set_ctxsw_preemption_mode:
- hal op for getting ctx spill size
- hal op for getting ctx pagepool size
- hal op for getting ctx betacb size
- hal op for getting ctx attrib cb size
These chip specific function definitions are currently implemented in
chip specific gr files and will need to be moved to hal units.
Also use these hal ops for the corresponding functions in the native
code. This makes gr_gv11b_set_ctxsw_preemption_mode() redundant; use
gr_gp10b_set_ctxsw_preemption_mode() for gv11b as well.
Jira GVSCI-334
Change-Id: I60be86f932e555176a972c125e3ea31270e6cba7
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the PMU support check is done with multiple methods, which
adds complexity in knowing the status of PMU support.
Replace these multiple methods with a support_pmu flag; support_pmu is
updated at the init stage based on platform/chip specific settings to
reflect the PMU support status.
Cleaned up the support_pmu flag checks together with the platform
specific PMU member checks in multiple places and moved the checks to
public functions.
JIRA NVGPU-173
Change-Id: Ief2c64250d1f78e3b054203be56499e4d1d9b046
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024024
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move bus related HAL code to new top level HAL directory: hal/.
This directory should mirror the common directory as much as
possible.
There are some nice pros here:
1. Isolate HAL and common code.
2. Since the common directory should not be including HAL related
headers directly, this structure will make it easier to catch these
sorts of bugs with a script.
Change-Id: Ib9eb03a97d05db17b637b115c650adcbe9553d54
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011627
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved zbc related files to the common/gr/zbc location.
Created struct nvgpu_gr_zbc for the zbc variables.
Common zbc functions are moved to the gr_zbc.c file.
All zbc HAL functions are moved to files with the corresponding chip
specific filenames.
JIRA NVGPU-1882
Change-Id: I1bdaa2d9416e6e77ab305f117647dc070438ee86
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2019760
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gm20b clock code accesses thermal registers directly in several
places. Moved all this code to the thermal unit; the clock code now
accesses these registers through the provided thermal HAL functions.
The following new HALs are defined in the thermal unit for
enabling/disabling throttling and enabling/disabling idle slowdown:
void (*throttle_enable)(struct gk20a *g, u32 val);
int (*throttle_disable)(struct gk20a *g);
void (*idle_slowdown_enable)(struct gk20a *g, u32 val);
int (*idle_slowdown_disable)(struct gk20a *g);
At the moment, these HALs are used only by gm20b code.
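A sketch of how the gm20b clock code might use these HALs after the
move; the call site and the saved slowdown value are illustrative, only
the four op signatures come from the list above:

    #include <stdint.h>
    typedef uint32_t u32;

    struct gk20a;

    struct gops_therm {
        void (*throttle_enable)(struct gk20a *g, u32 val);
        int (*throttle_disable)(struct gk20a *g);
        void (*idle_slowdown_enable)(struct gk20a *g, u32 val);
        int (*idle_slowdown_disable)(struct gk20a *g);
    };

    struct gk20a {
        struct { struct gops_therm therm; } ops;
    };

    /* Clock code: disable idle slowdown around a clock change, then
     * restore the previous setting, without touching thermal registers. */
    static int clk_change_sketch(struct gk20a *g, u32 saved_slowdown_val)
    {
        int err = g->ops.therm.idle_slowdown_disable(g);

        if (err != 0) {
            return err;
        }
        /* ... program the PLL / switch the clock ... */
        g->ops.therm.idle_slowdown_enable(g, saved_slowdown_val);
        return 0;
    }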
JIRA NVGPU-2001
Change-Id: I937a7c76dfae9aa7e86f23c53f84fae9a9dda13e
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2023289
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add zbc stencil as a chip feature. This helps to remove the HALs added
for the stencil feature and use common functions instead.
Removed HALs:
stencil_query_table
load_stencil_default_tbl
add_type_stencil
load_stencil_tbl
JIRA NVGPU-1882
Change-Id: Iae410a8dd879660ecfd2d2a5ebf28b2cc8309be4
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022385
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Renamed the gr_gk20a zbc HAL functions which involve register access to
gk20a_gr_zbc* HAL functions:
gr_gk20a_add_zbc_color -> gk20a_gr_zbc_add_color
gr_gk20a_add_zbc_depth -> gk20a_gr_zbc_add_depth
gr_gk20a zbc HAL functions without any register access are renamed to
common functions with the nvgpu_gr_zbc* prefix:
gk20a_gr_zbc_set_table -> nvgpu_gr_zbc_set_table
gr_gk20a_query_zbc -> nvgpu_gr_zbc_query_table
Renamed the gr_gp10b zbc HAL functions to gp10b_gr_zbc* HAL functions:
gr_gp10b_add_zbc_color -> gp10b_gr_zbc_add_color
gr_gp10b_add_zbc_depth -> gp10b_gr_zbc_add_depth
gr_gp10b_get_gpcs_swdx_dss_zbc_c_format_reg ->
gp10b_gr_zbc_get_gpcs_swdx_dss_zbc_c_format_reg
gr_gp10b_get_gpcs_swdx_dss_zbc_z_format_reg ->
gp10b_gr_zbc_get_gpcs_swdx_dss_zbc_z_format_reg
Common code is added for nvgpu_gr_zbc_add_color and
nvgpu_gr_zbc_add_depth, which update the ltc, update the local copy
and then call the add_color or add_depth HAL function.
All these functions will be moved to the common/gr/zbc location in
future updates.
gk20a_writel is replaced with nvgpu_writel.
JIRA NVGPU-1882
Change-Id: I717739e0b20c243e8f5ed3e00f8f76755587bcee
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018737
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
As part of creating zbc as a gr subunit, move the pmu_save HAL function
from zbc to the pmu HAL.
This HAL function is used to pass the information to the gpmu firmware,
so it should reside as part of pmu.
Remove the pmu_save HAL from zbc.
Add a save_zbc HAL under pmu.
Remove the unused function gr_gk20a_pmu_save_zbc.
JIRA NVGPU-1882
Change-Id: I132dbc7a9ee9755043cd08f288344df447e28af6
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A new unit pbdma_status is added. The unit provides a HAL
ops function pointer read_pbdma_status_info() to read and produce
a struct of type nvgpu_pbdma_status_info. Additionally, the unit
provides public APIs to retrieve data from the struct
nvgpu_pbdma_status_info.
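A sketch of the shape of this unit: one chip HAL that fills the struct
from hardware plus pure accessors in common code. The field names and
the chsw encoding below are illustrative, not the real
nvgpu_pbdma_status_info layout:

    #include <stdbool.h>
    #include <stdint.h>
    typedef uint32_t u32;

    struct gk20a;

    struct nvgpu_pbdma_status_info_sketch {
        u32 pbdma_reg_status; /* raw register value */
        u32 id;               /* current channel/tsg id */
        u32 next_id;          /* next channel/tsg id */
        u32 chsw_status;      /* decoded channel-switch state */
    };

    struct gops_pbdma_status {
        /* Chip HAL: read the pbdma status register(s) and decode them. */
        void (*read_pbdma_status_info)(struct gk20a *g, u32 pbdma_id,
                struct nvgpu_pbdma_status_info_sketch *status);
    };

    /* Common accessor keeps callers away from the raw register bits. */
    static bool pbdma_status_is_chsw_switch_sketch(
            const struct nvgpu_pbdma_status_info_sketch *status)
    {
        return status->chsw_status == 1U; /* placeholder encoding */
    }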
Jira NVGPU-1311
Change-Id: Ic89c78703c3738b91be8d18ba970a591658d4022
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2019976
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the ACR code to a separate folder under common/acr to make ACR a
separate unit. With this, the ACR blob construction, bootstrap and ACR
chip specific configuration code are separated into different files.
The ACR blob construction code is split into two versions, as gm20b and
gp10b still use the older ACR interfaces and have not yet moved to
Tegra ACR. The blob_construct_v0 file can be deleted once gm20b/gp10b
use the Tegra ACR ucode and point to blob_construct_v1 with a simple
change.
As the ACR ucode can execute on different engine falcons and should not
depend on a specific engine falcon, generic falcon functions/interfaces
are used to support ACR and no engine h/w registers are accessed
directly. The files with chip names hold the configuration needed for
the ACR HS ucode and LS falcons.
JIRA NVGPU-1148
Change-Id: Ieedbe82f3e1a4303f055fbc795d9ce0f1866d259
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017046
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are done in this patch.
1) gk20a_fifo_get_engine_info() is moved to common/fifo/engine.c
and is renamed to gk20a_fifo_get_active_engine_info() to accurately
reflect the purpose of the function.
2) Move the definition of enum fifo_engine to <nvgpu/engines.h> and
add the prefix NVGPU_.
3) Move the following engine related functions from fifo_gk20a.c to
common/fifo/engines.c and rename them by adding the nvgpu_engine prefix
and removing gk20a_fifo:
gk20a_fifo_get_active_engine_info
gk20a_fifo_engine_enum_from_type
gk20a_fifo_get_engine_ids
gk20a_fifo_is_valid_engine_id
gk20a_fifo_get_gr_engine_id
gk20a_fifo_act_eng_interrupt_mask
gk20a_fifo_engine_interrupt_mask
gk20a_fifo_get_all_ce_engine_reset_mask
Jira NVGPU-1315
Change-Id: I63d9dcd905a0bebcc9a4c65776cf6ec7a0837acf
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011298
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Drop the "runlist_" part in the runlist section of the HAL ops. For
example:
- old: g->ops.runlist.runlist_wait_pending
- new: g->ops.runlist.wait_pending
At the same time, drop the "fifo_" part from the function names. For
example:
- old: gk20a_fifo_runlist_wait_pending
- new: gk20a_runlist_wait_pending
Also rename eng_runlist_base_size to count_max. The size of the
eng_runlist_base register array represents the maximum possible number
of runlists in the chip, so count_max is a more descriptive name.
Jira NVGPU-1309
Change-Id: Ie9e94b9f65cd10d3e682d19954f240adb6e311be
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017403
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We get gpc_mask by calling the GR HAL g->ops.gr.get_gpc_mask(), but
gpc_mask should be logically owned by the gr/config unit.
Hence add a new gpc_mask field to nvgpu_gr_config and initialize it in
nvgpu_gr_config_init() by calling a new HAL,
g->ops.gr.config.get_gpc_mask(), if available.
If the HAL is not defined, we just initialize it based on gpc_count.
Expose a new API nvgpu_gr_config_get_gpc_mask() to get gpc_mask and use
this API now.
Remove gr_gm20b_get_gpc_mask() and the HAL g->ops.gr.get_gpc_mask().
Update the GV100 and TU104 chip HALs to remove the old HAL and add the
new one.
Add gpc_mask to struct tegra_vgpu_constants_params to support this
on vGPU. Also get gpc_mask from vGPU private data in
vgpu_gr_init_gr_config()
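A sketch of the fallback described above; the config struct, the way
the HAL pointer is passed in and the all-GPCs mask computation are
simplified stand-ins for the real nvgpu code:

    #include <stdint.h>
    typedef uint32_t u32;
    #define U32(x) ((u32)(x))

    struct gk20a;

    struct gr_config_sketch {
        u32 gpc_count;
        u32 gpc_mask;
    };

    typedef u32 (*get_gpc_mask_fn)(struct gk20a *g);

    static void gr_config_init_gpc_mask_sketch(struct gk20a *g,
            struct gr_config_sketch *config, get_gpc_mask_fn get_gpc_mask)
    {
        if (get_gpc_mask != NULL) {
            /* chip provides g->ops.gr.config.get_gpc_mask() */
            config->gpc_mask = get_gpc_mask(g);
        } else {
            /* no HAL: assume all GPCs up to gpc_count are present */
            config->gpc_mask = (U32(1) << config->gpc_count) - U32(1);
        }
    }

    /* Accessor used by callers instead of reading the field directly. */
    static u32 gr_config_get_gpc_mask_sketch(const struct gr_config_sketch *config)
    {
        return config->gpc_mask;
    }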
Jira NVGPU-1879
Change-Id: Ibdc89ea51df944dc7085920509e3536a5721efc0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016084
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gr/config unit currently queries gpc_count from priv_ring by
directly reading the value from a register.
The priv_ring unit now exposes the below HAL to get gpc_count:
g->ops.priv_ring.get_gpc_count()
Use this HAL in the gr/config unit.
Jira NVGPU-1879
Change-Id: Ibd3557b7f906690a7ad18f11d02a0a6990b98337
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016083
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the gr/config unit we currently query the max gpc_count and
tpc_per_gpc_count by directly accessing registers using the
hw_top_gm20b.h h/w header.
Update the TOP unit to provide the below HALs:
g->ops.top.get_gpc_count()
g->ops.top.get_tpc_per_gpc_count()
And call these HALs from gr/config
Jira NVGPU-1879
Change-Id: I39f5d3bb80960d68a1f493b372745e964ad82803
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016082
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>