Add a new HAL operation g->ops.gr.init.fe_go_idle_timeout() in the
hal.gr.init unit to enable/disable the fe_go_idle timeout
Use this HAL in gr_gk20a_init_golden_ctx_image() instead of direct
register accesses
Remove the timeout disable/enable code from gk20a_init_sw_bundle()
since the parent API gr_gk20a_init_golden_ctx_image() already takes
care of it
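For illustration, a minimal sketch of the new HAL operation and of how
gr_gk20a_init_golden_ctx_image() would call it instead of touching the
register directly (the register write, the offset value and the struct
layout below are stand-ins, not the actual nvgpu code):

  #include <stdbool.h>

  struct gk20a;

  /* stand-in for the real register write accessor */
  static void stub_writel(struct gk20a *g, unsigned int r, unsigned int v)
  {
      (void)g; (void)r; (void)v;
  }

  struct gk20a {
      struct {
          struct {
              struct {
                  void (*fe_go_idle_timeout)(struct gk20a *g, bool enable);
              } init;
          } gr;
      } ops;
  };

  /* illustrative gm20b implementation; the offset is a stand-in for
   * gr_fe_go_idle_timeout_r() */
  static void gm20b_gr_init_fe_go_idle_timeout(struct gk20a *g, bool enable)
  {
      stub_writel(g, 0x404154u, enable ? 1u : 0u);
  }

  /* caller side, as gr_gk20a_init_golden_ctx_image() would use the HAL */
  static void golden_ctx_image_example(struct gk20a *g)
  {
      g->ops.gr.init.fe_go_idle_timeout = gm20b_gr_init_fe_go_idle_timeout;

      g->ops.gr.init.fe_go_idle_timeout(g, false); /* disable during init */
      /* ... sw bundle init, golden context save ... */
      g->ops.gr.init.fe_go_idle_timeout(g, true);  /* restore the timeout */
  }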
Jira NVGPU-2961
Change-Id: Ice72699059f031ca0b1994fa57661716a6c66cd2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072550
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move GR HAL operation g->ops.gr.init_preemption_state() to hal.gr.init
unit as g->ops.gr.init.preemption_state()
Create hal.gr.init unit files for gp10b and gv11b and copy over
corresponding functions to new files
This API now takes gfxp_wfi_timeout_unit and gfxp_wfi_timeout_count as
parameters
Represent gfxp_wfi_timeout_unit in struct gr_gk20a as a boolean flag
named gfxp_wfi_timeout_unit_usec
Remove GFXP_WFI_TIMEOUT_UNIT_SYSCLK/USEC macros
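A rough sketch of the reworked call, assuming the parameter list
described above (the struct and function names here are placeholders
and the register programming is elided):

  #include <stdbool.h>
  #include <stdint.h>

  struct nvgpu_gr_sketch {
      uint32_t gfxp_wfi_timeout_count;
      bool gfxp_wfi_timeout_unit_usec; /* replaces GFXP_WFI_TIMEOUT_UNIT_* */
  };

  /* illustrative shape of g->ops.gr.init.preemption_state() after the move */
  static void sketch_gr_init_preemption_state(uint32_t gfxp_wfi_timeout_count,
                                              bool gfxp_wfi_timeout_unit_usec)
  {
      /* pick the timeout unit field from the flag instead of a macro */
      uint32_t unit = gfxp_wfi_timeout_unit_usec ? 1u /* usec */
                                                 : 0u /* sysclk */;

      /* the real code would program the preemption timeout register here */
      (void)unit; (void)gfxp_wfi_timeout_count;
  }

  static void caller_sketch(const struct nvgpu_gr_sketch *gr)
  {
      sketch_gr_init_preemption_state(gr->gfxp_wfi_timeout_count,
                                      gr->gfxp_wfi_timeout_unit_usec);
  }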
Jira NVGPU-2961
Change-Id: I4347b1e30c86c231e44cf274adccd8c70addcdab
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072549
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Register writes from the gr_gk20a_init_fs_state() function are moved
to HALs.
New HAL operations are added for setting pd_tpc_per_gpc,
pd_skip_table_gpc and cwd_gpcs_tpcs_num.
pd_tpc_per_gpc describes the number of TPCs in each logical GPC.
pd_skip_table allows certain TPCs to be skipped during distribution.
cwd_gpcs_tpcs_num sets the number of TPCs and GPCs in CWD.
Remove writes for the deprecated NV_PBE_PRI_ZROP_SETTING_NUM_ACTIVE_FBPS
and NV_PBE_PRI_CROP_SETTINS_NUM_ACTIVE_FBPS fields of the
BES_ZROP_SETTINGS and BES_CROP_SETTINGS registers. Both fields changed
to NUM_ACTIVE_LTCS from gm20b onwards and are already set in the
existing HAL functions.
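As a rough illustration of what the new HALs program (the bit packing
and register layout below are simplified assumptions, not the real
hardware manuals):

  #include <stdint.h>

  /* sketch: pack per-GPC TPC counts, assuming 4 bits per GPC and 8 GPCs
   * per 32-bit register */
  static void sketch_pd_tpc_per_gpc(uint32_t gpc_count,
                                    const uint32_t *gpc_tpc_count,
                                    uint32_t *regs)
  {
      for (uint32_t gpc = 0u; gpc < gpc_count; gpc++) {
          regs[gpc / 8u] |= (gpc_tpc_count[gpc] & 0xFu) << ((gpc % 8u) * 4u);
      }
  }

  /* sketch: CWD wants the totals; assume the GPC count in the low half
   * and the TPC count in the high half of one register value */
  static uint32_t sketch_cwd_gpcs_tpcs_num(uint32_t gpc_count,
                                           const uint32_t *gpc_tpc_count)
  {
      uint32_t tpc_count = 0u;

      for (uint32_t gpc = 0u; gpc < gpc_count; gpc++) {
          tpc_count += gpc_tpc_count[gpc];
      }
      return (tpc_count << 16) | gpc_count;
  }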
JIRA NVGPU-2951
Change-Id: I905b98356e8eadaf7e2481850de841c050ea50c5
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072249
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create new HALs for wait_idle and wait_fe_idle under gr.init.
Rename the existing functions to the following HALs and use the same
HALs for all chips:
gr_gk20a_wait_idle -> gm20b_gr_init_wait_idle
gr_gk20a_wait_fe_idle -> gm20b_gr_init_wait_fe_idle
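The renamed HALs keep the usual poll-with-timeout shape; a condensed
sketch, with the status read and the delay helper stubbed out:

  #include <stdbool.h>
  #include <stdint.h>

  struct gk20a;

  /* stand-ins for the GR status read and the delay helper */
  static bool stub_gr_engine_idle(struct gk20a *g) { (void)g; return true; }
  static void stub_udelay(unsigned int us) { (void)us; }

  /* condensed shape shared by gm20b_gr_init_wait_idle()/wait_fe_idle() */
  static int sketch_gr_init_wait_idle(struct gk20a *g, uint32_t timeout_us)
  {
      uint32_t elapsed_us = 0u;

      do {
          if (stub_gr_engine_idle(g)) {
              return 0;           /* engine (or FE) reported idle */
          }
          stub_udelay(10u);
          elapsed_us += 10u;
      } while (elapsed_us < timeout_us);

      return -110;                /* -ETIMEDOUT */
  }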
JIRA NVGPU-2951
Change-Id: Ie60675a08cba12e31557711b6f05f06879de8965
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072051
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gk20a_init_golden_ctx_image() currently resets the sys/gpc/be units
by directly accessing the gr_fecs_ctxsw_reset_ctl_r() register
Move this register write/read sequence to common.hal.gr.init unit
through HAL operation g->ops.gr.init.override_context_reset()
Use new HAL in gr_gk20a_init_golden_ctx_image()
Also fix the delay() usage: the delay should happen before we read
back the gr_fecs_ctxsw_reset_ctl_r() register, not after it
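In sketch form, the corrected ordering inside the new HAL (register
accessors, offsets and the delay value are stand-ins; only the
write/delay/read-back order is the point):

  #include <stdint.h>

  struct gk20a;

  static void stub_writel(struct gk20a *g, uint32_t r, uint32_t v)
  {
      (void)g; (void)r; (void)v;
  }
  static uint32_t stub_readl(struct gk20a *g, uint32_t r)
  {
      (void)g; (void)r; return 0u;
  }
  static void stub_udelay(unsigned int us) { (void)us; }

  #define RESET_CTL_REG  0x0u  /* stand-in for gr_fecs_ctxsw_reset_ctl_r() */
  #define RESET_DELAY_US 10u   /* illustrative settling delay */

  /* sketch of g->ops.gr.init.override_context_reset(): the delay sits
   * before the read-back, which only confirms/flushes the write */
  static void sketch_override_context_reset(struct gk20a *g,
                                            uint32_t assert_val,
                                            uint32_t deassert_val)
  {
      stub_writel(g, RESET_CTL_REG, assert_val);
      stub_udelay(RESET_DELAY_US);
      (void)stub_readl(g, RESET_CTL_REG);

      stub_writel(g, RESET_CTL_REG, deassert_val);
      stub_udelay(RESET_DELAY_US);
      (void)stub_readl(g, RESET_CTL_REG);
  }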
Jira NVGPU-2961
Change-Id: I70d3a61b5aa60846815dee52ecac544066542695
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070608
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL unit common.hal.gr.init with the below source files:
hal/gr/init/gr_init_gm20b.c
hal/gr/init/gr_init_gm20b.h
gr_gk20a_init_golden_ctx_image() currently forces the FE power mode on
and later disables the forcing. Extract this sequence into the new
unit and expose a new HAL operation,
g->ops.gr.init.fe_pwr_mode_force_on(), which takes a boolean flag to
enable/disable the forced power mode
Use new HAL operation in gr_gk20a_init_golden_ctx_image()
Set this HAL for all the chips
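A minimal sketch of the new operation and of how the golden context
path brackets its work with it (function bodies are only indicated by
comments; names other than the HAL itself are placeholders):

  #include <stdbool.h>

  struct gk20a;

  /* illustrative shape of g->ops.gr.init.fe_pwr_mode_force_on() */
  static int sketch_fe_pwr_mode_force_on(struct gk20a *g, bool force_on)
  {
      (void)g;
      if (force_on) {
          /* request FE power mode FORCE_ON and poll until it latches */
      } else {
          /* restore FE power mode to AUTO */
      }
      return 0;  /* or a timeout error if the mode never latched */
  }

  /* caller side: gr_gk20a_init_golden_ctx_image() brackets its work */
  static int golden_ctx_sketch(struct gk20a *g)
  {
      int err = sketch_fe_pwr_mode_force_on(g, true);

      if (err != 0) {
          return err;
      }
      /* ... build the golden context image ... */
      return sketch_fe_pwr_mode_force_on(g, false);
  }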
Jira NVGPU-2961
Change-Id: I1dd35d94fda5e5296af67c0abc944e200fb752ea
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070607
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In case of FBPA we need to consider the mask of active FBPAs on dGPUs.
For that we currently have the GR unit HAL
g->ops.gr.add_ctxsw_reg_pm_fbpa()
Generic support for considering the active mask of a unit need not be
in a HAL; move it to common code in
add_ctxsw_buffer_map_entries_subunits() itself
This API now takes active_unit_mask as a parameter
When the unit mask does not matter, the caller simply passes ~U32(0U)
to indicate that all units are active
In case of FBPA, add a new HAL
g->ops.gr.hwpm_pm.get_active_fbpa_mask() which returns the mask of
active FBPAs, and pass this value to the common API
add_ctxsw_buffer_map_entries_subunits()
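A condensed sketch of how the common code consumes the new parameter
and of the two calling conventions (unit counts and helper names are
illustrative; the map-entry bookkeeping is elided):

  #include <stdint.h>

  #define U32(x) ((uint32_t)(x))

  /* condensed sketch of add_ctxsw_buffer_map_entries_subunits() with
   * the new active_unit_mask parameter */
  static void sketch_map_subunits(uint32_t num_units, uint32_t active_unit_mask)
  {
      for (uint32_t unit = 0u; unit < num_units; unit++) {
          if ((active_unit_mask & (U32(1) << unit)) == 0u) {
              continue;  /* inactive/floorswept unit: skip its registers */
          }
          /* ... add ctxsw buffer map entries for this unit ... */
      }
  }

  static void callers_sketch(uint32_t active_fbpa_mask)
  {
      /* callers that don't care about floorsweeping: all units active */
      sketch_map_subunits(4u, ~U32(0U));

      /* FBPA caller passes the mask returned by the new HAL,
       * g->ops.gr.hwpm_pm.get_active_fbpa_mask() */
      sketch_map_subunits(16u, active_fbpa_mask);
  }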
Jira NVGPU-2895
Change-Id: I0d208ce53abcd36929c25a4d248868d6eaa5c70d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069472
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new HAL unit hal.gr.hwpm_map that provides chip-specific
support to the common.gr.hwpm_map unit
We currently have the common.gr HAL g->ops.gr.add_ctxsw_reg_perf_pma()
to handle chip-specific alignment of the perf_pma list
It only adjusts the offset of the list; the remaining code is the same
Hence delete the above HAL and add a new HAL under hal.gr.hwpm_map,
g->ops.gr.hwpm_map.align_regs_perf_pma(), which returns the correct
alignment when the HAL is defined
Remove gr_gv100_add_ctxsw_reg_perf_pma() and
gr_gk20a_add_ctxsw_reg_perf_pma() APIs since they are no longer used
Simplify perf_pma parsing by fixing the alignment with the new HAL and
then calling add_ctxsw_buffer_map_entries() directly
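A sketch of the simplified parsing path, assuming the HAL only adjusts
the list offset (the alignment value and helper names are
placeholders):

  #include <stdint.h>

  /* stand-in for g->ops.gr.hwpm_map.align_regs_perf_pma(); the 256-byte
   * alignment here is a placeholder, not the real chip value */
  static void sketch_align_regs_perf_pma(uint32_t *offset)
  {
      *offset = (*offset + 255u) & ~255u;
  }

  static void sketch_add_ctxsw_buffer_map_entries(uint32_t offset)
  {
      (void)offset;  /* common mapping path, identical for all chips */
  }

  /* simplified perf_pma parsing: fix the offset via the HAL (when set),
   * then fall into the common map-entries helper */
  static void sketch_parse_perf_pma(uint32_t offset, int align_hal_defined)
  {
      if (align_hal_defined) {
          sketch_align_regs_perf_pma(&offset);
      }
      sketch_add_ctxsw_buffer_map_entries(offset);
  }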
Jira NVGPU-2895
Change-Id: I1852db846e1f5441e482028c79a3f39c5142b0c2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069471
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>