In this CL, the following ctxsw-related code is moved to the hal gr falcon unit:
1. gr_gk20a_wait_ctxsw_ready -> gm20b_gr_falcon_wait_ctxsw_ready
2. gr_gk20a_submit_fecs_method_op ->
gm20b_gr_falcon_submit_fecs_method_op
3. gr_gk20a_submit_fecs_sideband_method_op ->
gm20b_gr_falcon_submit_fecs_sideband_method_op
The above functions are exposed through the following gr.falcon hals and called
from the existing code as required:
int (*wait_ctxsw_ready)(struct gk20a *g);
int (*submit_fecs_method_op)(struct gk20a *g,
struct fecs_method_op_gk20a op, bool sleepduringwait);
int (*submit_fecs_sideband_method_op)(struct gk20a *g,
struct fecs_method_op_gk20a op);
The following static function is also moved from gr_gk20a.c to the hal gr falcon unit:
gr_gk20a_ctx_wait_ucode -> gm20b_gr_falcon_ctx_wait_ucode
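As an illustration, the new hals are wired up roughly as in the sketch below; the
assignment site and ops layout shown here follow the naming in this series and are
assumptions, not verbatim nvgpu code:

/* chip hal setup (sketch) */
gops->gr.falcon.wait_ctxsw_ready = gm20b_gr_falcon_wait_ctxsw_ready;
gops->gr.falcon.submit_fecs_method_op =
        gm20b_gr_falcon_submit_fecs_method_op;
gops->gr.falcon.submit_fecs_sideband_method_op =
        gm20b_gr_falcon_submit_fecs_sideband_method_op;

/* callers go through the hal instead of calling gr_gk20a_* directly */
err = g->ops.gr.falcon.wait_ctxsw_ready(g);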
JIRA NVGPU-1881
Change-Id: Icb4238dcacaf46ecfcada8bc8dcdeb68b6278bab
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2085189
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move handle_tex_exception hal function to hal.gr.intr
Move the hal function definitions for the gm20b and gp10b chips.
Remove the null implementation in the gv11b hal.
Modify ops->gr.handle_tex_exception calls to
ops->gr.intr.handle_tex_exception.
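A minimal sketch of the relocation; the gops sub-struct layout and the parameter
list shown here are assumptions, only the move under .intr is from this change:

struct gops_gr_intr {
        int (*handle_tex_exception)(struct gk20a *g, u32 gpc, u32 tpc);
        /* ... other interrupt/exception hals ... */
};

/* call sites change from
 *   g->ops.gr.handle_tex_exception(g, gpc, tpc);
 * to
 *   g->ops.gr.intr.handle_tex_exception(g, gpc, tpc);
 */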
JIRA NVGPU-3016
Change-Id: Ifd88ab6f35301525f7a58e8ccf2f4796dda640bf
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084387
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Code for secure/non-secure ctxsw booting was spread across gr_gk20a.c
and gr_gm20b.c. With this change, that code is moved to the gr falcon unit.
Ctxsw loading is now handled by two common functions:
1. Non-secure boot:
int nvgpu_gr_falcon_load_ctxsw_ucode(struct gk20a *g);
2. Secure boot:
int nvgpu_gr_falcon_load_secure_ctxsw_ucode(struct gk20a *g);
The gr ops function "int (*load_ctxsw_ucode)(struct gk20a *g);" is now moved to
gr falcon ops, and each chip hal sets it to the secure or non-secure boot path.
Non-secure booting: nvgpu_gr_falcon_load_ctxsw_ucode supports ctxsw loading
by two methods: bit-banging the ucode or booting with the bootloader.
A. Common and hal functions for non-secure bit-banging ctxsw loading:
Common: static void nvgpu_gr_falcon_load_dmem(struct gk20a *g) ->
Hals: void (*load_gpccs_dmem)(struct gk20a *g,
const u32 *ucode_u32_data, u32 size);
void (*load_fecs_dmem)(struct gk20a *g,
const u32 *ucode_u32_data, u32 size);
Common: static void nvgpu_gr_falcon_load_imem(struct gk20a *g) ->
Hals: void (*load_gpccs_imem)(struct gk20a *g,
const u32 *ucode_u32_data, u32 size);
void (*load_fecs_imem)(struct gk20a *g,
const u32 *ucode_u32_data, u32 size);
Other basic HALs:
void (*configure_fmodel)(struct gk20a *g); -> configure fmodel for ctxsw loading
void (*start_ucode)(struct gk20a *g); -> start running ctxsw ucode
B. Common and hal functions for non-secure ctxsw loading with bootloader:
First get the ctxsw ucode using nvgpu_gr_falcon_init_ctxsw_ucode, then:
Common: static void nvgpu_gr_falcon_load_with_bootloader(struct gk20a *g)
void nvgpu_gr_falcon_bind_instblk(struct gk20a *g) ->
Hal: void (*bind_instblk)(struct gk20a *g, struct nvgpu_mem *mem, u64 inst_ptr);
Common: nvgpu_gr_falcon_load_ctxsw_ucode_segments ->
nvgpu_gr_falcon_load_ctxsw_ucode_header ->
nvgpu_gr_falcon_load_ctxsw_ucode_boot for both fecs and gpccs ->
Hals: void (*load_ctxsw_ucode_header)(struct gk20a *g, u32 reg_offset,
u32 boot_signature, u32 addr_code32, u32 addr_data32,
u32 code_size, u32 data_size);
void (*load_ctxsw_ucode_boot)(struct gk20a *g, u64 reg_offset, u32 boot_entry,
u32 addr_load32, u32 blocks, u32 dst);
Other basic HAL to get gpccs start offset:
u32 (*get_gpccs_start_reg_offset)(void);
C. Secure booting is supported with gpmu and acr, with the following additional
common function in gr falcon:
static void nvgpu_gr_falcon_load_gpccs_with_bootloader(struct gk20a *g) ->
nvgpu_gr_falcon_bind_instblk and nvgpu_gr_falcon_load_ctxsw_ucode_segments
Additional basic hals:
void (*start_gpccs)(struct gk20a *g);
void (*start_fecs)(struct gk20a *g);
The following ops entry is removed from gr, since chip hals no longer need to set it:
void (*falcon_load_ucode)(struct gk20a *g, u64 addr_base,
struct gk20a_ctxsw_ucode_segments *segments, u32 reg_offset);
This is now handled by the static common function:
static int nvgpu_gr_falcon_copy_ctxsw_ucode_segments(struct gk20a *g,
struct nvgpu_mem *dst, struct gk20a_ctxsw_ucode_segments *segments,
u32 *bootimage, u32 *code, u32 *data)
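As an illustration of the chip hal selection described above, a chip now simply
points load_ctxsw_ucode at one of the two common entry points; which path a
particular chip selects is an assumption here and depends on whether secure
(gpmu/acr) boot is used:

/* sketch: non-secure platforms */
gops->gr.falcon.load_ctxsw_ucode = nvgpu_gr_falcon_load_ctxsw_ucode;

/* sketch: platforms booting ctxsw securely via gpmu/acr */
gops->gr.falcon.load_ctxsw_ucode = nvgpu_gr_falcon_load_secure_ctxsw_ucode;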
JIRA NVGPU-1881
Change-Id: I895a03faaf1a21286316befde24765c8b55075cf
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083388
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new hal g->ops.gr.init.load_sw_bundle_init() in hal.gr.init unit
and move corresponding code from gk20a_init_sw_bundle()
Add this hal to all the supported chips
Move g->ops.gr.init_sw_veid_bundle() hal to hal.gr.init unit
Move definition of hal to gv11b chip file of hal.gr.init
Add this hal for gv11b/gv100/tu104
Move g->ops.gr.init_sw_bundle64() hal to hal.gr.init unit
Move definition of hal to tu104 chip file of hal.gr.init
Add this hal for tu104
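A rough sketch of the resulting per-chip wiring; the function names follow this
series' <chip>_gr_init_* convention and are assumptions:

/* all supported chips */
gops->gr.init.load_sw_bundle_init = gm20b_gr_init_load_sw_bundle_init;

/* gv11b/gv100/tu104 */
gops->gr.init.load_sw_veid_bundle = gv11b_gr_init_load_sw_veid_bundle;

/* tu104 only */
gops->gr.init.load_sw_bundle64 = tu104_gr_init_load_sw_bundle64;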
Jira NVGPU-2961
Change-Id: I560c2ba95fb820275d5ccb46939007c58481ccbb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083631
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move functions related to init_ctxsw_ucode from gr_gk20a.c to common
falcon. Modified code to call the new function and modified function
names in common falcon to reflect the new re-org.
JIRA NVGPU-1881
Change-Id: I389f5c902bfbec17cdb4b16840a5ba66f6b1e331
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081331
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Created the gr falcon hal unit by moving the following hal functions
from gr to gr falcon:
u32 (*fecs_base_addr)(void);
u32 (*gpccs_base_addr)(void);
void (*dump_stats)(struct gk20a *g);
u32 (*fecs_ctxsw_mailbox_size)(void);
u32 (*get_fecs_ctx_state_store_major_rev_id)(struct gk20a *g);
Modified chip hals to populate these new functions; related code
now refers to the gr falcon hals.
Modified kernel headers to add the following defs for the
fecs/gpccs base address in gm20b/gp10b/gv11b/tu104:
static inline u32 gr_fecs_irqsset_r(void);
static inline u32 gr_gpcs_gpccs_irqsset_r(void);
Created base gm20b hals for fecs/gpccs_base_addr and
removed redundant gp106 related hals.
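A hedged sketch of the new gm20b base-address hals, assuming they return the
first register offset of each falcon via the irqsset accessors listed above;
the actual implementation may differ:

u32 gm20b_gr_falcon_fecs_base_addr(void)
{
        return gr_fecs_irqsset_r();
}

u32 gm20b_gr_falcon_gpccs_base_addr(void)
{
        return gr_gpcs_gpccs_irqsset_r();
}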
JIRA NVGPU-1881
Change-Id: I16e820cc1c89223f57988f1e5723fd8fdcbfe89d
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081245
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move g->ops.gr.commit_global_pagepool() hal to hal.gr.init unit as
g->ops.gr.init.commit_global_pagepool()
Also move g->ops.gr.pagepool_default_size() hal to
g->ops.gr.init.pagepool_default_size()
Add a global_ctx boolean flag as a parameter to
g->ops.gr.init.commit_global_pagepool() to distinguish between
committing the global pagepool vs the ctxsw pagepool buffers
Remove register header accessors from gr_gk20a_commit_global_ctx_buffers()
and move them to hal functions
Move hal definitions to gm20b/gp10b hal files appropriately
Remove g->ops.gr.pagepool_default_size() hal for gv11b since gv11b can
re-use gp10b hal
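A sketch of the updated hal and how callers distinguish the two cases; all
parameter names other than global_ctx are assumptions:

void (*commit_global_pagepool)(struct gk20a *g, struct nvgpu_gr_ctx *gr_ctx,
        u64 addr, u32 size, bool patch, bool global_ctx);

/* committing the global pagepool buffer */
g->ops.gr.init.commit_global_pagepool(g, gr_ctx, addr, size, patch, true);

/* committing a ctxsw pagepool buffer */
g->ops.gr.init.commit_global_pagepool(g, gr_ctx, addr, size, patch, false);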
Jira NVGPU-2961
Change-Id: Id532defe05edf2e5d291fec9ec1aeb5b8e33c544
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077217
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create interrupt hal unit under hal.gr.intr.
This holds the interrupt and exception related hals.
Move enable_exceptions and enable_gpc_exceptions hal functions to the
hal.gr.intr unit.
Modify enable_exceptions hal to pass gr->config and enable or disable
parameters.
Modify enable_gpc_exceptions to pass gr->config parameter.
Add a new hal function enable_interrupts with an enable or disable parameter.
This hal helps to enable and disable the gr interrupts as needed.
gr init calls that use these hals are modified to
g->ops.gr.intr.enable_exceptions
g->ops.gr.intr.enable_gpc_exceptions
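Illustrative prototypes and call sites for the reworked hals; the exact
parameter order is an assumption, while the gr->config and enable/disable
parameters are from this change:

void (*enable_interrupts)(struct gk20a *g, bool enable);
void (*enable_exceptions)(struct gk20a *g,
        struct nvgpu_gr_config *gr_config, bool enable);
void (*enable_gpc_exceptions)(struct gk20a *g,
        struct nvgpu_gr_config *gr_config);

/* e.g. during gr init */
g->ops.gr.intr.enable_interrupts(g, true);
g->ops.gr.intr.enable_exceptions(g, gr->config, true);
g->ops.gr.intr.enable_gpc_exceptions(g, gr->config);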
JIRA NVGPU-3016
Change-Id: Ib62f8bf0b5289b815c8eff4d32a47387f24af51b
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077857
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move g->ops.gr.get_global_ctx_cb_buffer_size() and
g->ops.gr.get_global_ctx_pagepool_buffer_size() hals to hal.gr.init
unit
Move corresponding hal definitions to hal.gr.init unit
Jira NVGPU-2961
Change-Id: Ifff3e2073f6d9bca5b37244f7e107bad885e7ca7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077215
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove below variables from struct gr_gk20a
u32 bundle_cb_default_size;
u32 min_gpm_fifo_depth;
u32 bundle_cb_token_limit;
u32 attrib_cb_default_size;
u32 alpha_cb_default_size;
u32 attrib_cb_gfxp_default_size;
u32 attrib_cb_gfxp_size;
u32 attrib_cb_size;
u32 alpha_cb_size;
Instead, add the below hals in the hal.gr.init unit to get all of the above sizes:
u32 (*get_bundle_cb_default_size)(struct gk20a *g);
u32 (*get_min_gpm_fifo_depth)(struct gk20a *g);
u32 (*get_bundle_cb_token_limit)(struct gk20a *g);
u32 (*get_attrib_cb_default_size)(struct gk20a *g);
u32 (*get_alpha_cb_default_size)(struct gk20a *g);
u32 (*get_attrib_cb_gfxp_default_size)(struct gk20a *g);
u32 (*get_attrib_cb_gfxp_size)(struct gk20a *g);
u32 (*get_attrib_cb_size)(struct gk20a *g, u32 tpc_count);
u32 (*get_alpha_cb_size)(struct gk20a *g, u32 tpc_count);
u32 (*get_global_attr_cb_size)(struct gk20a *g, u32 max_tpc);
Define these hals for all gm20b/gp10b/gv11b/gv100/tu104 chips
Also add hal.gr.init support for gv100 chip
Remove all accesses to variables from struct gr_gk20a and start using
newly defined hals
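For example, code that used to read the fields directly now queries the chip
hal (a sketch; actual call sites vary):

/* before */
u32 cb_size = gr->attrib_cb_default_size;

/* after */
u32 cb_size = g->ops.gr.init.get_attrib_cb_default_size(g);
u32 alpha_cb_size = g->ops.gr.init.get_alpha_cb_size(g, tpc_count);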
Remove the below hals that initialize sizes since they are no longer required:
g->ops.gr.bundle_cb_defaults(g);
g->ops.gr.cb_size_default(g);
g->ops.gr.calc_global_ctx_buffer_size(g);
Also remove definitions of above hals from all the chip files
Jira NVGPU-2961
Change-Id: I130b578ababf22328d68fe19df581e46aebeccc9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077214
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move load_tpc_mask and setup_rop_mapping hal functions to hal.gr.init.
The existing load_tpc_mask hal code is split into two parts: common
code in gr_load_tpc_mask, and the register write in the init.tpc_mask hal
functions.
Modify pd_tpc_per_gpc and pd_skip_table_gpc hals in the
hal.gr.init to pass struct nvgpu_gr_config as a parameter.
JIRA NVGPU-2951
Change-Id: I52e26d0f023afa511a8cf8c3e4c54f45350be4ae
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2074892
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move g->ops.gr.commit_global_timeslice() hal operation to hal.gr.init
unit as g->ops.gr.init.commit_global_timeslice()
Drop the channel pointer from the parameter list since it was unused
Also change the return type to void since it never returns an error
Move corresponding gm20b and gv11b hal operations to hal.gr.init unit
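The hal shape before and after this change, sketched (the old channel type
name is an assumption):

/* before */
int (*commit_global_timeslice)(struct gk20a *g, struct channel_gk20a *c);

/* after */
void (*commit_global_timeslice)(struct gk20a *g);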
Jira NVGPU-2961
Change-Id: I68deef45af1d52149eb354a1478cc2b5f2e4ec2a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075228
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved cbc related code and data from gr to cbc unit.
Ltc and cbc related data is moved from gr header:
1. Ltc related data moved from gr_gk20a -> gk20a; it will
eventually be moved to the ltc unit:
u32 slices_per_ltc;
u32 cacheline_size;
2. cbc data moved from gr_gk20a -> nvgpu_cbc
u32 compbit_backing_size;
u32 comptags_per_cacheline;
u32 gobs_per_comptagline_per_slice;
u32 max_comptag_lines;
struct gk20a_comptag_allocator comp_tags;
struct compbit_store_desc compbit_store;
3. The following config data moved from gr_gk20a -> gk20a:
u32 comptag_mem_deduct;
u32 max_comptag_mem;
These are part of the initial config, which should be available
during nvgpu_probe, so they cannot be moved to nvgpu_cbc.
Modified code to use the above updated data structures.
Removed the cbc init sequence from gr and added it in the
common cbc unit. This sequence is now called
from common nvgpu init code.
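Putting the moved fields together, the resulting layout looks roughly like the
sketch below; it is assembled from the lists above and is not the verbatim
headers (the cbc pointer in struct gk20a is an assumption):

struct nvgpu_cbc {
        u32 compbit_backing_size;
        u32 comptags_per_cacheline;
        u32 gobs_per_comptagline_per_slice;
        u32 max_comptag_lines;
        struct gk20a_comptag_allocator comp_tags;
        struct compbit_store_desc compbit_store;
};

struct gk20a {
        /* ... */
        u32 slices_per_ltc;     /* ltc data, to move to the ltc unit later */
        u32 cacheline_size;
        u32 comptag_mem_deduct; /* initial config, needed at nvgpu_probe time */
        u32 max_comptag_mem;
        struct nvgpu_cbc *cbc;  /* assumption: cbc state hangs off gk20a */
        /* ... */
};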
JIRA NVGPU-2896
JIRA NVGPU-2897
Change-Id: I1a1b1e73b75396d61de684f413ebc551a1202a57
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033286
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move GR HAL operation g->ops.gr.init_preemption_state() to hal.gr.init
unit as g->ops.gr.init.preemption_state()
Create hal.gr.init unit files for gp10b and gv11b and copy over
corresponding functions to new files
This API now takes gfxp_wfi_timeout_unit and gfxp_wfi_timeout_count as
parameters
Define gfxp_wfi_timeout_unit in struct gr_gk20a as a boolean flag named
gfxp_wfi_timeout_unit_usec
Remove GFXP_WFI_TIMEOUT_UNIT_SYSCLK/USEC macros
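The new parameterization, sketched; the prototype and the gr_gk20a field
access are assumptions, only the parameter names come from this message:

void (*preemption_state)(struct gk20a *g, u32 gfxp_wfi_timeout_count,
        bool gfxp_wfi_timeout_unit_usec);

/* caller */
g->ops.gr.init.preemption_state(g, gr->gfxp_wfi_timeout_count,
        gr->gfxp_wfi_timeout_unit_usec);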
Jira NVGPU-2961
Change-Id: I4347b1e30c86c231e44cf274adccd8c70addcdab
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072549
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create new hals for wait_idle and wait_fe_idle under gr.init.
Rename the functions to the following hals and use the same hals for all chips:
gr_gk20a_wait_idle -> gm20b_gr_init_wait_idle
gr_gk20a_wait_fe_idle -> gm20b_gr_init_wait_fe_idle
JIRA NVGPU-2951
Change-Id: Ie60675a08cba12e31557711b6f05f06879de8965
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2072051
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the case of FBPA we need to consider the mask of active FBPAs on dGPUs.
For that we currently have the GR unit HAL g->ops.gr.add_ctxsw_reg_pm_fbpa()
Generic support for considering a unit's active mask need not be in a HAL;
move it to common code in add_ctxsw_buffer_map_entries_subunits() itself
This API now supports providing active_unit_mask as a parameter
In case we don't need to consider the unit mask, the caller simply passes
~U32(0U) to indicate all units are active
In the case of FBPA, add a new HAL g->ops.gr.hwpm_map.get_active_fbpa_mask()
which gets the mask of active FBPAs, and pass this value to the common API
add_ctxsw_buffer_map_entries_subunits()
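The resulting call pattern, sketched; the other arguments of
add_ctxsw_buffer_map_entries_subunits() are elided, only the new
active_unit_mask argument and the FBPA HAL are from this change:

/* most units: all sub-units considered active */
err = add_ctxsw_buffer_map_entries_subunits(/* ... */, ~U32(0U));

/* FBPA on dGPUs: restrict to the active FBPAs */
err = add_ctxsw_buffer_map_entries_subunits(/* ... */,
        g->ops.gr.hwpm_map.get_active_fbpa_mask(g));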
Jira NVGPU-2895
Change-Id: I0d208ce53abcd36929c25a4d248868d6eaa5c70d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069472
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new HAL unit hal.gr.hwpm_map that provides chip specific
support to common.gr.hwpm_map unit
We currently have the common.gr HAL g->ops.gr.add_ctxsw_reg_perf_pma()
to handle chip specific alignment of the perf_pma list
We only adjust the offset of the list; the remaining code is the same
Hence delete the above HAL, and add a new HAL under hal.gr.hwpm_map,
g->ops.gr.hwpm_map.align_regs_perf_pma(), which returns the correct
alignment if the HAL is defined
Remove gr_gv100_add_ctxsw_reg_perf_pma() and
gr_gk20a_add_ctxsw_reg_perf_pma() APIs since they are no longer used
Simplify perf_pma parsing by fixing alignment with new HAL and then
directly calling add_ctxsw_buffer_map_entries()
Jira NVGPU-2895
Change-Id: I1852db846e1f5441e482028c79a3f39c5142b0c2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069471
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new power/clock gating functions that can be called by
other units.
New clock_gating functions will reside in cg.c under
common/power_features/cg unit.
New power gating functions will reside in pg.c under
common/power_features/pg unit.
Use nvgpu_pg_elpg_disable and nvgpu_pg_elpg_enable to disable/enable
elpg and also in gr_gk20a_elpg_protected macro to access gr registers.
Add cg_pg_lock to make elpg_enabled, elcg_enabled, blcg_enabled
and slcg_enabled thread safe.
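A sketch of the intended usage of the new pg helpers around protected gr
register access; the error handling shown here is an assumption:

err = nvgpu_pg_elpg_disable(g);
if (err != 0) {
        return err;
}

/* ... access gr registers with elpg disabled ... */

err = nvgpu_pg_elpg_enable(g);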
JIRA NVGPU-2014
Change-Id: I00d124c2ee16242c9a3ef82e7620fbb7f1297aff
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025493
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create new unit common.gr.hwpm_map with source file common/gr/hwpm_map.c
and public header include/nvgpu/gr/hwpm_map.h
Move all APIs in gr_gk20a.c that handle hwpm_map functionality to this
new unit. This unit now exposes the below struct, which is included in
struct gr_gk20a:
struct nvgpu_gr_hwpm_map {
        u32 pm_ctxsw_image_size;
        u32 count;
        struct ctxsw_buf_offset_map_entry *map;
        bool init;
};
Expose the below APIs:
nvgpu_gr_hwpm_map_init() - initialize HWPM map meta-data with given size
nvgpu_gr_hwpm_map_deinit() - deinitialize HWPM map
nvgpu_gr_hwpm_map_find_priv_offset() - find a given offset in the map
The sequence to create the map by reading various netlist segments is
moved to a static API nvgpu_gr_hwpm_map_create()
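Typical usage of the exposed APIs, sketched; the argument lists are
assumptions beyond what the one-line descriptions above state:

/* allocate/initialize the map meta-data for the given ctxsw image size */
err = nvgpu_gr_hwpm_map_init(g, &gr->hwpm_map, pm_ctxsw_image_size);

/* look up a register offset in the map */
err = nvgpu_gr_hwpm_map_find_priv_offset(g, &gr->hwpm_map, addr,
        &priv_offset);

/* tear down */
nvgpu_gr_hwpm_map_deinit(g, &gr->hwpm_map);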
Jira NVGPU-2894
Change-Id: I07d31169d2ff18a496eb79a726027b847d5f0e06
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032777
GVS: Gerrit_Virtual_Submit
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- rename vgpu_gr_gm20b_init_cyclestats() to vgpu_gr_init_cyclestats()
moving to gr_vgpu.c common to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctxsw_preemption_mode() to
vgpu_gr_init_ctxsw_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_ctxsw_preemption_mode() to
vgpu_gr_set_ctxsw_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_preemption_mode() to
vgpu_gr_set_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctx_state() to vgpu_gr_init_ctx_state()
moving to ctx_vgpu.c common to all vgpu chips.
- combine vgpu_gr_gv11b_commit_ins() into vgpu_gr_commit_inst(),
executing alloc/free subctx header code only if the chip supports
subctx.
- remove inclusion of hw header files from vgpu gr code by
introducing hal ops for the following:
- alloc_global_ctx_buffers:
- hal op for getting global ctx cb buffer size
- hal op for getting global ctx pagepool buffer size
- set_ctxsw_preemption_mode:
- hal op for getting ctx spill size
- hal op for getting ctx pagepool size
- hal op for getting ctx betacb size
- hal op for getting ctx attrib cb size
These chip specific function definitions are currently implemented in
chip specific gr files which will need to be moved to hal units.
Also use these hal ops for corresponding functions for native. This
makes gr_gv11b_set_ctxsw_preemption_mode() function redundant. Use
gr_gp10b_set_ctxsw_preemption_mode() for gv11b as well.
Jira GVSCI-334
Change-Id: I60be86f932e555176a972c125e3ea31270e6cba7
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the PMU support enable check is done with multiple
methods, which adds complexity in knowing the status of PMU
support.
Changed to replace the multiple methods with a support_pmu
flag; support_pmu is updated at the init stage based on
platform/chip specific settings to reflect the PMU support status.
Cleaned up support_pmu flag checks against platform specific
PMU members in multiple places & moved the checks into
public functions
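A minimal sketch of the resulting check; the flag name support_pmu comes from
this message, the platform field and its init-time source are hypothetical:

/* init stage: derive the flag once from platform/chip specific settings */
g->support_pmu = platform->support_pmu;  /* hypothetical source */

/* everywhere else: a single check */
if (!g->support_pmu) {
        return 0;  /* PMU not supported, skip PMU dependent paths */
}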
JIRA NVGPU-173
Change-Id: Ief2c64250d1f78e3b054203be56499e4d1d9b046
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024024
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved zbc related files to the common/gr/zbc location.
Created struct nvgpu_gr_zbc for the zbc variables.
Common zbc functions are moved to the gr_zbc.c file.
All zbc hal functions are moved to files with the corresponding chip
specific filename.
JIRA NVGPU-1882
Change-Id: I1bdaa2d9416e6e77ab305f117647dc070438ee86
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2019760
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Timeslice mode is always set to enabled, so it is not
required to check whether timeslice mode is enabled. The
following cleanup is done as part of this change:
1. Removed timeslice_mode field from struct gr_gk20a and
removed setting of this field from the function
gr_gk20a_init_gr_config.
2. Removed checks for timeslice_mode enable in
gr_gk20a_commit_global_timeslice function.
3. Removed unused kernel definitions from headers:
gr_gpcs_ppcs_cbm_cfg_r()
gr_gpcs_ppcs_cbm_cfg_timeslice_mode_enable_v()
JIRA NVGPU-2155
Change-Id: Id99f4b771c74f4cea763ea63441043e93def2347
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024320
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add zbc stencil as a chip feature. This helps to remove the
hals added for the stencil feature; common functions are used instead.
Removed hals:
stencil_query_table
load_stencil_default_tbl
add_type_stencil
load_stencil_tbl
JIRA NVGPU-1882
Change-Id: Iae410a8dd879660ecfd2d2a5ebf28b2cc8309be4
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022385
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Renamed gr_gk20a zbc hal functions which involve register access as
gk20a_gr_zbc* hal functions.
gr_gk20a_add_zbc_color -> gk20a_gr_zbc_add_color
gr_gk20a_add_zbc_depth -> gk20a_gr_zbc_add_depth
gr_gk20a zbc hal functions without any register access are renamed as
common functions, nvgpu_gr_zbc*:
gk20a_gr_zbc_set_table -> nvgpu_gr_zbc_set_table
gr_gk20a_query_zbc -> nvgpu_gr_zbc_query_table
Renamed gr_gp10b zbc hal functions as gp10b_gr_zbc* hal functions.
gr_gp10b_add_zbc_color -> gp10b_gr_zbc_add_color
gr_gp10b_add_zbc_depth -> gp10b_gr_zbc_add_depth
gr_gp10b_get_gpcs_swdx_dss_zbc_c_format_reg ->
gp10b_gr_zbc_get_gpcs_swdx_dss_zbc_c_format_reg
gr_gp10b_get_gpcs_swdx_dss_zbc_z_format_reg ->
gp10b_gr_zbc_get_gpcs_swdx_dss_zbc_z_format_reg
Common code is added for nvgpu_gr_zbc_add_color and
nvgpu_gr_zbc_add_depth, which updates ltc, updates the local copy
and calls the add_color or add_depth hal function.
All these functions will be moved to the common/gr/zbc location
in future updates.
gk20a_writel is replaced with the nvgpu_writel function.
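A hedged sketch of the new common add-color flow described above; the ltc hal
name, the ops field names, the local table field and the parameter lists are
assumptions:

int nvgpu_gr_zbc_add_color(struct gk20a *g, struct gr_gk20a *gr,
        struct zbc_entry *color_val, u32 index)
{
        /* update the ltc copy of the color table */
        g->ops.ltc.set_zbc_color_entry(g, color_val, index);

        /* update the driver's local copy (sketch) */
        /* gr->zbc_col_tbl[index] = ... */

        /* program the gr registers via the chip specific hal */
        return g->ops.gr.add_zbc_color(g, gr, color_val, index);
}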
JIRA NVGPU-1882
Change-Id: I717739e0b20c243e8f5ed3e00f8f76755587bcee
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018737
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
As part of creating zbc as a gr subunit, move the pmu_save hal function
from zbc to the pmu hal.
This hal function is used to pass the information to the gpmu
firmware, so it should reside as part of pmu.
Remove the pmu_save hal from zbc.
Add the save_zbc hal under pmu.
Remove the unused function gr_gk20a_pmu_save_zbc
JIRA NVGPU-1882
Change-Id: I132dbc7a9ee9755043cd08f288344df447e28af6
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new unit gr/config to initialize GR configuration like GPC/TPC
count, MAX count and mask
Create new structure nvgpu_gr_config that stores all the configuration
and that is owned by the new unit
Move below fields from struct gr_gk20a to nvgpu_gr_config in gr/config.h
Struct gr_gk20a now only holds the pointer to struct nvgpu_gr_config
u32 max_gpc_count;
u32 max_tpc_per_gpc_count;
u32 max_zcull_per_gpc_count;
u32 max_tpc_count;
u32 gpc_count;
u32 tpc_count;
u32 ppc_count;
u32 zcb_count;
u32 pe_count_per_gpc;
u32 *gpc_tpc_count;
u32 *gpc_ppc_count;
u32 *gpc_zcb_count;
u32 *pes_tpc_count[GK20A_GR_MAX_PES_PER_GPC];
u32 *gpc_tpc_mask;
u32 *pes_tpc_mask[GK20A_GR_MAX_PES_PER_GPC];
u32 *gpc_skip_mask;
u8 *map_tiles;
u32 map_tile_count;
u32 map_row_offset;
Remove gr->sys_count since it was no longer used
common/gr/config/gr_config.c unit now exposes the APIs to initialize
the configuration and also to query the configuration values
nvgpu_gr_config_init() is called to initialize GR configuration from
gr_gk20a_init_gr_config() and gr_gk20a_init_map_tiles() is simply
renamed as nvgpu_gr_config_init_map_tiles()
Expose new API nvgpu_gr_config_deinit() to deinit the configuration
Expose nvgpu_gr_config_get_*() APIs to query above configuration
fields stored in nvgpu_gr_config structure
Update vgpu_gr_init_gr_config() to initialize the configuration
from gr->config structure
Chip specific HALs that access GR register for initialization
are implemented in common/gr/config/gr_config_gm20b.c
Set these HALs for all GPUs
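Example of the new query style, sketched with getters following the
nvgpu_gr_config_get_*() pattern stated above:

u32 gpc;

/* before: direct field access */
for (gpc = 0; gpc < gr->gpc_count; gpc++) {
        /* ... */
}

/* after: query through the config unit */
for (gpc = 0; gpc < nvgpu_gr_config_get_gpc_count(gr->config); gpc++) {
        u32 tpc_count = nvgpu_gr_config_get_gpc_tpc_count(gr->config, gpc);
        /* ... */
}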
Jira NVGPU-1879
Change-Id: Ided658b43124ea61b9f273b82b73fdde4ed3c8f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012167
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We use the below APIs to update the patch context:
gr_gk20a_ctx_patch_write_begin()
gr_gk20a_ctx_patch_write_end()
gr_gk20a_ctx_patch_write()
Since patch context is owned by gr/ctx unit, move these APIs
to this unit and rename them to
nvgpu_gr_ctx_patch_write_begin()
nvgpu_gr_ctx_patch_write_end()
nvgpu_gr_ctx_patch_write()
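The usual patch-write pattern with the renamed APIs, sketched; the trailing
boolean and the addr/data arguments are assumptions:

nvgpu_gr_ctx_patch_write_begin(g, gr_ctx, true);
nvgpu_gr_ctx_patch_write(g, gr_ctx, addr, data, patch);
nvgpu_gr_ctx_patch_write_end(g, gr_ctx, true);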
Jira NVGPU-1527
Change-Id: Iee19c7a71d074763d3dcb9b1997cb2a3159d5299
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989214
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently load and create a new graphics context image in
gr_gk20a_load_golden_ctx_image()
This API first loads the local golden image into the new context
image and then initializes the context appropriately by calling
the g->ops.gr.ctxsw_prog() HALs
Move this sequence to gr/ctx unit and rename the API as
nvgpu_gr_ctx_load_golden_ctx_image()
Note that call to g->ops.gr.update_ctxsw_preemption_mode()
is moved out of this API and called directly from
gk20a_alloc_obj_ctx()
Jira NVGPU-1527
Change-Id: Id5a5b2cd2c0704fbefe536d581a37a60ec185ea9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989157
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>