Add a new HAL unit common.hal.gr.init with the following source files:
hal/gr/init/gr_init_gm20b.c
hal/gr/init/gr_init_gm20b.h
In gr_gk20a_init_golden_ctx_image() we force FE power mode on and later
disable it again. Extract this sequence into the new unit and expose a
new HAL operation that takes a boolean flag to enable/disable the forced
power mode:
g->ops.gr.init.fe_pwr_mode_force_on()
Use the new HAL operation in gr_gk20a_init_golden_ctx_image() and set
this HAL for all chips.
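A rough sketch of the intended shape of the new HAL and its use; the
register accessors follow the existing gk20a sequence and the polling
loop is omitted, so treat this as illustrative only:
/* hal/gr/init/gr_init_gm20b.c -- illustrative sketch */
int gm20b_gr_init_fe_pwr_mode_force_on(struct gk20a *g, bool force_on)
{
    u32 mode = force_on ? gr_fe_pwr_mode_mode_force_on_f() :
                          gr_fe_pwr_mode_mode_auto_f();
    nvgpu_writel(g, gr_fe_pwr_mode_r(),
                 gr_fe_pwr_mode_req_send_f() | mode);
    /* poll for request completion here; return -ETIMEDOUT on failure */
    return 0;
}
/* gr_gk20a_init_golden_ctx_image() then brackets the golden context
 * setup with:
 *   err = g->ops.gr.init.fe_pwr_mode_force_on(g, true);
 *   ...
 *   err = g->ops.gr.init.fe_pwr_mode_force_on(g, false);
 */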
Jira NVGPU-2961
Change-Id: I1dd35d94fda5e5296af67c0abc944e200fb752ea
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070607
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gr_reset_mutex to engines_reset_mutex and acquire it
before initiating recovery. Recovery running in parallel with
engine reset is not recommended.
On hitting engine reset, h/w drops ctxsw_status to INVALID in the
fifo_engine_status register. Also, while the engine is held in reset,
h/w passes busy/idle straight through. The fifo_engine_status registers
are correct in that there is no context switch outstanding, as the CTXSW
is aborted when reset is asserted.
Use deferred_reset_mutex to protect the deferred_reset_pending variable.
If deferred_reset_pending is true, acquire engines_reset_mutex and call
gk20a_fifo_deferred_reset(). gk20a_fifo_deferred_reset() also re-checks
deferred_reset_pending before initiating the reset process.
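A simplified sketch of the resulting locking order; field and function
names follow the description above and error handling is omitted:
/* recovery path -- illustrative sketch */
nvgpu_mutex_acquire(&g->fifo.deferred_reset_mutex);
deferred = g->fifo.deferred_reset_pending;
nvgpu_mutex_release(&g->fifo.deferred_reset_mutex);
if (deferred) {
    nvgpu_mutex_acquire(&g->fifo.engines_reset_mutex);
    /* gk20a_fifo_deferred_reset() re-checks deferred_reset_pending
     * before actually resetting the engines */
    gk20a_fifo_deferred_reset(g, ch);
    nvgpu_mutex_release(&g->fifo.engines_reset_mutex);
}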
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I47de669a6203e0b2e9a8237ec4e4747339b9837c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022373
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Set gr.initialized to false at the beginning of gk20a_gr_reset() and set
it to true at the end of a successful execution of gk20a_gr_reset().
Use gk20a_gr_wait_initialized() in the cg/pg enable/disable functions to
make sure the engine is out of reset and initialized.
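Roughly, the reset path and the cg/pg entry points then look like this
(simplified; the caller name below is hypothetical, for illustration):
int gk20a_gr_reset(struct gk20a *g)
{
    g->gr.initialized = false;    /* engine going into reset */
    /* ... reset and re-init sequence, bail out on error ... */
    g->gr.initialized = true;     /* only after full success */
    return 0;
}
void nvgpu_cg_elcg_enable_example(struct gk20a *g)   /* hypothetical */
{
    /* do not touch gating state while GR is still in reset */
    gk20a_gr_wait_initialized(g);
    /* ... program elcg ... */
}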
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: Ic7b0b71382c6d852a625c603dad8609c43b7f20f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030827
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
If FECS is sent the stop_ctxsw method, ELPG entry/exit cannot happen and
may time out. This can manifest as different error signatures depending
on when the stop_ctxsw FECS method is sent with respect to the PMU ELPG
sequence: it may show up as a PMU halt, an abort, or possibly an ext
error.
If ctxsw fails to disable, do not read the engine info and just abort
the TSG.
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I5f3ba07663bcafd3f0083d44c603420b0ccf6945
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2014914
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For handle_sched_error, change the error print to an info print when the
failing engine id is returned as -1, i.e. FIFO_INVAL_ENGINE_ID, meaning
no engine is found busy doing ctxsw. The ctxsw may already have finished
for the context for which the ctxsw timeout intr was triggered.
Possible Causes:
a)
On hitting engine reset, h/w drops ctxsw_status to INVALID in the
fifo_engine_status register. Also, while the engine is held in reset,
h/w passes busy/idle straight through. The fifo_engine_status registers
are correct in that there is no context switch outstanding, as the CTXSW
is aborted when reset is asserted.
This is just a side effect of how gv100 and earlier versions of
ctxsw_timeout behave.
With gv11b and later, h/w snaps the context at the point of error
so that s/w can see the tsg_id which caused the HW timeout.
b)
If engines are not busy and the ctxsw state is valid, then the intr
occurred in the past; and if the ctxsw state has moved on to VALID from
LOAD or SAVE, it means that whatever timed out eventually finished
anyway. The problem with this is that s/w cannot conclude which context
caused the problem, as more switches may have occurred before the intr
is handled.
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: Ia79bee6e860fb179ee39024c963671d4f8245227
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030866
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The functions gk20a_dump_eng_status and gv11b_dump_eng_status belong to
the engine_status HAL unit.
1) The corresponding declarations and definitions of the above functions
are moved from fifo_{arch} files to engine_status_{arch} files.
2) The corresponding HAL pointer .dump_eng_status is moved from the
fifo to the engine_status HAL unit.
3) gv11b_dump_eng_status is now based on gv100b_dump_eng_status.
4) Small changes in the ENGINE_STATUS files, such as correcting the
header defines etc.
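Illustratively, the HAL pointer now lives in an engine_status sub-struct
of gpu_ops instead of the fifo one (struct layout simplified to a
sketch):
/* hal init for a gv11b-class chip -- illustrative sketch */
static const struct gpu_ops gv11b_ops = {
    .engine_status = {
        .dump_eng_status = gv11b_dump_eng_status,
    },
    /* previously: .fifo.dump_eng_status = gv11b_dump_eng_status */
    /* ... other sub-units ... */
};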
Jira NVGPU-1315
Change-Id: I7fc06eab97206bc3b78c6f5c7aa30fa2c034961c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033632
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The corresponding HAL pointer for gk20a_fifo_wait_engine_idle is not
invoked anywhere and hence it is removed from the code.
The function gk20a_fifo_wait_engine_idle belongs to the engine unit and
is only called in a non-safe build, hence it is moved to the engine unit
and restricted by the non-safe build flag NVGPU_ENGINE.
Also, gk20a_fifo_wait_engine_idle is renamed to nvgpu_engine_wait_for_idle.
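A sketch of the relocated helper and its build guard; the prototype is
assumed to match the old gk20a_fifo_wait_engine_idle():
/* include/nvgpu/engine.h -- illustrative */
#ifdef NVGPU_ENGINE
int nvgpu_engine_wait_for_idle(struct gk20a *g);
#endif
/* common engine unit source, compiled only for non-safety builds */
#ifdef NVGPU_ENGINE
int nvgpu_engine_wait_for_idle(struct gk20a *g)
{
    /* poll engine status until all engines report idle or a timeout
     * expires; body unchanged from gk20a_fifo_wait_engine_idle() */
    return 0;
}
#endif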
Jira NVGPU-1315
Change-Id: Ie550c7e46a4284dfe368859d828b1994df34185f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033631
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following functions belong to the engine unit and are moved there:
gk20a_fifo_enable_engine_activity
gk20a_fifo_enable_all_engine_activity
gk20a_fifo_disable_engine_activity
gk20a_fifo_disable_all_engine_activity
They are renamed by replacing the gk20a_fifo prefix with nvgpu_engine.
These functions are only invoked by the Linux build and are not required
for the safety build; hence they are only defined when -DNVGPU_ENGINE is
enabled.
Jira NVGPU-1315
Change-Id: I39d820879bb55b40e754526c657d794930a4b6a1
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The is_fault_engine_subid_gpc HAL pointer belongs to the engine HAL
unit, not fifo. This patch moves the HAL pointer to a newly constructed
engine HAL unit.
The following new files are added under HAL/fifo/
engines_gm20b.h
engines_gm20b.c
engines_gv11b.h
engines_gv11b.c
Jira NVGPU-1315
Change-Id: If28686bf7350563b06b13348a9fe3ef0099c35b2
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2031659
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Pre-Volta, the cbc config is part of hw ltc; from Volta onwards it
moved to hw fb. Because of this, cbc_init functions are present in both
the cbc unit and the fb unit: pre-Volta chips use cbc_init from the cbc
unit, and Volta onwards use cbc_init from the fb unit.
With this patch, the two cbc_init functions are unified into the cbc
unit and a new fb HAL is created for cbc_configure. The cbc unit uses
the fb HAL for cbc configuration, and the fb unit is independent of the
cbc unit.
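A minimal sketch of the split, assuming the cbc unit simply calls the
new fb HAL when it is set; the wrapper name and the HAL signature shown
here are assumptions:
/* common cbc unit -- illustrative sketch */
int nvgpu_cbc_configure(struct gk20a *g)   /* hypothetical wrapper */
{
    /*
     * Chip specific CBC backing-store programming now goes through
     * the fb HAL, so the cbc unit no longer needs the hw fb headers
     * and the fb unit never calls back into cbc.
     */
    if (g->ops.fb.cbc_configure != NULL) {
        return g->ops.fb.cbc_configure(g);   /* signature assumed */
    }
    return 0;
}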
JIRA NVGPU-2897
Change-Id: Ib62f0b08547b031bcb5011c837e43c74931a22fe
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030906
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a Compression Bit Cache (CBC) unit to keep comptags cache related
functionality in one place. In this patch the following gpu ops are
moved from ltc to cbc and renamed accordingly:
void (*init)(struct gk20a *g, struct gr_gk20a *gr);
u64 (*get_base_divisor)(struct gk20a *g);
int (*alloc_comptags)(struct gk20a *g, struct gr_gk20a *gr);
int (*ctrl)(struct gk20a *g, enum gk20a_cbc_op op,
u32 min, u32 max);
u32 (*fix_config)(struct gk20a *g, int base);
To avoid ambiguity, the init_comptags function pointer is renamed to
alloc_comptags.
The following function is moved from ltc.h to cbc.h:
nvgpu_ltc_alloc_cbc -> nvgpu_cbc_alloc
The file implementing the nvgpu_cbc_alloc functionality is also renamed:
os/ltc.c -> os/linux-cbc.c
JIRA NVGPU-2897
Change-Id: Ide32a98567e9a3f0a784d62221a6f484f8343e53
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030194
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In case of FBPA we need to consider the mask of active FBPAs on dGPUs.
For that we have the GR unit HAL g->ops.gr.add_ctxsw_reg_pm_fbpa().
Generic support for considering the active mask of a unit need not be in
a HAL, so move it into common code in
add_ctxsw_buffer_map_entries_subunits() itself. This API now takes
active_unit_mask as a parameter. Callers that do not need to consider a
unit mask simply pass ~U32(0U) to indicate that all units are active.
For FBPA, add a new HAL g->ops.gr.hwpm_map.get_active_fbpa_mask() which
gets the mask of active FBPAs, and pass this value to the common API
add_ctxsw_buffer_map_entries_subunits()
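Sketched usage of the generalized API; the full parameter list is
abbreviated and only the new active_unit_mask argument is shown
explicitly:
/* callers that do not care about a unit mask pass all-ones */
err = add_ctxsw_buffer_map_entries_subunits(map, regs, &count, &offset,
        max_cnt, base, num_units, stride, mask,
        ~U32(0U) /* active_unit_mask: all units active */);
/* FBPA on dGPU: query the chip specific mask through the new HAL */
fbpa_mask = g->ops.gr.hwpm_map.get_active_fbpa_mask(g);
err = add_ctxsw_buffer_map_entries_subunits(map, fbpa_regs, &count,
        &offset, max_cnt, base, num_fbpas, stride, mask, fbpa_mask);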
Jira NVGPU-2895
Change-Id: I0d208ce53abcd36929c25a4d248868d6eaa5c70d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069472
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new HAL unit hal.gr.hwpm_map that provides chip specific
support to the common.gr.hwpm_map unit.
We currently have the common.gr HAL g->ops.gr.add_ctxsw_reg_perf_pma()
to handle chip specific alignment of the perf_pma list. We only adjust
the offset of the list and the remaining code is the same. Hence delete
the above HAL and add a new HAL under hal.gr.hwpm_map,
g->ops.gr.hwpm_map.align_regs_perf_pma(), which returns the correct
alignment if the HAL is defined.
Remove the gr_gv100_add_ctxsw_reg_perf_pma() and
gr_gk20a_add_ctxsw_reg_perf_pma() APIs since they are no longer used.
Simplify perf_pma parsing by fixing the alignment with the new HAL and
then directly calling add_ctxsw_buffer_map_entries()
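A sketch of the simplified parsing; the common mapper's argument list is
abbreviated and only the alignment step matters here:
/* perf_pma segment -- illustrative sketch; offset accumulates across
 * the segments parsed before this one */
if (g->ops.gr.hwpm_map.align_regs_perf_pma != NULL) {
    /* chips such as gv100 need a chip specific alignment of the offset */
    g->ops.gr.hwpm_map.align_regs_perf_pma(&offset);
}
/* no per-chip add_ctxsw_reg_perf_pma() wrapper any more; call the
 * common mapper directly (argument list abbreviated) */
err = add_ctxsw_buffer_map_entries(map, perf_pma_regs, &count, &offset,
        max_cnt, 0U, ~U32(0U));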
Jira NVGPU-2895
Change-Id: I1852db846e1f5441e482028c79a3f39c5142b0c2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2069471
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Loading of ctx/method init values from the NET image is done in the GR
boot sequence and also in golden image creation.
Doing this during GR boot is unnecessary since those writes are anyway
part of the golden image itself. Hence remove those method loads from
gk20a_init_gr_setup_hw().
Also remove the g->ops.gr.commit_global_timeslice() call since that too
is covered during golden image creation.
Since the above code is removed, disabling and restoring the fe_go_idle
timeout is also not needed.
Jira NVGPU-1885
Change-Id: Ic6eb4bea7ceb36bb1420f58446785cefe25066df
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2034042
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new power/clock gating functions that can be called by other units.
The new clock gating functions reside in cg.c under the
common/power_features/cg unit.
The new power gating functions reside in pg.c under the
common/power_features/pg unit.
Use nvgpu_pg_elpg_disable and nvgpu_pg_elpg_enable to disable/enable
ELPG, and also in the gr_gk20a_elpg_protected macro to access gr
registers.
Add cg_pg_lock to make elpg_enabled, elcg_enabled, blcg_enabled
and slcg_enabled thread safe.
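A sketch of the guarded register access pattern; the helper name is
hypothetical and stands in for the gr_gk20a_elpg_protected macro body:
int elpg_protected_read(struct gk20a *g, u32 reg, u32 *val) /* hypothetical */
{
    int err = nvgpu_pg_elpg_disable(g);   /* keep GR powered up */
    if (err != 0) {
        return err;
    }
    *val = nvgpu_readl(g, reg);
    return nvgpu_pg_elpg_enable(g);       /* restore ELPG state */
}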
JIRA NVGPU-2014
Change-Id: I00d124c2ee16242c9a3ef82e7620fbb7f1297aff
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025493
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move below calls to gr/fecs_trace unit
gk20a_fecs_trace_bind_channel()
gk20a_fecs_trace_unbind_channel()
And rename them to
nvgpu_gr_fecs_trace_bind_channel()
nvgpu_gr_fecs_trace_unbind_channel()
We are not accessing any fifo/ch/tsg construct in the gr/fecs_trace
unit, hence update the parameter list of the above APIs to receive
inst_block, gr_ctx and subctx pointers directly instead of a
channel_gk20a.
Delete the gk20a/fecs_trace_gk20a.* files since they are no longer
required. All the contents of those files are now moved to the
gr/fecs_trace unit.
Jira NVGPU-1880
Change-Id: I7ef9f0b66781b45155035237172ae400f02740e4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new unit common.gr.hwpm_map with source file
common/gr/hwpm_map.c and public header include/nvgpu/gr/hwpm_map.h.
Move all APIs in gr_gk20a.c that handle hwpm_map functionality to this
new unit. This unit now exposes the below struct, which is included in
struct gr_gk20a:
struct nvgpu_gr_hwpm_map {
u32 pm_ctxsw_image_size;
u32 count;
struct ctxsw_buf_offset_map_entry *map;
bool init;
}
Expose the below APIs:
nvgpu_gr_hwpm_map_init() - initialize HWPM map metadata with a given size
nvgpu_gr_hwpm_map_deinit() - deinitialize the HWPM map
nvgpu_gr_hwmp_map_find_priv_offset() - find a given offset in the map
The sequence that creates the map by reading various netlist segments is
moved to a static API nvgpu_gr_hwpm_map_create()
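Typical lifecycle of the new unit, roughly; the argument lists below are
an assumption based on the description above:
/* setup: record the PM ctxsw image size reported by FECS */
err = nvgpu_gr_hwpm_map_init(g, &gr->hwpm_map, pm_ctxsw_image_size);
/* debugger/profiler path: the map is built lazily from the netlist
 * segments (nvgpu_gr_hwpm_map_create()) on the first lookup */
err = nvgpu_gr_hwmp_map_find_priv_offset(g, gr->hwpm_map, addr,
                                         &priv_offset);
/* teardown */
nvgpu_gr_hwpm_map_deinit(g, gr->hwpm_map);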
Jira NVGPU-2894
Change-Id: I07d31169d2ff18a496eb79a726027b847d5f0e06
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032777
GVS: Gerrit_Virtual_Submit
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit f67bc51e51.
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist, but only a few runlists are
actually used.
Skip allocation and initialization of inactive runlists. Active runlist
info is stored in the active_runlist_info array. If a runlist is active,
then runlist_info[runlist_id] points to one entry in
active_runlist_info. Otherwise, runlist_info[runlist_id] is NULL.
Operations that used to walk through all runlists are modified to walk
through active runlists only.
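In code, lookups now go through the pointer table and skip NULL entries;
a simplified sketch (the count field name is assumed):
/* walk only the runlists that are actually in use */
for (i = 0U; i < f->num_runlists; i++) {
    struct fifo_runlist_info_gk20a *runlist = &f->active_runlist_info[i];
    /* ... update / recover this runlist ... */
}
/* indexed access stays valid but may now return NULL */
runlist = f->runlist_info[runlist_id];
if (runlist == NULL) {
    return;   /* inactive runlist, nothing to do */
}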
Bug 2470115
Bug 2522374
Change-Id: I98253ebebb4b1ba5957b57329820b94444b9d41b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030409
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit ade1d50cbe.
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist, but only a few runlists are
actually used.
Use an array of pointers to runlists in fifo_gk20a. The array keeps the
existing indexing by runlist_id. In this patch a context is still
allocated for each possible runlist, but a follow-up patch will allow
skipping context allocation for inactive runlists.
Bug 2470115
Bug 2522374
Change-Id: I0deb6981bc6f5152bdf121f0a44429748aa14687
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030407
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are made in this patch.
1) The nvgpu driver incorrectly uses u32 to store enum values in some
functions. Replace them with the correct type, enum nvgpu_fifo_engine.
2) Change the parameter type in nvgpu_engine_get_ids from engine_id[]
to *engine_ids.
3) Rename some functions to remove redundant characters and make the
names shorter.
4) Remove the initialization of enum nvgpu_fifo_engine in functions
where a value is assigned before direct access.
Jira NVGPU-1315
Change-Id: Ic65b40c9cb1e90ad278cb36a00e1c9de51724f27
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2020230
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gk20a_init_gr_config() we currently access a register from the
hw_pri_ringmaster_*.h h/w header directly to read the FBP count.
Add a new HAL operation to the PRIV_RING unit and use it in the GR code
instead of directly accessing the register:
g->ops.priv_ring.get_fbp_count()
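A sketch of the new PRIV_RING HAL; the register accessors shown are the
ones the GR code read before and may differ per chip, and the GR-side
field name is illustrative:
/* priv_ring HAL -- illustrative sketch */
u32 gm20b_priv_ring_get_fbp_count(struct gk20a *g)
{
    return pri_ringmaster_enum_fbp_count_v(
            nvgpu_readl(g, pri_ringmaster_enum_fbp_r()));
}
/* gr_gk20a_init_gr_config() then simply does: */
gr->num_fbps = g->ops.priv_ring.get_fbp_count(g);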
Jira NVGPU-2894
Change-Id: I8a7b5423e28ef40612f55cb2915d7a2cff2f7435
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030673
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The below HALs to get the max FBPs, max LTC per FBP and max LTS per LTC
values are currently defined by the GR unit.
g->ops.gr.get_max_fbps_count()
g->ops.gr.get_max_ltc_per_fbp()
g->ops.gr.get_max_lts_per_ltc()
These HALs only read registers from the hw_top_*.h h/w unit, and as such
belong to the TOP unit. Move them appropriately as below:
g->ops.top.get_max_fbps_count()
g->ops.top.get_max_ltc_per_fbp()
g->ops.top.get_max_lts_per_ltc()
Remove the hw_top_*.h h/w header include from gr_gk20a.c and gr_gm20b.c
Jira NVGPU-2894
Change-Id: I995d9f56edb65c9de98d2d15d34ecb72920a65c6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030672
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gv11b/fecs_trace_gv11b.* files to
common/gr/fecs_trace/fecs_trace_gv11b.*
Also move HAL API gk20a_fecs_trace_get_buffer_full_mailbox_val()
to gr/fecs_trace unit and rename it as
gm20b_fecs_trace_get_buffer_full_mailbox_val()
Protect gm20b/gv11b HAL code under CONFIG_GK20A_CTXSW_TRACE
Remove tu104/fecs_trace_tu104.* since tu104 will re-use gv11b HAL
Fix g->ops.fecs_trace.get_buffer_full_mailbox_val() for vgpu/gv11b and
use gv11b HAL
Jira NVGPU-1880
Change-Id: If78480e36be4e5f0fd659019518f233d8805486d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029259
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove below calls from fecs_trace_gk20a.c
gk20a_fecs_trace_ring_read()
gk20a_fecs_trace_poll()
gk20a_fecs_trace_periodic_polling()
gk20a_fecs_trace_reset()
And move them to common gr/fecs_trace unit with below renames
nvgpu_gr_fecs_trace_ring_read()
nvgpu_gr_fecs_trace_poll()
nvgpu_gr_fecs_trace_periodic_polling()
nvgpu_gr_fecs_trace_reset()
Also update above calls to support QNX use cases by adding
vm_update_mask as a parameter
Add below HALs for QNX support. These HALs will not be set for linux
g->ops.fecs_trace.vm_dev_write()
g->ops.fecs_trace.vm_dev_update()
Jira NVGPU-1880
Change-Id: Idc305b9288a1df5ca86622b95d6e62a23fdfde7e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029258
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- rename vgpu_gr_gm20b_init_cyclestats() to vgpu_gr_init_cyclestats()
moving to gr_vgpu.c common to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctxsw_preemption_mode() to
vgpu_gr_init_ctxsw_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_ctxsw_preemption_mode() to
vgpu_gr_set_ctxsw_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_preemption_mode() to
vgpu_gr_set_preemption_mode() moving to ctx_vgpu.c common
to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctx_state() to vgpu_gr_init_ctx_state()
moving to ctx_vgpu.c common to all vgpu chips.
- combine vgpu_gr_gv11b_commit_inst() into vgpu_gr_commit_inst(),
  executing the alloc/free subctx header code only if the chip supports
  subctx.
- remove inclusion of hw header files from vgpu gr code by
introducing hal ops for the following:
- alloc_global_ctx_buffers:
- hal op for getting global ctx cb buffer
- hal op for getting global ctx pagepool buffer size
- set_ctxsw_preemption_mode:
- hal op for getting ctx spill size
- hal op for getting ctx pagepool size
- hal op for getting ctx betacb size
- hal op for getting ctx attrib cb size
These chip specific function definitions are currently implemented in
chip specific gr files and will need to be moved to hal units.
Also use these hal ops in the corresponding functions for native. This
makes the gr_gv11b_set_ctxsw_preemption_mode() function redundant; use
gr_gp10b_set_ctxsw_preemption_mode() for gv11b as well.
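As an illustration of the new size hal ops, the shared
set_ctxsw_preemption_mode path can size the preemption buffers without
touching hw headers; the op names below are descriptive placeholders for
the ops listed above:
/* illustrative only -- sizing preemption buffers through hal ops */
u32 spill_size     = g->ops.gr.get_ctx_spill_size(g);
u32 pagepool_size  = g->ops.gr.get_ctx_pagepool_size(g);
u32 betacb_size    = g->ops.gr.get_ctx_betacb_size(g);
u32 attrib_cb_size = g->ops.gr.get_ctx_attrib_cb_size(g, gr->tpc_count);
/* both native and vgpu use the same ops, so gv11b can reuse
 * gr_gp10b_set_ctxsw_preemption_mode() instead of a chip specific copy */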
Jira GVSCI-334
Change-Id: I60be86f932e555176a972c125e3ea31270e6cba7
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the PMU support check is done through multiple methods, which
adds complexity when determining the PMU support status.
Replace the multiple methods with a support_pmu flag; support_pmu is
updated at the init stage based on platform/chip specific settings to
reflect the PMU support status.
Clean up support_pmu flag checks against platform specific PMU members
in multiple places and move the check into public functions.
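A sketch of the consolidated check; how support_pmu gets derived at init
is chip specific and the source shown here is hypothetical:
/* init time: derive PMU support once from platform/chip data */
g->support_pmu = platform->support_pmu;   /* hypothetical source */
/* everywhere else: one uniform check */
if (!g->support_pmu) {
    return 0;   /* PMU not supported, nothing to do */
}
/* ... PMU specific work ... */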
JIRA NVGPU-173
Change-Id: Ief2c64250d1f78e3b054203be56499e4d1d9b046
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024024
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new variable in nvgpu_as_map_buffer_ex_args for the app to specify
platform atomic support for the page.
When the platform atomic attribute flag is set, the PTE memory aperture
is set to the coherent type.
Rename the nvgpu_aperture_mask_coh function to nvgpu_aperture_mask_raw.
bug 200473147
Change-Id: I18266724dafdc8dfd96a0711f23cf08e23682afc
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012679
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist. But only a few runlists
are actually used.
Skip allocation and initialization of inactive runlists.
Active runlists info is stored in the active_runlist_info array.
If a runlist is active, then runlist_info[runlist_id] points to
one entry in active_runlist_info. Otherwise, runlist_info[runlist_id]
is NULL.
Operations that used to walk through all runlists are modified to
walk through active runlists only.
Bug 2470115
Change-Id: Icd10281dc904bdee581ebc9cfeb662018ecca121
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025385
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist. But only a few runlists
are actually used.
Use an array of pointers to runlists in fifo_gk20a. The array keeps the
existing indexing by runlist_id. In this patch a context is still
allocated for each possible runlist, but a follow-up patch will allow
skipping context allocation for inactive runlists.
Bug 2470115
Change-Id: I1615043cea84db35a270ade64695d51f85c1193a
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025203
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The file semaphore.c is now split into 4 units, namely semaphore,
semaphore_hw, semaphore_pool and semaphore_sea.
Each of the above units now has a separate compilation unit under
common/semaphore/. The public APIs corresponding to each unit are
present in include/nvgpu/semaphore.h. The dependency graph of the units
is as follows, where '->' indicates that the left side depends on the
right side:
semaphore -> semaphore_hw -> semaphore_pool -> semaphore_sea
Some of the other major changes made in this patch are as follows:
i) Renamed some of the functions.
ii) Changed some functions from private to public.
iii) The public header for semaphore contains only declarations of the
corresponding structs as opaque structures.
iv) Constructed a private header to contain internal functions common
to all the units and struct definitions corresponding to each unit.
v) Added new functions to provide access to internal members of the
units.
Jira NVGPU-2076
Change-Id: I6f111647ba9a9a9f8ef9c658f316cd5d6276c703
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022782
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move below APIs from gk20a/fecs_trace_gk20a.c
gk20a_fecs_trace_enable()
gk20a_fecs_trace_disable()
gk20a_fecs_trace_is_enabled()
gk20a_fecs_trace_reset_buffer()
gk20a_fecs_trace_buffer_size()
gk20a_gr_max_entries()
and move them to new gr/fecs_trace unit with below renames
nvgpu_gr_fecs_trace_enable()
nvgpu_gr_fecs_trace_disable()
nvgpu_gr_fecs_trace_is_enabled()
nvgpu_gr_fecs_trace_reset_buffer()
nvgpu_gr_fecs_trace_buffer_size()
nvgpu_gr_fecs_trace_max_entries()
Use the new functions in the driver instead of the old ones.
Export gk20a_fecs_trace_periodic_polling() in the fecs_trace_gk20a.h
header since it is needed in gr/fecs_trace for the transition. This
include and the function itself will later be moved to the gr/fecs_trace
unit.
Move struct nvgpu_gpu_ctxsw_trace_filter and all filter macros of the
form NVGPU_GPU_CTXSW_TAG_* to gr/fecs_trace.h
Jira NVGPU-1880
Change-Id: Ic95b99554e626033a111452f311bbc026ec604e2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2027530
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move struct gk20a_fecs_trace_record to gr/fecs_trace unit and rename
it as struct nvgpu_fecs_trace_record
Move all of the APIs in nvgpu/fecs_trace.h to nvgpu/gr/fecs_trace.h
and rename them in nvgpu_gr_fecs_trace_*() format
Delete nvgpu/fecs_trace.h
Add new HAL unit common/gr/fecs_trace/fecs_trace_gm20b.c for register
accesses needed for gr/fecs_trace unit
Add below new HALs in this HAL unit
g->ops.fecs_trace.get_read_index()
g->ops.fecs_trace.get_write_index()
g->ops.fecs_trace.set_read_index()
Jira NVGPU-1880
Change-Id: Ib6ee32ba0d2f8a8a3e82491057e2f01a0275fcf4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024973
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move struct gk20a_fecs_trace to new gr/fecs_trace unit and rename
it as struct nvgpu_gr_fecs_trace
Add enable_lock mutex and enable_count to this structure to support
QNX use cases
Remove init field from struct gk20a_fecs_trace
Rename gk20a_fecs_trace_init() to nvgpu_gr_fecs_trace_init() and
move it to new unit
Rename gk20a_fecs_trace_deinit() to nvgpu_gr_fecs_trace_deinit()
and move it to new unit
Update gk20a_fecs_trace_enable() to start the thread only when
enable_count == 1; otherwise just increment enable_count.
Update gk20a_fecs_trace_disable() to stop the thread only when
enable_count reaches 0; otherwise just decrement enable_count.
Before this patch struct gk20a_fecs_trace was not visible in the new
unit, and hence all mutex_acquire calls for list_lock were done in the
fecs_trace_gk20a.c file. Since the new struct is now available in the
new unit, move the mutex acquire/release calls to the gr/fecs_trace unit.
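A sketch of the refcounted enable/disable; field names follow the
description above, and the hw programming and thread start/stop details
are omitted:
int nvgpu_gr_fecs_trace_enable(struct gk20a *g)    /* simplified */
{
    struct nvgpu_gr_fecs_trace *trace = g->fecs_trace;
    nvgpu_mutex_acquire(&trace->enable_lock);
    trace->enable_count++;
    if (trace->enable_count == 1) {
        /* first user: enable hw tracing and start the poll thread */
    }
    nvgpu_mutex_release(&trace->enable_lock);
    return 0;
}
int nvgpu_gr_fecs_trace_disable(struct gk20a *g)   /* simplified */
{
    struct nvgpu_gr_fecs_trace *trace = g->fecs_trace;
    nvgpu_mutex_acquire(&trace->enable_lock);
    if (--trace->enable_count == 0) {
        /* last user: stop the poll thread and disable hw tracing */
    }
    nvgpu_mutex_release(&trace->enable_lock);
    return 0;
}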
Jira NVGPU-1880
Change-Id: I5abfa0165fa1c31716f3d6f2f669284f8959d7cf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024562
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move zbc related files to common/gr/zbc.
Create struct nvgpu_gr_zbc for zbc variables.
Common zbc functions are moved to the gr_zbc.c file.
All zbc HAL functions are moved into files with the corresponding chip
specific filenames.
JIRA NVGPU-1882
Change-Id: I1bdaa2d9416e6e77ab305f117647dc070438ee86
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2019760
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Timeslice mode is always set to enabled, so it is not required to check
whether timeslice mode is enabled. The following cleanup is done as part
of this change.
1. Removed timeslice_mode field from struct gr_gk20a and
removed setting of this field from the function
gr_gk20a_init_gr_config.
2. Removed checks for timeslice_mode enable in
gr_gk20a_commit_global_timeslice function.
3. Removed unused kernel definitions from headers:
gr_gpcs_ppcs_cbm_cfg_r()
gr_gpcs_ppcs_cbm_cfg_timeslice_mode_enable_v()
JIRA NVGPU-2155
Change-Id: Id99f4b771c74f4cea763ea63441043e93def2347
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024320
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The type for the timeout parameter to the NVGPU_COND_WAIT and
NVGPU_COND_WAIT_INTERRUPTIBLE macros was too weak. This updates these
macros to require a u32 for the timeout.
Users of the macros are updated to be compliant as necessary.
This addresses MISRA 10.3 violations for implicit conversions of types
of different size or essential type.
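For example, a caller now passes an explicit u32 timeout; the wait
condition and field names shown are hypothetical, and passing 0
conventionally means waiting without a timeout in these macros:
int ret;
u32 timeout_ms = 2000U;   /* must now be u32, not an int/long */
ret = NVGPU_COND_WAIT(&data->wait_cond,       /* hypothetical cond */
        data->event_posted,                   /* hypothetical flag */
        timeout_ms);
if (ret == -ETIMEDOUT) {
    /* handle the timeout */
}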
JIRA NVGPU-1008
Change-Id: I12368dfa81b137c35bd056668c1867f03a73b7aa
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017503
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>