Move the ACR code to a separate folder under common/acr to make ACR a
separate unit. Along with this, split the ACR blob construction,
bootstrap and chip-specific ACR configuration code into different
files.
The ACR blob construction code is split into two versions, since gm20b
and gp10b still use the older ACR interfaces and have not yet moved to
Tegra ACR; the blob_construct_v0 file can be deleted once gm20b/gp10b
use the Tegra ACR ucode and point to blob_construct_v1, which is a
simple change.
Since the ACR ucode can execute on different engine falcons and should
not depend on any specific engine falcon, the ACR code uses the generic
falcon functions/interfaces and does not access any engine h/w
registers directly. The files carrying a chip name hold the
configuration needed for the ACR HS ucode and the LS falcons.
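As an illustration, a minimal sketch of the intended split; the names
below (falcon_iface, acr_hs_cfg, acr_bootstrap_hs) are hypothetical and
not the actual nvgpu interfaces. The chip-specific file only fills in
configuration, while the common ACR code drives the HS ucode through a
generic falcon interface instead of touching engine registers:

  /* Hypothetical sketch only; not the real nvgpu ACR code. */
  #include <stdint.h>

  struct falcon_iface {                /* generic falcon operations */
      int (*reset)(void *flcn);
      int (*bootstrap)(void *flcn, uint32_t boot_vector);
  };

  struct acr_hs_cfg {                  /* filled by the chip-specific file */
      const char *hs_ucode_name;
      uint32_t boot_vector;
      void *flcn;                      /* target falcon, e.g. PMU or SEC2 */
      const struct falcon_iface *ops;
  };

  /* Common ACR code: no direct engine h/w register accesses. */
  int acr_bootstrap_hs(const struct acr_hs_cfg *cfg)
  {
      int err = cfg->ops->reset(cfg->flcn);

      if (err != 0)
          return err;
      return cfg->ops->bootstrap(cfg->flcn, cfg->boot_vector);
  }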
JIRA NVGPU-1148
Change-Id: Ieedbe82f3e1a4303f055fbc795d9ce0f1866d259
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017046
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch makes the following changes.
1) gk20a_fifo_get_engine_info() is moved to common/fifo/engines.c
and is renamed to gk20a_fifo_get_active_engine_info() to accurately
reflect the purpose of the function.
2) move the definition of enum fifo_engine to <nvgpu/engines.h> and
add the prefix NVGPU_
3) move the following functions related to engines from fifo_gk20a.c to
common/fifo/engines.c and rename them by replacing the gk20a_fifo
prefix with nvgpu_engine (see the sketch after this list).
gk20a_fifo_get_active_engine_info
gk20a_fifo_engine_enum_from_type
gk20a_fifo_get_engine_ids
gk20a_fifo_is_valid_engine_id
gk20a_fifo_get_gr_engine_id
gk20a_fifo_act_eng_interrupt_mask
gk20a_fifo_engine_interrupt_mask
gk20a_fifo_get_all_ce_engine_reset_mask
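For illustration, a hedged sketch of the new shape; the enumerator
names and the nvgpu_engine_* call below are assumptions, not taken from
the patch itself:

  /* Hypothetical excerpt of <nvgpu/engines.h> after the move/rename. */
  enum nvgpu_fifo_engine {
      NVGPU_ENGINE_GR = 0,
      NVGPU_ENGINE_GRCE,
      NVGPU_ENGINE_ASYNC_CE,
      NVGPU_ENGINE_INVAL,
  };

  /* Old call site: gk20a_fifo_get_gr_engine_id(g);                   */
  /* New call site: nvgpu_engine_get_gr_engine_id(g);  (assumed name) */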
Jira NVGPU-1315
Change-Id: I63d9dcd905a0bebcc9a4c65776cf6ec7a0837acf
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011298
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Drop the "runlist_" part in the runlist section of the HAL ops. For
example:
- old: g->ops.runlist.runlist_wait_pending
- new: g->ops.runlist.wait_pending
At the same time, drop the "fifo_" part from the function names. For
example:
- old: gk20a_fifo_runlist_wait_pending
- new: gk20a_runlist_wait_pending
Also rename eng_runlist_base_size to count_max. The size of the
eng_runlist_base register array gives the maximum possible number of
runlists in the chip, so count_max is a more descriptive name.
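A hedged sketch of the resulting naming; the member types and the
struct shown below are illustrative assumptions, not the actual gpu_ops
definition:

  #include <stdint.h>

  /* Hypothetical shape of the runlist HAL section after the rename. */
  struct runlist_hal_sketch {
      /* old: runlist.runlist_wait_pending / gk20a_fifo_runlist_wait_pending */
      int (*wait_pending)(void *g, uint32_t runlist_id);
      /* old: eng_runlist_base_size; max number of runlists in the chip */
      uint32_t (*count_max)(void);
  };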
Jira NVGPU-1309
Change-Id: Ie9e94b9f65cd10d3e682d19954f240adb6e311be
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017403
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Since we plan to split the engine DMEM/EMEM and FB queues into
separate implementations, make the engine queue_head and queue_tail
APIs independent of the nvgpu_falcon_queue parameter.
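A hedged sketch of the direction this implies; the parameter lists
below (queue id, queue index, head/tail pointer, set flag) are
assumptions about what "independent of nvgpu_falcon_queue" could look
like, not the actual signatures:

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical prototypes: head/tail take the queue id and index
   * directly instead of a struct nvgpu_falcon_queue *, so DMEM/EMEM and
   * FB queue implementations can share them. */
  int falcon_queue_head_sketch(void *flcn, uint32_t queue_id,
                               uint32_t queue_index, uint32_t *head, bool set);
  int falcon_queue_tail_sketch(void *flcn, uint32_t queue_id,
                               uint32_t queue_index, uint32_t *tail, bool set);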
JIRA NVGPU-1994
Change-Id: I389cc48d4045d9df8f768166f6a1d7074a69a309
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016283
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently get gpc_mask by calling the GR HAL g->ops.gr.get_gpc_mask(),
but gpc_mask should be logically owned by the gr/config unit.
Hence add a new gpc_mask field to nvgpu_gr_config and initialize it in
nvgpu_gr_config_init() by calling a new HAL
g->ops.gr.config.get_gpc_mask() if available.
If the HAL is not defined, initialize it based on gpc_count.
Expose a new API nvgpu_gr_config_get_gpc_mask() to get gpc_mask and use
this API from now on.
Remove gr_gm20b_get_gpc_mask() and the HAL g->ops.gr.get_gpc_mask(), and
update the GV100 and TU104 chip HALs to remove the old HAL and add the
new one.
Add gpc_mask to struct tegra_vgpu_constants_params to support this on
vGPU, and get gpc_mask from vGPU private data in
vgpu_gr_init_gr_config().
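A minimal sketch of the described init logic, assuming the fallback
sets one bit per GPC up to gpc_count; the helper names below are
hypothetical:

  #include <stddef.h>
  #include <stdint.h>

  struct gr_config_sketch {
      uint32_t gpc_count;
      uint32_t gpc_mask;
  };

  /* Use the chip's config.get_gpc_mask HAL when it exists; otherwise
   * assume all GPCs up to gpc_count are present. */
  void gr_config_init_gpc_mask_sketch(struct gr_config_sketch *cfg,
                                      uint32_t (*get_gpc_mask_hal)(void))
  {
      if (get_gpc_mask_hal != NULL)
          cfg->gpc_mask = get_gpc_mask_hal();
      else
          cfg->gpc_mask = (uint32_t)((1ULL << cfg->gpc_count) - 1ULL);
  }

  /* Accessor mirroring nvgpu_gr_config_get_gpc_mask(). */
  uint32_t gr_config_get_gpc_mask_sketch(const struct gr_config_sketch *cfg)
  {
      return cfg->gpc_mask;
  }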
Jira NVGPU-1879
Change-Id: Ibdc89ea51df944dc7085920509e3536a5721efc0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016084
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gr/config unit currently queries gpc_count from priv_ring by
directly reading the register value.
The priv_ring unit now exposes the below HAL to get gpc_count:
g->ops.priv_ring.get_gpc_count()
Use this HAL in the gr/config unit.
Jira NVGPU-1879
Change-Id: Ibd3557b7f906690a7ad18f11d02a0a6990b98337
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016083
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gr/config unit currently queries the max gpc_count and
tpc_per_gpc_count by directly accessing registers through the
hw_top_gm20b.h h/w header.
Update the TOP unit to provide the below HALs:
g->ops.top.get_gpc_count()
g->ops.top.get_tpc_per_gpc_count()
and call these HALs from gr/config.
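A small sketch of the pattern; the signatures are assumptions (the
real HALs likely take struct gk20a * and read the hw_top registers
internally):

  #include <stdint.h>

  /* Hypothetical TOP HAL section: gr/config asks the TOP unit for the
   * counts instead of including hw_top_gm20b.h itself. */
  struct top_hal_sketch {
      uint32_t (*get_gpc_count)(void *g);
      uint32_t (*get_tpc_per_gpc_count)(void *g);
  };

  void gr_config_read_counts_sketch(void *g, const struct top_hal_sketch *top,
                                    uint32_t *max_gpc, uint32_t *tpc_per_gpc)
  {
      *max_gpc = top->get_gpc_count(g);
      *tpc_per_gpc = top->get_tpc_per_gpc_count(g);
  }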
Jira NVGPU-1879
Change-Id: I39f5d3bb80960d68a1f493b372745e964ad82803
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2016082
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A new unit nvgpu_engine_status_info is added. The unit provides a HAL
ops function pointer read_engine_status_info() to read and produce
a struct of type nvgpu_engine_status_info. Additionally, the unit
provides public APIs to retrieve data from the struct
nvgpu_engine_status_info.
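An illustrative sketch of the read-and-query pattern; the fields and
accessor below are hypothetical, not the actual nvgpu_engine_status_info
layout:

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical decoded snapshot of one engine's status register. */
  struct engine_status_sketch {
      uint32_t ctx_id;
      bool is_busy;
      bool is_faulted;
  };

  struct engine_status_hal_sketch {
      /* HAL: read the h/w and fill the snapshot for one engine */
      void (*read_engine_status_info)(void *g, uint32_t engine_id,
                                      struct engine_status_sketch *status);
  };

  /* Public accessor keeps callers away from the raw register layout. */
  bool engine_status_is_busy_sketch(const struct engine_status_sketch *s)
  {
      return s->is_busy;
  }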
Jira NVGPU-1315
Change-Id: I6c167c36081bee5c9a8db51d3467c8f5f02c2685
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2003886
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a HAL op for resetting the eng_faulted and pbdma_faulted states on a
channel. This used to be a local feature in fifo_gv11b.c; the HAL is
defined for all chips from gv11b onwards.
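A short sketch of how an optional, gv11b-onwards HAL op is typically
guarded at the call site; the names and parameters are illustrative:

  #include <stdbool.h>
  #include <stddef.h>

  struct channel_sketch { int chid; };

  struct fifo_hal_sketch {
      /* NULL on chips older than gv11b, which have no faulted bits */
      void (*reset_faulted)(void *g, struct channel_sketch *ch,
                            bool eng, bool pbdma);
  };

  void channel_reset_faulted_sketch(void *g, const struct fifo_hal_sketch *ops,
                                    struct channel_sketch *ch)
  {
      if (ops->reset_faulted != NULL)
          ops->reset_faulted(g, ch, true, true);
  }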
Jira NVGPU-1307
Change-Id: I120a59c429851cc69e712ddd5b06a4b3d16c06c9
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017269
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Refactor read accesses to the ccsr_channel register for channel state to
be done via a channel HAL op for all chips. A new op called read_state
is added for this; information needed by other units is collected in a
new struct nvgpu_channel_hw_state.
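A hedged sketch of the struct-plus-HAL shape this describes; the
fields listed are assumptions about what other units might need from
ccsr_channel, not the actual nvgpu_channel_hw_state definition:

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical decoded view of one ccsr_channel register. */
  struct channel_hw_state_sketch {
      bool enabled;
      bool busy;
      bool pending_acquire;
      const char *status_string;    /* human-readable status for debug dumps */
  };

  struct channel_hal_read_sketch {
      /* HAL: read ccsr_channel for chid and decode it into the struct */
      void (*read_state)(void *g, uint32_t chid,
                         struct channel_hw_state_sketch *state);
  };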
Jira NVGPU-1307
Change-Id: Iff9385c08e17ac086d97f5771a54b56b2727e3c4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017266
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Extract the HAL op implementations that now belong to the channel
unit. This unit is responsible for channel register accesses and the
like (ccsr_*).
Rename channel_gm20b_bind to gm20b_fifo_channel_bind to match the rest
of the naming; same for channel_gv11b_unbind.
Jira NVGPU-1307
Change-Id: I58b9d96dbdaf36bdb163a5729544a41faec828ab
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017262
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split out ops that belong to channel unit to a new section called
channel. Channel is a broad concept; this includes just the code that
accesses channel registers (ccsr_*). This is effectively just renaming;
the implementation still stays put.
The word "channel" is also dropped from certain HAL entries to avoid
redundancy (e.g., channel.disable_channel -> channel.disable).
fifo.get_num_fifos gets an entirely new name: channel.count.
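As a naming illustration only (the member set and types are assumed),
the resulting section no longer repeats the word channel:

  #include <stdint.h>

  /* Hypothetical excerpt of the new "channel" HAL section. */
  struct channel_hal_section_sketch {
      void (*disable)(void *ch);       /* was channel.disable_channel */
      uint32_t (*count)(void *g);      /* was fifo.get_num_fifos */
  };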
Jira NVGPU-1307
Change-Id: I9a08103e461bf3ddb743aa37ababee3e0c73c861
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently clk_arb requires PSTATE to be true on dGPU, so setting only
PSTATE to false causes an issue because clk_arb fails. There is no such
dependency on PSTATE for iGPU.
Unify this with a call to check_clk_arb_support(), implemented
according to the respective dependency on iGPU and dGPU;
check_clk_arb_support() returns true if clk_arb is supported, else
false.
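A minimal sketch of the unified check, assuming the dGPU variant keys
off pstate support and the iGPU variant always reports true; the
function names and parameters are assumptions:

  #include <stdbool.h>

  /* Hypothetical per-chip implementations behind check_clk_arb_support(). */
  bool igpu_check_clk_arb_support_sketch(void *g)
  {
      (void)g;
      return true;              /* no PSTATE dependency on iGPU */
  }

  bool dgpu_check_clk_arb_support_sketch(void *g, bool pstate_supported)
  {
      (void)g;
      return pstate_supported;  /* clk_arb needs pstates on dGPU */
  }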
Jira NVGPU-1948
Change-Id: I108dc12bd6ad8d0e074352080c978b7dda9bee05
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2014775
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make the APIs nvgpu_channel_sync_get_syncpt_id() and
channel_sync_syncpt_get_id() return u32s rather than converting to
ints and back.
Also define FIFO_INVAL_SYNCPT_ID to use for invalid syncpt IDs rather
than using magic numbers.
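A small hedged sketch of the pattern; the value chosen for the invalid
id and the surrounding types are assumptions:

  #include <stddef.h>
  #include <stdint.h>

  /* Assumed: an all-ones u32 marks an invalid syncpoint id. */
  #define FIFO_INVAL_SYNCPT_ID_SKETCH ((uint32_t)0xFFFFFFFFu)

  struct channel_sync_syncpt_sketch {
      uint32_t id;
  };

  /* Return the id as u32; callers compare against the invalid marker
   * instead of a signed -1 magic number. */
  uint32_t channel_sync_syncpt_get_id_sketch(
          const struct channel_sync_syncpt_sketch *s)
  {
      return (s != NULL) ? s->id : FIFO_INVAL_SYNCPT_ID_SKETCH;
  }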
JIRA NVGPU-1008
Change-Id: I4dde6b15fd3708fb0126b46c6fea8ac1b447c7ce
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2014821
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gk20a_update_hwpm_ctxsw_mode() currently validates the incoming
hwpm mode, checks if it is already set, and if not, sets the new hwpm
mode by calling the g->ops.gr.ctxsw_prog HALs.
Instead of programming the hwpm mode in gr_gk20a.c, move the
programming to the gr/ctx and gr/subctx units by adding the below APIs:
nvgpu_gr_ctx_prepare_hwpm_mode() - validate the incoming mode and
check if it is already set
nvgpu_gr_ctx_set_hwpm_mode() - set pm mode in graphics context
nvgpu_gr_subctx_set_hwpm_mode() - set pm mode in subcontext
Add gpu_va field to struct pm_ctx_desc to store the gpu_va to be
programmed into context
Rename NVGPU_DBG_HWPM_CTXSW_MODE_* to NVGPU_GR_CTX_HWPM_CTXSW_MODE_*
and move them to gr/ctx.h
Remove below HALs since they are no longer used
g->ops.gr.ctxsw_prog.set_pm_mode_no_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_stream_out_ctxsw()
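A hedged sketch of how the split could be driven; the mode values,
error handling and struct contents below are simplified assumptions:

  #include <stdbool.h>

  /* Hypothetical mode values mirroring NVGPU_GR_CTX_HWPM_CTXSW_MODE_*. */
  enum hwpm_mode_sketch {
      HWPM_MODE_NO_CTXSW_SKETCH,
      HWPM_MODE_CTXSW_SKETCH,
      HWPM_MODE_STREAM_OUT_CTXSW_SKETCH,
  };

  struct gr_ctx_sketch { enum hwpm_mode_sketch pm_mode; };

  /* prepare: validate the mode and report whether an update is needed */
  int gr_ctx_prepare_hwpm_mode_sketch(struct gr_ctx_sketch *ctx,
                                      enum hwpm_mode_sketch mode,
                                      bool *skip_update)
  {
      if (mode > HWPM_MODE_STREAM_OUT_CTXSW_SKETCH)
          return -1;                    /* invalid mode */
      *skip_update = (ctx->pm_mode == mode);
      return 0;
  }

  /* set: program the new mode into the (sub)context */
  void gr_ctx_set_hwpm_mode_sketch(struct gr_ctx_sketch *ctx,
                                   enum hwpm_mode_sketch mode)
  {
      ctx->pm_mode = mode;
  }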
Jira NVGPU-1527
Jira NVGPU-1613
Change-Id: Id2a4d498182ec0e3586dc7265f73a25870ca2ef7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011093
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gk20a_ctx_zcull_setup() currently configures the context/subcontext
with zcull details directly by calling the g->ops.gr.ctxsw_prog HALs.
Move all of the context/subcontext setup to the gr/ctx and gr/subctx
units respectively.
Define and use the below new APIs for this:
gr/ctx : nvgpu_gr_ctx_zcull_setup()
gr/subctx : nvgpu_gr_subctx_zcull_setup()
Jira NVGPU-1527
Jira NVGPU-1613
Change-Id: I1b7b16baea60ea45535c623b5b41351610ca433e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011090
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
LSF loader cleanup: on gm20b/gp10b, the PMU falcon and the other GR
falcons use different structs to store the loader config, which
requires different functions to fill in the LSF loader config data.
On gv11b/gv10x/tu10a a common falcon struct is used to store the
loader config, so a single function is made to fill the LSF loader
config data using the ACR LSF struct, and the duplicate code is
removed.
Also remove the ACR LSF loader ops that were part of the PMU ops, to
clean up the dependency.
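An illustrative sketch of the consolidation, assuming the common
falcon/LSF struct carries everything needed to fill the loader config;
all names and fields below are hypothetical:

  #include <stdint.h>

  /* Hypothetical common per-LSF description used on the newer chips. */
  struct acr_lsf_sketch {
      uint32_t falcon_id;
      uint64_t ucode_addr;
      uint32_t ucode_size;
      uint32_t boot_vector;
  };

  struct loader_cfg_sketch {
      uint64_t code_dma_base;
      uint32_t code_size;
      uint32_t boot_vector;
  };

  /* One fill function for every LS falcon, replacing per-falcon variants. */
  void acr_fill_loader_cfg_sketch(const struct acr_lsf_sketch *lsf,
                                  struct loader_cfg_sketch *cfg)
  {
      cfg->code_dma_base = lsf->ucode_addr;
      cfg->code_size = lsf->ucode_size;
      cfg->boot_vector = lsf->boot_vector;
  }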
JIRA NVGPU-1148
Change-Id: I681829e05463d2517a4049433d8b0de3adeb06d9
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012853
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a data struct under the ACR struct to manage the LS falcon ucode,
as the LS falcon ucode holds multiple properties and can be set at ACR
init stage to bootstrap the LS falcons as required. At present the LS
falcon code is partly in ACR and partly in the PMU code that sets up
LSF bootstrap, so the dependency needs to be cleaned up.
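A hedged sketch of the kind of per-LSF bookkeeping this suggests; the
fields, flags and fixed-size table are assumptions:

  #include <stdbool.h>
  #include <stdint.h>

  #define MAX_LSF_SKETCH 8u   /* assumed upper bound on supported LS falcons */

  /* Hypothetical per-LS-falcon ucode properties, set at ACR init time. */
  struct acr_lsf_ucode_sketch {
      uint32_t falcon_id;
      bool is_lazy_bootstrap;          /* bootstrapped later on demand */
      bool is_priv_load;
      int (*setup_bootstrap)(void *g); /* LSF-specific bootstrap hook */
  };

  /* Lives under the ACR struct so LSF handling no longer leaks into the
   * PMU code. */
  struct acr_sketch {
      uint32_t lsf_enable_mask;
      struct acr_lsf_ucode_sketch lsf[MAX_LSF_SKETCH];
  };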
JIRA NVGPU-1148
Change-Id: Ie206e129e3db838041db44d5227ab76a1de991c8
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012763
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>