Extract the HAL ops' implementations that now belong to the channel
unit. This unit is responsible for channel register accesses and the
like (ccsr_*).
Rename channel_gm20b_bind to gm20b_fifo_channel_bind to match the
rest of the naming; do the same for channel_gv11b_unbind.
Jira NVGPU-1307
Change-Id: I58b9d96dbdaf36bdb163a5729544a41faec828ab
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017262
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split out the ops that belong to the channel unit into a new HAL
section called channel. Channel is a broad concept; this section
includes just the code that accesses channel registers (ccsr_*). This
is effectively just renaming; the implementations stay put.
The word "channel" is also dropped from certain HAL entries to avoid
redundancy (e.g., channel.disable_channel -> channel.disable).
fifo.get_num_fifos gets an entirely new name: channel.count.
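For illustration, the reshaped HAL could be pictured roughly as below; the
member types and the exact set of entries are assumptions, not the real
gpu_ops definition:

/* Sketch only: channel sub-struct of gpu_ops; names/types are assumed. */
#include <stdint.h>

struct gk20a;
struct channel_gk20a;

struct gpu_ops_channel_sketch {
    /* former fifo entries that touch ccsr_* channel registers */
    void (*bind)(struct channel_gk20a *ch);
    void (*unbind)(struct channel_gk20a *ch);
    void (*enable)(struct gk20a *g, uint32_t chid);
    void (*disable)(struct gk20a *g, uint32_t chid);  /* was disable_channel */
    uint32_t (*count)(struct gk20a *g);               /* was fifo.get_num_fifos */
};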
Jira NVGPU-1307
Change-Id: I9a08103e461bf3ddb743aa37ababee3e0c73c861
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently clk_arb needs PSTATE to be true for dGPU; setting PSTATE
to FALSE causes an issue, as clk_arb fails. There is no such
dependency on PSTATE for iGPU.
Unify this with a call to check_clk_arb_support(), implemented
per platform according to the PSTATE dependency on iGPU and dGPU.
check_clk_arb_support() returns true if clk_arb is supported, else false.
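For illustration only, the per-platform checks might look roughly like this;
the helper declaration and flag wiring are assumptions:

/* Sketch: per-platform clk_arb support checks; names are assumptions. */
#include <stdbool.h>

struct gk20a;
bool nvgpu_is_enabled(struct gk20a *g, int flag);  /* assumed helper */
#define SKETCH_PSTATE_FLAG 1                       /* placeholder flag id */

/* dGPU: clk_arb depends on PSTATE support */
bool dgpu_check_clk_arb_support_sketch(struct gk20a *g)
{
    return nvgpu_is_enabled(g, SKETCH_PSTATE_FLAG);
}

/* iGPU: no PSTATE dependency */
bool igpu_check_clk_arb_support_sketch(struct gk20a *g)
{
    (void)g;
    return true;
}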
Jira NVGPU-1948
Change-Id: I108dc12bd6ad8d0e074352080c978b7dda9bee05
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2014775
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gr_gk20a_update_hwpm_ctxsw_mode() currently validates the incoming
hwpm mode, checks whether it is already set, and if not, sets the new
hwpm mode by calling the g->ops.gr.ctxsw_prog HALs.
Instead of programming the hwpm mode in gr_gk20a.c, move the programming
to the gr/ctx and gr/subctx units by adding the APIs below:
nvgpu_gr_ctx_prepare_hwpm_mode() - validate the incoming mode and
check if it is already set
nvgpu_gr_ctx_set_hwpm_mode() - set pm mode in graphics context
nvgpu_gr_subctx_set_hwpm_mode() - set pm mode in subcontext
Add gpu_va field to struct pm_ctx_desc to store the gpu_va to be
programmed into context
Rename NVGPU_DBG_HWPM_CTXSW_MODE_* to NVGPU_GR_CTX_HWPM_CTXSW_MODE_*
and move them to gr/ctx.h
Remove the HALs below since they are no longer used:
g->ops.gr.ctxsw_prog.set_pm_mode_no_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_stream_out_ctxsw()
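In sketch form, the resulting interfaces could be declared roughly as
follows; the parameter lists are guesses for illustration, only the names
and mode constants come from the description above:

/* Sketch only: prototypes and parameters are assumptions. */
#include <stdbool.h>

struct gk20a;
struct nvgpu_gr_ctx;
struct nvgpu_gr_subctx;

enum {
    NVGPU_GR_CTX_HWPM_CTXSW_MODE_NO_CTXSW,
    NVGPU_GR_CTX_HWPM_CTXSW_MODE_CTXSW,
    NVGPU_GR_CTX_HWPM_CTXSW_MODE_STREAM_OUT_CTXSW,
};

/* validate the incoming mode and check whether it is already set */
int nvgpu_gr_ctx_prepare_hwpm_mode(struct gk20a *g, struct nvgpu_gr_ctx *ctx,
                                   unsigned int mode, bool *skip_update);
/* set pm mode in the graphics context */
void nvgpu_gr_ctx_set_hwpm_mode(struct gk20a *g, struct nvgpu_gr_ctx *ctx);
/* set pm mode in the subcontext */
void nvgpu_gr_subctx_set_hwpm_mode(struct gk20a *g,
                                   struct nvgpu_gr_subctx *subctx);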
Jira NVGPU-1527
Jira NVGPU-1613
Change-Id: Id2a4d498182ec0e3586dc7265f73a25870ca2ef7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011093
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A comment for gk20a_fifo_update_runlist() says:
/* add/remove a channel from runlist
special cases below: runlist->active_channels will NOT be changed.
(ch == NULL && !add) means remove all active channels from runlist.
(ch == NULL && add) means restore all active channels on runlist. */
Those special cases call for a new function, so add that. Delete the
update_runlist HAL op and add update_for_channel (like update_runlist
without the special cases) and reload (no channel to add or remove, just
the special cases).
While at it, rename gk20a_fifo_update_runlist_ids to
nvgpu_runlist_reload_ids. It's common across chips and does what the
reload HAL does but for a list of several IDs.
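A rough sketch of the reshaped runlist HAL; the signatures below are
illustrative assumptions, not the actual op definitions:

/* Sketch only: member names and signatures are assumed. */
#include <stdbool.h>
#include <stdint.h>

struct gk20a;
struct channel_gk20a;

struct runlist_ops_sketch {
    /* add/remove one channel; no NULL-channel special cases anymore */
    int (*update_for_channel)(struct gk20a *g, uint32_t runlist_id,
                              struct channel_gk20a *ch, bool add,
                              bool wait_for_finish);
    /* the old special cases: restore or clear all active channels */
    int (*reload)(struct gk20a *g, uint32_t runlist_id,
                  bool add_all, bool wait_for_finish);
};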
Jira NVGPU-1922
Change-Id: I9a99ab03a636a1214c021faad359d2b304a9472f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit gr/config to initialize GR configuration such as GPC/TPC
counts, max counts and masks.
Create a new structure nvgpu_gr_config that stores all the configuration
and that is owned by the new unit.
Move the fields below from struct gr_gk20a to nvgpu_gr_config in gr/config.h.
Struct gr_gk20a now only holds a pointer to struct nvgpu_gr_config:
u32 max_gpc_count;
u32 max_tpc_per_gpc_count;
u32 max_zcull_per_gpc_count;
u32 max_tpc_count;
u32 gpc_count;
u32 tpc_count;
u32 ppc_count;
u32 zcb_count;
u32 pe_count_per_gpc;
u32 *gpc_tpc_count;
u32 *gpc_ppc_count;
u32 *gpc_zcb_count;
u32 *pes_tpc_count[GK20A_GR_MAX_PES_PER_GPC];
u32 *gpc_tpc_mask;
u32 *pes_tpc_mask[GK20A_GR_MAX_PES_PER_GPC];
u32 *gpc_skip_mask;
u8 *map_tiles;
u32 map_tile_count;
u32 map_row_offset;
Remove gr->sys_count since it was already unused.
The new common/gr/config/gr_config.c unit exposes the APIs to initialize
the configuration and also to query the configuration values:
nvgpu_gr_config_init() is called from gr_gk20a_init_gr_config() to
initialize the GR configuration, and gr_gk20a_init_map_tiles() is simply
renamed to nvgpu_gr_config_init_map_tiles().
Expose a new API, nvgpu_gr_config_deinit(), to deinitialize the configuration.
Expose nvgpu_gr_config_get_*() APIs to query the above configuration
fields stored in the nvgpu_gr_config structure.
Update vgpu_gr_init_gr_config() to initialize the configuration
from the gr->config structure.
Chip-specific HALs that access GR registers for initialization
are implemented in common/gr/config/gr_config_gm20b.c.
Set these HALs for all GPUs.
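For example, code that used to read gr->gpc_count and gr->gpc_tpc_count[]
directly would now go through the query APIs, roughly along these lines
(getter names follow the nvgpu_gr_config_get_*() pattern above; exact
signatures are assumptions):

/* Sketch: querying GPC/TPC counts via the new config unit. */
#include <stdint.h>

struct nvgpu_gr_config;

uint32_t nvgpu_gr_config_get_gpc_count(struct nvgpu_gr_config *config);
uint32_t nvgpu_gr_config_get_gpc_tpc_count(struct nvgpu_gr_config *config,
                                           uint32_t gpc_index);

uint32_t total_tpc_count_sketch(struct nvgpu_gr_config *config)
{
    uint32_t gpc, tpcs = 0U;

    for (gpc = 0U; gpc < nvgpu_gr_config_get_gpc_count(config); gpc++) {
        tpcs += nvgpu_gr_config_get_gpc_tpc_count(config, gpc);
    }
    return tpcs;
}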
Jira NVGPU-1879
Change-Id: Ided658b43124ea61b9f273b82b73fdde4ed3c8f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012167
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The support-sparse HAL serves only one purpose: return true or false
depending on whether the given chip supports sparse mappings. This
HAL is used, in turn, to program (or not) the
NVGPU_SUPPORT_SPARSE_ALLOCS enabled flag. So instead of going through
all this rigmarole to program the flag, just program it for all native
GPUs. Then, in the vGPU-specific characteristics function, disable it
explicitly. This already has precedent.
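In sketch form the idea is simply the following; the flag name comes from
the message above, the setter's exact signature is an assumption:

/* Sketch: enable sparse allocs for all native GPUs, disable for vGPU. */
#include <stdbool.h>

struct gk20a;
void nvgpu_set_enabled(struct gk20a *g, int flag, bool state);  /* assumed */
#define SKETCH_SUPPORT_SPARSE_ALLOCS 1                          /* placeholder id */

void native_characteristics_sketch(struct gk20a *g)
{
    nvgpu_set_enabled(g, SKETCH_SUPPORT_SPARSE_ALLOCS, true);
}

void vgpu_characteristics_sketch(struct gk20a *g)
{
    /* vGPU disables it explicitly */
    nvgpu_set_enabled(g, SKETCH_SUPPORT_SPARSE_ALLOCS, false);
}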
JIRA NVGPU-1737
JIRA NVGPU-1934
Change-Id: I630928ad656aaffc09fdc6b7fec9fc423aa94c38
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2006796
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 21.2 forbids the usage of identifier names which start with
an underscore. This is because identifier names which start with an
underscore are reserved for the C library. This patch fixes rule 21.2
issues in vgpu header guard names.
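A hypothetical example of the kind of rename involved (the guard name below
is made up, not an actual vgpu header):

/* Before: leading underscore, reserved for the C library (MISRA 21.2) */
#ifndef _VGPU_EXAMPLE_H_
#define _VGPU_EXAMPLE_H_
/* ... */
#endif

/* After: no leading underscore in the guard name */
#ifndef VGPU_EXAMPLE_H
#define VGPU_EXAMPLE_H
/* ... */
#endif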
JIRA NVGPU-1028
Change-Id: I89b450f0c1960ad93392971dcd6ef3c15d2460db
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011872
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit, common/gr/subctx.c, to manage GR subcontexts.
This unit provides interfaces to allocate/free/load a GR subcontext.
Add a new header file, include/nvgpu/gr/subctx.h, to declare all the
interfaces.
Right now the channel_gk20a structure directly includes an nvgpu_mem
for the context header.
Declare a new structure, nvgpu_gr_subctx, for the subcontext and include
it from channel_gk20a.
Make all necessary changes to reference ctx_header through the subctx
instead of directly from the channel.
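Conceptually the change looks roughly like this; beyond the ctx_header
mentioned above, the layout is an assumption:

/* Sketch only: subcontext wrapper owning the context header memory. */
struct nvgpu_mem;                           /* nvgpu memory descriptor */

struct nvgpu_gr_subctx_sketch {
    struct nvgpu_mem *ctx_header;           /* was held directly by channel_gk20a */
};

struct channel_gk20a_sketch {
    struct nvgpu_gr_subctx_sketch *subctx;  /* channel now points at the subctx */
};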
Jira NVGPU-1613
Change-Id: I9eb1ee8f26fa88d2881f9b294935b65e9cbcc9b4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990129
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a_fifo_update_runlist_locked() and the vgpu counterpart do three
things:
1. find out whether there's a channel to add or remove (if not, the
whole runlist is just reconstructed or cleared),
2. reconstruct the runlist format for hardware (or the vgpu message),
3. actually update the runlist to hw and maybe wait for finish.
Split out the first two operations into separate functions to make the
code easier to understand. Now it's also clearer that the "add"
parameter behaves completely differently depending on whether the
channel pointer is NULL or not.
Also ignore (with a warning) channels not bound to a tsg. We shouldn't
get runlist updates on such channels. This simplifies the control flow a
bit.
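The resulting flow, very roughly (the helper names and signatures below are
illustrative only):

/* Sketch of the split; names and types are illustrative. */
#include <stdbool.h>
#include <stdint.h>

struct gk20a;
struct channel_gk20a;

bool runlist_modify_active_sketch(struct gk20a *g, uint32_t runlist_id,
                                  struct channel_gk20a *ch, bool add);
int runlist_reconstruct_sketch(struct gk20a *g, uint32_t runlist_id);
int runlist_commit_sketch(struct gk20a *g, uint32_t runlist_id,
                          bool wait_for_finish);

int update_runlist_locked_sketch(struct gk20a *g, uint32_t runlist_id,
                                 struct channel_gk20a *ch, bool add,
                                 bool wait_for_finish)
{
    bool active_changed;
    int err;

    /* 1. find out whether a single channel is added/removed; with ch == NULL
     *    the whole runlist is reconstructed (add) or cleared (!add) */
    active_changed = runlist_modify_active_sketch(g, runlist_id, ch, add);
    (void)active_changed;  /* the real code can skip work based on this */

    /* 2. rebuild the hw (or vgpu message) runlist format */
    err = runlist_reconstruct_sketch(g, runlist_id);
    if (err != 0) {
        return err;
    }

    /* 3. submit to hw and optionally wait for completion */
    return runlist_commit_sketch(g, runlist_id, wait_for_finish);
}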
Jira NVGPU-1309
Jira NVGPU-1922
Change-Id: I478c33eb2bd154c05a6c8c4148e4fea528a39a3e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2007473
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Remove handling for channels that are no longer bound to a tsg,
as a channel can be referenceable yet no longer be part of a tsg
- Use tsg_gk20a_from_ch to get the tsg pointer for a given channel
(see the sketch below)
- Clear unhandled gr interrupts
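The usage pattern ends up roughly as below; the surrounding error-handling
details are assumptions:

/* Sketch: resolve the tsg from a channel and bail out if unbound. */
#include <stddef.h>

struct channel_gk20a;
struct tsg_gk20a;

struct tsg_gk20a *tsg_gk20a_from_ch(struct channel_gk20a *ch);

int handle_gr_error_sketch(struct channel_gk20a *ch)
{
    struct tsg_gk20a *tsg = tsg_gk20a_from_ch(ch);

    if (tsg == NULL) {
        /* channel is referenceable but no longer part of a tsg;
         * nothing to recover here */
        return 0;
    }
    /* ... recover/handle via the tsg ... */
    return 0;
}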
Bug 2429295
JIRA NVGPU-1580
Change-Id: I9da43a2bc9a0282c793b9f301eaf8e8604f91d70
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972492
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gv11b_mm_l2_flush was not checking error codes from the various
functions it was calling. MISRA Rule-17.7 requires the return value
of all functions to be used. This patch now checks return values and
propagates the error upstream.
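A condensed sketch of the pattern; the helper names below stand in for the
actual calls made inside gv11b_mm_l2_flush():

/* Sketch: check and propagate return values instead of ignoring them. */
struct gk20a;

int l2_flush_step_a_sketch(struct gk20a *g);   /* stand-ins for the real calls */
int l2_flush_step_b_sketch(struct gk20a *g);

int gv11b_mm_l2_flush_sketch(struct gk20a *g)
{
    int err;

    err = l2_flush_step_a_sketch(g);
    if (err != 0) {
        return err;            /* propagate upstream, don't drop it */
    }

    err = l2_flush_step_b_sketch(g);
    return err;
}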
JIRA NVGPU-677
Change-Id: I9005c6d3a406f9665d318014d21a1da34f87ca30
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998809
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A naked channel ID does not carry good information about channel
validity and is a very low-level construct for an API at this level.
Refactor the runlist-updating fifo APIs to take a channel pointer.
While at it, delete the channel and wait_for_finish parameters from
gk20a_fifo_update_runlist_ids() - the only callers are suspend and resume,
and the parameters were always NULL for the channel and true for wait.
Jira NVGPU-1309
Jira NVGPU-1737
Change-Id: Ied350bc8e482d8e311cc708ab0c7afdf315c61cc
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Sema cmdbuf specific functions are only for the sync functionality
of nvgpu and do not belong to fifo.
Construct files sema_cmdbuf_gv11b.h and sema_cmdbuf_gv11b.c
under common/sync to contain the sema specific cmdbuf functions
for arch gv11b.
Jira NVGPU-1308
Change-Id: I440847e8b996e0956d81fe6cdde331937deda40e
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1975923
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Sema cmdbuf specific functions are only for the sync functionality
of nvgpu and do not belong to fifo.
Construct files sema_cmdbuf_gk20a.h and sema_cmdbuf_gk20a.c
under common/sync to contain the sema specific cmdbuf functions
for arch gk20a.
Jira NVGPU-1308
Change-Id: Iebeebe7a3de627f2de08d4ced74bb1aabf1eb53c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1975922
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Syncpt cmdbuf specific functions are only for the sync functionality
of nvgpu and do not belong to fifo.
Construct files syncpt_cmdbuf_gv11b.h and syncpt_cmdbuf_gv11b.c under
common/sync to contain the syncpt specific cmdbuf functions for arch gv11b.
The word 'fifo' is also removed from the name of these functions.
Jira NVGPU-1308
Change-Id: I4253fd04b5f2ae48611ea501a9abf2b0e42a2c0e
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1975921
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Syncpt cmdbuf specific functions are only for the sync functionality of
nvgpu and do not belong to fifo.
Construct files syncpt_cmdbuf_gk20a.h and syncpt_cmdbuf_gk20a.c under
common/sync to contain the syncpt specific cmdbuf functions for arch
gk20a.
The word 'fifo' is also removed from the name of these functions.
Jira NVGPU-1308
Change-Id: I1a1fd1d31f7decd1398f8e2ff625f95cf1f55033
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1975920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Sync cmdbuf specific ops pointers are moved into a new struct sync_ops
under the parent struct gpu_ops. The HAL assignments to the gk20a and
gv11b versions are updated to match the new struct type.
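In outline, the grouping could look like this; the member names and
signatures are illustrative, not the exact ops:

/* Sketch only: sync ops grouped under gpu_ops; members are assumed. */
#include <stdbool.h>
#include <stdint.h>

struct gk20a;
struct priv_cmd_entry;
struct nvgpu_semaphore;

struct gpu_ops_sync_sketch {
    struct {
        uint32_t (*get_sync_cmd_size)(void);
        void (*add_sync_cmd)(struct gk20a *g, struct nvgpu_semaphore *s,
                             uint64_t sema_va, struct priv_cmd_entry *cmd,
                             uint32_t off, bool acquire, bool wfi);
    } sync;
};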
Jira NVGPU-1308
Change-Id: I1d9832ed5e938cb65747f0f6d34088552f75e2bc
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1975919
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Avoid including the HW headers directly in the HAL listings: add
indirection functions for the two ops that were naked:
- runlist.eng_runlist_base_size
- runlist.runlist_entry_size
GV100 gets a new fifo HAL file, as base_size is the first (and
currently the only) GV100-specific op.
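The indirection itself is trivial; a sketch (the hw accessor names below are
placeholders for the generated-header functions, not their real names):

/* Sketch: wrap generated hw accessors so HAL listings avoid hw headers. */
#include <stdint.h>

uint32_t hw_runlist_entry_size_v(void);      /* placeholder hw_ram_*.h accessor */
uint32_t hw_eng_runlist_base_size_v(void);   /* placeholder hw_fifo_*.h accessor */

uint32_t gk20a_runlist_entry_size_sketch(void)
{
    return hw_runlist_entry_size_v();
}

uint32_t gv100_eng_runlist_base_size_sketch(void)
{
    return hw_eng_runlist_base_size_v();
}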
Jira NVGPU-1309
Change-Id: Idf28b5e26c798457132ef595fa55c65bcddb1b31
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997826
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move regops (gk20a/regops_gk20a.c) to a separate unit, common/regops/regops.c.
Move the corresponding header (gk20a/regops_gk20a.h) to include/nvgpu/regops.h.
Move the rest of the platform HAL files to common/regops/ as well.
Fix all the header includes to use the new public header.
Remove the *_apply_smpc_war() declarations from headers. The corresponding
functions were already cleaned up, and the declarations had somehow been
left behind.
Jira NVGPU-620
Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Regops does not logically depend on a debug session.
Right now we include debugger.h in regops_gk20a.c to extract the
channel pointer from the debug session and to check whether the session
is for profiling or not.
Update exec_regops_gk20a() to receive the channel pointer and profiler
flag directly as parameters, and remove dbg_session_gk20a from the
parameter list.
Remove the ((!dbg_s->is_profiler) && (ch != NULL)) checks from
check_whitelists(). The caller of exec_regops_gk20a() already ensures
that a context is bound for the debug session.
Use only the is_profiler boolean flag in this case, which is sufficient.
Remove the (ch == NULL) check in check_whitelists() for regops of
type gr_ctx. Instead, move this check to an earlier function call,
validate_reg_ops(): if there is a non-zero context operation on
a profiler session, return an error from validate_reg_ops().
Update all subsequent calls with the appropriate parameter list.
Remove the debugger.h include from regops_gk20a.c.
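The interface change, in sketch form (exact parameter order and types are
assumptions):

/* Sketch: regops execution no longer takes a debug session. */
#include <stdbool.h>
#include <stdint.h>

struct gk20a;
struct channel_gk20a;
struct nvgpu_dbg_reg_op;

/* before (roughly): the session carried both the channel and the
 * profiler flag:
 *   exec_regops_gk20a(struct dbg_session_gk20a *dbg_s, ...);
 * after (roughly): the caller passes them explicitly: */
int exec_regops_sketch(struct gk20a *g, struct channel_gk20a *ch,
                       bool is_profiler, struct nvgpu_dbg_reg_op *ops,
                       uint64_t num_ops);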
Jira NVGPU-620
Change-Id: If857c21da1a43a2230c1f7ef2cc2ad6640ff48d9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997868
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore was present to indicate that this function is a
function with potentially unintended side effects (enabling a feature
has wide ranging impact).
To avoid losing this documentation, a comment was added to convey that
this function must be used with care.
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently all the global context buffers are mapped into each graphics
context. Move all the mapping/unmapping support to the gr/ctx unit since
the mappings are owned by the context itself.
Add nvgpu_gr_ctx_map_global_ctx_buffers(), which maps all the global
context buffers into the given gr_ctx.
Add nvgpu_gr_ctx_get_global_ctx_va(), which returns the VA of the mapping
for the requested index.
Remove g->ops.gr.map_global_ctx_buffers() since it is no longer
required. Also remove the APIs below:
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_unmap_global_ctx_buffers()
gr_tu104_map_global_ctx_buffers()
Remove global_ctx_buffer_size from nvgpu_gr_ctx since it is no
longer used
Jira NVGPU-1527
Change-Id: Ic185c03757706171db0f5a925e13a118ebbdeb48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987739
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new unit common/gr/ctx.c to manage GR context
This unit provides interfaces to allocate/free/map/unmap GR context,
patch context, pm context, ctxsw {preempt/spill/betacb/pagepool/rtvcb}
buffers.
It also provides APIs to set the sizes of the above buffers.
Add new header file include/nvgpu/gr/ctx.h to declare all the interfaces.
Move nvgpu_gr_ctx, patch_desc, pm_ctx_desc, zcull_ctx_desc structures
to this unit
Add new structure nvgpu_gr_ctx_desc to hold context description
parameters. For now we add sizes of all the buffers here.
Add this structure to gr_gk20a for global reference
Remove gr_gp10b_alloc_buffer() since it is no longer used
Rename g->ops.gr.alloc_gfxp_rtv_cb() to g->ops.gr.init_gfxp_rtv_cb()
since this HAL now only sets the size of rtvcb ctxsw buffer
Remove gr->ctx_vars.buffer_size and gr->ctx_vars.buffer_total_size
since they were redundant. We already have gr->ctx_vars.golden_image_size
to denote golden image size
Jira NVGPU-1527
Change-Id: I8847b347f80235209dd5e28d979e79984ab85408
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split native-specific engine info collection out of
nvgpu_init_runlist() so that it contains only common code. Call this
common function from the vgpu code, which ends up being identical.
Jira NVGPU-1309
Change-Id: I9e83669c84eb6b145fcadb4fa6e06413b34e1c03
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978060
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Extract non-chip-specific code that manages the runlists (init, update,
reschedule etc.) to a new file in the common directory. Move the
declarations to a new matching runlist.h header.
Jira NVGPU-1309
Change-Id: I3c7e0032899516487037f47ddc9a7e7aa4b0b33a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With the intention of making the falcon header free of private data, make
all falcon struct members (pmu.flcn, sec2.flcn, fecs_flcn, gpccs_flcn,
nvdec_flcn, minion_flcn, gsp_flcn) in struct gk20a pointers to struct
nvgpu_falcon. The falcon structures are allocated/deallocated by
falcon_sw_init & _free respectively.
While at it, remove the duplicate gk20a.pmu_flcn and gk20a.sec2_flcn,
refactor the flcn_id assignment and introduce falcon_hal_sw_free.
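The ownership pattern, roughly; the struct members follow the message above
while the bodies are illustrative and use standard allocators in place of
nvgpu's:

/* Sketch: falcon instances become heap-allocated, owned by sw init/free. */
#include <stdlib.h>

struct nvgpu_falcon { int placeholder; };   /* opaque in the real code */

struct gk20a_sketch {
    struct nvgpu_falcon *pmu_flcn;
    struct nvgpu_falcon *fecs_flcn;
    /* ... sec2, gpccs, nvdec, minion, gsp ... */
};

void falcon_sw_free_sketch(struct gk20a_sketch *g)
{
    free(g->pmu_flcn);
    free(g->fecs_flcn);
    g->pmu_flcn = NULL;
    g->fecs_flcn = NULL;
}

int falcon_sw_init_sketch(struct gk20a_sketch *g)
{
    g->pmu_flcn = calloc(1, sizeof(*g->pmu_flcn));
    g->fecs_flcn = calloc(1, sizeof(*g->fecs_flcn));
    if (g->pmu_flcn == NULL || g->fecs_flcn == NULL) {
        falcon_sw_free_sketch(g);   /* clean up partial allocation */
        return -1;
    }
    return 0;
}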
JIRA NVGPU-1594
Change-Id: I222086cf28215ea8ecf9a6166284d5cc506bb0c5
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1968242
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new unit common/gr/global_ctx.c to manage GR global context buffers
This unit provides interfaces to allocate/free/map/unmap all the global
context buffers. It also provides APIs to get/set the size of the buffers,
and to get the memory handle of the buffers.
Use interfaces exposed by this unit instead of directly accessing global
context buffers in common code
Add new header file include/nvgpu/gr/global_ctx.h to declare all the
interfaces.
Rename "struct gr_ctx_buffer_desc" to "struct nvgpu_gr_global_ctx_buffer_desc"
which holds all data for each global context
Remove void *priv since it is no longer used
Add size to the desc structure to store the requested size
Remove global_ctx_buffer_size from struct nvgpu_gr_ctx since it is no longer
used for any real purpose
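In outline, with only the fields called out above (types are assumptions):

/* Sketch of the renamed descriptor; field types are assumed. */
#include <stddef.h>

struct nvgpu_mem;

struct nvgpu_gr_global_ctx_buffer_desc_sketch {
    struct nvgpu_mem *mem;   /* backing memory of one global ctx buffer */
    size_t size;             /* requested size, set before allocation */
    /* note: the old void *priv member is gone */
};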
Jira NVGPU-1625
Change-Id: I3feaf47bc2fdf192f36b136f2ef80a49d1782c5d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1977884
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a_gr_handle_fecs_error(), we currently check the error code in
the mailbox to identify whether we hit the timestamp-buffer-full error
interrupt. This error code is currently hard-coded to 0x26, but the
Turing ucode sets it to 0x32.
Add a new HAL, g->ops.fecs_trace.get_buffer_full_mailbox_val(), to get
the correct error code per platform, and use it in
gk20a_gr_handle_fecs_error().
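The HAL boils down to returning a per-chip constant, something like the
following; the 0x26/0x32 values are from the message above, the per-chip
function naming is assumed:

/* Sketch: per-chip mailbox value meaning "timestamp buffer full". */
#include <stdint.h>

uint32_t gk20a_fecs_trace_buffer_full_mailbox_val_sketch(void)
{
    return 0x26U;
}

uint32_t tu104_fecs_trace_buffer_full_mailbox_val_sketch(void)
{
    return 0x32U;
}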
Bug 200471541
Bug 2469604
Change-Id: I7325354b39d35b1c8b218e554814316d22950469
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978144
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update the fifo code to use the HALs exposed by the "Top" unit to
read data from the device_info table.
The information for the GRAPHICS engine in the device_info table is
now parsed using the get_device_info HAL from the "Top" unit.
Copy engine (CE) has multiple entries in the device_info table, one
per instance of the engine. Prior to Pascal, each
instance of an engine was denoted by a different engine type.
For example in GM20B, there are engine types like COPY_ENGINE0,
COPY_ENGINE1 and so on. In Pascal and chips beyond, a new field
called "inst_id" is added and the engine_type is kept the same for
different instances of an engine. For example in GP10B, all copy
engine entries have same engine type i.e ENGINE_LCE, but different
inst_ids. So for Pascal and chips beyond, we use a different HAL to
get CE information from device_info table.
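Schematically, the difference in how CE entries are distinguished (the struct
layout and constants below are illustrative, not the real device_info format):

/* Sketch: pre-Pascal CE instances differ by engine type; Pascal+ share
 * one type (ENGINE_LCE-style) and differ by inst_id. Names are made up. */
#include <stdbool.h>
#include <stdint.h>

struct device_info_entry_sketch {
    uint32_t engine_type;
    uint32_t inst_id;        /* meaningful on Pascal and later only */
};

enum {
    SKETCH_COPY_ENGINE0,     /* gm20b-style: one type per instance */
    SKETCH_COPY_ENGINE1,
    SKETCH_ENGINE_LCE,       /* gp10b-style: one type, instances via inst_id */
};

bool is_ce_instance_sketch(const struct device_info_entry_sketch *e,
                           bool pascal_or_later, uint32_t wanted_inst)
{
    if (pascal_or_later) {
        return e->engine_type == SKETCH_ENGINE_LCE &&
               e->inst_id == wanted_inst;
    }
    return e->engine_type == SKETCH_COPY_ENGINE0 + wanted_inst;
}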
JIRA NVGPU-1053
Change-Id: Ib40a616d903a5dbef5730678c2ebc3454b8e900d
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1969400
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Follow-up change to rename g->ops.mm.bar1_map (and its implementations)
to the more specific g->ops.mm.bar1_map_userd.
Also use nvgpu_big_zalloc() to allocate the userd slab memory descriptors.
Bug 2422486
Bug 200474793
Change-Id: Iceff3bd1d34d56d3bb9496c179fff1b876b224ce
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970891
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>