A naked channel ID carries no information about the channel's
validity and is too low-level a construct for an API at this layer.
Refactor the runlist updating fifo APIs to take a channel pointer.
While at it, delete the channel and wait_for_finish parameters from
gk20a_fifo_update_runlist_ids(): its only callers are suspend and
resume, which always passed NULL for the channel and true for
wait_for_finish.
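For reference, a rough before/after sketch of the API shape; the
signatures and parameter names below are approximations, not the
verbatim nvgpu declarations:

  #include <stdbool.h>

  typedef unsigned int u32;    /* stand-in for <nvgpu/types.h> */
  struct gk20a;
  struct channel_gk20a;

  /* Before: a raw channel ID (chid) says nothing about validity.
   *
   * int gk20a_fifo_update_runlist(struct gk20a *g, u32 runlist_id,
   *         u32 chid, bool add, bool wait_for_finish);
   */

  /* After: a channel pointer is a validity-carrying handle. */
  int gk20a_fifo_update_runlist(struct gk20a *g, u32 runlist_id,
          struct channel_gk20a *ch, bool add, bool wait_for_finish);

  /* gk20a_fifo_update_runlist_ids() drops channel and wait_for_finish;
   * suspend/resume always passed NULL and true respectively. */
  int gk20a_fifo_update_runlist_ids(struct gk20a *g, u32 runlist_ids,
          bool add);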
Jira NVGPU-1309
Jira NVGPU-1737
Change-Id: Ied350bc8e482d8e311cc708ab0c7afdf315c61cc
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Sync cmdbuf specific ops pointers are moved into a new struct
sync_ops under the parent struct gpu_ops. The HAL assignments for the
gk20a and gv11b versions are updated to match the new struct type.
Jira NVGPU-1308
Change-Id: I1d9832ed5e938cb65747f0f6d34088552f75e2bc
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1975919
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function gk20a_fifo_set_runlist_state() was moved some time ago,
but the declaration didn't follow the implementation. Move the
declaration from fifo_gk20a.h to runlist.h.
Jira NVGPU-1309
Change-Id: Ib939a5243cee4be1c1092a553cb81b81adc6e5ce
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997825
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move regops (gk20a/regops_gk20a.c) to a separate unit,
common/regops/regops.c.
Move the corresponding header (gk20a/regops_gk20a.h) to
include/nvgpu/regops.h.
Move the rest of the per-platform HAL files to common/regops/ as well.
Fix all header includes to point to the new public header.
Remove the *_apply_smpc_war() declarations from headers. The
corresponding functions were cleaned up already; the declarations
were left behind.
Jira NVGPU-620
Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Previously, with the DMEM queue, a command that needed an in/out
payload required space to be allocated in DMEM or an FB surface, and
the payload to be copied into that space before the command was sent,
with the payload info supplied in the command.
- With FBQ, the command in/out payload is part of the FB command
queue element itself, so no separate space in DMEM or an FB surface
is needed. Add changes to handle FBQ payload requests when sending a
command, and in the response handler to extract data from the out
payload.
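A rough sketch of the idea; the element layout and sizes below are
assumptions, the point being that the payload travels inside the
queue element rather than in a separately allocated buffer:

  #include <string.h>

  typedef unsigned int u32;
  typedef unsigned char u8;

  /* hypothetical FBQ element layout */
  struct fbq_element {
      u8 cmd[0x40];        /* command header + body */
      u8 payload[0xc0];    /* in/out payload lives here */
  };

  /* send path: copy the in-payload straight into the element */
  static void fbq_fill_element(struct fbq_element *e,
          const void *cmd, u32 cmd_size,
          const void *in_payload, u32 in_size)
  {
      memcpy(e->cmd, cmd, cmd_size);
      if (in_payload != NULL)
          memcpy(e->payload, in_payload, in_size);
  }

The response handler likewise reads the out payload back from the
element instead of from a DMEM/FB-surface allocation.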
JIRA NVGPU-1579
Change-Id: Ic256523db38badb1f9c14cbdb98dc9f70934606d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966741
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Add the NVGPU_SUPPORT_PMU_RTOS_FBQ feature flag to enable FBQ
support.
- Add support to read the PMU RTOS init message from the FBQ message
queue, process it, and construct the FBQ for further communication
with the PMU RTOS ucode.
- Add functions to init the FB command/message queues from the init
message inputs provided by the PMU RTOS ucode.
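The flag gates the init path like any other nvgpu enable flag; a
standalone sketch (the surrounding function and the flag's index are
illustrative, nvgpu_is_enabled() is the existing helper):

  #include <stdbool.h>

  struct gk20a;
  bool nvgpu_is_enabled(struct gk20a *g, int flag);
  #define NVGPU_SUPPORT_PMU_RTOS_FBQ 0    /* placeholder index */

  static int pmu_queue_init(struct gk20a *g)
  {
      if (nvgpu_is_enabled(g, NVGPU_SUPPORT_PMU_RTOS_FBQ)) {
          /* read the init message from the FBQ message queue and
           * construct the FB command/message queues from its
           * inputs */
          return 0;
      }
      /* legacy DMEM queue init */
      return 0;
  }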
JIRA NVGPU-1578
Change-Id: If2678d20f7195e6e8cba354b7dca5117003e3c29
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964068
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- The FBQ (command/message queue) is part of the super surface,
which resides in FB.
- FBQ access should go through the generic falcon queue functions,
so add the FBQ-related functions needed under the falcon queue to
support FBQ.
- Expose additional FBQ-related public functions: the command buffer
is constructed in a sysmem buffer and then copied into the actual
FBQ element buffer, and the payload is part of the queue element, so
clients need access to some queue parameters.
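A sketch of the push path under those assumptions; the struct layout
and names are hypothetical:

  #include <string.h>

  typedef unsigned int u32;
  typedef unsigned char u8;

  struct nvgpu_falcon_queue {
      u32 position;        /* next element index */
      u32 element_size;    /* bytes per FBQ element */
      u8 *fb_queue_base;   /* FBQ backing in the super surface */
  };

  /* copy the sysmem-staged command buffer into the live FBQ element;
   * clients need element_size/position, which is why some queue
   * parameters are exposed publicly */
  static void falcon_fbq_push(struct nvgpu_falcon_queue *q,
          const void *staging, u32 size)
  {
      u8 *element = q->fb_queue_base +
              (q->position * q->element_size);

      memcpy(element, staging, size);
      q->position++;
  }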
JIRA NVGPU-1577
Change-Id: I3ae097e378fd162bb779aaae986b2fae306238d9
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1777939
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The Tegra SoC nvlink driver and the dGPU nvlink driver depend on
struct definitions, macros and functions exposed by the nvlink-core
driver. The nvlink-core driver is not part of the nvgpu driver, so
we should not directly access any core driver APIs/macros/structs
from the common/nvlink code; common code can only use nvgpu internal
APIs. We wrap all calls from common/nvlink.c to other drivers in
nvgpu wrappers, implement the wrappers in os/linux and os/nvgpu_rmos,
and stub them in os/posix.
Also, we remove the implicit inclusion of the OS-specific nvlink
header file via the common nvgpu/nvlink.h, so OS-specific code now
needs to include its OS-specific header explicitly.
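The wrapper pattern, sketched with a hypothetical call (the real
wrapper names and the wrapped core-driver APIs differ):

  #include <errno.h>

  /* declared in the common nvgpu/nvlink.h; common code sees only
   * this nvgpu-internal API */
  struct gk20a;
  int nvgpu_nvlink_train(struct gk20a *g, int link_id);

  /* os/posix stub keeps common code building without nvlink-core;
   * os/linux and os/nvgpu_rmos forward to the real core driver */
  int nvgpu_nvlink_train(struct gk20a *g, int link_id)
  {
      (void)g;
      (void)link_id;
      return -ENOSYS;
  }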
JIRA NVGPU-966
Change-Id: I65c67e247ee74088bb1253f6ae4c8d0c49420a98
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990071
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Two new fifo HALs are added: read_pbdma_data and reset_pbdma_header.
On Turing, the instruction that caused the interrupt is stored in
either the NV_PPBDMA_PB_DATA0 register or the NV_PPBDMA_HDR_SHADOW
register; which one is decided by the NV_PPBDMA_PB_COUNT value and
the PB_HEADER type.
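A sketch of how the selection might look; the accessor names and the
exact condition are assumptions, only the two source registers and
the deciding inputs come from this change:

  typedef unsigned int u32;
  struct gk20a;

  /* hypothetical stand-ins for the real register accessors */
  u32 read_pb_count(struct gk20a *g, u32 pbdma_id);
  u32 read_pb_data0(struct gk20a *g, u32 pbdma_id);
  u32 read_hdr_shadow(struct gk20a *g, u32 pbdma_id);
  int pb_header_holds_methods(struct gk20a *g, u32 pbdma_id);

  static u32 read_pbdma_data_sketch(struct gk20a *g, u32 pbdma_id)
  {
      /* assumed condition: a non-empty pushbuffer with a method
       * header means the data is in NV_PPBDMA_PB_DATA0 */
      if (read_pb_count(g, pbdma_id) != 0U &&
              pb_header_holds_methods(g, pbdma_id) != 0)
          return read_pb_data0(g, pbdma_id);
      return read_hdr_shadow(g, pbdma_id);
  }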
JIRA NVGPU-1240
Change-Id: I54a92e317a6054335439d2d61bced28aff3eecb7
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990699
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Regops does not logically depend on a debug session. We currently
include debugger.h in regops_gk20a.c only to extract the channel
pointer from the debug session and to check whether the session is a
profiling session.
Update exec_regops_gk20a() to receive the channel pointer and the
profiler flag directly as parameters, and remove dbg_session_gk20a
from the parameter list.
Remove the ((!dbg_s->is_profiler) && (ch != NULL)) checks from
check_whitelists(). The caller of exec_regops_gk20a() already
ensures that a context is bound for the debug session, so the
is_profiler boolean flag alone is sufficient there.
Remove the (ch == NULL) check in check_whitelists() for regops of
type gr_ctx. Instead, move this check earlier, into
validate_reg_ops(): if there is a non-zero context operation on a
profiler session, return an error from validate_reg_ops().
Update all subsequent calls with the appropriate parameter list.
Remove the debugger.h include from regops_gk20a.c.
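The resulting parameter list, approximately (types and argument order
are assumptions; only the dropped session and the added channel
pointer and profiler flag come from this change):

  #include <stdbool.h>

  typedef unsigned int u32;
  struct gk20a;
  struct channel_gk20a;
  struct nvgpu_dbg_reg_op;

  /* Before:
   * int exec_regops_gk20a(struct dbg_session_gk20a *dbg_s,
   *         struct nvgpu_dbg_reg_op *ops, u32 num_ops);
   */

  /* After: no dbg_session_gk20a in sight. */
  int exec_regops_gk20a(struct gk20a *g, struct channel_gk20a *ch,
          bool is_profiler, struct nvgpu_dbg_reg_op *ops, u32 num_ops);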
Jira NVGPU-620
Change-Id: If857c21da1a43a2230c1f7ef2cc2ad6640ff48d9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997868
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We use the below APIs to update the patch context:
gr_gk20a_ctx_patch_write_begin()
gr_gk20a_ctx_patch_write_end()
gr_gk20a_ctx_patch_write()
Since the patch context is owned by the gr/ctx unit, move these APIs
to that unit and rename them to
nvgpu_gr_ctx_patch_write_begin()
nvgpu_gr_ctx_patch_write_end()
nvgpu_gr_ctx_patch_write()
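Callers keep the same begin/write/end bracket under the new names; a
usage sketch (signatures approximate):

  #include <stdbool.h>

  typedef unsigned int u32;
  struct gk20a;
  struct nvgpu_gr_ctx;

  int nvgpu_gr_ctx_patch_write_begin(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx, bool update_patch_count);
  void nvgpu_gr_ctx_patch_write(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx, u32 addr, u32 data, bool patch);
  void nvgpu_gr_ctx_patch_write_end(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx, bool update_patch_count);

  /* bracket a batch of patch writes */
  static void patch_one_reg(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx, u32 addr, u32 data)
  {
      if (nvgpu_gr_ctx_patch_write_begin(g, gr_ctx, false) == 0) {
          nvgpu_gr_ctx_patch_write(g, gr_ctx, addr, data, true);
          nvgpu_gr_ctx_patch_write_end(g, gr_ctx, false);
      }
  }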
Jira NVGPU-1527
Change-Id: Iee19c7a71d074763d3dcb9b1997cb2a3159d5299
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989214
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently load and create the new graphics context image in
gr_gk20a_load_golden_ctx_image().
This API first loads the local golden image into the new context
image and then initializes the context appropriately by calling the
g->ops.gr.ctxsw_prog HALs.
Move this sequence to the gr/ctx unit and rename the API to
nvgpu_gr_ctx_load_golden_ctx_image().
Note that the call to g->ops.gr.update_ctxsw_preemption_mode() is
moved out of this API and is now called directly from
gk20a_alloc_obj_ctx().
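The resulting caller-side sequencing, sketched with simplified
parameter lists (the real arguments are elided):

  struct gk20a;
  struct nvgpu_gr_ctx;

  /* stand-in declarations with simplified parameter lists */
  int nvgpu_gr_ctx_load_golden_ctx_image(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx);
  void update_ctxsw_preemption_mode(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx);

  /* gk20a_alloc_obj_ctx() now sequences the two steps itself */
  static int alloc_obj_ctx_sketch(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx)
  {
      int err = nvgpu_gr_ctx_load_golden_ctx_image(g, gr_ctx);

      if (err == 0)
          update_ctxsw_preemption_mode(g, gr_ctx);
      return err;
  }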
Jira NVGPU-1527
Change-Id: Id5a5b2cd2c0704fbefe536d581a37a60ec185ea9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989157
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a stub POSIX version of __nvgpu_mem_create_from_phys. This
allows building nvgpu code that is behind the NVHOST compilation
option.
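A stub of roughly this shape keeps POSIX builds compiling; the
signature is approximate and the error value is an assumption:

  #include <errno.h>

  typedef unsigned long long u64;
  struct gk20a;
  struct nvgpu_mem;

  int __nvgpu_mem_create_from_phys(struct gk20a *g,
          struct nvgpu_mem *dest, u64 src_phys, int nr_pages)
  {
      (void)g;
      (void)dest;
      (void)src_phys;
      (void)nr_pages;
      return -EINVAL;  /* no phys-backed memory in the POSIX build */
  }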
JIRA NVGPU-1734
Change-Id: I12d80e69e78d975141d690bc3f37220ecb5bcc89
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993125
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore indicated that this function has potentially
unintended side effects (enabling a feature has wide-ranging impact).
So that this documentation is not lost, a comment was added to
convey that this function must be used with care.
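The renamed declaration with its cautionary comment, approximately:

  #include <stdbool.h>

  struct gk20a;

  /*
   * Use with care: enabling (or disabling) a feature flag has
   * wide-ranging impact across the driver.
   */
  void nvgpu_set_enabled(struct gk20a *g, int flag, bool state);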
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently all the global context buffers are mapped into each
graphics context. Move all the mapping/unmapping support to the
gr/ctx unit, since all the mappings are owned by the context itself.
Add nvgpu_gr_ctx_map_global_ctx_buffers(), which maps all the global
context buffers into the given gr_ctx.
Add nvgpu_gr_ctx_get_global_ctx_va(), which returns the VA of the
mapping for the requested index.
Remove g->ops.gr.map_global_ctx_buffers() since it is no longer
required. Also remove the below APIs:
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_unmap_global_ctx_buffers()
gr_tu104_map_global_ctx_buffers()
Remove global_ctx_buffer_size from nvgpu_gr_ctx since it is no
longer used.
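Approximate shapes of the two new interfaces (the argument lists are
assumptions):

  #include <stdbool.h>

  typedef unsigned int u32;
  typedef unsigned long long u64;
  struct gk20a;
  struct nvgpu_gr_ctx;
  struct vm_gk20a;

  /* map every global context buffer into the given gr_ctx */
  int nvgpu_gr_ctx_map_global_ctx_buffers(struct gk20a *g,
          struct nvgpu_gr_ctx *gr_ctx, struct vm_gk20a *vm, bool vpr);

  /* return the VA of the mapping for the requested buffer index */
  u64 nvgpu_gr_ctx_get_global_ctx_va(struct nvgpu_gr_ctx *gr_ctx,
          u32 index);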
Jira NVGPU-1527
Change-Id: Ic185c03757706171db0f5a925e13a118ebbdeb48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987739
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit, common/gr/ctx.c, to manage the GR context.
This unit provides interfaces to allocate/free/map/unmap the GR
context, patch context, pm context, and ctxsw
{preempt/spill/betacb/pagepool/rtvcb} buffers.
It also provides APIs to set the sizes of the above buffers.
Add a new header file, include/nvgpu/gr/ctx.h, to declare all the
interfaces.
Move the nvgpu_gr_ctx, patch_desc, pm_ctx_desc and zcull_ctx_desc
structures to this unit.
Add a new structure, nvgpu_gr_ctx_desc, to hold context description
parameters. For now we add the sizes of all the buffers here.
Add this structure to gr_gk20a for global reference.
Remove gr_gp10b_alloc_buffer() since it is no longer used.
Rename g->ops.gr.alloc_gfxp_rtv_cb() to g->ops.gr.init_gfxp_rtv_cb(),
since this HAL now only sets the size of the rtvcb ctxsw buffer.
Remove gr->ctx_vars.buffer_size and gr->ctx_vars.buffer_total_size
since they were redundant; gr->ctx_vars.golden_image_size already
denotes the golden image size.
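A sketch of the descriptor; the index names are illustrative, only
"sizes of all the buffers" is given by this change:

  typedef unsigned int u32;

  /* illustrative buffer indices */
  enum {
      NVGPU_GR_CTX_CTX = 0,
      NVGPU_GR_CTX_PATCH_CTX,
      NVGPU_GR_CTX_PREEMPT_CTXSW,
      NVGPU_GR_CTX_SPILL_CTXSW,
      NVGPU_GR_CTX_BETACB_CTXSW,
      NVGPU_GR_CTX_PAGEPOOL_CTXSW,
      NVGPU_GR_CTX_GFXP_RTVCB_CTXSW,
      NVGPU_GR_CTX_COUNT
  };

  struct nvgpu_gr_ctx_desc {
      u32 size[NVGPU_GR_CTX_COUNT];
  };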
Jira NVGPU-1527
Change-Id: I8847b347f80235209dd5e28d979e79984ab85408
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The local golden image is a copy of the global GR context buffer, so
move its ownership to the global context unit.
Add a new structure, nvgpu_gr_global_ctx_local_golden_image, to hold
all metadata for the local golden image, and move it into struct
gr_gk20a.
Expose and use new APIs to initialize/deinitialize and load the
local golden image.
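A sketch of the metadata holder; the fields are assumptions
consistent with "all metadata for the local golden image":

  #include <stddef.h>

  typedef unsigned int u32;

  struct nvgpu_gr_global_ctx_local_golden_image {
      u32 *context;    /* sysmem copy of the golden context image */
      size_t size;     /* size of the copy in bytes */
  };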
Jira NVGPU-1625
Change-Id: Ieb68e52c205ca0ecd27f8bf4bb31922a01e7ae54
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1984952
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>