Move regops (gk20a/regops_gk20a.c) to a separate unit common/regops/regops.c
Move the corresponding header (gk20a/regops_gk20a.h) to include/nvgpu/regops.h
Move the rest of the platform HAL files to common/regops/ as well
Fix all the header includes to include the new public header
Remove *_apply_smpc_war() declarations from headers. The corresponding
functions were already cleaned up, but the declarations were inadvertently left behind
Jira NVGPU-620
Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Earlier, with the DMEM queue, if a command needed an in/out payload,
space had to be allocated in DMEM or an FB surface and the payload
copied into that space before sending the command, with the payload
info supplied as part of the command.
- With FBQ, the command in/out payload is part of the FB command
queue element itself, so no separate space needs to be allocated in
DMEM or an FB surface. Added changes to handle the FBQ payload
request while sending a command, and in the response handler to
extract data from the out payload.
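A minimal sketch of the new in-payload path, assuming a hypothetical
element layout (the real layout is defined by the PMU RTOS ABI; all
names below are placeholders):

    #include <string.h>

    /* hypothetical FBQ element: command and payload share one slot */
    struct fbq_element_sketch {
        unsigned char cmd[64];
        unsigned char payload[192];
    };

    static int fbq_fill_element(struct fbq_element_sketch *e,
                                const void *cmd, size_t cmd_size,
                                const void *in_payload, size_t in_size)
    {
        if (cmd_size > sizeof(e->cmd) || in_size > sizeof(e->payload))
            return -1;
        /* stage the command and its in-payload directly in the
         * element; no separate DMEM/FB-surface allocation needed */
        memcpy(e->cmd, cmd, cmd_size);
        if (in_payload != NULL)
            memcpy(e->payload, in_payload, in_size);
        return 0;
    }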
JIRA NVGPU-1579
Change-Id: Ic256523db38badb1f9c14cbdb98dc9f70934606d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966741
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added the NVGPU_SUPPORT_PMU_RTOS_FBQ feature to enable
FBQ support.
- Added support to read the PMU RTOS init message from the
FBQ message queue, process it, and construct the FBQs for
further communication with the PMU RTOS ucode.
- Added functions to init the FB command/message queues
as per the init message inputs from the PMU RTOS ucode.
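A rough sketch of queue construction from the init message, under
assumed parameter names (the actual fields come from the PMU RTOS
init message ABI):

    /* hypothetical per-queue parameters from the init message */
    struct fbq_init_params_sketch {
        unsigned int queue_offset;   /* offset within the super surface */
        unsigned int element_size;   /* size of one queue element */
        unsigned int element_count;  /* number of elements */
    };

    struct fbq_sketch {
        unsigned int offset;
        unsigned int element_size;
        unsigned int element_count;
        unsigned int head;
        unsigned int tail;
    };

    static void fbq_init_from_msg(struct fbq_sketch *q,
                                  const struct fbq_init_params_sketch *p)
    {
        q->offset = p->queue_offset;
        q->element_size = p->element_size;
        q->element_count = p->element_count;
        q->head = 0U;
        q->tail = 0U;
    }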
JIRA NVGPU-1578
Change-Id: If2678d20f7195e6e8cba354b7dca5117003e3c29
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964068
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- The FBQs (command/message queues) are part of the super surface,
which resides in FB.
- FBQ access should happen through the generic falcon queue
functions, so FBQ-related functions were added under the falcon
queue code as needed to support FBQ.
- Additional FBQ-related public functions are exposed because the
command buffer is constructed in a sysmem buffer that is then copied
into the actual FBQ element buffer, and because the payload is part
of the queue element, which requires clients to access some queue
parameters.
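A sketch of the sysmem-staging push described above; the names and
the circular-index handling are assumptions for illustration:

    #include <string.h>

    /* copy a command staged in sysmem into the next FBQ element
     * that lives in the FB super surface */
    static int fbq_push_sketch(unsigned char *fb_queue_base,
                               unsigned int *tail,
                               unsigned int element_size,
                               unsigned int element_count,
                               const void *staged_cmd, size_t size)
    {
        unsigned char *dst;

        if (size > element_size)
            return -1; /* command must fit in one element */

        dst = fb_queue_base + ((size_t)*tail * element_size);
        memcpy(dst, staged_cmd, size);
        *tail = (*tail + 1U) % element_count;
        return 0;
    }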
JIRA NVGPU-1577
Change-Id: I3ae097e378fd162bb779aaae986b2fae306238d9
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1777939
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The Tegra SoC nvlink driver and the dGPU nvlink driver depend on
struct definitions, macros, and functions exposed by the nvlink-core
driver. The nvlink-core driver is not part of the nvgpu driver,
hence we should not directly access any core driver
APIs/macros/structs from the common/nvlink code. Common code may
only use nvgpu internal APIs. We wrap all calls from common/nvlink.c
to other drivers in nvgpu wrappers, define the implementations of the
wrappers in os/linux and os/nvgpu_rmos, and stub them in os/posix.
Also, we remove the implicit inclusion of the OS-specific nvlink
header file via the common nvgpu/nvlink.h, so OS-specific code now
needs to include its OS-specific header explicitly.
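The wrapper pattern, sketched with a hypothetical wrapper (the real
wrapper names and signatures live in the common nvlink header; the
os/posix stub shown here simply reports the feature as unavailable):

    #include <errno.h>

    struct gk20a;

    /* declared once in the common header; the only call common
     * code is allowed to make */
    int nvgpu_nvlink_link_init_sketch(struct gk20a *g);

    /* os/posix implementation: a stub, since there is no
     * nvlink-core driver to forward to */
    int nvgpu_nvlink_link_init_sketch(struct gk20a *g)
    {
        (void)g;
        return -ENOSYS;
    }

    /* os/linux and os/nvgpu_rmos provide real implementations
     * that forward to the nvlink-core driver */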
JIRA NVGPU-966
Change-Id: I65c67e247ee74088bb1253f6ae4c8d0c49420a98
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990071
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We use the APIs below to update the patch context:
gr_gk20a_ctx_patch_write_begin()
gr_gk20a_ctx_patch_write_end()
gr_gk20a_ctx_patch_write()
Since the patch context is owned by the gr/ctx unit, move these APIs
to that unit and rename them to:
nvgpu_gr_ctx_patch_write_begin()
nvgpu_gr_ctx_patch_write_end()
nvgpu_gr_ctx_patch_write()
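Typical usage after the rename, sketched with assumed arguments
(addr/data are placeholder register offset/value pairs, and the
boolean parameters are assumptions based on the original gr_gk20a_*
signatures):

    /* bracket one or more patch writes with begin/end */
    nvgpu_gr_ctx_patch_write_begin(g, gr_ctx, true);
    nvgpu_gr_ctx_patch_write(g, gr_ctx, addr, data, true);
    nvgpu_gr_ctx_patch_write_end(g, gr_ctx, true);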
Jira NVGPU-1527
Change-Id: Iee19c7a71d074763d3dcb9b1997cb2a3159d5299
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989214
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently load and create a new graphics context image in
gr_gk20a_load_golden_ctx_image().
This API first loads the local golden image into the new context
image and then initializes the context appropriately by calling the
g->ops.gr.ctxsw_prog() HALs.
Move this sequence to the gr/ctx unit and rename the API to
nvgpu_gr_ctx_load_golden_ctx_image().
Note that the call to g->ops.gr.update_ctxsw_preemption_mode()
is moved out of this API and is now called directly from
gk20a_alloc_obj_ctx().
Jira NVGPU-1527
Change-Id: Id5a5b2cd2c0704fbefe536d581a37a60ec185ea9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989157
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
All slave clocks should be quantized as per the step size;
TU104 has a 15 MHz step size.
Enable clk_arb without enabling clk_freq_controller;
clk_freq_controller is not needed for the Auto use case.
Increase the maxclk only when the master clock is less than the
slave clock. This is needed when gpcclk is less than the slave P0 min.
Use get_status to get Vim and use it for the change sequencer.
Add support for Device Events.
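A sketch of the quantization rule; the rounding direction here is an
assumption for illustration:

    #define GPCCLK_STEP_MHZ 15U /* TU104 step size */

    /* quantize a slave clock request down to the step grid */
    static unsigned int clk_quantize_mhz(unsigned int freq_mhz)
    {
        return (freq_mhz / GPCCLK_STEP_MHZ) * GPCCLK_STEP_MHZ;
    }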
Bug 200454682
Bug 2481917
Change-Id: Ie0c404f4b77e41f6a1719b52d6e29a5ac757b41b
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1994831
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vaikundanathan S <vaikuns@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently there is about 3 MB of free space due to the gap between
the WPR and non-WPR offsets, and PMU buffers are allocated partly in
this gap and partly after WPR.
So, modify WPR to be allocated at offset 0 of the VIDMEM bootstrap
region and non-WPR to be at offset WPR + WPR_SIZE, making the free
space contiguous up to the end of the VIDMEM bootstrap region.
Increase the WPR/non-WPR size from 1 MB to 2 MB, as the LS falcon
managed count increased to 4 for Turing; the 2 MB size applies to
previous chips too.
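Resulting bootstrap-region layout (a sketch; offsets shown in terms
of WPR_SIZE):

    0              WPR_SIZE        2*WPR_SIZE                end
    +--------------+---------------+--------------------------+
    |     WPR      |    NON-WPR    |  contiguous free space   |
    +--------------+---------------+--------------------------+
         2 MB            2 MB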
Bug 200476497
Change-Id: I92ca5bc9a571330d75a66ce820a1c82442c1f200
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1994653
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that are used to replace the references to
container_of() in common semaphore code:
* nvgpu_semaphore_pool_from_ref
* nvgpu_semaphore_from_ref
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
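The replacement pattern, sketched on a simplified struct (the real
functions operate on nvgpu_semaphore_pool and nvgpu_semaphore with
their embedded reference members):

    #include <stddef.h>
    #include <stdint.h>

    struct nvgpu_ref_sketch { int refcount; };

    struct nvgpu_semaphore_sketch {
        int value;
        struct nvgpu_ref_sketch ref;
    };

    static struct nvgpu_semaphore_sketch *
    sema_from_ref(struct nvgpu_ref_sketch *ref)
    {
        /* pointer arithmetic via uintptr_t instead of
         * container_of(); this trades the required Rule 11.3
         * violation for an advisory Rule 11.4 violation */
        return (struct nvgpu_semaphore_sketch *)
            ((uintptr_t)ref -
             offsetof(struct nvgpu_semaphore_sketch, ref));
    }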
JIRA NVGPU-782
Change-Id: I79c0b6fc4fa819c92985f2e2239e9d1d7137618d
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1995937
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in a new
(static) function that is used to replace the reference to
container_of() in nvgpu init code:
* gk20a_from_refcount
It should be noted that the replacement function still contains
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
JIRA NVGPU-782
Change-Id: I70698c01906872e6bba368b00f5926c4419671d0
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1995743
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following set
of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that are used to replace references to
container_of() in common channel sync code:
* nvgpu_channel_sync_semaphore_from_ops
* nvgpu_channel_sync_syncpt_from_ops
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
JIRA NVGPU-782
Change-Id: Ib4279c7d28824ce5888838580a54b573323f7152
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993787
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore was present to indicate that this is a function
with potentially unintended side effects (enabling a feature has
wide-ranging impact).
To preserve this documentation, a comment was added to convey that
this function must be used with care.
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that are used to replace the references to
container_of() in clk arb code:
* nvgpu_clk_dev_from_refcount
* nvgpu_clk_session_from_refcount
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
JIRA NVGPU-782
Change-Id: I612990e9c27f10d0ce3ac76729529aa1eb15d42a
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993796
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that are used to replace the references to
container_of() in tsg and fence code:
* tsg_gk20a_from_ref
* gk20a_fence_from_ref
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
JIRA NVGPU-782
Change-Id: Ib5f3b8c7b18b92af8237e82ef5ee42d39c0381e5
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993503
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that are used to replace the references to
container_of() in vm.c:
* nvgpu_mapped_buf_from_ref
* vm_gk20a_from_ref
The implementation of the following routine has also been updated
using the same approach to eliminate the direct reference to
container_of():
* gk20a_from_as
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
JIRA NVGPU-782
Change-Id: I530f8d0cbe37445535b82578953eb5193ccf682c
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989570
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.
To satisfy the requirements of this rule, a "U" suffix is added to all
the integer literals used for initializing the _pginitseq_gv11b array.
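An illustrative before/after entry (placeholder values, not the
actual array contents):

    static const unsigned int seq_before[] = { 0x0010ab0a, 0x00000001 };
    static const unsigned int seq_after[]  = { 0x0010ab0aU, 0x00000001U };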
JIRA NVGPU-844
Change-Id: I6200936455117a6205bd282365d4bc90ee1ccccc
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990492
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.
To satisfy the requirements of this rule, a "U" suffix is added to all
the integer literals used for initializing the _pginitseq_gp10b array.
JIRA NVGPU-844
Change-Id: I573dd94fb0fc354d297fa94ab1d9a74d5a2b1809
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990482
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.
To satisfy the requirements of this rule, a "U" suffix is added to all
the integer literals used for initializing the _pginitseq_gm20b array.
JIRA NVGPU-844
Change-Id: I04f3b4db1601c1d5c21d7c48a8a513db4e12a542
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990479
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently all the global context buffers are mapped into each
graphics context. Move all the mapping/unmapping support to the
gr/ctx unit since all the mappings are owned by the context itself.
Add nvgpu_gr_ctx_map_global_ctx_buffers(), which maps all the global
context buffers into a given gr_ctx.
Add nvgpu_gr_ctx_get_global_ctx_va(), which returns the VA of the
mapping for a requested index.
Remove g->ops.gr.map_global_ctx_buffers() since it is no longer
required. Also remove the APIs below:
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_unmap_global_ctx_buffers()
gr_tu104_map_global_ctx_buffers()
Remove global_ctx_buffer_size from nvgpu_gr_ctx since it is no
longer used
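The shape of the new interface, sketched as declarations (the
signatures and the index parameter are assumptions; plain C types
stand in for nvgpu's u32/u64 typedefs to keep the sketch
self-contained):

    struct gk20a;
    struct vm_gk20a;
    struct nvgpu_gr_ctx;

    /* map all global context buffers into the given gr_ctx */
    int nvgpu_gr_ctx_map_global_ctx_buffers(struct gk20a *g,
                                            struct nvgpu_gr_ctx *gr_ctx,
                                            struct vm_gk20a *vm);

    /* return the VA of the mapping for the requested buffer index */
    unsigned long long nvgpu_gr_ctx_get_global_ctx_va(
                                            struct nvgpu_gr_ctx *gr_ctx,
                                            unsigned int index);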
Jira NVGPU-1527
Change-Id: Ic185c03757706171db0f5a925e13a118ebbdeb48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987739
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit common/gr/ctx.c to manage the GR context.
This unit provides interfaces to allocate/free/map/unmap the GR
context, patch context, PM context, and the ctxsw
{preempt/spill/betacb/pagepool/rtvcb} buffers.
It also provides APIs to set the sizes of the above buffers.
Add a new header file include/nvgpu/gr/ctx.h to declare all the
interfaces.
Move the nvgpu_gr_ctx, patch_desc, pm_ctx_desc, and zcull_ctx_desc
structures to this unit.
Add a new structure nvgpu_gr_ctx_desc to hold context description
parameters; for now it holds the sizes of all the buffers.
Add this structure to gr_gk20a for global reference.
Remove gr_gp10b_alloc_buffer() since it is no longer used.
Rename g->ops.gr.alloc_gfxp_rtv_cb() to g->ops.gr.init_gfxp_rtv_cb()
since this HAL now only sets the size of the rtvcb ctxsw buffer.
Remove gr->ctx_vars.buffer_size and gr->ctx_vars.buffer_total_size
since they were redundant; gr->ctx_vars.golden_image_size already
denotes the golden image size.
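A sketch of the descriptor's contents (the field names are
assumptions; the real definition is in include/nvgpu/gr/ctx.h):

    struct nvgpu_gr_ctx_desc_sketch {
        unsigned int ctx_size;       /* GR context image */
        unsigned int patch_ctx_size; /* patch context */
        unsigned int pm_ctx_size;    /* PM context */
        unsigned int preempt_size;   /* ctxsw preempt buffer */
        unsigned int spill_size;     /* ctxsw spill buffer */
        unsigned int betacb_size;    /* ctxsw betacb buffer */
        unsigned int pagepool_size;  /* ctxsw pagepool buffer */
        unsigned int rtvcb_size;     /* ctxsw RTV circular buffer */
    };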
Jira NVGPU-1527
Change-Id: I8847b347f80235209dd5e28d979e79984ab85408
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The nvgpu-managed syncpoint is not needed for anything if a channel
uses usermode submits; in that case the channel would allocate a
user-managed syncpoint and use that. Create the channel sync in
nvgpu_channel_setup_bind() only if usermode submit is not enabled.
Bug 200466905
Change-Id: I976f4b4fd0c3131cb310c72b286329fb16f1f29a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990270
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The local golden image is a copy of the global GR context buffer,
hence move its ownership to the global context unit.
Add a new structure nvgpu_gr_global_ctx_local_golden_image to hold
all the metadata for the local golden image, and move it to struct
gr_gk20a.
Expose and use new APIs to initialize/deinitialize and load the
local golden image.
Jira NVGPU-1625
Change-Id: Ieb68e52c205ca0ecd27f8bf4bb31922a01e7ae54
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1984952
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The channel timeout ends up in a strange state for a brief moment
during timeout handling; it can become stopped and started again,
and the timeout lock is released in the middle. Add a more explicit
rewind function that resets the timeout to its start if it is
active. The active check allows this to be used from
gk20a_channel_timeout_restart_all_channels(), so that is also
modified.
Also replace the return statements with more readable control flow
in gk20a_channel_timeout_handler().
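A minimal sketch of the rewind semantics, on an illustrative struct
(the real state lives in the channel's timeout data, and callers
hold the timeout lock around this):

    #include <stdbool.h>
    #include <time.h>

    struct chan_timeout_sketch {
        bool running;
        time_t start;
    };

    /* rewind the timeout to "now", but only if it is active */
    static void timeout_rewind_if_active(struct chan_timeout_sketch *t)
    {
        if (t->running)
            t->start = time(NULL);
    }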
Change-Id: Ia7d67242dfc149ace1f4f841a837e90b6c985308
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989327
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add clk arbiter support for TU104:
set up clk_arb supporting functions in hal_tu104.
TU104 supports GPCCLK and not GPC2CLK, so remove the multiplication
and division by 2 used to convert between gpcclk and gpc2clk.
Provide support for the following features:
* Domains: currently GPCCLK is supported
* Clk range: from P0 min to P0 max
* Freq points: gives the VF curve from the PMU
* Default: default value (P0 max)
* Current Pstate: P0 is supported
All requests for a frequency change are validated against the P0
values; out-of-bound values are trimmed to match the Pstate limits.
Multiple requests are supported, and the max of them will be set.
Requests are sent to the PMU via the change sequencer.
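The arbitration rule above, sketched with illustrative names: take
the max of all pending requests and trim it to the P0 limits.

    /* pick the target gpcclk from n pending session requests */
    static unsigned int arbitrate_gpcclk_mhz(const unsigned int *req,
                                             unsigned int n,
                                             unsigned int p0_min,
                                             unsigned int p0_max)
    {
        unsigned int i, target = p0_min;

        for (i = 0U; i < n; i++) {
            if (req[i] > target)
                target = req[i];
        }
        if (target > p0_max)
            target = p0_max; /* trim out-of-bound requests */
        return target;
    }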
Bug 200454682
JIRA NVGPU-1653
Change-Id: I36735fa50c7963830ebc569a2ea2a2d7aafcf2ab
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1982078
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>