Introduce nvgpu_gmmu_map_partial() to map only a given size of a buffer
represented by an nvgpu_mem, which is what nvgpu_gmmu_map() used to do.
Delete the size parameter from nvgpu_gmmu_map() so that it now maps the
entire buffer. The separate size parameter is a historical artifact from
the time before nvgpu_mem existed; the typical use is to map the entire
buffer.
Mapping at a certain address with nvgpu_gmmu_map_fixed() still takes the
size parameter.
The returned address still has to be stored somewhere, typically in
mem.gpu_va, by the caller so that the matching unmap variant finds the
right address.
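For illustration, a hedged sketch of the intended calling convention;
the parameters beyond vm, mem and size are placeholders, not the exact
signatures:

  /* Map the whole buffer; the size now comes from the nvgpu_mem itself. */
  mem->gpu_va = nvgpu_gmmu_map(vm, mem, flags, rw_flag, priv, aperture);

  /* Map only the first 'size' bytes, i.e. what nvgpu_gmmu_map() used to do. */
  mem->gpu_va = nvgpu_gmmu_map_partial(vm, mem, size, flags, rw_flag,
                                       priv, aperture);

  /* Mapping at a fixed GPU VA keeps the explicit size parameter. */
  mem->gpu_va = nvgpu_gmmu_map_fixed(vm, mem, fixed_va, size, flags,
                                     rw_flag, priv, aperture);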
Change-Id: I7d67a0b15d741c6bcee1aecff1678e3216cc28d2
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601788
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Introduce nvgpu_gmmu_unmap_addr() to unmap an nvgpu_mem that was mapped
at an address other than mem.gpu_va, which can be the case for buffers
that are shared across different address spaces. Delete the address
parameter from nvgpu_gmmu_unmap(), as the common case is to store the
address in mem.gpu_va when mapping the buffer.
Modify some instances of consecutive unmap + free calls to call just
nvgpu_dma_unmap_free().
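A hedged sketch of the resulting call patterns; vm and mem stand for the
usual struct vm_gk20a and struct nvgpu_mem pointers:

  /* Common case: the buffer was mapped at mem->gpu_va. */
  nvgpu_gmmu_unmap(vm, mem);

  /* Buffer shared across address spaces, mapped at some other VA. */
  nvgpu_gmmu_unmap_addr(vm, mem, other_gpu_va);

  /* Consecutive unmap + free collapsed into a single helper call. */
  nvgpu_dma_unmap_free(vm, mem);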
Change-Id: Iecd7c9aa41d04e9f48e055f6bc0c9227cd759c69
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601787
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GFxP preemption for graphics contexts is not supported in safety.
However, the support was enabled along with CONFIG_NVGPU_GRAPHICS since
GFxP preemption was protected under the same config.
Add a separate config, CONFIG_NVGPU_GFXP, to protect all GFxP-specific
code, enum values, and HALs.
Disable the config in the safety profile.
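A minimal sketch of the new guard; the function name used here is
hypothetical and only illustrates the pattern applied to GFxP-only code:

  #ifdef CONFIG_NVGPU_GFXP
    /* Compiled only when GFxP preemption support is enabled. */
    err = nvgpu_gr_ctx_alloc_gfxp_buffers(g, gr_ctx);
    if (err != 0) {
      return err;
    }
  #endif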
Jira NVGPU-6893
Change-Id: Iebb5f754a1025dfa6e05a94704bdb8a7123b599a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2534986
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Some of the RTV circular buffer programming is under the GRAPHICS config
and some is under the DGPU config. For nvgpu-next, the RTV circular
buffer is required even for iGPU, so keeping the code under the DGPU
config does not make sense.
Move all of this code from the DGPU config to the GRAPHICS config.
Bug 3159973
Change-Id: I8438cc0e25354d27701df2fe44762306a731d8cd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2524897
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below updates to the common.gr doxygen:
- Add doxygen comments for APIs that are mentioned in the RM SWAD and in
the RM-common.gr traceability document.
- Document valid ranges for the input parameters of a number of
functions.
- Add nvgpu_assert() to ensure that correct values are passed as input
parameters to a number of functions.
- Add references to relevant functions with @see.
- Update the Targets field for unit tests to cover the newly doxygenated
functions.
- Update the unit test test_gr_init_hal_pd_skip_table_gpc to account for
the new asserts added to some APIs.
Jira NVGPU-6180
Change-Id: Ie889bed96b6428b1fd86dcf30b322944464e9d12
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2469397
(cherry picked from commit 5d7d7e9ce1c4efe836ab842d7962a3aee4e8972f)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2469394
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The APIs that set preemption modes currently contain config-based code
to set default preemption modes or to check whether a given preemption
mode is valid. This makes the code complex and hard to read.
Rework nvgpu_gr_obj_ctx_init_ctxsw_preemption_mode() so that it checks
for initial preemption modes at the beginning. If no preemption mode is
passed while allocating the context, get the default preemption modes
with gops.gr.init.get_default_preemption_modes() and use them.
Rework nvgpu_gr_ctx_check_valid_preemption_mode() so that it is more
readable. Use gops.gr.init.get_supported_preemption_modes() to validate
incoming preemption modes against the supported preemption modes.
Log the preemption modes being set in
nvgpu_gr_obj_ctx_set_ctxsw_preemption_mode().
Disable the failing unit test; it will need rework to match the new code.
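A rough sketch of the reworked flow; the variable names and exact HAL
signatures below are assumptions made for illustration only:

  if ((graphics_preempt_mode == 0U) && (compute_preempt_mode == 0U)) {
    /* Nothing requested: fall back to the per-chip defaults. */
    g->ops.gr.init.get_default_preemption_modes(
      &graphics_preempt_mode, &compute_preempt_mode);
  }

  /* Validate the requested modes against what the chip supports. */
  g->ops.gr.init.get_supported_preemption_modes(
    &supported_gfx_modes, &supported_compute_modes);
  if (((graphics_preempt_mode & ~supported_gfx_modes) != 0U) ||
      ((compute_preempt_mode & ~supported_compute_modes) != 0U)) {
    return false;
  }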
Jira NVGPU-5648
Change-Id: Ie1a3e1aeae7826a123e104d9d016f181bea3b271
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2419034
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
To achieve permanent fault coverage, the CTAs launched by
each kernel in the mission and redundant contexts must execute on
different hardware resources. This feature modifies the software so that
the virtual SM id to TPC mapping differs between the mission and
redundant contexts. The virtual SM identifier to TPC mapping is done by
nvgpu when setting up the patch context.
The recommendation for the redundant setting is to offset the
assignment by one TPC, and not by one GPC. This ensures both GPC and
TPC diversity. SM and quadrant diversity will happen
naturally. For kernels with few CTAs, the diversity is guaranteed
to be 100%. In the case of completely random CTA allocation,
e.g. a large number of CTAs in the waiting queue, the diversity is
1 - 1/#SM, i.e. 87.5% for GV11B and 97.9% for TU104.
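A simplified sketch of the offset-by-one-TPC idea; the helper below is
hypothetical, since the real mapping is programmed through the patch
context:

  /*
   * Redundant context: shift each virtual SM id by one TPC so a given
   * CTA lands on a different TPC (and usually a different GPC) than in
   * the mission context.
   */
  static u32 redundant_virtual_sm_to_tpc(u32 virtual_sm_id, u32 num_tpc)
  {
    return (virtual_sm_id + 1U) % num_tpc;
  }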
Add an nvgpu CFLAG, CONFIG_NVGPU_SM_DIVERSITY, to enable/disable the SM
diversity support.
This support is only enabled in the gv11b and tu104 QNX non-safety
builds.
JIRA NVGPU-4685
Change-Id: I8e3eaa72d8cf7aff97f61e4c2abd10b2afe0fe8b
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- nvgpu_gr_ctx_load_golden_ctx_image() does not return any error;
change the return type to void
- Check for preemption modes greater than CILP in
nvgpu_gr_ctx_check_valid_preemption_mode
- Check whether the received class is valid in
nvgpu_gr_setup_set_preemption_mode
- Compile out nvgpu_gr_obj_ctx_init_ctxsw_preemption_mode entirely since
it does nothing in the safety build
- Remove the switch statement in nvgpu_gr_obj_ctx_set_compute_preemption_mode
since it cannot receive any value other than the supported ones.
Previous function calls ensure that the input values are validated.
- nvgpu_gr_obj_ctx_commit_global_ctx_buffers() does not return any
error; change the return type to void
- The gops.gr.init.preemption_state HAL is not needed in the safety
build since it only configures a gfxp-related timeout
- Remove the redundant call to gops.gr.init.wait_idle in
nvgpu_gr_obj_ctx_commit_hw_state; the wait was triggered even after an
earlier failure in the same function.
Jira NVGPU-4457
Change-Id: I06a474ef7cc1b16fbc3846e0cad1cda6bb2bf2af
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2260938
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Set an ENOMEM error if GPU mapping fails in nvgpu_gr_ctx_alloc().
- In nvgpu_gr_ctx_free_patch_ctx(), it is not possible to have a
valid patch buffer without a GPU virtual address. Hence, instead
of checking gpu_va, use nvgpu_mem_is_valid() to check whether the GPU
mappings can be removed (sketched below).
- Check whether the RTV circular buffer is ready within the DGPU config.
nvgpu_gr_global_ctx_buffer_map() already checks whether the buffer is
ready and returns an error if it is not.
- g->ops.gr.ctxsw_prog.init_ctxsw_hdr_data() is always set for
each chip. Remove the unnecessary NULL check.
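A hedged sketch of the free_patch_ctx change; the struct layout and
surrounding details are simplified:

  void nvgpu_gr_ctx_free_patch_ctx(struct gk20a *g, struct vm_gk20a *vm,
                                   struct nvgpu_gr_ctx *gr_ctx)
  {
    /*
     * A valid patch buffer implies a valid GPU mapping, so checking
     * the nvgpu_mem itself replaces the old gpu_va check.
     */
    if (nvgpu_mem_is_valid(&gr_ctx->patch_ctx.mem)) {
      nvgpu_dma_unmap_free(vm, &gr_ctx->patch_ctx.mem);
    }
  }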
Jira NVGPU-4373
Change-Id: Ib490f81f8b8299f87cffbb8a33fde8cf98e6c288
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2259073
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move code used only with graphics under the
CONFIG_NVGPU_GRAPHICS check.
The gm20b_gr_init_load_sw_bundle_init HAL gets called
without the CONFIG_NVGPU_GR_GOLDEN_CTX_VERIFICATION check.
Remove dead code in the
nvgpu_gr_ctx_check_valid_preemption_mode function.
Jira NVGPU-3968
Change-Id: I399126123006ae44dba29b3c08378d11fe82e543
Signed-off-by: vinodg <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2247346
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Some of the debugger- and graphics-specific functions were already
unused in the safety build. Compile out their definitions
and declarations as well.
Also, fail the preemption set call if a non-zero graphics preemption
mode is requested in the safety build.
Jira NVGPU-3967
Change-Id: Iaf5e3bd58e6096da40301b79e9295a6c5893cd4a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2191764
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add macros for whitelisting Coverity violations. These macros use pragma
directives. The pragma directives and whitelisting macros are only
enabled when a Coverity scan is being run.
The whitelisting macros have been added to a new header called
static_analysis.h. The contents of safe_ops.h (the CERT C safe ops) have
been moved into static_analysis.h because this will be the new header
for static-analysis-related macros, defines, etc.
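For illustration only, a generic sketch of how pragma-based whitelisting
macros of this kind typically look; the macro names and pragma text here
are assumptions, not the exact nvgpu definitions:

  #if defined(NV_IS_COVERITY)
  #define NVGPU_COV_STRINGIFY(x) #x
  #define NVGPU_COV_WHITELIST(type, checker, comment) \
          _Pragma(NVGPU_COV_STRINGIFY(coverity compliance type checker comment))
  #else
  /* Outside a Coverity scan the whitelisting macros expand to nothing. */
  #define NVGPU_COV_WHITELIST(type, checker, comment)
  #endif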
JIRA NVGPU-3820
Change-Id: I9c63f20f670880b420415535738034619314b7c3
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2180600
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix the below CERT violations.
Error: CERT INT30-C:
drivers/gpu/nvgpu/common/gr/ctx.c:644:
cert_violation: Unsigned integer operation
"gr_ctx->patch_ctx.mem.size / 4UL - 2UL" may wrap.
Error: CERT INT31-C:
drivers/gpu/nvgpu/common/gr/ctx.c:580:
cert_violation: Casting "gr_ctx->boosted_ctx" from "bool" to "unsigned int"
without checking its value may result in lost or misinterpreted data.
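One way such violations are typically resolved (a sketch only; the
actual fix in ctx.c may differ):

  /* INT30-C: check before subtracting so the expression cannot wrap. */
  nvgpu_assert((gr_ctx->patch_ctx.mem.size / 4ULL) >= 2ULL);
  count = (gr_ctx->patch_ctx.mem.size / 4ULL) - 2ULL;

  /* INT31-C: convert the bool explicitly instead of casting it. */
  data = (gr_ctx->boosted_ctx ? 1U : 0U);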
JIRA NVGPU-3409
Change-Id: Ib3c865d43ecd3c0eaaafd47b7b65111aa77689bd
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2118521
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix MISRA errors in nvgpu_gr_ctx_prepare_hwpm_mode():
Error: MISRA C-2012 Rule 16.1:
nvgpu/drivers/gpu/nvgpu/common/gr/ctx.c:815:
missing_break: This switch clause does not end with
an unconditional break statement.
nvgpu/drivers/gpu/nvgpu/common/gr/ctx.c:785:
misra_violation: The switch statement is not well formed.
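For reference, a minimal sketch of a switch that satisfies Rule 16.1
(every clause ends in an unconditional break and a default clause is
present); the case labels are illustrative, not the actual ones in
nvgpu_gr_ctx_prepare_hwpm_mode():

  switch (mode) {
  case HWPM_CTXSW_MODE_NO_CTXSW:
    /* ... */
    break;
  case HWPM_CTXSW_MODE_CTXSW:
    /* ... */
    break;
  default:
    err = -EINVAL;
    break;
  }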
NVGPU-3224
Change-Id: Ica7029fd84f443b8ce3d51a5c5f32bb6a6172040
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2112703
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The debug boolean flag dump_ctxsw_stats_on_channel_close is currently
stored in the gr_gk20a.ctx_vars struct.
This flag is logically a property of the gr.ctx unit since it indicates
whether each context should dump ctxsw stats on channel/context close.
Move this flag to struct nvgpu_gr_ctx_desc and remove it from
gr_gk20a.ctx_vars.
Expose the below API from the gr.ctx unit to check whether the flag is
set:
nvgpu_gr_ctx_desc_dump_ctxsw_stats_on_channel_close()
Move the code that creates the corresponding debugfs node to
gr_gk20a_debugfs_init() and change the debugfs node type from "u32" to
"file".
The gr.gr_ctx_desc struct is created only during the first poweron.
Return an error if this struct is not available.
Remove the unnecessary initialization of this variable from
platform-specific probe functions.
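A minimal sketch of the new accessor; the field placement shown here is
an assumption:

  struct nvgpu_gr_ctx_desc {
    /* ... existing fields ... */
    bool dump_ctxsw_stats_on_channel_close;
  };

  bool nvgpu_gr_ctx_desc_dump_ctxsw_stats_on_channel_close(
      struct nvgpu_gr_ctx_desc *desc)
  {
    return desc->dump_ctxsw_stats_on_channel_close;
  }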
Jira NVGPU-3112
Change-Id: Id675e047237f82e9b8198a42082e99c95824578f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099399
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The debug boolean flags force_preemption_gfxp and force_preemption_cilp
are currently stored in the gr_gk20a.ctx_vars struct.
These flags are logically a property of the gr.ctx unit since they
indicate whether each context should be forced to the gfxp/cilp
preemption mode by default.
Move these flags to struct nvgpu_gr_ctx_desc and remove them from
gr_gk20a.ctx_vars.
Expose the below APIs from the gr.ctx unit to check whether the flags
are set:
nvgpu_gr_ctx_desc_force_preemption_gfxp()
nvgpu_gr_ctx_desc_force_preemption_cilp()
Move the code that creates the corresponding debugfs nodes to
gr_gk20a_debugfs_init() and change the debugfs node type from "u32" to
"file".
The gr.gr_ctx_desc struct is created only during the first poweron.
Return an error if this struct is not available.
Remove the unnecessary initialization of these variables from
platform-specific probe functions.
Jira NVGPU-3112
Change-Id: I8b2de27f0c71dd2ea5abcf94221c2e15c80073ea
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099398
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new MM HAL directory to contain all MM-related HAL units.
As part of this change, add the cache unit to the MM HAL. This contains
several related fixes:
1. Move the cache code in gk20a/mm_gk20a.c and gv11b/mm_gv11b.c to
the new cache HAL. Update makefiles and header includes to take
this into account. Also rename gk20a_{read,write}l() to their
nvgpu_ variants.
2. Update the MM gops: move the cache related functions to the new
cache HAL and update all calls to this HAL to reflect the new
name.
3. Update some direct calls to gk20a MM cache ops to pass through
the HAL instead (see the example below).
4. Update the unit tests for various MM related things to use the
new MM HAL locations.
This change accomplishes two architectural design goals. First, it
removes one of the multiple HW includes from mm_gk20a.c (the flush HW
header). Second, it moves code from the gk20a/ and gv11b/ directories
into more appropriate locations under hal/.
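As an example of point 3 (a sketch; the exact gops field names are an
assumption), a direct call such as

  gk20a_mm_l2_flush(g, true);

now goes through the cache HAL instead:

  g->ops.mm.cache.l2_flush(g, true);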
JIRA NVGPU-2042
Change-Id: I91e4bdca4341be4dbb46fabd72622b917769f4a6
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095749
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Change the global_ctx_buffer_index member in the nvgpu_gr_ctx struct
to be an enum nvgpu_gr_global_ctx_index. global_ctx_buffer_index is
used as an array of these indices, but had been declared as an int.
This change resolves a number of MISRA Rule 10.3 violations for implicit
assignment of objects of different essential or narrower type.
In order to use this enum, it is moved out of global_ctx.h into a new
header file ctx_common.h that can be used by both ctx.h and global_ctx.h.
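A hedged sketch of the resulting declarations; the enumerator names and
the array bound are illustrative only:

  /* ctx_common.h */
  enum nvgpu_gr_global_ctx_index {
    NVGPU_GR_GLOBAL_CTX_CIRCULAR = 0,
    NVGPU_GR_GLOBAL_CTX_PAGEPOOL,
    NVGPU_GR_GLOBAL_CTX_ATTRIBUTE,
    NVGPU_GR_GLOBAL_CTX_COUNT
  };

  /* ctx.h: previously declared as an array of int. */
  enum nvgpu_gr_global_ctx_index
    global_ctx_buffer_index[NVGPU_GR_CTX_VA_COUNT];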
JIRA NVGPU-2955
Change-Id: I5e399ba3b0821d696aa0b9909d3bc6bbe99d274c
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075753
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_gr_ctx_load_golden_ctx_image() in the common.gr.ctx unit programs
the initial pm_mode in the context. gk20a_alloc_obj_ctx() then disables
pm_mode ctxsw by default.
Fix this by disabling pm_mode ctxsw by default in
nvgpu_gr_ctx_load_golden_ctx_image() itself, and remove the
corresponding code from gk20a_alloc_obj_ctx().
Jira NVGPU-1887
Change-Id: I6e1f83cefcb6229394da353e4cd87f1f5a0b10d4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2076273
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below two new APIs to set the preemption buffer VA in the
graphics context or subcontext, respectively:
nvgpu_gr_ctx_set_preemption_buffer_va()
nvgpu_gr_subctx_set_preemption_buffer_va()
Remove the g->ops.gr.set_preemption_buffer_va() HAL and use the above
APIs to set the preemption buffer VA.
Jira NVGPU-1887
Change-Id: I38fb76eaf01d3fc73fd8104f30bcd89be9fa45b6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2076272
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>