Add a new MAPPING_MODIFY ioctl to the Linux nvgpu driver.
This ioctl is used (for example) by the NvRmGpuMappingModify API to
change the kind of an existing mapping.
For compressed mappings the ioctl can be used to do the following:
* switch between two different compressed kinds
* switch between compressed and incompressible kinds
For incompressible mappings the ioctl can be used to do the following:
* switch between two different incompressible kinds
In order to properly update an existing mapping the nvgpu_mapped_buf
structure has been extended to cache the following state when the
mapping is first created:
* the compression tag offset (if applicable)
* the GMMU read/write flags
* the memory aperture
The unused ctag_lines field in the nvgpu_ctag_buffer_info structure
has been replaced with a new ctag_offset field.
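As a rough illustration of the uapi shape (a sketch only; the field names
and layout here are assumptions, not the exact struct), the new ioctl takes
the GPU VA of an existing mapping plus the new kinds:

    /* hypothetical sketch of the MAPPING_MODIFY arguments */
    struct nvgpu_as_mapping_modify_args {
        __u64 map_address;   /* GPU VA of the existing mapping */
        __s16 compr_kind;    /* new compressible kind, or invalid */
        __s16 incompr_kind;  /* new incompressible kind, or invalid */
        __u32 reserved;      /* padding */
    };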
Jira NVGPU-6374
Change-Id: I647ab9c2c272e3f9b52f1ccefc5e0de4577c14f1
Signed-off-by: scottl <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2468100
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When building NVGPU with the GCC -Werror=sizeof-pointer-memaccess
warning enabled, the following error is seen ...
drivers/gpu/nvgpu/common/mm/as.c: In function ‘gk20a_vm_alloc_share’:
drivers/gpu/nvgpu/common/mm/as.c:131:33: error: argument to ‘sizeof’
in ‘strncpy’ call is the same expression as the source; did you
mean to use the size of the destination?
[-Werror=sizeof-pointer-memaccess]
131 | p = strncpy(name, "as_", sizeof("as_"));
| ^
This is caused by the source string being passed to sizeof instead of
the destination buffer. This could cause a buffer overflow if the
source is larger than the destination buffer.
Looking at the code further, there is another problem: after copying
the string 'as_' into the 'name' buffer, the pointer 'p' returned by
strncpy is used as the address at which an unsigned integer is appended
to the string 'as_'. However, the pointer returned by strncpy is the
same address as 'name', so the prefix 'as_' is overwritten by the call
to nvgpu_strnadd_u32.
Fix these issues by statically initialising the 'name' buffer to 'as_'
and then setting the pointer 'p' to the offset in the 'name' buffer
that follows the prefix 'as_'. This removes the need to use strncpy at
all and simplifies the code.
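A minimal sketch of the resulting pattern (the buffer size is illustrative):

    char name[32] = "as_";                 /* prefix set statically */
    char *p = name + sizeof("as_") - 1U;   /* points just past "as_" */

    /* the unsigned integer is then appended at 'p' via nvgpu_strnadd_u32() */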
Bug 200689205
Change-Id: Ia9f0c634dc5a6dada088756cdae8c3dd688dcc48
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2465814
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
- Modify NVGPU_GPU_IOCTL_ALLOC_AS and struct nvgpu_alloc_as_args to
accept the start address and size of user memory (a rough sketch follows
this list). This allows configurable address space allocation.
- Modify gk20a_as_alloc_share() and gk20a_vm_alloc_share() to receive
va_range_start and va_range_end values.
- gk20a_vm_alloc_share() initializes vm with low_hole = va_range_start,
and user vma size = (va_range_end - va_range_start).
- Modify nvgpu_as_alloc_space_args and nvgpu_as_free_space_args to
accept a 64-bit number of pages.
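Roughly, with hypothetical field names (the exact uapi layout may differ),
the flow looks like this:

    struct nvgpu_alloc_as_args {
        __u32 big_page_size;
        __s32 as_fd;
        __u64 va_range_start;   /* start address of user memory */
        __u64 va_range_size;    /* size of user memory */
        /* other existing fields omitted */
    };

    /* in gk20a_vm_alloc_share(), assuming va_range_end is derived from
     * va_range_start + va_range_size: */
    u64 low_hole = va_range_start;
    u64 user_vma_size = va_range_end - va_range_start;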
Bug 2043269
JIRA NVGPU-5302
Change-Id: I243995adf5b7e0e84d6b36abe3b35a5ccabd7a37
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2385496
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any PDE can map memory with a specific page size. That means memory
allocations with page sizes 4K and 64K will be realized by different PDEs
with page size (or PTE size) 4K and 64K respectively. To accomplish this
the user vma is required to be PDE-aligned.
Currently, the user vma is aligned to (big_page_size << 10), carried over
from when the PDE size was equivalent to (big_page_size << 10).
Modify the user vma alignment check to use the PDE size.
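Conceptually (a hedged sketch, not the exact code; variable names are
illustrative):

    /* old: user vma had to be aligned to big_page_size << 10 */
    /* new: align to the size covered by a single PDE */
    u64 pde_align = pde_size;

    if ((user_vma_start % pde_align) != 0ULL ||
        (user_vma_size % pde_align) != 0ULL) {
            return -EINVAL;   /* user vma is not PDE-aligned */
    }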
JIRA NVGPU-5302
Change-Id: I2c6599fe50ce9fb081dd1f5a8cd6aa48b17b33b4
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428327
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The simulator ring buffer DMA interface supports buffers of the following
sizes: 4, 8, 12 and 16K. At present, it is configured to 4K, which happens
to match the kernel PAGE_SIZE; PAGE_SIZE is used to wrap the GET/PUT
pointers back around once 4K is reached. However, this is not always true;
for instance, take 64K pages. Hence, replace PAGE_SIZE with SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and
replace the latter with the former.
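A minimal sketch of the intent (actual values and locations may differ):

    /* simulator ring buffer size: fixed, independent of the kernel page size */
    #define SIM_BFR_SIZE        (4U * 1024U)

    /* explicit alias used wherever the CPU page size is really meant */
    #define NVGPU_CPU_PAGE_SIZE PAGE_SIZE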
Bug 200658101
Jira NVGPU-6018
Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Mapping large buffers into the GMMU ends up needing many
pages for the PTE tables. Allocating these one by one
can become a performance bottleneck, particularly
in the virtualized case.
This change adds the following (see the sketch after the list):
- As the TLB invalidation doesn't have access to mem_off,
allow top-level allocation by alloc_cache_direct().
- Define NVGPU_PD_CACHE_SIZE, the allocation size for a new slab
for the PD cache, effectively set to 64K bytes
- Use the PD cache for any allocation < NVGPU_PD_CACHE_SIZE.
  When freeing up cached entries, avoid prefetch errors by
  invalidating the entry (memset to 0).
- Try to fall back to direct allocation of smaller chunk for
contiguous allocation failures.
- Unit test changes.
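A hedged sketch of the allocation policy above (function names are
illustrative, not the exact ones in the PD cache code):

    #define NVGPU_PD_CACHE_SIZE (64U * 1024U)   /* slab size for the PD cache */

    if (pd_bytes >= NVGPU_PD_CACHE_SIZE) {
        /* top levels and large PDs are allocated directly */
        err = pd_cache_alloc_direct(g, pd, pd_bytes);
    } else {
        /* small PDs are carved out of cached 64K slabs */
        err = pd_cache_alloc(g, cache, pd, pd_bytes);
    }

    /* on free of a cached entry, scrub it to avoid prefetch errors */
    memset(pd_cpu_va, 0, pd_bytes);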
Bug 200649243
Change-Id: I0a667af0ba01d9147c703e64fc970880e52a8fbc
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2404371
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are made in this patch (the resulting pattern is
sketched after the list):
1) Change unnamed structs within gpu_ops to named structs
with the prefix gops_*.
2) Each named struct gops_* is moved into a separate gops-specific file
under include/nvgpu/gops/.
3) struct gpu_ops is moved into a separate file include/nvgpu/gpu_ops.h
and all other dependent struct gops_* are included in this header.
4) Direct references to include/nvgpu/gops are removed from files, as it
is enough to include gk20a.h.
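Illustrative before/after of the pattern, using a hypothetical unit 'foo':

    /* before: anonymous struct embedded directly in gpu_ops */
    struct gpu_ops {
        struct {
            void (*init)(struct gk20a *g);
        } foo;
    };

    /* after: named struct in include/nvgpu/gops/foo.h ... */
    struct gops_foo {
        void (*init)(struct gk20a *g);
    };

    /* ... and included from include/nvgpu/gpu_ops.h */
    struct gpu_ops {
        struct gops_foo foo;
    };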
Change-Id: Ieb22cb853be567e3bef14f5f8a04674eebd902ea
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2398776
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Not sure if there's an actual bug or JIRA filed for this, but the
change here fixes a long-standing bug in the MM code for unit tests.
The GMMU programming code verifies that the CPU _physical_ address
programmed into the GMMU PDE0 is a valid Tegra SoC CPU physical
address. That means that it's not too large a value.
The POSIX implementation of the nvgpu_mem related code used the CPU
virtual address as the "phys" address. Obviously, in userspace,
there's no access to physical addresses, so in some sense it's a
meaningless function. But the GMMU code does care, as described
above, about the format of the address.
The fix is simple enough: since the nvgpu_mem_get_addr() and
nvgpu_mem_get_phys_addr() values shouldn't actually be accessed by
the driver anyway (they could be vidmem addresses or IOVA addresses
in real life), ANDing them with 0xffffffff (i.e. 32 bits) truncates
the potentially problematic CPU virtual address bits returned by
malloc() in the POSIX environment.
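In other words, in the POSIX nvgpu_mem code, something along these lines
(a sketch, not the literal diff):

    /* the POSIX "phys" address is really a malloc()'d CPU VA; keep only
     * the low 32 bits so the GMMU checks on valid Tegra SoC physical
     * addresses still pass in unit tests */
    return ((u64)(uintptr_t)mem->cpu_va) & 0xffffffffULL;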
With this, a run of the unit test framework passes for me locally
on my Ubuntu 18 machine.
Also, clean up a few whitespace issues I noticed while I debugged
this and fix another long-standing bug where the
NVGPU_DEFAULT_DBG_MASK was not being copied to g->log_mask during
gk20a struct init.
Change-Id: Ie92d3bd26240d194183b4376973d4d32cb6f9b8f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2395953
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The lockless allocator that spins in alloc and free ops using cmpxchg to
mitigate race conditions has only ever been used for the post fences in
preallocated job resources. Now each post fence has a clear owner (the
job struct, which is already allocated) and lifetime, so this
allocator no longer has a purpose. Delete it to avoid bitrot. (The
design of the job queue has always been such that there's minimal
contention in any case.)
Jira NVGPU-5773
Change-Id: Ied98d977c2c75bacfd3d010ce60c80fe709231e0
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2392705
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, ltc fs_state is initialized during ltc init support. However,
the ltc cbc_param and cbc_param2 registers do not seem to provide
correct data if ltc.init_fs_state is called before fb.init_fs_state.
- Create fb.init_fb_support hal to initialize fb.
- Trigger init_fb_support before init_ltc_support.
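Sketch of the resulting poweron ordering (function names are illustrative):

    /* fb must be initialized before ltc so that cbc_param/cbc_param2
     * read back correct data */
    err = nvgpu_init_fb_support(g);
    if (err != 0) {
        return err;
    }

    err = nvgpu_init_ltc_support(g);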
Bug 2969956
Bug 2957808
JIRA NVGPU-4666
Change-Id: I54d697d27b9d9c6318c4ef459d215b6f82cd5571
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2345673
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The ecc init and error handling for the fb unit are refactored to improve
reusability for nvgpu-next.
The following changes have been done:
- fb.ecc:
This is a new subunit within fb and contains the following functions:
- init: Moved from fb.fb_ecc_init.
- free: Moved from fb.fb_ecc_free.
- l2tlb_error_mask: Fetch the bit mask of corrected and uncorrected errors
supported by the unit.
- fb.intr:
This unit has been updated to include the following ecc interrupt and
error handlers:
- handle_ecc: Top level interrupt handler for fb ecc errors.
- handle_ecc_l2tlb: Handle errors within l2tlb memory.
- handle_ecc_hubtlb: Handle errors within hubtlb memory.
- handle_ecc_fillunit: Handle errors within fillunit memory.
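A hedged sketch of the resulting HAL layout (member signatures are
abbreviated and not exact):

    struct gops_fb_ecc {
        int  (*init)(struct gk20a *g);
        void (*free)(struct gk20a *g);
        void (*l2tlb_error_mask)(struct gk20a *g,
                                 u32 *corrected_mask, u32 *uncorrected_mask);
    };

    struct gops_fb_intr {
        void (*handle_ecc)(struct gk20a *g);
        void (*handle_ecc_l2tlb)(struct gk20a *g, u32 status);
        void (*handle_ecc_hubtlb)(struct gk20a *g, u32 status);
        void (*handle_ecc_fillunit)(struct gk20a *g, u32 status);
    };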
Jira: NVGPU-5032
Change-Id: I1a26c1823eb992e0e0175250b969f1186dff6e62
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2333271
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_has_syncpoints is more general than channel synchronization,
so move it from channel_sync.c to nvhost.c. Move the
declaration from gk20a.h to nvhost.h.
As the debugfs knob is Linux related, move it from struct gk20a to
struct nvgpu_os_linux.
Jira NVGPU-4548
Change-Id: I4236086744993c3daac042f164de30939c01ee77
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2318814
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the upstream kernel ACCESS_ONCE is now deprecated, for the reason
given in the following related commit:
commit 381f20fceba8e ("security: use READ_ONCE instead of deprecated
ACCESS_ONCE")
ACCESS_ONCE() does not work reliably on non-scalar types. For
example gcc 4.6 and 4.7 might remove the volatile tag for such
accesses during the SRA (scalar replacement of aggregates) step.
Replace usages of ACCESS_ONCE with READ_ONCE and WRITE_ONCE in nvgpu.
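For example (the field names here are illustrative):

    /* before */
    state = ACCESS_ONCE(ch->state);
    ACCESS_ONCE(ch->state) = new_state;

    /* after */
    state = READ_ONCE(ch->state);
    WRITE_ONCE(ch->state, new_state);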
Bug 2834141
Change-Id: I9904c49e1a4d7b17ed2fe54360051d08595a2982
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2294096
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Implemented functions to load and execute PUB, which
is the safety POR.
PUB has the following functionality:
1) Lower PLM
2) Reset PMU
3) FBPA register access to devtools
The Secure Boot and Runtime (SBR) microcode comprises a
single PLM Update Binary (PUB) which will execute on the
SEC2 Engine Falcon. NVGPU shall load and execute PUB
and wait for the falcon to halt. On a successful halt, NVGPU
shall proceed with NS ucode loading on the respective
falcons.
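In outline (function and field names are illustrative, not the exact HAL
names):

    /* load the PUB image onto the SEC2 falcon and start it */
    err = sec2_load_pub_ucode(g, pub_img);
    if (err == 0) {
        err = sec2_bootstrap_pub(g);
    }

    /* wait for the falcon to halt before loading NS ucode */
    if (err == 0) {
        err = falcon_wait_for_halt(g->sec2_flcn, timeout_ms);
    }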
NVGPU-4549
Change-Id: I8ea897a026bbe2b1714823aba51bfa51864dd68a
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2292330
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- "include/trace/events/gk20a.h" file was having GPL2 license
(which should not used for QNX code). This file was used for
compiling linux userspace driver("libnvgpu-drv.so") and was used for
unit testing on QNX.
- This patch removes stubs in "include/trace/events/gk20a.h" file.
(which were used for linux userspace driver.)
- For QNX driver, "nvgpu_rmos/trace/events/gk20a.h" was used.
This patch moves that file to "include/nvgpu/posix/trace_gk20a.h" and
does relevant license change. This same file will be used for linux
userspace driver.
- This patch also creates a new file "include/nvgpu/trace.h" which
selects proper trace file depending on the config.
Bug 2802414
Change-Id: Icdfb251e5698073f986753a969e804161af3ecc5
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2286388
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Advisory Rule 2.5 states that a project should not contain unused
macro declarations.
While most of the violations in the nvgpu driver are due to unused
macros from hw headers, devinit-related headers, etc., there is a small
number that are due to things like:
* macros not being used when they could/should be
* macros in C files that are really not referenced
* CPP build flag mismatches
This change eliminates such violations from the following:
* replace constants with existing macros in timeout conversion code
* wrap nvgpu_gmmu_dbg macro #defines in #ifdef CONFIG_NVGPU_TRACE/#endif
* wrap MAX_MC_INTR_REGS #define in #ifdef CONFIG_NVGPU_NON_FUSA/#endif
* remove unused FECS_MAILBOX_0_ACK_RESTORE from runlist code
* wrap BACKTRACE_MAXSIZE macro with #ifndef _QNX_SOURCE/#endif
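For example, the conditional-compilation wrapping follows this pattern
(the value shown is illustrative):

    /* before: defined unconditionally but unused in some configurations */
    #define MAX_MC_INTR_REGS 2U

    /* after: only visible in configurations that actually reference it */
    #ifdef CONFIG_NVGPU_NON_FUSA
    #define MAX_MC_INTR_REGS 2U
    #endif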
Jira NVGPU-3178
Change-Id: I2bc72f706d7af3f8e7b062126e8543d0dc8ac250
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2284419
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
MISRA Advisory Directive 4.5 states that identifiers in the same
name space with overlapping visibility should be typographically
unambiguous.
In both the nvgpu_locate_pte_last_level() and gp10b_get_pde0_pgz()
routines, this violation is raised because a variable used as
a for-loop index ('i') is ambiguous with a parameter that points
to a struct gk20a_mmu_level ('l').
To fix these violations the loop index 'i' is renamed to 'idx'.
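For example (the loop body and names are simplified):

    /* 'l' points to a struct gk20a_mmu_level; renaming the loop index
     * from 'i' to 'idx' removes the i/l ambiguity */
    u32 idx;

    for (idx = 0U; idx < num_entries; idx++) {
        data[idx] = 0U;
    }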
Jira NVGPU-3178
Change-Id: I2cec904201075b48ab6ccfbd0ff6d7e9dcac2867
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2271456
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 11.2 doesn't allow conversions of a pointer from or to an
incomplete type. These types of conversions may result in an incorrectly
aligned pointer and may further result in undefined behavior.
This patch addresses rule 11.2 violations related to pointers to and
from struct nvgpu_sgl. It replaces struct nvgpu_sgl pointers with
void pointers.
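Illustrative signature change (the function is used only as an example):

    /* before: struct nvgpu_sgl is an incomplete type, so casts to and
     * from it violate rule 11.2 */
    u64 nvgpu_sgl_phys(struct gk20a *g, struct nvgpu_sgl *sgl);

    /* after: the opaque handle is passed as a void pointer instead */
    u64 nvgpu_sgl_phys(struct gk20a *g, void *sgl);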
Jira NVGPU-3736
Change-Id: I8fd5766eacace596f2761b308bce79f22f2cb207
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2267876
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
fbpa-related functions are not supported in the igpu safety build. Don't
compile them if CONFIG_NVGPU_DGPU is not set.
Also compile out the fb and ramin hals that are dgpu-specific.
Update the tests for the same.
JIRA NVGPU-4529
Change-Id: I1cd976c3bd17707c0d174a62cf753590512c3a37
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2265402
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>