Each PDE maps memory with a specific page size. That means allocations
with 4K and 64K page sizes are realized by different PDEs, with a page
size (or PTE size) of 4K and 64K respectively. To accomplish this, the
user VMA is required to be PDE aligned.
Currently, the user VMA is aligned to (big_page_size << 10), carried
over from when the PDE size was equivalent to (big_page_size << 10).
Modify the user VMA alignment check to use the PDE size instead.
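A minimal sketch of the intended check, written as standalone C; the
helper name and pde_size parameter are illustrative, not the actual
nvgpu API:

  #include <stdbool.h>
  #include <stdint.h>

  /* Align the user VMA to the PDE size rather than the legacy
   * (big_page_size << 10) value; pde_size must be a power of two. */
  static bool vma_is_pde_aligned(uint64_t base, uint64_t size,
                                 uint64_t pde_size)
  {
          return ((base | size) & (pde_size - 1U)) == 0U;
  }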
JIRA NVGPU-5302
Change-Id: I2c6599fe50ce9fb081dd1f5a8cd6aa48b17b33b4
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428327
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The lockless allocator, which spins in its alloc and free ops using
cmpxchg to mitigate race conditions, has only ever been used for the
post fences in preallocated job resources. Now each post fence has a
clear owner (the job struct, which is itself already allocated) and a
clear lifetime, so this allocator no longer has a purpose. Delete it to
avoid bitrot. (The design of the job queue has always been such that
there is minimal contention in any case.)
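For reference, the general shape of the spinning-cmpxchg pattern the
deleted allocator relied on, written as a standalone sketch (this is
not the removed nvgpu code):

  #include <stdatomic.h>
  #include <stdint.h>

  /* Claim the next free slot index, spinning on compare-and-swap until
   * the update wins or the pool is exhausted. */
  static uint32_t claim_slot(_Atomic uint32_t *head, uint32_t num_slots)
  {
          uint32_t cur = atomic_load(head);

          while (cur < num_slots &&
                 !atomic_compare_exchange_weak(head, &cur, cur + 1U))
                  ;       /* cur is refreshed on failure; retry */

          return (cur < num_slots) ? cur : UINT32_MAX;  /* full */
  }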
Jira NVGPU-5773
Change-Id: Ied98d977c2c75bacfd3d010ce60c80fe709231e0
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2392705
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the upstream kernel, ACCESS_ONCE is now deprecated, for the reason
given in the following related commit:
commit 381f20fceba8e ("security: use READ_ONCE instead of deprecated
ACCESS_ONCE")
ACCESS_ONCE() does not work reliably on non-scalar types. For
example gcc 4.6 and 4.7 might remove the volatile tag for such
accesses during the SRA (scalar replacement of aggregates) step.
Replace usages of ACCESS_ONCE with READ_ONCE and WRITE_ONCE in nvgpu.
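The rewrite is mechanical. In the sketch below, ch->state is a made-up
field used only to show the pattern:

  /* Before */
  state = ACCESS_ONCE(ch->state);
  ACCESS_ONCE(ch->state) = new_state;

  /* After */
  state = READ_ONCE(ch->state);
  WRITE_ONCE(ch->state, new_state);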
Bug 2834141
Change-Id: I9904c49e1a4d7b17ed2fe54360051d08595a2982
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2294096
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 11.2 doesn't allow conversions of a pointer from or to an
incomplete type. This kind of conversion may result in an incorrectly
aligned pointer and may further result in undefined behavior.
This patch addresses rule 11.2 violations related to pointers to and
from struct nvgpu_sgl by replacing struct nvgpu_sgl pointers with
void pointers.
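A self-contained sketch of the before/after shape; struct gpu, struct
sgl and the sgl_phys-style callback are stand-ins, not the real nvgpu
types and ops:

  #include <stdint.h>

  struct gpu;   /* stand-in for the driver context type */
  struct sgl;   /* the incomplete scatter-gather list type */

  /* Before: the interface traffics in a pointer to an incomplete type,
   * which MISRA 11.2 flags when it is converted to/from other types. */
  typedef uint64_t (*sgl_phys_before)(struct gpu *g, struct sgl *sgl);

  /* After: the handle is an opaque void * that each backend converts
   * back to its own concrete scatter-gather type internally. */
  typedef uint64_t (*sgl_phys_after)(struct gpu *g, void *sgl);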
Jira NVGPU-3736
Change-Id: I8fd5766eacace596f2761b308bce79f22f2cb207
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2267876
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add macros for whitelisting Coverity violations. These macros use pragma
directives. The pragma directives and whitelisting macros are only
enabled when a Coverity scan is being run.
The whitelisting macros have been added to a new header called
static_analysis.h. The contents of safe_ops.h (CERT C safe ops) have
been moved into static_analysis.h because this will be the new header
for static analysis related macros/defines/etc.
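A minimal sketch of the enablement pattern, assuming Coverity's
__COVERITY__ predefine; the macro name below is illustrative and the
exact pragma text Coverity expects is not reproduced here:

  #if defined(__COVERITY__)
  /* Expand to a pragma only while a Coverity scan is running. */
  #define COV_PRAGMA(str)   _Pragma(#str)
  #else
  #define COV_PRAGMA(str)
  #endif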
JIRA NVGPU-3820
Change-Id: I9c63f20f670880b420415535738034619314b7c3
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2180600
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
INT30-C requires that unsigned integer operations do not wrap.
INT31-C requires checking that data isn't misinterpreted after casting.
INT32-C requires that signed operations do not overflow.
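For example, an INT30-C-style checked addition looks roughly like this
(a standalone sketch; the driver's own safe-ops helpers assert through
nvgpu's mechanisms rather than <assert.h>):

  #include <assert.h>
  #include <stdint.h>

  /* Add two u32 values, asserting instead of silently wrapping. */
  static uint32_t checked_add_u32(uint32_t a, uint32_t b)
  {
          assert(a <= UINT32_MAX - b);
          return a + b;
  }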
Jira NVGPU-3882
Change-Id: I6b4c1769ec85919f8ec2aa183cba3b7c0ffa1e97
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2166124
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rule 2.2 doesn't allow unused variable assignments, since their
presence may indicate an error in the program's logic.
Rule 21.x doesn't allow reserved identifiers or macro names starting
with '_' to be reused or defined.
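A trivial illustration of a rule 2.2 fix (hypothetical code, not taken
from the driver):

  err = 0;              /* dead store: value is never read */
  err = do_init();

  /* rule 2.2 fix: drop the unused assignment */
  err = do_init();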
Jira NVGPU-3864
Change-Id: I8ee31c0ee522cd4de00b317b0b4463868ac958ef
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2163723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The safety build does not support vidmem. This patch compiles out
vidmem-related code - vidmem itself, DMA allocation, and the
cbc/acr/pmu allocations based on vidmem - along with the corresponding
tests such as pramin, page allocator and gmmu_map_unmap_vidmem.
As vidmem is applicable only to dGPUs, the code is compiled out using
CONFIG_NVGPU_DGPU.
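The guard is the usual Kconfig conditional; the function name in this
sketch is indicative only:

  #ifdef CONFIG_NVGPU_DGPU
          err = nvgpu_vidmem_init(&g->mm);
          if (err != 0) {
                  return err;
          }
  #endif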
JIRA NVGPU-3524
Change-Id: Ic623801112484ffc071195e828ab9f290f945d4d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2132773
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix the following MISRA rule violations in the bitops unit:
MISRA Rule 10.1
MISRA Rule 10.3
MISRA Rule 10.4
MISRA Rule 11.8
MISRA Rule 21.2
Introduce nvgpu-specific functions for bitops and bitmap operations
that take an unsigned integer as the offset parameter. OS-specific type
conversions and handling of these interfaces are taken care of in the
respective OS files.
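A minimal sketch of the Linux-side shape, assuming a wrapper of this
form (the exact prototypes live in the nvgpu headers and may differ):

  /* nvgpu callers pass an unsigned bit index; the OS layer converts to
   * whatever signedness the underlying kernel helper expects. */
  void nvgpu_set_bit(unsigned int nr, volatile unsigned long *addr)
  {
          set_bit((int)nr, addr);
  }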
Jira NVGPU-3545
Change-Id: Ib1ef76563db6ba1d879a0b4d365b2958ea03f85c
Signed-off-by: ajesh <akv@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2129513
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make the nvgpu_init_mutex function return void.
In the Linux case, this doesn't affect anything since mutex_init
returns void.
For POSIX, we assert() and die if pthread_mutex_init fails.
This removes the need to error-inject for _every_ nvgpu_mutex_init
call in the driver.
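A sketch of the POSIX side under that policy; names and types are
simplified here (the real code wraps pthread_mutex_t inside the nvgpu
mutex type):

  #include <assert.h>
  #include <pthread.h>

  static void mutex_init_or_die(pthread_mutex_t *m)
  {
          int err = pthread_mutex_init(m, NULL);

          /* Initialization failure is treated as fatal rather than
           * propagated to every caller. */
          assert(err == 0);
          (void)err;
  }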
Jira NVGPU-3476
Change-Id: Ibc801116dc82cdfcedcba2c352785f2640b7d54f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2130538
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 21.1 states that #define and #undef shall not be used on
a reserved identifier or reserved macro name. Fix violations of
rule 21.1 in the bitops unit.
MISRA rule 21.2 states that a reserved identifier or macro name
shall not be declared. Fix violations of rule 21.2 in the bitops unit.
Jira NVGPU-3545
Change-Id: Ie551d7ce5e19287107403f2c991bcc55bd11a4e8
Signed-off-by: ajesh <akv@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2125842
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Previously, initializing the buddy allocator with size=0 initialized a
large memory block. With fixes in the buddy_allocator_init() function,
size=0 triggered a segmentation fault, so the size was temporarily
updated to a fixed value.
This patch updates the buddy_allocator_init() function to return an
error if the requested buddy allocator size is zero. As the kernel VMA
is absent for vGPU, this patch also updates the nvgpu_vm_do_init()
function to not allocate a kernel VMA with size=0.
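A minimal sketch of the new guard (illustrative; the real check sits at
the top of buddy_allocator_init()):

  #include <errno.h>
  #include <stdint.h>

  static int buddy_check_size(uint64_t size)
  {
          /* A zero-sized allocator is now rejected instead of silently
           * initializing a huge region. */
          return (size == 0ULL) ? -EINVAL : 0;
  }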
Jira NVGPU-3005
Change-Id: I568fbbff6ac2c66395d1dc5a4b35304c7f4002fb
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2113190
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 5.7 doesn't allow reuse of tag or variable names, as this
would lead to developer confusion. This patch renames the local
variable holding a page_alloc_slab_page struct pointer to page_ptr to
resolve the violation. It also renames the function "nvgpu_page_alloc"
to "nvgpu_page_balloc" to remove identifier name overloading.
MISRA rule 10.x forbids casting the value of a composite expression to
an object with a different essential type category. This patch replaces
a multiplication operation with a left shift operation.
MISRA rule 15.7 requires every if .. else if construct to be terminated
with an else statement. This patch updates an if .. else if condition
to resolve the violation (see the sketch below).
MISRA rule 17.2 doesn't allow recursive calls from a function to
itself, as this might lead to stack overflow. nvgpu_page_allocator_init
is updated to call nvgpu_buddy_allocator_init() directly.
MISRA rule 21.6 doesn't allow use of snprintf from the standard
library. This patch replaces the snprintf call with string functions.
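For the rule 15.7 change, the resulting shape is roughly the following
(the branches are hypothetical, not the driver's actual logic):

  static void classify_order(unsigned int order, unsigned int max_order)
  {
          if (order == 0U) {
                  /* smallest order */
          } else if (order < max_order) {
                  /* intermediate order */
          } else {
                  /* Terminating else required by rule 15.7; no action
                   * needed for the largest orders. */
          }
  }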
JIRA NVGPU-3338
Change-Id: Ic8cb956edb3c72811752008de192e6e8ba12463e
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2117968
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 17.2 prohibits indirect recursion. nvgpu_allocator_init()
was calling nvgpu_page_allocator_init(), which in turn was calling back
to nvgpu_allocator_init() to init a buddy allocator. Rather than
calling back to nvgpu_allocator_init(), have nvgpu_page_allocator_init()
call nvgpu_buddy_allocator_init() directly.
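The call chain before and after, sketched from the description above:

  Before:  nvgpu_allocator_init()
             -> nvgpu_page_allocator_init()
                  -> nvgpu_allocator_init()          /* indirect recursion */
                       -> nvgpu_buddy_allocator_init()

  After:   nvgpu_allocator_init()
             -> nvgpu_page_allocator_init()
                  -> nvgpu_buddy_allocator_init()    /* direct call */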
JIRA NVGPU-3332
Change-Id: I1102450ae26dda355d5f5dcc3ddb195871c26c32
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2114028
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 11.2 doesn't allow conversion to or from an incomplete type
pointer, as it may result in incorrect pointer alignment and may
further lead to undefined behavior.
MISRA Rule 16.x requires all switch statements to be well-formed with
terminating break statement for every switch-clause.
This patch fixes 11.2 and 16.x violations in common.mm.nvgpu_mem.
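A well-formed switch in the 16.x sense looks like this (the values and
actions are illustrative, not the actual nvgpu_mem apertures):

  static void set_aperture_field(unsigned int aperture,
                                 unsigned int *field)
  {
          switch (aperture) {
          case 0U:
                  *field = 1U;
                  break;
          case 1U:
                  *field = 2U;
                  break;
          default:
                  /* The default clause is mandatory and is also
                   * terminated by a break. */
                  *field = 0U;
                  break;
          }
  }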
Jira NVGPU-3339
Change-Id: I002393cc64d44826e6954d1bf6af71bd569e862f
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2113096
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 21.3 forbids using the calloc, malloc, realloc and free
identifiers for function or macro names. This patch renames the nvgpu
allocator free operator to free_alloc to follow rule 21.3.
Jira NVGPU-3336
Change-Id: Ie9f48d567255a3e1dca70632fbe3d36b45023f3f
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2111365
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the bitmap allocator reuses the identifier
"nvgpu_bitmap_alloc" for both an allocation function and the bitmap
rbtree node struct. Rename the allocation function to
"nvgpu_bitmap_balloc". Also, rename the fixed allocation function to
"nvgpu_bitmap_balloc_fixed" for consistency.
Jira NVGPU-3335
Change-Id: I6fe616db5137b2d4e2795a84ae5eafd527f0dba5
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2110714
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are many miscellaneous HALs for various MM related functionality.
This patch aims to migrate all the remaining MM code from the <chip>/
mm_<chip>.[ch] files into HAL files under hal/.
Much of this is fairly straightforward copy/paste and updates to the
HAL init files.
The exception to that is the move of the left over gv11b MMU fault
handling code in mm_gv11b.c. Having both a hal/mm/mm/mm_gv11b.c and
a gv11b/mm_gv11b.c file causes tmake to choke so the gv11b/mm_gv11b.c
file was moved to gv11b/mmu_fault_gv11b.c. This will be cleaned up in
a subsequent patch.
JIRA NVGPU-2042
Change-Id: I12896de865d890a61afbcb71159cff486119ffb8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the buddy, page and bitmap allocators have individual init()
functions. This patch creates a common nvgpu_alloc_allocator_init()
function that triggers the individual init functions based on an
allocator type argument. This makes writing requirements for the
allocators easier.
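A standalone sketch of the dispatch-by-type idea; the enum values and
init helpers below are placeholders, not the real nvgpu symbols:

  #include <errno.h>
  #include <stdio.h>

  enum alloc_type { BUDDY_ALLOCATOR, PAGE_ALLOCATOR, BITMAP_ALLOCATOR };

  static int buddy_init(void)  { puts("buddy");  return 0; }
  static int page_init(void)   { puts("page");   return 0; }
  static int bitmap_init(void) { puts("bitmap"); return 0; }

  static int alloc_allocator_init(enum alloc_type type)
  {
          switch (type) {
          case BUDDY_ALLOCATOR:
                  return buddy_init();
          case PAGE_ALLOCATOR:
                  return page_init();
          case BITMAP_ALLOCATOR:
                  return bitmap_init();
          default:
                  return -EINVAL;
          }
  }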
Jira NVGPU-991
Change-Id: If94e3496f46f036460ef9f1831852e6fc19d3a0b
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2097962
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Bugs in the current version are listed below:
1. The alloc() and alloc_pte() functions allocate memory for len=0.
2. The alloc() and alloc_pte() functions don't unlock the
nvgpu_allocator if pte_size is invalid.
3. The alloc_pte() and alloc_fixed() functions set the alloc_made flag
unconditionally.
4. The release_carveout() function tries to acquire the nvgpu_allocator
lock twice, causing a hang.
5. If buddy allocation fails in balloc_do_alloc() or
balloc_do_alloc_fixed(), previously allocated buddies are not merged.
This causes a segfault in ops->fini().
6. With gva_space enabled and base=0, the buddy allocator's updated
base is not checked for PDE alignment.
7. In balloc_do_alloc_fixed(), the align_order computed using __fls()
is one order higher than requested.
8. Initializing the buddy allocator with size=0 initializes a very
large memory region and triggers a segfault with the changes in this
patch. The size is set to 1G so that further execution succeeds.
This patch fixes the bugs listed above and makes the following updates:
1. With gva_space enabled, BALLOC_PTE_SIZE_ANY is treated as
BALLOC_PTE_SIZE_SMALL, which allows alloc() to be used.
2. GPU_BALLOC_MAX_ORDER is changed to 63U, and a condition is added to
check that max_order is never greater than GPU_BALLOC_MAX_ORDER (see
the sketch after this list).
3. BUG() is changed to nvgpu_do_assert().
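A sketch of the max_order bound from update (2); whether the real code
rejects or clamps the value is not reproduced here:

  #include <errno.h>
  #include <stdint.h>

  #define GPU_BALLOC_MAX_ORDER 63U

  static int check_max_order(uint64_t max_order)
  {
          return (max_order > GPU_BALLOC_MAX_ORDER) ? -EINVAL : 0;
  }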
JIRA NVGPU-3005
Change-Id: I20c28e20aa3404976d67f7884b4f8cbd5c908ba7
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075646
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the nvgpu_bitmap_allocator_init() function initializes a
bitmap with length=0, which then fails in alloc(). The alloc() function
hands out a len=0 bitmap allocation, and the next request starts from
the same address; however, the rbtree then holds only the zero-length
allocation.
This patch adds a length=0 check in the nvgpu_bitmap_allocator_init()
and alloc() functions.
JIRA NVGPU-3086
Change-Id: I0936977cd193f3eba00bba28edae257e40af23bf
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2087077
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Introduce nvgpu_bitmap_set() and nvgpu_bitmap_clear() APIs to wrap the
bitmap_set() and bitmap_clear() APIs, respectively. The new nvgpu_*
versions accept unsigned length parameters, since length is logically
an unsigned value, whereas bitmap_set and bitmap_clear accept signed
values.
We inherit bitmap_set and bitmap_clear from the OS, so we can't
directly change those.
Also, change uses of the old APIs to the new ones.
These changes resolve MISRA Rule 10.3 violations for implicit assignment
of objects of different essential or narrower type.
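A minimal sketch of the Linux-side wrappers, assuming prototypes of
roughly this shape (the casts mark where the OS-specific signedness
conversion happens):

  void nvgpu_bitmap_set(unsigned long *map, unsigned int start,
                        unsigned int len)
  {
          bitmap_set(map, (int)start, (int)len);
  }

  void nvgpu_bitmap_clear(unsigned long *map, unsigned int start,
                          unsigned int len)
  {
          bitmap_clear(map, (int)start, (int)len);
  }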
JIRA NVGPU-2953
Change-Id: I2c8f790049232a791f248b350c485bb07452315b
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077624
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch ensures that WARN and WARN_ON always return
void, and introduces a new nvgpu_do_assert construct to trigger the
equivalent of WARN_ON(true) so that the stack can be dumped (depending
on OS support).
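An illustrative stand-in for the construct (a userspace sketch; the
real nvgpu_do_assert dumps a stack trace where the OS supports it):

  #include <stdio.h>

  static inline void do_assert(void)
  {
          /* Equivalent in spirit to WARN_ON(true): report the problem
           * and return void, so callers have nothing to consume. */
          fprintf(stderr, "nvgpu: assert hit\n");
  }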
JIRA NVGPU-677
Change-Id: Ie2312c5588ceb5b1db825d15a096149b63b69af4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals or casting operands
to have same type of operands when an arithmetic operation is
performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
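A one-line illustration of the fix (a hypothetical counter, not driver
code):

  unsigned int count = 8U;
  unsigned int total;

  total = count + 1;     /* violation: unsigned + signed literal */
  total = count + 1U;    /* compliant: both operands unsigned    */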
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add two new sub-directories under MM: gmmu and allocators.
The allocators directory is for all the allocator code we have.
There's a fair amount, and as such it could be considered a component
with a bunch of sub-units.
The new GMMU directory will contain the GMMU component (which used to
be a single unit). The new GMMU component comprises the page_table and
pd_cache units. Also, when we migrate the chip-specific GMMU code out
of mm_gk20a.c and mm_gp10b.c, it will be placed in this new GMMU
directory.
JIRA NVGPU-1390
Change-Id: I7aa47ea2a32612b7d69972671fccb72770e1ae09
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1944385
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>