The file semaphore.c is now split into 4 units, namely
semaphore, semaphore_hw, semaphore_pool and semaphore_sea.
Each of the above units now has a separate compilation unit under
common/semaphore/. The public APIs corresponding to each unit are
present in include/nvgpu/semaphore.h. The dependency graph of the
units below is as follows, where '->' indicates that the left side
depends on the right side:
semaphore -> semaphore_hw -> semaphore_pool -> semaphore_sea
Some of the other major changes made in this patch are as follows:
i) Renamed some of the functions.
ii) Some functions were changed from private to public.
iii) The public header for semaphore now contains only forward
declarations of the corresponding structs, keeping them opaque.
iv) Constructed a private header containing the internal functions
common to all the units and the struct definitions for each unit.
v) Added new functions to provide access to internal members of the
units (see the sketch below).
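For illustration, a minimal sketch of the opaque-struct pattern from
iii), iv) and v). All member and function names here are hypothetical,
not necessarily the ones added by this patch:

    /* include/nvgpu/semaphore.h (public): opaque declaration only. */
    struct nvgpu_semaphore;
    u64 nvgpu_semaphore_read(struct nvgpu_semaphore *s);

    /* common/semaphore/semaphore_priv.h (private): full definition. */
    struct nvgpu_semaphore {
        u64 value; /* illustrative member */
    };

    /* common/semaphore/semaphore.c: accessor usable by other units. */
    u64 nvgpu_semaphore_read(struct nvgpu_semaphore *s)
    {
        return s->value;
    }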
Jira NVGPU-2076
Change-Id: I6f111647ba9a9a9f8ef9c658f316cd5d6276c703
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022782
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When nvgpu_vm_unmap_sync fails, nvgpu_vm_unmap currently bails
out without decreasing the buffer refcount. This prevents the
buffer from being released if a deferred job completes after the
timeout (which was observed twice during overnight stress tests).
It also means that the fixed address is not reusable.
Emit a warning when nvgpu_vm_unmap_sync fails, but proceed with
decreasing the refcount.
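A minimal sketch of the new behavior (arguments, warning text and the
release callback name are simplified for illustration):

    if (nvgpu_vm_unmap_sync(mapped_buffer) != 0) {
        /* Warn, but still drop the reference below. */
        nvgpu_warn(g, "unmap sync timed out, releasing anyway");
    }
    nvgpu_ref_put(&mapped_buffer->ref, nvgpu_vm_unmap_ref_internal);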
Bug 200492802
Change-Id: I4b7c90ffac6fd479b91d96a3b82c36d17b85ecdc
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2023097
(cherry picked from commit c8ccc87998afc599303857a85cd4553796034164)
Reviewed-on: https://git-master.nvidia.com/r/2024304
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The type for the timeout parameter to the NVGPU_COND_WAIT and
NVGPU_COND_WAIT_INTERRUPTIBLE macros was too weak. This updates these
macros to require a u32 for the timeout.
Users of the macros are updated to be compliant as necessary.
This addresses MISRA 10.3 violations for implicit conversions of types
of different size or essential type.
JIRA NVGPU-1008
Change-Id: I12368dfa81b137c35bd056668c1867f03a73b7aa
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017503
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A function parameter of the nvgpu_vm_map() function is fixed for
MISRA, which does not allow implicit assignment of objects to a
narrower or different essential type. This fixes a few enum
violations.
JIRA NVGPU-1584
Change-Id: I2353f7501c3326f792f5942b2e247badf03349cf
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1986509
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gv11b_mm_l2_flush was not checking error codes from the various
functions it was calling. MISRA Rule-17.7 requires the return value
of all functions to be used. This patch now checks return values and
propagates the error upstream.
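The pattern, sketched (the callee ops are placeholders, not
necessarily the exact HALs gv11b_mm_l2_flush calls):

    int err;

    err = g->ops.mm.fb_flush(g);
    if (err != 0) {
        return err;
    }

    err = g->ops.mm.l2_flush(g, invalidate);
    if (err != 0) {
        return err;
    }

    return 0;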
JIRA NVGPU-677
Change-Id: I9005c6d3a406f9665d318014d21a1da34f87ca30
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998809
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make a physical nvgpu_mem implementation in the common code. This
implementation assumes a single, contiguous, physical range. GMMU
mappability is provided by building a one-entry SGT.
Since this is now "common" code, the original Linux code has been
moved to common/mm/nvgpu_mem.c.
Also drop the '__' prefix from the nvgpu_mem function. It is not
necessary, as this function, although somewhat tricky, is expected
to be used by arbitrary users within the nvgpu driver.
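Conceptually the one-entry SGT is trivial; an illustrative sketch
only (not the real nvgpu_sgt ops, whose exact signatures are omitted
here):

    /* One contiguous physical range is the whole scatter list. */
    struct phys_sgl {
        u64 phys;   /* base physical address */
        u64 length; /* size of the range */
    };

    /* sgl_next() for such an SGT ends after the single entry. */
    static void *phys_sgl_next(void *sgl)
    {
        (void)sgl;
        return NULL;
    }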
JIRA NVGPU-1029
Bug 2441531
Change-Id: I42313e5c664df3cd94933cc63ff0528326628683
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1995866
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations, and exchanges the Rule 11.3
violation for an advisory Rule 11.4 violation.
This patch uses that same implementation in two new (static)
functions that replace the container_of() references in vm.c:
* nvgpu_mapped_buf_from_ref
* vm_gk20a_from_ref
The implementation of the following routine has also been updated
using the same approach to eliminate the direct reference to
container_of():
* gk20a_from_as
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
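For reference, the replacement pattern looks like this (a sketch
matching the nvgpu_list_node-style implementation; the 'ref' member
name is assumed):

    static struct nvgpu_mapped_buf *nvgpu_mapped_buf_from_ref(
        struct nvgpu_ref *ref)
    {
        return (struct nvgpu_mapped_buf *)((uintptr_t)ref -
            offsetof(struct nvgpu_mapped_buf, ref));
    }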
JIRA NVGPU-782
Change-Id: I530f8d0cbe37445535b82578953eb5193ccf682c
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989570
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Add a "U" suffix to integer literals, or cast operands so that both
operands have the same essential type, when an arithmetic operation
is performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
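A typical before/after, for illustration:

    /* Before: signed literal mixed with an unsigned operand. */
    offset = addr & 0xfff;

    /* After: both operands are essentially unsigned. */
    offset = addr & 0xfffU;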
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a flag that lets userspace enable the unified VM functionality
on a selective basis. This feature works for all cases except
a single MODS trace. This will allow test coverage to be selectively
added in certain userspace tests as well, to help prevent this
feature from bit rotting (as it has historically done).
Also update the unit test for the page table management in the GMMU
to reflect this new flag. It's been set to false, since the target
platform for safety is currently not using unified address spaces.
Bug 200438879
Change-Id: Ibe005472910d1668e8372754be8dd792773f9d8c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951864
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The pd_cache header declarations were originally part of the
gmmu.h header. This is not good from a unit isolation perspective,
so this patch moves all the pd_cache specifics over to a new
header file: <nvgpu/pd_cache.h>.
Also, a couple of static inlines that were possible when the code
was part of gmmu.h were turned into real, first class functions.
This allows the pd_cache.h header to not include the gmmu.h
header file.
Also fix an issue in the nvgpu_pd_write() function where the data
was being passed as a size_t for some reason. This has now been
changed to a u32.
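The nvgpu_pd_write() change, sketched (assumed prototype; the word
index parameter stays a size_t):

    /* Before: data passed as a size_t. */
    void nvgpu_pd_write(struct gk20a *g, struct nvgpu_gmmu_pd *pd,
                size_t w, size_t data);

    /* After: data is a u32, matching what is actually written. */
    void nvgpu_pd_write(struct gk20a *g, struct nvgpu_gmmu_pd *pd,
                size_t w, u32 data);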
JIRA NVGPU-1444
Change-Id: Ib9e9e5a54544de403bfcd8e11c30de05721ddbcc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966352
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit 15603b9fd5.
Causes a build break in the PD cache unit test. Not sure how this
passed GVS; it must have been a race or something. Unclear.
Change-Id: Ia484a801d098d69441326fa1dd40a1c86e2e23ce
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966335
The pd_cache header declarations were originally part of the
gmmu.h header. This is not good from a unit isolation perspective,
so this patch moves all the pd_cache specifics over to a new
header file: <nvgpu/pd_cache.h>.
Also a couple of static inlines that were possible when the code
was part of gmmu.h were turned into real, first class functions.
This allows the pd_cache.h header to not include the gmmu.h
header file.
Also fix an issue in the nvgpu_pd_write() function where the data
was being passed as a size_t for some reason. This has now been
changed to a u32.
JIRA NVGPU-1444
Change-Id: Iead9a0d998396d2289ffcb3b48765d770400397b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965271
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split the nvgpu_sgt code out from the nvgpu_mem code. Although the
two chunks of code are related, the SGT code is distinct and as
such should be its own unit. To do this, a new source file has been
added - nvgpu_sgt.c - which contains all the nvgpu_sgt common APIs.
These are the facade APIs that abstract the actual details of how
any given nvgpu_sgt is implemented.
An abstract unit - nvgpu_sgt_os - was also defined. This unit
exists solely for the nvgpu_sgt unit to call so that the OS
specific nvgpu_sgt_os_create_from_mem() API can be moved from the
common nvgpu_sgt unit. Note this also updates the name of what the
OS specific units are expected to call. Common code may still use
the generic nvgpu_sgt_create_from_mem() API.
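In other words, the facade forwards to the OS layer (a sketch of the
shape described above):

    /* common/mm/nvgpu_sgt.c: generic API still used by common code. */
    struct nvgpu_sgt *nvgpu_sgt_create_from_mem(struct gk20a *g,
                            struct nvgpu_mem *mem)
    {
        return nvgpu_sgt_os_create_from_mem(g, mem);
    }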
JIRA NVGPU-1391
Change-Id: I37f5b2bbf9f84c0fb6bc296c3e04ea13518bd4d0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946012
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The problem here, and the solution, require some background,
so let's start there.
During page table programming page directories (PDs) are
allocated as needed. Each PD can range in size, depending on
chip, from 256 bytes all the way up to 32KB (gk20a 2-level
page tables).
In HW, two distinct PTE sizes are supported: large and small.
The HW supports mixing these at will. The second to last level
PDE has pointers to both a small and large PD with
corresponding PTEs. Nvgpu doesn't handle that well and as a
result historically we split the GPU virtual address space
up into a small page region and a large page region. This
makes the GMMU programming logic easier since we now only have
to worry about one type of PD for any given region.
But this presents issues for CUDA and UVM. They want to be
able to mix PTE sizes in the same GPU virtual memory range.
In general we still don't support true dual page directories,
that is, page directories with both the small and large next
level PD populated. However, we will allow adjacent PDs to
have different sized next-level PDs.
Each last level PD maps the same amount. On Pascal+ that's
2MB. This is true regardless of the PTE coverage (large or
small). That means the last level PD will be different in
size depending on the PTE size.
So - going back to the SW we allocate PDs as needed when
programming the page tables. When we do this allocation we
allocate just enough space for the PD to contain the
necessary number of PTEs for the page size. The problem
manifests when a PD flips in size from large to small PTEs.
Consider the following mapping operations:
map(gpu_va -> phys) [large-pages]
unmap(gpu_va)
map(gpu_va -> phys) [small-pages]
In the first map/unmap we go and allocate all the necessary
PDs and PTEs to build this translation. We do so assuming a
large page size. When unmapping, as an optimization/quirk of
nvgpu, we leave the PDs around; we know they may well be used
again in the future.
But if we swap the size of the mapping from large to small,
then we now need more space in the PD for PTEs. But the logic
in the GMMU code assumes that if the PD has memory allocated,
then that memory is sufficient. This worked back when there was
no potential for a PD to swap its page size. But now that there
is, we have to re-allocate the PD when it doesn't have enough
space for the required PTEs.
So that's the fix - reallocate PDs when they require more
space than they currently have.
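A sketch of the fix (field and helper names are illustrative, not
the exact nvgpu code):

    u32 pd_bytes = pd_size(l, attrs); /* bytes needed at this level */

    if (pd->mem != NULL && pd->pd_size < pd_bytes) {
        /* PTE size flipped; the existing PD is too small. */
        nvgpu_pd_free(vm, pd);
        pd->mem = NULL;
    }

    if (pd->mem == NULL) {
        err = nvgpu_pd_alloc(vm, pd, pd_bytes);
    }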
Change-Id: I9de70da6acfd20c13d7bdd54232e4d4657840394
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1933076
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add two new sub-directories under MM: gmmu and allocators.
The allocators directory is for all the allocator code we have.
There's a fair amount of it, and as such it could be considered a
component with a bunch of sub-units.
The new GMMU directory will contain the GMMU component (which used to
be a single unit). The new GMMU component is comprised of the
page_table and pd_cache units. Also when we migrate the chip specific
GMMU code out of mm_gk20a.c and mm_gp10b.c it will be placed in this
new GMMU directory.
JIRA NVGPU-1390
Change-Id: I7aa47ea2a32612b7d69972671fccb72770e1ae09
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1944385
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This A) clears a dangling pointer, which should make the code
more robust, and B) allows for easier unit testing of the
nvgpu_pd_cache_do_free() function, since there's now a
tangible change to the nvgpu_gmmu_pd after this function runs.
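Essentially, at the end of the free path (a sketch, assuming the
nvgpu_gmmu_pd member is named 'mem'):

    /* nvgpu_pd_cache_do_free(), after returning the memory: */
    pd->mem = NULL; /* clear the now-dangling pointer */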
JIRA NVGPU-1323
Change-Id: I57db02c9e74324b8e3c3fd4a2c14565dfd0048aa
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1949936
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following error checking code in nvgpu_pd_cache_alloc() cannot
hit the bytes >= PAGE_SIZE check:

    if ((bytes & (bytes - 1U)) != 0U ||
        (bytes >= PAGE_SIZE ||
         bytes < NVGPU_PD_CACHE_MIN)) {
        /* ... */
    }

This is because nvgpu_pd_cache_alloc() is only called when bytes is
less than PAGE_SIZE! As such, we would only see this case when
there's a bug.
So change the error condition to check only that bytes is a power
of 2 and is at least NVGPU_PD_CACHE_MIN. The greater-than-page-size
check has been turned into an assert, since this should really never
happen in practice unless there's a bug.
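The reworked check, sketched (the assert mechanism shown here is
illustrative):

    if ((bytes & (bytes - 1U)) != 0U ||
        bytes < NVGPU_PD_CACHE_MIN) {
        return NULL;
    }

    /* Callers guarantee this; a failure here is a driver bug. */
    nvgpu_assert(bytes < PAGE_SIZE);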
JIRA NVGPU-1323
Change-Id: I97cd701aa4f045345606b90c97a8478b4a06e189
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946731
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the result of a bitwise operation is used as a
boolean in the controlling expression of if and loop statements.
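A typical fix, for illustration:

    /* Before: the bitwise result is used directly as a boolean. */
    if (val & mask) {
        do_thing();
    }

    /* After: an explicit comparison yields a real boolean. */
    if ((val & mask) != 0U) {
        do_thing();
    }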
JIRA NVGPU-1020
Change-Id: I6a756ee1bbb45d43f424d2251eebbc26278db417
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936334
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable
as a boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
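For example (illustrative names):

    /* Before: an integer tested as a boolean. */
    if (count) {
        process();
    }

    /* After: explicit comparison. */
    if (count != 0U) {
        process();
    }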
JIRA NVGPU-1022
Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fixed the erroneous timeout message in sync-unmap and corrected the
condition for returning -ETIMEDOUT.
In the original code flow, after waiting for the mapped_buffer
release, nvgpu_timeout_expired is called to check whether to return
-ETIMEDOUT. However, if there is a delay between the end of the wait
and the nvgpu_timeout_expired check, -ETIMEDOUT is returned, and the
timeout message printed, even if the mapped_buffer was actually
released. This is incorrect behavior. This patch fixes it by letting
the refcount of the mapped_buffer be the only source used to
determine the return value.
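A sketch of the corrected logic (names assumed; the refcount alone
decides the return value):

    /* After the wait: a live mapping still holds extra references. */
    if (nvgpu_atomic_read(&mapped_buffer->ref.refcount) > 1) {
        nvgpu_err(g, "sync-unmap timed out");
        return -ETIMEDOUT;
    }

    return 0;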
Bug 200434475
Change-Id: I8ca170c811da415c24045ab643da26476bc7463c
Signed-off-by: Kyle Guo <kyleg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945388
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This makes unit testing easier because it hides a difficult-to-test
branch under POSIX. This branch has no functional effect on the rest
of the code in the pd_cache, so moving it to the POSIX header should
avoid the headache of testing it in the pd_cache code.
JIRA NVGPU-1323
Change-Id: Id5ca2627c83cf6dbbe68dd8ad7bfe9def71761cc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945145
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable
as a boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
JIRA NVGPU-1022
Change-Id: I957f8ca1fa0eb00928c476960da1e6e420781c09
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1941002
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-15.6 requires that all if-else blocks and loop blocks
be enclosed in braces, including single statement blocks. Fix errors
due to single statement if-else and loop blocks without braces
by introducing the braces.
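For example:

    /* Before */
    if (err != 0)
        return err;

    /* After */
    if (err != 0) {
        return err;
    }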
JIRA NVGPU-775
Change-Id: Ib70621d39735abae3fd2eb7ccf77f36125e2d7b7
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1928745
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 21.15 prohibits the use of memcpy() with incompatible
pointers to qualified/unqualified types.
To circumvent this issue we've introduced a new MISRA-compliant
nvgpu_memcpy() function.
This change switches all offending uses of memcpy() in mm/*
over to use nvgpu_memcpy() with appropriate casts applied.
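Typical usage after the switch (assuming a
nvgpu_memcpy(u8 *dest, const u8 *src, size_t n) prototype):

    nvgpu_memcpy((u8 *)&dst_obj, (const u8 *)&src_obj,
             sizeof(dst_obj));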
JIRA NVGPU-849
Change-Id: I17be87475fde62e969b014d4d0fa455dae5d4373
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1941257
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>