The following MISRA 10.4 violation is reported in nvgpu.common.mm.vm_area:
${TEGRA_TOP}/kernel/nvgpu/drivers/gpu/nvgpu/common/mm/vm_area.c:234:
misra_violation: The condition clause expression of the for loop has
persistent side-effects.
Fix this by replacing the for loop with a while loop.
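As an illustration, here is a hypothetical loop of the flagged shape
and its while-loop rewrite (a sketch only, not the actual vm_area.c
hunk):

    #include <stddef.h>

    struct nvgpu_node { struct nvgpu_node *next; };

    /* Hypothetical helper: pops the list head, which is a
     * persistent side effect on *head. */
    static struct nvgpu_node *pop_head(struct nvgpu_node **head)
    {
        struct nvgpu_node *n = *head;

        if (n != NULL) {
            *head = n->next;
        }
        return n;
    }

    static void drain(struct nvgpu_node **head)
    {
        struct nvgpu_node *n;

        /* Flagged: the for-loop condition itself mutates the list. */
        /* for (; (n = pop_head(head)) != NULL; ) { release(n); } */

        /* Rewritten as a while loop, as this change does: */
        n = pop_head(head);
        while (n != NULL) {
            /* release n here */
            n = pop_head(head);
        }
    }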
Jira NVGPU-3330
Change-Id: Ica6882d6c73dc0d74159f34279d8f91b7494c65c
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2117059
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are many miscellaneous HALs for various MM-related functionality.
This patch aims to migrate all the remaining MM code from the <chip>/
mm_<chip>.[ch] files into HAL files under hal/.
Much of this is fairly straightforward copy/paste and updates to the
HAL init files.
The exception to that is the move of the leftover gv11b MMU fault
handling code in mm_gv11b.c. Having both a hal/mm/mm_gv11b.c and
a gv11b/mm_gv11b.c file causes tmake to choke, so the gv11b/mm_gv11b.c
file was moved to gv11b/mmu_fault_gv11b.c. This will be cleaned up in
a subsequent patch.
JIRA NVGPU-2042
Change-Id: I12896de865d890a61afbcb71159cff486119ffb8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make a hal/mm/gmmu sub-unit for the GMMU HAL code. Also move the
gk20a specific HAL code there. gp10b will happen in the next patch.
This change also updates all the GMMU related HAL usage, of which
there is quite a bit. Generally the only change is that a .gmmu needs
to be inserted into the HAL path. Each HAL init was also updated.
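For illustration, a trimmed-down sketch of the reorganized ops layout
and the resulting call site change (names here are illustrative; the
real gops structs have many more members):

    struct vm_gk20a;

    struct gops_mm_gmmu {
        int (*map)(struct vm_gk20a *vm);
        void (*unmap)(struct vm_gk20a *vm);
    };

    struct gops_mm {
        struct gops_mm_gmmu gmmu; /* new sub-unit inserted here */
    };

    /* Call sites change along these lines:
     *   before: g->ops.mm.gmmu_map(vm);
     *   after:  g->ops.mm.gmmu.map(vm);
     */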
JIRA NVGPU-2042
Change-Id: I6c46bdfddb8e021f56103d9457fb3e2a226f8947
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the result of a bitwise operation is used as a
boolean in the controlling expression of if and loop statements.
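A minimal sketch of this kind of fix (the flag name is hypothetical):

    #include <stdint.h>

    #define HYPOTHETICAL_FLAG_SPARSE (1U << 2U)

    static void handle_flags(uint32_t flags)
    {
        /* Non-compliant: bitwise result used directly as a boolean. */
        /* if (flags & HYPOTHETICAL_FLAG_SPARSE) { ... } */

        /* Compliant: compare the result explicitly. */
        if ((flags & HYPOTHETICAL_FLAG_SPARSE) != 0U) {
            /* handle the sparse case */
        }
    }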
JIRA NVGPU-1020
Change-Id: I6a756ee1bbb45d43f424d2251eebbc26278db417
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936334
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable as
a boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
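A minimal sketch of this kind of fix (names are hypothetical):

    #include <stddef.h>
    #include <stdint.h>

    static void check(uint32_t count, const char *name)
    {
        /* Non-compliant: integer and pointer used as booleans. */
        /* if (count) { ... }  if (name) { ... } */

        /* Compliant: explicit comparisons. */
        if (count != 0U) {
            /* non-zero count */
        }
        if (name != NULL) {
            /* non-NULL pointer */
        }
    }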
JIRA NVGPU-1022
Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The issues are:
1. Non-fixed allocs must take into account explicit PTE size
requests. Previously the PTE size was determined from the
allocation size, which was incorrect. To do this, the PTE size
is now plumbed through all GPU VA allocations. This is what
the new alloc_pte() op does.
2. Fix buddy PTE size assignment. This changes a '<=' into a
'<' in the buddy allocation logic. Effectively, this now leaves the
PTE size of buddy blocks whose size equals the PDE block size as
'ANY'.
This prevents a buddy block of PDE size which has yet to be
allocated from having a specific PTE size. Without this it's
possible to do a fixed alloc that fails unexpectedly due to
mismatching PTE sizes (see the sketch after this list).
Consider two PDE block sized fixed allocs that are contained
in one buddy twice the size of a PDE block. Let's call these
fixed allocs S and B (small and big). Let's assume that two
fixed allocs are done, each targeting S and B, in that order.
With the current logic the first alloc, when we create the
two buddies S and B, causes both S and B to have a PTE size of
SMALL. Now when the second alloc happens we attempt to find
a buddy B with a PTE size of either BIG or ANY. But we cannot
because B already has size SMALL. This causes us to appear
like we have a conflicting fixed alloc despite this not being
the case.
3. Misc cleanups & bug fixes:
- Clean up some MISRA issues
- Delete an extraneous unlock that could have caused a
deadlock.
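A sketch of the fix in item 2, assuming a simplified buddy split path
(types, names, and the surrounding logic are illustrative, not the
exact buddy allocator code):

    #define BALLOC_PTE_SIZE_ANY (-1)

    struct buddy { int order; int pte_size; };
    struct buddy_allocator { int pte_blk_order; };

    static void assign_split_pte_size(struct buddy_allocator *ba,
                                      struct buddy *left,
                                      struct buddy *right,
                                      int order, int pte_size)
    {
        /* Was 'order <= ba->pte_blk_order'. With '<', a buddy
         * exactly at the PDE block size keeps PTE size ANY until
         * it is actually allocated, so a later fixed alloc with a
         * different PTE size can still claim it. */
        if (order < ba->pte_blk_order) {
            left->pte_size = pte_size;
            right->pte_size = pte_size;
        } else {
            left->pte_size = BALLOC_PTE_SIZE_ANY;
            right->pte_size = BALLOC_PTE_SIZE_ANY;
        }
    }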
Bug 200105199
Change-Id: Ib5447ec6705a5a289ac0cf3d5e90c79b5d67582d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768582
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals to have same type of
operands when an arithmetic operation is performed.
This fix violations where an arithmetic operation is performed on
signed and unsigned int types.
In balloc_get_order_list() the argument "int order" has been changed to
a u64 because all callers of this function pass a u64 argument.
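A minimal sketch of the literal change (the function shape is
illustrative):

    #include <stdint.h>

    static uint32_t next_index(uint32_t i)
    {
        /* Non-compliant under 10.4: 'i' is essentially unsigned,
         * while the literal '1' is essentially signed. */
        /* return i + 1; */

        /* Compliant: the U suffix keeps both operands unsigned. */
        return i + 1U;
    }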
JIRA NVGPU-992
Change-Id: Ie2964f9f1dfb2865a9bd6e6cdd65e7cda6c1f638
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1784419
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changed the enum gmmu_pgsz_gk20a into macros and changed all the
instances of it.
The enum gmmu_pgsz_gk20a was being used in for loops, where it was
compared with an integer. This violates MISRA rule 10.4, which only
allows arithmetic operations on operands of the same essential type
category. Changing this enum into macros fixes this violation.
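A sketch of the change (the exact macro names chosen by the patch are
assumed here):

    /* Before: an enum, essentially signed when compared in loops. */
    /* enum gmmu_pgsz_gk20a { gmmu_page_size_small, ... }; */

    /* After: plain unsigned macros, so loop comparisons stay within
     * one essential type category. */
    #define GMMU_PAGE_SIZE_SMALL  0U
    #define GMMU_PAGE_SIZE_BIG    1U
    #define GMMU_PAGE_SIZE_KERNEL 2U
    #define GMMU_NR_PAGE_SIZES    3U

    /* Typical loop after the change:
     *   u32 i;
     *   for (i = GMMU_PAGE_SIZE_SMALL; i < GMMU_NR_PAGE_SIZES; i++) {
     *       ...
     *   }
     */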
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When nvgpu maps an nvgpu_mem struct, the nvgpu driver has a choice of
using either a fixed or a non-fixed mapping. For non-fixed mappings the
GMMU APIs allocate a VA space for the caller. In that case the GMMU
APIs must also free that VA range when nvgpu unmaps the nvgpu_mem.
For fixed mappings the GMMU APIs must instead not manage the
lifetime of the VA space. To support these two possibilities, add a field
to nvgpu_mem that specifies whether the GMMU APIs must or must not
free the GPU VA range during the GMMU unmap operation.
Also fix a case in the nvgpu vm_area code that would double free a
VA allocation in some cases (sparse allocs).
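A sketch of the idea (the field name is an assumption and the struct
is trimmed down):

    #include <stdbool.h>

    struct nvgpu_mem {
        /* ... existing fields ... */
        bool free_gpu_va; /* assumed name: true when the GMMU unmap
                           * must free the VA range allocated at map
                           * time */
    };

    /* Map path:   non-fixed -> mem->free_gpu_va = true;
     *             fixed     -> mem->free_gpu_va = false;
     * Unmap path: if (mem->free_gpu_va) { free the GPU VA range; }
     */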
Change-Id: Idc32dbb8208fa7c1c05823e67b54707fea51c6b7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a translation layer to convert from the NVGPU_AS_* flags to
the new set of NVGPU_VM_MAP_* and NVGPU_VM_AREA_ALLOC_* flags.
This allows the common MM code to not depend on the UAPI header
defined for Linux.
In addition to this change a couple of other small changes were
made:
1. Deprecate, print a warning, and ignore usage of the
NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS flag.
2. Move the t19x IO coherence flag from the t19x UAPI header
to the regular UAPI header.
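A sketch of the translation layer (the flag values and the translator
name are illustrative, not the exact definitions):

    #include <stdint.h>

    #define NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE (1U << 2U) /* UAPI side */
    #define NVGPU_VM_MAP_CACHEABLE              (1U << 0U) /* common side */

    static uint32_t translate_as_map_flags(uint32_t as_flags)
    {
        uint32_t core_flags = 0U;

        if ((as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE) != 0U) {
            core_flags |= NVGPU_VM_MAP_CACHEABLE;
        }
        /* ... one clause per supported NVGPU_AS_* flag ... */
        return core_flags;
    }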
JIRA NVGPU-293
Change-Id: I146402b0e8617294374e63e78f8826c57cd3b291
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599802
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Re-organize the unmap code to be better split between OS specific
requirements and common core requirements. The new code flow works
as follows:
nvgpu_vm_unmap()
Is the primary entrance to the unmap path. It takes a VM and a GPU
virtual address to unmap. There's also an optional batch mapping
struct.
This function is responsible for making sure there is a real buffer
and that if it's being called on a fixed mapping then the mapping
will definitely be freed (since buffers are ref-counted). Then this
function decrements the ref-count and returns.
If the ref-count hits zero then __nvgpu_vm_unmap_ref() is called
which just calls __nvgpu_vm_unmap() with the relevant batch struct
if present. This is where the real work is done. __nvgpu_vm_unmap()
clears the GMMU mapping, removes the mapped buffer from the various
lists and trees it may be in and then calls the
nvgpu_vm_unmap_system() function. This function handles any OS
specific stuff and must be defined by all VM OS implementations.
There's a shortcut used by some other core VM code to free
mappings without going through nvgpu_vm_unmap(). Mostly they just
directly decrement the mapping ref-count which can then call
__nvgpu_vm_unmap_ref() if the ref-count hits zero.
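A simplified sketch of this flow (all names and signatures are
illustrative, not the exact nvgpu ones):

    struct mapped_buf { int refcount; };

    /* OS-specific hook, defined by each VM OS implementation
     * (the nvgpu_vm_unmap_system() role). */
    static void vm_unmap_system(struct mapped_buf *b)
    {
        (void)b; /* release OS resources, e.g. dma_buf refs on Linux */
    }

    /* Core teardown (the __nvgpu_vm_unmap() role). */
    static void vm_unmap_common(struct mapped_buf *b)
    {
        /* clear the GMMU mapping, unlink from lists/trees ... */
        vm_unmap_system(b); /* OS-specific cleanup last */
    }

    /* Entry point (the nvgpu_vm_unmap() role). */
    static void vm_unmap(struct mapped_buf *b)
    {
        b->refcount--;
        if (b->refcount == 0) {
            vm_unmap_common(b); /* the real work happens here */
        }
    }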
JIRA NVGPU-30
JIRA NVGPU-71
Change-Id: Ic626d37ab936819841bab45214f027b40ffa4e5a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583982
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This
removes some usages of dma_buf from the mm_gk20a.c code, too, which
helps make mm_gk20a.c less Linux specific.
Also delete some Linux-specific header includes that are no longer
necessary in gk20a/mm_gk20a.c. The mm_gk20a.c code is now
quite close to being Linux free.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566629
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This patch begins the major rework of the GPU's virtual memory manager
(VMM). The VMM is the piece of code that handles the userspace interface
to buffers and their mappings into the GMMU. The core data structure is
the VM - for now still known as 'struct vm_gk20a'. Each one of these
structs represents one address space to which channels or TSGs may bind
themselves.
The VMM splits the interface up into two broad categories. First there
are the common, OS independent interfaces; second, the OS specific
interfaces.
OS independent
--------------
This is the code that manages the lifetime of VMs and of the buffers
inside VMs: search, batch mapping, creation, destruction, etc.
OS Specific
-----------
This handles mapping of buffers as they are represented by
the OS (dma_bufs on Linux, for example).
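A sketch of the intended split (declarations only; the actual header
names and signatures differ):

    struct vm_gk20a;
    struct dma_buf;

    /* OS independent (common VM interface): */
    void nvgpu_vm_get(struct vm_gk20a *vm);
    void nvgpu_vm_put(struct vm_gk20a *vm);

    /* OS specific (Linux: buffers arrive as dma_bufs): */
    int nvgpu_vm_map_linux(struct vm_gk20a *vm, struct dma_buf *dmabuf);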
This patch is by no means complete. There are still Linux-specific functions
scattered in ostensibly OS independent code. This is the first step. A
patch that rewrites everything in one go would simply be too big to
effectively review.
Instead the goal of this change is to simply separate out the basic
OS specific and OS agnostic interfaces into their own header files. The
next series of patches will start to pull the relevant implementations
into OS specific C files and common C files.
JIRA NVGPU-12
JIRA NVGPU-30
Change-Id: I242c7206047b6c769296226d855b7e44d5c4bfa8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464939
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>