Remove the NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED and
NVGPU_AS_ALLOC_USERSPACE_MANAGED flags, which were used to support
userspace-managed address spaces. This functionality was never fully
implemented in the kernel, nor is it going to be implemented in the
near future.
Jira NVGPU-9832
Bug 4034184
Change-Id: I3787d92c44682b02d440e52c7a0c8c0553742dcc
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2882168
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Initially, REMAP only worked with big pages, but REMAP functionality
is also needed in configurations where only small pages are supported.
This cleans up some page size assumptions. In particular, on a remap
request the nvgpu_vm_area is found from the passed-in VA, but that VA
can only be derived from virt_offset_in_pages if the page size is also
known.
The page size is now conveyed via _PAGESIZE_ flags, which are required
by both map and unmap operations.
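As a sketch of the lookup this enables (the flag, field, and helper
names below are illustrative, not necessarily the exact driver
symbols), the page size carried in the op flags turns the page offset
into a VA, which can then locate the vm_area:

    u64 page_size = ((op->flags & NVGPU_AS_REMAP_OP_FLAGS_PAGESIZE_BIG)
                     != 0U) ? vm->big_page_size : 4096ULL;
    u64 va = op->virt_offset_in_pages * page_size;
    struct nvgpu_vm_area *vm_area = nvgpu_vm_area_find(vm, va);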
Jira NVGPU-6804
Change-Id: I311980a1b5e0e5e1840bdc1123479350a5c9d469
Signed-off-by: Chris Johnson <cwj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2566087
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add REMAP ioctl and accompanying support to the linux nvgpu driver.
REMAP support provides per-page control over sparse VM areas using the
concept of a virtual memory pool.
The REMAP ioctl accepts a list of operations (each a map or unmap) that
modify the VM area pages tracked by the virtual memory pool.
Inclusion of REMAP support in the nvgpu build is controlled by the new
CONFIG_NVGPU_REMAP flag. This flag is enabled by default for linux builds.
A new NVGPU_GPU_FLAGS_SUPPORT_REMAP characteristics flag is added for use
in detecting when REMAP support is available.
When a VM allocation tagged with NVGPU_VM_AREA_ALLOC_SPARSE is made,
the base virtual memory pool resources are allocated. Per-page
resources are
later allocated when the NVGPU_AS_IOCTL_REMAP ioctl is issued. All REMAP
resources are released when the corresponding VM area is freed.
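A minimal userspace sketch of driving the new ioctl follows; the
struct layout and flag names here are assumptions for illustration,
and the installed nvgpu UAPI header is authoritative:

    /* Hypothetical field/flag names; submit one map op and one
     * unmap op in a single NVGPU_AS_IOCTL_REMAP call.          */
    struct nvgpu_as_remap_op ops[2] = { {0}, {0} };
    struct nvgpu_as_remap_args args = {0};

    ops[0].flags = NVGPU_AS_REMAP_OP_FLAGS_MAP;
    ops[1].flags = NVGPU_AS_REMAP_OP_FLAGS_UNMAP;

    args.ops = (__u64)(uintptr_t)ops;
    args.num_ops = 2;

    if (ioctl(as_fd, NVGPU_AS_IOCTL_REMAP, &args) != 0)
        perror("NVGPU_AS_IOCTL_REMAP");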
Jira NVGPU-6804
Change-Id: I1f2cdc0c06c1698a62640c1c6fbcb2f9db24a0bc
Signed-off-by: scottl <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2542178
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Modify NVGPU_GPU_IOCTL_ALLOC_AS and struct nvgpu_alloc_as_args to
accept the start address and size of user memory. This allows
configurable address space allocation.
- Modify gk20a_as_alloc_share() and gk20a_vm_alloc_share() to receive
va_range_start and va_range_end values.
- gk20a_vm_alloc_share() initializes vm with low_hole = va_range_start,
and user vma size = (va_range_end - va_range_start).
- Modify nvgpu_as_alloc_space_args and nvgpu_as_free_space_args to
accept a 64-bit number of pages.
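A hedged userspace sketch of the extended allocation (field names are
assumed from the description above rather than copied from the UAPI
header):

    struct nvgpu_alloc_as_args args = {0};

    args.big_page_size = 64U * 1024U;   /* 64 KiB big pages        */
    args.va_range_start = 1ULL << 20;   /* user VA starts at 1 MiB */
    args.va_range_end = 1ULL << 37;     /* 128 GiB end of user VA  */

    if (ioctl(ctrl_fd, NVGPU_GPU_IOCTL_ALLOC_AS, &args) == 0)
        as_fd = args.as_fd;  /* fd for the new address space */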
Bug 2043269
JIRA NVGPU-5302
Change-Id: I243995adf5b7e0e84d6b36abe3b35a5ccabd7a37
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2385496
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add macros for whitelisting coverity violations. These macros use pragma
directives. The pragma directives and whitelisting macros are only
enabled when a coverity scan is being run.
The whitelisting macros have been added to a new header called
static_analysis.h. The contents of safe_ops.h (CERT C safe ops) have
been moved into static_analysis.h because this will be the new header
for static analysis related macros/defines/etc.
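As an illustrative sketch of the approach (the macro names and exact
pragma text are assumptions; the real definitions live in
static_analysis.h), a whitelisting macro expands to a pragma only when
the scan is running:

    #if defined(NV_IS_COVERITY)
    #define NVGPU_COV_STR(x)     #x
    #define NVGPU_COV_PRAGMA(x)  _Pragma(NVGPU_COV_STR(x))
    /* Emit e.g. a 'coverity compliance' pragma at the call site. */
    #define NVGPU_COV_WHITELIST(action, checker, comment) \
            NVGPU_COV_PRAGMA(coverity compliance action checker comment)
    #else
    /* Outside a coverity scan the macro compiles away to nothing. */
    #define NVGPU_COV_WHITELIST(action, checker, comment)
    #endif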
JIRA NVGPU-3820
Change-Id: I9c63f20f670880b420415535738034619314b7c3
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2180600
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rule 11.3 forbids casting a pointer between two different object types.
Rule 13.5 doesn't allow the right-hand operand of a logical operator to
have persistent side effects.
This patch fixes violations of these rules in nvgpu.common.mm.
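Typical fix patterns for the two rules, with placeholder names rather
than the actual nvgpu.common.mm call sites:

    /* Rule 13.5: hoist a side-effecting call out of the right-hand
     * side of '&&' so the operand has no persistent side effects. */
    bool fault_pending = poll_fault_status(g);

    if (enabled && fault_pending) {
        handle_fault(g);
    }

    /* Rule 11.3: avoid casting between unrelated object pointer
     * types; where a conversion is needed, go through 'void *'.  */
    void *priv = entry;
    struct mapped_buf_priv *buf = priv;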
Jira NVGPU-3864
Change-Id: I08b7fb4d3fb623f14f8760a50648b39b3e53b233
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2168522
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
INT30-C requires that unsigned integer operations do not wrap.
INT31-C requires checking that data isn't misinterpreted after casting.
INT32-C requires that signed operations do not overflow.
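A sketch of the checking style these rules call for (helper names are
illustrative; nvgpu keeps similar helpers in its safe-ops header):

    static inline u64 checked_add_u64(u64 a, u64 b)
    {
        /* INT30-C: refuse to let unsigned arithmetic wrap. */
        if ((U64_MAX - a) < b) {
            BUG();
        }
        return a + b;
    }

    static inline u32 checked_cast_u64_to_u32(u64 v)
    {
        /* INT31-C: the value must survive the narrowing cast. */
        if (v > (u64)U32_MAX) {
            BUG();
        }
        return (u32)v;
    }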
Jira NVGPU-3882
Change-Id: I6b4c1769ec85919f8ec2aa183cba3b7c0ffa1e97
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2166124
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following MISRA 10.4 violation is reported in nvgpu.common.mm.vm_area:
${TEGRA_TOP}/kernel/nvgpu/drivers/gpu/nvgpu/common/mm/vm_area.c:234:
misra_violation: The condition clause expression of the for loop has
persistent side-effects.
Fix this by replacing with a while loop.
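The fix follows the usual pattern, sketched here with placeholder
names:

    /* Before: the for-loop condition both tested and advanced state,
     *   for (...; (buf = pop_buffer(vm_area)) != NULL; ...)
     * After: the side effect moves into the loop body.             */
    buf = pop_buffer(vm_area);
    while (buf != NULL) {
        unmap_buffer(vm, buf);
        buf = pop_buffer(vm_area);
    }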
Jira NVGPU-3330
Change-Id: Ica6882d6c73dc0d74159f34279d8f91b7494c65c
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2117059
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are many miscellaneous HALs for various MM related functionality.
This patch aims to migrate all the remaining MM code from the <chip>/
mm_<chip>.[ch] files into HAL files under hal/.
Much of this is fairly straightforward copy/paste and updates to the
HAL init files.
The exception to that is the move of the left-over gv11b MMU fault
handling code in mm_gv11b.c. Having both a hal/mm/mm/mm_gv11b.c and
a gv11b/mm_gv11b.c file causes tmake to choke, so the gv11b/mm_gv11b.c
file was moved to gv11b/mmu_fault_gv11b.c. This will be cleaned up in
a subsequent patch.
JIRA NVGPU-2042
Change-Id: I12896de865d890a61afbcb71159cff486119ffb8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make a hal/mm/gmmu sub-unit for the GMMU HAL code. Also move the
gk20a specific HAL code there. gp10b will happen in the next patch.
This change also updates all the GMMU related HAL usage, of which
there is quite a bit. Generally the only change is that a .gmmu needs
to be inserted into the HAL path. Each HAL init was also updated.
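The mechanical update looks like this (op names and arguments are
illustrative):

    err = g->ops.mm.gmmu_map(vm, sgt, size);   /* before */
    err = g->ops.mm.gmmu.map(vm, sgt, size);   /* after  */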
JIRA NVGPU-2042
Change-Id: I6c46bdfddb8e021f56103d9457fb3e2a226f8947
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the result of a bitwise operation is used as a
boolean in the controlling expression of if and loop statements.
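The typical before/after, using one of the VM flags as an example:

    /* Before (violates 14.4: a bitwise result used as a boolean): */
    if (flags & NVGPU_VM_MAP_FIXED_OFFSET) {
        /* ... */
    }

    /* After: compare the result against zero explicitly. */
    if ((flags & NVGPU_VM_MAP_FIXED_OFFSET) != 0U) {
        /* ... */
    }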
JIRA NVGPU-1020
Change-Id: I6a756ee1bbb45d43f424d2251eebbc26278db417
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936334
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable as
a boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
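The corresponding before/after for plain variables:

    /* Before (violates 14.4): */
    if (num_pages) { /* ... */ }
    if (mapped_buf) { /* ... */ }

    /* After: make each comparison explicit. */
    if (num_pages != 0U) { /* ... */ }
    if (mapped_buf != NULL) { /* ... */ }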
JIRA NVGPU-1022
Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The issues are:
1. Non-fixed allocs must take into account explicit PTE size
requests. Previously the PTE size was determined from the
allocation size, which was incorrect. To do this, the PTE size
is now plumbed through all GPU VA allocations. This is what
the new alloc_pte() op does.
2. Fix buddy PTE size assignment (see the sketch after this list).
This changes a '<=' into a '<' in the buddy allocation logic.
Effectively this now leaves the PTE size for buddy blocks equal
to the PDE block size as 'ANY'.
This prevents a buddy block of PDE size which has yet to be
allocated from having a specific PTE size. Without this it's
possible to do a fixed alloc that fails unexpectedly due to
mismatching PTE sizes.
Consider two PDE block sized fixed allocs that are contained
in one buddy twice the size of a PDE block. Let's call these
fixed allocs S and B (small and big). Let's assume that two
fixed allocs are done, each targeting S and B, in that order.
With the current logic the first alloc, when we create the
two buddies S and B, causes both S and B to have a PTE size of
SMALL. Now when the second alloc happens we attempt to find
a buddy B with a PTE size of either BIG or ANY. But we cannot
because B already has size SMALL. This causes us to appear
to have a conflicting fixed alloc despite this not being
the case.
3. Misc cleanups & bug fixes:
- Clean up some MISRA issues
- Delete an extraneous unlock that could have caused a
deadlock.
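A sketch of the comparison change from issue 2 (variable and macro
names approximate the buddy allocator rather than quoting it):

    /* Before, '<=' also pinned a PDE-sized buddy to a concrete PTE
     * size. After, only buddies strictly smaller than the PDE block
     * size inherit one; a PDE-sized buddy stays 'ANY' until it is
     * actually allocated.                                          */
    if (buddy->order < pde_blk_order) {
        buddy->pte_size = pte_size;
    } else {
        buddy->pte_size = BALLOC_PTE_SIZE_ANY;
    }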
Bug 200105199
Change-Id: Ib5447ec6705a5a289ac0cf3d5e90c79b5d67582d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768582
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals gives operands the same
essential type when an arithmetic operation is performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
In balloc_get_order_list() the argument "int order" has been changed to
a u64 because all callers of this function pass a u64 argument.
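For example, where num_pages is unsigned:

    /* Before: the signed literal '1' mixes essential types. */
    end = start + num_pages - 1;
    /* After: */
    end = start + num_pages - 1U;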
JIRA NVGPU-992
Change-Id: Ie2964f9f1dfb2865a9bd6e6cdd65e7cda6c1f638
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1784419
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changed the enum gmmu_pgsz_gk20a into macros and updated all
instances of it.
The enum gmmu_pgsz_gk20a was being used in for loops, where it was
compared with an integer. This violates MISRA rule 10.4, which only
allows arithmetic operations on operands of the same essential type
category. Changing this enum into macros fixes this violation.
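Illustratively (the macro names are assumed to follow this shape):

    /* The enumerators become unsigned macros, so loop counters and
     * bounds share an essential type category.                    */
    #define GMMU_PAGE_SIZE_SMALL   0U
    #define GMMU_PAGE_SIZE_BIG     1U
    #define GMMU_PAGE_SIZE_KERNEL  2U
    #define GMMU_NR_PAGE_SIZES     3U

    for (pgsz = GMMU_PAGE_SIZE_SMALL; pgsz < GMMU_NR_PAGE_SIZES; pgsz++) {
        /* ... */
    }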
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When nvgpu maps an nvgpu_mem struct, the nvgpu driver has a choice of
either using a fixed or non-fixed mapping. For non-fixed mappings the
GMMU APIs allocate a VA space for the caller. In that case the GMMU
APIs must also free that VA range when nvgpu unmaps the nvgpu_mem.
For fixed mappings the GMMU APIs must instead not manage the lifetime
of the VA space. To support these two possibilities, add a field to
nvgpu_mem that specifies whether the GMMU APIs must or must not free
the GPU VA range during the GMMU unmap operation.
Also fix a case in the nvgpu vm_area code that would double-free a
VA allocation (sparse allocs).
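A sketch of the new field in use (the field and helper names are
assumptions based on the description above):

    /* Non-fixed map: the GMMU APIs picked the VA, so they own it. */
    mem->gpu_va = gpu_va;
    mem->free_gpu_va = !fixed_mapping;

    /* Later, in the GMMU unmap path: */
    if (mem->free_gpu_va) {
        __nvgpu_vm_free_va(vm, mem->gpu_va, pgsz_idx);
    }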
Change-Id: Idc32dbb8208fa7c1c05823e67b54707fea51c6b7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a translation layer to convert from the NVGPU_AS_* flags to
the new set of NVGPU_VM_MAP_* and NVGPU_VM_AREA_ALLOC_* flags.
This allows the common MM code to not depend on the UAPI header
defined for Linux.
In addition to this change a couple of other small changes were
made:
1. Deprecate, print a warning, and ignore usage of the
NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS flag.
2. Move the t19x IO coherence flag from the t19x UAPI header
to the regular UAPI header.
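A sketch of what the translation layer might look like (the function
name is assumed; the UAPI flag names shown are from the existing
header):

    static u32 nvgpu_vm_translate_as_flags(struct gk20a *g, u32 flags)
    {
        u32 core_flags = 0U;

        if ((flags & NVGPU_AS_MAP_BUFFER_FLAGS_FIXED_OFFSET) != 0U) {
            core_flags |= NVGPU_VM_MAP_FIXED_OFFSET;
        }
        if ((flags & NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE) != 0U) {
            core_flags |= NVGPU_VM_MAP_CACHEABLE;
        }
        if ((flags & NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS) != 0U) {
            nvgpu_warn(g, "MAPPABLE_COMPBITS is deprecated; ignoring");
        }

        return core_flags;
    }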
JIRA NVGPU-293
Change-Id: I146402b0e8617294374e63e78f8826c57cd3b291
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599802
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Re-organize the unmap code to be better split between OS specific
requirements and common core requirements. The new code flow works
as follows:
nvgpu_vm_unmap()
Is the primary entrance to the unmap path. It takes a VM and a GPU
virtual address to unmap. There's also an optional batch mapping
struct.
This function is responsible for making sure there is a real buffer
and that if it's being called on a fixed mapping then the mapping
will definitely be freed (since buffers are ref-counted). Then this
function decrements the ref-count and returns.
If the ref-count hits zero then __nvgpu_vm_unmap_ref() is called
which just calls __nvgpu_vm_unmap() with the relevant batch struct
if present. This is where the real work is done. __nvgpu_vm_unmap()
clears the GMMU mapping, removes the mapped buffer from the various
lists and trees it may be in and then calls the
nvgpu_vm_unmap_system() function. This function handles any OS
specific stuff and must be defined by all VM OS implementations.
There's a shortcut used by some other core VM code to free
mappings without going through nvgpu_vm_unmap(). Mostly they just
directly decrement the mapping ref-count, which can then call
__nvgpu_vm_unmap_ref() if the ref-count hits zero.
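A simplified skeleton of the entry point described above (locking,
batch handling, and the fixed-mapping wait are elided; helper names
follow the text):

    void nvgpu_vm_unmap(struct vm_gk20a *vm, u64 offset,
                        struct vm_gk20a_mapping_batch *batch)
    {
        struct nvgpu_mapped_buf *mapped_buf;

        mapped_buf = __nvgpu_vm_find_mapped_buf(vm, offset);
        if (mapped_buf == NULL) {
            return;  /* no real buffer mapped at this VA */
        }

        /* Drop the ref; at zero, __nvgpu_vm_unmap_ref() runs and
         * does the real work via __nvgpu_vm_unmap().             */
        nvgpu_ref_put(&mapped_buf->ref, __nvgpu_vm_unmap_ref);
    }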
JIRA NVGPU-30
JIRA NVGPU-71
Change-Id: Ic626d37ab936819841bab45214f027b40ffa4e5a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583982
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Refactor the last nvgpu_vm functions from the mm_gk20a.c code. This
removes some usages of dma_buf from the mm_gk20a.c code, too, which
helps make mm_gk20a.c less Linux specific.
Also delete some header files that are no longer necessary in
gk20a/mm_gk20a.c which are Linux specific. The mm_gk20a.c code is now
quite close to being Linux free.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566629
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This patch begins the major rework of the GPU's virtual memory manager
(VMM). The VMM is the piece of code that handles the userspace interface
to buffers and their mappings into the GMMU. The core data structure is
the VM - for now still known as 'struct vm_gk20a'. Each one of these
structs represents one address space to which channels or TSGs may
bind themselves.
The VMM splits the interface up into two broad categories. First there
are the common, OS independent interfaces; second, there are the OS
specific interfaces.
OS independent
--------------
This is the code that manages the lifetime of VMs and of the buffers
inside VMs: creation, destruction, search, batch mapping, etc.
OS Specific
-----------
This handles mapping of buffers as they are represented by the OS
(dma_bufs, for example, on Linux).
This patch is by no means complete. There are still Linux specific
functions scattered in ostensibly OS independent code. This is the
first step. A
patch that rewrites everything in one go would simply be too big to
effectively review.
Instead the goal of this change is to simply separate out the basic
OS specific and OS agnostic interfaces into their own header files. The
next series of patches will start to pull the relevant implementations
into OS specific C files and common C files.
JIRA NVGPU-12
JIRA NVGPU-30
Change-Id: I242c7206047b6c769296226d855b7e44d5c4bfa8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464939
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>