Fix the function parameters of nvgpu_vm_map() for MISRA, which does
not allow implicit assignment of objects to a narrower or different
essential type. This fixes a few enum violations.
JIRA NVGPU-1584
Change-Id: I2353f7501c3326f792f5942b2e247badf03349cf
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1986509
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that replace the references to container_of()
in vm.c:
* nvgpu_mapped_buf_from_ref
* vm_gk20a_from_ref
The implementation of the following routine has also been updated
using the same approach to eliminate the direct reference to
container_of():
* gk20a_from_as
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code, and it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
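A minimal sketch of the pattern described above, assuming the struct
and field names; the real nvgpu_mapped_buf_from_ref() may differ in
detail:

    #include <stddef.h>   /* offsetof */
    #include <stdint.h>   /* uintptr_t */

    struct nvgpu_ref { int dummy; };            /* stand-in type */
    struct nvgpu_mapped_buf {
        int other_fields;
        struct nvgpu_ref ref;                   /* assumed member name */
    };

    /*
     * Same approach as the nvgpu_list_node/nvgpu_rbtree_node helpers:
     * go through uintptr_t instead of casting pointer-to-pointer
     * directly.  The integer-to-pointer cast is the remaining advisory
     * Rule 11.4 violation mentioned above.
     */
    static struct nvgpu_mapped_buf *nvgpu_mapped_buf_from_ref(
            struct nvgpu_ref *ref)
    {
        return (struct nvgpu_mapped_buf *)
            ((uintptr_t)ref - offsetof(struct nvgpu_mapped_buf, ref));
    }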
JIRA NVGPU-782
Change-Id: I530f8d0cbe37445535b82578953eb5193ccf682c
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989570
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals or casting operands
to have same type of operands when an arithmetic operation is
performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
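An illustrative example of the kind of change (not taken from the
actual diff):

    #include <stdint.h>

    static uint32_t pte_mask(uint32_t flags, uint32_t shift)
    {
        /* Before the fix the literal was plain '1', a signed int, so
         * the '&' mixed essential type categories.  The 'U' suffix
         * keeps the whole expression unsigned, as Rule 10.4 requires. */
        return flags & (1U << shift);
    }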
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a flag that lets userspace enable the unified VM functionality
on a selective basis. This feature works for all cases except
a single MODS trace. This will allow test coverage to be selectively
added in certain userspace tests as well to help prevent this feature
from bit rotting (as it has historically done).
Also update the unit test for the page table management in the GMMU
to reflect this new flag. It's been set to false since the target
platform for safety is currently not using unified address spaces.
Bug 200438879
Change-Id: Ibe005472910d1668e8372754be8dd792773f9d8c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951864
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split the nvgpu_sgt code out from the nvgpu_mem code. Although the
two chunks of code are related, the SGT code is distinct and as
such should be its own unit. To do this a new source file has been
added - nvgpu_sgt.c - which contains all the nvgpu_sgt common APIs.
These are the facade APIs to abstract the actual details of how any
given nvgpu_sgt is actually implemented.
An abstract unit - nvgpu_sgt_os - was also defined. This unit
exists solely for the nvgpu_sgt unit to call so that the OS
specific nvgpu_sgt_os_create_from_mem() API can be moved from the
common nvgpu_sgt unit. Note this also updates the name of what the
OS specific units are expected to call. Common code may still use
the generic nvgpu_sgt_create_from_mem() API.
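A sketch of the resulting layering, with the argument lists assumed
rather than copied from the real headers:

    struct gk20a;
    struct nvgpu_mem;
    struct nvgpu_sgt;

    /* Implemented per OS in the new nvgpu_sgt_os unit. */
    struct nvgpu_sgt *nvgpu_sgt_os_create_from_mem(struct gk20a *g,
                                                   struct nvgpu_mem *mem);

    /* Common facade in nvgpu_sgt.c; common code keeps calling this. */
    struct nvgpu_sgt *nvgpu_sgt_create_from_mem(struct gk20a *g,
                                                struct nvgpu_mem *mem)
    {
        return nvgpu_sgt_os_create_from_mem(g, mem);
    }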
JIRA NVGPU-1391
Change-Id: I37f5b2bbf9f84c0fb6bc296c3e04ea13518bd4d0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946012
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the result of a bitwise operation is used as a
boolean in the controlling expression of if and loop statements.
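A minimal example of the pattern (the flag name is hypothetical):

    #include <stdint.h>

    #define MAP_FLAG_CACHEABLE 0x1U   /* illustration only */

    static void handle_flags(uint32_t flags)
    {
        /* Before: if (flags & MAP_FLAG_CACHEABLE)  -- integer as boolean.
         * After: compare the bitwise result explicitly against 0U. */
        if ((flags & MAP_FLAG_CACHEABLE) != 0U) {
            /* ... */
        }
    }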
JIRA NVGPU-1020
Change-Id: I6a756ee1bbb45d43f424d2251eebbc26278db417
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936334
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of non-boolean variable as
boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
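A minimal example of the pattern:

    #include <stdint.h>

    static void drain_example(uint32_t pending)
    {
        /* Before: while (pending)  -- non-boolean used as boolean.
         * After: make the comparison explicit. */
        while (pending != 0U) {
            pending--;
        }
    }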
JIRA NVGPU-1022
Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fixed the erroneous timeout message in sync-unmap and corrected the
condition for returning ETIMEDOUT.
In the original codeflow, after waiting for mapped_buffer release,
nvgpu_timeout_expired is called to check whether to return ETIMEDOUT.
However, if there is a delay between the end of the wait and the
nvgpu_timeout_expired check, ETIMEDOUT is returned and the timeout
message is printed even if the mapped_buffer has already been
released. This is incorrect behavior. This patch fixes it by letting
the refcount of the mapped_buffer be the only source used to
determine the return value.
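A sketch of the corrected decision, with the refcount shown as a plain
parameter instead of the real nvgpu_ref it is read from:

    #include <errno.h>

    static int unmap_wait_result(int refcount_after_wait)
    {
        /* The return value depends only on whether the buffer was in
         * fact released, not on whether the timeout object happens to
         * have expired by the time we check. */
        return (refcount_after_wait == 0) ? 0 : -ETIMEDOUT;
    }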
Bug 200434475
Change-Id: I8ca170c811da415c24045ab643da26476bc7463c
Signed-off-by: Kyle Guo <kyleg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945388
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or change the function to
return void. This patch fixes all 17.7 violations involving standard C
functions in common code.
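An illustrative example of the pattern used when the return value is
genuinely not needed:

    #include <string.h>

    static void clear_buffer(char *dst, size_t dst_len)
    {
        /* An explicit (void) cast documents that the return value of
         * the standard C function is intentionally unused. */
        (void) memset(dst, 0, dst_len);
    }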
JIRA NVGPU-1036
Change-Id: Id6dea92df371e71b22b54cd7a521fc22812f9b69
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929899
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update the PD cache code to handle 64KB pages. To do this the
number of partial/full lists is expanded for when we have 64K
pages. Currently we only explicitly support 4K and 64K page
sizes. Other page sizes (16K for example) will fail compilation
during preprocessing.
This change also cleans up the definitions for some of the
internal structs. They have been moved into pd_cache.c since
they are not used outside of pd_cache.c.
This allows the following functions to be removed from the global
context:
__nvgpu_pd_cache_alloc_direct()
__nvgpu_pd_cache_free_direct()
They have been replaced by calls to nvgpu_pd_{alloc,free}().
The nvgpu_pd_mem_entry alloc_map also had to be expanded to a
real bitmap: 32 or 64 bits is not sufficient for packing 256-byte
PDs into a 64K page (there are 256 PDs per nvgpu_pd_mem_entry
in that case). To prevent doing too many find_first_zero
operations on the bitmap, an 'allocs' field was also added which
tracks how many allocs are done. We can use this instead of
comparing a mask against the bitmap to determine if an
nvgpu_pd_mem_entry is full.
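A sketch of the idea; the field names and sizes are assumptions, not
copied from pd_cache.c:

    #include <stdbool.h>
    #include <stdint.h>

    #define PDS_PER_ENTRY 256U   /* 64K page / 256-byte PDs, as above */

    struct pd_mem_entry_sketch {
        unsigned long alloc_map[PDS_PER_ENTRY /
                                (8U * sizeof(unsigned long))];
        uint32_t allocs;         /* number of PDs currently allocated */
    };

    /* "Full" is decided from the counter, not by comparing a mask
     * against the bitmap, which keeps the common-case check cheap. */
    static bool pd_mem_entry_full(const struct pd_mem_entry_sketch *e,
                                  uint32_t pds_per_entry)
    {
        return e->allocs >= pds_per_entry;
    }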
Note: there's still a limitation with the TLB invalidate code:
it simply assumes an nvgpu_mem maps one-to-one to a PDB. This means
we can't invalidate a PDB allocated at an offset greater than 0
in an nvgpu_pd_mem_entry. This in turn means we must always use a
full page size for a context's PDB.
Bug 1977822
Change-Id: I6a7a3a95b7c902bc6487cba05fde58fbc4a25da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1718755
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule-14.4 doesn't allow the usage of function pointers & integer
types as booleans in the controlling expression of an if statement or
an iteration statement.
Fix violations where a function pointer or a function whose return
value is an integer, is used as a boolean in the controlling expression
of if and loop statements.
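A minimal example covering both cases (the ops struct is hypothetical):

    #include <stddef.h>

    struct ops_sketch {
        int (*init_hw)(void);    /* illustration only */
    };

    static void call_if_present(struct ops_sketch *ops)
    {
        /* Before: if (ops->init_hw)    -- function pointer as boolean
         *         if (ops->init_hw())  -- int return value as boolean
         * After: both comparisons are explicit. */
        if (ops->init_hw != NULL) {
            if (ops->init_hw() != 0) {
                /* handle error */
            }
        }
    }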
JIRA NVGPU-1021
Change-Id: Ic5336268394ba4396ce80744c25930d2fb44dc42
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932147
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fixed the timeout checking flow in sync-unmap where the timeout
message could be printed even though no timeout actually occurred.
The original codeflow doesn't check the refcount of a mapped_buffer
again after the 10ms wait, so if the buffer is released during the
last round of waiting, the timeout message is still printed. The new
codeflow combines the refcount and timeout checking so that there
cannot be an inconsistency between the two.
Bug 200434475
Change-Id: Id0a0aebcb24906a1ec7113e669227788e729564b
Signed-off-by: Kyle Guo <kyleg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1930236
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch moves the increment and decrement of the user mapped
buffer count to the insert/remove mapped buffer functions since
this value should only ever change when these functions are called.
Bug 200105199
Change-Id: I5b0a86d00e9e948c48e313153a668eb2e10fca49
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917791
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Nvgpu uses many ways to check if syncpoints are enabled. The four
ways used to be:
platform->has_syncpoints
g->has_syncpoints
nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS)
gk20a_platform_has_syncpoints()
This patch standardizes all usage to now be nvgpu_has_syncpoints()
which is based on gk20a_platform_has_syncpoints() - just renamed to
be general to nvgpu.
All usage of the other forms has now been consolidated. However,
under the hood nvgpu_has_syncpoints() does check the is_enabled
flag. This flag is now set where g->has_syncpoints used to be set
based on the platform data.
The basic dependency chain is this:
nvgpu_has_syncpoints -> NVGPU_HAS_SYNCPOINTS ->
platform->has_syncpoints
However, note: there are several places where syncpoints can be
disabled if some other driver initialization fails (for ex. host1x).
Also note that nvgpu_has_syncpoints() also considers a disable
variable set by debugfs.
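A sketch of what the consolidated check boils down to; the field names
here are stand-ins, not the real ones:

    #include <stdbool.h>

    struct gk20a_sketch {
        bool syncpoints_enabled;  /* seeded from platform->has_syncpoints */
        bool disable_syncpoints;  /* debugfs override */
    };

    static bool nvgpu_has_syncpoints_sketch(struct gk20a_sketch *g)
    {
        /* The enable flag may be cleared later if e.g. host1x init
         * fails; the debugfs knob can force syncpoints off on top of
         * that. */
        return g->syncpoints_enabled && !g->disable_syncpoints;
    }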
Bug 2327574
Change-Id: Ia2375a80f5f2e27285e6175568dd13e6bb25fd33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803975
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
Fixes for all the pd_cache functions that use '__' prefixes. This
was trivial: the '__' prefix was simply deleted.
JIRA NVGPU-1029
Change-Id: Ia91dabe3ef97fb17a2a85105935fb3a72d7c2c5e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813643
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
Handle the 21.2 fixes for nvgpu_mem.c and mm.c; this deletes the
'__' prefixes and slightly renames the __nvgpu_aperture_mask()
function since there's a coherent version and a general version.
Change-Id: Iee871ad90db3f2622f9099bd9992eb994e0fbf34
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813623
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
This fixes places in the public allocator APIs. This consists of
the various init routines which are used to create an allocator
and the debug macro used within the allocator code.
The buddy allocator was handled by collapsing the internal
'__' prepended version with the non-prefixed version. The only
required change was in the page_allocator code which now had to
pass in a NULL vm pointer (since the VM is not needed for managing
VIDMEM).
JIRA NVGPU-1029
Change-Id: I484a144e61789bf594c525c1ca307b96d120830f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813578
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The issues are:
1. Non-fixed allocs must take into account explicit PTE size
requests. Previously the PTE size was determined from the
allocation size, which was incorrect. To do this, the PTE size
is now plumbed through all GPU VA allocations. This is what
the new alloc_pte() op does.
2. Fix buddy PTE size assignment. This changes a '<=' into a
'<' in the buddy allocation logic. Effectively this now leaves
the PTE size for buddy blocks equal in size to the PDE block
size as 'ANY'.
This prevents a buddy block of PDE size which has yet to be
allocated from having a specific PTE size. Without this it's
possible to do a fixed alloc that fails unexpectedly due to
mismatching PTE sizes (see the sketch after this list).
Consider two PDE block sized fixed allocs that are contained
in one buddy twice the size of a PDE block. Let's call these
fixed allocs S and B (small and big). Let's assume that two
fixed allocs are done, each targeting S and B, in that order.
With the current logic the first alloc, when we create the
two buddies S and B, causes both S and B to have a PTE size of
SMALL. Now when the second alloc happens we attempt to find
a buddy B with a PTE size of either BIG or ANY. But we cannot
because B already has size SMALL. This causes us to appear
to have a conflicting fixed alloc despite this not being
the case.
3. Misc cleanups & bug fixes:
- Clean up some MISRA issues
- Delete an extraneous unlock that could have caused a
deadlock.
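A sketch of the condition change in fix (2); the names are assumptions
and not the real buddy allocator code:

    #include <stdint.h>

    #define SKETCH_PTE_SIZE_ANY 0U    /* stands in for the 'ANY' state */

    static uint32_t buddy_pte_size_for_order(uint64_t order,
                                             uint64_t pte_blk_order,
                                             uint32_t requested_pte_size)
    {
        /* Was '<=': a buddy exactly at the PDE block size picked up a
         * concrete PTE size on the first split.  With '<' it stays
         * ANY, so a later fixed alloc asking for the other PTE size
         * is not rejected spuriously. */
        return (order < pte_blk_order) ? requested_pte_size
                                       : SKETCH_PTE_SIZE_ANY;
    }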
Bug 200105199
Change-Id: Ib5447ec6705a5a289ac0cf3d5e90c79b5d67582d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768582
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
Fix is either to use the return value or change the function to return
void. This patch contains fixes for calls to nvgpu_mutex_init and
improves the related error handling.
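An illustrative pattern for the call sites, assuming the int-returning
nvgpu_mutex_init()/nvgpu_mutex_destroy() prototypes:

    struct nvgpu_mutex;
    int nvgpu_mutex_init(struct nvgpu_mutex *m);
    void nvgpu_mutex_destroy(struct nvgpu_mutex *m);

    static int init_two_locks(struct nvgpu_mutex *a, struct nvgpu_mutex *b)
    {
        int err = nvgpu_mutex_init(a);

        if (err != 0) {
            return err;                 /* propagate instead of ignoring */
        }
        err = nvgpu_mutex_init(b);
        if (err != 0) {
            nvgpu_mutex_destroy(a);     /* unwind what already succeeded */
        }
        return err;
    }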
JIRA NVGPU-677
Change-Id: I609fa138520cc7ccfdd5aa0e7fd28c8ca0b3a21c
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1805598
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals to have same type of
operands when an arithmetic operation is performed.
This fix violations where an arithmetic operation is performed on
signed and unsigned int types.
In balloc_get_order_list() the argument "int order" has been changed to
a u64 because all callers of this function pass a u64 argument.
JIRA NVGPU-992
Change-Id: Ie2964f9f1dfb2865a9bd6e6cdd65e7cda6c1f638
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1784419
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changed the enum gmmu_pgsz_gk20a into macros and changed all the
instances of it.
The enum gmmu_pgsz_gk20a was being used in for loops, where it was
compared with an integer. This violates MISRA rule 10.4, which only
allows arithmetic operations on operands of the same essential type
category. Changing this enum into macros fixes these violations.
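A sketch of the shape of the change; the macro names are illustrative:

    #include <stdint.h>

    #define GMMU_PAGE_SIZE_SMALL   0U
    #define GMMU_PAGE_SIZE_BIG     1U
    #define GMMU_PAGE_SIZE_KERNEL  2U
    #define GMMU_NR_PAGE_SIZES     3U

    static void walk_page_sizes(void)
    {
        uint32_t pgsz;

        /* With the enum, 'pgsz < GMMU_NR_PAGE_SIZES' compared an enum
         * against an integer (Rule 10.4); with unsigned macros both
         * operands share the same essential type category. */
        for (pgsz = GMMU_PAGE_SIZE_SMALL; pgsz < GMMU_NR_PAGE_SIZES;
             pgsz++) {
            /* ... */
        }
    }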
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_semaphore_pool_alloc() returns an ERR_PTR instead of NULL, which
the caller checks on failure. Common code should not use ERR_PTRs,
though, so modify nvgpu_semaphore_pool_alloc() to return the error code
separately and fix nvgpu_init_sema_pool() to account for this.
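A sketch of the new calling convention; the parameter types are
assumptions:

    #include <stddef.h>

    struct nvgpu_semaphore_sea;
    struct nvgpu_semaphore_pool;

    int nvgpu_semaphore_pool_alloc(struct nvgpu_semaphore_sea *sea,
                                   struct nvgpu_semaphore_pool **pool_out);

    static int init_sema_pool_example(struct nvgpu_semaphore_sea *sea)
    {
        struct nvgpu_semaphore_pool *pool = NULL;
        /* The error code now travels separately from the pointer, so
         * the caller never decodes an ERR_PTR in common code. */
        int err = nvgpu_semaphore_pool_alloc(sea, &pool);

        if (err != 0) {
            return err;
        }
        /* ... use pool ... */
        return 0;
    }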
Jira NVGPU-513
Change-Id: I435c0d2794d226774ed4c6b3bcbdde1e741854d8
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673458
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a userspace API, NVGPU_AS_IOCTL_GET_SYNC_RO_MAP, to get the
read-only syncpoint address map in userspace.
We already map the whole syncpoint shim into each address space, with
the base address being vm->syncpt_ro_map_gpu_va.
This new API exposes this base GPU VA address of the syncpoint map, and
the unit size of each syncpoint, to userspace.
Userspace can then calculate the address of each syncpoint as
syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)
Note that this syncpoint address is read-only, and should only be used
for inserting semaphore acquires.
Adding a semaphore release with this address would result in an
MMU_FAULT.
Define a new HAL, g->ops.fifo.get_sync_ro_map, and set it for all GPUs
supported on the Xavier SoC.
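The calculation above, written out (the helper name is illustrative):

    #include <stdint.h>

    static uint64_t syncpt_ro_gpu_va(uint64_t base_gpu_va,
                                     uint32_t syncpoint_id,
                                     uint32_t syncpoint_unit_size)
    {
        /* syncpoint_address = base_gpu_va +
         *                     (syncpoint_id * syncpoint_unit_size) */
        return base_gpu_va +
               ((uint64_t)syncpoint_id * syncpoint_unit_size);
    }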
Bug 200327559
Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649365
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Lots of code paths were split into T19x-specific code paths and structs
due to the split repository. Now that the repositories are merged, fold
all of them back into the main code paths and structs and remove the
T19x-specific Kconfig flag.
Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a field in vm_gk20a to identify a guest managed VM, with the
corresponding checks to ensure that there's no kernel section for
guest managed VMs.
Also make the __nvgpu_vm_init function available globally, so that
the vm can be allocated elsewhere, requisite fields set, and passed
to the function to initialize the vm.
Change-Id: Iad841d1b8ff9c894fe9d350dc43d74247e9c5512
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1617171
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an alignment check for compressible-kind fixed-address
mappings. If we're using a page size smaller than the comptag line
coverage window, the GPU VA and the physical buffer offset must be
aligned with respect to that window.
Bug 1995897
Bug 2011640
Bug 2011668
Change-Id: If68043ee2828d54b9398d77553d10d35cc319236
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1606439
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a translation layer to convert from the NVGPU_AS_* flags to
the new set of NVGPU_VM_MAP_* and NVGPU_VM_AREA_ALLOC_* flags.
This allows the common MM code to not depend on the UAPI header
defined for Linux.
In addition to this change a couple of other small changes were
made:
1. Deprecate, print a warning, and ignore usage of the
NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS flag.
2. Move the t19x IO coherence flag from the t19x UAPI header
to the regular UAPI header.
JIRA NVGPU-293
Change-Id: I146402b0e8617294374e63e78f8826c57cd3b291
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599802
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>