Changed the enum gmmu_pgsz_gk20a into macros and updated all of its
uses.
The enum gmmu_pgsz_gk20a was used in for loops, where it was compared
with an integer. This violates MISRA rule 10.4, which only allows
arithmetic operations on operands of the same essential type category.
Replacing the enum with macros fixes this violation.
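The shape of the change, as a minimal sketch; the enumerator and macro
names below are recalled from memory and should be treated as
illustrative rather than verbatim:

    /* Before: a loop counter of essential type "signed"/"unsigned" is
     * compared against an enum, i.e. a different essential type
     * category (MISRA C:2012 rule 10.4). */
    enum gmmu_pgsz_gk20a {
            gmmu_page_size_small  = 0,
            gmmu_page_size_big    = 1,
            gmmu_page_size_kernel = 2,
            gmmu_nr_page_sizes    = 3,
    };
    /*   for (i = 0; i < gmmu_nr_page_sizes; i++) { ... }   */

    /* After: plain unsigned macros keep both operands of the
     * comparison in the unsigned essential type category. */
    #define GMMU_PAGE_SIZE_SMALL  0U
    #define GMMU_PAGE_SIZE_BIG    1U
    #define GMMU_PAGE_SIZE_KERNEL 2U
    #define GMMU_NR_PAGE_SIZES    3U

    u32 i;

    for (i = 0U; i < GMMU_NR_PAGE_SIZES; i++) {
            /* per-page-size work */
    }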
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the current code, gk20a.h includes io.h, which thus gets directly
included in a lot of other files. io.h contains functions that take a
struct gk20a as a parameter, leading to a circular dependency between
io.h and gk20a.h. This can be mitigated by removing io.h from gk20a.h,
as part of a larger effort to move gk20a.h to nvgpu/gk20a.h.
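A minimal sketch of the usual way to break such a cycle; the exact
declarations are illustrative, not copied from the tree:

    /* io.h: a forward declaration is enough to name the type, so
     * gk20a.h need not be included here. */
    struct gk20a;

    u32 nvgpu_readl(struct gk20a *g, u32 r);
    void nvgpu_writel(struct gk20a *g, u32 r, u32 v);

Files that actually dereference struct gk20a then include the gk20a
header themselves instead of inheriting it through io.h.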
JIRA NVGPU-597
Change-Id: I93e504fa9371b88152737b342a75580c65e8f712
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1787316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Switch all logging to nvgpu_log*(). The gk20a_dbg* macros are
intentionally left in place because they are used from other
repositories. Because the new functions do not work without a pointer
to struct gk20a, and plumbing one through just for logging is
excessive, some log messages are deleted.
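Roughly, the conversion looks like this (gpu_dbg_info and the exact
helper names are recalled from the nvgpu log API and should be treated
as assumptions):

    /* Before: no device context needed. */
    gk20a_dbg_info("vidmem size 0x%llx", size);

    /* After: the per-device log mask lives in struct gk20a. */
    nvgpu_log(g, gpu_dbg_info, "vidmem size 0x%llx", size);
    /* or the shorthand: */
    nvgpu_log_info(g, "vidmem size 0x%llx", size);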
Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704148
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.
Bug 2070609
Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
When using a coherent DMA API we must make sure to program
any aperture fields with the coherent aperture setting. To
do this the nvgpu_aperture_mask() function was modified to
take a third aperture mask argument, a coherent setting, so
that code can use this function to generate coherent aperture
settings.
The aperture choice is somewhat tricky: the default version
of this function uses the state of the DMA API to determine
internally which aperture to use for SYSMEM, coherent or
non-coherent. Thus a kernel user need only pass the normal
nvgpu_mem struct and the correct mask is chosen. Because many
nvgpu_mem structs are not created directly by the DMA API
wrapper, it's easier to translate SYSMEM to SYSMEM_COH after
creation.
However, the GMMU mapping code will encounter buffers from
userspace with coherency attributes that differ from the DMA
API's. Thus __nvgpu_aperture_mask() respects the aperture
setting passed in regardless of the DMA API state. This
aperture setting is derived from NVGPU_VM_MAP_IO_COHERENT,
since that flag is either passed in from userspace or set by
the kernel when using coherent DMA. The aperture field in
attrs is upgraded to the coherent setting if this flag is set.
This change also adds a coherent sysmem mask everywhere that
it can. There are a couple of places that do not have a
coherent register field defined yet. These need to eventually
be defined and added.
Lastly, the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code since this function has
no Linux dependencies.
Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
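A sketch of a call site with the new third argument, assuming the
(g, mem, sysmem, sysmem_coh, vidmem) argument order described above;
the register-field helpers are placeholders rather than real *_hw.h
accessors:

    u32 aper = nvgpu_aperture_mask(g, mem,
                    ram_in_sys_mem_ncoh_f(),  /* SYSMEM (non-coherent) */
                    ram_in_sys_mem_coh_f(),   /* SYSMEM_COH            */
                    ram_in_vid_mem_f());      /* VIDMEM                */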
JIRA EVLR-2333
Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The current code does not properly calculate the indices within the
PDE to access the proper entry, and it has a bug in the assignment of
the big page entries. This change fixes the issue by:
(1) Passing a pointer to the level structure and dereferencing the
index offset to the next level.
(2) Changing the format of the address.
(3) Ensuring big pages are only selected if their address is set.
Bug 200364599
Change-Id: I46e32560ee341d8cfc08c077282dcb5549d2a140
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1610562
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Bhosale <dbhosale@nvidia.com>
Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory,
the kernel no longer needs to know the details of the PTE kinds.
Thus, we can remove the kind_gk20a.h header and the code related to
kind table setup, as well as simplify the buffer mapping code a bit.
Bug 1902982
Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move gk20a/platform_gk20a.h to the Linux-specific directory as
common/linux/platform_gk20a.h, since this file contains only
Linux-specific material.
Fix the #includes in all files to pull in this header via the correct
path, and remove the #include entirely where it is no longer needed.
Fix gk20a_init_sim_support() to receive a struct gk20a as its
parameter instead of the Linux-specific struct platform_device.
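The simulator-init signature change, roughly (illustrative prototypes,
not verbatim from the tree):

    /* Before: tied to the Linux device model. */
    int gk20a_init_sim_support(struct platform_device *pdev);

    /* After: only the OS-agnostic driver state is needed. */
    int gk20a_init_sim_support(struct gk20a *g);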
NVGPU-316
Change-Id: I5ec08e776b753af4d39d11c11f6f068be2ac236f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589938
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move much of the remaining generic MM code to a new common location:
common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
mostly consists of init and cleanup code to handle the common MM
data structures like the VIDMEM code, address spaces for various
engines, etc.
A few more in-depth changes were made as well:
1. alloc_inst_block() has been added to the MM HAL. This used to be
defined directly in the gk20a code, but it accesses a register. As a
result, if this register hypothetically changes in the future,
it would need to become a HAL anyway. This patch preempts that
and for now just points all chips' HALs at the gk20a version (see
the sketch after this list).
2. Rename as much as possible: global functions are, for the most
part, prepended with nvgpu (there are a few exceptions which I
have yet to decide what to do with). Functions that are static
are renamed to be as consistent with their functionality as
possible since in some cases function effect and function name
have diverged.
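A rough sketch of what the new HAL hookup from item 1 looks like (the
member and function names are assumptions based on the description):

    /* In each chip's HAL init (gm20b, gp10b, ...): */
    gops->mm.alloc_inst_block = gk20a_alloc_inst_block;

    /* Callers then go through the HAL rather than calling the gk20a
     * function directly: */
    err = g->ops.mm.alloc_inst_block(g, inst_block);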
JIRA NVGPU-30
Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574499
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Convert the work_struct used by the vidmem background clearing to
a thread to make it more cross-platform. The thread waits on a
condition variable to determine when work needs to be done. The
signal comes from the DMA API when it enqueues a new nvgpu_mem that
needs clearing.
Add logic for handling suspend: the CE cannot be accessed while
the GPU is suspended. As such the background thread must be paused
while the GPU is suspended and the CE is not available.
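The wait/signal pattern, sketched here as a plain userspace pthread
analog rather than with the driver's own thread and condition-variable
wrappers:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t work_cond = PTHREAD_COND_INITIALIZER;
    static bool have_work, paused, stop;

    /* Background clearing thread: sleep until work arrives or an exit
     * is requested; do nothing while paused (GPU/CE suspended). */
    static void *clear_thread(void *arg)
    {
            pthread_mutex_lock(&lock);
            while (!stop) {
                    while (!stop && (paused || !have_work))
                            pthread_cond_wait(&work_cond, &lock);
                    if (stop)
                            break;
                    have_work = false;
                    pthread_mutex_unlock(&lock);
                    /* ... clear the pending buffers with the CE ... */
                    pthread_mutex_lock(&lock);
            }
            pthread_mutex_unlock(&lock);
            return NULL;
    }

    /* Called from the DMA path when a vidmem buffer needs clearing. */
    static void enqueue_clear(void)
    {
            pthread_mutex_lock(&lock);
            have_work = true;
            pthread_cond_signal(&work_cond);
            pthread_mutex_unlock(&lock);
    }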
Several other changes were also made:
o Move the code that enqueues an nvgpu_mem from the DMA API
code to a function in the VIDMEM code.
o Move nvgpu_vidmem_get_pending_alloc() to the Linux-specific
code as this function is only used there. It's a trivial
function that QNX can easily implement as well.
o Remove the was_empty logic from the enqueue. Now just always
signal the condition variable when a new nvgpu_mem comes in.
o Move CE suspend to after MM suspend.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: Ie9286ae5a127c3fced86dfb9794e7d81eab0491c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574498
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Refactor the last nvgpu_vm functions out of the mm_gk20a.c code. This
removes some usages of dma_buf from the mm_gk20a.c code, too, which
helps make mm_gk20a.c less Linux-specific.
Also delete some Linux-specific header includes that are no longer
necessary in gk20a/mm_gk20a.c. The mm_gk20a.c code is now quite close
to being Linux-free.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I72b370bd85a7b029768b0fb4827d6abba42007c3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566629
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Move most of the dma_buf usage present in the mm_gk20a.c code
out to Linux-specific code and some common/mm code. There are
two primary groups of code:
1. dma_buf priv field code (for holding comptag data)
2. Comptag usage that relies on dma_buf pointers
For (1) the dma_buf code was simply moved to common/linux/dmabuf.c
since most of this code is clearly Linux-specific.
The comptag code was a bit more complicated since there are two
parts to it. First, there's the code that manages the comptag
memory; this is essentially a simple allocator and was moved to
common/mm/comptags.c since it can be shared across all chips.
The second set of code was moved to common/linux/comptags.c since
it is the interface between dma_bufs and the comptag memory.
Two other fixes were done as well:
- Add struct gk20a to the comptag allocator init so that
the proper nvgpu_vzalloc() function could be used.
- Add necessary includes to common/linux/vm_priv.h.
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: I96c57f2763e5ebe18a2f2ee4b33e0e1a2597848c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566628
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove an SGL reference from the mm_gk20a.c code. This code is
common code and as such all Linuxisms need to be removed. It just
so happens that this particular function is only used by the
CDE code, which is only present in Linux. So simply move this
function over to the CDE code.
JIRA NVGPU-30
JIRA NVGPU-225
JIRA NVGPU-226
Change-Id: Ifb0cb427c742c6d9cada382ace4a52f52474c379
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1576436
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Split VIDMEM support into its own code files, organized as follows:
common/mm/vidmem.c - Base vidmem support
common/linux/vidmem.c - Linux specific user-space interaction
include/nvgpu/vidmem.h - Vidmem API definitions
Also use the config option to enable/disable VIDMEM support in the
Makefile and remove as many CONFIG_GK20A_VIDMEM preprocessor checks
as possible from the source code.
Lastly, update a while-loop that iterated over an SGT to use the new
for_each construct for iterating over SGTs (see the sketch below).
Currently this organization is not perfectly adhered to. More patches
will fix that.
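The loop conversion, roughly; the exact macro and accessor names have
shifted between driver versions, so treat these identifiers as
illustrative:

    void *sgl;

    /* Before:
     *   while (sgl != NULL) { ...; sgl = <next chunk>; }
     */
    nvgpu_sgt_for_each_sgl(sgl, sgt) {
            u64 phys = nvgpu_sgt_get_phys(sgt, sgl);
            u64 len  = nvgpu_sgt_get_length(sgt, sgl);

            /* ... per-chunk work ... */
    }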
JIRA NVGPU-30
JIRA NVGPU-138
Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename get_physical_addr_bits and related functions to something that
more clearly conveys what they are doing. The basic idea of these
functions is to translate from a physical GPU address to an IOMMU GPU
address. To do that a particular bit (that varies from chip to chip)
is added to the physical address.
JIRA NVGPU-68
Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542966
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a function to do address translation for IOMMU-capable GPUs.
When an iGPU is behind an IOMMU it can pick whether to use that
IOMMU for translation by adding a bit to physical addresses. This
function takes care of that.
However, this required an abstracted nvgpu_iommuable() API to
check whether a GPU is behind an IOMMU. This patch adds that API
for Linux.
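A minimal sketch of the pattern these helpers implement;
nvgpu_iommuable() is named above, but the translation function's name,
the get_iommu_bit() HAL, and the exact signatures are assumptions:

    u64 nvgpu_mem_iommu_translate(struct gk20a *g, u64 phys)
    {
            /* Only GPUs sitting behind an IOMMU get the extra bit. */
            if (!nvgpu_iommuable(g))
                    return phys;

            return phys | 1ULL << g->ops.mm.get_iommu_bit(g);
    }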
JIRA NVGPU-68
Change-Id: I489d14475167c019c294407372395df78c8b5feb
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542965
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
This change solves two problems:
(*) the possibility of a crash due to interrupting the GPU
initialization following a bind
(*) an IOVA memory leak that could prevent the GPU from binding after
about 200 bind/unbind cycles
A detailed list of fixes:
- check that the arbiter is initialized before freeing it.
- do not re-enable interrupts when MSI is enabled on unbind.
- free the semaphore sea on unbind.
- ensure we don't double-load the VBIOS.
- check the return value of nvgpu_mutex_init for semaphores.
- add corresponding nvgpu_mutex_destroy calls (see the sketch below).
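A sketch of the init-error-check plus matching-destroy pattern the
last two items refer to; the lock and structure names are illustrative:

    err = nvgpu_mutex_init(&sema_sea->sea_lock);
    if (err != 0)
            return err;

    /* ... */

    /* On the unbind/teardown path: */
    nvgpu_mutex_destroy(&sema_sea->sea_lock);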
bug 1816516
Change-Id: Ia8af73019e0e1183998855d55bb3eea09672a8b7
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1465302
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: David Jarrett <djarrett@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1563019
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The basic nvgpu_mem_sgl implementation provides support
for OS-specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.
This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any scatter-gather
list implementation. This labels every struct with a set of
ops which can be used to interact with the attached
scatter-gather list.
Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
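A rough sketch of the ops-header idea; the struct layout is an
approximation of the API described above, not a verbatim copy:

    struct nvgpu_mem_sgt_ops {
            void *(*sgl_next)(void *sgl);
            u64   (*sgl_phys)(void *sgl);
            u64   (*sgl_dma)(void *sgl);
            u64   (*sgl_length)(void *sgl);
            void  (*sgt_free)(struct gk20a *g, struct nvgpu_mem_sgt *sgt);
    };

    struct nvgpu_mem_sgt {
            const struct nvgpu_mem_sgt_ops *ops;
            void *sgl;   /* head of the attached OS-specific list */
    };

Each backend (Linux sg_table, the vidmem allocator, etc.) supplies its
own ops, and common code only ever calls through the function pointers.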
Jira NVGPU-186
Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scatter-gather tables and scatter-gather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and are also overloaded to represent
VIDMEM allocs.
The notion of a scatter-gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has an MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense
for the GPU driver to use some sort of scatter-gather concept
to maximize memory usage efficiency.
To that end this patch keeps the notion of a scatter-gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks, each representing a chunk of memory. To map or
use a DMA allocation SW must iterate over each block in the
SGL.
This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is to simply divest the core MM code of
any last Linuxisms. Speed and efficiency come next.
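The common SGL described above, sketched as a singly linked list of
chunks; field and helper names are illustrative:

    struct nvgpu_mem_sgl {
            struct nvgpu_mem_sgl *next;   /* NULL terminates the list */
            u64 phys;                     /* physical address         */
            u64 dma;                      /* DMA/IOMMU address        */
            u64 length;                   /* chunk length in bytes    */
    };

    /* Mapping code walks each chunk in turn: */
    for (sgl = head; sgl != NULL; sgl = sgl->next) {
            /* program PTEs for [sgl->phys, sgl->phys + sgl->length) */
    }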
Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Allow userspace to directly control the PTE kind for mappings by
supplying NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL for MAP_BUFFER_EX.
In particular, in this mode, userspace tells the kernel
whether the kind is compressible, and if so, what the
incompressible fallback kind is. By supplying only the compressible
kind, userspace can require that the map kind not be demoted to the
incompressible fallback kind in case of comptag allocation failure.
Also add a GPU characteristics flag
NVGPU_GPU_FLAGS_SUPPORT_MAP_DIRECT_KIND_CTRL to signal whether direct
kind control is supported.
Fix the indentation of the nvgpu_as_map_buffer_ex_args header comment.
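From the userspace side the flow is roughly as follows; the field
names and the use of -1 as the 'no fallback' sentinel are assumptions,
not verbatim from the UAPI header:

    struct nvgpu_as_map_buffer_ex_args args = {0};

    args.flags        = NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL;
    args.compr_kind   = my_compressible_kind;
    args.incompr_kind = -1;   /* no fallback: fail rather than demote */

    err = ioctl(as_fd, NVGPU_AS_IOCTL_MAP_BUFFER_EX, &args);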
Bug 1705731
Change-Id: I317ab474ae53b78eb8fdd31bd6bca0541fcba9a4
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1543462
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Explicitly page-align DMA memory allocations. Non-page-aligned
DMA allocs can lead to nvgpu_mem structs that have a size that's
not page aligned. Those allocs, in some cases, can cause vGPU
maps to return an error.
More generally, DMA allocs in Linux are always page aligned in
both size and address, so we might as well page-align the alloc
size to make code elsewhere in the driver simpler.
To implement this an aligned_size field has been added to struct
nvgpu_mem. This field holds the real page-aligned size of the
allocation. The original size is still saved in the size field.
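The alignment itself, as a sketch; PAGE_ALIGN() is the standard Linux
helper, and the nvgpu_mem field names follow the commit text:

    /* In the Linux DMA wrapper, before calling the DMA API: */
    mem->size         = size;              /* what the caller asked for */
    mem->aligned_size = PAGE_ALIGN(size);  /* what is actually allocated */

    cpuva = dma_alloc_coherent(d, mem->aligned_size, &iova, GFP_KERNEL);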
Change-Id: Ie08cfc4f39d5f97db84a54b8e314ad1fa53b72be
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1547902
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Remove the kernel feature to map compbit backing store areas to GPU VA.
The feature is not used, and the relevant user space code is getting
removed, too.
Change-Id: I94f8bb9145da872694fdc5d3eb3c1365b2f47945
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1547898
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Remove the mm.get_iova_addr() HAL and replace it with a new HAL
called mm.gpu_phys_addr(). This new HAL provides the real physical
address that should be passed to the GPU, given a physical address
obtained from a scatter list. It also provides a mechanism by
which the HAL code can add extra bits to a GPU physical address
based on the attributes passed in. This is necessary during GMMU
page table programming.
Also remove the flags argument from the various address functions.
This flag was used to add an IO coherence bit to the GPU physical
address, which is not supported.
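The new HAL, roughly as it is used from the GMMU programming path;
the exact prototype is an assumption based on the description above:

    /* HAL: translate a scatter-list physical address into what the
     * GPU should be programmed with, OR-ing in any attribute-derived
     * bits. */
    u64 (*gpu_phys_addr)(struct gk20a *g,
                         struct nvgpu_gmmu_attrs *attrs, u64 phys);

    /* Call site while writing PDEs/PTEs: */
    pte_addr = g->ops.mm.gpu_phys_addr(g, attrs, phys);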
JIRA NVGPU-30
Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530864
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Ensure that all debug prints are consistent from chip to chip
and function to function. The following maps letters in the
debug print to their meaning:
C Mapping is cacheable
v Mapping is volatile
S Mapping is sparse
P Mapping is private (VPR/WPR)
c Mapping is coherent
V Mapping is valid
JIRA NVGPU-30
Change-Id: Ia890af88677c3e6d3fdd8c4fe266158c35b8afcd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1514903
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Tested-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
In some cases page directories require less than a full page of memory.
For example, on Pascal, the final PD level for large pages is only 256 bytes;
thus 16 PDs can fit in a single page. To allocate an entire page for each of
these 256 B PDs is extremely wasteful. This patch aims to alleviate the
wasted DMA memory from having small PDs in a full page by packing multiple
small PDs into a single page.
The packing is implemented as a slab allocator - each page is a slab and
from each page multiple PD instances can be allocated. Several modifications
to the nvgpu_gmmu_pd struct also needed to be made to support this. The
nvgpu_mem is now a pointer and there's an explicit offset into the nvgpu_mem
struct so that each nvgpu_gmmu_pd knows what portion of the memory it's
using.
The nvgpu_pde_phys_addr() function and the pd_write() functions also
require some changes since the PD is no longer always situated at the
start of the nvgpu_mem.
Initialization and cleanup of the page tables for each VM were
slightly modified to work through the new pd_cache implementation.
Some PDs (i.e. the PDB), despite not being a full page, still require
a full page for alignment purposes (a HW requirement). Thus a direct
allocation method for PDs is still provided. This is also used when a
PD that could in principle be cached is larger than a page.
Lastly a new debug flag was added for the pd_cache code.
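A sketch of the struct change and the resulting address math; member
and helper names are based on the description above and may not match
the code exactly:

    struct nvgpu_gmmu_pd {
            struct nvgpu_mem *mem;   /* shared page owned by pd_cache   */
            u32 mem_offs;            /* this PD's offset into that page */
            /* ... */
    };

    /* The PD's address is the backing page's address plus the PD's
     * offset within it. */
    u64 nvgpu_pde_phys_addr(struct gk20a *g, struct nvgpu_gmmu_pd *pd)
    {
            return nvgpu_mem_get_addr(g, pd->mem) + pd->mem_offs;
    }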
JIRA NVGPU-30
Change-Id: I64c8037fc356783c1ef203cc143c4d71bbd5d77c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1506610
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit