MISRA rule 12.2 states that the right-hand operand of a shift
operator shall lie in the range zero to one less than the width
in bits of the essential type of the left-hand operand. This
patch fixes these violations by casting the right-hand operands
to an appropriate type or by using the relevant BITxx() macros.
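An illustrative before/after (variable names here are
hypothetical, not taken from the diff):

  /* Before: 'i' is signed, so the shift count has the wrong
   * essential type and its range is not guaranteed. */
  val = 1U << i;

  /* After: cast the count to an unsigned type, or use the
   * BIT32() helper. */
  val = BIT32((u32)i);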
JIRA NVGPU-666
Change-Id: I57b6081e9bd98c45ca9f7aa5f35e1d2d66ed0134
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945655
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the use of a non-boolean variable
as a boolean in the controlling expression of an if statement or
an iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
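A representative shape of such a fix (the names are hypothetical):

  /* Before: pointer used directly as a boolean. */
  if (!ch) {
          return -EINVAL;
  }

  /* After: the controlling expression is essentially boolean. */
  if (ch == NULL) {
          return -EINVAL;
  }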
JIRA NVGPU-1022
Change-Id: I61a2d24830428ffc2655bd9c45bb5403c7f22c09
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update the PD cache code to handle 64KB pages. To do this the
number of partial/full lists is expanded to cover 64K pages.
Currently we only explicitly support 4K and 64K page sizes;
other page sizes (16K, for example) will fail compilation during
preprocessing.
This change also cleans up the definitions of some of the
internal structs: they have been moved into pd_cache.c since they
are not used anywhere else.
This allows the following functions to be removed from the global
context:
__nvgpu_pd_cache_alloc_direct()
__nvgpu_pd_cache_free_direct()
They have been replaced by calls to nvgpu_pd_{alloc,free}().
The nvgpu_pd_mem_entry alloc_map also had to be expanded to a
real bitmap: 32 or 64 bits is not sufficient for packing 256-byte
PDs into a 64K page (there are 256 PDs per nvgpu_pd_mem_entry in
that case). To avoid doing too many find_first_zero operations on
the bitmap, an 'allocs' field was also added which tracks how
many allocations have been made. We can use this, instead of
comparing a mask against the bitmap, to determine whether an
nvgpu_pd_mem_entry is full.
Note: there's still a limitation in the TLB invalidate code: it
simply assumes an nvgpu_mem maps one-to-one to a PDB. This means
we can't invalidate a PDB allocated at an offset greater than 0
in an nvgpu_pd_mem_entry, which in turn means we must always use
a full page for a context's PDB.
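A rough sketch of the bookkeeping described above (field names
follow this message, but the exact types and sizes are
assumptions rather than the literal code):

  /* A 64K page packed with 256 B PDs gives 256 slots per entry. */
  #define PD_SLOTS_PER_64K_PAGE  (64U * 1024U / 256U)

  struct nvgpu_pd_mem_entry_sketch {
          unsigned long alloc_map[PD_SLOTS_PER_64K_PAGE /
                                  (8U * sizeof(unsigned long))];
          u32 allocs;  /* number of PDs currently allocated */
  };

  /* Fullness check: compare the running count against the slot
   * count instead of masking or scanning the whole bitmap. */
  static bool pd_entry_is_full(struct nvgpu_pd_mem_entry_sketch *e,
                               u32 num_slots)
  {
          return e->allocs == num_slots;
  }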
Bug 1977822
Change-Id: I6a7a3a95b7c902bc6487cba05fde58fbc4a25da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1718755
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the integer variables err, ret, and status
are used as booleans in the controlling expression of if and loop
statements.
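A representative before/after (the actual call sites are in the
diff):

  /* Before */
  if (err) {
          return err;
  }

  /* After: compare against zero explicitly. */
  if (err != 0) {
          return err;
  }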
JIRA NVGPU-1019
Change-Id: I8c9ad786a741b78293d0ebc4e1c33d4d0fc8f9b4
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921260
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
Handle the 21.2 fixes for nvgpu_mem.c and mm.c; this deletes the
'__' prefixes and slightly renames the __nvgpu_aperture_mask()
function since there's a coherent version and a general version.
Change-Id: Iee871ad90db3f2622f9099bd9992eb994e0fbf34
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813623
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changed the enum gmmu_pgsz_gk20a into macros and updated all of
its uses.
The enum gmmu_pgsz_gk20a was being used in for loops, where it
was compared with an integer. This violates MISRA rule 10.4,
which only allows arithmetic operations on operands of the same
essential type category. Changing this enum into macros fixes the
violation.
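An illustrative sketch of the shape of this change (the macro
names and values here are assumptions, not copied from the
patch):

  /* Unsigned macros replace the enum, so the loop index and the
   * bound share the same essential type category. */
  #define GMMU_PAGE_SIZE_SMALL   0U
  #define GMMU_PAGE_SIZE_BIG     1U
  #define GMMU_PAGE_SIZE_KERNEL  2U
  #define GMMU_NR_PAGE_SIZES     3U

  static void init_page_sizes(u32 page_sizes[GMMU_NR_PAGE_SIZES])
  {
          u32 i;

          for (i = GMMU_PAGE_SIZE_SMALL;
               i < GMMU_NR_PAGE_SIZES; i++) {
                  page_sizes[i] = 0U;
          }
  }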
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 15.6 requires that all loop bodies be enclosed in
braces, including single-statement loop bodies. This patch fixes
the MISRA violations caused by single-statement loop bodies
without braces by adding the missing braces.
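Representative before/after (the loop shown is hypothetical):

  /* Before */
  for (i = 0U; i < count; i++)
          entries[i] = NULL;

  /* After */
  for (i = 0U; i < count; i++) {
          entries[i] = NULL;
  }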
JIRA NVGPU-989
Change-Id: If79f56f92b94d0114477b66a6f654ac16ee8ea27
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1791194
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Switch all logging to nvgpu_log*(). The gk20a_dbg* macros are
intentionally left in place because they are still used from
other repositories.
Because the new functions do not work without a pointer to struct
gk20a, and piping it just for logging is excessive, some log messages
are deleted.
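A typical conversion looks like this (the debug flag and message
shown are illustrative):

  /* Before */
  gk20a_dbg(gpu_dbg_map, "gv: 0x%llx", vaddr);

  /* After: the new helper takes the gk20a pointer explicitly. */
  nvgpu_log(g, gpu_dbg_map, "gv: 0x%llx", vaddr);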
Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704148
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When generating the aperture field for the PDE being programmed,
we must use the next PD, not the current PD. This is important
for cases on the dGPU where VIDMEM runs out.
In such cases the page table may reside in both VIDMEM and
SYSMEM. Thus, if a PDE points to a PD in a different type of
memory (VIDMEM -> SYSMEM or SYSMEM -> VIDMEM), the aperture will
not be programmed correctly if the code uses the current PD to
pick the next PD's aperture.
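In rough pseudo-form (get_aperture() is a stand-in helper, not an
actual nvgpu function):

  /* Before (buggy): aperture derived from the PD being written. */
  aperture = get_aperture(g, pd->mem);

  /* After: aperture derived from the PD the new entry points at,
   * which may live in a different kind of memory. */
  aperture = get_aperture(g, next_pd->mem);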
Bug 2082475
Change-Id: Ic1a8d1e2c2237712039dc298b97095d3bbc6c844
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676831
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.
Bug 2070609
Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
When using a coherent DMA API, we must make sure to program any
aperture fields with the coherent aperture setting. To do this
the nvgpu_aperture_mask() function was modified to take a third
aperture mask argument, a coherent setting, so that code can use
this function to generate coherent aperture settings.
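A sketch of the resulting call shape (the register field helpers
and the argument order are approximations, not the exact
generated-header names):

  pde_w0 |= nvgpu_aperture_mask(g, pd->mem,
                  gmmu_pde_aperture_sys_mem_ncoh_f(),
                  gmmu_pde_aperture_sys_mem_coh_f(),
                  gmmu_pde_aperture_video_memory_f());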
The aperture choice is somewhat tricky: the default version of
this function uses the state of the DMA API to determine which
aperture to use for SYSMEM: either coherent or non-coherent
internally. Thus a kernel user need only specify the normal
nvgpu_mem struct and the correct mask will be chosen. Because
many nvgpu_mem structs are not created directly from the DMA API
wrapper, it's easier to translate SYSMEM to SYSMEM_COH after
creation.
However, the GMMU mapping code will encounter buffers from
userspace with different coherency attributes than the DMA API.
Thus __nvgpu_aperture_mask() respects the aperture setting passed
in, regardless of the DMA API state. This aperture setting is
pulled from NVGPU_VM_MAP_IO_COHERENT, since this flag is either
passed in from userspace or set by the kernel when using coherent
DMA. The aperture field in attrs is upgraded to coherent if this
flag is set.
This change also adds a coherent sysmem mask everywhere it can.
There are a couple of places that do not yet have a coherent
register field defined; these need to be defined and added
eventually.
Lastly, the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code, since this function has no
Linux dependencies.
Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
JIRA EVLR-2333
Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-MMU_CTRL_USE_PDB_BIG_PAGE_SIZE is set to TRUE and hence
RAMIN_BIG_PAGE_SIZE should be set to 64KB, i.e. value 1. By
default it is set to 128KB, i.e. value 0.
-This change also fixes an issue where the perfbuffer_enable and
nvgpu_init_hwpm functions pass 0 as the big page size while
initializing inst_block, due to which ramin_big_page_size does
not get updated to 64KB and remains set to the unsupported 128KB
value.
-Volta supports 64KB for big pages. Selecting 128KB for big pages
results in an UNBOUND_INSTANCE fault.
Bug 200327596
Change-Id: Ie304e4e5ff7bedaead27e9380d64c59013dd64ca
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639540
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The current code does not properly calculate the indexes within
the PDE to access the proper entry, and it has a bug in the
assignment of the big page entries. This change fixes the issue
by:
(1) Passing a pointer to the level structure and dereferencing the
index offset to the next level.
(2) Changing the format of the address.
(3) Ensuring big pages are only selected if their address is set.
Bug 200364599
Change-Id: I46e32560ee341d8cfc08c077282dcb5549d2a140
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1610562
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Bhosale <dbhosale@nvidia.com>
Move much of the remaining generic MM code to a new common location:
common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
mostly consists of init and cleanup code to handle the common MM
data structures like the VIDMEM code, address spaces for various
engines, etc.
A few more in-depth changes were made as well.
1. alloc_inst_block() has been added to the MM HAL. This used to
be defined directly in the gk20a code, but it used a register. As
a result, if this register hypothetically changes in the future,
it would need to become a HAL anyway. This patch preempts that
and for now just defines all HALs to use the gk20a version.
2. Rename as much as possible: global functions are, for the most
part, prefixed with nvgpu (there are a few exceptions that I have
yet to decide what to do with). Static functions are renamed to
match their functionality as closely as possible, since in some
cases function behavior and function name have diverged.
JIRA NVGPU-30
Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574499
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename get_physical_addr_bits and related functions to something
that more clearly conveys what they are doing. The basic idea of
these functions is to translate from a physical GPU address to an
IOMMU GPU address. To do that, a particular bit (which varies
from chip to chip) is added to the physical address.
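Conceptually the translation just sets a chip-specific bit
(sketch; the helper name and parameters are hypothetical):

  static u64 phys_to_iommu_gpu_addr(u64 phys, u32 iommu_bit)
  {
          /* The GPU then sees the SMMU-translated address. */
          return phys | BIT64(iommu_bit);
  }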
JIRA NVGPU-68
Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542966
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reorganize HAL initialization to remove inheritance and construct
the gpu_ops struct at compile time. This patch only covers the
mm sub-module of the gpu_ops struct.
Perform HAL function assignments in hal_gxxxx.c through the
population of a chip-specific copy of gpu_ops.
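The overall shape looks roughly like this (the chip name and the
HAL members shown are placeholders):

  static const struct gpu_ops gxxxx_ops = {
          .mm = {
                  .init_mm_setup_hw  = gxxxx_init_mm_setup_hw,
                  .is_bar1_supported = gxxxx_mm_is_bar1_supported,
          },
  };

  void gxxxx_init_hal(struct gk20a *g)
  {
          /* Static, compile-time table copied in; no runtime
           * inheritance from another chip's gpu_ops. */
          g->ops.mm = gxxxx_ops.mm;
  }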
Jira NVGPU-74
Change-Id: Ieb87a62f047510e51c52e6563d8e3fd5a65b5f28
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1537753
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
If the debug message printed to the Linux kernel log is too long,
the end of the message gets truncated. This causes loss of
valuable PTE information.
Change-Id: I05e890819ec176c6aa909b04352e8a40f1936ecf
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1537077
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reorganize HAL initialization to remove inheritance and construct
the gpu_ops struct at compile time. This patch only covers the
mm sub-module of the gpu_ops struct.
Perform HAL function assignments in hal_gxxxx.c through the
population of a chip-specific copy of gpu_ops.
Jira NVGPU-74
Change-Id: I289284e6e528fc7951c959c8765ccf9349eec33b
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1533351
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the mm.get_iova_addr() HAL and replace it with a new HAL
called mm.gpu_phys_addr(). This new HAL provides the real phys
address that should be passed to the GPU from a physical address
obtained from a scatter list. It also provides a mechanism by
which the HAL code can add extra bits to a GPU physical address
based on the attributes passed in. This is necessary during GMMU
page table programming.
Also remove the flags argument from the various address
functions. This flag was used for adding an IO coherence bit to
the GPU physical address, which is not supported.
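The new hook's approximate shape, as a member of gpu_ops.mm (the
exact prototype may differ):

  u64 (*gpu_phys_addr)(struct gk20a *g,
                       struct nvgpu_gmmu_attrs *attrs, u64 phys);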
JIRA NVGPU-30
Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530864
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Ensure that all debug prints are consistent from chip to chip
and function to function. The following maps letters in the
debug print to their meaning:
C Mapping is cacheable
v Mapping is volatile
S Mapping is sparse
P Mapping is private (VPR/WPR)
c Mapping is coherent
V Mapping is valid
JIRA NVGPU-30
Change-Id: Ia890af88677c3e6d3fdd8c4fe266158c35b8afcd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1514903
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Tested-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Use sysmem_coherent aperture if the buffer mappings are requested
to be IO coherent. Use sysmem_noncoherent aperture otherwise. This
is implemented by adding a new coherent field to the GMMU attrs
struct.
Jira GPUT19X-17
Bug 1651331
Bug 200283998
Change-Id: I5cfb71b5913d4db50ebf10331b19f5a4216456bf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master/r/1514438
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
We do not support coherence for gp10b, hence clean up the related
code.
Remove the API gp10b_mm_phys_addr_translate() and use the
physical address directly instead.
Also, since gp10b_mm_iova_addr() now becomes equivalent to
gk20a_mm_iova_addr(), remove gp10b_mm_iova_addr() altogether. We
already set the get_iova_addr() pointer to gk20a_mm_iova_addr()
first anyway, so we continue using the gk20a version of the API.
Jira GPUT19X-17
Bug 1651331
Bug 200283998
Change-Id: Ic1ca198fcde7ddbcd23516bff8a2e65c9eae32b1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master/r/1512598
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
In some cases page directories require less than a full page of memory.
For example, on Pascal, the final PD level for large pages is only 256 bytes;
thus 16 PDs can fit in a single page. To allocate an entire page for each of
these 256 B PDs is extremely wasteful. This patch aims to alleviate the
wasted DMA memory from having small PDs in a full page by packing multiple
small PDs into a single page.
The packing is implemented as a slab allocator - each page is a slab and
from each page multiple PD instances can be allocated. Several modifications
to the nvgpu_gmmu_pd struct also needed to be made to support this. The
nvgpu_mem is now a pointer and there's an explicit offset into the nvgpu_mem
struct so that each nvgpu_gmmu_pd knows what portion of the memory it's
using.
The nvgpu_pde_phys_addr() function and the pd_write() functions
also require some changes, since the PD is no longer always
situated at the start of the nvgpu_mem.
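A sketch of the reworked PD handle described above (field names
are approximate):

  struct nvgpu_gmmu_pd_sketch {
          struct nvgpu_mem *mem;  /* shared page backing this PD    */
          u32 mem_offs;           /* byte offset of this PD in *mem */
          bool cached;            /* true if carved from a PD slab  */
  };

  /* The PD's physical address is now the base address of *mem plus
   * mem_offs, which is what nvgpu_pde_phys_addr() must account
   * for. */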
Initialization and cleanup of the page tables for each VM were
slightly modified to work through the new pd_cache
implementation. Some PDs (i.e. the PDB), despite not being a full
page, still require a full page for
alignment purposes (HW requirements). Thus a direct allocation method for
PDs is still provided. This is also used when a PD that could in principle
be cached is greater than a page in size.
Lastly a new debug flag was added for the pd_cache code.
JIRA NVGPU-30
Change-Id: I64c8037fc356783c1ef203cc143c4d71bbd5d77c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1506610
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Update the high level mapping logic. Instead of iterating over
the GPU VA, iterate over the scatter-gather table chunks. As a
result, each GMMU page table update call is simplified
dramatically.
This also modifies the chip level code to no longer require an
SGL as an argument. Each call to the chip level code is
guaranteed to be contiguous, so it only has to worry about making
a mapping from virt -> phys.
This removes the dependency on Linux that the chip code currently
has. With this patch the core GMMU code still uses the Linux SGL,
but the logic is highly transferable to a different,
nvgpu-specific scatter-gather list format in the near future.
The last major update is to push most of the page table attribute
arguments into a struct. That struct is passed down through the
various mapping levels. This makes the function calls simpler and
easier to follow.
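A representative (not exhaustive) sketch of such an attribute
struct:

  struct nvgpu_gmmu_attrs_sketch {
          u32  pgsz;       /* page size index             */
          u32  kind_v;     /* kind for compression/tiling */
          u64  ctag;       /* comptag offset, if any      */
          bool cacheable;
          bool read_only;
          bool sparse;
          bool priv;
          bool valid;
          enum nvgpu_aperture aperture;
  };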
JIRA NVGPU-30
Change-Id: Ibb6b11755f99818fe642622ca0bd4cbed054f602
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1484104
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Make the default big page size query a HAL op instead of a
per-platform constant. This allows querying the default big page
size without accessing the Linux-specific gk20a_platform
structure.
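Illustrative usage through a hypothetical caller (the op name is
an assumption):

  static u32 query_default_big_page_size(struct gk20a *g)
  {
          return g->ops.mm.get_default_big_page_size();
  }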
JIRA NVGPU-38
Change-Id: Ibfbd1319764fdae5fdb06700fb64d23f6f3dd01a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master/r/1507928
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Support only VM pointers and ref-counting for maintaining VMs. This
dramatically reduces the complexity of the APIs, avoids the API
abuse that has existed, and ensures that future VM usage is
consistent with current usage.
Also remove the combined VM free/instance block deletion. Any
place where this was done is now replaced with an explicit free
of the instance block and an nvgpu_vm_put().
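Call sites that previously relied on the combined teardown now
look roughly like this (the instance block free helper named here
is an assumption):

  nvgpu_free_inst_block(g, &ch->inst_block); /* explicit free      */
  nvgpu_vm_put(vm);                          /* drop the reference */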
JIRA NVGPU-12
JIRA NVGPU-30
Change-Id: Ib73e8d574ecc9abf6dad0b40a2c5795d6396cc8c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1480227
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>