Commit Graph

20 Commits

Author SHA1 Message Date
Alex Waterman
3f7b312e89 gpu: nvgpu: Adjust error condition checking in pd_alloc code
The following error checking code in nvgpu_pd_cache_alloc() cannot
hit the greater-than-PAGE_SIZE check:

  if ((bytes & (bytes - 1U)) != 0U ||
      (bytes >= PAGE_SIZE ||
       bytes < NVGPU_PD_CACHE_MIN)) {
          /* ... */

This is because nvgpu_pd_cache_alloc() is only called when bytes is
less than PAGE_SIZE, so this case could only ever be hit when there's
a bug.

So change the error condition to check only that bytes is a power of
2 and is at least NVGPU_PD_CACHE_MIN. The greater-than-PAGE_SIZE
check has been turned into an assert since it should never trigger in
practice unless there's a bug.
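
A minimal sketch of the adjusted check (the assert helper shown here
is illustrative; the actual macro used in the patch may differ):

  if ((bytes & (bytes - 1U)) != 0U ||
      bytes < NVGPU_PD_CACHE_MIN) {
          /* Not a power of 2, or too small: a genuine error. */
          return -EINVAL;
  }

  /* A PD of a page or more should have gone to the direct alloc path. */
  nvgpu_assert(bytes < PAGE_SIZE);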

JIRA NVGPU-1323

Change-Id: I97cd701aa4f045345606b90c97a8478b4a06e189
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946731
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-14 12:43:50 -08:00
Amurthyreddy
4de2add5e9 gpu: nvgpu: MISRA 14.4 boolean fixes
MISRA rule 14.4 doesn't allow the use of a non-boolean variable as a
boolean in the controlling expression of an if statement or an
iteration statement.

Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
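
The typical shape of these fixes, as a hypothetical before/after (the
names are illustrative only):

  /* Before: a pointer used directly as the controlling expression. */
  if (pd->mem) {
          nvgpu_pd_free(vm, pd);
  }

  /* After: an explicit comparison yields a boolean. */
  if (pd->mem != NULL) {
          nvgpu_pd_free(vm, pd);
  }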

JIRA NVGPU-1022

Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-09 13:28:19 -08:00
Alex Waterman
86319055f4 gpu: nvgpu: Use assert in pd_cache instead of WARN()
This makes unit testing easier because it hides a difficult-to-test
branch under POSIX. This branch has no functional effect on the rest
of the pd_cache code, so moving it into the POSIX header avoids the
headache of testing it in the pd_cache code.
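
Conceptually the change looks like this sketch (the condition, message,
and assert helper are illustrative, not the exact committed code):

  /* Before: a should-never-happen case reported with WARN(). */
  if (pentry == NULL) {
          WARN(1, "Attempting to free non-existent pd");
          return;
  }

  /* After: expressed as an assert the POSIX headers can stub out. */
  nvgpu_assert(pentry != NULL);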

JIRA NVGPU-1323

Change-Id: Id5ca2627c83cf6dbbe68dd8ad7bfe9def71761cc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945145
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-08 21:44:31 -08:00
Alex Waterman
40b50c059a gpu: nvgpu: Set pd_cache to NULL after it's freed
Clear the pointer to the now non-existent pd_cache after the
pd_cache is freed. This is good practice and makes it easier for the
unit tests to verify that this function has accomplished its intended
purpose.
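
The shape of the change, as a sketch (function and field names are
inferred from context and may not match the patch exactly):

  void nvgpu_pd_cache_fini(struct gk20a *g)
  {
          /* ... free the pd_cache lists and the cache itself ... */
          nvgpu_kfree(g, g->mm.pd_cache);

          /* Make it obvious (and unit-testable) that the cache is gone. */
          g->mm.pd_cache = NULL;
  }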

JIRA NVGPU-1323

Change-Id: I1e1c20344a385bc96b293f2007485e7f4d99f947
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1941535
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Scott Long <scottl@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-04 21:25:38 -08:00
Alex Waterman
aee5511bc8 gpu: nvgpu: Update pd_cache to handle 64K pages
Update the PD cache code to handle 64KB pages. To do this, the
number of partial/full lists is expanded for the 64K page case.
Currently we only explicitly support 4K and 64K page sizes; other
page sizes (16K, for example) will fail compilation during
preprocessing.
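
The explicit page-size support can be enforced at preprocessing time
with something along these lines (the macro name is an assumption):

  #if PAGE_SIZE == 4096
  #define NVGPU_PD_CACHE_COUNT        4U
  #elif PAGE_SIZE == 65536
  #define NVGPU_PD_CACHE_COUNT        8U
  #else
  #error "Unsupported page size."
  #endif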

This change also cleans up the definitions for some of the
internal structs. They have been moved into pd_cache.c since
they are not used outside of pd_cache.c.

This allows the following functions to be removed from the global
context:

  __nvgpu_pd_cache_alloc_direct()
  __nvgpu_pd_cache_free_direct()

They have been replaced by calls to nvgpu_pd_{alloc,free}().

The nvgpu_pd_mem_entry alloc_map also had to be expanded to a real
bitmap: 32 or 64 bits are not sufficient for packing 256-byte PDs
into a 64K page (there are 256 PDs per nvgpu_pd_mem_entry in that
case). To avoid doing too many find_first_zero operations on the
bitmap, an 'allocs' field was also added which tracks how many
allocs have been done. This can be used instead of comparing a mask
against the bitmap to determine whether an nvgpu_pd_mem_entry is
full.
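
As a sketch, the expanded entry might look roughly like this (layout
inferred from the description above; the real struct may differ):

  struct nvgpu_pd_mem_entry {
          struct nvgpu_mem mem;

          /* Size of the PDs allocated out of this page. */
          u32 pd_size;

          /* Up to 64K / 256 B = 256 PDs per entry, hence a real bitmap. */
          DECLARE_BITMAP(alloc_map, PAGE_SIZE / NVGPU_PD_CACHE_MIN);

          /* Count of live allocs; the entry is full when this reaches
           * the number of PDs that fit in the page. */
          u32 allocs;

          /* List/tree linkage for the partial and full lists omitted. */
  };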

Note: there's still a limitation with the TLB invalidate code: it
simply assumes an nvgpu_mem maps 1:1 to a PDB. This means we can't
invalidate a PDB allocated at an offset greater than 0 in an
nvgpu_pd_mem_entry, which in turn means we must always use a full
page for a context's PDB.

Bug 1977822

Change-Id: I6a7a3a95b7c902bc6487cba05fde58fbc4a25da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1718755
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-29 21:56:59 -07:00
Adeel Raza
dc37ca4559 gpu: nvgpu: MISRA fixes for composite expressions
MISRA rules 10.6, 10.7, and 10.8 prevent mixing of types in composite
expressions. Resolve these violations by casting variables/constants to
the appropriate types.
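
These fixes generally follow a pattern like this hypothetical example
(not taken from the patch itself):

  u32 offset = 0x100U;
  u32 shift = 12U;
  u64 base = 0U;

  /* Before: a u32 composite expression mixed into a u64 sum. */
  u64 addr = base + (offset << shift);

  /* After: the operand is cast so the whole expression is u64. */
  u64 fixed = base + ((u64)offset << shift);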

Jira NVGPU-850
Jira NVGPU-853
Jira NVGPU-851

Change-Id: If6db312187211bc428cf465929082118565dacf4
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1931156
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-25 11:13:38 -07:00
Sagar Kamble
3030707936 gpu: nvgpu: init auto objects for MISRA 9.1
Address MISRA Rule 9.1 violation: The value of an object with automatic
storage duration shall not be read before it has been set.
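
The fix pattern is simply to give automatic variables a value before
any path can read them, e.g. (hypothetical):

  /* Before: 'err' may be read on an early goto/cleanup path. */
  int err;

  /* After: initialized at the point of declaration. */
  int err = 0;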

JIRA NVGPU-881

Change-Id: I63fde303b0a3e05f16b9ce518684a37c774f0f43
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929824
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-23 15:44:50 -07:00
Amurthyreddy
c114b9e77e gpu: nvgpu: MISRA 14.4 err/ret/status as boolean
MISRA rule 14.4 doesn't allow the use of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.

Fix violations where the integer variables err, ret, status are used
as booleans in the controlling expression of if and loop statements.
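
A hypothetical before/after showing the pattern:

  int err = nvgpu_pd_cache_init(g);

  /* Before: the integer result used directly as a boolean. */
  if (err) {
          return err;
  }

  /* After: an explicit comparison against 0. */
  if (err != 0) {
          return err;
  }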

JIRA NVGPU-1019

Change-Id: Ia950828797b8eff4bc754269ea2d9fa272f59436
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1919111
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-12 17:35:11 +05:30
Nicolas Benech
ea0a0c8485 gpu: nvgpu: Fix all MISRA 17.7 violations in mm
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function
to return void. This patch contains fixes for all 17.7 violations in
the MM code.
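
When the return value genuinely carries no information, the second
option applies; a sketch of that pattern (the function shown is
illustrative):

  /* Before: an int return that every caller ignored. */
  int nvgpu_pd_cache_fini(struct gk20a *g);

  /* After: the prototype is changed to return void. */
  void nvgpu_pd_cache_fini(struct gk20a *g);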

JIRA NVGPU-677

Change-Id: Iafcbe99050ba79e571107ccf9d2a89bbeb6239b1
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1847890
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-12 17:35:09 +05:30
Debarshi Dutta
421e64aad7 gpu: nvgpu: move header location of gk20a.h
Update header path of gk20a.h in files present in common/
to <nvgpu/gk20a.h>

Jira NVGPU-597

Change-Id: I3431dae93ada9bd561454c89a0b99c5292ab4a8d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1832024
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-25 00:20:25 -07:00
Amulya
941ac9a9d0 nvgpu: common: MISRA 10.1 boolean fixes
Fix violations where a non-boolean variable is used as a boolean in
gpu/nvgpu/common.

JIRA NVGPU-646

Change-Id: I9773d863b715f83ae1772b75d5373f77244bc8ca
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1807132
GVS: Gerrit_Virtual_Submit
Tested-by: Amulya Murthyreddy <amurthyreddy@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-19 03:24:12 -07:00
Alex Waterman
7405f69ae2 gpu: nvgpu: Fix MISRA 21.2 violations (pd_cache.c)
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.

Fixes for all the pd_cache functions that use '__' prefixes. This
was trivial: the '__' prefix was simply deleted.

JIRA NVGPU-1029

Change-Id: Ia91dabe3ef97fb17a2a85105935fb3a72d7c2c5e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813643
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-12 17:48:28 -07:00
Nicolas Benech
2eface802a gpu: nvgpu: Fix mutex MISRA 17.7 violations
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function
to return void. This patch contains fixes for calls to
nvgpu_mutex_init and improves the related error handling.
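
The nvgpu_mutex_init pattern is roughly the following (the mutex name
and message are illustrative):

  int err = nvgpu_mutex_init(&vm->update_gmmu_lock);

  if (err != 0) {
          nvgpu_err(g, "failed to init update_gmmu_lock");
          return err;
  }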

JIRA NVGPU-677

Change-Id: I609fa138520cc7ccfdd5aa0e7fd28c8ca0b3a21c
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1805598
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-05 20:39:08 -07:00
Sai Nikhil
2f97e683fe gpu: nvgpu: common: fix MISRA Rule 10.4
MISRA Rule 10.4 only allows arithmetic operations on operands of the
same essential type category.

Add a "U" suffix to integer literals so that the operands of an
arithmetic operation have the same essential type.

This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.

In balloc_get_order_list() the argument "int order" has been changed to
a u64 because all callers of this function pass a u64 argument.

JIRA NVGPU-992

Change-Id: Ie2964f9f1dfb2865a9bd6e6cdd65e7cda6c1f638
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1784419
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-08-29 08:59:31 -07:00
Srirangan
553fdf3534 gpu: nvgpu: common: mm: Fix MISRA 15.6 violations
MISRA Rule-15.6 requires that all if-else blocks be enclosed in braces,
including single-statement blocks. Fix errors due to single-statement
if blocks without braces by introducing the braces.
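
A hypothetical example of the brace fix:

  /* Before: single-statement if without braces. */
  if (err != 0)
          return err;

  /* After: braces added around the single statement. */
  if (err != 0) {
          return err;
  }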

JIRA NVGPU-671

Change-Id: I129cc170d27c7f1f2e193b326b95ebbe3c75ebab
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795600
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-08-16 15:11:33 -07:00
Alex Waterman
86a94230c6 gpu: nvgpu: Add nvgpu/bug.h include to some MM files
Add <nvgpu/bug.h> to MM files that use any of the BUG, BUG_ON,
WARN, WARN_ON, etc., macros but do not yet include <nvgpu/bug.h>.

JIRA NVGPU-401

Change-Id: I538219683d2a52b15abf147ff4bcf6375b6cb8a0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599960
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-30 17:30:12 -08:00
Alex Waterman
e620bbccdd gpu: nvgpu: Request CONTIG allocs for large PDs
Request explicitly contiguous DMA memory for large page directory
allocations. Large in this case means greater than PAGE_SIZE. This
is necessary if the GPU's DMA allocator is configured to allocate
discontiguous memory by default.
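
Roughly, the allocation path for oversized PDs becomes the following
sketch (the flag and helper names are assumptions about nvgpu's DMA
API, not taken verbatim from the patch; 'mem' is the PD's backing
nvgpu_mem):

  unsigned long flags = 0;

  if (bytes > PAGE_SIZE) {
          /* HW requires large PDs to be physically contiguous. */
          flags = NVGPU_DMA_FORCE_CONTIGUOUS;
  }

  err = nvgpu_dma_alloc_flags(g, flags, bytes, mem);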

Bug 2015747

Change-Id: I3afe9c2990522058f6aa45f28030bc82a369ca69
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593093
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 10:37:00 -08:00
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change license of OS independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
Alex Waterman
43ae97000b gpu: nvgpu: Use non-contig mem in pd_cache
In the PD caching code, use a non-contiguous DMA alloc for
allocations of PAGE_SIZE and below. There's no need to use the
special contiguous memory pool for these page-sized allocs, and
wasting that memory can lead to OOM problems pretty quickly (think
large sparse textures, for example).

Also turn several pd_dbg() statements for printing OOM errors into
nvgpu_err()s since knowing exactly where an alloc fails is very
convenient.
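
The logging side of the change follows this pattern (the message text
is illustrative):

  if (err != 0) {
          /* Was pd_dbg(); an OOM here should always be visible. */
          nvgpu_err(g, "Unable to DMA alloc a pd_cache entry!");
          return err;
  }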

Bug 200326705

Change-Id: Ib7c45020894d4bdd73cc92179ef707e472714d61
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1527294
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-07-30 23:11:42 -07:00
Alex Waterman
583704620d gpu: nvgpu: Implement PD packing
In some cases page directories require less than a full page of memory.
For example, on Pascal, the final PD level for large pages is only 256 bytes;
thus 16 PDs can fit in a single page. To allocate an entire page for each of
these 256 B PDs is extremely wasteful. This patch aims to alleviate the
wasted DMA memory from having small PDs in a full page by packing multiple
small PDs into a single page.

The packing is implemented as a slab allocator - each page is a slab and
from each page multiple PD instances can be allocated. Several modifications
to the nvgpu_gmmu_pd struct also needed to be made to support this. The
nvgpu_mem is now a pointer and there's an explicit offset into the nvgpu_mem
struct so that each nvgpu_gmmu_pd knows what portion of the memory it's
using.
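
A sketch of the reworked struct (field names inferred from the
description; the real definition may differ):

  struct nvgpu_gmmu_pd {
          /* Backing DMA memory; possibly shared with other small PDs. */
          struct nvgpu_mem *mem;

          /* Byte offset of this PD within 'mem'. */
          u32 mem_offs;

          /* True when the PD was allocated from the pd_cache slabs. */
          bool cached;

          /* Next-level entries and other bookkeeping omitted here. */
  };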

The nvgpu_pde_phys_addr() function and the pd_write() functions also require
some changes since the PD is no longer always situated at the start of the
nvgpu_mem.

Initialization and cleanup of the page tables for each VM were slightly
modified to work through the new pd_cache implementation. Some PDs (i.e.
the PDB), despite not being a full page, still require a full page for
alignment purposes (HW requirements). Thus a direct allocation method for
PDs is still provided. This is also used when a PD that could in principle
be cached is greater than a page in size.

Lastly a new debug flag was added for the pd_cache code.

JIRA NVGPU-30

Change-Id: I64c8037fc356783c1ef203cc143c4d71bbd5d77c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1506610
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-07-06 14:44:16 -07:00