Commit Graph

25 Commits

Author SHA1 Message Date
srajum
07583dffed gpu: nvgpu: fix MISRA 5.7 and 10.4 violations
- Rule 5.7 doesn't allow an identifier to be reused.
  This change renames variable "ops" to resolve this violation.

- Rule 10.4 requires both operands of an operator in which the usual
  arithmetic conversions are performed to have the same essential type.
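
A minimal sketch of the kind of change Rule 10.4 asks for (the
variable names are illustrative, not from the actual patch):

    u64 offset = 0ULL;
    u32 index = 3U;

    /* Violation: u64 and u32 operands are mixed in the addition. */
    offset = offset + index;

    /* Fix: cast so both operands share the same essential type. */
    offset = offset + (u64)index;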

JIRA NVGPU-6056

Change-Id: Ic88f398c49d122cee206efcf88afd1edf951b042
Signed-off-by: srajum <srajum@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2561772
(cherry picked from commit c129465413db2c28bfcb0a039962cb65e2fca1ea)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2677518
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Dinesh T <dt@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
GVS: Gerrit_Virtual_Submit
2022-03-08 05:31:29 -08:00
Richard Zhao
9ab1271269 gpu: nvgpu: common: fix compile error of new compile flags
This prepares for adding the CFLAGS below:
    -Werror -Wall -Wextra \
    -Wmissing-braces -Wpointer-arith -Wundef \
    -Wconversion -Wsign-conversion \
    -Wformat-security \
    -Wmissing-declarations -Wredundant-decls -Wimplicit-fallthrough

Jira GVSCI-11640

Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Change-Id: Ia8f508c65071aa4775d71b8ee5dbf88a33b5cbd5
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2555056
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2022-01-13 12:36:14 -08:00
Richard Zhao
cecb0666f6 gpu: nvgpu: pd_cache: always zero partial on free
After a partial cache entry is freed, it may be re-used later.
So always zero the partial entry on free.
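
A minimal sketch of the idea, assuming a hypothetical free path (the
helper and field names here are assumptions, not the actual code):

    static void pd_cache_free_partial(struct nvgpu_pd_mem_entry *pentry,
                                      struct nvgpu_gmmu_pd *pd)
    {
        /* Zero the entry now so a later re-use of this slot can
         * never observe stale directory contents. */
        memset((u8 *)pentry->mem.cpu_va + pd->mem_offs, 0,
               pentry->pd_size);
    }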

Jira GVSCI-10977

Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Change-Id: I7779169793e32d396db187d6e6e072d6f194e91e
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2548471
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc_kernel_abi <svc_kernel_abi@nvidia.com>
Reviewed-by: Dinesh T <dt@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2021-06-28 18:10:28 -07:00
Sagar Kamble
44c4611fda gpu: nvgpu: update pd clear condition to address IOMMU prefetch issue
An IOMMU fault is observed with a 64KB PAGE_SIZE. This is due to the
IOMMU prefetching stale/invalid pd entries; the IOMMU can prefetch
more than 4K worth of entries.

Clear the pd when NVGPU_PD_CACHE_SIZE is more than 4K.
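
A minimal sketch of the updated clear condition (illustrative; the
actual check lives in the pd_cache allocation path):

    /* The IOMMU may prefetch past a 4K boundary, so a slab larger
     * than 4K must be zeroed in full before its entries are used. */
    if (NVGPU_PD_CACHE_SIZE > SZ_4K)
        memset(pentry->mem.cpu_va, 0, NVGPU_PD_CACHE_SIZE);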

Bug 200719161

Change-Id: Iac2a9bcfbcfaa36840da1fa85594520a6fd4eaaf
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2521912
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2021-04-30 11:07:48 -07:00
Antony Clince Alex
c36752fe3d gpu: nvgpu: sim: make ring buffer independent of PAGE_SIZE
The simulator ring buffer DMA interface supports buffers of the following sizes:
4, 8, 12 and 16K. At present, it is configured to 4K, which happens
to match the kernel PAGE_SIZE that is used to wrap the GET/PUT
pointers back once 4K is reached. However, this is not always true;
for instance, take 64K pages.
Hence, replace PAGE_SIZE with SIM_BFR_SIZE.

Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE,
and replace the latter with the former.
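
A minimal sketch of the two macros (both names come from this
message; the values and the wrap expression are illustrative):

    /* Explicitly the CPU page size; no implied GPU meaning. */
    #define NVGPU_CPU_PAGE_SIZE PAGE_SIZE

    /* Ring buffer size: fixed by the DMA interface, not the CPU. */
    #define SIM_BFR_SIZE        SZ_4K

    /* Wrap the PUT pointer against the ring size, not PAGE_SIZE. */
    put = (put + (u32)sizeof(u32)) % SIM_BFR_SIZE;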

Bug 200658101
Jira NVGPU-6018

Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Peter Daifuku
a331fd4b3a gpu: nvgpu: pd_cache enablement for >4k allocations in qnx
Mapping large buffers through the GMMU ends up needing many
pages for the PTE tables. Allocating these one by one can
end up being a performance bottleneck, particularly in the
virtualized case.

This change adds the following:

 - As the TLB invalidation doesn't have access to mem_off,
   allow top-level allocation by alloc_cache_direct().
 - Define NVGPU_PD_CACHE_SIZE, the allocation size for a new slab
   for the PD cache, effectively set to 64KB.
 - Use the PD cache for any allocation < NVGPU_PD_CACHE_SIZE
   (see the sketch below). When freeing cached entries, avoid
   prefetch errors by invalidating the entry (memset to 0).
 - Try to fall back to direct allocation of a smaller chunk when
   contiguous allocations fail.
 - Unit test changes.
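
A minimal sketch of the resulting allocation decision (helper names
are assumptions in the spirit of the list above, not the actual
patch):

    int nvgpu_pd_alloc(struct gk20a *g, struct nvgpu_gmmu_pd *pd,
                       u32 bytes)
    {
        /* Large PDs bypass the cache and are allocated directly. */
        if (bytes >= NVGPU_PD_CACHE_SIZE)
            return nvgpu_pd_cache_alloc_direct(g, pd, bytes);

        /* Anything smaller is carved out of a cached 64KB slab. */
        return nvgpu_pd_cache_alloc(g, g->mm.pd_cache, pd, bytes);
    }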

Bug 200649243

Change-Id: I0a667af0ba01d9147c703e64fc970880e52a8fbc
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2404371
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Nicolas Benech
16b80d2c5c gpu: nvgpu: pd_cache: add BUG_ON to guard divide by 0
In the unlikely event of a corruption of pentry->pd_size, this new
BUG_ON prevents a potential divide by 0. This change is mostly to
increase safety, as it is unlikely for a divide by 0 to occur in
this instance.
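
A minimal sketch of the guard (the surrounding computation is
illustrative):

    /* A pd_size of 0 can only mean pentry is corrupted; crash
     * deliberately rather than divide by 0 below. */
    BUG_ON(pentry->pd_size == 0U);
    idx = pd->mem_offs / pentry->pd_size;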

JIRA NVGPU-4949

Change-Id: Ibdf80670f35a63dd20d06082cde23fb424931933
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2291022
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Scott Long
d864904a49 gpu: nvgpu: mm: misra 12.1 fixes
MISRA Advisory Rule 12.1 states that the precedence of operators
within expressions should be made explicit.

This change removes the Advisory Rule 12.1 violations from mm code.
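
A minimal illustration of the kind of change Rule 12.1 requires (the
operands are illustrative):

    /* Violation: grouping relies on implicit operator precedence. */
    size = count * entry_size + header;

    /* Fix: parenthesize so the intended grouping is explicit. */
    size = (count * entry_size) + header;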

Jira NVGPU-3178

Change-Id: I51c53c3200530c8fb2b958d9d7d77b9366d9a202
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: http://git-master.nvidia.com/r/c/linux-nvgpu/+/2276837
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:10:29 -06:00
Adeel Raza
252ddc4f05 gpu: nvgpu: add coverity whitelisting support
Add macros for whitelisting coverity violations. These macros use pragma
directives. The pragma directives and whitelisting macros are only
enabled when a coverity scan is being run.

The whitelisting macros have been added to a new header called
static_analysis.h. The contents of safe_ops.h (CERT C safe ops) have
been moved into static_analysis.h because this will be the new header
for static analysis related macros/defines/etc.
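
A minimal sketch of the general technique; the macro name and pragma
text here are assumptions, not necessarily what static_analysis.h
defines:

    #if defined(__COVERITY__)
    #define NVGPU_COV_PRAGMA(x)  _Pragma(#x)
    /* Emits e.g.: #pragma coverity compliance deviate <checker> */
    #define NVGPU_COV_WHITELIST(action, checker) \
            NVGPU_COV_PRAGMA(coverity compliance action checker)
    #else
    /* Outside a Coverity scan the macro compiles away to nothing. */
    #define NVGPU_COV_WHITELIST(action, checker)
    #endif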

JIRA NVGPU-3820

Change-Id: I9c63f20f670880b420415535738034619314b7c3
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2180600
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:05:52 -06:00
Scott Long
26d955be23 gpu: nvgpu: mm: fix misra 2.7 violations
Advisory Rule 2.7 states that there should be no unused
parameters in functions.

This patch removes unused function parameters from the following:

 * release_as_share_id() -> remove 'id' param
 * nvgpu_pd_cache_look_up() -> remove 'g' param
 * nvgpu_vm_get_pte_size_fixed_map() -> remove 'size' param
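
For illustration, one of these in before/after form (the surrounding
signature details are assumed):

    /* Before: 'g' is never referenced in the body. */
    static struct nvgpu_pd_mem_entry *nvgpu_pd_cache_look_up(
        struct gk20a *g, struct nvgpu_pd_cache *cache,
        struct nvgpu_gmmu_pd *pd);

    /* After: the unused parameter is dropped at the definition and
     * at every call site. */
    static struct nvgpu_pd_mem_entry *nvgpu_pd_cache_look_up(
        struct nvgpu_pd_cache *cache, struct nvgpu_gmmu_pd *pd);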

Jira NVGPU-3178

Change-Id: Id2c3b5378bba9bdc0312742fd8393fb3ec67c4df
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2178650
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-08-20 09:56:59 -07:00
Philip Elcan
5e75f90f1c gpu: nvgpu: mm: fix CERT-C bugs in pd_cache
Fix CERT-C INT-30 and INT-31 violations in
nvgpu.common.mm.gmmu.pd_cache by using safe ops.
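
A minimal sketch of the safe-ops pattern (assuming an add helper of
roughly this shape; the exact names live in the safe-ops header):

    static inline u32 nvgpu_safe_add_u32(u32 a, u32 b)
    {
        /* CERT INT-30: unsigned wraparound must not pass silently. */
        BUG_ON(a > U32_MAX - b);
        return a + b;
    }

    /* Call site: replaces a bare 'offset + bytes'. */
    next = nvgpu_safe_add_u32(offset, bytes);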

JIRA NVGPU-3637

Change-Id: Iecc7769e46c5c9c7dabaa852067e8f4052a73ac5
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2146987
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-07-03 21:45:46 -07:00
Vedashree Vidwans
d4d04d060d gpu: nvgpu: fix misra violations mm.gmmu.pd_cache
MISRA Directive 4.7 requires the calling function to test returned
error information as soon as the called function returns.
This patch fixes this violation in nvgpu/common/mm/gmmu/pd_cache.c.
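
A minimal illustration of the pattern Directive 4.7 mandates (the
called function is a placeholder):

    err = nvgpu_pd_cache_alloc_direct(g, pd, bytes);
    /* Test the returned error immediately, before anything else. */
    if (err != 0) {
        nvgpu_err(g, "PD allocation failed: %d", err);
        return err;
    }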

Change-Id: I2d93bf7423d2d37aacfd14c365a0681c0dd3a49d
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2143187
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-27 12:45:44 -07:00
ajesh
a6cbfca58c gpu: nvgpu: fix MISRA violations in bitops unit
Fix the following MISRA rule violations in the bitops unit:
MISRA Rule 10.1
MISRA Rule 10.3
MISRA Rule 10.4
MISRA Rule 11.8
MISRA Rule 21.2
Introduce nvgpu-specific functions for bitops and bitmap operations
with an unsigned integer as the parameter for offset. OS-specific type
conversions and handling of these interfaces are taken care of in the
respective OS files.
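
A minimal sketch of one such wrapper on the Linux side (the exact
nvgpu signature is an assumption):

    /* nvgpu callers pass an unsigned bit offset; the conversion to
     * the signed type Linux set_bit() expects is confined here. */
    void nvgpu_set_bit(u32 bit, volatile unsigned long *addr)
    {
        set_bit((int)bit, addr);
    }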

Jira NVGPU-3545

Change-Id: Ib1ef76563db6ba1d879a0b4d365b2958ea03f85c
Signed-off-by: ajesh <akv@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2129513
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-11 22:26:41 -07:00
Thomas Fleury
97762279b7 gpu: nvgpu: make nvgpu_init_mutex return void
Make the nvgpu_init_mutex function return void.
In the Linux case, this doesn't affect anything since mutex_init
returns void.
For posix, we assert() and die if pthread_mutex_init fails.

This alleviates the need for error injection for _every_
nvgpu_mutex_init call in the driver.
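
A minimal sketch of the posix side under this policy (struct and
field names are assumptions):

    void nvgpu_mutex_init(struct nvgpu_mutex *mutex)
    {
        int err = pthread_mutex_init(&mutex->lock, NULL);

        /* No caller can sensibly recover from this, so die loudly
         * here instead of returning an error to every call site. */
        assert(err == 0);
    }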

Jira NVGPU-3476

Change-Id: Ibc801116dc82cdfcedcba2c352785f2640b7d54f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2130538
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-05 10:25:52 -07:00
Nicolin Chen
a8e6d13652 gpu: nvgpu: Delete NVGPU_DMA_FORCE_CONTIGUOUS
The flag NVGPU_DMA_FORCE_CONTIGUOUS simply means that the memory
or the pages should be forced contiguous. Meanwhile, the other
flag NVGPU_DMA_PHYSICALLY_ADDRESSED means that the memory should
be contiguous from the GPU's perspective: either physically
contiguous when no IOMMU is used, or made virtually contiguous
by the IOMMU.

Thus the NVGPU_DMA_FORCE_CONTIGUOUS flag is now redundant.

This patch cleans up the NVGPU_DMA_FORCE_CONTIGUOUS flag.

Bug 200444660

Change-Id: I63bb06fea728b34ec2c6f831504392d42c426d55
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2035403
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-14 03:38:19 -07:00
Nicolas Benech
cb48b30737 gpu: nvgpu: create pd_cache_priv.h
struct nvgpu_pd_cache now lives in the pd_cache_priv.h header,
which can then be included by unit tests.

JIRA NVGPU-677

Change-Id: I2307cf6b74a1835031e00d7b32dc03d2a3ed820c
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2020599
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-05 11:14:50 -08:00
Nicolas Benech
ee6ef2a719 gpu: nvgpu: resolve MISRA 17.7 for WARN_ON
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch ensures that WARN and WARN_ON always return
void, and introduces a new nvgpu_do_assert construct to trigger the
equivalent of WARN_ON(true) so that the stack can be dumped (depending
on OS support).
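
A minimal sketch of such a construct (a plain-C approximation; the
real implementation defers to OS support):

    /* Returns void, so no caller can ignore a return value. */
    static inline void nvgpu_do_assert(void)
    {
        /* Equivalent of WARN_ON(true): report and dump the stack
         * where the OS supports it, but keep running. */
        dump_stack();
    }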

JIRA NVGPU-677

Change-Id: Ie2312c5588ceb5b1db825d15a096149b63b69af4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-05 11:14:46 -08:00
Alex Waterman
ba85fc999b gpu: nvgpu: Move pd_cache declarations to new header
The pd_cache header declarations were originally part of the
gmmu.h header. This is not good from a unit isolation perspective,
so this patch moves all the pd_cache specifics over to a new
header file: <nvgpu/pd_cache.h>.

Also, a couple of static inlines that were possible when the code
was part of gmmu.h were turned into real, first-class functions.
This allowed the pd_cache.h header to not include the gmmu.h
header file.

Also fix an issue in the nvgpu_pd_write() function where the data
was being passed as a size_t for some reason. This has now been
changed to a u32.

JIRA NVGPU-1444

Change-Id: Ib9e9e5a54544de403bfcd8e11c30de05721ddbcc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966352
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 11:05:11 -08:00
Terje Bergstrom
60e31ff091 gpu: nvgpu: Remove mm_gk20a.h dep from pd_cache
pd_cache.c includes mm_gk20a.h. It does not seem to need it, so
drop the include.

Change-Id: Ifd95009f2b8bddf15e904b94c202dd9be322da6c
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964676
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 11:04:58 -08:00
Alex Waterman
27f3cd5290 Revert "gpu: nvgpu: Move pd_cache declarations to new header"
This reverts commit 15603b9fd5.

Causes a build break in the PD cache unit test. Not sure how this
passed GVS - must have been a race or something? Unclear.

Change-Id: Ia484a801d098d69441326fa1dd40a1c86e2e23ce
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966335
2018-12-05 13:24:03 -08:00
Alex Waterman
15603b9fd5 gpu: nvgpu: Move pd_cache declarations to new header
The pd_cache header declarations were originally part of the
gmmu.h header. This is not good from a unit isolation perspective,
so this patch moves all the pd_cache specifics over to a new
header file: <nvgpu/pd_cache.h>.

Also, a couple of static inlines that were possible when the code
was part of gmmu.h were turned into real, first-class functions.
This allows the pd_cache.h header to not include the gmmu.h
header file.

Also fix an issue in the nvgpu_pd_write() function where the data
was being passed as a size_t for some reason. This has now been
changed to a u32.

JIRA NVGPU-1444

Change-Id: Iead9a0d998396d2289ffcb3b48765d770400397b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965271
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-05 12:24:52 -08:00
Nicolas Benech
f80d2a01f4 gpu: nvgpu: clean MISRA 17.7 in pd_cache.c
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch contains fixes for all 17.7 violations in
pd_cache.c.

JIRA NVGPU-677

Change-Id: Idd5534ce82107071a1d47250f87e6a1046989433
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964639
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-04 16:14:46 -08:00
Philip Elcan
b531d6d44d gpu: nvgpu: fix MISRA 10.3 errors in pd_cache
MISRA Rule 10.3 prohibits assigning a value to an object of a
different or narrower essential type. This change resolves all of
the 10.3 violations in the pd_cache unit.
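
A minimal illustration of a typical 10.3 fix (the variables are
illustrative):

    u64 bytes = 4096ULL;
    u32 words;

    /* Violation: implicit narrowing assignment from u64 to u32. */
    words = bytes / 4ULL;

    /* Fix: narrow explicitly once the range is known to be safe. */
    words = (u32)(bytes / 4ULL);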

JIRA NVGPU-1008

Change-Id: I5b547e0e208caea2e4204708c3a50d98919409f8
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1962046
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-29 18:55:08 -08:00
Alex Waterman
7225562936 gpu: nvgpu: Re-allocate PDs when they increase in size
The problem here, and the solution, requires some background
so let's start there.

During page table programming page directories (PDs) are
allocated as needed. Each PD can range in size, depending on
chip, from 256 bytes all the way up to 32KB (gk20a 2-level
page tables).

In HW, two distinct PTE sizes are supported: large and small.
The HW supports mixing these at will. The second to last level
PDE has pointers to both a small and large PD with
corresponding PTEs. Nvgpu doesn't handle that well and as a
result historically we split the GPU virtual address space
up into a small page region and a large page region. This
makes the GMMU programming logic easier since we now only have
to worry about one type of PD for any given region.

But this presents issues for CUDA and UVM. They want to be
able to mix PTE sizes in the same GPU virtual memory range.

In general we still don't support true dual page directories.
That is, page directories with both the small and large next
level PD populated. However, we will allow adjacent PDs to
have different-sized next-level PDs.

Each last level PD maps the same amount. On Pascal+ that's
2MB. This is true regardless of the PTE coverage (large or
small). That means the last level PD will be different in
size depending on the PTE size.

So - going back to the SW we allocate PDs as needed when
programming the page tables. When we do this allocation we
allocate just enough space for the PD to contain the
necessary number of PTEs for the page size. The problem
manifests when a PD flips in size from large to small PTEs.

Consider the following mapping operations:

  map(gpu_va -> phys) [large-pages]
  unmap(gpu_va)
  map(gpu_va -> phys) [small-pages]

In the first map/unmap we go and allocate all the necessary
PDs and PTEs to build this translation. We do so assuming a
large page size. When unmapping, as an optimization/quirk of
nvgpu, we leave the PDs around. We know they may well be used
again in the future.

But if we swap the size of the mapping from large to small
then we now need more space in the PD for PTEs. But the logic
in the GMMU code assumes that if the PD has memory allocated
then that memory is sufficient. This worked back when there was
no potential for a PD to swap in page size. But now that there
is, we have to re-allocate the PD when it doesn't have enough
space for the required PTEs.

So that's the fix - reallocate PDs when they require more
space than they currently have.
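
A minimal sketch of that fix in the PD allocation path (the helper
and field names are assumptions for illustration):

    /* If this PD was sized for the old PTE layout, release it and
     * allocate one large enough for the new entry size. */
    if (pd->mem != NULL && pd_size > pd->pd_size) {
        nvgpu_pd_free(vm, pd);
        pd->mem = NULL;
    }

    if (pd->mem == NULL) {
        err = nvgpu_pd_alloc(vm, pd, pd_size);
        if (err != 0)
            return err;
    }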

Change-Id: I9de70da6acfd20c13d7bdd54232e4d4657840394
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1933076
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 13:13:47 -08:00
Alex Waterman
6be166affa gpu: nvgpu: Add new subdirs to common/mm
Add two new sub-directories under MM: gmmu and allocators.

The allocators directory is for all the allocator code we have.
There's a fair amount of it, and as such it could be considered a
component with a bunch of sub-units.

The new GMMU directory will contain the GMMU component (which used to
be a single unit). The new GMMU component is comprised of the
page_table and pd_cache units. Also when we migrate the chip specific
GMMU code out of mm_gk20a.c and mm_gp10b.c it will be placed in this
new GMMU directory.

JIRA NVGPU-1390

Change-Id: I7aa47ea2a32612b7d69972671fccb72770e1ae09
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1944385
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:36 -08:00