Commit Graph

15 Commits

Author SHA1 Message Date
Philip Elcan
5e75f90f1c gpu: nvgpu: mm: fix CERT-C bugs in pd_cache
Fix CERT-C INT-30 and INT-31 violations in
nvgpu.common.mm.gmmu.pd_cache by using safe ops.
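
Sketched below, purely for illustration, is the general shape of such fixes in plain C; the helper names and the use of assert() are assumptions, not the actual nvgpu safe-ops API:

  #include <assert.h>
  #include <stdint.h>

  /* INT30-C: reject unsigned wraparound before adding. */
  static uint32_t checked_add_u32(uint32_t a, uint32_t b)
  {
      assert(a <= UINT32_MAX - b);
      return a + b;
  }

  /* INT31-C: reject lossy narrowing before converting. */
  static uint32_t checked_u64_to_u32(uint64_t v)
  {
      assert(v <= UINT32_MAX);
      return (uint32_t)v;
  }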

JIRA NVGPU-3637

Change-Id: Iecc7769e46c5c9c7dabaa852067e8f4052a73ac5
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2146987
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-07-03 21:45:46 -07:00
Vedashree Vidwans
d4d04d060d gpu: nvgpu: fix misra violations mm.gmmu.pd_cache
MISRA Directive 4.7 requires the calling function to test returned error
information as soon as the called function returns.
This patch fixes this violation in nvgpu/common/mm/gmmu/pd_cache.c.
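
As an illustration of the pattern (the call site below is a placeholder, not the literal diff), the directive is satisfied by testing the error immediately rather than several statements later:

  /* Placeholder call site, for illustration only. */
  err = nvgpu_pd_alloc(vm, pd, pd_bytes);
  if (err != 0) {
      /* Error information is tested as soon as the callee returns. */
      return err;
  }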

Change-Id: I2d93bf7423d2d37aacfd14c365a0681c0dd3a49d
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2143187
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-27 12:45:44 -07:00
ajesh
a6cbfca58c gpu: nvgpu: fix MISRA violations in bitops unit
Fix the following MISRA rule violations in the bitops unit:
MISRA Rule 10.1
MISRA Rule 10.3
MISRA Rule 10.4
MISRA Rule 11.8
MISRA Rule 21.2
Introduce nvgpu-specific functions for bitops and bitmap operations
that take an unsigned integer as the offset parameter. OS-specific type
conversions and handling of these interfaces are taken care of in the
respective OS files.
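
A minimal sketch of the idea (the function name and signatures are assumptions, not the patch): common code sees an unsigned bit offset, and any conversion needed by the underlying OS primitive stays in the OS-specific implementation:

  /* Common-code interface: the bit offset is an unsigned integer. */
  void nvgpu_set_bit(u32 bit, unsigned long *addr);

  /* Possible Linux-side implementation: the conversion to the native
   * bitop's parameter type is confined to the OS layer. */
  void nvgpu_set_bit(u32 bit, unsigned long *addr)
  {
      set_bit((int)bit, addr);
  }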

Jira NVGPU-3545

Change-Id: Ib1ef76563db6ba1d879a0b4d365b2958ea03f85c
Signed-off-by: ajesh <akv@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2129513
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-11 22:26:41 -07:00
Thomas Fleury
97762279b7 gpu: nvgpu: make nvgpu_init_mutex return void
Make the nvgpu_init_mutex function return void.
In the Linux case, this doesn't affect anything since mutex_init
returns void.
For POSIX, we assert() and die if pthread_mutex_init fails.

This removes the need for error injection at _every_
nvgpu_mutex_init call site in the driver.
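
A sketch of the POSIX side under this scheme (struct and field names are assumptions): initialization either succeeds or the process asserts and dies, so callers no longer have an error code to propagate:

  void nvgpu_mutex_init(struct nvgpu_mutex *mutex)
  {
      int err = pthread_mutex_init(&mutex->lock, NULL);

      /* No error path back to the caller; failure is fatal here. */
      assert(err == 0);
  }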

Jira NVGPU-3476

Change-Id: Ibc801116dc82cdfcedcba2c352785f2640b7d54f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2130538
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-05 10:25:52 -07:00
Nicolin Chen
a8e6d13652 gpu: nvgpu: Delete NVGPU_DMA_FORCE_CONTIGUOUS
The flag NVGPU_DMA_FORCE_CONTIGUOUS simply means that the memory
or the pages should be forced to be contiguous. Meanwhile, the other
flag NVGPU_DMA_PHYSICALLY_ADDRESSED means that the memory should
be contiguous from the GPU's perspective: either physically contiguous
when the IOMMU is not used, or virtually contiguous via the IOMMU.

Thus the NVGPU_DMA_FORCE_CONTIGUOUS flag is now redundant.

This patch cleans up the NVGPU_DMA_FORCE_CONTIGUOUS flag.

Bug 200444660

Change-Id: I63bb06fea728b34ec2c6f831504392d42c426d55
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2035403
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-14 03:38:19 -07:00
Nicolas Benech
cb48b30737 gpu: nvgpu: create pd_cache_priv.h
struct nvgpu_pd_cache is now in the pd_cache_priv.h header that
can then be used by unit tests.

JIRA NVGPU-677

Change-Id: I2307cf6b74a1835031e00d7b32dc03d2a3ed820c
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2020599
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-05 11:14:50 -08:00
Nicolas Benech
ee6ef2a719 gpu: nvgpu: resolve MISRA 17.7 for WARN_ON
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch ensures that WARN and WARN_ON always return
void, and introduces a new nvgpu_do_assert construct to trigger the
equivalent of WARN_ON(true) so that the stack can be dumped (depending
on OS support).
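
The shape of the change, as a sketch only (the committed macro bodies and the nvgpu_do_assert argument list may differ):

  /* A void-returning WARN_ON leaves no unused return value for
   * Rule 17.7 to flag at bare call sites such as WARN_ON(cond);. */
  #define WARN_ON(cond)              \
      do {                           \
          if (cond) {                \
              nvgpu_do_assert();     \
          }                          \
      } while (false)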

JIRA NVGPU-677

Change-Id: Ie2312c5588ceb5b1db825d15a096149b63b69af4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-05 11:14:46 -08:00
Alex Waterman
ba85fc999b gpu: nvgpu: Move pd_cache declarations to new header
The pd_cache header declarations were originally part of the
gmmu.h header. This is not good from a unit isolation perspective,
so this patch moves all the pd_cache specifics over to a new
header file: <nvgpu/pd_cache.h>.

Also, a couple of static inlines that were possible when the code
was part of gmmu.h were turned into real, first-class functions.
This allowed the pd_cache.h header to avoid including the gmmu.h
header file.

Also fix an issue in the nvgpu_pd_write() function where the data
was being passed as a size_t for some reason. This has now been
changed to a u32.
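
Roughly, the prototype change looks like this (parameter names are assumptions; only the size_t-to-u32 switch is taken from the message):

  /* Before: data carried as a size_t. */
  void nvgpu_pd_write(struct gk20a *g, struct nvgpu_gmmu_pd *pd,
                      size_t w, size_t data);

  /* After: data is a u32. */
  void nvgpu_pd_write(struct gk20a *g, struct nvgpu_gmmu_pd *pd,
                      size_t w, u32 data);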

JIRA NVGPU-1444

Change-Id: Ib9e9e5a54544de403bfcd8e11c30de05721ddbcc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966352
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 11:05:11 -08:00
Terje Bergstrom
60e31ff091 gpu: nvgpu: Remove mm_gk20a.h dep from pd_cache
pd_cache.c includes mm_gk20a.h. It does not seem to need it, so
drop the include.

Change-Id: Ifd95009f2b8bddf15e904b94c202dd9be322da6c
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964676
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 11:04:58 -08:00
Alex Waterman
27f3cd5290 Revert "gpu: nvgpu: Move pd_cache declarations to new header"
This reverts commit 15603b9fd5.

Causes a build break in the PD cache unit test. Not sure how this
passed GVS - must have been a race or something? Unclear.

Change-Id: Ia484a801d098d69441326fa1dd40a1c86e2e23ce
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966335
2018-12-05 13:24:03 -08:00
Alex Waterman
15603b9fd5 gpu: nvgpu: Move pd_cache declarations to new header
The pd_cache header declarations were originally part of the
gmmu.h header. This is not good from a unit isolation perspective
so this patch moves all the pd_cache specifics over to a new
header file: <nvgpu/pd_cache.h>.

Also a couple of static inlines that were possible when the code
was part of gmmu.h were turned into real, first class functions.
This allows the pd_cache.h header to not include the gmmu.h
header file.

Also fix an issue in the nvgpu_pd_write() function where the data
was being passed as a size_t for some reason. This has now been
changed to a u32.

JIRA NVGPU-1444

Change-Id: Iead9a0d998396d2289ffcb3b48765d770400397b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965271
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-05 12:24:52 -08:00
Nicolas Benech
f80d2a01f4 gpu: nvgpu: clean MISRA 17.7 in pd_cache.c
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch contains fixes for all 17.7 violations in
pd_cache.c.
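
The two remedies typically look like this (placeholder call sites, not the literal diff):

  /* Either consume the returned error... */
  err = nvgpu_pd_cache_init(g);
  if (err != 0) {
      return err;
  }

  /* ...or, when the result carries no useful information, make the
   * callee return void so there is nothing left to ignore. */
  void nvgpu_pd_cache_fini(struct gk20a *g);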

JIRA NVGPU-677.

Change-Id: Idd5534ce82107071a1d47250f87e6a1046989433
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964639
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-04 16:14:46 -08:00
Philip Elcan
b531d6d44d gpu: nvgpu: fix MISRA 10.3 errors in pd_cache
MISRA Rule 10.3 prohibits assigning an expression to an object of a
narrower or different essential type. This change resolves all of the
10.3 violations in the pd_cache unit.
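
A small invented example of the kind of change involved (values and names are not from the patch):

  u64 pd_mem_bytes = 4096ULL;
  u32 num_words;

  /* Rule 10.3 violation: implicit assignment to a narrower type. */
  num_words = pd_mem_bytes / 4ULL;

  /* Compliant: the narrowing conversion is explicit, and the value is
   * known to fit in a u32. */
  num_words = (u32)(pd_mem_bytes / 4ULL);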

JIRA NVGPU-1008

Change-Id: I5b547e0e208caea2e4204708c3a50d98919409f8
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1962046
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-29 18:55:08 -08:00
Alex Waterman
7225562936 gpu: nvgpu: Re-allocate PDs when they increase in size
The problem here, and the solution, requires some background
so let's start there.

During page table programming page directories (PDs) are
allocated as needed. Each PD can range in size, depending on
chip, from 256 bytes all the way up to 32KB (gk20a 2-level
page tables).

In HW, two distinct PTE sizes are supported: large and small.
The HW supports mixing these at will. The second to last level
PDE has pointers to both a small and large PD with
corresponding PTEs. Nvgpu doesn't handle that well and as a
result historically we split the GPU virtual address space
up into a small page region and a large page region. This
makes the GMMU programming logic easier since we now only have
to worry about one type of PD for any given region.

But this presents issues for CUDA and UVM. They want to be
able to mix PTE sizes in the same GPU virtual memory range.

In general we still don't support true dual page directories.
That is, page directories with both the small and large next-level
PDs populated. However, we will allow adjacent PDs to
have different-sized next-level PDs.

Each last level PD maps the same amount. On Pascal+ that's
2MB. This is true regardless of the PTE coverage (large or
small). That means the last level PD will be different in
size depending on the PTE size.

So - going back to the SW we allocate PDs as needed when
programming the page tables. When we do this allocation we
allocate just enough space for the PD to contain the
necessary number of PTEs for the page size. The problem
manifests when a PD flips in size from large to small PTEs.

Consider the following mapping operations:

  map(gpu_va -> phys) [large-pages]
  unmap(gpu_va)
  map(gpu_va -> phys) [small-pages]

In the first map/unmap we go and allocate all the necessary
PDs and PTEs to build this translation. We do so assuming a
large page size. When unmapping, as an optimization/quirk of
nvgpu, we leave the PDs around. We know they may well be used
again in the future.

But if we swap the size of the mapping from large to small,
then we need more space in the PD for PTEs. The logic in the
GMMU code assumes that if the PD has memory allocated, then
that memory is sufficient. This worked back when there was no
potential for a PD to swap page size. But now that there is,
we have to re-allocate when the PD doesn't have enough space for
the required PTEs.

So that's the fix - reallocate PDs when they require more
space than they currently have.
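
In code terms the fix amounts to something like the following check in the PD allocation path (a sketch only; the helper names are illustrative, not the committed ones):

  /* If a cached PD exists but is smaller than what the new PTE size
   * requires, free it and fall through to a fresh, larger allocation. */
  if (pd->mem != NULL && pd_allocated_bytes(pd) < pd_size(l, attrs)) {
      nvgpu_pd_free(vm, pd);
      pd->mem = NULL;
  }

  if (pd->mem == NULL) {
      err = nvgpu_pd_alloc(vm, pd, pd_size(l, attrs));
      if (err != 0) {
          return err;
      }
  }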

Change-Id: I9de70da6acfd20c13d7bdd54232e4d4657840394
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1933076
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 13:13:47 -08:00
Alex Waterman
6be166affa gpu: nvgpu: Add new subdirs to common/mm
Add two new sub-directories under MM: gmmu and allocators.

The allocators directory is for all the allocator code we have.
There's a fair amount, and as such it could be considered a component
with a bunch of sub-units.

The new GMMU directory will contain the GMMU component (which used to
be a single unit). The new GMMU component consists of the
page_table and pd_cache units. Also, when we migrate the chip-specific
GMMU code out of mm_gk20a.c and mm_gp10b.c, it will be placed in this
new GMMU directory.

JIRA NVGPU-1390

Change-Id: I7aa47ea2a32612b7d69972671fccb72770e1ae09
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1944385
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:36 -08:00