Commit Graph

19 Commits

Author SHA1 Message Date
Bitan Biswas
f090e6aa23 drivers: gpu: remove archdata.iommu
Fix the k5.9 build error around archdata.iommu by replacing the use of
dev->archdata.iommu with iommu_get_domain_for_dev().

Change-Id: Ic1efb864046a08a7ea9b1810114bdadef20f6adf
Signed-off-by: Bitan Biswas <bbiswas@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2402360
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Sagar Kamble <skamble@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
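
A minimal sketch of the replacement in the commit above, with an assumed helper name rather than the actual nvgpu function: instead of peeking at dev->archdata.iommu (which disappears in k5.9), the driver asks the IOMMU core whether a domain is attached to the device.

    #include <linux/device.h>
    #include <linux/iommu.h>

    /* Illustrative helper, not the nvgpu API: a device counts as
     * IOMMU-backed if the IOMMU core reports an attached domain.
     */
    static bool device_has_iommu_domain(struct device *dev)
    {
            return iommu_get_domain_for_dev(dev) != NULL;
    }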
Vedashree Vidwans
19c80f89be gpu: nvgpu: fix MISRA errors in nvgpu.common.mm
Rule 2.2 doesn't allow unused variable assignments, because their presence
may indicate an error in the program's logic.
Rule 21.x doesn't allow reserved identifiers or macro names starting with
'_' to be defined or reused.

Jira NVGPU-3864

Change-Id: I8ee31c0ee522cd4de00b317b0b4463868ac958ef
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2163723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-08-01 21:57:18 -07:00
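
A hypothetical before/after illustration of the Rule 2.2 class of fix (the names are invented, not taken from nvgpu.common.mm): an assignment whose value is never read is dead code and is dropped.

    /* Hypothetical example, not nvgpu code. */
    static int consume(int x)
    {
            return x + 1;
    }

    static int rule_2_2_before(void)
    {
            int err = 0;            /* dead assignment: the 0 is never read */
            err = consume(1);
            return err;
    }

    static int rule_2_2_after(void)
    {
            int err = consume(1);   /* initialize from the producing call */
            return err;
    }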
Sagar Kamble
a16cc2dde3 gpu: nvgpu: compile out vidmem from safety build
The safety build does not support vidmem. This patch compiles out the
vidmem-related code (vidmem itself, DMA allocation, and the cbc/acr/pmu
allocations based on vidmem) along with the corresponding tests such as
pramin, page allocator, and gmmu_map_unmap_vidmem.
Since vidmem applies only to dGPUs, the code is compiled out using
CONFIG_NVGPU_DGPU.

JIRA NVGPU-3524

Change-Id: Ic623801112484ffc071195e828ab9f290f945d4d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2132773
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-25 04:37:08 -07:00
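
A sketch of the compile-out pattern, using a placeholder entry point (the real vidmem interfaces may differ): the vidmem path is only built under CONFIG_NVGPU_DGPU, and a stub keeps the callers building in the safety/iGPU configuration.

    struct mm_gk20a;   /* forward declaration for the sketch */

    #ifdef CONFIG_NVGPU_DGPU
    /* Real vidmem support is only compiled for dGPU builds. */
    int nvgpu_vidmem_init(struct mm_gk20a *mm);
    #else
    /* Safety builds compile vidmem out; an inline stub avoids #ifdefs
     * at every call site.
     */
    static inline int nvgpu_vidmem_init(struct mm_gk20a *mm)
    {
            return 0;
    }
    #endif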
Philip Elcan
1e95144194 gpu: nvgpu: Fix MISRA 21.2 violations in log.h
MISRA 21.2 prohibits function names that start with a double underscore. So,
rename __nvgpu_log_dbg() and __nvgpu_log_msg() to nvgpu_log_dbg_impl() and
nvgpu_log_msg_impl(), respectively.

JIRA NVGPU-3368

Change-Id: I4548820f6772875088d095539b6da92051e08653
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2118043
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-05-15 22:29:40 -07:00
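
A sketch of the rename pattern with an invented prototype (the real nvgpu signatures differ): only the identifier changes, and any public macro simply forwards to the new *_impl() name.

    struct gk20a;   /* forward declaration for the sketch */

    /* Before (violates MISRA 21.2, the '__' prefix is reserved):
     *     void __nvgpu_log_msg(struct gk20a *g, const char *fmt, ...);
     * After:
     */
    void nvgpu_log_msg_impl(struct gk20a *g, const char *fmt, ...);

    /* Illustrative wrapper macro forwarding to the renamed function. */
    #define sketch_log_msg(g, fmt, args...) \
            nvgpu_log_msg_impl(g, fmt, ##args)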
Nicolin Chen
b0d6964325 gpu: nvgpu: Add non-contiguous memory allocation
The latest GPU uses nvlink and its own MMU to access memory,
instead of the SMMU like the others, so it doesn't go through
the IOMMU framework to allocate physically non-contiguous
memory. The DMA API had a pair of downstream functions to
allocate memory for this situation, but they were removed
since they were unlikely to be acceptable in the upstream
kernel.

In order not to hack the dma-direct ops, which by definition
are supposed to provide contiguous memory, this patch adds a
pair of memory-allocation functions inside the GPU driver,
since nvgpu is the only user.

This pair of functions is only used when the GPU driver goes
through neither dma-direct (FORCE_CONTIGUOUS) nor the IOMMU.
It also requires the GPU driver to map the non-contiguous
pages itself.

Bug 200444660

Change-Id: I26678a3f8d63bba340872beeecbb7b0e1e7a35fa
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029680
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-14 03:38:28 -07:00
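
A minimal sketch of the idea, with illustrative names (this is not the nvgpu implementation): because the GPU maps the pages through its own MMU, they can be allocated page by page instead of as one physically contiguous block.

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Allocate nr_pages individual pages; the caller maps them via the
     * GPU's own MMU, so physical contiguity is not required.
     */
    static struct page **alloc_noncontig_pages(unsigned long nr_pages)
    {
            struct page **pages;
            unsigned long i;

            pages = kvcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
            if (pages == NULL)
                    return NULL;

            for (i = 0; i < nr_pages; i++) {
                    pages[i] = alloc_page(GFP_KERNEL);
                    if (pages[i] == NULL)
                            goto fail;
            }
            return pages;

    fail:
            while (i-- > 0)
                    __free_page(pages[i]);
            kvfree(pages);
            return NULL;
    }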
Nicolin Chen
a8e6d13652 gpu: nvgpu: Delete NVGPU_DMA_FORCE_CONTIGUOUS
The flag NVGPU_DMA_FORCE_CONTIGUOUS simply means that the memory,
or the pages backing it, should be forced to be contiguous.
Meanwhile, the other flag, NVGPU_DMA_PHYSICALLY_ADDRESSED, means
that the memory should be contiguous from the GPU's perspective:
either physically contiguous when the IOMMU is not used, or
virtually contiguous via the IOMMU.

Thus the NVGPU_DMA_FORCE_CONTIGUOUS flag is now redundant.

This patch removes the NVGPU_DMA_FORCE_CONTIGUOUS flag.

Bug 200444660

Change-Id: I63bb06fea728b34ec2c6f831504392d42c426d55
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2035403
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-14 03:38:19 -07:00
Nicolin Chen
ac3c3e2b69 gpu: nvgpu: Simplify nvgpu_dma_free_sys()
The original free routine has three branches:
    if (NVGPU_DMA_NO_KERNEL_MAPPING)
        dma_free_attrs(d, mem->aligned_size, mem->priv.pages, ...);
    else if (other flags)
        dma_free_attrs(d, mem->aligned_size, mem->cpu_va, ...);
    else /* No flags */
        dma_free_coherent(d, mem->aligned_size, mem->cpu_va, ...);

The last dma_free_coherent() can be unwrapped into dma_free_attrs()
with dma_attrs = 0, while the former two are identical except for
the cpu_addr argument. So this patch merges the three calls into a
single one that differentiates only the cpu_addr and dma_attrs
parameters.

Note that dma_attrs is 0 when no flags are set, so the merged call
still behaves like dma_free_coherent() in that case.

Bug 200444660

Change-Id: I92ec0390138c79c5109973e476ea0ea719d4e2b9
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029679
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-14 03:38:08 -07:00
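
A sketch of the merged call described above; the struct fields follow the snippet in the message, but the dma_handle argument and the priv.flags field are assumptions (the original snippet elides the trailing arguments).

    #include <linux/dma-mapping.h>
    #include <nvgpu/nvgpu_mem.h>   /* for struct nvgpu_mem (path assumed) */

    /* Illustrative only, not the actual nvgpu_dma_free_sys(). */
    static void sketch_free_sys(struct device *d, struct nvgpu_mem *mem,
                                dma_addr_t dma_handle, unsigned long dma_attrs)
    {
            void *cpu_addr = (mem->priv.flags & NVGPU_DMA_NO_KERNEL_MAPPING) ?
                             (void *)mem->priv.pages : mem->cpu_va;

            /* dma_attrs == 0 makes this behave like dma_free_coherent(). */
            dma_free_attrs(d, mem->aligned_size, cpu_addr, dma_handle,
                           dma_attrs);
    }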
Nicolin Chen
31ac769454 gpu: nvgpu: Remove device_is_iommuable
The downstream device_is_iommuable() is removed.
Check the dev->archdata.iommu pointer instead.

Bug 200385990

Change-Id: I1fe400beddc8b4f2262368b5e0e8726abca007a6
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030334
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Ashish Mhetre <amhetre@nvidia.com>
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-03-07 14:24:36 -08:00
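
The check being described is roughly the one-liner below (a sketch; the wrapper name is assumed). Note that the newest commit at the top of this log later replaces this field access, since dev->archdata.iommu is gone in k5.9.

    #include <linux/device.h>

    /* Pre-k5.9 sketch: a non-NULL archdata.iommu pointer meant the
     * device was attached to an IOMMU.
     */
    static bool sketch_is_iommuable(struct device *dev)
    {
            return dev->archdata.iommu != NULL;
    }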
Alex Waterman
ad838b6c09 gpu: nvgpu: Add error prints for DMA failures
Add error prints for DMA alloc failures so that, when a DMA alloc
fails, the failure is clear. Without this it's hard to know what
exactly is causing any given -ENOMEM or what the specifics of that
-ENOMEM case are.

Change-Id: Ia535895ae07bc1704edaed564edbb6f6dfbf6518
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1976441
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-01 11:59:04 -08:00
Alex Waterman
3282d0c50a gpu: nvgpu: Use stack buffer for DMA flags string
Instead of using kmalloc() to get a buffer for storing the
computed flags string during DMA debugging, use a stack
buffer. This removes the need for a kmalloc() call; the
problem with kmalloc() is that if a dma_alloc() fails due
to being out of memory, the kmalloc() will likely fail too.

Also simplify the logic, now that there's no need to do
any error checking for a kmalloc() call.

Change-Id: I45c1fd16658212188a1206a2edf17b28f3c06c9e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1976440
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-01 11:59:00 -08:00
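
A sketch of the change with an invented buffer size and only the flag names that appear elsewhere in this log (the real routine handles more flags): the string is built in an on-stack array, so printing the debug info can never itself fail for lack of memory.

    #include <linux/kernel.h>

    /* Illustrative only; nvgpu_err() and the NVGPU_DMA_* flags are the
     * driver's, the helper itself is not.
     */
    static void sketch_print_dma_flags(struct gk20a *g, unsigned long flags)
    {
            char buf[128];   /* stack buffer, no kmalloc() needed */

            (void) snprintf(buf, sizeof(buf), "%s%s",
                            (flags & NVGPU_DMA_NO_KERNEL_MAPPING) != 0UL ?
                                    "NO_KERNEL_MAPPING " : "",
                            (flags & NVGPU_DMA_PHYSICALLY_ADDRESSED) != 0UL ?
                                    "PHYSICALLY_ADDRESSED " : "");
            nvgpu_err(g, "DMA alloc failed, flags: %s", buf);
    }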
Alex Waterman
f766c6af91 gpu: nvgpu: Make "phys" nvgpu_mem impl
Make a physical nvgpu_mem implementation in the common code. This
implementation assumes a single, contiguous physical range. GMMU
mappability is provided by building a one-entry SGT.

Since this is now "common" code, the original Linux code has been
moved to common/mm/nvgpu_mem.c.

Also drop the '__' prefix from the nvgpu_mem function; it is not
necessary, as this function, although somewhat tricky, is expected
to be used by arbitrary users within the nvgpu driver.

JIRA NVGPU-1029
Bug 2441531

Change-Id: I42313e5c664df3cd94933cc63ff0528326628683
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1995866
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-30 16:44:06 -08:00
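
A sketch of the one-entry idea using the Linux scatterlist API as a stand-in for nvgpu's own SGT abstraction (so it illustrates the concept, not the actual common-code path): one contiguous physical range becomes a table with exactly one entry, which is all the GMMU mapping code needs to walk.

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/pfn.h>
    #include <linux/scatterlist.h>

    /* Wrap a single contiguous physical range in a one-entry table. */
    static int build_one_entry_sgt(struct sg_table *sgt, phys_addr_t pa,
                                   unsigned int size)
    {
            int err = sg_alloc_table(sgt, 1, GFP_KERNEL);

            if (err != 0)
                    return err;

            sg_set_page(sgt->sgl, pfn_to_page(PFN_DOWN(pa)), size, 0);
            return 0;
    }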
Ketan Patil
4af6d70713 gpu: nvgpu: Clean up dma_attrs handling code
The dma_attrs type changed from a struct to an unsigned long after
kernel 4.4. Remove all of the remaining struct-based dma_attrs handling.

Bug 2485656

Change-Id: I07052df763d9d77b0be824a9303da2240d17c701
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2002701
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-28 20:27:57 -08:00
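
A before/after sketch of the API change being cleaned up here (the chosen attribute is just an example): older kernels built a struct dma_attrs with dma_set_attr(), whereas newer kernels pass a plain unsigned long bitmask of DMA_ATTR_* values.

    #include <linux/dma-mapping.h>

    /* Old style (pre-change):
     *     DEFINE_DMA_ATTRS(attrs);
     *     dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);
     *     cpu_va = dma_alloc_attrs(dev, size, &iova, GFP_KERNEL, &attrs);
     *
     * New style: attrs is just a bitmask.
     */
    static void *sketch_alloc_no_mapping(struct device *dev, size_t size,
                                         dma_addr_t *iova)
    {
            unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING;

            return dma_alloc_attrs(dev, size, iova, GFP_KERNEL, attrs);
    }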
Nicolin Chen
7b19a825bd gpu: nvgpu: Remove DMA_NO_KERNEL_MAPPING WAR for coherent chips
The commit 3fdd8e38b2 ("gpu: nvgpu: Use our own vmap() for
coherent DMA buffers") added an NVGPU_DMA_NO_KERNEL_MAPPING
flag for coherent chips to work around a memory mapping bug
suspected to come from the DMA API.

However, this requires the ARM64 dma-mapping code to support
the legacy DMA_NO_KERNEL_MAPPING attribute for DMA allocation,
which is unlikely to be upstreamed and is not sustainable
long-term. So the plan is to remove this flag from the ARM64
side.

The results of 3D benchmarks and GVS sanity tests show that
the system has no stability regressions, and no mapping issue
has been observed after removing this WAR. If the GPU code
encounters a mapping issue in the future, we should fix it on
the general DMA API side instead.

Bug 2424160

Change-Id: Ice91f2b2c924beb2f83762cb02efbd53fe7df1c0
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2001294
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-28 12:43:47 -08:00
Alex Waterman
0645492bae gpu: nvgpu: Add PHYSICALLY_ADDRESSED flag to Linux DMA debug string
Add this flag name to the DMA debug string that is used for
sizing the buffer used to print DMA debugging info. This was
missed when the new DMA flag was added.

Change-Id: I2d97f8532f512811f7804e03fff2dbaabe8479a7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1971677
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-13 16:36:13 -08:00
Thomas Fleury
2b762363ac gpu: nvgpu: flag for physically addressed buffers
Some buffers, like userd, are physically addressed. If nvlink is
enabled, or the device is not iommuable, this requires the buffer
to be physically contiguous.

Add NVGPU_DMA_PHYSICALLY_ADDRESSED to identify such buffers, so
that a physically contiguous allocation is forced only in the
above cases.

Bug 2422486

Change-Id: I6426e23b064904e812e6b33e6d706391648a51ae
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959034
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-27 21:42:57 -08:00
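
A sketch of the decision the new flag drives (the helper and its inputs are assumptions, not nvgpu code): contiguity only has to be forced when the GPU will consume raw physical addresses, i.e. when nvlink is enabled or the device is not iommuable.

    #include <linux/types.h>

    /* Illustrative decision helper. */
    static bool sketch_need_contiguous(bool physically_addressed,
                                       bool nvlink_enabled, bool iommuable)
    {
            if (!physically_addressed)
                    return false;

            /* Without an IOMMU (or with nvlink bypassing it), scattered
             * pages cannot be made to look contiguous to the GPU.
             */
            return nvlink_enabled || !iommuable;
    }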
Amurthyreddy
710aab6ba4 gpu: nvgpu: MISRA 14.4 boolean fixes
MISRA rule 14.4 doesn't allow the use of a non-boolean variable as a
boolean in the controlling expression of an if statement or an
iteration statement.

Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.

JIRA NVGPU-1022

Change-Id: I957f8ca1fa0eb00928c476960da1e6e420781c09
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1941002
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-07 10:35:13 -08:00
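
A hypothetical instance of the Rule 14.4 fix (not nvgpu code): pointers and integers in controlling expressions are compared explicitly instead of being used as booleans.

    #include <stddef.h>

    /* Before: 'if (buf)' and 'while (count)' use non-boolean expressions. */
    static void sketch_drain(int *buf, int count)
    {
            if (buf != NULL) {              /* explicit pointer comparison */
                    while (count != 0) {    /* explicit integer comparison */
                            buf[--count] = 0;
                    }
            }
    }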
Konsta Holtta
9de6d20abb gpu: nvgpu: add FOREIGN_SGT mem flag
Add an internal flag NVGPU_MEM_FLAG_FOREIGN_SGT to specify that the sgt
member of an nvgpu_mem must not be freed when the nvgpu_mem is freed.

Bug 200145225

Change-Id: I044fb91a5f9d148f38fb0cbf63d0cdfd64a070ce
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1819801
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-29 08:04:34 -07:00
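
A sketch of where such a flag is typically consulted (the free routine, the mem_flags field, and nvgpu_sgt_free() are assumptions here; only the flag name comes from the commit): an sgt owned by someone else is left untouched when the nvgpu_mem is torn down.

    /* Illustrative teardown path, not the actual nvgpu free routine. */
    static void sketch_mem_teardown(struct gk20a *g, struct nvgpu_mem *mem)
    {
            if (mem->sgt != NULL &&
                (mem->mem_flags & NVGPU_MEM_FLAG_FOREIGN_SGT) == 0U) {
                    nvgpu_sgt_free(g, mem->sgt);   /* assumed helper */
            }
            mem->sgt = NULL;   /* the foreign owner frees it, not us */
    }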
Debarshi Dutta
7e1dbd8303 gpu: nvgpu: move header location of gk20a.h
1) Update the header path for gk20a.h in files under os/
to <nvgpu/gk20a.h>.

2) os_fence_android_sema.c was indirectly dependent on gk20a.h via
semaphore.h. So, add #include <nvgpu/gk20a.h> to
os_fence_android_sema.c and replace the header in semaphore.h with
a forward declaration of struct gk20a.

Jira NVGPU-597

Change-Id: I96e23befeb80713f3a399071eb5498f6f580211d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1842868
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-25 13:10:19 -07:00
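
A sketch of the forward-declaration pattern from point 2 (the prototype is invented): semaphore.h only needs the name of the gk20a type, while the .c file that actually dereferences it includes the full header explicitly.

    /* In semaphore.h: no #include <nvgpu/gk20a.h>, just a forward decl. */
    struct gk20a;
    struct nvgpu_semaphore;

    struct nvgpu_semaphore *nvgpu_semaphore_alloc(struct gk20a *g); /* assumed */

    /* In os_fence_android_sema.c: the full definition is needed, so the
     * header is included directly instead of being inherited via semaphore.h.
     */
    #include <nvgpu/gk20a.h>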
Alex Waterman
b44c7fdb11 gpu: nvgpu: Move common DMA code to common/mm
This migrates the common (OS-agnostic) DMA code to the
common directory. This new unit will be the common DMA
allocator that lets users allocate SYSMEM, VIDMEM, or
either. Other units will be responsible for actually
handling the mechanics of allocating VIDMEM or SYSMEM.

Also update the names of the DMA-related files so that
tmake doesn't complain about duplicate C file names. To
do this, call the common DMA file dma.c and prepend the
OS name to the other DMA files. So now we have:

  common/mm/dma.c
  os/posix/posix-dma.c
  os/linux/linux-dma.c

JIRA NVGPU-990

Change-Id: I22d2d41803ad89be7d9c28f87864ce4fedf10836
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1799807
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-05 20:38:42 -07:00