Commit Graph

21 Commits

Alex Waterman
7a3dbdd43f gpu: nvgpu: Add for_each construct for nvgpu_sgts
Add a macro for iterating across nvgpu_sgts. This makes iteration easier
for developers, who might otherwise forget to advance to the next SGL.
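
A minimal sketch of what such an iteration helper might look like (the
macro and the accessor it calls are illustrative assumptions, not
necessarily the exact nvgpu API):

  /*
   * Walk every SGL entry attached to an nvgpu_sgt; the next-accessor is
   * assumed to return NULL at the end of the list.
   */
  #define nvgpu_sgt_for_each_sgl(sgl, sgt)                      \
          for ((sgl) = (sgt)->sgl;                              \
               (sgl) != NULL;                                   \
               (sgl) = nvgpu_sgt_get_next((sgt), (sgl)))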

JIRA NVGPU-243

Change-Id: I90154a5d23f0014cb79bbcd5b6e8d8dbda303820
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566627
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 02:29:53 -07:00
Alex Waterman
84f2356b13 gpu: nvgpu: Remove sg_phys() from GMMU code
Remove the last sg_phys() call from the GMMU code and replace it
with a generic nvgpu_mem API. This new API, nvgpu_mem_get_phys_addr(),
returns the physical address of an nvgpu_mem struct.

Also, implement this new API in the Linux-specific nvgpu_mem code,
since it requires access to the underlying SGT/SGL.
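
A hedged sketch of what the Linux-side implementation might look like
(the struct layout shown is an assumption for illustration, not the
confirmed nvgpu internals):

  /* Return the physical address backing an nvgpu_mem (Linux backend). */
  u64 nvgpu_mem_get_phys_addr(struct gk20a *g, struct nvgpu_mem *mem)
  {
          /* Assumed: the Linux priv data carries the sg_table and the
           * first scatterlist entry describes the start of the buffer. */
          return sg_phys(mem->priv.sgt->sgl);
  }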

JIRA NVGPU-68

Change-Id: Idf88701a2a8515464c658c26e0de493c82ff850d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542964
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 02:19:06 -07:00
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change the license of OS-independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
Sunny He
17c581d755 gpu: nvgpu: SGL passthrough implementation
The basic nvgpu_mem_sgl implementation provides support
for OS-specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.

This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any scatter-
gather list implementation. This labels every struct
with a set of ops which can be used to interact with
the attached scatter-gather list.

Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
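
A rough sketch of the header-plus-ops arrangement described above (struct
and callback names are assumptions made for illustration):

  struct nvgpu_sgt_ops {
          void *(*sgl_next)(void *sgl);
          u64   (*sgl_phys)(void *sgl);
          u64   (*sgl_dma)(void *sgl);
          u64   (*sgl_length)(void *sgl);
          void  (*sgt_free)(struct gk20a *g, struct nvgpu_sgt *sgt);
  };

  struct nvgpu_sgt {
          const struct nvgpu_sgt_ops *ops;  /* backend-specific callbacks */
          void *sgl;                        /* opaque OS scatter-gather list */
  };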

Jira NVGPU-186

Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:55:24 -07:00
Alex Waterman
0090ee5aca gpu: nvgpu: nvgpu SGL implementation
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scatter-gather tables and scatter-gather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and are also overloaded to represent
VIDMEM allocs.

The notion of a scatter-gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has an MMU which
allows it to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense
for the GPU driver to use some sort of scatter-gather concept
to maximize memory usage efficiency.

To that end this patch keeps the notion of a scatter-gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks - each representing a chunk of memory. To map or
use a DMA allocation, SW must iterate over each block in the
SGL.

This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is simply to divest the core MM code of
any last Linux-isms. Speed and efficiency come next.
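
A minimal sketch of the kind of list node the text describes (field names
are illustrative assumptions rather than the exact nvgpu definitions):

  /* One physically contiguous chunk of a DMA allocation. */
  struct nvgpu_mem_sgl {
          struct nvgpu_mem_sgl *next;   /* next chunk, NULL at end of list */
          u64 phys;                     /* physical address of this chunk  */
          u64 dma;                      /* IOMMU/DMA address, if mapped    */
          u64 length;                   /* size of this chunk in bytes     */
  };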

Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:52:48 -07:00
Debarshi Dutta
81868a187f gpu: nvgpu: Nvgpu abstraction for linux barriers.
Construct wrapper nvgpu_* methods to replace mb, rmb, wmb, smp_mb,
smp_rmb, smp_wmb, read_barrier_depends and smp_read_barrier_depends.
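
A hedged sketch of what the Linux side of such wrappers might look like
(simple pass-throughs to the kernel barrier primitives; the nvgpu_* names
follow the pattern described above and are assumptions):

  static inline void nvgpu_mb(void)      { mb(); }
  static inline void nvgpu_rmb(void)     { rmb(); }
  static inline void nvgpu_wmb(void)     { wmb(); }
  static inline void nvgpu_smp_mb(void)  { smp_mb(); }
  static inline void nvgpu_smp_rmb(void) { smp_rmb(); }
  static inline void nvgpu_smp_wmb(void) { smp_wmb(); }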

NVGPU-122

Change-Id: I8d24dd70fef5cb0fadaacc15f3ab11531667a0df
Signed-off-by: Debarshi <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541199
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-08-22 03:53:51 -07:00
Alex Waterman
df64963847 gpu: nvgpu: Fix length passed to VIDMEM map
The call to __set_pd_level() for vidmem allocs was passed the wrong
length. This was a silent error since the subsequent
__set_pd_level() calls overwrote the bad mappings. However, this
caused significantly more PDE/PTE writes than necessary since
each chunk could be mapped N times, where N is the number of chunks
in an SGL.

Change-Id: Ied7247b70825dc91b9eea1c3350f4ef370ab1a52
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1537078
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-08-14 01:00:24 -07:00
Alex Waterman
1da69dd8b2 gpu: nvgpu: Remove mm.get_iova_addr
Remove the mm.get_iova_addr() HAL and replace it with a new HAL
called mm.gpu_phys_addr(). This new HAL provides the real phys
address that should be passed to the GPU from a physical address
obtained from a scatter list. It also provides a mechanism by
which the HAL code can add extra bits to a GPU physical address
based on the attributes passed in. This is necessary during GMMU
page table programming.

Also remove the flags argument from the various address functions.
It was only used to add an IO coherence bit to the GPU
physical address, which is not supported.
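
An illustrative sketch of the kind of HAL hook described above (the exact
signature is an assumption, not necessarily the real one):

  /*
   * Translate a CPU physical address into the address the GPU should be
   * programmed with, adding any attribute-dependent bits along the way.
   */
  u64 (*gpu_phys_addr)(struct gk20a *g,
                       struct nvgpu_gmmu_attrs *attrs,
                       u64 phys);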

JIRA NVGPU-30

Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530864
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-08-04 14:54:32 -07:00
Peter Daifuku
c16797e35c gpu: nvgpu: fix warnings for GPUs with real vidmem
Fix kernel warnings for GPUs with real vidmem:

- dma.c: in nvgpu_dma_alloc_flags, ignore incoming flags when using vidmem,
  since anything but NVGPU_DMA_NO_KERNEL_MAPPING will end up generating
  kernel warnings, and the vidmem mapping functions ignore the other flags
  anyway.

- gmmu.c: in __nvgpu_gmmu_update_page_table, use the appropriate function
  for the memory type to retrieve the physical address

Bug 1967748

Change-Id: I6fc01fd5f2c5cd7b81cba70ab59cc3c8fe4cda19
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530877
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-08-03 08:45:00 -07:00
Alex Waterman
90d388ebf8 gpu: nvgpu: Add get/set PTE routines
Add new routines for accessing and modifying PTEs in situ. They are:

  __nvgpu_pte_words()
  __nvgpu_get_pte()
  __nvgpu_set_pte()

All the details of modifying a page table entry are handled within.

Note, however, that these routines will not build page tables. If a PTE
does not exist, it will not be created; instead -EINVAL will be
returned. Keep in mind, though, that a PTE marked as invalid still exists,
so this API can be used to mark an invalid PTE valid.
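
A hedged usage sketch (the PTE word count, valid-bit position, and return
convention shown here are assumptions purely for illustration):

  /* Mark an existing - but currently invalid - PTE as valid in place. */
  u32 pte[2];                          /* assumed: __nvgpu_pte_words(g) <= 2 */
  const u32 pte_valid_bit = BIT(0);    /* assumed position of the valid bit */

  if (__nvgpu_get_pte(g, vm, pte, vaddr) == 0) {
          pte[0] |= pte_valid_bit;
          __nvgpu_set_pte(g, vm, pte, vaddr);
  }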

JIRA NVGPU-30

Change-Id: Ic8615f209a0c4eb6fa64af9abadcfb3b2c11ee73
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1510447
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-07-12 07:44:47 -07:00
Alex Waterman
57abaabb76 gpu: nvgpu: Cleanup GMMU debug printing
Ensure that all debug prints are consistent from chip to chip
and function to function. The following maps letters in the
debug print to their meaning:

  C  Mapping is cacheable
  v  Mapping is volatile
  S  Mapping is sparse
  P  Mapping is private (VPR/WPR)
  c  Mapping is coherent
  V  Mapping is valid

JIRA NVGPU-30

Change-Id: Ia890af88677c3e6d3fdd8c4fe266158c35b8afcd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1514903
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Tested-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-07-07 07:05:40 -07:00
Alex Waterman
6065b8c3ac gpu: nvgpu: Add t19x GMMU attributes
Add t19x-specific flags to the GMMU attributes struct.

Jira GPUT19X-10
Bug 200279508

Change-Id: Ib45b83705fa1ca4ff6d14da0a2f132050e7d2cd5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1514876
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Tested-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-07-07 07:05:39 -07:00
Deepak Nibade
40c19c67d0 gpu: nvgpu: support platform specific physical address translation
On some GPUs certain physical address bits have special meaning. This
patch adds support for setting those bits based on the GMMU attributes
struct.

Jira GPUT19X-10
Bug 200279508

Change-Id: I32b8a028be7fd62af06a60c393a8c9251de0ef3c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master/r/1512600
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-07-07 07:05:39 -07:00
Deepak Nibade
d479a781c6 gpu: nvgpu: use coherent aperture for coherent buffers
Use the sysmem_coherent aperture if the buffer mappings are requested
to be IO coherent, and the sysmem_noncoherent aperture otherwise. This
is implemented by adding a new coherent field to the GMMU attrs
struct.
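
An illustrative sketch of the selection this enables (the field and the
aperture identifiers are assumptions for illustration only):

  /* Pick the system memory aperture based on the coherency attribute. */
  u32 aperture = attrs->coherent ? APERTURE_SYSMEM_COH
                                 : APERTURE_SYSMEM_NONCOH;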

Jira GPUT19X-17
Bug 1651331
Bug 200283998

Change-Id: I5cfb71b5913d4db50ebf10331b19f5a4216456bf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master/r/1514438
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-07-07 07:05:38 -07:00
Alex Waterman
583704620d gpu: nvgpu: Implement PD packing
In some cases page directories require less than a full page of memory.
For example, on Pascal, the final PD level for large pages is only 256 bytes;
thus 16 PDs can fit in a single page. Allocating an entire page for each of
these 256 B PDs is extremely wasteful. This patch alleviates that waste by
packing multiple small PDs into a single page.

The packing is implemented as a slab allocator - each page is a slab and
from each page multiple PD instances can be allocated. Several modifications
to the nvgpu_gmmu_pd struct also needed to be made to support this. The
nvgpu_mem is now a pointer and there's an explicit offset into the nvgpu_mem
struct so that each nvgpu_gmmu_pd knows what portion of the memory it's
using.
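
A rough sketch of the per-PD bookkeeping this implies (field names are
assumptions for illustration, not the confirmed struct definition):

  struct nvgpu_gmmu_pd {
          struct nvgpu_mem *mem;   /* backing DMA memory, possibly shared */
          u32 mem_offs;            /* byte offset of this PD within *mem  */
          bool cached;             /* allocated from the pd_cache slab?   */
          /* ... lower-level PD pointers, entry counts, etc. ... */
  };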

The nvgpu_pde_phys_addr() function and the pd_write() functions also require
some changes since the PD no longer is always situated at the start of the
nvgpu_mem.

Initialization and cleanup of the page tables for each VM were slightly
modified to work through the new pd_cache implementation. Some PDs (i.e.
the PDB), despite not being a full page, still require a full page for
alignment purposes (HW requirements). Thus a direct allocation method for
PDs is still provided. This is also used when a PD that could in principle
be cached is greater than a page in size.

Lastly a new debug flag was added for the pd_cache code.

JIRA NVGPU-30

Change-Id: I64c8037fc356783c1ef203cc143c4d71bbd5d77c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1506610
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-07-06 14:44:16 -07:00
Alex Waterman
c1393d5b68 gpu: nvgpu: gmmu programming rewrite
Update the high-level mapping logic. Instead of iterating over the
GPU VA, iterate over the scatter-gather table chunks. As a result,
each GMMU page table update call is simplified dramatically.
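
An illustrative sketch of the rewritten loop's shape (the helper call and
its parameters are assumptions; the real code differs in detail):

  struct scatterlist *sgl;
  unsigned int i;

  /* Walk the physically contiguous chunks and map each one in turn. */
  for_each_sg(sgt->sgl, sgl, sgt->nents, i) {
          u64 phys = sg_phys(sgl);
          u64 len  = sgl->length;

          __set_pd_level(vm, pd, 0, phys, virt_addr, len, attrs);
          virt_addr += len;
  }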

This also modifies the chip level code to no longer require an SGL
as an argument. Each call to the chip level code is guaranteed to
cover a contiguous range, so it only has to worry about making a
mapping from virt -> phys.

This removes the dependency on Linux that the chip code currently
has. With this patch the core GMMU code still uses the Linux SGL, but
the logic is highly transferable to a different, nvgpu-specific,
scatter-gather list format in the near future.

The last major update is to push most of the page table attribute
arguments into a struct. That struct is passed on through the various
mapping levels. This makes the function calls simpler and
easier to follow.

JIRA NVGPU-30

Change-Id: Ibb6b11755f99818fe642622ca0bd4cbed054f602
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1484104
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-07-06 14:44:15 -07:00
Alex Waterman
cadd5120d3 gpu: nvgpu: Remove fmodel GMMU allocation
Remove the special cases for fmodel in the GMMU allocation code. There
is no reason to treat fmodel any differently than regular DMA memory.

If there is no IOMMU, the DMA API will handle that perfectly well.

JIRA NVGPU-30

Change-Id: Icceb832735a98b601b9f41064dd73a6edee29002
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master/r/1507562
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-06-27 13:15:58 -07:00
Alex Waterman
048c6b062a gpu: nvgpu: Separate GMMU mapping impl from mm_gk20a.c
Separate the non-chip-specific GMMU mapping implementation code
out of mm_gk20a.c. This puts all of the chip-agnostic code into
common/mm/gmmu.c in preparation for rewriting it.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I6f7fdac3422703f5e80bb22ad304dc27bba4814d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1480228
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-06-06 17:09:22 -07:00
Alex Waterman
66a2511a36 gpu: nvgpu: Begin removing variables in struct gk20a
Begin removing all of the myriad flag variables in struct gk20a and
replace them with one API that checks for flags being enabled or
disabled. The API is as follows:

  bool nvgpu_is_enabled(struct gk20a *g, int flag);
  bool __nvgpu_set_enabled(struct gk20a *g, int flag, bool state);

These APIs allow many of the gk20a flags to be replaced by defines.
This makes flag usage consistent and saves a small amount of memory in
struct gk20a. Also, it makes struct gk20a easier to read since there's
less clutter scattered throughout.
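
A hedged usage sketch (the flag constant and the helper called here are
hypothetical, named only to show the pattern):

  /* Record that this chip supports some optional feature... */
  __nvgpu_set_enabled(g, NVGPU_SUPPORT_SOME_FEATURE, true);

  /* ...then gate the relevant code path on it. */
  if (nvgpu_is_enabled(g, NVGPU_SUPPORT_SOME_FEATURE))
          use_some_feature(g);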

JIRA NVGPU-84

Change-Id: I6525cecbe97c4e8379e5f53e29ef0b4dbd1a7fc2
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1488049
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-30 13:24:35 -07:00
Alex Waterman
fbafc7eba4 gpu: nvgpu: Refactor VM init/cleanup
Refactor the API for initializing and cleaning up VMs.

This also involved moving a bunch of GMMU code out into the
gmmu code since part of initializing a VM involves initializing
the page tables for the VM.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I4710f08c26a6e39806f0762a35f6db5c94b64c50
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477746
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-05-26 03:33:57 -07:00
Alex Waterman
c3fa78b1d9 gpu: nvgpu: Separate GMMU out of mm_gk20a.c
Begin moving (and renaming) the GMMU code into common/mm/gmmu.c. This
block of code will be responsible for handling the platform/OS-independent
GMMU operations.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: Ide761bab75e5d84be3dcb977c4842ae4b3a7c1b3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464083
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-11 06:04:12 -07:00