Commit Graph

67 Commits

Author SHA1 Message Date
Alex Waterman
0c5d0c6a9e gpu: nvgpu: Begin reorganizing VM mapping/unmapping
Move vm_priv.h to <nvgpu/linux/vm.h> and rename nvgpu_vm_map()
to nvgpu_vm_map_linux(). Also remove a redundant unmap function
from the unmap path. These changes are the beginning of reworking
the nvgpu Linux mapping and unmapping code.

The rest of this patch is just the necessary changes to use the
new map function naming and the new path to the Linux vm header.

Patch Series Goal
-----------------

There are two major goals for this patch series. Note that these
goals are not achieved in this patch; they will be completed in
subsequent patches.

  1.  Remove all last vestiges of Linux code from common/mm/vm.c
  2.  Implement map caching in the common/mm/vm.c code

To accomplish this, the struct nvgpu_mapped_buf data struct first
needs to be made completely Linux-free. That means implementing an
abstraction to hold the Linux-specific state that mapped buffers
carry around (SGT, dma_buf). This is why the vm_priv.h code has
been moved: it will need to be included by the <nvgpu/vm.h> header
so that the OS-specific struct can be pulled into
struct nvgpu_mapped_buf.

Next, renaming nvgpu_vm_map() to nvgpu_vm_map_linux() is in
preparation for adding a new nvgpu_vm_map() that handles the
map caching with nvgpu_mapped_buf. The mapping code is fairly
straightforward: nvgpu_vm_map() does the OS-generic work; each OS
then calls this function from an nvgpu_vm_map_<OS>() or the like
that does any OS-specific adjustments/management.
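
As a rough sketch of that layering (the parameter lists and the
nvgpu_linux_sgt_from_dmabuf() helper below are illustrative
assumptions, not the actual signatures):

  u64 nvgpu_vm_map_linux(struct vm_gk20a *vm, struct dma_buf *dmabuf,
                         u64 map_addr, u32 flags)
  {
          /* Linux-specific work: pin the dma_buf, build the SGT, ... */
          struct nvgpu_sgt *sgt = nvgpu_linux_sgt_from_dmabuf(vm, dmabuf);

          /* ...then hand off to the OS-agnostic mapper; the future
           * common nvgpu_vm_map() is also where map caching would go. */
          return nvgpu_vm_map(vm, sgt, map_addr, flags);
  }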

Freeing buffers is much trickier, however. The maps are all
reference counted, since userspace does not track buffers and
expects us to handle this instead. Ugh! Because of the ref-counts,
the free code will require a callback into the OS-specific code,
since the OS-specific code cannot free a buffer directly (a
reference may still be held elsewhere). This makes the path for
freeing a buffer quite convoluted.
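
One plausible shape for that free path (every name below is a
hypothetical illustration, not the driver's actual API):

  static void mapped_buf_release(struct nvgpu_ref *ref)
  {
          struct nvgpu_mapped_buf *buf =
                  container_of(ref, struct nvgpu_mapped_buf, ref);

          /* Common teardown first... */
          nvgpu_vm_do_unmap(buf);

          /* ...then call back into the OS layer, which alone knows
           * how to release the dma_buf/SGT attached to this buffer. */
          nvgpu_vm_unmap_os(buf);
  }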

JIRA NVGPU-30
JIRA NVGPU-71

Change-Id: I5e0975f60663a0d6cf0a6bd90e099f51e02c2395
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1578896
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-10-24 15:16:50 -07:00
Peter Daifuku
57fb527a7e gpu: nvgpu: vgpu: flatten out vgpu hal
Instead of calling the native HAL init function then adding
multiple layers of modification for VGPU, flatten out the sequence
so that all entry points are set statically and visible in a
single file.
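
For illustration, the flattened style amounts to one statically
initialized ops table (the struct and entry names here are
illustrative, not the actual vgpu code):

  static const struct gpu_ops vgpu_ops = {
          .mm.vm_bind_channel = vgpu_vm_bind_channel,
          .fifo.preempt_channel = vgpu_fifo_preempt_channel,
          /* ...every other entry point, set statically in one file */
  };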

JIRA ESRM-30

Change-Id: Ie424abb48bce5038874851d399baac5e4bb7d27c
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574616
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:20:18 -07:00
Alex Waterman
edb1166613 gpu: nvgpu: rename ops.mm.get_physical_addr_bits
Rename get_physical_addr_bits and related functions to something
that more clearly conveys what they are doing. The basic idea of
these functions is to translate from a physical GPU address to an
IOMMU GPU address. To do that, a particular bit (that varies from
chip to chip) is added to the physical address.
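
In other words, roughly (the bit position shown is only an example;
the real bit varies from chip to chip):

  static inline u64 gpu_phys_to_iommu_addr(u64 phys)
  {
          return phys | (1ULL << 34);  /* chip-specific IOMMU bit */
  }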

JIRA NVGPU-68

Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542966
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 02:21:47 -07:00
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change license of OS independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
Sunny He
17c581d755 gpu: nvgpu: SGL passthrough implementation
The basic nvgpu_mem_sgl implementation provides support
for OS specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.

This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any scatter-
gather list implementation. This labels every struct
with a set of ops which can be used to interact with
the attached scatter gather list.

Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
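
A hedged sketch of the header-plus-ops idea (field and op names are
illustrative assumptions):

  struct nvgpu_sgt_ops {
          void *(*sgl_next)(void *sgl);
          u64 (*sgl_phys)(void *sgl);
          u64 (*sgl_length)(void *sgl);
  };

  struct nvgpu_mem_sgt {
          const struct nvgpu_sgt_ops *ops;  /* how to walk the list */
          void *sgl;                        /* attached OS-specific SGL */
  };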

Jira NVGPU-186

Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:55:24 -07:00
Alex Waterman
0090ee5aca gpu: nvgpu: nvgpu SGL implementation
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scatter-gather tables and scatter-gather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and are also overloaded to represent
VIDMEM allocs.

The notion of a scatter-gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has an MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense
for the GPU driver to use some sort of scatter-gather concept
to maximize memory usage efficiency.

To that end this patch keeps the notion of a scatter-gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks, each representing a chunk of memory. To map or
use a DMA allocation, SW must iterate over each block in the
SGL.
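
A minimal sketch of such a list and the per-block iteration (the
field names are illustrative):

  struct nvgpu_mem_sgl {
          struct nvgpu_mem_sgl *next;  /* NULL-terminated chain */
          u64 phys;                    /* physical address of chunk */
          u64 length;                  /* chunk size in bytes */
  };

  static u64 nvgpu_sgl_total_length(struct nvgpu_mem_sgl *sgl)
  {
          u64 total = 0;

          for (; sgl; sgl = sgl->next)
                  total += sgl->length;
          return total;
  }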

This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is simply to divest the core MM code of
any last Linuxisms. Speed and efficiency come next.

Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:52:48 -07:00
Alex Waterman
1da69dd8b2 gpu: nvgpu: Remove mm.get_iova_addr
Remove the mm.get_iova_addr() HAL and replace it with a new HAL
called mm.gpu_phys_addr(). This new HAL provides the real phys
address that should be passed to the GPU from a physical address
obtained from a scatter list. It also provides a mechanism by
which the HAL code can add extra bits to a GPU physical address
based on the attributes passed in. This is necessary during GMMU
page table programming.

Also remove the flags argument from the various address functions.
This flag was used for adding an IO coherence bit, which is not
supported, to the GPU physical address.

JIRA NVGPU-30

Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530864
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-08-04 14:54:32 -07:00
Alex Waterman
c21f5bca9a gpu: nvgpu: Remove extraneous VM init/deinit APIs
Support only VM pointers and ref-counting for maintaining VMs. This
dramatically reduces the complexity of the APIs, avoids the API
abuse that has existed, and ensures that future VM usage is
consistent with current usage.

Also remove the combined VM free/instance block deletion. Any place
where this was done is now replaced with an explicit free of the
instance block and an nvgpu_vm_put().

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: Ib73e8d574ecc9abf6dad0b40a2c5795d6396cc8c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1480227
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-06-06 17:09:16 -07:00
Alex Waterman
c2b63150cd gpu: nvgpu: Unify vm_init for vGPU and regular GPU
Unify the initialization routines for the vGPU and regular GPU paths.
This helps avoid any further code divergence. This also assumes that
the code running on the regular GPU essentially works for the vGPU.
The only addition is that the regular GPU path calls an API in the
vGPU code that sends the necessary RM server message.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I37af1993fd8b50f666ae27524d382cce49cf28f7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1480226
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-06-06 17:09:11 -07:00
Stephen Warren
2e338c77ea gpu: nvgpu: remove duplicate \n from log messages
nvgpu_log/info/warn/err() internally add a \n to the end of the message.
Hence, callers should not include a \n at the end of the message. Doing
so results in duplicate \n being printed, which ends up creating empty
log messages. Remove the duplicate \n from all err/warn messages.
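
For example:

  nvgpu_err(g, "map failed\n");  /* bad: prints an extra empty line */
  nvgpu_err(g, "map failed");    /* good: the \n is added internally */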

Bug 1928311

Change-Id: I99362c5327f36146f28ba63d4e68181589735c39
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-on: http://git-master/r/1487232
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-26 03:34:30 -07:00
Alex Waterman
0bb47c3675 gpu: nvgpu: Add and use VM init/deinit APIs
Remove the VM init/de-init from the HAL and instead use a single
set of routines that init/de-init VMs. This prevents code divergence
between vGPUs and regular GPUs.

This patch also clears up the naming of the routines a little bit.
Since some VMs are used in place and others are dynamically
allocated, the APIs for freeing them were confusing. Also, some free
calls clean up an instance block (this is API abuse, but this is how
it currently exists).

The new API looks like this:

void __nvgpu_vm_remove(struct vm_gk20a *vm);
void nvgpu_vm_remove(struct vm_gk20a *vm);
void nvgpu_vm_remove_inst(struct vm_gk20a *vm,
			  struct nvgpu_mem *inst_block);
void nvgpu_vm_remove_vgpu(struct vm_gk20a *vm);

int nvgpu_init_vm(struct mm_gk20a *mm,
		  struct vm_gk20a *vm,
		  u32 big_page_size,
		  u64 low_hole,
		  u64 kernel_reserved,
		  u64 aperture_size,
		  bool big_pages,
		  bool userspace_managed,
		  char *name);
void nvgpu_deinit_vm(struct vm_gk20a *vm);
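
A hedged usage sketch of the init/deinit pair (the argument values
are illustrative, not taken from the driver):

  static int example_vm_setup(struct mm_gk20a *mm, struct vm_gk20a *vm)
  {
          int err;

          err = nvgpu_init_vm(mm, vm,
                              SZ_64K,      /* big_page_size */
                              SZ_4K,       /* low_hole */
                              SZ_1G,       /* kernel_reserved */
                              1ULL << 37,  /* aperture_size */
                              true,        /* big_pages */
                              false,       /* userspace_managed */
                              "example-vm");
          if (err)
                  return err;

          /* ...use the VM... */

          nvgpu_deinit_vm(vm);
          return 0;
  }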

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: Ia4016384c54746bfbcaa4bdd0d29d03d5d7f7f1b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477747
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-05-26 03:33:57 -07:00
Alex Waterman
fbafc7eba4 gpu: nvgpu: Refactor VM init/cleanup
Refactor the API for initializing and cleaning up VMs.

This also involved moving a bunch of GMMU code out into the
gmmu code since part of initializing a VM involves initializing
the page tables for the VM.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I4710f08c26a6e39806f0762a35f6db5c94b64c50
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477746
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-05-26 03:33:57 -07:00
Alex Waterman
b70bad4b9f gpu: nvgpu: Refactor gk20a_vm_alloc_va()
This function is an internal function to the VM manager that allocates
virtual memory space in the GVA allocator. It is unfortunately used in
the vGPU code, though. In any event, this patch cleans up and moves the
implementation of these functions into the VM common code.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I24a3d29b5fcb12615df27d2ac82891d1bacfe541
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477745
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-24 12:14:13 -07:00
Alex Waterman
29cc82844e gpu: nvgpu: Split vm_area management into vm code
The vm_reserve_va_node struct is essentially a special VM area that
can be used for sparse mappings and fixed mappings. The name of this
struct is somewhat confusing (as "node" is typically used for list
items). Though this struct is part of a list, it doesn't really
make sense to call it a list item, since it is much more than that.
Based on that, the struct has been renamed to nvgpu_vm_area to
capture its actual use more accurately.

This also moves all of the management code of vm areas to a new file
devoted solely to vm_area management.

Also add a brief overview of the VM architecture. This should help
other people follow the hierarchy of ownership and lifetimes in
the rather complex MM code.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: If85e1cf868031d0dc265e7bed50b58a2aed2602e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-19 15:34:12 -07:00
Alex Waterman
014ace5a85 gpu: nvgpu: Split VM implementation out
This patch begins splitting out the VM implementation from mm_gk20a.c and
moves it to common/linux/vm.c and common/mm/vm.c. This split is necessary
because the VM code has two portions: first, an interface for the OS
specific code to use (i.e. userspace mappings), and second, a set of APIs
for the driver to use (init, cleanup, etc) which are not OS specific.

This is only the beginning of the split - there's still a lot of things
that need to be carefully moved around.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I3b57cba245d7daf9e4326a143b9c6217e0f28c96
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1477743
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-19 15:34:06 -07:00
Alex Waterman
d37e8f7dcf gpu: nvgpu: Split VM interface out
This patch begins the major rework of the GPU's virtual memory manager
(VMM). The VMM is the piece of code that handles the userspace interface
to buffers and their mappings into the GMMU. The core data structure is
the VM - for now still known as 'struct vm_gk20a'. Each one of these
structs represents one address space to which channels or TSGs may
bind themselves.

The VMM splits the interface up into two broad categories. First
there are the common, OS-independent interfaces; and second there
are the OS-specific interfaces.

OS independent
--------------

  This is the code that manages the lifetime of VMs and of the
  buffers inside VMs: search, batch mapping, creation, destruction,
  etc.

OS Specific
-----------

  This handles mapping of buffers as they are represented by the OS
  (dma_bufs, for example, on Linux).

This patch is by no means complete. There are still Linux-specific
functions scattered in ostensibly OS-independent code. This is the
first step. A patch that rewrites everything in one go would simply
be too big to effectively review.

Instead the goal of this change is to simply separate out the basic
OS specific and OS agnostic interfaces into their own header files. The
next series of patches will start to pull the relevant implementations
into OS specific C files and common C files.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I242c7206047b6c769296226d855b7e44d5c4bfa8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464939
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-19 15:34:01 -07:00
Terje Bergstrom
a0fa2b0258 gpu: nvgpu: Add wrapper nvgpu/bug.h
Add wrapper header file nvgpu/bug.h. It #includes <linux/bug.h>
in Linux.
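
On Linux the wrapper is likely little more than this (the include
guard name is a guess):

  /* include/nvgpu/bug.h */
  #ifndef __NVGPU_BUG_H__
  #define __NVGPU_BUG_H__

  #include <linux/bug.h>

  #endif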

JIRA NVGPU-13

Change-Id: I7bf02ba554333f7cbd79d72bd1cb423c81ebcb49
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1461545
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-13 08:56:06 -07:00
Terje Bergstrom
5405070ecd gpu: nvgpu: vgpu: Use new error macros
gk20a_err() and gk20a_warn() require a struct device pointer,
which is not portable across operating systems. The new nvgpu_err()
and nvgpu_warn() macros take struct gk20a pointer. Convert code
to use the more portable macros.
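
The conversion is mechanical; a before/after sketch (dev_from_gk20a()
is shown as a plausible device accessor, not necessarily the actual
helper):

  gk20a_err(dev_from_gk20a(g), "mmu fault on ch %d", chid);  /* old */
  nvgpu_err(g, "mmu fault on ch %d", chid);                  /* new */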

JIRA NVGPU-16

Change-Id: I071e8c50959bfa81730ca964d912bc69f9c7e6ad
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1457355
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-10 12:24:27 -07:00
Alex Waterman
8f2d4a3f4a gpu: nvgpu: Move DMA API to dma.h
Make an nvgpu DMA API include file so that the intricacies of the
Linux DMA API can be hidden from the calling code.

Also document the nvgpu DMA API.
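
A hedged sketch of the caller's view (the exact signatures are
assumptions):

  static int example_dma(struct gk20a *g)
  {
          struct nvgpu_mem mem;
          int err;

          err = nvgpu_dma_alloc(g, SZ_4K, &mem);  /* hides dma_alloc_*() */
          if (err)
                  return err;

          nvgpu_dma_free(g, &mem);
          return 0;
  }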

JIRA NVGPU-12

Change-Id: I7578e4c726ad46344b7921179d95861858e9a27e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1323326
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-06 18:14:58 -07:00
Alex Waterman
c9665079d7 gpu: nvgpu: rename mem_desc to nvgpu_mem
Renaming was done with the following command:

  $ find -type f | \
    xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'

Also rename mem_desc.[ch] to nvgpu_mem.[ch].

JIRA NVGPU-12

Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1325547
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-06 18:14:53 -07:00
Alex Waterman
b69020bff5 gpu: nvgpu: Rename gk20a_mem_* functions
Rename the functions used for mem_desc access to nvgpu_mem_*.

JIRA NVGPU-12

Change-Id: Ibfdc1112d43f0a125e4487c250e3f977ffd2cd75
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1323325
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-06 18:14:48 -07:00
Deepak Nibade
ce3c30f14f gpu: nvgpu: use nvgpu rbtree to store mapped buffers
Use the nvgpu rbtree instead of the Linux rbtree to store the
mapped buffers for each VM.

Move to "struct nvgpu_rbtree_node" instead of "struct rb_node",
and similarly use the rbtree APIs from <nvgpu/rbtree.h> instead
of the Linux APIs.

Jira NVGPU-13

Change-Id: Id96ba76e20fa9ecad016cd5d5a6a7d40579a70f2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1453043
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-06 10:57:28 -07:00
Deepak Nibade
cd3cf04cac gpu: nvgpu: use nvgpu list for VA lists
Use the nvgpu list APIs instead of the Linux list APIs for the
reserved VA list and the buffer VA list.

Jira NVGPU-13

Change-Id: I83c02345d54bca03b00270563567227510cfce6b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1454013
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-04-03 08:55:20 -07:00
Alex Waterman
2e15a2d1ac gpu: nvgpu: Use new kmem API functions (vgpu/*)
Use the new kmem API functions in vgpu/*. Also reshuffle the order
of some allocs in the vgpu init code to allow usage of the nvgpu
kmem APIs.

Bug 1799159
Bug 1823380

Change-Id: I6c6dcff03b406a260dffbf89a59b368d31a4cb2c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1318318
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-28 09:39:07 -07:00
Terje Bergstrom
ca762e4220 gpu: nvgpu: Move all FB programming to FB HAL
Move all programming of FB to fb_*.c files, and remove the inclusion
of FB hardware headers from other files.

The TLB invalidate function previously took a pointer to a VM, but
the new API takes only a PDB mem_desc, because FB does not need to
know about the higher-level VM.

The GPC MMU is programmed from the same function as the FB MMU, so a
dependency on the GR hardware header was added to FB.

GP106 ACR was also triggering a VPR fetch, but that's not applicable
to dGPU, so that call was removed.

Change-Id: I4eb69377ac3745da205907626cf60948b7c5392a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1321516
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-17 08:44:03 -07:00
Deepak Nibade
8cdb91c527 gpu: nvgpu: remove use of DEFINE_MUTEX()
The DEFINE_MUTEX() API is defined in Linux and might not be
available in other OSes, so remove its usage from nvgpu.

Declare and explicitly initialize the mutexes below for both nvgpu
and vgpu:
  g->mm.priv_lock
  g->mm.tlb_lock

Jira NVGPU-13

Change-Id: If72885a6da0227a1552303206172f1f2b751471d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1298042
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-02-22 04:15:08 -08:00
Deepak Nibade
8ee3aa4b31 gpu: nvgpu: use common nvgpu mutex/spinlock APIs
Instead of using the Linux APIs for mutexes and spinlocks directly,
use the new APIs defined in <nvgpu/lock.h>.

Replace the Linux-specific mutex/spinlock declaration, init, lock,
and unlock APIs with the new APIs; e.g., struct mutex is replaced
by struct nvgpu_mutex, and mutex_lock() is replaced by
nvgpu_mutex_acquire().

Also include <nvgpu/lock.h> instead of <linux/mutex.h> and
<linux/spinlock.h>.
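
A before/after sketch of the conversion (nvgpu_mutex_acquire() comes
from this message; the init-free declaration and the release name are
assumptions):

  #include <nvgpu/lock.h>

  static struct nvgpu_mutex example_lock;      /* was: struct mutex */

  static void example_critical_section(void)
  {
          nvgpu_mutex_acquire(&example_lock);  /* was: mutex_lock() */
          /* ...protected work... */
          nvgpu_mutex_release(&example_lock);  /* was: mutex_unlock() */
  }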

Add explicit nvgpu/lock.h includes to the files below to fix
compilation failures:
gk20a/platform_gk20a.h
include/nvgpu/allocator.h

Jira NVGPU-13

Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1293187
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-02-22 04:15:02 -08:00
Aparna Das
28b0d6cfa8 gpu: nvgpu: remove call to invalidate tlb
The guest doesn't explicitly send a command to the RM server to
invalidate the TLB; that is done implicitly when mapping or
unmapping a buffer. Remove support for this call.

Bug 1665111

Change-Id: Icf2edae7feffa35b1dbf87c227b3e98b506e6519
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: http://git-master/r/1287728
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-02-14 11:15:27 -08:00
Alex Waterman
aa36d3786a gpu: nvgpu: Organize semaphore_gk20a.[ch]
Move semaphore_gk20a.c to drivers/gpu/nvgpu/common/ since the
semaphore code is common to all chips.

Move the semaphore_gk20a.h header file to
drivers/gpu/nvgpu/include/nvgpu and rename it to semaphore.h. Also
update all places where the header is included to use the new path.

This revealed an odd location for the enum gk20a_mem_rw_flag. This should
be in the mm headers. As a result many places that did not need anything
semaphore related had to include the semaphore header file. Fixing this
oddity allowed the semaphore include to be removed from many C files that
did not need it.

Bug 1799159

Change-Id: Ie017219acf34c4c481747323b9f3ac33e76e064c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1284627
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-02-13 18:14:45 -08:00
Alex Waterman
7e403974d3 gpu: nvgpu: Simplify ref-counting on VMs
Simplify ref-counting on VMs: take a ref when a VM is bound to a
channel and drop a ref when a channel is freed.

Previously ref-counts were scattered over the driver. Also the CE
and CDE code would bind channels with custom-rolled code. This was
because the gk20a_vm_bind_channel() function took an as_share as
the VM argument (the VM was then inferred from that as_share).
However, it is trivial to abstract that bit out and allow a central
bind channel function that just takes a VM and a channel.
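
The central helper might then reduce to something like this
(hypothetical names):

  int nvgpu_vm_bind_channel(struct vm_gk20a *vm, struct channel_gk20a *ch)
  {
          nvgpu_vm_get(vm);  /* channel holds a ref until it is freed */
          ch->vm = vm;
          return 0;
  }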

Bug 1846718

Change-Id: I156aab259f6c7a2fa338408c6c4a3a464cd44a0c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1261886
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-02-07 14:54:02 -08:00
Alex Waterman
b9b94c073c gpu: nvgpu: Remove separate fixed address VMA
Remove the special VMA that could be used for allocating fixed
addresses. This feature was never used and is not worth maintaining.

Bug 1396644
Bug 1729947

Change-Id: I06f92caa01623535516935acc03ce38dbdb0e318
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265302
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-01-31 16:23:07 -08:00
Alex Waterman
d630f1d99f gpu: nvgpu: Unify the small and large page address spaces
The basic structure of this patch is to make the small page allocator
and the large page allocator into pointers (where they used to be just
structs). Then assign each of those pointers to the same actual
allocator since the buddy allocator has supported mixed page sizes
since its inception.

For the rest of the driver some changes had to be made in order to
actually support mixed pages in a single address space.

1. Unifying the allocation page size determination

   Since the allocation and map operations happen at distinct
   times both mapping and allocation of GVA space must agree
   on page size. This is because the allocation has to separate
   allocations into separate PDEs to avoid the necessity of
   supporting mixed PDEs.

   To this end a function __get_pte_size() was introduced which
   is used both by the balloc code and the core GPU MM code. It
   determines page size based only on the length of the mapping/
   allocation (see the sketch after this list).

2. Fixed address allocation + page size

   Similar to regular mappings/GVA allocations fixed address
   mapping page size determination had to be modified. In the
   past the address of the mapping determined page size since
   the address space split was by address (low addresses were
   small pages, high addresses large pages). Since that is no
   longer the case the page size field in the reserve memory
   ioctl is now honored by the mapping code. When, for instance,
   CUDA makes a memory reservation it specifies small or large
   pages. When CUDA requests mappings to be made within that
   address range the page size is then looked up in the reserved
   memory struct.

   Fixed address reservations were also modified to now always
   allocate at a PDE granularity (64M or 128M, depending on large
   page size). This prevents non-fixed allocations from ending up
   in the same PDE and causing kernel panics or GMMU faults.

3. The rest...

   The rest of the changes are just by products of the above.
   Lots of places required minor updates to use a pointer to
   the GVA allocator struct instead of the struct itself.
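
As a sketch of point 1, a page-size helper keyed purely on mapping
length might look like this (the policy shown is an illustrative
assumption, not the actual chip logic):

  static u32 __get_pte_size(struct vm_gk20a *vm, u64 base, u64 size)
  {
          /* Illustrative policy: big pages only when the mapping
           * can fill at least one big page. */
          if (size >= vm->big_page_size)
                  return gmmu_page_size_big;
          return gmmu_page_size_small;
  }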

Lastly, this change is not truly complete. More work remains to be
done in order to fully remove the notion that there was such a thing
as separate address spaces for different page sizes. Basically after
this patch what remains is cleanup and proper documentation.

Bug 1396644
Bug 1729947

Change-Id: If51ab396a37ba16c69e434adb47edeef083dce57
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265300
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-01-31 16:23:07 -08:00
Alex Waterman
6df3992b60 gpu: nvgpu: Move allocators to common/mm/
Move the GPU allocators to common/mm/ since the allocators are common
code across all GPUs. Also rename the allocator code to move away from
gk20a_ prefixed structs and functions.

This caused one issue with the nvgpu_alloc() and nvgpu_free() functions.
There was a function for allocating either with kmalloc() or vmalloc()
depending on the size of the allocation. Those have now been renamed to
nvgpu_kalloc() and nvgpu_kfree().

Bug 1799159

Change-Id: Iddda92c013612bcb209847084ec85b8953002fa5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1274400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-01-09 12:33:16 -08:00
Terje Bergstrom
d09d259d74 gpu: nvgpu: vgpu: Do not overwrite err code on fail
vgpu_vm_alloc_share() wants to return -EINVAL if the requested VMA
areas do not fulfill the criteria. The error code gets overwritten
by a call to vgpu_comm_sendrecv(), which makes vgpu_vm_alloc_share()
always return 0.
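
The general shape of the bug, illustrated with a hypothetical
send_to_rm_server() standing in for vgpu_comm_sendrecv():

  static int send_to_rm_server(void);

  static int alloc_share_sketch(bool va_ok)
  {
          int err = 0;

          if (!va_ok)
                  err = -EINVAL;     /* intended early failure... */

          err = send_to_rm_server(); /* ...but err is overwritten here, */
          return err;                /* so 0 is returned on RPC success */
  }

The fix is to bail out early (or use a separate variable) so the RPC
result cannot clobber the intended -EINVAL.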

Change-Id: I93f56025f963d1d4ad2f9b06139fce742d3be41b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1249961
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2016-11-11 08:20:31 -08:00
Alex Waterman
2fa54c94a6 gpu: nvgpu: Remove global debugfs variable
Remove a global debugfs variable and instead save the allocator
debugfs root node in the gk20a struct.

Bug 1799159

Change-Id: If4eed34fa24775e962001e34840b334658f2321c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1225611
(cherry picked from commit 1908fde10bb1fb60ce898ea329f5a441a3e4297a)
Reviewed-on: http://git-master/r/1242390
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-10-26 11:10:01 -07:00
Richard Zhao
e1438818b9 gpu: nvgpu: vgpu: add vgpu private data and helper functions
Move the vgpu private data to a dedicated structure and allocate it
at probe time. Also add a virt_handle helper function, which is used
everywhere.

JIRA VFND-2103

Change-Id: I125911420be72ca9be948125d8357fa85d1d3afd
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1185206
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
2016-08-15 11:41:16 -07:00
Alex Waterman
0793de62b2 gpu: nvgpu: Change the allocator flag naming scheme
Move to a more generic name of GPU_ALLOC_*.

Change-Id: Icbbd366847a9d74f83f578e4d9ea917a6e8ea3e2
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1176445
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2016-07-19 11:31:50 -07:00
Alex Waterman
b6569319c7 gpu: nvgpu: Support multiple types of allocators
Support multiple types of allocation backends. Currently there is
only one allocator implementation available: a buddy allocator.
Buddy allocators have certain limitations, though. For one, the
allocator requires metadata to be allocated from the kernel's
system memory. This causes a given buddy allocation to potentially
sleep on a kmalloc() call.

This patch has been created so that a new backend can be added
which avoids calling any dynamic system memory management routines.
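
A hedged sketch of how such pluggable backends are typically wired
(op and field names are illustrative):

  struct nvgpu_allocator;

  struct nvgpu_allocator_ops {
          u64 (*alloc)(struct nvgpu_allocator *a, u64 len);
          void (*free)(struct nvgpu_allocator *a, u64 addr);
  };

  struct nvgpu_allocator {
          const struct nvgpu_allocator_ops *ops;  /* buddy, bitmap, ... */
          void *priv;                             /* backend state */
  };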

Bug 1781897

Change-Id: I98d6c8402c049942f13fee69c6901a166f177f65
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1172115
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2016-07-19 11:21:46 -07:00
Konsta Holtta
b8915ab5aa gpu: nvgpu: support in-kernel vidmem mappings
Propagate the buffer aperture flag in gk20a_locked_gmmu_map up so
that buffers represented as a mem_desc and present in vidmem can be
mapped to the GPU.

JIRA DNVGPU-18
JIRA DNVGPU-76

Change-Id: I46cf87e27229123016727339b9349d5e2c835b3e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1169308
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-07-06 03:34:23 -07:00
Richard Zhao
71c8d62657 gpu: nvgpu: vgpu: add set mmu debug mode support
JIRA VFND-1005
Bug 1594604

Change-Id: Ic159a1aff9cee508194f1f5dff7a16eb0e47ad64
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/833498
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-12-04 12:01:46 -08:00
Sami Kiminki
9d2c9072c8 gpu: nvgpu: User-space managed address space support
Implement NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED, which
enables creating userspace-managed GPU address spaces.

When an address space is marked as userspace-managed, the following
changes are in effect:

- Only fixed-address mappings are allowed.
- VA space allocation for fixed-address mappings is not required,
  except to mark space as sparse.
- Maps and unmaps are always immediate. In particular, the mapping
  ref increments at kickoffs and decrements at job completion are
  skipped.
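
A hedged userspace sketch of requesting such an address space (only
the flag name comes from this patch; the args struct, its fields,
and the ioctl number are illustrative assumptions):

  #include <sys/ioctl.h>

  struct alloc_as_args {               /* stand-in for the uapi struct */
          unsigned int big_page_size;
          unsigned int flags;
          int as_fd;
  };

  static int alloc_userspace_managed_as(int ctrl_fd)
  {
          struct alloc_as_args args = {
                  .flags = NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED,
          };

          return ioctl(ctrl_fd, NVGPU_GPU_IOCTL_ALLOC_AS, &args);
  }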

Bug 1614735
Bug 1623949
Bug 1660392

Change-Id: I834fe19b3f65e9b02c268952383eddee0e465759
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/738558
Reviewed-on: http://git-master/r/833253
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-11-18 09:45:07 -08:00
Terje Bergstrom
37255d42cc gpu: nvgpu: vgpu: Alloc kernel address space
JIRA VFND-890

Change-Id: I8eba041b663cead94f2cc3d75d6458d472f1a755
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/815378
(cherry picked from commit 4b52329e955758ec4368abcb463ce4e3a2653237)
Reviewed-on: http://git-master/r/820499
2015-10-22 09:27:30 -07:00
Aingara Paramakuru
39e8bff2fc gpu: nvgpu: vgpu: T18x support
Add vgpu framework and build for T18x.

Bug 1677153
JIRA VFND-693

Change-Id: Icf9fd8e0b5769228aee59c54f9b000b992e5fcca
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/792559
Reviewed-on: http://git-master/r/806178
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-09-29 08:12:15 -07:00
Sami Kiminki
eade809c26 gpu: nvgpu: Separate kernel and user GPU VA regions
Separate the kernel and userspace regions in the GPU virtual address
space. Do this by reserving the last part of the GPU VA aperture for
the kernel, and extend GPU VA aperture accordingly for regular address
spaces. This prevents the kernel polluting the userspace-visible GPU
VA regions, and thus, makes the success of fixed-address mapping more
predictable.

Bug 200077571

Change-Id: I63f0e73d4c815a4a9fa4a9ce568709974690ef0f
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/747191
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-09-07 12:37:15 -07:00
Richard Zhao
a88e58cc9d gpu: nvgpu: vgpu: add t210 gm20b support
- add HAL initialization
- create folders vgpu/gk20a and vgpu/gm20b for specific code

Bug 1653185

Change-Id: If94d45e22a1d73d2e4916673736cc29751be4e40
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/774148
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Ken Adams <kadams@nvidia.com>
2015-08-19 05:12:00 -07:00
Terje Bergstrom
63714e7cc1 gpu: nvgpu: Implement priv pages
Implement support for privileged pages. Use them for kernel-allocated buffers.

Change-Id: I720fc441008077b8e2ed218a7a685b8aab2258f0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/761919
2015-07-03 17:59:12 -07:00
Sami Kiminki
e7ba93fefb gpu: nvgpu: Initial MAP_BUFFER_BATCH implementation
Add batch support for mapping and unmapping. Batching essentially
helps transform some per-map/unmap overhead to per-batch overhead,
namely gk20a_busy()/gk20a_idle() calls, GPU L2 flushes, and GPU TLB
invalidates. Batching with size 64 has been measured to yield >20x
speed-up in low-level fixed-address mapping microbenchmarks.

Bug 1614735
Bug 1623949

Change-Id: Ie22b9caea5a7c3fc68a968d1b7f8488dfce72085
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/733231
(cherry picked from commit de4a7cfb93e8228a4a0c6a2815755a8df4531c91)
Reviewed-on: http://git-master/r/763812
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-06-30 08:35:23 -07:00
Bharat Nihalani
b8aa486109 Revert "Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space""""
This reverts commit 2e5803d0f2b7d7a1577a40f45ab9f3b22ef2df80 since
the issue seen with bug 200106514 is fixed with change
http://git-master/r/#/c/752080/.

Bug 200112195

Change-Id: I588151c2a7ea74bd89dc3fd48bb81ff2c49f5a0a
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/752503
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-06-04 10:41:00 -07:00
Bharat Nihalani
1d8fdf5695 Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space"""
This reverts commit ce1cf06b9a8eb6314ba0ca294e8cb430e1e141c0 since
it causes a GPU PBDMA interrupt to be generated.

Bug 200106514

Change-Id: If3ed9a914c4e3e7f3f98c6609c6dbf57e1eb9aad
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/749291
2015-06-02 20:18:55 -07:00
Alex Waterman
01f359f3f1 Revert "Revert "gpu: nvgpu: New allocator for VA space""
This reverts commit 7eb42bc239dbd207208ff491c3fb65c3d83274d8.

The original commit was actually fine.

Change-Id: I564ce6530ac73fcfad17dcec9c53f0353b4f02d4
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/743300
(cherry picked from commit e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff)
Reviewed-on: http://git-master/r/743301
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-05-19 13:09:00 -07:00