Commit Graph

32 Commits

Author SHA1 Message Date
Alex Waterman
b6569319c7 gpu: nvgpu: Support multiple types of allocators
Support multiple types of allocation backends. Currently there is
only one allocator implementation available: a buddy allocator.
Buddy allocators have certain limitations, though. For one, the
allocator requires metadata to be allocated from the kernel's
system memory. This means a given buddy allocation may potentially
sleep on a kmalloc() call.

This patch has been created so that a new backend can be created
which will avoid calling any dynamic system memory management
routines.
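
As a rough sketch of the direction (illustrative only, with hypothetical
names rather than the actual nvgpu structures), multiple backends would
plug into a common allocator interface roughly like this:

  #include <linux/types.h>

  /* Hypothetical ops table; each backend (buddy today, and later one that
   * avoids dynamic system memory) fills in its own implementation. */
  struct gpu_alloc_ops {
          u64  (*alloc)(void *priv, u64 len);
          void (*free)(void *priv, u64 addr, u64 len);
          void (*destroy)(void *priv);
  };

  struct gpu_allocator {
          const struct gpu_alloc_ops *ops;
          void *priv;             /* backend-specific state */
  };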

Bug 1781897

Change-Id: I98d6c8402c049942f13fee69c6901a166f177f65
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1172115
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2016-07-19 11:21:46 -07:00
Alex Waterman
fbc21ed2ee gpu: nvgpu: split address space for fixed allocs
Allow a special address space node to be split out from the
user address space for fixed allocations. A debugfs node,

  /d/<gpu>/separate_fixed_allocs

controls this feature. To enable it:

  # echo <SPLIT_ADDR> > /d/<gpu>/separate_fixed_allocs

where <SPLIT_ADDR> is the address in the GVA range at which to make
the split. The split will then be made in all subsequently created
address spaces until the feature is turned off. To turn it off, just
echo 0x0 into the same debugfs node.

Change-Id: I21a3f051c635a90a6bfa8deae53a54db400876f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1030303
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-03-25 13:19:17 -07:00
Sami Kiminki
9d2c9072c8 gpu: nvgpu: User-space managed address space support
Implement NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED, which
enables creating userspace-managed GPU address spaces.

When an address space is marked as userspace-managed, the following
changes are in effect:

- Only fixed-address mappings are allowed.
- VA space allocation for fixed-address mappings is not required,
  except to mark space as sparse.
- Maps and unmaps are always immediate. In particular, the mapping
  ref increments at kickoffs and decrements at job completion are
  skipped.
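
For context, a userspace client would opt in when allocating the address
space. The sketch below is illustrative only: the struct layout and the
wrapper are assumptions, not the actual uapi definitions (see the nvgpu
uapi header for the real NVGPU_GPU_IOCTL_ALLOC_AS arguments).

  #include <stdint.h>
  #include <sys/ioctl.h>

  /* Illustrative stand-in for the real uapi args struct. */
  struct alloc_as_args {
          uint32_t big_page_size;
          int32_t  as_fd;
          uint32_t flags;         /* would carry ..._USERSPACE_MANAGED */
          uint32_t reserved;
  };

  /* ctrl_fd: open fd of the GPU control node; request/flag values come
   * from the uapi header. Returns the new AS fd, or -1 on error. */
  static int alloc_userspace_managed_as(int ctrl_fd, unsigned long request,
                                        uint32_t userspace_managed_flag)
  {
          struct alloc_as_args args = { .flags = userspace_managed_flag };

          if (ioctl(ctrl_fd, request, &args) < 0)
                  return -1;
          return args.as_fd;
  }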

Bug 1614735
Bug 1623949
Bug 1660392

Change-Id: I834fe19b3f65e9b02c268952383eddee0e465759
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/738558
Reviewed-on: http://git-master/r/833253
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-11-18 09:45:07 -08:00
Sami Kiminki
eade809c26 gpu: nvgpu: Separate kernel and user GPU VA regions
Separate the kernel and userspace regions in the GPU virtual address
space. Do this by reserving the last part of the GPU VA aperture for
the kernel, and extend GPU VA aperture accordingly for regular address
spaces. This prevents the kernel from polluting the userspace-visible
GPU VA regions and thus makes the success of fixed-address mapping
more predictable.
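
As a rough illustration of the resulting layout (the sizes below are
made up for the example, not the actual gk20a aperture values):

  #include <stdint.h>

  #define USER_VA_SIZE    (1ULL << 37)    /* userspace-visible region (example) */
  #define KERNEL_VA_SIZE  (1ULL << 32)    /* reserved tail for the kernel       */

  /* The aperture is extended so the kernel region no longer eats user VA. */
  static inline uint64_t total_va_size(void)
  {
          return USER_VA_SIZE + KERNEL_VA_SIZE;
  }

  /* Kernel mappings live in the last part of the GPU VA aperture. */
  static inline uint64_t kernel_va_base(void)
  {
          return total_va_size() - KERNEL_VA_SIZE;
  }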

Bug 200077571

Change-Id: I63f0e73d4c815a4a9fa4a9ce568709974690ef0f
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/747191
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-09-07 12:37:15 -07:00
Sami Kiminki
e7ba93fefb gpu: nvgpu: Initial MAP_BUFFER_BATCH implementation
Add batch support for mapping and unmapping. Batching essentially
turns some per-map/unmap overhead into per-batch overhead, namely the
gk20a_busy()/gk20a_idle() calls, GPU L2 flushes, and GPU TLB
invalidates. A batch size of 64 has been measured to yield a >20x
speed-up in low-level fixed-address mapping microbenchmarks.
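
Conceptually the batched flow looks like the sketch below (names and
structure are hypothetical, not the driver's batch API; the point is
that the expensive steps run once per batch rather than once per map):

  #include <stdbool.h>

  struct map_batch {
          int  pending_maps;
          bool need_tlb_invalidate;
  };

  static void batch_begin(struct map_batch *b)
  {
          b->pending_maps = 0;
          b->need_tlb_invalidate = false;
          /* one gk20a_busy()-style power ref for the whole batch */
  }

  static void batch_map_one(struct map_batch *b)
  {
          b->pending_maps++;
          b->need_tlb_invalidate = true;  /* defer the invalidate */
  }

  static void batch_end(struct map_batch *b)
  {
          if (b->need_tlb_invalidate) {
                  /* single L2 flush + GPU TLB invalidate covers all maps */
          }
          /* one gk20a_idle()-style power unref for the whole batch */
  }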

Bug 1614735
Bug 1623949

Change-Id: Ie22b9caea5a7c3fc68a968d1b7f8488dfce72085
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/733231
(cherry picked from commit de4a7cfb93e8228a4a0c6a2815755a8df4531c91)
Reviewed-on: http://git-master/r/763812
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-06-30 08:35:23 -07:00
Bharat Nihalani
b8aa486109 Revert "Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space""""
This reverts commit 2e5803d0f2b7d7a1577a40f45ab9f3b22ef2df80 since
the issue seen with bug 200106514 is fixed with change
http://git-master/r/#/c/752080/.

Bug 200112195

Change-Id: I588151c2a7ea74bd89dc3fd48bb81ff2c49f5a0a
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/752503
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-06-04 10:41:00 -07:00
Bharat Nihalani
1d8fdf5695 Revert "Revert "Revert "gpu: nvgpu: New allocator for VA space"""
This reverts commit ce1cf06b9a8eb6314ba0ca294e8cb430e1e141c0 since
it causes a GPU pbdma interrupt to be generated.

Bug 200106514

Change-Id: If3ed9a914c4e3e7f3f98c6609c6dbf57e1eb9aad
Signed-off-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-on: http://git-master/r/749291
2015-06-02 20:18:55 -07:00
Alex Waterman
01f359f3f1 Revert "Revert "gpu: nvgpu: New allocator for VA space""
This reverts commit 7eb42bc239dbd207208ff491c3fb65c3d83274d8.

The original commit was actually fine.

Change-Id: I564ce6530ac73fcfad17dcec9c53f0353b4f02d4
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/743300
(cherry picked from commit e99aa2485f8992eabe3556f3ebcb57bdc8ad91ff)
Reviewed-on: http://git-master/r/743301
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-05-19 13:09:00 -07:00
Sami Kiminki
520ff00e87 gpu: nvgpu: Implement compbits mapping
Implement NVGPU_AS_IOCTL_GET_BUFFER_COMPBITS_INFO for requesting info
on compbits-mappable buffers; and NVGPU_AS_IOCTL_MAP_BUFFER_COMPBITS,
which enables mapping compbits to the GPU address space of said
buffers. This, in turn, enables moving the comptag swizzling from GPU
to CDEH/CDEV formats into userspace.

Compbits mapping is conservative and may map more than is strictly
needed. This is for two reasons: 1) mapping must be done
on small page alignment (4kB), and 2) GPU comptags are swizzled all
around the aggregate cache line, which means that the whole cache line
must be visible even if only some comptag lines are required from
it. Cache line size is not necessarily a multiple of the small page
size.
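
To make the conservatism concrete, the mapped window gets rounded out
to whole cache lines and whole 4 kB pages, roughly as in this sketch
(the cache-line size is a parameter here because the real value comes
from the hardware):

  #include <stdint.h>

  #define SMALL_PAGE_SIZE 4096ULL         /* 4 kB mapping granularity */

  /* Round the byte range [start, start+len) out to whole cache lines
   * and whole small pages; the result may cover more than requested. */
  static void compbits_map_window(uint64_t start, uint64_t len,
                                  uint64_t cacheline_size,
                                  uint64_t *map_start, uint64_t *map_len)
  {
          uint64_t lo = start, hi = start + len;

          /* Whole aggregate cache lines must be visible... */
          lo -= lo % cacheline_size;
          hi += (cacheline_size - hi % cacheline_size) % cacheline_size;

          /* ...and the mapping itself happens at 4 kB alignment. */
          lo -= lo % SMALL_PAGE_SIZE;
          hi += (SMALL_PAGE_SIZE - hi % SMALL_PAGE_SIZE) % SMALL_PAGE_SIZE;

          *map_start = lo;
          *map_len   = hi - lo;
  }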

Bug 200077571

Change-Id: I5ae88fe6b616e5ea37d3bff0dff46c07e9c9267e
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/719710
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-05-18 11:33:19 +05:30
Terje Bergstrom
aa25a952ea Revert "gpu: nvgpu: New allocator for VA space"
This reverts commit 2e235ac150fa4af8632c9abf0f109a10973a0bf5.

Change-Id: I3aa745152124c2bc09c6c6dc5aeb1084ae7e08a4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/741469
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2015-05-12 02:46:39 -07:00
Alex Waterman
a2e8523645 gpu: nvgpu: New allocator for VA space
Implement a new buddy allocation scheme for the GPU's VA space.
The bitmap allocator was using too much memory and is not a scalable
solution as the GPU's address space keeps getting bigger. The buddy
allocation scheme is much more memory efficient when the majority
of the address space is not allocated.

The buddy allocator is not constrained by the notion of a split
address space. The bitmap allocator could only manage either small
pages or large pages but not both at the same time. Thus the bottom
of the address space was for small pages, the top for large pages.
Although that split is not removed quite yet, the new allocator
enables that to happen.

The buddy allocator is also very scalable. It manages everything from
the relatively small comptag space to the enormous GPU VA space. This
is important since the GPU has lots of differently sized
spaces that need managing.
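
For reference, the core of any buddy scheme is the power-of-two order
computation used when splitting blocks; a simplified sketch (not the
nvgpu implementation):

  #include <stdint.h>

  /* Smallest block the allocator hands out; the value is illustrative. */
  #define MIN_ORDER_SIZE 4096ULL

  /* Order of the smallest power-of-two block that covers 'len' bytes.
   * Metadata only exists for buddies that actually get split, so an
   * almost-empty VA space needs almost no bookkeeping. */
  static unsigned int buddy_order_for(uint64_t len)
  {
          uint64_t size = MIN_ORDER_SIZE;
          unsigned int order = 0;

          while (size < len) {
                  size <<= 1;     /* each order doubles the block size */
                  order++;
          }
          return order;
  }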

Currently there are certain limitations. For one, the allocator does
not handle the fixed allocations from CUDA very well. It can do so,
but with certain caveats. The PTE page size is always set to small.
This means the buddy allocator may place other small page allocations in the
buddies around the fixed allocation. It does this to avoid having
large and small page allocations in the same PDE.

Change-Id: I501cd15af03611536490137331d43761c402c7f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/740694
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-05-11 08:53:25 -07:00
Terje Bergstrom
b3a85df53b gpu: nvgpu: SMMU bypass
Improve GMMU mapping code to cope with discontiguous buffers.

Add a debugfs entry that allows bypassing the SMMU and disabling big pages.

Bug 1605769

Change-Id: I14d32c62293a16ff8c7195377c75a85fa8061083
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/717503
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/737533
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
Tested-by: Alexander Van Brunt <avanbrunt@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
2015-05-05 13:59:01 -07:00
Konsta Holtta
5b6e8995b2 gpu: nvgpu: use vm param to vm_map/unmap_buffer
Pass vm instead of as share to the userspace buffer mapping functions,
since they also need to be called from places other than just the AS
device ioctls, and the as share is specific to those.

Bug 1573150

Change-Id: I994872f23ea7b1582361f3f4fabbd64b4786419c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/674020
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-04-04 18:04:48 -07:00
Konsta Holtta
9171334101 gpu: nvgpu: fix struct file memleak in alloc_as
Also free the newly allocated struct file with fput() in error
conditions, and pair this by not trying to release the resulting NULL
as_share on release.
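
The leak follows the common anon-file pattern; a generic sketch of the
fixed error path (function and names are illustrative, not the gk20a
code):

  #include <linux/anon_inodes.h>
  #include <linux/err.h>
  #include <linux/file.h>
  #include <linux/fs.h>

  /* Once anon_inode_getfile() has succeeded, any later failure must
   * drop the file with fput(); the release handler then has to cope
   * with private_data still being NULL. */
  static int example_open_share(const struct file_operations *fops,
                                int (*init_share)(struct file *file))
  {
          struct file *file;
          int err;

          file = anon_inode_getfile("example-as", fops, NULL, O_RDWR);
          if (IS_ERR(file))
                  return PTR_ERR(file);

          err = init_share(file);
          if (err) {
                  fput(file);     /* previously leaked here */
                  return err;
          }
          return 0;
  }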

Bug 1597056

Change-Id: Ifad5c3a829b2c459ed6a738ecdc1ac2ac7e1678a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/671527
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-04-04 18:02:34 -07:00
Terje Bergstrom
383f176a9d gpu: nvgpu: Submit coverity fixes
Clear the ioctl buffer, fix a double free, and fix an error-case memory leak.

Bug 200059216

Change-Id: I21cc2b0f6a7e8fca09f72caf4c54d570b13f400b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/655347
2015-03-18 12:12:25 -07:00
Sami Kiminki
cc6ccd2e3f gpu: nvgpu: Implement NVGPU_AS_IOCTL_GET_VA_REGIONS
Implement NVGPU_AS_IOCTL_GET_VA_REGIONS which returns a list of GPU VA
regions for different page sizes. This is required by userspace for
safe fixed-address address space allocation.

Bug 1551752

Change-Id: I63ddde30935db2471bec498dae0caa870e89c1a5
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/590814
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:12:08 -07:00
Deepak Nibade
b3f575074b gpu: nvgpu: fix sparse warnings
Fix the sparse warnings below:

warning: Using plain integer as NULL pointer
warning: symbol <variable/function> was not declared. Should it be static?
warning: Initializer entry defined twice

Also, remove dead functions.
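
For illustration, fixes for these warnings typically look like the
following (generic examples, not the actual nvgpu hunks):

  #include <stddef.h>

  /* "Using plain integer as NULL pointer": initialize with NULL, not 0. */
  static struct my_buffer *cached_buf = NULL;     /* was: = 0 */

  /* "symbol ... was not declared. Should it be static?": functions used
   * only in this file get the static keyword. */
  static int file_local_helper(void)
  {
          return 0;
  }

  /* "Initializer entry defined twice": keep a single designated entry. */
  static const int lut[4] = {
          [0] = 1,
          [1] = 2,        /* the duplicate [1] = ... entry was removed */
  };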

Bug 1573254

Change-Id: I29d71ecc01c841233cf6b26c9088ca8874773469
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/593363
Reviewed-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
2015-03-18 12:12:01 -07:00
Sami Kiminki
abfc843557 gpu: nvgpu: Fix AS IOCTL return code for failed user write
Fix return code in gk20a_as_dev_ioctl() in case of failed
copy_to_user().
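
The fix follows the standard pattern for returning errors from
copy_to_user() (a generic sketch, not the exact gk20a_as_dev_ioctl()
hunk):

  #include <linux/errno.h>
  #include <linux/uaccess.h>

  /* copy_to_user() returns the number of bytes it could NOT copy, so a
   * non-zero result must be turned into -EFAULT for the ioctl caller. */
  static long copy_result_to_user(void __user *dst, const void *src,
                                  unsigned long size)
  {
          if (copy_to_user(dst, src, size))
                  return -EFAULT;
          return 0;
  }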

Change-Id: I8b86c0dfca92c8c508006dc33673ccd926497819
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/590813
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:55 -07:00
Konsta Holtta
a870ff1d29 gpu: nvgpu: Remove get and put client routines
gk20a_get_client() and gk20a_put_client() routines are effectively dead
code. The GPU has been using pm_runtime for reference counting whether
the device should be turned on or off, and gk20a_get_client() and
gk20a_put_client() have had no positive effect on the behaviour.

In the worst case these functions cause issues, as they may trigger
code paths that should not be run. There is also a race between get/put
and busy/idle.

This patch removes the functions and reworks as_gk20a.c to correctly use
gk20a_busy()/gk20a_idle() where put/get was required.

Additionally, finalize_poweron() is moved to gk20a_busy(), just as it
previously was with gk20a_get_client(). If pm_runtime is not in use, the device
is only powered on and never off. Currently this affects vgpu power
management since it does not use pm_runtime yet.

Bug 1562096

Change-Id: I3162655f83457e9caccd9264eed36b5d51e60c52
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/414998
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:49 -07:00
Terje Bergstrom
2eb6dcb469 gpu: nvgpu: Implement 64k large page support
Implement support for 64kB large page size. Add an API to create an
address space via IOCTL so that we can accept flags, and assign one
flag for enabling 64kB large page size.

Also add APIs to set per-context large page size. This is possible
only on Maxwell, so return error if caller tries to set large page
size on Kepler.

Default large page size is still 128kB.

Change-Id: I20b51c8f6d4a984acae8411ace3de9000c78e82f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:46 -07:00
Konsta Holtta
2d0bcfa331 gpu: nvgpu: add __must_check to gk20a_busy
The return value of gk20a_busy must be checked since it may not succeed
in some cases. Add the __must_check attribute that generates a compiler
warning for code that does not read the return value, and fix all uses of
the function to take error cases into account.
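
In the kernel, __must_check expands to the compiler's warn_unused_result
attribute, so ignoring the return value now produces a build warning. A
rough illustration (the prototypes shown are simplified, not the exact
gk20a.h signatures):

  #include <linux/compiler.h>     /* __must_check */

  struct gk20a;                   /* opaque here; illustration only */

  int  __must_check gk20a_busy(struct gk20a *g);
  void gk20a_idle(struct gk20a *g);

  static int do_something(struct gk20a *g)
  {
          int err = gk20a_busy(g);        /* discarding err now warns */

          if (err)
                  return err;
          /* ... access hardware ... */
          gk20a_idle(g);
          return 0;
  }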

Bug 200040921

Change-Id: Ibc2b119985fa230324c88026fe94fc5f1894fe4f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/542552
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:34 -07:00
Konsta Holtta
28476a5b13 gpu: nvgpu: create new nvgpu ioctl header
Move nvgpu ioctls from the many user space interface headers to a new
single nvgpu.h header under include/uapi. No new code or replaced names
are introduced; this change only moves the definitions and changes
include directives accordingly.

Bug 1434573

Change-Id: I4d02415148e437a4e3edad221e08785fac377e91
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/542651
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:33 -07:00
Konsta Holtta
719923ad9f gpu: nvgpu: rename gpu ioctls and structs to nvgpu
To help remove the nvhost dependency from nvgpu, rename ioctl defines
and structures used by nvgpu such that nvhost is replaced by nvgpu.
Duplicate some structures as needed.

Update header guards and such accordingly.

Change-Id: Ifc3a867713072bae70256502735583ab38381877
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/542620
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:33 -07:00
Konsta Holtta
6b85e32d6c gpu: nvgpu: fix -EINVAL retval in ioctls
The proper error number for an invalid request number is EINVAL, not
EFAULT, so change it in the ioctl calls.
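
In ioctl dispatch terms the change amounts to the following (generic
sketch):

  #include <linux/errno.h>

  static long example_ioctl_dispatch(unsigned int cmd)
  {
          switch (cmd) {
          /* ... recognized request numbers handled in their own cases ... */
          default:
                  return -EINVAL;         /* was -EFAULT */
          }
  }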

Change-Id: I8fddd34e012700550e9e30fe17ba7152b3a0417b
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/542563
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:31 -07:00
Terje Bergstrom
b05d85a29d gpu: nvgpu: Change error for invalid ioctl to dbg
Change the log level of the message for an invalid ioctl to dbg.

Bug 20038780

Change-Id: I0a2ba97d9c21b2225f8d3db59c80b70c2f2c679e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/501171
GVS: Gerrit_Virtual_Submit
2015-03-18 12:11:23 -07:00
Aingara Paramakuru
1fd722f592 gpu: nvgpu: support gk20a virtualization
The nvgpu driver can now use the Tegra graphics virtualization
interfaces to support gk20a in a virtualized environment.

Bug 1509608

Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/440122
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:01 -07:00
Kirill Artamonov
d364553f7c gpu: nvgpu: implement mapping for sparse allocation
Implement support for partial buffer mappings.

Whitelist gr_pri_bes_crop_hww_esr accessed by
fec during sparse texture initialization.

bug 1456562
bug 1369014
bug 1361532

Change-Id: Ib0d1ec6438257ac14b40c8466b37856b67e7e34d
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Reviewed-on: http://git-master/r/375012
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:09:59 -07:00
Terje Bergstrom
66bb831f44 gpu: nvgpu: Register as subdomain of host1x
Add gk20a as a sub power domain of host1x. This enforces keeping
host1x on when using gk20a.
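
The generic power-domain API provides the nesting primitive this relies
on; a hedged sketch (the actual hookup in the driver may go through
platform or device-tree code instead):

  #include <linux/pm_domain.h>

  /* Make the GPU's power domain a subdomain of host1x's: while the GPU
   * domain is powered, the parent host1x domain is kept on as well. */
  static int gpu_join_host1x_domain(struct generic_pm_domain *host1x_pd,
                                    struct generic_pm_domain *gpu_pd)
  {
          return pm_genpd_add_subdomain(host1x_pd, gpu_pd);
  }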

Bug 200003112

Change-Id: I08db595bc7b819d86d33fb98af0d8fb4de369463
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/407006
(cherry picked from commit 009812b3e510518740e9c7e89b8b8b80439fe26a)
Reviewed-on: http://git-master/r/408013
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:09:48 -07:00
Matt Pedro
6c6936858a Revert "gpu: nvgpu: Keep host1x on when GPU on"
This reverts commit 20d48a759b032116e3092e1df76518065da59879.

Change-Id: I93718a314b70ee9284a83ca69964883e670ad78d
Signed-off-by: Matt Pedro <mapedro@nvidia.com>
Reviewed-on: http://git-master/r/407969
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:09:48 -07:00
Arto Merilainen
219eb3d26b gpu: nvgpu: Fixes to static offset mappings
This patch addresses two issues in fixed offset mappings:
- VA unmapping did not use lists safely (see the generic sketch below).
  This caused an application hang if the application did not free all
  (fixed offset) buffers before quitting.
- GPU was not powered when closing the AS node. If the address space had
  areas that were not freed, the driver tried to access hw without
  powering it up first.
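
The list part of the fix is the standard safe-iteration pattern for
walks that delete entries (generic example, hypothetical struct name):

  #include <linux/list.h>
  #include <linux/slab.h>

  struct va_reservation {
          struct list_head node;
          /* ... */
  };

  static void free_all_reservations(struct list_head *head)
  {
          struct va_reservation *r, *tmp;

          /* The _safe variant keeps a lookahead pointer, so deleting the
           * current entry does not break the traversal. */
          list_for_each_entry_safe(r, tmp, head, node) {
                  list_del(&r->node);
                  kfree(r);
          }
  }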

Change-Id: Ida526d222ea4e03b8d765eca16574ddc1823e60d
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/405872
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:09:47 -07:00
Terje Bergstrom
4ac110cb8a gpu: nvgpu: Register as subdomain of host1x
Add gk20a as a sub power domain of host1x. This enforces keeping
host1x on when using gk20a.

Bug 200003112

Change-Id: I08db595bc7b819d86d33fb98af0d8fb4de369463
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/407543
Reviewed-by: Riham Haidar <rhaidar@nvidia.com>
Tested-by: Riham Haidar <rhaidar@nvidia.com>
2015-03-18 12:09:20 -07:00
Arto Merilainen
a9785995d5 gpu: nvgpu: Add NVIDIA GPU Driver
This patch moves the NVIDIA GPU driver to a new location.

Bug 1482562

Change-Id: I24293810b9d0f1504fd9be00135e21dad656ccb6
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/383722
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:08:53 -07:00