Commit Graph

21 Commits

Author SHA1 Message Date
Sourab Gupta
47fe66a461 gpu: nvgpu: compile nvgpu allocator for QNX
This patch contains the changes needed to compile the
common nvgpu allocator for QNX.
This includes some cross-OS compilation changes
and the removal of some Linux'isms from the allocator.

Change-Id: Ib1ecceec77b497513a196597bff4441615577548
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540306
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-10-17 04:28:25 -07:00
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change the license of OS-independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
Sunny He
17c581d755 gpu: nvgpu: SGL passthrough implementation
The basic nvgpu_mem_sgl implementation provides support
for OS-specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.

This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any
scatter-gather list implementation. This labels every
struct with a set of ops which can be used to interact
with the attached scatter-gather list.

Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
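
Conceptually, the header and its ops look something like the
following sketch (illustrative only; the exact field and op
names in the tree may differ):

  struct nvgpu_mem_sgt_ops {
          void *(*sgl_next)(void *sgl);   /* advance to next node  */
          u64 (*sgl_phys)(void *sgl);     /* physical address      */
          u64 (*sgl_dma)(void *sgl);      /* DMA address           */
          u64 (*sgl_length)(void *sgl);   /* node length in bytes  */
          void (*sgt_free)(struct gk20a *g,
                           struct nvgpu_mem_sgt *sgt);
  };

  struct nvgpu_mem_sgt {
          const struct nvgpu_mem_sgt_ops *ops;
          void *sgl; /* opaque pointer to the attached OS SGL */
  };

Common code calls only through the ops, so a Linux scatterlist,
a QNX SGL, or the native nvgpu SGL can all sit behind the same
header.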

Jira NVGPU-186

Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:55:24 -07:00
Alex Waterman
0090ee5aca gpu: nvgpu: nvgpu SGL implementation
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scatter-gather tables and scatter-gather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and are also overloaded to
represent VIDMEM allocs.

The notion of a scatter-gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has an MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense
for the GPU driver to use some sort of scatter-gather concept
to maximize memory usage efficiency.

To that end this patch keeps the notion of a scatter-gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks, each representing a chunk of memory. To map or
use a DMA allocation, SW must iterate over each block in the
SGL.

This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is simply to divest the core MM code of
any last Linux'isms. Speed and efficiency come next.
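
A minimal sketch of the data structure and its use (names
approximate the actual implementation):

  struct nvgpu_mem_sgl {
          struct nvgpu_mem_sgl *next; /* singly linked         */
          u64 phys;                   /* physical address      */
          u64 dma;                    /* DMA (IOVA) address    */
          u64 length;                 /* chunk length in bytes */
  };

  /* Mapping code walks the chunks one by one: */
  struct nvgpu_mem_sgl *node;

  for (node = sgl; node != NULL; node = node->next)
          map_one_chunk(g, node->phys, node->length);

(map_one_chunk() is a placeholder for the per-chunk GMMU map
step.)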

Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:52:48 -07:00
Alex Waterman
8f2f979428 gpu: nvgpu: cleanup allocator debugging
Remove debugging features that did not really get used and make
the debugging code use the nvgpu_log() functionality. This ties
the allocator debugging into the larger nvgpu debug framework.

Also, in many of the places where CONFIG_DEBUG_FS was used to
conditionally compile allocator debug code, use __KERNEL__
instead. This is because that debug code can still be called
even when debugfs is not present in Linux.
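
For instance, allocator debug prints now funnel into
nvgpu_log(), and dump code is guarded by __KERNEL__ rather than
CONFIG_DEBUG_FS (sketch; macro details approximate):

  #define alloc_dbg(a, fmt, arg...) \
          nvgpu_log((a)->g, gpu_dbg_alloc, fmt, ##arg)

  #ifdef __KERNEL__
  /* Dump routines: built for any Linux kernel build, with or
   * without debugfs, since they have non-debugfs callers. */
  #endif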

Change-Id: I112ebe1cae22d6f8db96d023993498093e18d74a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1544439
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-08-28 08:54:36 -07:00
Alex Waterman
8662fae334 gpu: nvgpu: Add mem usage to page allocator debug
Add the amount of memory used to the page allocator debug dump.

Change-Id: Icd4b4a0489068aaa3f60221b792de7f8dbf0092c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1543695
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-08-23 22:57:29 -07:00
Deepak Nibade
6090a8a7ee gpu: nvgpu: move debugfs code to linux module
Since all debugfs code is Linux-specific, remove it
from common code and move it to the Linux module.

Debugfs code is now divided into the following
module-specific files:

common/linux/debug.c
common/linux/debug_cde.c
common/linux/debug_ce.c
common/linux/debug_fifo.c
common/linux/debug_gr.c
common/linux/debug_mm.c
common/linux/debug_allocator.c
common/linux/debug_kmem.c
common/linux/debug_pmu.c
common/linux/debug_sched.c

Add corresponding header files for the above modules too,
and compile all of the above files only if CONFIG_DEBUG_FS
is set.

Some more details of the changes made:

- Move and rename gk20a/debug_gk20a.c to common/linux/debug.c
- Move and rename gk20a/debug_gk20a.h to include/nvgpu/debug.h

- Remove gm20b/debug_gm20b.c and gm20b/debug_gm20b.h and call
  gk20a_init_debug_ops() directly from gm20b_init_hal()

- Update all debug APIs to receive a struct gk20a pointer
  as a parameter instead of a struct device pointer
  (see the example after this list)
- Update the API gk20a_dmabuf_get_state() to receive a
  struct gk20a pointer instead of a struct device pointer

- Include <nvgpu/debug.h> explicitly in all files where debug
  operations are used
- Remove "gk20a/platform_gk20a.h" include from HAL files
  which no longer need this include

- Add new API gk20a_debug_deinit() to deinitialize debugfs
  and call it from gk20a_remove()
- Move API gk20a_debug_dump_all_channel_status_ramfc() to
  gk20a/fifo_gk20a.c
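
For example, the debug API signature change looks like this
(illustrative):

  /* Before: */
  void gk20a_debug_dump(struct device *dev);

  /* After: */
  void gk20a_debug_dump(struct gk20a *g);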

Jira NVGPU-62

Change-Id: I076975d3d7f669bdbe9212fa33d98529377feeb6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1488902
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2017-06-02 06:53:35 -07:00
Alex Waterman
6070d99a50 gpu: nvgpu: Remove <linux/mm.h> from the page allocator
Remove the <linux/mm.h> include from the VIDMEM page allocator. To
do this, PAGE_SIZE needed to be defined for VIDMEM. Technically,
using the Linux PAGE_SIZE macro for VIDMEM was a bug, since
PAGE_SIZE need not be 4K on Linux, so this patch also fixes a
theoretical bug.

Also, usage of ERR_PTR(), PTR_ERR() and IS_ERR() was removed. These
are Linux-specific error handling macros.
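
The error-macro removal amounts to replacing patterns like the
following (sketch; do_alloc() stands in for the real call):

  /* Before: Linux-specific error pointers */
  alloc = do_alloc(len);
  if (IS_ERR(alloc))
          return PTR_ERR(alloc);

  /* After: plain NULL checks and error codes */
  alloc = do_alloc(len);
  if (!alloc)
          return -ENOMEM;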

Change-Id: Iadaf5b8e0154b0c3adf593449023005afffad90d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1472371
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-05-14 00:05:12 -07:00
Terje Bergstrom
b3e1ce04b9 gpu: nvgpu: Put debugfs dependencies inside #ifdef
Put all debugfs dependencies inside #ifdef CONFIG_DEBUG_FS. This
includes some functions in allocators that were used only for
debugging.

Remove the include of linux/debugfs.h from files that do not
deal with debugfs.

linux/debugfs.h implicitly included linux/fs.h, which we relied
on. Add an explicit include of linux/fs.h to all files where
this is the case.
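
The changes follow two simple patterns (sketch; the function
name is illustrative):

  /* 1. Debug-only helpers compile away without debugfs: */
  #ifdef CONFIG_DEBUG_FS
  u64 nvgpu_alloc_space(struct nvgpu_allocator *a);
  #endif

  /* 2. Files that relied on debugfs.h pulling in fs.h now do: */
  #include <linux/fs.h>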

Change-Id: I16feffae6b0e3a2edf366075cdc01ade86be06f9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1467897
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
2017-04-24 11:05:17 -07:00
Deepak Nibade
a6fd699931 gpu: nvgpu: Add wrapper nvgpu/log2.h
Add wrapper header file nvgpu/log2.h.
It #includes <linux/log2.h> in Linux.
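
On Linux the wrapper is essentially just the following (sketch):

  /* include/nvgpu/log2.h */
  #ifndef __NVGPU_LOG2_H__
  #define __NVGPU_LOG2_H__

  #include <linux/log2.h>

  #endif

Other OSes can supply their own ilog2()/roundup_pow_of_two()
definitions behind the same header. The same wrapper pattern is
used for nvgpu/bug.h and nvgpu/bitops.h in the commits below.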

JIRA NVGPU-13

Change-Id: Ie434e62f7ef2dce7692b1c2c12b4ad6453f1534a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1464719
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-20 11:35:14 -07:00
Deepak Nibade
78fe154ff7 gpu: nvgpu: use nvgpu list for page allocator
Use nvgpu list APIs instead of Linux list APIs
for the page allocator lists.
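
Typical usage after the conversion (sketch; struct and member
names illustrative):

  nvgpu_init_list_node(&a->pages);
  nvgpu_list_add(&chunk->list_entry, &a->pages);

  nvgpu_list_for_each_entry(chunk, &a->pages,
                            page_alloc_chunk, list_entry) {
          /* visit each chunk in the list */
  }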

Jira NVGPU-13

Change-Id: I3ee64a5cdc2ced4ca9c4ba7ad6271915a66d90f5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1462076
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-04-19 12:15:56 -07:00
Deepak Nibade
486173a000 gpu: nvgpu: use nvgpu rbtree for page allocator
Use the nvgpu rbtree instead of the Linux rbtree for the page
allocator. Move to the nvgpu_rbtree_node structure and the
nvgpu_rbtree_* APIs.
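
The converted code follows the nvgpu rbtree API shape (sketch;
field names illustrative):

  struct nvgpu_rbtree_node *node = NULL;

  nvgpu_rbtree_insert(&alloc->tree_entry, &a->allocs);
  nvgpu_rbtree_search(addr, &node, a->allocs);
  if (node)
          nvgpu_rbtree_unlink(node, &a->allocs);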

Jira NVGPU-13

Change-Id: I3faf843762652c6005186cbe715377050f65ee2c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1457858
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2017-04-18 01:15:12 -07:00
Terje Bergstrom
a0fa2b0258 gpu: nvgpu: Add wrapper nvgpu/bug.h
Add wrapper header file nvgpu/bug.h. It #includes <linux/bug.h>
in Linux.

JIRA NVGPU-13

Change-Id: I7bf02ba554333f7cbd79d72bd1cb423c81ebcb49
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1461545
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-13 08:56:06 -07:00
Terje Bergstrom
7665421874 gpu: nvgpu: Replace use of bitops.h and kernel.h
Remove use of linux/kernel.h and linux/compiler.h. We don't use
anything in those headers.

Also replace use of linux/bitops.h with the new wrapper
nvgpu/bitops.h.

JIRA NVGPU-13

Change-Id: Iefa6b4598d5a5e7fc386c0a7a554e778a87010d6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1460777
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
2017-04-12 07:01:12 -07:00
Alex Waterman
335b3fa2fe gpu: nvgpu: Remove vmalloc.h and slab.h usage
Remove all usage of vmalloc.h and slab.h outside of the
Linux-specific kmem API implementation code.

Bug 1799159
Bug 1823380

Change-Id: I5b2a91bd1057b272efeaddc24902f6133b35024f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1331703
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-04 16:57:08 -07:00
Deepak Nibade
0d8830394a gpu: nvgpu: use nvgpu list for page chunks
Use nvgpu list APIs instead of Linux list APIs
to store the chunks of the page allocator.

Jira NVGPU-13

Change-Id: I63375fc2df683e018c48a90b76eca368438cc32f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1326814
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-04-03 08:55:19 -07:00
Alex Waterman
c11228d48b gpu: nvgpu: Use new kmem API functions (common/*)
Use the new kmem API functions in common/* and common/mm/*.

Add a struct gk20a pointer to struct nvgpu_allocator in order
to store the gk20a pointer used for allocating memory.
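
With the gk20a pointer in place, allocator-internal allocations
can be accounted per GPU instance (sketch; field placement
illustrative):

  struct nvgpu_allocator {
          struct gk20a *g; /* owning GPU, for kmem accounting */
          /* ... existing fields ... */
  };

  buddy = nvgpu_kzalloc(na->g, sizeof(*buddy));
  /* ... */
  nvgpu_kfree(na->g, buddy);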

Bug 1799159
Bug 1823380

Change-Id: I881ea9545e8a8f0b75d77a1e35dd1812e0bb654e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1318315
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-03-26 09:55:10 -07:00
Alex Waterman
cf0ef133e6 gpu: nvgpu: Move kmem_caches to allocator
Instead of using a single static kmem_cache for each type of
data structure the allocators may want to allocate, each
allocator now has its own instance of the kmem_cache. This is
done so that each GPU driver instance can accurately track how
much memory it is using.

To support this on older kernels, where two caches cannot be
created with the same name, a new NVGPU API has been added:

  nvgpu_kmem_cache_create(struct gk20a *g, size_t size)

This patch also fixes numerous places where kfree() was wrongly
used to free kmem_cache allocs.
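
Typical usage of the new API (sketch; the buddy allocator is
one such user):

  cache = nvgpu_kmem_cache_create(g, sizeof(struct nvgpu_buddy));

  buddy = nvgpu_kmem_cache_alloc(cache);
  /* ... use buddy ... */
  nvgpu_kmem_cache_free(cache, buddy); /* not kfree()! */

  nvgpu_kmem_cache_destroy(cache);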

Bug 1799159
Bug 1823380

Change-Id: Id674f9a5445fde3f95db65ad6bf3ea990444603d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1283826
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-02-10 11:57:31 -08:00
Alex Waterman
24e8ee192a gpu: nvgpu: Fix call to wrong free function
Fix a mistake in which the wrong free call was used.

Bug 1799159
Bug 1823380

Change-Id: I3b60949cabbdb6b4d193c6687657cad606462687
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1283142
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-02-10 11:57:31 -08:00
Alex Waterman
d630f1d99f gpu: nvgpu: Unify the small and large page address spaces
The basic structure of this patch is to make the small page allocator
and the large page allocator into pointers (where they used to be just
structs). Each of those pointers is then assigned to the same actual
allocator, since the buddy allocator has supported mixed page sizes
since its inception.

For the rest of the driver some changes had to be made in order to
actually support mixed pages in a single address space.

1. Unifying the allocation page size determination

   Since the allocation and map operations happen at distinct
   times, both mapping and allocation of GVA space must agree
   on page size. This is because the allocation has to separate
   allocations into separate PDEs to avoid the necessity of
   supporting mixed PDEs.

   To this end a function __get_pte_size() was introduced which
   is used both by the balloc code and the core GPU MM code. It
   determines page size based only on the length of the mapping/
   allocation (a sketch follows this list).

2. Fixed address allocation + page size

   Similar to regular mappings/GVA allocations, fixed-address
   mapping page size determination had to be modified. In the
   past the address of the mapping determined page size since
   the address space split was by address (low addresses were
   small pages, high addresses large pages). Since that is no
   longer the case the page size field in the reserve memory
   ioctl is now honored by the mapping code. When, for instance,
   CUDA makes a memory reservation it specifies small or large
   pages. When CUDA requests mappings to be made within that
   address range the page size is then looked up in the reserved
   memory struct.

   Fixed address reservations were also modified to now always
   allocate at a PDE granularity (64M or 128M, depending on the
   large page size). This prevents non-fixed allocations from
   ending up in the same PDE and causing kernel panics or GMMU
   faults.

3. The rest...

   The rest of the changes are just byproducts of the above.
   Lots of places required minor updates to use a pointer to
   the GVA allocator struct instead of the struct itself.
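
A simplified sketch of the page-size selection from item 1
(illustrative; the real function handles more cases):

   static enum gmmu_pgsz_gk20a __get_pte_size(struct vm_gk20a *vm,
                                              u64 base, u64 size)
   {
           if (!vm->big_pages)
                   return gmmu_page_size_small;

           /* Page size is chosen from the mapping length alone. */
           if (size >= vm->gmmu_page_sizes[gmmu_page_size_big])
                   return gmmu_page_size_big;

           return gmmu_page_size_small;
   }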

Lastly, this change is not truly complete. More work remains to be
done in order to fully remove the notion that there was such a thing
as separate address spaces for different page sizes. Basically after
this patch what remains is cleanup and proper documentation.

Bug 1396644
Bug 1729947

Change-Id: If51ab396a37ba16c69e434adb47edeef083dce57
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265300
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-01-31 16:23:07 -08:00
Alex Waterman
6df3992b60 gpu: nvgpu: Move allocators to common/mm/
Move the GPU allocators to common/mm/ since the allocators are common
code across all GPUs. Also rename the allocator code to move away from
gk20a_ prefixed structs and functions.

This caused one issue with the nvgpu_alloc() and nvgpu_free()
functions: there was already a pair of helpers that allocate with
either kmalloc() or vmalloc() depending on the size of the
allocation. Those have now been renamed to nvgpu_kalloc() and
nvgpu_kfree().
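
The renamed helper keeps the old size-based behavior (sketch;
the exact signature may differ):

  void *nvgpu_kalloc(size_t size)
  {
          if (size > PAGE_SIZE)
                  return vmalloc(size);

          return kmalloc(size, GFP_KERNEL);
  }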

Bug 1799159

Change-Id: Iddda92c013612bcb209847084ec85b8953002fa5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1274400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-01-09 12:33:16 -08:00