Commit Graph

31 Commits

Author SHA1 Message Date
Dinesh
8a94781aa9 gpu: nvgpu: Change pramin lock to mutex
Since spinlock contention wastes CPU cycles, the pramin lock
can be changed to a mutex.
Vidmem allocation is fully protected and vidmem pending is
an atomic variable, so that lock acquisition is removed.


JIRA NVGPU-4550

Change-Id: I0cecb8f4ee7e840fd698311572aedebbc8f49177
Signed-off-by: Dinesh <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321251
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
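
For illustration, a minimal sketch of the locking change, assuming the nvgpu
locking helpers from <nvgpu/lock.h>; struct pramin_state and
pramin_window_program() are placeholders, not the actual fields and functions
the patch touches:

  /* Before: a spinlock protected the PRAMIN window state.
   *         struct nvgpu_spinlock pramin_window_lock;
   * After:  a mutex, so contended callers sleep instead of burning CPU. */
  struct pramin_state {
          struct nvgpu_mutex pramin_window_lock;
  };

  static void pramin_window_program(struct pramin_state *p)
  {
          nvgpu_mutex_acquire(&p->pramin_window_lock);
          /* ... reprogram the BAR0 PRAMIN window ... */
          nvgpu_mutex_release(&p->pramin_window_lock);
  }
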
Vedashree Vidwans
a615604411 gpu: nvgpu: fix MISRA 11.2 nvgpu_sgl
MISRA rule 11.2 doesn't allow conversions of a pointer from or to an
incomplete type. These types of conversions may result in an incorrectly
aligned pointer and can further result in undefined behavior.

This patch addresses rule 11.2 violations related to pointers to and
from struct nvgpu_sgl. This patch replaces struct nvgpu_sgl pointers by
void pointers.

Jira NVGPU-3736

Change-Id: I8fd5766eacace596f2761b308bce79f22f2cb207
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2267876
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:10:29 -06:00
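
A hedged before/after sketch of what this signature change looks like for one
sgt op (simplified; sgl_length_before/sgl_length_after are illustrative names,
not the real header):

  typedef unsigned long long u64;      /* stand-in for the kernel's u64 */

  /* Before: the incomplete marker type forces pointer conversions that
   * MISRA 11.2 flags in every implementation. */
  struct nvgpu_sgl;
  extern u64 (*sgl_length_before)(struct nvgpu_sgl *sgl);

  /* After: a plain void pointer, so no conversion to or from a pointer to
   * an incomplete type is ever performed. */
  extern u64 (*sgl_length_after)(void *sgl);
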
Vedashree Vidwans
2fc673df49 gpu: nvgpu: update nvgpu_mem to accept u64 args
Currently, nvgpu_vidmem_buf_access_memory() accepts u64 size/offset
values to access memory. However, the underlying nvgpu_mem read and
write functions truncate the size/offset values to u32, so any VIDMEM
buffer larger than 4 GB cannot be accessed beyond 4 GB through the
userspace IOCTL.

This patch updates nvgpu_mem_rd_n() and nvgpu_mem_wr_n() to accept
u64 size and u64 offset values.

BUG-2489032

Change-Id: I299742b1813e5e343a96ce25f649a39e792c3393
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2143138
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-06-28 12:28:06 -07:00
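
The change amounts to widening the accessor prototypes, roughly as sketched
below; nvgpu_mem_rd_n_old is shown only for contrast, and the exact parameter
list may differ:

  typedef unsigned int u32;            /* stand-ins for the kernel types */
  typedef unsigned long long u64;
  struct gk20a;
  struct nvgpu_mem;

  /* Before: 32-bit offset/size silently truncates anything past 4 GB. */
  void nvgpu_mem_rd_n_old(struct gk20a *g, struct nvgpu_mem *mem,
                          u32 offset, void *dest, u32 size);

  /* After: 64-bit offset/size, so large VIDMEM buffers stay reachable. */
  void nvgpu_mem_rd_n(struct gk20a *g, struct nvgpu_mem *mem,
                      u64 offset, void *dest, u64 size);
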
Philip Elcan
dfe7c30c79 gpu: nvgpu: pramin: fix MISRA 10.3 violations
MISRA Rule 10.3 prohibits assigning an expression to an object of a
narrower essential type or of a different essential type category. This
fixes MISRA 10.3 violations in the common/pramin unit.

JIRA NVGPU-3023

Change-Id: I69b0765ae467df7bd7773cc92d99ce0506020a82
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2087843
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-04-03 15:55:09 -07:00
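
A small standalone example of the rule (not taken from the actual diff): the
narrowing assignment is made explicit so the essential types agree:

  #include <stdint.h>

  static uint32_t size_in_words(uint64_t size_bytes)
  {
          /* Violation: a u64 expression assigned to a narrower u32 object. */
          /* uint32_t words = size_bytes / sizeof(uint32_t); */

          /* Compliant: the narrowing is explicit and deliberate. */
          uint32_t words = (uint32_t)(size_bytes / sizeof(uint32_t));

          return words;
  }
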
Sai Nikhil
303fc7496c gpu: nvgpu: common: fix MISRA Rule 10.4 Violations
MISRA Rule 10.4 only allows arithmetic operations on operands of the
same essential type category.

Append a "U" suffix to integer literals, or cast the operands so that
they share the same essential type when an arithmetic operation is
performed.

This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.

JIRA NVGPU-992

Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-11 10:26:16 -08:00
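
A standalone illustration of the pattern (not the actual diff): the "U" suffix
keeps every operand in the unsigned essential type category:

  #include <stdint.h>

  static uint32_t round_up_to_even(uint32_t n)
  {
          /* Violation: signed literals mixed with the unsigned operand n. */
          /* return (n + 1) & ~1; */

          /* Compliant: the "U" suffix keeps every operand unsigned. */
          return (n + 1U) & ~1U;
  }
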
Amurthyreddy
4de2add5e9 gpu: nvgpu: MISRA 14.4 boolean fixes
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable as a
boolean in the controlling expression of an if statement or an
iteration statement.

Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.

JIRA NVGPU-1022

Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-09 13:28:19 -08:00
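
A standalone illustration of the rule (not the actual diff): pointers and
integers get explicit comparisons so each controlling expression is
essentially boolean:

  #include <stddef.h>
  #include <stdint.h>

  static uint32_t sum_words(const uint32_t *buf, size_t count)
  {
          uint32_t sum = 0U;

          /* Violation: "if (buf)" and "while (count)" use non-boolean
           * controlling expressions. Compliant form below. */
          if (buf != NULL) {
                  while (count != 0U) {
                          sum += buf[count - 1U];
                          count--;
                  }
          }
          return sum;
  }
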
Nicolas Benech
89125cb4f5 gpu: nvgpu: pramin: add error checking for SGLs
If the total size of the SGLs is smaller than the size to copy, we
reach the end of the list, the sgl variable becomes NULL, and calling
nvgpu_sgt_get_length() causes a NULL pointer dereference. This change
triggers a BUG() instead, which is clearer than a NULL pointer
dereference. There is no easy way to add more advanced error checking
and handling, and an SGL bug would most likely be linked to another bug
in the OS or the OS layer.

JIRA NVGPU-1279

Change-Id: Ide83f2b91ecae25f3a0f3202febfb115110315d7
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1923706
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-10-16 23:41:08 -07:00
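
A simplified, self-contained sketch of the added guard; the real code walks a
struct nvgpu_sgt through nvgpu_sgt_get_next()/nvgpu_sgt_get_length() and uses
BUG() rather than assert():

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>

  struct sgl_node {                    /* stand-in for one SGL entry */
          struct sgl_node *next;
          uint64_t length;
  };

  static void copy_batched(struct sgl_node *sgl, uint64_t size)
  {
          while (size > 0U) {
                  /* Added check: if the list ends before the requested size
                   * is consumed, fail loudly instead of dereferencing NULL. */
                  assert(sgl != NULL);

                  uint64_t chunk = (sgl->length < size) ? sgl->length : size;
                  /* ... perform the PRAMIN window access for this chunk ... */
                  size -= chunk;
                  sgl = sgl->next;
          }
  }
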
Debarshi Dutta
421e64aad7 gpu: nvgpu: move header location of gk20a.h
Update the include path of gk20a.h to <nvgpu/gk20a.h> in the files
under common/.

Jira NVGPU-597

Change-Id: I3431dae93ada9bd561454c89a0b99c5292ab4a8d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1832024
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-25 00:20:25 -07:00
Nicolas Benech
0e58ebaae1 gpu: nvgpu: Fix nvgpu_readl MISRA 17.7 violations
MISRA Rule 17.7 requires the return value of all non-void functions to
be used. The fix is either to use the return value or to change the
function to return void. This patch contains the fixes for calls to
nvgpu_readl().

JIRA NVGPU-677

Change-Id: I432197cca67a10281dfe407aa9ce2dd8120030f0
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1807528
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-06 21:33:41 -07:00
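
A standalone illustration of the two allowed fixes; read_reg() is a stand-in
for nvgpu_readl() and the register offsets are arbitrary:

  #include <stdint.h>

  extern uint32_t read_reg(uint32_t offset);   /* stand-in for nvgpu_readl() */

  static uint32_t check_status(void)
  {
          /* Violation: return value silently dropped. */
          /* read_reg(0x100u); */

          /* Fix 1: actually use the value. */
          uint32_t status = read_reg(0x100u);

          /* Fix 2: when only the read side effect matters, make the
           * discard explicit. */
          (void)read_reg(0x104u);

          return status;
  }
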
Srirangan
3fbaee7099 gpu: nvgpu: common: Fix MISRA 15.6 violations
MISRA Rule 15.6 requires that all if-else blocks be enclosed in braces,
including single-statement blocks. Fix the violations caused by
single-statement if blocks without braces by introducing the braces.

JIRA NVGPU-671

Change-Id: I4d9933c51a297a725f48cbb15520a70494d74aeb
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1800833
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-08-22 21:55:49 -07:00
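
A standalone before/after of the pattern being fixed (not the actual diff):

  static int check(int err)
  {
          /* Violation: single-statement if without braces.
           *
           *     if (err != 0)
           *             return err;
           */

          /* Compliant: the body is always a compound statement. */
          if (err != 0) {
                  return err;
          }
          return 0;
  }
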
Debarshi Dutta
82a90170d3 gk20a: nvgpu: Remove io.h dependency from gk20a.h
In the current code, gk20a.h includes io.h, which therefore gets pulled
into a lot of other files. io.h contains functions that take a struct
gk20a as a parameter, leading to a circular dependency between io.h
and gk20a.h. This can be mitigated by removing io.h from gk20a.h, as
part of the larger effort to move gk20a.h to nvgpu/gk20a.h.

JIRA NVGPU-597

Change-Id: I93e504fa9371b88152737b342a75580c65e8f712
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1787316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-07-30 11:24:06 -07:00
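
One common way to break such a cycle, sketched under the assumption that io.h
only needs struct gk20a through a pointer; the prototypes below are
illustrative, not the literal nvgpu diff:

  /* io.h: a forward declaration is enough for pointer parameters, so the
   * header no longer needs gk20a.h at all. */
  struct gk20a;

  void nvgpu_writel(struct gk20a *g, unsigned int r, unsigned int v);
  unsigned int nvgpu_readl(struct gk20a *g, unsigned int r);

  /* gk20a.h then simply stops including io.h; files that access registers
   * include the io header themselves. */
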
Terje Bergstrom
6ea52c59b0 gpu: nvgpu: Implement common nvgpu_mem_rd* functions
The nvgpu_mem_rd*() functions were implemented per OS. They also used
nvgpu_pramin_access_batched() and implemented a large portion of the
logic for using PRAMIN in OS-specific code.

Make the implementation of these functions generic. Move all PRAMIN
logic into the PRAMIN unit and simplify the interface it provides.

Change-Id: I1acb9e8d7d424325dc73314d5738cb2c9ebf7692
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1753708
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-07-02 10:19:19 -07:00
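
A rough sketch of what the generic accessor can look like once the OS-specific
variants are gone: dispatch on the aperture and fall back to PRAMIN for
VIDMEM. The types here are simplified stand-ins and pramin_read_word() is a
hypothetical helper:

  typedef unsigned int u32;

  struct gk20a;
  enum aperture { APERTURE_INVALID, APERTURE_SYSMEM, APERTURE_VIDMEM };
  struct nvgpu_mem {
          enum aperture aperture;
          u32 *cpu_va;                 /* CPU mapping, valid for SYSMEM */
  };

  /* Hypothetical helper standing in for the batched PRAMIN access path. */
  u32 pramin_read_word(struct gk20a *g, struct nvgpu_mem *mem, u32 w);

  static u32 mem_rd32(struct gk20a *g, struct nvgpu_mem *mem, u32 w)
  {
          if (mem->aperture == APERTURE_SYSMEM) {
                  return mem->cpu_va[w];            /* direct CPU access */
          }
          return pramin_read_word(g, mem, w);       /* VIDMEM via PRAMIN */
  }
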
Terje Bergstrom
dbb8792baf gpu: nvgpu: Move setting of BAR0_WINDOW to bus
Move the setting of BAR0_WINDOW to the bus HAL. Also move the spinlock
usage to common code so that pramin_gk20a.[ch] can be deleted.

JIRA NVGPU-588

Change-Id: I3ceabc56016711b2c93f31fedf07daa778a4873a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730890
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-06-14 06:44:07 -07:00
Konsta Holtta
2788943d38 gpu: nvgpu: remove broken force_pramin feature
The forced PRAMIN reads and writes for sysmem buffers haven't worked in
a while, since the PRAMIN access code was refactored to work with
vidmem-only sgt allocs. This feature was only ever meant for testing and
debugging PRAMIN access and early dGPU support, but that support is now
stable enough, so just delete the broken feature instead of fixing it.

Change-Id: Ib31dae4550f3b6fea3c426a2e4ad126864bf85d2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1723725
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-24 04:37:44 -07:00
Konsta Holtta
ba8fa334f4 gpu: nvgpu: introduce explicit nvgpu_sgl type
The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl)
argument which is a void pointer. Change the type signatures to take
struct nvgpu_sgl *, an opaque marker type that makes it more difficult
to pass around wrong arguments, since anything goes for a void *.
Explicit types also add self-documentation to the code.

For some added safety, explicit type casts are now required in
implementations of the nvgpu_sgt_ops interface when converting between
the general nvgpu_sgl type and implementation-specific types. This is
not purely a bad thing, because the casts show clearly where type
conversions are happening.

Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305

Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 12:24:06 -08:00
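
The heart of the change is small; a simplified sketch of the idea, showing a
single op only (not the real header):

  /* The marker type is only ever declared, never defined, in common code;
   * pointers to it can be passed around but not accidentally mixed with
   * unrelated pointer types the way a void * can. */
  struct nvgpu_sgl;

  struct nvgpu_sgt_ops {
          /* Before: void *(*sgl_next)(void *sgl); */
          struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
  };

  /* Implementations cast explicitly at the boundary, which documents the
   * conversion, e.g. (struct nvgpu_sgl *)sg_next(linux_sgl). */
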
Terje Bergstrom
da9b1bbac2 gpu: nvgpu: Introduce include/nvgpu/sizes.h
We use SZ_* #defines in some parts of nvgpu, but we don't explicitly
include a header that defines them. Add include/nvgpu/sizes.h, which on
Linux #includes linux/sizes.h.

Change-Id: I8f506d85c7eaa12e649f5874a87533e2f0fe9438
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1607575
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-12-01 08:37:08 -08:00
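
Plausible contents of the new wrapper header, assuming it only needs to
forward to the kernel's definitions on Linux; other OS builds would supply
their own SZ_* values:

  /* include/nvgpu/sizes.h (sketch) */
  #ifndef NVGPU_SIZES_H
  #define NVGPU_SIZES_H

  #ifdef __KERNEL__
  #include <linux/sizes.h>      /* SZ_1K, SZ_4K, SZ_1M, ... */
  #endif

  #endif /* NVGPU_SIZES_H */
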
Terje Bergstrom
be3750bc9e gpu: nvgpu: Abstract IO aperture accessors
Add an abstraction of the IO aperture accessors. Add new functions
gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
aperture fields from common code.

Implement Linux version of the abstraction by moving gk20a_readl()
and gk20a_writel() to new Linux specific io.c. Move the fields
defining IO aperture to nvgpu_os_linux.

Add t19x specific IO aperture initialization functions and add t19x
specific section to nvgpu_os_linux.

JIRA NVGPU-259

Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569698
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:55 -07:00
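
A sketch of the accessors as implied by the description; the prototypes and
the guarded-access pattern below are assumptions, not the exact nvgpu code:

  #include <stdbool.h>

  struct gk20a;

  bool gk20a_io_exists(struct gk20a *g);
  bool gk20a_io_valid_reg(struct gk20a *g, unsigned int r);

  /* Common code can then guard register access without touching the
   * aperture fields directly:
   *
   *     if (gk20a_io_exists(g) && gk20a_io_valid_reg(g, r)) {
   *             val = gk20a_readl(g, r);
   *     }
   */
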
Alex Waterman
ff9c3fc20a gpu: nvgpu: Reduce usage of nvgpu_vidmem_get_page_alloc
Reduce the usage of nvgpu_vidmem_get_page_alloc() and friends as much
as possible. This reduces the dependency of nvgpu on Linux SGLs. SGLs
still need to be used, however, since sharing buffers in userspace is
done by dma_buf FD. The best way to pass the vidmem buf through the
dma_buf is by SGL pointer.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: Ide0e9e5a557f00aa63b063be085042101a5b34ee
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540709
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:34 -07:00
Alex Waterman
88d5f6b415 gpu: nvgpu: Rename vidmem APIs
Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure
consistency and that all the non-static vidmem functions are
properly namespaced.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:23 -07:00
Terje Bergstrom
37ec670601 gpu: nvgpu: Move PRAMIN functions to nvgpu_mem
The PRAMIN batch access functions are only used by nvgpu_mem. The way
the functions are written is Linux specific, so move the implementation
out of the common PRAMIN code.

JIRA NVGPU-259

Change-Id: I6e2aba08c98568c651a86fe8ca7f9f5220d67348
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569697
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 08:49:05 -07:00
Alex Waterman
3c37701377 gpu: nvgpu: Split VIDMEM support from mm_gk20a.c
Split VIDMEM support into its own code files organized as such:

  common/mm/vidmem.c     - Base vidmem support
  common/linux/vidmem.c  - Linux specific user-space interaction
  include/nvgpu/vidmem.h - Vidmem API definitions

Also use the config to enable/disable VIDMEM support in the makefile
and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible
from the source code.

And lastly update a while-loop that iterated over an SGT to use the
new for_each construct for iterating over SGTs.

Currently this organization is not perfectly adhered to. More patches
will fix that.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 08:01:04 -07:00
Alex Waterman
7a3dbdd43f gpu: nvgpu: Add for_each construct for nvgpu_sgts
Add a macro to iterate across nvgpu_sgts. This makes life easier for
developers, who may otherwise accidentally forget to advance to the
next SGL.

JIRA NVGPU-243

Change-Id: I90154a5d23f0014cb79bbcd5b6e8d8dbda303820
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566627
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 02:29:53 -07:00
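
Presumably the helper is a thin for-loop wrapper over the sgt ops; an assumed
shape with a usage sketch (the sgt carries the ops table and the first sgl
pointer):

  /* Assumed shape of the macro: start at the sgt's first SGL and advance
   * with the ops-based helper until the list ends. */
  #define nvgpu_sgt_for_each_sgl(sgl, sgt)                        \
          for ((sgl) = (sgt)->sgl; (sgl) != NULL;                 \
               (sgl) = nvgpu_sgt_get_next((sgt), (sgl)))

  /* Usage: no manual "advance" step for the developer to forget.
   *
   *     void *sgl;
   *
   *     nvgpu_sgt_for_each_sgl(sgl, sgt) {
   *             total += nvgpu_sgt_get_length(sgt, sgl);
   *     }
   */
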
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change license of OS independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
David Nieto
7134e9e852 gpu: nvgpu: prevent crash during unbind
This change solves crashes during unbind that were introduced in the
driver during the OS unification refactoring, due to lack of coverage of
the remove() function.

The fixes during remove are:

(1) Prevent NULL dereference on GPUs with secure boot
(2) Prevent NULL dereferences when fecs_trace is not enabled
(3) Added a PRAMIN blocker during driver removal if the HW is no longer
accessible
(4) Prevent double free of debugfs nodes, as they are handled by the
debugfs_remove_recursive() call
(5) quiesce() can now be called without checking whether the HW
accessible flag is set
(6) Added a function to free the IRQ so that no IRQ association is left
in the driver after it is removed
(7) Prevent NULL dereference in nvgpu_thread_stop() if the thread is
already stopped

JIRA: EVLR-1739

Change-Id: I787d38f202d5267a6b34815f23e1bc88110e8455
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1563005
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 15:44:25 -07:00
Sunny He
17c581d755 gpu: nvgpu: SGL passthrough implementation
The basic nvgpu_mem_sgl implementation provides support
for OS specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.

This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any scatter-
gather list implementation. This labels every struct
with a set of ops which can be used to interact with
the attached scatter gather list.

Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.

Jira NVGPU-186

Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:55:24 -07:00
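
An assumed, simplified shape of the header struct described above: a small
wrapper carrying the ops plus a pointer to whatever OS-specific list sits
underneath, so nothing gets copied:

  typedef unsigned long long u64;      /* stand-in for the kernel's u64 */

  struct nvgpu_sgt_ops {               /* simplified; the real table has more ops */
          void *(*sgl_next)(void *sgl);
          u64   (*sgl_length)(void *sgl);
          u64   (*sgl_phys)(void *sgl);
  };

  struct nvgpu_sgt {
          const struct nvgpu_sgt_ops *ops;
          void *sgl;                   /* e.g. a Linux struct scatterlist */
  };
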
Alex Waterman
0090ee5aca gpu: nvgpu: nvgpu SGL implementation
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scattergather tables and scattergather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and also overloaded to represent
VIDMEM allocs.

The notion of a scatter gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has a MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense
for the GPU driver to use some sort of scatter gather concept
so maximize memory usage efficiency.

To that end this patch keeps the notion of a scatter gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks - each representing a chunk of memory. To map or
use a DMA allocation SW must iterate over each block in the
SGL.

This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is simply to divest the core MM code of
any last Linuxisms. Speed and efficiency come next.

Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:52:48 -07:00
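
An assumed shape of the nvgpu-native SGL node described above: a singly linked
list where each node covers one physically contiguous chunk (the walking loop
in the comment uses hypothetical names):

  typedef unsigned long long u64;      /* stand-in for the kernel's u64 */

  struct nvgpu_mem_sgl {
          struct nvgpu_mem_sgl *next;  /* NULL terminates the list */
          u64 phys;                    /* physical address of this chunk */
          u64 dma;                     /* DMA/IOVA address, if mapped */
          u64 length;                  /* chunk length in bytes */
  };

  /* Mapping code walks the list chunk by chunk:
   *
   *     for (sgl = mem->sgl; sgl != NULL; sgl = sgl->next) {
   *             map_chunk(vm, sgl->phys, sgl->length);
   *     }
   */
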
Alex Waterman
4412728b96 gpu: nvgpu: Fix offset units in PRAMIN code
The offset units in the nvgpu_pramin_access_batched() code change
midway through the function. In the first section it is treated as
bytes but then in the while-loop iterating over the PRAMIN window
and page_alloc_chunks it becomes an offset in words.

This patch leaves the offset field in bytes and converts to words
where needed.

Change-Id: Iba964171679dfc27645238b297ed467a450b5cbc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1537079
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-08-14 01:00:26 -07:00
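
A minimal illustration of the convention after the fix: the offset stays in
bytes and is converted to a 32-bit word index only at the point of access
(the helper name is illustrative):

  typedef unsigned int u32;
  typedef unsigned long long u64;

  /* PRAMIN accesses are word based, so convert right before the access. */
  static u32 pramin_word_index(u64 offset_bytes)
  {
          return (u32)(offset_bytes / sizeof(u32));
  }
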
Alex Waterman
e32f62fadf gpu: nvgpu: Move Linux nvgpu_mem fields
Hide the Linux-specific nvgpu_mem fields so that, in subsequent patches,
core code can use mem_desc instead of struct sg_table. Routines for
accessing system-specific fields will be added as needed.

This is the first step in a fairly major overhaul of the GMMU mapping
routines. There are numerous issues with the current design (or lack
there of): massively coupled code, system dependencies, disorganization,
etc.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I2e7d3ae3a07468cfc17c1c642d28ed1b0952474d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464076
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-20 16:14:32 -07:00
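
A sketch of the direction described above, with assumed field and struct
names: Linux-only members sit behind a private struct so common code never
sees struct sg_table directly:

  struct sg_table;                      /* Linux type, visible only to Linux code */

  /* Assumed split: OS-agnostic fields stay visible; Linux-only state
   * (the sg_table and friends) moves behind a private struct. */
  struct nvgpu_mem_priv {
          struct sg_table *sgt;
  };

  struct nvgpu_mem {
          unsigned long long size;      /* ... common, OS-agnostic fields ... */
          struct nvgpu_mem_priv priv;   /* Linux details hidden here */
  };
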
Alex Waterman
c9665079d7 gpu: nvgpu: rename mem_desc to nvgpu_mem
Renaming was done with the following command:

  $ find -type f | \
    xargs sed -i 's/struct mem_desc/struct nvgpu_mem/g'

Also rename mem_desc.[ch] to nvgpu_mem.[ch].

JIRA NVGPU-12

Change-Id: I69395758c22a56aa01e3dffbcded70a729bf559a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1325547
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-06 18:14:53 -07:00
Deepak Nibade
0d8830394a gpu: nvgpu: use nvgpu list for page chunks
Use the nvgpu list APIs instead of the Linux list APIs
to store the chunks of the page allocator.

Jira NVGPU-13

Change-Id: I63375fc2df683e018c48a90b76eca368438cc32f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1326814
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-04-03 08:55:19 -07:00
Alex Waterman
dd88aed5cc gpu: nvgpu: Split out pramin code
Split out the pramin interface code in preparation for splitting
out the mem_desc code.

JIRA NVGPU-12

Change-Id: I3f03447ea213cc15669b0934fa706e7cb22599b7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1323323
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-31 17:21:34 -07:00