Commit Graph

21 Commits

Author SHA1 Message Date
Terje Bergstrom
83efad7adb gpu: nvgpu: Move FB size query to FB
The vidmem size query was in mm_xxx.c. It involves reading a register from
FB, so move the query to the FB HAL.

JIRA NVGPU-1063

Change-Id: I30dfd2c4fdcdd6c841f85aaab7431d52473759bd
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1801425
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-10 15:23:08 -07:00
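For the change above, a minimal sketch of what moving the query into an FB HAL op can look like. The struct layout, op name, and helper below are assumptions for illustration, not the real nvgpu definitions:

    #include <stdint.h>

    struct gk20a;   /* driver context */

    /* The vidmem size query becomes an FB HAL op, so only FB code reads
     * the FB register; mm code just calls through the op. */
    struct nvgpu_fb_ops {
            uint64_t (*get_vidmem_size)(struct gk20a *g);
    };

    static uint64_t mm_query_vidmem_size(struct gk20a *g,
                                         const struct nvgpu_fb_ops *fb)
    {
            return fb->get_vidmem_size ? fb->get_vidmem_size(g) : 0;
    }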
Deepak Nibade
8e66c5816d gpu: nvgpu: make bootstrap allocations contiguous
We use the bootstrap vidmem allocator for all the vidmem allocations that happen
at boot time.
And we need to program a physical address for all the potential vidmem buffers
that we program into h/w and that are needed during boot.

So force the allocator to allocate contiguous memory.

We otherwise see a warning dump when we program the physical address of memory
which is allocated across multiple pages.

Bug 2180284
Jira NVGPUT-12

Change-Id: Ib9c2d42ea463bc424c2cb4da8ffd8ebae436e0f6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1805467
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-06 16:12:20 -07:00
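For the commit above, a hedged sketch of why contiguity matters: a single physical base address can only be programmed into hardware if the buffer is one chunk. The struct and field names are illustrative only:

    #include <assert.h>
    #include <stdint.h>

    struct boot_buf {
            int      nr_chunks;    /* physical chunks backing the buffer */
            uint64_t chunk0_base;  /* physical base of the first chunk */
    };

    /* Programming a physical address into h/w assumes one contiguous
     * chunk; a multi-chunk allocation is what triggered the warning dump
     * mentioned in the commit message. */
    static uint64_t boot_buf_phys_base(const struct boot_buf *buf)
    {
            assert(buf->nr_chunks == 1);
            return buf->chunk0_base;
    }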
Deepak Nibade
577c69322e gpu: nvgpu: increase bootstrap vidmem carveout to 256M
We currently have a bootstrap carveout in vidmem of size 16M with its base
address at {total_vidmem_size - 256M}.
So this design divides the rest of the vidmem into two chunks.

The bootstrap carveout is also small and insufficient for the vidmem
allocations needed during boot.

Hence increase the bootstrap vidmem carveout to 256M and move it to the end
of the entire vidmem.

Rename the carveout from wpr_co to bootstrap_co as that is more appropriate.

Also update __nvgpu_vidmem_do_clear_all() to clear only one chunk of vidmem
instead of two.

Bug 2180284
Jira NVGPUT-12

Change-Id: I9c8d62bcd705c7112385df3d4f714e0190b48e17
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1805466
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-09-06 16:12:17 -07:00
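A small sketch of the resulting layout arithmetic described above (the helper name is made up): the 256M bootstrap carveout now occupies the very end of vidmem, leaving a single contiguous region below it instead of two.

    #include <stdint.h>

    #define SZ_256M (256ULL << 20)

    /* Carveout occupies [size - 256M, size); the rest of vidmem is one
     * chunk at [0, size - 256M). */
    static void vidmem_layout(uint64_t vidmem_size,
                              uint64_t *bootstrap_base,
                              uint64_t *bootstrap_size,
                              uint64_t *main_size)
    {
            *bootstrap_size = SZ_256M;
            *bootstrap_base = vidmem_size - SZ_256M;
            *main_size      = vidmem_size - SZ_256M;
    }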
Konsta Holtta
390185200f gpu: nvgpu: clean up channel header includes
Remove a few unnecessary includes from channel_gk20a.h and add them to the
.c files where needed.

Jira NVGPU-967

Change-Id: Ic38132c776a56b6966424806faab7871575b6c10
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1804609
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-08-24 14:57:44 -07:00
Aparna Das
2df33e32e4 gpu: nvgpu: do not access register in vidmem destroy
Do vidmem destroy only if the get_vidmem_size HAL op is
set, which skips this for iGPU. Do not read the vidmem
size explicitly in vidmem destroy in the shutdown path after
prepare poweroff.

Bug 200427479

Change-Id: Ic919b03d44b5505646b449fd74f9f5d3e9e0dfee
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1776388
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-07-19 22:15:02 -07:00
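A minimal sketch of the guard this commit describes, skipping vidmem teardown when the get_vidmem_size HAL op is not populated (the iGPU case); the surrounding structure and function names are assumptions:

    struct gk20a;

    struct gpu_ops {
            unsigned long long (*get_vidmem_size)(struct gk20a *g);
    };

    /* Bail out early on iGPU and never touch FB registers here, which
     * also avoids register access after prepare_poweroff in shutdown. */
    static void nvgpu_vidmem_destroy_sketch(struct gk20a *g,
                                            const struct gpu_ops *ops)
    {
            if (!ops->get_vidmem_size)
                    return;
            /* ... tear down vidmem allocators and bookkeeping ... */
    }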
Nitin Kumbhar
f9da1781f6 gpu: nvgpu: skip destroy if vidmem not initialized
The vidmem shall be destroyed only if it has been
initialized. If not skipped, the destroy path accesses
mutexes which are in an invalid state. This results in a BUG like:

BUG: spinlock bad magic on CPU#0, rmmod/1560

Also, destroy vidmem bootstrap allocator which is
set up in nvgpu_vidmem_init().

Bug 1987855

Change-Id: I68e91422a54b40feeb9071158b797828e2391303
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730535
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-06-14 06:44:06 -07:00
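A sketch of the init/destroy pairing described above; the flag name is an assumption, but the idea is that destroy only touches state that init actually set up, which is what avoids the "spinlock bad magic" style BUG:

    #include <stdbool.h>

    struct vidmem {
            bool initialized;   /* set at the very end of a successful init */
            /* mutexes, bootstrap allocator, clear-thread state, ... */
    };

    static void vidmem_destroy(struct vidmem *v)
    {
            if (!v->initialized)
                    return;     /* nothing was set up; mutexes are garbage */

            /* destroy the bootstrap allocator created in init, then the
             * remaining allocators and locks */
            v->initialized = false;
    }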
Terje Bergstrom
dd739fcb03 gpu: nvgpu: Remove gk20a_dbg* functions
Switch all logging to nvgpu_log*(). The gk20a_dbg* macros are
intentionally left in place because they are still used from other repositories.

Because the new functions do not work without a pointer to struct
gk20a, and piping it just for logging is excessive, some log messages
are deleted.

Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704148
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-09 18:26:04 -07:00
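A before/after sketch of the call-site change described above; the exact macro arguments are assumptions based on the commit text (the new macros need a struct gk20a pointer):

    /* before: device-less debug macro
     *     gk20a_dbg_info("vidmem buffer cleared");
     *
     * after: logging is routed through the device context
     *     nvgpu_log_info(g, "vidmem buffer cleared");
     *
     * Where no 'g' is conveniently in scope and plumbing one through just
     * for a log line would be excessive, the message is dropped instead,
     * as the commit notes. */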
Shashank Singh
e1200259ba gpu: nvgpu: fix misc issue for dgpu code on QNX build
- QNX is pulling dgpu code from Linux, which has
  multiple build failures on QNX. For example, QNX needs
  explicit declarations for all non-static functions, and
  some Linux-specific headers need to be put under the
  __KERNEL__ flag.

Change-Id: I15af1a1f6a069c82f9a81449f4f7c7d48612de42
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665752
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-10 09:43:51 -07:00
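An illustrative example of the two kinds of fixes listed above; the function and header names are placeholders, not the actual changes:

    /* 1. QNX requires an explicit declaration for every non-static
     *    function, so prototypes are added to headers: */
    int nvgpu_example_init(void);

    /* 2. Linux-only headers are fenced off so the QNX build never sees
     *    them: */
    #ifdef __KERNEL__
    #include <linux/slab.h>
    #endif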
Thomas Fleury
6c33a010d8 gpu: nvgpu: add placeholder for IPA to PA
Add a __nvgpu_sgl_phys function that can be used to implement IPA
to PA translation in a subsequent change.
Adapt existing function prototypes to add a pointer to the gpu context,
as we will need to check whether IPA to PA translation is needed.

JIRA EVLR-2442
Bug 200392719

Change-Id: I5a734c958c8277d1bf673c020dafb31263f142d6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673142
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-13 00:04:16 -07:00
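A simplified sketch of the placeholder described above; the sgl layout shown here is invented for the example, and only the function name and the added gpu-context parameter come from the commit:

    #include <stdint.h>

    struct gk20a;                          /* gpu context, now passed in */
    struct nvgpu_sgl { uint64_t addr; };   /* simplified for this sketch */

    /* Placeholder: return the address unchanged. A later change can look
     * at 'g' to decide whether an IPA -> PA translation is required,
     * e.g. when running under a hypervisor. */
    static uint64_t __nvgpu_sgl_phys(struct gk20a *g, struct nvgpu_sgl *sgl)
    {
            (void)g;
            return sgl->addr;
    }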
Alex Waterman
da9b549cd1 gpu: nvgpu: Correctly plumb -EAGAIN from vidmem allocations
Userspace can and should retry vidmem allocations if there are pending
clears still to be executed by the GPU. But this requires the -EAGAIN
to properly propagate back to userspace.

Bug 200378648

Change-Id: Ib930711270439843e043d65c2e87b60612a76239
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669099
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-08 04:27:39 -08:00
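A hedged sketch of the propagation this commit fixes; the helper below is hypothetical, and the point is that -EAGAIN must reach userspace unmodified so the allocation can simply be retried:

    #include <errno.h>

    /* Stand-in for the allocator: pretend clears are still pending. */
    static int vidmem_try_alloc(void)
    {
            return -EAGAIN;
    }

    static int vidmem_alloc_ioctl(void)
    {
            int err = vidmem_try_alloc();

            if (err)
                    return err;   /* -EAGAIN passes through, not remapped */
            return 0;
    }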
Konsta Holtta
ba8fa334f4 gpu: nvgpu: introduce explicit nvgpu_sgl type
The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl)
argument which is a void pointer. Change the type signatures to take
struct nvgpu_sgl * which is an opaque marker type that makes it more
difficult to pass around wrong arguments, as anything goes for void *.
Explicit types also add self-documentation to the code.

For some added safety, some explicit type casts are now required in
implementors of the nvgpu_sgt_ops interface when converting between the
general nvgpu_sgl type and implementation-specific types.  This is not
purely a bad thing because the casts explain clearly where type
conversions are happening.

Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305

Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 12:24:06 -08:00
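A sketch of the typing change described above; the ops shown are a subset with assumed signatures, and the point is the opaque marker type replacing void *:

    /* Declared but never defined: an opaque marker type. Passing an
     * unrelated pointer no longer silently type-checks the way void *
     * did. */
    struct nvgpu_sgl;

    struct nvgpu_sgt_ops {
            struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
            unsigned long long (*sgl_length)(struct nvgpu_sgl *sgl);
            /* ... */
    };

    /* Implementations now cast explicitly, e.g. a Linux backend would do
     *     struct scatterlist *sg = (struct scatterlist *)sgl;
     * which makes the conversion point visible, as the commit argues. */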
Konsta Holtta
91114cd6d4 gpu: nvgpu: ce: drop prefence support
Delete the gk20a_fence_in argument in gk20a_ce_execute_ops. It has never
been used and is in the way of some upcoming code cleanup.

NVGPU-43

Change-Id: Ie61e1a2f4945b1e34d64880044c265d26fa822d7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1646036
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-26 10:50:33 -08:00
Terje Bergstrom
da9b1bbac2 gpu: nvgpu: Introduce include/nvgpu/sizes.h
We use SZ_* #defines in some parts of nvgpu, but we don't explicitly
include a header that defines them. Add include/nvgpu/sizes.h, which on
Linux #includes linux/sizes.h.

Change-Id: I8f506d85c7eaa12e649f5874a87533e2f0fe9438
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1607575
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-12-01 08:37:08 -08:00
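A sketch of what such a wrapper header can look like; the Linux branch follows the commit text, while the guard macro and the fallback values are assumptions:

    /* include/nvgpu/sizes.h (illustrative) */
    #ifdef __KERNEL__
    #include <linux/sizes.h>      /* Linux already provides SZ_* */
    #else
    #define SZ_4K    0x00001000
    #define SZ_64K   0x00010000
    #define SZ_1M    0x00100000
    #define SZ_256M  0x10000000
    #endif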
Alex Waterman
86a94230c6 gpu: nvgpu: Add nvgpu/bug.h include to some MM files
Add <nvgpu/bug.h> to MM files that use any of the BUG, BUG_ON,
WARN, WARN_ON, etc, macros but do not yet include <nvgpu/bug.h>.

JIRA NVGPU-401

Change-Id: I538219683d2a52b15abf147ff4bcf6375b6cb8a0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599960
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-30 17:30:12 -08:00
David Nieto
b20e045ef1 gpu: nvgpu: fix vidmem regression
Ensure all vidmem mutexes are initialized.

bug 2004378

Change-Id: I2ffb1d8e99ecb269b36e5ea79d08db2021e54302
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583196
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-10-22 22:15:37 -07:00
Alex Waterman
8aacfb1da4 gpu: nvgpu: Add VIDMEM debugging
Add some VIDMEM debugging to help track the background free
thread and allocs/frees.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I88471b29d2a42c104666b111d0d3014110c9d56c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1576330
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-10-20 19:03:59 -07:00
Alex Waterman
e26ce10cc6 gpu: nvgpu: Convert VIDMEM work_struct to thread
Convert the work_struct used by the vidmem background clearing to
a thread to make it more cross platform. The thread waits on a
condition variable to determine when work needs to be done. The
signal comes from the DMA API when it enqueues a new nvgpu_mem that
needs clearing.

Add logic for handling suspend: the CE cannot be accessed while
the GPU is suspended. As such the background thread must be paused
while the GPU is suspended and the CE is not available.

Several other changes were also made:

  o  Move the code that enqueues a nvgpu_mem from the DMA API
     code to a function in the VIDMEM code.
  o  Move nvgpu_vidmem_get_pending_alloc() to the Linux specific
     code as this function is only used there. It's a trivial
     function that QNX can easily implement as well.
  o  Remove the was_empty logic from the enqueue. Now just always
     signal the condition variable when a new nvgpu_mem comes in.
  o  Move CE suspend to after MM suspend.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: Ie9286ae5a127c3fced86dfb9794e7d81eab0491c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574498
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-10-20 19:03:57 -07:00
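A condensed, hedged sketch of the wait loop described above, written with pthreads for illustration (nvgpu has its own thread and condition-variable wrappers): the DMA path signals the condition variable when it enqueues an nvgpu_mem, and the thread stays parked while the GPU is suspended because the CE is unavailable then.

    #include <pthread.h>
    #include <stdbool.h>

    struct clear_worker {
            pthread_mutex_t lock;
            pthread_cond_t  work_cond;
            bool            have_work;   /* set by the DMA enqueue path */
            bool            suspended;   /* set around GPU suspend/resume */
            bool            stop;
    };

    static void *vidmem_clear_thread(void *arg)
    {
            struct clear_worker *w = arg;

            pthread_mutex_lock(&w->lock);
            while (!w->stop) {
                    /* sleep until there is work and the GPU is awake */
                    while (!w->stop && (!w->have_work || w->suspended))
                            pthread_cond_wait(&w->work_cond, &w->lock);
                    if (w->stop)
                            break;
                    w->have_work = false;

                    pthread_mutex_unlock(&w->lock);
                    /* ... issue CE clears for each queued nvgpu_mem ... */
                    pthread_mutex_lock(&w->lock);
            }
            pthread_mutex_unlock(&w->lock);
            return NULL;
    }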
Alex Waterman
ff9c3fc20a gpu: nvgpu: Reduce usage of nvgpu_vidmem_get_page_alloc
Reduce the usage of nvgpu_vidmem_get_page_alloc() and friends as much
as possible. This reduces the dependency of nvgpu on Linux SGLs. SGLs
still need to be used, however, since sharing buffers in userspace is
done by dma_buf FD. The best way to pass the vidmem buf through the
dma_buf is by SGL pointer.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: Ide0e9e5a557f00aa63b063be085042101a5b34ee
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540709
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:34 -07:00
Alex Waterman
59e4089278 gpu: nvgpu: Separate vidmem alloc from Linux code
Split the core vidmem allocation from the Linux component of vidmem
allocation. The core vidmem allocation allocates the nvgpu_mem struct
that defines the vidmem buffer in the core MM code. The Linux code
now allocates some Linux specific stuff (dma_buf, etc) and also
allocates the core vidmem buf.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I88e87e0abd5ec714610eacc6eac17e148bcee3ce
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540708
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:28 -07:00
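A rough sketch of the split described above; the function names are invented for illustration. The common code owns the nvgpu_mem allocation, and the Linux layer wraps it with the dma_buf plumbing:

    #include <stddef.h>

    struct gk20a;
    struct nvgpu_mem;

    /* common/mm: allocate and describe the vidmem buffer */
    int nvgpu_vidmem_core_alloc(struct gk20a *g, size_t size,
                                struct nvgpu_mem **out);

    /* Linux layer: delegate the allocation, then add the dma_buf/fd
     * plumbing userspace needs. */
    static int vidmem_buf_alloc_linux(struct gk20a *g, size_t size)
    {
            struct nvgpu_mem *mem;
            int err = nvgpu_vidmem_core_alloc(g, size, &mem);

            if (err)
                    return err;
            /* ... create a dma_buf backed by 'mem' and install an fd ... */
            return 0;
    }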
Alex Waterman
88d5f6b415 gpu: nvgpu: Rename vidmem APIs
Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure
consistency and that all the non-static vidmem functions are
properly namespaced.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:23 -07:00
Alex Waterman
3c37701377 gpu: nvgpu: Split VIDMEM support from mm_gk20a.c
Split VIDMEM support into its own code files, organized as follows:

  common/mm/vidmem.c     - Base vidmem support
  common/linux/vidmem.c  - Linux specific user-space interaction
  include/nvgpu/vidmem.h - Vidmem API definitions

Also use the config to enable/disable VIDMEM support in the makefile
and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible
from the source code.

And lastly update a while-loop that iterated over an SGT to use the
new for_each construct for iterating over SGTs.

Currently this organization is not perfectly adhered to. More patches
will fix that.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 08:01:04 -07:00
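For the while-loop conversion mentioned above, an illustrative before/after; the macro and helper names follow the commit's description and may not match the tree exactly:

    /* before (manual walk):
     *     void *sgl = sgt->sgl;
     *     while (sgl) {
     *             total += nvgpu_sgt_get_length(sgt, sgl);
     *             sgl = nvgpu_sgt_get_next(sgt, sgl);
     *     }
     *
     * after (for_each construct):
     *     nvgpu_sgt_for_each_sgl(sgl, sgt) {
     *             total += nvgpu_sgt_get_length(sgt, sgl);
     *     }
     */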