Commit Graph

774 Commits

Author SHA1 Message Date
Konsta Holtta
b0dee2f26c gpu: nvgpu: don't run cde shader for 0 ctaglines
If the associated buffer is not compressed, calling the cde swizzler
shader with zero lines would be invalid. The fences in
PREPARE_COMPRESSIBLE_READ still need to be managed, so just do a dummy
submit with zero entries when the buffer's line count is zero.
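
As a sketch of the control flow only (the helper names and arguments
here are hypothetical, not the actual nvgpu functions):

  /* Hypothetical sketch: run the swizzler only for compressed buffers. */
  static int cde_prepare_read(struct cde_ctx *ctx, struct dma_buf *buf,
                              u32 lines)
  {
      if (lines == 0)
          /* Not compressed: still produce the fences expected by
           * PREPARE_COMPRESSIBLE_READ, but submit zero entries. */
          return cde_submit(ctx, buf, NULL /* entries */, 0);

      return cde_submit(ctx, buf, ctx->swizzler_entries, lines);
  }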

Bug 1856088

Change-Id: Ia68c2ffff21e5e8077d5c550b0ca44090f88bf80
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1590055
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-22 09:49:30 -08:00
Seema Khowala
8fe633449f gpu: nvgpu: Add check_priv_security fuse ops
-A new fuse op is added to set the NVGPU_SEC_PRIVSECURITY
 and NVGPU_SEC_SECUREGPCCS bits in g->enabled_flags
 during HAL initialization.

-For igpu non-simulation platforms, fuses are read
 to decide whether the gpu should be allowed to boot (sketched below).
--Do not boot the gpu if priv_sec_en is set but wpr_enabled
  is not set to 1 or vpr_auto_fetch_disable is not set to 0.
--With priv_sec_en set, all falcons have to boot
  in LS mode, which requires wpr_enabled set to 1
  AND vpr_auto_fetch_disable set to 0. In this case the
  gmmu pulls the wpr and vpr settings from the tegra mc.
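
A minimal sketch of that boot decision (the fuse helper names are
assumptions; the enabled-flag setter is used as described above):

  static int check_priv_security(struct gk20a *g)
  {
      if (!fuse_priv_sec_en(g))
          /* Non-secure part: nothing more to check. */
          return 0;

      /* priv_sec_en set: falcons must boot in LS mode, which needs
       * wpr_enabled == 1 and vpr_auto_fetch_disable == 0. */
      if (!fuse_wpr_enabled(g) || fuse_vpr_auto_fetch_disable(g)) {
          nvgpu_err(g, "invalid WPR/VPR fuse state, not booting GPU");
          return -EINVAL;
      }

      __nvgpu_set_enabled(g, NVGPU_SEC_PRIVSECURITY, true);
      __nvgpu_set_enabled(g, NVGPU_SEC_SECUREGPCCS, true);
      return 0;
  }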

Bug 2018223

Change-Id: Iceaa1b0b3214e9a3d6cef5d77a82e034302f748b
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595454
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-22 00:59:28 -08:00
Seema Khowala
f34a4d0b12 gpu: nvgpu: CONFIG_TEGRA_ACR is supported by default
The TEGRA_ACR config is supposed to be enabled from maxwell
onwards. Since gk20a is no longer supported, delete the code
that is not under the TEGRA_ACR config.

Change-Id: Id52485680bca1ceaadcb94f9603c0898c2002e02
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595437
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-22 00:59:18 -08:00
Mahantesh Kumbar
f53a0dd96b gpu: nvgpu: falcon interface update
-Added nvgpu_flcn_mem_scrub_wait() to the
 falcon interface layer to poll for imem/dmem
 scrubbing completion for up to 1 msec with a
 status check interval of 10 usec (sketched below).
-Called nvgpu_flcn_mem_scrub_wait() in the
 falcon reset interface to check scrubbing
 status upon falcon/engine reset.
-Replaced the mem scrubbing wait code in
 pmu_enable_hw() with a call to
 nvgpu_flcn_mem_scrub_wait().
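
The polling loop amounts to roughly the following (a sketch only;
the delay helper and the completion predicate are stand-ins):

  #define MEM_SCRUB_TIMEOUT_US 1000 /* 1 msec total */
  #define MEM_SCRUB_POLL_US    10   /* 10 usec per check */

  int nvgpu_flcn_mem_scrub_wait(struct nvgpu_falcon *flcn)
  {
      unsigned int elapsed = 0;

      while (!flcn_mem_scrub_done(flcn)) { /* stand-in predicate */
          if (elapsed >= MEM_SCRUB_TIMEOUT_US)
              return -ETIMEDOUT;
          nvgpu_udelay(MEM_SCRUB_POLL_US);
          elapsed += MEM_SCRUB_POLL_US;
      }

      return 0;
  }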

Bug 200346134

Change-Id: Iac68e24dea466f6dd5facc371947269db64d238d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598644
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-20 00:34:22 -08:00
Mahantesh Kumbar
1ab4754c05 gpu: nvgpu: Kill pg init thread if pmu boot fails
- Created the nvgpu_kill_task_pg_init() method to set the
pmu state to PMU_STATE_EXIT, make the thread stop,
and poll to confirm the thread has stopped.
- Check the PMU/SEC2 ACR secure boot completion
status and initiate the pg init thread kill if ACR boot
exits with an error, i.e. it failed to validate and
boot LS-PMU.
- Set the pmu state to PMU_STATE_OFF after the thread kill
during ACR boot failure.

Issue: the pg init task blocks if PMU boot fails and
causes the kernel to show the message "task nvgpu_pg_init_g:2120
blocked for more than 120 seconds"

Bug 200346134

Change-Id: I5270426080dcd628ccca4df798005294c19767a0
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1582593
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-20 00:34:15 -08:00
Terje Bergstrom
9d04e97093 gpu: nvgpu: Remove separation of t18x code
Remove the separation of t18x-specific code and fields and the associated
ifdefs. We can now always build in the t18x code.

Change-Id: I4e8eae9c30335632a2da48b418c6138193831b4f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595431
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-17 16:29:41 -08:00
Alex Waterman
35ae4194a0 gpu: nvgpu: Add translation for NVGPU MM flags
Add a translation layer to convert from the NVGPU_AS_* flags to
the new set of NVGPU_VM_MAP_* and NVGPU_VM_AREA_ALLOC_* flags.
This allows the common MM code to not depend on the UAPI header
defined for Linux (a sketch of the translation follows below).

In addition to this change a couple of other small changes were
made:

1. Deprecate, print a warning, and ignore usage of the
   NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS flag.
2. Move the t19x IO coherence flag from the t19x UAPI header
   to the regular UAPI header.
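
The translation itself is a plain flag-by-flag conversion; a sketch
(the individual NVGPU_VM_MAP_* names below are illustrative, only the
flag families are given above):

  static u32 nvgpu_vm_translate_as_flags(struct gk20a *g, u32 as_flags)
  {
      u32 core_flags = 0;

      if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_FIXED_OFFSET)
          core_flags |= NVGPU_VM_MAP_FIXED_OFFSET;
      if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE)
          core_flags |= NVGPU_VM_MAP_CACHEABLE;
      if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS)
          nvgpu_warn(g, "MAPPABLE_COMPBITS is deprecated, ignoring");

      return core_flags;
  }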

JIRA NVGPU-293

Change-Id: I146402b0e8617294374e63e78f8826c57cd3b291
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599802
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-17 16:17:20 -08:00
Deepak Nibade
b42fb7ba26 gpu: nvgpu: move vgpu code to linux
Most of the VGPU code is linux-specific but lives in common code.
So until the VGPU code is properly abstracted and made os-independent,
move all of the VGPU code to the linux-specific directory.

Handle the corresponding Makefile changes.
Update all #includes to reflect the new paths.
Add the GPL license to the newly added linux files.

Jira NVGPU-387

Change-Id: Ic133e4c80e570bcc273f0dacf45283fefd678923
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599472
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-17 08:27:19 -08:00
Alex Waterman
b7cc3a2aa6 gpu: nvgpu: Fix some barrier usage
Commit 81868a187f updated barrier
usage to use the nvgpu wrappers and in doing so downgraded many
plain barriers {mb(), wmb(), rmb()} to the SMP versions of these
barriers.

The SMP versions of the barriers in question are only issued
when running on an SMP machine. In most of the cases mentioned
above this is fine, since the barriers are present to facilitate
proper ordering across CPUs. A single CPU is always coherent
with itself, so in the non-SMP case we don't need those barriers.

However, there are a few places (GMMU page table programming,
IO accessors, userd) where the barrier usage is for communicating
with and establishing ordering for the GPU. We need these barriers
on both SMP and non-SMP machines. Therefore we must use the plain
barrier versions.
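
For illustration (not a specific call site in the driver), the
difference looks like this:

  /* GPU-visible ordering: the device observes memory regardless of the
   * number of CPUs, so a plain wmb() is required even on !SMP. */
  pte_mem[idx]     = pte_lo;
  pte_mem[idx + 1] = pte_hi;
  wmb();                          /* order PTE writes before the kick */
  writel(invalidate_cmd, mmu_invalidate_reg);

  /* CPU-to-CPU ordering only: another CPU is the observer, so the SMP
   * variant is sufficient and compiles away on !SMP builds. */
  shared->data = value;
  smp_wmb();
  shared->ready = true;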

Change-Id: I376129840b7dc64af8f3f23f88057e4e81360f89
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-16 15:55:52 -08:00
Alex Waterman
463c6f4c74 gpu: nvgpu: Mark nvgpu_pde_phys_addr static
nvgpu_pde_phys_addr() is only used in gmmu.c and as such can be
marked static.

JIRA NVGPU-402

Change-Id: I7adba6f54ebd4e06d176f23b9a959c04a8770338
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599040
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-16 12:39:08 -08:00
Alex Waterman
201fb02c24 gpu: nvgpu: Always allocate zeroed DMA mem
Always allocate explicitly zeroed DMA memory and remove the
unnecessary memset() from the alloc path for memory with a
kernel mapping.

JIRA NVGPU-418

Change-Id: I5a3df6e6969e2586df41b72325d1bff1e40206e6
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598933
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-16 12:39:01 -08:00
Deepak Nibade
ba8dc31859 Merge remote-tracking branch 'remotes/origin/dev/linux-nvgpu-t19x' into linux-nvgpu
Bug 200363166

Change-Id: Ic662d7b44b673db28dc0aeba338ae67cf2a43d64
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
2017-11-15 23:21:35 -08:00
Sami Kiminki
69e032653d gpu: nvgpu: Add synchronization to comptag alloc and clearing
Comptag allocation and clearing were not synchronized for a
buffer. Fix this race by serializing the operations with the
gk20a_dmabuf_priv lock. While doing that, add an error check to
the cbc_ctrl call.

Bug 1902982

Change-Id: Icd96f1855eb5e5340651bcc85849b5ccc199b821
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597904
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 13:26:19 -08:00
Terje Bergstrom
44f8b11f47 gpu: nvgpu: Remove GPU characteristics from gk20a
Remove a global copy of GPU characteristics in struct gk20a. Instead
fill it at the Linux implementation of GPU characteristics IOCTL.

JIRA NVGPU-388

Change-Id: Idc4ad58301d44a554777f5b969f3191a342e73fd
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597330
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 13:26:15 -08:00
Sami Kiminki
1f28b429a2 gpu: nvgpu: Always do full buffer compbits allocs
Remove the parameter 'lines' from gk20a_alloc_or_get_comptags() and
nvgpu_ctag_buffer_info. We're always doing full-buffer allocs
anyway. This simplifies the code a bit.

Bug 1902982

Change-Id: Iacfc9cdba8cb75b31a7d44b175660252e09d605d
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597131
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 13:26:06 -08:00
Sami Kiminki
23396c58db gpu: nvgpu: Simplify compbits alloc and add needs_clear
Simplify compbits alloc by making the alloc function re-callable for
the buffer, and making it return the comptags info. This simplifies
the calling code: alloc_or_get vs. get + alloc + get again.

Add tracking of whether the allocated compbits need clearing before they
can be used in PTEs. We do this since clearing is part of the gmmu
map call on vgpu, which can fail.
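
From the caller's point of view the flow becomes roughly (a sketch;
the exact signature and the clear helper are assumptions):

  struct gk20a_comptags comptags;
  int err;

  /* Re-callable: allocates on first call, returns existing info after. */
  err = gk20a_alloc_or_get_comptags(g, buf, &g->gr.comp_tags, &comptags);
  if (err)
      return err; /* fall back to an incompressible kind */

  if (comptags.needs_clear) {
      /* On vgpu clearing is part of the gmmu map call and can fail,
       * so it is tracked instead of assumed done at alloc time. */
      err = gk20a_comptags_clear(g, buf, &comptags); /* stand-in */
      if (err)
          return err;
  }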

Bug 1902982

Change-Id: Ic4ab8d326910443b128e82491d302a1f49120f5b
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597130
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 13:26:02 -08:00
Sami Kiminki
434385ca54 gpu: nvgpu: Clean up comptag data structs and alloc
Clean up the comptag-related data structures and allocation logic. The
most important change is that we only ever try comptag allocation once
to prevent incorrect map aliasing.

If we were to retry the allocation on further map calls, the following
situation would become possible:
(1) Request compressible kind mapping for a buffer. Comptag alloc failed
    and we proceed with incompressible kind fallback.
(2) Request another compressible kind mapping for a buffer. Comptag alloc
    retry succeeded and now we use the compressible kind.
(3) After writes through the compressible kind mapping, the buffer is no
    longer legible via the fallback incompressible kind mapping.

The other changes are about removing the unused comptag-related fields
in gk20a_comptags and nvgpu_mapped_buf, and retrieving comptags info
only for compressible buffers. We also make nvgpu_ctag_buffer_info and
nvgpu_vm_compute_compression private mm/vm.c definitions, since
they're not used elsewhere.

Bug 1902982

Change-Id: I0c9fe48ccc585a80dd2c05ec606a079c1c1d41f1
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595153
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 13:25:58 -08:00
Deepak Nibade
3ff666c4b9 gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs
The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated and user space should use
a combination of timeslice and interleave levels to decide the priority.

Hence remove the IOCTLs and all corresponding APIs.

Jira NVGPU-393

Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 08:46:09 -08:00
Terje Bergstrom
744d5a5212 gpu: nvgpu: vgpu: Implement clk.get_maxfreq
Modify the HAL clk->get_maxfreq() signature to match the ones in
clk->set_rate() and clk->get_rate(). This allows support of multiple
clocks.

Implement clk.get_maxfreq operation for vgpu and use it to
fill max_freq field in GPU characteristics query.

JIRA NVGPU-388

Change-Id: I93bfc2aa76e38b8a5e0ac55d87c4e26df6fea77f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597329
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-14 15:46:58 -08:00
Seema Khowala
5944f49f55 gpu: nvgpu: wrapper for checking if bpmp running
Add an nvgpu_is_bpmp_running() API for checking whether the bpmp
is running or not. This API calls tegra_bpmp_running()
and returns the value returned by it.
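
The wrapper itself is trivial; roughly (a sketch of the Linux backend,
header includes omitted):

  bool nvgpu_is_bpmp_running(struct gk20a *g)
  {
      /* Thin OS wrapper so common code does not have to call the
       * Tegra API directly. */
      return tegra_bpmp_running();
  }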

Bug 2018223

Change-Id: I42c1dbec65733fdc89a8fc3846e8c3afb2dcfb8d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595349
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-14 15:46:55 -08:00
David Gilhooley
b22c5911dd gpu: nvgpu: Pass DMA allocation flags correctly
There are flags that need to be passed to both dma_alloc
and sg_alloc together. Update nvgpu_dma_alloc_flags_sys to always
pass flags.

Bug 1930032

Change-Id: I10c4c07d7b518d9ab6c48dd7a0758c68750d02a6
Signed-off-by: David Gilhooley <dgilhooley@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596848
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-14 11:15:58 -08:00
Deepak Nibade
90aeab9dee gpu: nvgpu: define preemption modes in common code
We use the linux-specific graphics/compute preemption modes defined in the
uapi header (of the form below) all over common code:
NVGPU_GRAPHICS_PREEMPTION_MODE_*
NVGPU_COMPUTE_PREEMPTION_MODE_*

Since common code should be independent of linux-specific code, define new
modes of the following form in common code and use them everywhere:
NVGPU_PREEMPTION_MODE_GRAPHICS_*
NVGPU_PREEMPTION_MODE_COMPUTE_*

Add the required parser functions to convert between the two sets of modes
(a sketch follows below).

For the linux IOCTL NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, we need to convert
the linux-specific modes into common modes before passing them to common code.

And to pass GPU characteristics to user space, we need to first convert the
common modes into linux-specific modes and then pass them to user space.
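
Each parser is a simple one-to-one conversion; a sketch of one direction
(the individual common-mode names follow the pattern above but are
otherwise assumptions):

  static u32 nvgpu_get_common_graphics_preempt_mode(u32 uapi_mode)
  {
      switch (uapi_mode) {
      case NVGPU_GRAPHICS_PREEMPTION_MODE_WFI:
          return NVGPU_PREEMPTION_MODE_GRAPHICS_WFI;
      case NVGPU_GRAPHICS_PREEMPTION_MODE_GFXP:
          return NVGPU_PREEMPTION_MODE_GRAPHICS_GFXP;
      default:
          return 0;
      }
  }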

Jira NVGPU-392

Change-Id: I8c62c6859bdc1baa5b44eb31c7020e42d2462c8c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596930
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-14 04:58:39 -08:00
Terje Bergstrom
fd2cac59f3 gpu: nvgpu: Include UAPI explicitly
Add explicit #includes for <uapi/linux/nvgpu.h> for source code files
that depend on it.

JIRA NVGPU-259

Change-Id: I717d5f1493423fd3a7a34b6dd3380d33a9307a09
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-13 18:56:30 -08:00
Terje Bergstrom
d64241cb5a gpu: nvgpu: Include UAPI explicitly
Add explicit #includes for <uapi/linux/nvgpu.h> for source code files
that depend on it.

JIRA NVGPU-388

Change-Id: I5d834e6f3b413cee9b1e4e055d710fc9f2c8f7c2
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596246
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-13 10:57:21 -08:00
Terje Bergstrom
8e611fb654 gpu: nvgpu: Hard code map_buffer_batch_limit
Add a hard-coded #define for map_buffer_batch_limit and use that
instead of querying it from GPU characteristics. Also add an
nvgpu_is_enabled() flag for disabling batch mapping, and set
map_buffer_batch_limit to zero if batch mapping is disabled.

JIRA NVGPU-388

Change-Id: Ic91feea638d0f47c5c22321886cfc75e97259dc3
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593690
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-13 10:56:54 -08:00
Terje Bergstrom
4c451b06bd gpu: nvgpu: Move max_css_buffer_size to gr_gk20a
max_css_buffer_size was accessed directly from GPU characteristics,
which added a dependency on Linux. Move the field to gr_gk20a and
copy it to GPU characteristics at query time.

JIRA NVGPU-259

Change-Id: Ied19e33bf1a79a9ce45e33df57fe5bbe3a3c4f9d
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593689
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-12 11:34:03 -08:00
Alex Waterman
01c98eb680 gpu: nvgpu: VM map path refactoring
Final VM mapping refactoring. Move most of the logic in the VM
map path to the common/mm/vm.c code and use the generic APIs
previously implemented to deal with comptags and map caching.

This also updates the mapped_buffer struct to finally be free
of the Linux dma_buf and scatter gather table pointers. This
is replaced with the nvgpu_os_buffer struct.

JIRA NVGPU-30
JIRA NVGPU-71
JIRA NVGPU-224

Change-Id: If5b32886221c3e5af2f3d7ddd4fa51dd487bb981
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583987
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 15:47:01 -08:00
Alex Waterman
8428c82c81 gpu: nvgpu: Add nvgpu_os_buffer
Add a generic nvgpu_os_buffer type, defined by each OS, to abstract
a "user" buffer. This allows the comptag interface to be used in the
core code.

The end goal of this patch is to allow the OS-specific mapping code
to call a generic mapping function that handles most of the mapping
logic. The problem is that a lot of the logic involves comptags, which
are highly dependent on the operating system's buffer management scheme.
With this, each OS can implement the buffer comptag mechanics
however it wishes without the core MM code caring.

JIRA NVGPU-30
JIRA NVGPU-223

Change-Id: Iaf64bc52e01ef3f262b4f8f9173a84384db7dc3e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583986
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 15:46:58 -08:00
Alex Waterman
ee4970a33f gpu: nvgpu: Make buf alignment generic
Drastically simplify the alignment computation for buffers getting
mapped and move it into the SGT code. An SGT is all that is needed
for computing the alignment.

However, this did require that a new SGT op be added:

  nvgpu_sgt_iommuable()

This function returns true if the passed SGT is IOMMU'able and must
be implemented by an SGT implementation that has IOMMU'able buffers.
If this function is left as NULL then the buffer is assumed not to
be IOMMU'able.

Also clean up the parameter ordering convention among all nvgpu_sgt
functions. Previously there was a mishmash of different parameter
orderings. This patch now standardizes on the gk20a-first approach
seen everywhere else in the driver.
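
Callers can then treat a missing op as "not IOMMU'able"; roughly
(a sketch following the description above; the ops field name is an
assumption):

  bool nvgpu_sgt_iommuable(struct gk20a *g, struct nvgpu_sgt *sgt)
  {
      if (sgt->ops->sgt_iommuable)
          return sgt->ops->sgt_iommuable(g, sgt);

      /* Op not implemented: assume the buffer is not IOMMU'able. */
      return false;
  }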

JIRA NVGPU-30
JIRA NVGPU-246
JIRA NVGPU-71

Change-Id: Ic4ab7b752847cf795c7cfafed5a07818217bba86
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583985
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 15:46:54 -08:00
seshendra Gadagottu
6911b4d48c gpu: nvgpu: enable/disable tegra fuse clock
The GPU hardware block needs the tegra fuse clock to mirror
gpu fuses from tegra fuses to the gpu domain.
The tegra fuse driver provides the following APIs to
enable/disable the tegra fuse clock:
int tegra_fuse_clock_enable(void);
int tegra_fuse_clock_disable(void);

Ensure that the tegra fuse clock is disabled by the nvgpu
driver when the gpu hardware block is not in use by
calling tegra_fuse_clock_enable() while doing
gk20a_pm_unrailgate() and calling
tegra_fuse_clock_disable() while doing
gk20a_pm_railgate().
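
The intended hookup, sketched (error handling elided; the railgate
functions are only outlined here):

  static int gk20a_pm_unrailgate(struct device *dev)
  {
      /* Fuse clock must be on to mirror fuses into the gpu domain. */
      tegra_fuse_clock_enable();
      /* ... existing unrailgate sequence ... */
      return 0;
  }

  static int gk20a_pm_railgate(struct device *dev)
  {
      /* ... existing railgate sequence ... */
      tegra_fuse_clock_disable();
      return 0;
  }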

Bug 2019897

Change-Id: I61688829fd9a8b0c1ffa9d34db6393550f333866
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595297
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 10:30:28 -08:00
Deepak Nibade
83bdf33b56 gpu: nvgpu: remove NVGPU_ALLOC_OBJ_FLAGS_* from common code
In gr_gp10b_alloc_gr_ctx(), we use the linux-specific flags
NVGPU_ALLOC_OBJ_FLAGS_*.
Since common code should be independent of linux-specific code, define new
flags NVGPU_OBJ_CTX_FLAGS_SUPPORT_* in common code and use them wherever
needed.

Linux code will parse the user flags and send the appropriate flags to
g->ops.gr.alloc_obj_ctx().

Also remove the use of NVGPU_ALLOC_OBJ_FLAGS_LOCKBOOST_ZERO since this seems
to be dead code anyway.

Jira NVGPU-382

Change-Id: Id82efe0d46ddc3e2c063610025ea57f283bc3510
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594452
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 10:30:19 -08:00
Deepak Nibade
a17a938a48 gpu: nvgpu: remove NVGPU_ALLOC_GPFIFO_EX_FLAGS_* from common code
In gk20a_channel_alloc_gpfifo(), we use the linux-specific flags
NVGPU_ALLOC_GPFIFO_EX_FLAGS_*.
Since common code should be independent of linux-specific code, define new
flags NVGPU_GPFIFO_FLAGS_SUPPORT_* in common code and use them in
gk20a_channel_alloc_gpfifo().

Linux code will parse the user flags and send the appropriate flags to
gk20a_channel_alloc_gpfifo().

Jira NVGPU-381

Change-Id: Ibec51903b3407175fbba727208483b0dc36a5772
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594422
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 10:30:10 -08:00
Sami Kiminki
cefabe7eb1 gpu: nvgpu: Remove PTE kind logic
Since NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL was made mandatory,
the kernel no longer needs to know the details of the PTE kinds.
Thus, we can remove the kind_gk20a.h header and the code
related to kind table setup, as well as simplify the buffer mapping
code a bit.

Bug 1902982

Change-Id: Iaf798023c219a64fb0a84da09431c5ce4bc046eb
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-10 08:38:19 -08:00
Terje Bergstrom
870e76fbc7 gpu: nvgpu: Move sm_arch to nvgpu_gpu_params
Move sm_arch_* fields to nvgpu_gpu_params to make them available from
common code without accessing Linux specific GPU characteristics.

JIRA NVGPU-259

Change-Id: Ieffb2ddde81b27af53dfedb9fe3972d20757cc35
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593686
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-09 19:18:21 -08:00
Terje Bergstrom
dc5f6bcee0 gpu: nvgpu: Return GPU classes in get_litter_value
Return GPU classes in HAL get_litter_value() instead of assigning
them to GPU characteristics at HAL initialization time.

JIRA NVGPU-259

Change-Id: Ife7a5cb38df3d33ce98a1caa43d3873fb1431234
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593683
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-09 19:18:11 -08:00
Terje Bergstrom
1dad4adbd2 gpu: nvgpu: Move fuse override DT handling
Move fuse override DT handling to Linux code. All the chip specific
fuse override functions did the same thing, so delete the HAL and
call the same function to read the DT overrides on all chips.

Also remove the fuse override functionality from dGPU. There are no
DT entries for PCIe devices, so it would've failed anyway.

JIRA NVGPU-259

Change-Id: Iba64a5d53bf4eb94198c0408a462620efc2ddde4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593687
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-09 14:27:04 -08:00
Alex Waterman
016231c045 gpu: nvgpu: Use only contig CBCs
Modify the LTC code to only use a contiguous CompBit Cache (CBC). The
original code had two allocation schemes: "physical" and "virtual" -
what they meant was virtually contiguous or physically contiguous. The
CBC must appear contiguous to the GPU, whether that is achieved via the
IOMMU or via physically contiguous pages.

This change makes the CBC get allocated with the FORCE_CONTIGUOUS flag
if the GPU is not IOMMU'able. If we can get contiguous memory through
the IOMMU then there is no need to force the underlying pages to be
contiguous. However, not all GPUs may be IOMMU'able, so we do need to
handle that case.

Also delete the gk20a/ltc_gk20a.[ch] code. All that remained in these
files was the CBC alloc functions, which were completely chip-agnostic.
As a result these functions were consolidated and moved to common/ltc.c.
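
The allocation decision reduces to roughly the following (a sketch;
the IOMMU predicate and the flag name are assumptions):

  static int alloc_cbc_backing(struct gk20a *g, size_t size,
                               struct nvgpu_mem *mem)
  {
      unsigned long flags = 0;

      /* Without an IOMMU the pages themselves must be contiguous;
       * with an IOMMU the mapping provides the contiguity. */
      if (!nvgpu_iommuable(g))
          flags |= NVGPU_DMA_FORCE_CONTIGUOUS;

      return nvgpu_dma_alloc_flags_sys(g, flags, size, mem);
  }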

Bug 2015747

Change-Id: I3f41961b4f94378b954e7502a6b27cf0bc627375
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593666
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 17:11:30 -08:00
Terje Bergstrom
e7c4547889 gpu: nvgpu: Hard code regops max batch size
Previously we set the regops limit in common code to a hard-coded value and
accessed it from Linux code. Change the responsibility so that the regops
limit is set to a hard-coded value in the Linux GPU characteristics query,
and use the same hard-coded value in the IOCTL limit check.

JIRA NVGPU-259

Change-Id: I2f78a7ea8f1cb68a08633a2dc74b71b3b001e5c9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593682
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 14:06:20 -08:00
Alex Waterman
e620bbccdd gpu: nvgpu: Request CONTIG allocs for large PDs
Request explicitly contiguous DMA memory for large page directory
allocations. Large in this case means greater than PAGE_SIZE. This
is necessary if the GPU's DMA allocator is set to, by default,
allocate discontiguous memory.

Bug 2015747

Change-Id: I3afe9c2990522058f6aa45f28030bc82a369ca69
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593093
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 10:37:00 -08:00
Deepak Nibade
3cb65f57d5 gpu: nvgpu: define runlist level in common code
All the runlist levels NVGPU_RUNLIST_INTERLEAVE_LEVEL_* are declared in the
linux-specific uapi header but used in common code.
Since common code should be linux-independent, move these uses out of
common code.

Define new runlist levels NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* in common code
and use them wherever required.

Add a new API nvgpu_get_common_runlist_level() to get the common runlist level
of the form NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* from the linux-specific
runlist level of the form NVGPU_RUNLIST_INTERLEAVE_LEVEL_* (a sketch follows
below).
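
The new API is a one-way table lookup; a sketch (the common-level names
follow the pattern given above):

  u32 nvgpu_get_common_runlist_level(u32 level)
  {
      switch (level) {
      case NVGPU_RUNLIST_INTERLEAVE_LEVEL_LOW:
          return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_LOW;
      case NVGPU_RUNLIST_INTERLEAVE_LEVEL_MEDIUM:
          return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_MEDIUM;
      case NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH:
          return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_HIGH;
      default:
          return NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_LOW;
      }
  }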

Jira NVGPU-259

Change-Id: Ic19239f0f8275683d5d1b981df530acd90e6dfbb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594327
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 09:09:54 -08:00
Sami Kiminki
c22a5af913 gpu: nvgpu: Remove support for legacy mapping
Make NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL mandatory for all map
IOCTLs. We'll clean up the legacy kernel code in subsequent patches.

Remove support for NVGPU_AS_IOCTL_MAP_BUFFER. It has been superseded
by NVGPU_AS_IOCTL_MAP_BUFFER_EX.

Remove legacy definitions to nvgpu_map_buffer_args and the related
flags, and update the in-kernel map calls accordingly by switching to
the newer definitions.

Bug 1902982

Change-Id: Ie9a7f02b8d5d0ec7c3722c4481afab6d39b4fbd0
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560932
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 09:09:08 -08:00
Deepak Nibade
02d281d077 gpu: nvgpu: remove use of linux specific powergate_mode flag
In dbg_set_powergate(), we use the flags
NVGPU_DBG_GPU_POWERGATE_MODE_DISABLE/ENABLE, which are defined in the
linux-specific uapi header.
Hence we need to remove those flags from common code.

Update dbg_set_powergate() to receive a boolean flag to disable/enable
powergating instead of NVGPU_DBG_GPU_POWERGATE_MODE_DISABLE/ENABLE.

Also update the corresponding HALs as per the above change.

Jira NVGPU-259

Change-Id: I9c4eb30e29ea5ce0d8e25517a6a072fb9f0e92e5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594326
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 07:57:06 -08:00
Terje Bergstrom
58dd20f86b gpu: nvgpu: Introduce queries for big page sizes
Introduce query functions for default big page size and available
big page sizes. Move initialization of GPU characteristics big
page sizes to the GPU characteristics query function.

JIRA NVGPU-259

Change-Id: Ie66cc2fbfcd88205593056f8d5010ac2539c8bc2
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593685
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-07 22:24:14 -08:00
Terje Bergstrom
a51219e526 gpu: nvgpu: Store VBIOS version in g->bios
Store VBIOS version in g->bios instead of GPU characteristics. This
removes a few Linux dependencies from common code, because GPU
characteristics is defined in Linux IOCTL header.

JIRA NVGPU-259

Change-Id: I9aab3d37b7ca000edd59c92b8601a96ee288e2bb
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593684
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-07 22:19:05 -08:00
Konsta Holtta
760f8dd7fb gpu: nvgpu: drop user callback support in CE
Simplify the copyengine code by deleting support for the
ce_event_callback feature that has never been used. Similarly, create the
channel without the finish callback to get rid of that Linux dependency,
and delete the finish callback function as it now serves no purpose.

Also delete the submitted_seq_number and completed_seq_number fields,
which are only ever written to.

Jira NVGPU-259

Change-Id: I02d15bdcb546f4dd8895a6bfb5130caf88a104e2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589320
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-07 17:10:57 -08:00
Terje Bergstrom
973553069d gpu: nvgpu: Fix missing #includes and fw decls in Linux code
ioctl_channel.h and cde.h referred to multiple structures that were
not forward-declared or explicitly #included. Add several forward
declarations and #includes. Also add an #include for
<uapi/linux/nvgpu.h> to multiple Linux .c files that were missing it.

Change-Id: Iefd52e71224d5810b5abbcc765f92bc535d7a28b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1591634
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-06 16:29:31 -08:00
Thomas Fleury
94feb18de8 gpu: nvgpu: call destructor for boardobj and boardobjgrp
Maintain a list of boardobj and boardobjgrp, so that we can free
related objects when removing pmu support. A flag is added in
boardobj so that the destructor can determine if it should free
the object. This 'allocated' flag is false when the object is
embedded into another structure, which should be freed through
other means.

JIRA EVLR-1959
Bug 200352099

Change-Id: I6a3ff3c57f7428dd145deacf98f2992a9be9796d
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1586596
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-06 13:41:36 -08:00
Thomas Fleury
4e7c9c3008 gpu: nvgpu: fix dma memory leak in remove pmu support
Add missing unmap and free for seq_buf and ucode (acr & hsbl).

JIRA EVLR-1959
Bug 200352009

Change-Id: I3e422ce07228b59554ab1407c29e45c70479134d
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1586576
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-06 13:41:27 -08:00
Thomas Fleury
d0a278b0a5 gpu: nvgpu: fix kernel memory leak in pmu remove support
When unbinding the driver, secure pmu firmware was not freed
in nvgpu_remove_pmu_support(). Free related firmware if
previously allocated.

JIRA EVLR-1959
Bug 200352099

Change-Id: If9e431964837b3233ec25931b2ab61da920e5540
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1582909
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-06 13:41:08 -08:00
Konsta Holtta
8bdce5337e gpu: nvgpu: support tuning per-ch deterministic opts
Add a new ioctl NVGPU_GPU_IOCTL_SET_DETERMINISTIC_OPTS to adjust
deterministic options on a per-channel basis. Currently, the only
supported option is to relax the no-railgating requirement on open
deterministic channels. This also disallows submits on such channels,
until the railgate option is reset.

Bug 200327089

Change-Id: If4f0f51fd1d40ad7407d13638150d7402479aff0
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1554563
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-06 12:27:35 -08:00