Commit Graph

2355 Commits

Author SHA1 Message Date
seshendra Gadagottu
3d343c9eea gpu: nvgpu: enhance pbdma debug info
Enhanced pbdma error output to print pbdma interrupt
error.

Generated following hw definitions to dump relevant data:
pbdma_gp_shadow_0_r
pbdma_gp_shadow_1_r

Updated gk20a_dump_pbdma_status to dump this additional
info:
pbdma_gp_put_r
pbdma_gp_get_r
pbdma_gp_shadow_0_r
pbdma_gp_shadow_1_r
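
As a rough sketch (assuming the usual gk20a_readl() accessor and the
generated register helpers, with pbdma_id as the PBDMA index), the
added dump amounts to:

  nvgpu_err(g, "pbdma_gp_shadow_0: 0x%08x",
          gk20a_readl(g, pbdma_gp_shadow_0_r(pbdma_id)));
  nvgpu_err(g, "pbdma_gp_shadow_1: 0x%08x",
          gk20a_readl(g, pbdma_gp_shadow_1_r(pbdma_id)));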

Bug 2003671

Change-Id: Iaa75d936e00470a2b8d1151f60dbeb741b3f9bce
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1572182
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:20:07 -07:00
Terje Bergstrom
be3750bc9e gpu: nvgpu: Abstract IO aperture accessors
Add abstraction of IO aperture accessors. Add new functions
gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
aperture fields from common code.

Implement Linux version of the abstraction by moving gk20a_readl()
and gk20a_writel() to new Linux specific io.c. Move the fields
defining IO aperture to nvgpu_os_linux.

Add t19x specific IO aperture initialization functions and add t19x
specific section to nvgpu_os_linux.
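
A minimal sketch of one of the new checks, assuming the aperture
fields now live in nvgpu_os_linux:

  bool gk20a_io_exists(struct gk20a *g)
  {
          struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

          /* True once the register aperture has been mapped. */
          return l->regs != NULL;
  }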

JIRA NVGPU-259

Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569698
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:55 -07:00
Alex Waterman
59e4089278 gpu: nvgpu: Separate vidmem alloc from Linux code
Split the core vidmem allocation from the Linux component of vidmem
allocation. The core vidmem allocation allocates the nvgpu_mem struct
that defines the vidmem buffer in the core MM code. The Linux code
now allocates some Linux specific stuff (dma_buf, etc) and also
allocates the core vidmem buf.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I88e87e0abd5ec714610eacc6eac17e148bcee3ce
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540708
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:28 -07:00
Alex Waterman
88d5f6b415 gpu: nvgpu: Rename vidmem APIs
Rename the VIDMEM APIs to be prefixed by nvgpu_ to ensure
consistency and that all the non-static vidmem functions are
properly namespaced.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: I9986ee8f2c8f95a4b7c5e2b9607bc1e77933ccfc
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:23 -07:00
seshendra Gadagottu
a9ce91f910 gpu: nvgpu: add syncpoint read map
For the sync-point read map:
 1. Added an nvgpu_mem member in the gk20a struct;
    memory for this is allocated in gk20a_finalize_poweron()
    and freed in gk20a_remove().
 2. Added "u64 syncpt_ro_map_gpu_va" in the vm_gk20a struct
    for the read map in the vm.

Added nvgpu_quiesce() in nvgpu_remove() before freeing the
syncpoint read map to ensure that nvgpu is idle.

JIRA GPUT19X-2

Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1514338
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:16 -07:00
Seema Khowala
ceaab0595c gpu: nvgpu: ctxsw_trace_tsg/channel_reset call order
For a non-fake mmu fault, both the tsg and ch pointers
could be valid. If the tsg pointer is non-NULL, issue the
ctxsw_trace for the tsg instead of only the channel.
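
Sketch of the call-order fix (trace helper names assumed):

  if (tsg != NULL)
          gk20a_ctxsw_trace_tsg_reset(g, tsg);
  else if (ch != NULL)
          gk20a_ctxsw_trace_channel_reset(g, ch);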

Change-Id: I161c40e8d43c7ae4d953ef4768ad75d4e993c87e
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1577915
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 13:43:22 -07:00
Seema Khowala
a8643b3a99 gpu: nvgpu: WAR: unbind_channel_verify_status ret val
Do not return an error if the channel to be removed has
NEXT set. This is a WAR until a proper fix is
identified and implemented.

Bug 200327095

Change-Id: Ia77f3b834e8e577ac2dad8281f1dd562079adcef
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1577133
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 13:43:10 -07:00
Deepak Nibade
6de92de60b gpu: nvgpu: fix channel status verify sequence
While unbinding a channel from a TSG, we first disable all the channels,
then examine the status of the channel being removed in
gk20a_fifo_tsg_unbind_channel_verify_status(); if this API fails we
re-enable all the channels and kill the whole TSG.

In gk20a_fifo_tsg_unbind_channel_verify_status() we first check the
ctx_reload and fault status and then check the NEXT status.
If the channel has NEXT set we bail out.

But since we have already changed the TSG ctx_reload status, re-enabling
all channels in the TSG might cause issues.

Hence fix this by correcting the sequence so that we first ensure that NEXT is
not set on the channel and only then alter the status.
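
Sketch of the corrected ordering (status helpers and field names
assumed):

  /* Bail out first if the channel is still active... */
  if (gk20a_fifo_channel_status_is_next(g, ch->chid)) {
          nvgpu_err(g, "channel %d to be removed has NEXT set!",
                  ch->chid);
          return -EINVAL;
  }
  /* ...and only then move CTX_RELOAD / clear the FAULTED bit. */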

Bug 200327095

Change-Id: I4f0786bc507fad5462d4cdd8d0ca91ea611ee3b5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1575905
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 13:42:58 -07:00
David Nieto
e02d14e754 gpu: nvgpu: ce: tsg and large vidmem support
Some GPUs require all channels to be in a TSG and also have vidmem sizes
larger than 4GB, neither of which was supported by the previous CE2 code.

This change creates a new property to track whether the copy engine needs to
encapsulate its kernel context in a TSG, and also modifies the copy engine
code to support much larger copies without dramatically increasing the PB
size.

JIRA: EVLR-1990

Change-Id: Ieb4acba0c787eb96cb9c7cd97f884d2119d445aa
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1573216
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
2017-10-13 13:42:30 -07:00
Konsta Holtta
036e4ea244 gpu: nvgpu: make channel worker wait interruptible
Change the wait-for-work-pending condition to interruptible in the
channel worker thread, as there's no reason to be noninterruptible.
A noninterruptible wait, even one with a timeout, causes the worker to
be printed in the Linux blocked-tasks list, which is confusing.

Change the cond signal to interruptible to match this.
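
Sketch of the wait, assuming nvgpu's cond API and a hypothetical
work-pending predicate:

  ret = NVGPU_COND_WAIT_INTERRUPTIBLE(&g->channel_worker.wq,
                  channel_worker_has_work(g), /* name hypothetical */
                  watchdog_interval_ms);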

Change-Id: I71d848b7f449a5d53fecae90c6a450c98c675c7f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1570166
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-10-12 02:30:48 -07:00
Peter Daifuku
8bd5b7e3b8 gpu: nvgpu: no hv support for write_sm_error_state
There is no current need for a virtualized version of
nvgpu_dbg_gpu_ioctl_write_single_sm_error_state, so return
-ENOSYS when virtual.
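
Minimal sketch of the guard (flag name assumed):

  /* No vgpu implementation exists; reject on virtualized GPUs. */
  if (g->is_virtual)
          return -ENOSYS;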

Bug 200331110

Change-Id: I2a8cf07b2962de4c91b752b276678bee4eea6e80
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1568906
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-11 21:02:59 -07:00
Alex Waterman
60b655330a gpu: nvgpu: Remove SGL reference from mm_gk20a.c
Remove an SGL reference from the mm_gk20a.c code. This code is
common code, and as such all Linuxisms in it need to be fixed. It just
so happens that this particular function is only used by the
CDE code, which is only present in Linux. So simply move this
function over to the CDE code.

JIRA NVGPU-30
JIRA NVGPU-225
JIRA NVGPU-226

Change-Id: Ifb0cb427c742c6d9cada382ace4a52f52474c379
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1576436
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-10-11 14:40:22 -07:00
Alex Waterman
c53b94f1dd gpu: nvgpu: Remove phys_addr_t from common code
Remove phys_addr_t from common code and replace it with u64. This
facilitates QNX compiling the common code, since phys_addr_t is a
Linux specific type.

JIRA NVGPU-30
JIRA NVGPU-226

Change-Id: I15fe2078f9cd0b07c7e90ad6e359c493afa56714
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1576432
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-11 14:40:22 -07:00
Konsta Holtta
9b20f2a15e gpu: nvgpu: report GR_SEMAPHORE_TIMEOUT on PBDMA sema timeout
As with GR's semaphore acquires that timeout, report
NVGPU_CHANNEL_GR_SEMAPHORE_TIMEOUT to userspace in the error notifier
also when a semaphore acquire timeout interrupt is received from PBDMA.
This timeout is used when the kernel watchdog timer is enabled.
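
Sketch of the PBDMA handler addition (notifier helper name from the
era's channel code, assumed):

  /* Mirror the GR semaphore-timeout reporting for PBDMA acquires. */
  gk20a_set_error_notifier(ch, NVGPU_CHANNEL_GR_SEMAPHORE_TIMEOUT);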

Bug 1782480

Change-Id: I1ceb8632548c5e89febb2b80a5850116a2d4b670
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574293
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
2017-10-10 16:26:53 -07:00
Terje Bergstrom
b41e141778 gpu: nvgpu: Declare gk20a_user_*init() in ioctl.h
The functions gk20a_user_init() and gk20a_user_deinit() are
defined in ioctl.c. Move the declarations of these functions to the new
header file ioctl.h.

JIRA NVGPU-259

Change-Id: If348b51e9032083f252a7c7717ed7bc153dbba52
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569696
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 13:40:53 -07:00
Terje Bergstrom
4f56c88feb gpu: nvgpu: Move gk20a->busy_lock to os_linux
gk20a->busy_lock is a Linux specific rw_semaphore used only
by Linux code. Move it to os_linux.

JIRA NVGPU-259

Change-Id: I220a8a080a5050732683b875d3c1d0539ba0f40e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569695
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 13:38:17 -07:00
Deepak Nibade
236573e00a gpu: nvgpu: clean up channel open/release declares
The APIs below are already declared in ioctl_channel.h; hence remove their
duplicate declarations from channel_gk20a.h:
gk20a_channel_open()
gk20a_channel_ioctl()
gk20a_channel_release()

Also move the declaration of gk20a_channel_open_ioctl() from
channel_gk20a.h to ioctl_channel.h.

Jira NVGPU-259

Change-Id: I46702ca481e41a19f92f4fe0169f95e31360abe0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1573106
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 12:08:19 -07:00
David Nieto
7c5cf70268 gpu: nvgpu: add support for pre-os FW
Pre-OS firmware takes care of, among other things, fan control until
the driver takes over. On some GPUs, not enabling this FW can lead
to physical board damage, hence it is necessary to run this firmware.

JIRA: NVGPUGV100-9

Change-Id: I18d54cfd5eb64ecec79c5dae67ac8d5bb1facf36
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549035
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-10-10 12:05:42 -07:00
Seema Khowala
bf8dca77ae gpu: nvgpu: add null check for perfbuffer enable and disable dbg ops
This is needed to disable/enable features on new chips.
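
Sketch of the guard (HAL op path and return code assumed):

  if (g->ops.dbg_session_ops.perfbuffer_enable == NULL)
          return -EINVAL;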

Bug 200352825

Change-Id: I02eb58e6fdd554ed20866fe8a8553a667541f512
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574481
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 09:24:45 -07:00
Alex Waterman
3c37701377 gpu: nvgpu: Split VIDMEM support from mm_gk20a.c
Split VIDMEM support into its own code files organized as such:

  common/mm/vidmem.c     - Base vidmem support
  common/linux/vidmem.c  - Linux specific user-space interaction
  include/nvgpu/vidmem.h - Vidmem API definitions

Also use the config to enable/disable VIDMEM support in the makefile
and remove as many CONFIG_GK20A_VIDMEM preprocessor checks as possible
from the source code.

And lastly update a while-loop that iterated over an SGT to use the
new for_each construct for iterating over SGTs.
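
Sketched with the new iterator (helper names per the SGT API being
introduced, assumed):

  void *sgl;
  u64 size = 0;

  /* Walk each chunk of the scatter-gather table. */
  nvgpu_sgt_for_each_sgl(sgl, sgt)
          size += nvgpu_sgt_get_length(sgt, sgl);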

Currently this organization is not perfectly adhered to. More patches
will fix that.

JIRA NVGPU-30
JIRA NVGPU-138

Change-Id: Ic0f4d2cf38b65849c7dc350a69b175421477069c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1540705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-10 08:01:04 -07:00
Deepak Nibade
33f475d6d8 gpu: nvgpu: kill TSG if channel has NEXT set while closing
Currently if a channel has the NEXT bit set while being closed, we just print
an error and continue the channel unbind sequence from the TSG.

But since a channel with NEXT set is active, killing it can potentially corrupt
the TSG context and cause unpredictable errors on the remaining channels/TSG.

Hence fix this by killing the whole TSG context if the channel being closed has
the NEXT bit set.

If the gk20a_fifo_tsg_unbind_channel() API returns an error, kill the TSG;
otherwise continue with the channel unbind sequence.
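
Sketch of the close path (recovery helper assumed):

  err = gk20a_fifo_tsg_unbind_channel(ch);
  if (err != 0) {
          /* NEXT set or unbind failed: kill the whole TSG. */
          gk20a_fifo_recover_tsg(g, ch->tsgid, true);
  }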

Bug 200327095

Change-Id: I2abf1a3db8ba6f105b6ca86e78006c7b2a7726cc
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1568566
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-10-04 03:37:17 -07:00
Deepak Nibade
3cd0603c42 gpu: nvgpu: verify channel status while closing per-platform
We currently call gk20a_fifo_tsg_unbind_channel_verify_status() to verify
channel status while unbinding a channel from its TSG during close.

Add support to do this verification per-platform, and keep it disabled
for vgpu platforms.

Bug 200327095

Change-Id: I19fab41c74d10d528d22bd9b3982a4ed73c3b4ca
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1572368
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 03:37:14 -07:00
Deepak Goyal
0e8aee1c1a gpu: nvgpu: skip clk gating prog for sim/emu.
For simulation/emulation platforms, clock gating
should be skipped as it is not supported.
Added new flags "can_"X"lcg" to check platform
capability before doing SLCG, BLCG and ELCG.
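
Sketch of the capability gate (flag placement assumed):

  /* Skip SLCG programming on platforms that cannot support it. */
  if (!platform->can_slcg)
          return;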

Bug 200314250

Change-Id: I4124d444a77a4c06df8c1d82c6038bfd457f3db0
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566049
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 02:24:30 -07:00
Alex Waterman
edb1166613 gpu: nvgpu: rename ops.mm.get_physical_addr_bits
Rename get_physical_addr_bits and related functions to something that
more clearly conveys what they are doing. The basic idea of these
functions is to translate from a physical GPU address to an IOMMU GPU
address. To do that, a particular bit (which varies from chip to chip)
is added to the physical address.

JIRA NVGPU-68

Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542966
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-04 02:21:47 -07:00
Alex Waterman
2559fa295d gpu: nvgpu: Add common vaddr translate function
Add a function to do address translation for IOMMU-capable GPUs.
When an iGPU is behind an IOMMU, it can pick whether to use that
IOMMU for translation by adding a bit to physical addresses. This
function takes care of that.

However, this required an abstracted nvgpu_iommuable() API to
check whether a GPU is behind an IOMMU. This patch adds that API
for Linux.
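
Sketch of the translate helper, assuming the get_iommu_bit HAL from
the rename above:

  u64 nvgpu_mem_iommu_translate(struct gk20a *g, u64 phys)
  {
          if (!nvgpu_iommuable(g) || g->ops.mm.get_iommu_bit == NULL)
                  return phys;

          /* Set the chip-specific bit to route through the IOMMU. */
          return phys | 1ULL << g->ops.mm.get_iommu_bit(g);
  }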

JIRA NVGPU-68

Change-Id: I489d14475167c019c294407372395df78c8b5feb
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542965
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-10-04 02:19:08 -07:00
Mahantesh Kumbar
adf4b33c3b gpu: nvgpu: nvgpu_pmu_disable_elpg() status check
- Added a status check for the nvgpu_pmu_disable_elpg() return value
  & print error information upon failure.
- The CIDs below are due to the missing status check of the
  nvgpu_pmu_disable_elpg() return value; this CL fixes them:
  2624546
  2624547
  2624548
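
The added check, roughly:

  err = nvgpu_pmu_disable_elpg(g);
  if (err != 0)
          nvgpu_err(g, "failed to disable elpg");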

Bug 200291879

Change-Id: I263fc6bc9e2667af478bfd7160fe205167556f99
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565998
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-27 14:17:48 -07:00
seshendra Gadagottu
84fe49a421 gpu: nvgpu: fix handling of EGPC_ETPC_SM addresses
Added new defines for the following litter values:
GPU_LIT_SMPC_PRI_BASE
GPU_LIT_SMPC_PRI_SHARED_BASE
GPU_LIT_SMPC_PRI_UNIQUE_BASE9
GPU_LIT_SMPC_PRI_STRIDE

Calculate offsets for ctx operations taking SMs per TPC
into account. The following functions are modified for this:
gr_gk20a_get_ctx_buffer_offsets
gr_gk20a_get_pm_ctx_buffer_offsets
__gr_gk20a_exec_ctx_ops

Bug 200337994

Change-Id: I3a4ca470a4107d3078b708f38601762626ce1bf1
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1539069
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 17:29:18 -07:00
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change license of OS independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
seshendra Gadagottu
1c0ea341cc gpu: nvgpu: cpu access for ctxheader
Before updating the ctxheader in gr_gk20a_ctx_patch_smpc(),
add CPU access with nvgpu_mem_begin().
After updating the ctxheader, close CPU access
with nvgpu_mem_end().

Reviewed usage of the ctxheader in other places; its
CPU access is already handled correctly.

Bug 200333285

Change-Id: I88ab0b040f95240673a4be55bcfe880a1440655b
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1564764
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-25 16:57:13 -07:00
Mahantesh Kumbar
350bb74859 gpu: nvgpu: PMU debug reorg
- Moved PMU debug related code to pmu_debug.c:
  the PMU trace buffer print,
  the PMU controller/engine status dump debug code,
  and the ELPG stats dump code.
- Removed the PMU falcon controller status dump code & used the
  nvgpu_flcn_dump_stats() method instead.
- Added a method to print ELPG stats.
- Added a PMU HAL to print PMU engine & ELPG debug info upon error.

JIRA NVGPU-96

Change-Id: Iaa3d983f1d3b78a1b051beb6c109d3da8f8c90bc
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1516640
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-09-25 00:18:59 -07:00
Mahantesh Kumbar
b5556c7490 gpu: nvgpu: Falcon IMEM/DMEM dump support
- Added a falcon interface/HAL for IMEM-copy-from
to read data from IMEM at a given location with the
requested size.
- Added a falcon interface to print IMEM/DMEM data
from a given location with the requested size using the falcon HAL.

JIRA NVGPU-105

Change-Id: I84cf7b5769b84a2baee2c7e65027539598ec1295
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1514536
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-09-25 00:18:58 -07:00
Mahantesh Kumbar
5a1165d984 gpu: nvgpu: falcon status dump support
- Added support to dump falcon controller status
- Method to print recent PC history to know the call trace
- Method to dump IMBLK info
- Updated falcon hw header files to include
registers for PC trace & IMBLK

JIRA NVGPU-105

Change-Id: Id4aaafd87113d47e552afb21b87f8b087d36004e
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1515371
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-09-25 00:18:57 -07:00
David Nieto
7eabc16b84 gpu: nvgpu: defer channel worker initialization
kthread_run() can fail if SIGKILL is triggered on an application during
driver load.

In this change we defer the channel worker init to the enqueue, to avoid
this condition during driver power-on, which would cause the driver state to
be corrupted, leaving subsequent attempts to load the driver unsuccessful.

Since this code now runs later, it is necessary to protect the task
structure with a mutex.

JIRA: EVLR-956
Bug 1816515

Change-Id: I3a159de2d1f03e70b2a3969730a927532ede2d6e
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1462490
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1460689
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 15:49:37 -07:00
David Nieto
90568a2ce5 gpu: nvgpu: allow bind to be interrupted
This change solves two problems:

(*) the possibility of a crash due to interrupting the gpu
initialization following a bind
(*) an IOVA memory leak that could prevent the GPU from binding after
about 200 bind/unbind cycles

A detailed list of fixes:

- check that the arbiter is initialized before freeing it.
- do not re-enable interrupts when MSI is enabled on unbind.
- free the semaphore sea on unbind.
- ensure we don't double-load the vbios.
- check the return value of nvgpu_mutex_init for semaphores.
- add the corresponding nvgpu_mutex_destroy calls.

bug 1816516

Change-Id: Ia8af73019e0e1183998855d55bb3eea09672a8b7
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: http://git-master/r/1465302
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: David Jarrett <djarrett@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1563019
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 15:47:01 -07:00
David Nieto
7134e9e852 gpu: nvgpu: prevent crash during unbind
This change solves crashes during unbind that were introduced in the driver
during the OS unification refactoring, due to lack of coverage of the
remove() function.

The fixes during remove are:

(1) Prevent a NULL dereference on GPUs with secure boot
(2) Prevent NULL dereferences when fecs_trace is not enabled
(3) Added a PRAMIN blocker during driver removal if HW is no longer accessible
(4) Prevent a double free of debugfs nodes, as they are handled by the
debugfs_remove_recursive() call
(5) quiesce() can now be called without checking if the HW-accessible flag
is set
(6) Added a function to free the irq, so no IRQ association is left in the
driver after it is removed
(7) Prevent a NULL dereference in nvgpu_thread_stop() if the thread is
already stopped

JIRA: EVLR-1739

Change-Id: I787d38f202d5267a6b34815f23e1bc88110e8455
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1563005
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 15:44:25 -07:00
Sunny He
17c581d755 gpu: nvgpu: SGL passthrough implementation
The basic nvgpu_mem_sgl implementation provides support
for OS specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.

This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any scatter-
gather list implementation. This labels every struct
with a set of ops which can be used to interact with
the attached scatter gather list.

Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
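
Conceptual sketch of the header struct (field and ops-table names
abridged/assumed):

  struct nvgpu_mem_sgt {
          /* Accessors: phys/DMA address, length, next node, free. */
          const struct nvgpu_sgt_ops *ops;
          /* The attached OS-specific scatter-gather list, opaque
           * to common code. */
          void *sgl;
  };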

Jira NVGPU-186

Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:55:24 -07:00
Alex Waterman
0090ee5aca gpu: nvgpu: nvgpu SGL implementation
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scattergather tables and scattergather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and also overloaded to represent
VIDMEM allocs.

The notion of a scatter gather table is crucial to a HW device
that can handle discontiguous DMA. The GPU has a MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result it makes sense
for the GPU driver to use some sort of scatter gather concept
so maximize memory usage efficiency.

To that end this patch keeps the notion of a scatter gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks - each representing a chunk of memory. To map or
use a DMA allocation SW must iterate over each block in the
SGL.

This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is simply to divest the core MM code of
any last Linuxisms. Speed and efficiency come next.

Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-22 12:52:48 -07:00
Mahantesh Kumbar
e32cc0108c gpu: nvgpu: read WPR info from fb
- Added function to read WPR info from FB
MMU registers
- Added HAL to point wpr info read function
- Replaced wpr info read from MC with HAL
- Removed debugfs header include from acr files.

JIRA NVGPU-128

Change-Id: I5ebec46bfe03b9200f2aa569f2e5a780a715616d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1564683
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-09-22 06:14:02 -07:00
Alex Waterman
b9082f0760 gpu: nvgpu: Add per-GPU total to DMA memory prints
Track the total amount of DMA memory currently outstanding for each
GPU. Print this total in the DMA debugging/logging prints.

Bug 1956137

Change-Id: I929598e5aa388ee84db0badb4eb9f7c6cbe030c7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1559518
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-19 21:52:56 -07:00
seshendra Gadagottu
1132fd2a12 gpu: nvgpu: changes related handling ctx header
The ctx header holds only the gpu va for each address space.
All other information will be held in the main
context. The ctx header will have the gpu va for the following
fields:
ctxsw_prog_main_image_context_buffer_ptr
ctxsw_prog_main_image_context_buffer_ptr_hi
ctxsw_prog_main_image_zcull_ptr
ctxsw_prog_main_image_zcull_ptr
ctxsw_prog_main_image_pm_ptr
ctxsw_prog_main_image_pm_ptr_hi
ctxsw_prog_main_image_full_preemption_ptr_hi
ctxsw_prog_main_image_full_preemption_ptr
ctxsw_prog_main_image_full_preemption_ptr_xxxx0
ctxsw_prog_main_image_full_preemption_ptr_xxxx0_v
ctxsw_prog_main_image_patch_adr_lo
ctxsw_prog_main_image_patch_adr_hi

Changes done as part of this CL:
- Read ctx_id from the main context header
- Golden context creation:
  Use gold_mem for golden context creation
  and copy the golden context from the saved golden local
  memory to the main context. No need to restore the
  golden context to the context header.
- Write ctx_patch_count and smpc_ctxsw_mode in the
  main context header only.
- Update the preemption mode in the main context header and
  the preemption buffer va in the context header.
- Updated the image patch buffer va in the context header.

Bug 1958308

Change-Id: Ic076aad8b1802f76f941d2d15cb9a8c07308e3e8
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1562680
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-09-19 17:45:29 -07:00
seshendra Gadagottu
c4370d7def gpu: nvgpu: Initialize ctxsw header counters
Initialize the following counters in the context header
for all legacy chips:
ctxsw_prog_main_image_num_save_ops
ctxsw_prog_main_image_num_restore_ops

This was already present in the code but is moved to a function,
gk20a_gr_init_ctxsw_hdr_data(), so that it can be reused across
chips.

Additionally, initialize the following preemption-related counters
in the context header for gp10b onwards:
ctxsw_prog_main_image_num_wfi_save_ops
ctxsw_prog_main_image_num_cta_save_ops
ctxsw_prog_main_image_num_gfxp_save_ops
ctxsw_prog_main_image_num_cilp_save_ops

Bug 1958308

Change-Id: I0e45ec718a8f9ddb951b52c92137051b4f6a8c60
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1562654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-09-19 17:45:28 -07:00
Seema Khowala
e229cb76ba gpu: nvgpu: fix verbose return value
The verbose return value is not taking all the channels into account.
Fix this by ORing the verbose values for all channels.

Change-Id: Id77c74458067c72792422aa69be1626c3d164e1c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549645
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-18 18:19:07 -07:00
Sami Kiminki
5d09c908b0 gpu: nvgpu: Direct GMMU PTE kind control
Allow userspace to control directly the PTE kind for the mappings by
supplying NVGPU_AS_MAP_BUFFER_FLAGS_DIRECT_KIND_CTRL for MAP_BUFFER_EX.

In particular, in this mode, the userspace will tell the kernel
whether the kind is compressible, and if so, what is the
incompressible fallback kind. By supplying only the compressible kind,
the userspace can require that the map kind will not be demoted to the
incompressible fallback kind in case of comptag allocation failure.

Add also a GPU characteristics flag
NVGPU_GPU_FLAGS_SUPPORT_MAP_DIRECT_KIND_CTRL to signal whether direct
kind control is supported.

Fix indentation of nvgpu_as_map_buffer_ex_args header comment.

Bug 1705731

Change-Id: I317ab474ae53b78eb8fdd31bd6bca0541fcba9a4
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1543462
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-15 15:45:45 -07:00
Deepak Nibade
2b7e8a2c2a gpu: nvgpu: fix channel unbind sequence from TSG
We currently remove a channel from the TSG list and disable all the channels
in the TSG while removing that channel from the TSG.
With this sequence, if any one channel in a TSG is closed, the rest of the
channels are marked as timed out and cannot be used anymore.

We need to fix this sequence as below, to allow removing a channel from an
active TSG so that the rest of the channels can still be used:

- disable all channels of TSG
- preempt TSG
- check if CTX_RELOAD is set if support is available
  if CTX_RELOAD is set on channel, it should be moved to some other channel
- check if FAULTED is set if support is available
- if NEXT is set on channel then it means channel is still active
  print out an error in this case for the time being until properly handled
- remove the channel from runlist
- remove channel from TSG list
- re-enable rest of the channels in TSG
- clean up the channel (same as regular channels)

Add the below fifo operations to support checking channel status:
g->ops.fifo.tsg_verify_status_ctx_reload
g->ops.fifo.tsg_verify_status_faulted

Define the ops.fifo.tsg_verify_status_ctx_reload operation for
gm20b/gp10b/gp106 as gm20b_fifo_tsg_verify_status_ctx_reload().
This API will check if the channel to be released has CTX_RELOAD set; if yes,
CTX_RELOAD needs to be moved to some other channel in the TSG.

Remove static from channel_gk20a_update_runlist() and export it
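
Condensed sketch of the new flow (error handling elided, helper names
assumed):

  gk20a_disable_tsg(tsg);
  g->ops.fifo.preempt_tsg(g, tsg->tsgid);
  if (g->ops.fifo.tsg_verify_status_ctx_reload != NULL)
          g->ops.fifo.tsg_verify_status_ctx_reload(ch);
  if (g->ops.fifo.tsg_verify_status_faulted != NULL)
          g->ops.fifo.tsg_verify_status_faulted(ch);
  channel_gk20a_update_runlist(ch, false);
  nvgpu_list_del(&ch->ch_entry);
  gk20a_enable_tsg(tsg);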

Bug 200327095

Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560637
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-15 12:48:21 -07:00
Deepak Nibade
460951ed09 gpu: nvgpu: fix TSG enable sequence
Due to a h/w bug in Maxwell and Pascal, we first need to enable all channels
with NEXT and CTX_RELOAD set in a TSG, and then the rest of the channels
should be enabled.
Add this sequence to gk20a_tsg_enable().

Add new APIs to enable/disable scheduling of a TSG runlist:
gk20a_fifo_enable_tsg_sched()
gk20a_fifo_disable_tsg_sched()

Add new APIs to check if a channel has NEXT or CTX_RELOAD set:
gk20a_fifo_channel_status_is_next()
gk20a_fifo_channel_status_is_ctx_reload()
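
Sketch of the ordering (list iteration and field names assumed):

  /* First pass: enable channels with NEXT/CTX_RELOAD set. */
  nvgpu_list_for_each_entry(ch, &tsg->ch_list, channel_gk20a, ch_entry) {
          if (gk20a_fifo_channel_status_is_next(g, ch->chid) ||
              gk20a_fifo_channel_status_is_ctx_reload(g, ch->chid))
                  g->ops.fifo.enable_channel(ch);
  }
  /* Second pass: enable the remaining channels. */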

Bug 1739362

Change-Id: I4891cbd7f22ebc1e0bf32c52801002cdc259dbe1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560636
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-15 12:48:20 -07:00
Deepak Nibade
7d6d040531 gpu: nvgpu: support platform specific TSG enable/disable
Add platform specific operations to enable/disable a TSG, and use them
instead of directly calling the enable/disable APIs.

For gm20b/gp106/gp10b we continue to use gk20a_enable_tsg() and
gk20a_disable_tsg() as the platform specific operations.

Bug 1739362

Change-Id: I2dd0f38c8303757e8c7a47d8da0e30a790e514f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560635
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-15 12:48:20 -07:00
Alex Waterman
72f40c211e gpu: nvgpu: page align DMA allocs
Explicitly page align DMA memory allocations. Non-page-aligned
DMA allocs can lead to nvgpu_mem structs that have a size that's
not page aligned. Those allocs, in some cases, can cause vGPU
maps to return an error.

More generally, DMA allocs in Linux are always going to be
page aligned, both in size and address. So we might as well page align
the alloc size to make code elsewhere in the driver simpler.

To implement this, an aligned_size field has been added to struct
nvgpu_mem. This field has the real page-aligned size of the
allocation. The original size is still saved in size.
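
On the Linux side the change amounts to, roughly:

  /* Keep the caller's size, but allocate page-aligned. */
  mem->size = size;
  mem->aligned_size = PAGE_ALIGN(size);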

Change-Id: Ie08cfc4f39d5f97db84a54b8e314ad1fa53b72be
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1547902
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-12 15:54:46 -07:00
Terje Bergstrom
89772b03cb gpu: nvgpu: Move XVE debugfs code to Linux module
Move XVE debugfs initialization code to live under common/linux.

JIRA NVGPU-62

Change-Id: Ic6677511d249bc0a2455dde01db5b230afc70bb1
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1535133
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-12 13:00:19 -07:00
Deepak Nibade
41283a02ac gpu: nvgpu: fix UBSAN warning of signed integer overflow
Fix the below warning reported by UBSAN by explicitly type casting both
operands of the multiplication as unsigned long.

[   69.470802] UBSAN: Undefined behaviour in
drivers/gpu/../../../nvgpu/drivers/gpu/nvgpu/gk20a/gk20a_scale.c:60:49
[   69.485519] signed integer overflow:
[   69.489104] 2147483647 * 1000 cannot be represented in type 'int'
[   69.504424] Hardware name: quill (DT)
[   69.508088] Call trace:
[   69.510579] [<ffffff900809a600>] dump_backtrace+0x0/0x4f0
[   69.515987] [<ffffff900809ab18>] show_stack+0x28/0x38
[   69.521050] [<ffffff9008f0d8d8>] dump_stack+0x154/0x1c4
[   69.526291] [<ffffff9008fd1ee0>] ubsan_epilogue+0x18/0xb0
[   69.531720] [<ffffff9008fd321c>] handle_overflow+0x1c0/0x21c
[   69.537416] [<ffffff9008fd3334>] __ubsan_handle_mul_overflow+0x34/0x50
[   69.544410] [<ffffff9003c73368>] gk20a_scale_qos_notify+0x210/0x2f0 [nvgpu]
[   69.551415] [<ffffff9008170884>] __blocking_notifier_call_chain+0xec/0x240
[   69.558299] [<ffffff9008170a18>] blocking_notifier_call_chain+0x40/0x50
[   69.564928] [<ffffff900825dd18>] pm_qos_update_bounded_target+0x738/0x1038
[   69.571812] [<ffffff900825f4a0>] pm_qos_update_bounded_req+0x148/0x280
[   69.578348] [<ffffff9008263224>] pm_qos_bounded_write+0x484/0x990
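
Sketch of the fix (variable names assumed):

  /* Cast both operands so the multiply is done in unsigned long. */
  freq = (unsigned long)qos_freq_khz * (unsigned long)1000;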

Bug 200342586

Change-Id: I35ee59a95b2e3625fb42f256d2877558be9e51cf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1557156
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-09-12 11:05:55 -07:00
Terje Bergstrom
c37c9baae6 gpu: nvgpu: Move CDE code to Linux module
CDE is only used on Linux platforms, and the code is highly dependent
on Linux APIs. Move the common CDE code to the Linux module and leave only
the chip-specific parts in the HAL.

Change-Id: I507fe7eceaf7607303dfdddcf438449a5f582ea7
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1554755
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-11 15:10:52 -07:00