Commit Graph

2574 Commits

Aparna Das
395c553813 gpu: nvgpu: move chip detect in os specific probe code
This allows moving HAL overrides for vserver out of the common
chip-specific HAL files into OS-specific probe code.

Jira VQRM-3070

Change-Id: Icc61aacc03ac7db7a0ea1f6a2dd2b76185c74757
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656752
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Tested-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:51:54 -08:00
Kirill Artamonov
70bf8275ef gpu: nvgpu: gv11b: implement gfxp wfi controls
/sys/devices/gpu.0/gfxp_wfi_timeout_unit
usec - microseconds
sysclk - gpu clock count

Treat gr_fe_gfxp_wfi_timeout_r as a context-switched
register on gv11b.

Set the default gfxp_wfi_timeout to 100 usec to match
gp10b at 1 GHz.

bug 1888344

Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Change-Id: I7fa64ce6912ae861244856807543b17bd7a26bed
Reviewed-on: https://git-master.nvidia.com/r/1651517
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:51:50 -08:00
seshendra Gadagottu
acbbd4e357 gpu: nvgpu: modify logic for fecs arb idle check
Check these conditions for fecs arbiter idle:
1. Wait for gr_fecs_arb_ctx_cmd to idle
2. Wait for gr_fecs_ctxsw_status_1:arb_busy to idle

Waiting only for condition 1 guarantees dispatch of the
command but not completion of the work.

Bug 2056266

Change-Id: I3fcbdb9cda91fee0ff345a24bf23ba7dabf268d0
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667550
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-05 22:22:49 -08:00
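
A minimal sketch of the two-stage wait described in the commit above. The
helper names are placeholders standing in for nvgpu's generated register
accessors and timer utilities; this is illustrative, not the driver's actual
implementation.

    /* Hypothetical predicates for the two idle conditions above. */
    extern int fecs_arb_ctx_cmd_busy(void);  /* condition 1: command pending */
    extern int fecs_ctxsw_arb_busy(void);    /* condition 2: arbiter working */
    extern void delay_1us(void);

    static int wait_for_fecs_arb_idle(unsigned int timeout_us)
    {
        unsigned int t;

        /* Condition 1 alone only proves the command was dispatched... */
        for (t = 0; fecs_arb_ctx_cmd_busy(); t++) {
            if (t >= timeout_us)
                return -1;
            delay_1us();
        }

        /* ...so also wait for ctxsw_status_1:arb_busy to confirm completion. */
        for (t = 0; fecs_ctxsw_arb_busy(); t++) {
            if (t >= timeout_us)
                return -1;
            delay_1us();
        }
        return 0;
    }
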
Seema Khowala
33d1e22230 gpu: nvgpu: clean up memory leaks in gr init
This resolves a memory leak seen after modifying the
tpc_fs_mask sysfs node, which sets gr.sw_ready to false and
forces gr to re-initialize.

echo 1 > /sys/devices/gpu.0/force_idle
echo 5 > /sys/devices/gpu.0/tpc_fs_mask
echo 0 > /sys/devices/gpu.0/force_idle

Bug 200393029

Change-Id: I76299f53fc87823071c672ec682c3eb51f72f513
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1666018
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-05 22:22:36 -08:00
Deepak Goyal
26b9194603 gpu: nvgpu: gv11b: Correct PMU PG enabled masks.
PMU ucode records the supported feature list for a
particular chip as a support mask sent
via PMU_PG_PARAM_CMD_GR_INIT_PARAM.

It then enables a selected subset of those features through
the enable mask sent via the
PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE cmd.

Until now only the ELPG state machine mask was enabled,
so only the ELPG state machine was executed
while other crucial steps in the ELPG entry/exit sequence
were skipped.

Bug 200392620.
Bug 200296076.

Change-Id: I5e1800980990c146c731537290cb7d4c07e937c3
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665767
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-05 21:18:20 -08:00
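
A rough sketch of the support-mask/enable-mask relationship described above.
The feature bits and values are illustrative placeholders, not the actual PMU
ABI; the point is that GR_INIT_PARAM advertises what the ucode supports while
SUB_FEATURE_MASK_UPDATE selects what actually runs.

    #include <stdint.h>

    /* Illustrative feature bits; the real PMU ABI defines its own values. */
    #define PG_FEATURE_ELPG_SM    (1u << 0)  /* ELPG state machine          */
    #define PG_FEATURE_ENTRY_AUX  (1u << 1)  /* placeholder entry/exit step */
    #define PG_FEATURE_EXIT_AUX   (1u << 2)  /* placeholder entry/exit step */

    /* Sent with GR_INIT_PARAM: everything the ucode supports on this chip. */
    static uint32_t pg_support_mask(void)
    {
        return PG_FEATURE_ELPG_SM | PG_FEATURE_ENTRY_AUX | PG_FEATURE_EXIT_AUX;
    }

    /* Sent with SUB_FEATURE_MASK_UPDATE: enabling only the state-machine bit
     * (the old behaviour) skips the other entry/exit steps, so the fix is to
     * enable the full supported set. */
    static uint32_t pg_enable_mask(void)
    {
        return pg_support_mask();
    }
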
Timo Alho
848af2ce6d Revert "Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"""
This reverts commit 89fbf39a05.

Bug 2075315

Change-Id: Id34a0376be5160b164931926ec600f77edf69667
Signed-off-by: Timo Alho <talho@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1668487
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
2018-03-05 08:39:57 -08:00
Alex Waterman
89fbf39a05 Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working""
This reverts commit 5a35a95654.

JIRA EVLR-2333

Change-Id: I923c32496c343d39d34f6d406c38a9f6ce7dc6e0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667167
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-02 22:10:14 -08:00
Sourab Gupta
ef116a6e63 gpu: nvgpu: add pid field to gk20a_event_id_data
QNX is going to reuse the 'struct gk20a_event_id_data' data structure
for its event handling. Unlike Linux, which works on fd polling,
QNX needs the pid to find out which process to
deliver the event to.

The patch adds the requisite field.

Change-Id: I466e9cfaacb921c6e1c41bd92c66194b81fc6774
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1657994
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-02 22:09:34 -08:00
Deepak Nibade
37a15ce818 gpu: nvgpu: skip channel abort for deferred reset
In case deferred_reset_pending is set in gk20a_fifo_handle_mmu_fault() and in
gv11b_fifo_teardown_ch_tsg(), we skip resetting the engines and skip setting
the error notifier.
Then we call gk20a_channel_abort()/gk20a_fifo_abort_tsg(), which aborts the
channels and resets the syncpoint values to release all the waiters.

But since we don't set the error notifier, this could lead the User to assume a
successful submission without any error.

To fix this, disable the channel/TSG in case deferred_reset_pending is set and skip
the calls to gk20a_channel_abort()/gk20a_fifo_abort_tsg().

Note that we finally abort the channel when the channel is being closed.

Bug 200363077

Change-Id: Ia48ca369701c14d1913d8f7b66ed466b7b840224
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1664319
(cherry picked from commit ac40d082e8)
Reviewed-on: https://git-master.nvidia.com/r/1666445
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 13:53:31 -08:00
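
A condensed sketch of the teardown decision described above. The function
names echo the commit text, but the types and structure are illustrative only.

    #include <stdbool.h>

    struct channel;                                   /* opaque here */
    extern void channel_set_error_notifier(struct channel *ch);
    extern void channel_abort(struct channel *ch);    /* gk20a_channel_abort()-like */
    extern void channel_disable(struct channel *ch);  /* leave disabled until close */

    static void teardown_faulted_channel(struct channel *ch,
                                         bool deferred_reset_pending)
    {
        if (deferred_reset_pending) {
            /* Engine reset and the error notifier are deferred, so don't
             * abort here: aborting would release syncpoint waiters and make
             * the submit look successful. Just disable the channel/TSG; the
             * final abort happens when the channel is closed. */
            channel_disable(ch);
            return;
        }

        channel_set_error_notifier(ch);
        channel_abort(ch);
    }
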
Deepak Nibade
df2100018d gpu: nvgpu: allocate separate client managed syncpoint for User
We currently allocate an nvgpu-managed syncpoint in c->sync and share
that with user space.

But to avoid conflicts between user space and kernel space increments,
allocate a separate "client managed" syncpoint for user space in c->user_sync.

Add a new API nvgpu_nvhost_get_syncpt_client_managed() to request a client-managed
syncpoint from nvhost.
Note that nvhost/nvgpu do not keep track of the MAX/threshold value of this syncpoint.

Update gk20a_channel_syncpt_create() to receive a flag indicating whether a
user space syncpoint is required or not.

Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to allocate
two syncpoints per channel on that platform.

For gv11b, once we move to user space submits, support for c->sync will be
dropped, so we keep using only one syncpoint per channel.

Bug 200326065
Jira NVGPU-179

Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 13:53:28 -08:00
Deepak Nibade
aa1da74a75 Revert "gpu: nvgpu: support user fence updates"
This reverts commit 0c46f8a5e1.

We should not support tracking of the MAX/threshold value for a syncpoint
allocated by user space.
Hence revert this patch.

Bug 200326065
Jira NVGPU-179

Change-Id: I2df8f8c13fdac91c0814b11a2b7dee30153409d4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665827
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 13:53:24 -08:00
Seema Khowala
abe829338c gpu: nvgpu: do not alloc ctx buffers if already allocated
Bug 200393029

Change-Id: Ic2946958e34bcb9247179fcf2e8735c822155cce
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665338
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 12:39:08 -08:00
Konsta Holtta
ba8fa334f4 gpu: nvgpu: introduce explicit nvgpu_sgl type
The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl)
argument which is a void pointer. Change the type signatures to take
struct nvgpu_sgl *, which is an opaque marker type that makes it more
difficult to pass around wrong arguments, as anything goes for void *.
Explicit types also add self-documentation to the code.

For some added safety, some explicit type casts are now required in
implementors of the nvgpu_sgt_ops interface when converting between the
general nvgpu_sgl type and implementation-specific types. This is not
purely a bad thing, because the casts show clearly where type
conversions are happening.

Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305

Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 12:24:06 -08:00
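
The opaque-marker pattern described above, reduced to a standalone form: the
public ops take struct nvgpu_sgl *, and each implementation casts to its own
concrete type. The ops shown and the mem_sgl implementation are simplified
examples, not the real nvgpu_sgt_ops interface.

    #include <stdint.h>

    /* Deliberately incomplete marker type: unlike void *, converting anything
     * else to or from it now needs an explicit cast. */
    struct nvgpu_sgl;

    struct nvgpu_sgt_ops {
        uint64_t (*sgl_phys)(struct nvgpu_sgl *sgl);
        struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
    };

    /* One hypothetical implementation with its own concrete list node. */
    struct mem_sgl {
        uint64_t phys;
        struct mem_sgl *next;
    };

    static uint64_t mem_sgl_phys(struct nvgpu_sgl *sgl)
    {
        /* The cast documents exactly where the type conversion happens. */
        return ((struct mem_sgl *)sgl)->phys;
    }

    static struct nvgpu_sgl *mem_sgl_next(struct nvgpu_sgl *sgl)
    {
        return (struct nvgpu_sgl *)((struct mem_sgl *)sgl)->next;
    }

    static const struct nvgpu_sgt_ops mem_sgl_ops = {
        .sgl_phys = mem_sgl_phys,
        .sgl_next = mem_sgl_next,
    };
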
Alex Waterman
5a35a95654 Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.

Bug 2070609

Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
2018-02-28 13:49:22 -08:00
Alex Waterman
1170687c33 gpu: nvgpu: Use coherent aperture flag
When using a coherent DMA API we must make sure to program
any aperture fields with the coherent aperture setting. To
do this the nvgpu_aperture_mask() function was modified to
take a third aperture mask argument, a coherent setting, so
that code can use this function to generate coherent aperture
settings.

The aperture choice is somewhat tricky: the default version
of this function uses the state of the DMA API to determine
what aperture to use for SYSMEM: either coherent or
non-coherent internally. Thus a kernel user need only specify
the normal nvgpu_mem struct and the correct mask should be
chosen. Since many nvgpu_mem structs are not created
directly from the DMA API wrapper, it's easier to translate
SYSMEM to SYSMEM_COH after creation.

However, the GMMU mapping code will encounter buffers from
userspace with different coherency attributes than the DMA
API. Thus __nvgpu_aperture_mask() respects the
aperture setting passed in regardless of the DMA API state.
This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT
since this is either passed in from userspace or set by the
kernel when using coherent DMA. The aperture field in attrs
is upgraded to coherent if this flag is set.

This change also adds a coherent sysmem mask everywhere that
it can. There are a couple of places that do not have a
coherent register field defined yet; these need to be
defined and added eventually.

Lastly, the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code since this function has
no Linux dependencies.

Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.

JIRA EVLR-2333

Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 16:03:43 -08:00
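
A simplified sketch of the three-way mask selection described above. The
aperture names follow the commit; the enum values, types and function shape
are illustrative rather than nvgpu's exact definitions.

    #include <stdint.h>

    enum aperture {
        APERTURE_INVALID,
        APERTURE_SYSMEM,
        APERTURE_SYSMEM_COH,   /* coherent sysmem */
        APERTURE_VIDMEM,
    };

    /* Respects the aperture passed in, so GMMU mapping code can honour
     * NVGPU_VM_MAP_IO_COHERENT for userspace buffers while kernel users get
     * the mask matching the DMA API state. */
    static uint32_t aperture_mask(enum aperture ap,
                                  uint32_t sysmem_mask,
                                  uint32_t sysmem_coh_mask,
                                  uint32_t vidmem_mask)
    {
        switch (ap) {
        case APERTURE_SYSMEM:
            return sysmem_mask;
        case APERTURE_SYSMEM_COH:
            return sysmem_coh_mask;
        case APERTURE_VIDMEM:
            return vidmem_mask;
        default:
            return 0;
        }
    }
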
Alex Waterman
71f53272b2 gpu: nvgpu: FIFO sched fixes
Miscellaneous fixes for the sched code:

1. Make sure get_addr() on an SGL respects the use phys flag since
   the runlist needs physical addresses when NVLINK is in use.
2. Ensure the runlist is contiguous. Since the runlist memory is
   not virtually addressed the buffer must be physically contiguous.
3. Use all 64 bits of the runlist address in the runlist base addr
   register (and related fields).

JIRA EVLR-2333

Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654667
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 16:03:34 -08:00
Richard Zhao
8202be50ce gpu: nvgpu: vgpu: split vgpu.c into vgpu.c and vgpu_linux.c
vgpu.c will keep common code while vgpu_linux.c is Linux specific.

Jira EVLR-2364

Change-Id: Ice9782fa96c256f1b70320886d3720ab0db26244
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649943
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 14:30:32 -08:00
Srikar Srimath Tirumala
a26de1185a Revert "gpu: nvgpu: Use gv11b_css_hw_set_handled_snapshots for GV11B"
This reverts commit 2f2e51bbae.

Bug 2068936

Change-Id: I539cdc12a3bd0d9d7fe0ce7dbe9cb7a274eeaa57
Signed-off-by: Srikar Srimath Tirumala <srikars@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1664647
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
2018-02-26 19:01:16 -08:00
Martin Radev
2f2e51bbae gpu: nvgpu: Use gv11b_css_hw_set_handled_snapshots for GV11B
The value of NV_PERF_PMASYS_MEM_BUMP is different for Volta,
and NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH did not
behave correctly on GV11B because of that.
The patch adds an instance of css_hw_set_handled_snapshots
for Volta to fix that.
The patch also renames css_hw_set_handled_snapshots
to gk20a_css_hw_set_handled_snapshots to make it clearer
that the function is arch-dependent.

Bug 1960846

Change-Id: I92c35a862ecd7f918dd1458c086fc7ae42ca8fc5
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-26 12:03:24 -08:00
Konsta Holtta
d98232ab21 gpu: nvgpu: use nvgpu_info in refcount tracking
Use the nvgpu_info log facility instead of the Linux-specific dev_info in
gk20a_channel_dump_ref_actions. Also fix the format types.

dev_info isn't defined anymore in this file and our version is preferred
anyway. Refcount tracking isn't compiled in by default, so this has gone
unnoticed.

Jira NVGPU-22

Change-Id: If93b98c9a54d3b0deaf344a355594cb73712399c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1663032
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-26 05:19:10 -08:00
Deepak Nibade
0c46f8a5e1 gpu: nvgpu: support user fence updates
Add support for user fence updates, i.e. increments added by user space
directly in the pushbuffer.

Add a submit IOCTL flag NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE to indicate
that the User has added increments in the pushbuffer.
If set, the number of increments is received in fence.value from the User.

If the User is adding increments in the pushbuffer then we don't need to do any
job tracking in the kernel.
So fail the submit if we evaluate need_job_tracking to true and
FLAGS_USER_FENCE_UPDATE is set.
The User is responsible for ensuring all pre-requisites for a fast submit and
for preventing kernel job tracking.

Since user space adds increments in the pushbuffer, just handle the threshold
bookkeeping in the kernel.

Bug 200326065
Jira NVGPU-179

Change-Id: Ic0f0b1aa69e3389a4c3305fb6a559c5113719e0f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1661854
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-26 03:48:14 -08:00
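
A condensed sketch of the submit-path rule described above: user-managed
increments and kernel job tracking are mutually exclusive. The flag value,
types and helper are illustrative, not the actual IOCTL definitions.

    #include <stdbool.h>
    #include <errno.h>

    #define SUBMIT_FLAGS_USER_FENCE_UPDATE (1u << 0)  /* placeholder bit */

    static int check_user_fence_submit(unsigned int flags,
                                       bool need_job_tracking,
                                       unsigned int fence_value,
                                       unsigned int *threshold)
    {
        if (!(flags & SUBMIT_FLAGS_USER_FENCE_UPDATE))
            return 0;

        /* Userspace bumps the syncpoint from its pushbuffer, so the kernel
         * must not need any job tracking for this submit. */
        if (need_job_tracking)
            return -EINVAL;

        /* fence.value carries the number of increments userspace added;
         * only the threshold bookkeeping is done in the kernel. */
        *threshold += fence_value;
        return 0;
    }
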
Deepak Nibade
8d5536271f gpu: nvgpu: add user API to get a syncpoint
Add a new user API NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT which will expose
the per-channel allocated syncpoint to user space.
The API will also return the current value of the syncpoint.
On supported platforms, this API will also return a RW semaphore address
(corresponding to the syncpoint shim) to user space.

Add a new characteristics flag NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT to indicate
support for this new API.
Add a new flag NVGPU_SUPPORT_USER_SYNCPOINT for use by the core driver.

Set this flag for GV11B and GP10B for now.

Add a new API (*syncpt_address) in struct gk20a_channel_sync to get the GPU_VA
address of a syncpoint.

Add a new API nvgpu_nvhost_syncpt_read_maxval() which will read and return the
MAX value of the syncpoint.

Bug 200326065
Jira NVGPU-179

Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658009
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-26 03:48:11 -08:00
Thomas Fleury
180604fec0 gpu: nvgpu: gv100: fb hal to init and enable nvlink
Add the following hals:
(1) init_nvlink to configure nvlink(s) for sysmem in HSHUB
(2) enable_nvlink to switch from PCIe sysmem to nvlink sysmem,
    and set up atomics.

Change-Id: I73d2370aaf8e0530158a1091d9efef4a8cf2aac5
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648044
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-25 21:48:28 -08:00
Thomas Fleury
0601fd25a5 gpu: nvgpu: gv100: nvlink endpoint driver
The following changes implement the initial (as per bringup) nvlink driver.

(1) SW initialization of nvlink core driver structures
(2) Nvlink interrupt handling
(3) Device initialization (IOCTRL, pll and clocks, device level intr)
(4) Falcon support for minion
(5) Minion load and bootstrapping
(6) Link initialization and DL PROD settings
(7) Device Interface init (and switching HSHUB to nvlink)
(8) HS set/get mode for both link and sublink
(9) Topology discovery and VBIOS settings.
(10) Ensures we get physically contiguous memory when Nvlink is enabled

This driver includes a hack for the current single dev/single link limitation.

JIRA: EVLR-2331
JIRA: EVLR-2330
JIRA: EVLR-2329
JIRA: EVLR-2328

Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-25 21:48:24 -08:00
Alex Waterman
b5d3cf444e gpu: nvgpu: Cleanup unused variables
There are numerous places where variables are assigned to but then
never used. This patch cleans up all these unused variables and
in some cases simplifies surrounding logic.

Also delete unused header includes and add necessary header includes.

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ice9ec2a0e97f262d0dcfebe22f83208dbea569d9
Reviewed-on: https://git-master.nvidia.com/r/1662548
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 21:53:38 -08:00
Alex Waterman
bd95c2ce3f gpu: nvgpu: Cleanup warnings and Linuxisms
Remove a variable which is assigned to but never used after
that and convert a pr_warn() into an nvgpu_warn().

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ia3ab64063ec48d6cd8a4a13cff2ab9b9c459a462
Reviewed-on: https://git-master.nvidia.com/r/1662544
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 21:53:23 -08:00
Alex Waterman
c991410874 gpu: nvgpu: Abstract kernel_restart()
This function is used in gk20a.c to handle catastrophic error conditions
but is Linux specific. As such, implement an abstraction for this in
driver_common.c and expose the API in nvgpu_common.h.

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ie2e417d30af5ff7db76f4d2d5b97ec96c386bd04
Reviewed-on: https://git-master.nvidia.com/r/1662543
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 21:53:19 -08:00
Alex Waterman
ea7eaed843 gpu: nvgpu: Don't alloc comptags on GV100
We don't have compression enabled on GV100 so it does not make sense
to allocate a huge CBC for this device.

Change-Id: I0ff908571f28c2ba6f439b0989398bf68dce16f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655279
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 05:04:30 -08:00
Alex Waterman
eb219e9f3f gpu: nvgpu: Cleanup map attributes debugging
Make the map attributes printed by the map debug code more easily
readable and consistent.

Change-Id: I9737131a2ea44c6a080dff0095929760888b83ae
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654518
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-22 08:09:06 -08:00
Terje Bergstrom
ec00a6c2db gpu: nvgpu: Use preallocated VPR buffer
To prevent deadlock while allocating VPR in nvgpu, allocate all the
needed VPR memory at probe time and use an internal allocator to
hand out space for VPR buffers.

Change-Id: I584b9a0f746d5d1dec021cdfbd6f26b4b92e4412
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655324
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-14 21:43:43 -08:00
Deepak Nibade
f0cbe19b12 gpu: nvgpu: add user API to get read-only syncpoint address map
Add a user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get the read-only
syncpoint address map in user space.

We already map the whole syncpoint shim to each address space, with the base
address being vm->syncpt_ro_map_gpu_va.

This new API exposes this base GPU_VA address of the syncpoint map, and the
unit size of each syncpoint, to user space.
User space can then calculate the address of each syncpoint as
syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)

Note that this syncpoint address is read-only and should only be used for
inserting semaphore acquires.
Adding a semaphore release with this address would result in an MMU_FAULT.

Define a new HAL g->ops.fifo.get_sync_ro_map and set this for all GPUs supported
on the Xavier SoC.

Bug 200327559

Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649365
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-07 15:35:47 -08:00
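
The address computation spelled out above, written as a helper; the names
follow the commit text.

    #include <stdint.h>

    /* syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size) */
    static uint64_t syncpt_ro_gpu_va(uint64_t base_gpu_va,
                                     uint32_t syncpoint_id,
                                     uint32_t syncpoint_unit_size)
    {
        return base_gpu_va + (uint64_t)syncpoint_id * syncpoint_unit_size;
    }
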
Seema Khowala
791ce6bd54 gpu: nvgpu: gv11b: enable more gr exceptions
-pd, scc, ds, ssync, mme and sked exceptions are
 enabled. This will be useful for debugging.
-Handle the enabled interrupts.
-Add a gr op to handle ssync hww. For legacy
 chips, the ssync hww_esr register is gpcs_ppcs_ssync_hww_esr.
 Since ssync hww is not enabled on legacy chips, ssync hww
 exception handling is added for Volta only.

Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644751
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-31 13:23:30 -08:00
Seema Khowala
9beefc4551 gpu: nvgpu: add fecs_host_int_enable hal
This will be used to enable fecs interrupts per
chip.

Change-Id: Id99412ca1a9c4caad999c3458b0e9701515db4b9
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642554
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-31 13:23:21 -08:00
Terje Bergstrom
cb9f8bae1a gpu: nvgpu: Unify querying stream id
The stream ID for gp10b is retrieved directly from DT headers in common
code. Instead, introduce a variable to store the stream ID and move the
query to platform_gp10b_tegra.c.

JIRA NVGPU-4

Change-Id: I123024e13e470283bb691883f8f963eb72c997d8
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1648013
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-31 12:37:28 -08:00
Richard Zhao
b386768d32 gpu: nvgpu: make .tsg_unbind_channel one layer lower
The message telling the RM server to unbind a channel has to be sent after
the client unbinds the channel and before the client calls tsg release. The
channel has to belong to a tsg on the RM server before the client submits a
runlist to remove the channel, otherwise there's a bare channel problem.

By moving .tsg_unbind_channel one layer lower, gk20a_tsg_unbind_channel()
becomes a common function for all chips, and it calls tsg release after
calling .tsg_unbind_channel. So vgpu won't need to worry about the tsg being
released before sending the message to the RM server.

Bug 200382695
Bug 200382785

Change-Id: I32acc122f3f9d5d0628049ccf673225f9e90c87a
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1645383
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-31 02:40:48 -08:00
Konsta Holtta
1a7484c901 gpu: nvgpu: ce: store fences in a separate array
Simplify the copyengine code massively by storing the job post fence
pointers in an array of fences instead of mixing them up in the command
buffer memory. The post fences are used when the ring buffer of a
context gets full and we need to wait for the oldest slot to free up.

NVGPU-43
NVGPU-52

Change-Id: I36969e19676bec0f38de9a6357767a8d5cbcd329
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1646037
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-26 10:50:37 -08:00
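
A sketch of the ring-of-fences idea described above: post fences live in
their own array, parallel to the command buffer slots, and the oldest one is
waited on when the ring is full. The types, ring depth and wait helper are
illustrative.

    #define CE_RING_SLOTS 32u                 /* illustrative ring depth */

    struct ce_fence;                          /* opaque post fence       */
    extern int ce_fence_wait(struct ce_fence *f);

    struct ce_ctx {
        struct ce_fence *post_fence[CE_RING_SLOTS]; /* one per cmd slot  */
        unsigned int put;                           /* next slot to use  */
        unsigned int in_flight;                     /* submitted jobs    */
    };

    /* Before reusing a slot, wait for the oldest outstanding job's fence. */
    static int ce_ring_make_room(struct ce_ctx *ctx)
    {
        if (ctx->in_flight < CE_RING_SLOTS)
            return 0;

        /* Ring full: the slot about to be reused holds the oldest job. */
        if (ce_fence_wait(ctx->post_fence[ctx->put]) != 0)
            return -1;

        ctx->in_flight--;
        return 0;
    }
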
Konsta Holtta
91114cd6d4 gpu: nvgpu: ce: drop prefence support
Delete the gk20a_fence_in argument in gk20a_ce_execute_ops. It has never
been used and is in the way of some upcoming code cleanup.

NVGPU-43

Change-Id: Ie61e1a2f4945b1e34d64880044c265d26fa822d7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1646036
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-26 10:50:33 -08:00
David Nieto
fbdcc8a2d4 gpu: nvgpu: Initial Nvlink driver skeleton
Adds the skeleton and integration of the GV100 endpoint driver to NVGPU

(1) Adds an OS abstraction layer for the internal nvlink structure.
(2) Adds linux specific integration with Nvlink core driver.
(3) Adds function pointers for nvlink api, initialization and isr process.
(4) Adds initial support for minion.
(5) Adds new GPU enable properties to handle NVLINK presence
(6) Adds new GPU enable properties for SG_PHY bypass (required for NVLINK over
PCI)
(7) Adds parsing of nvlink vbios structures.
(8) Adds logging defines for NVGPU

JIRA: EVLR-2328

Change-Id: I0720a165a15c7187892c8c1a0662ec598354ac06
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644708
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-25 17:39:53 -08:00
Alex Waterman
4967570033 gpu: nvgpu: add speculative load barrier (ctrl IOCTLs)
Data can be speculatively loaded from memory and stay in the cache even
when a bounds check fails. This can lead to unintended information
disclosure via side-channel analysis.

To mitigate this problem insert a speculation barrier.

bug 2039126
CVE-2017-5753

Change-Id: Ib6c4b2f99b85af3119cce3882fe35ab47509c76f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640500
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-25 14:25:34 -08:00
Alex Waterman
25aba34bbd gpu: nvgpu: add speculative load barrier (channel IOCTLs)
Data can be speculatively loaded from memory and stay in the cache even
when a bounds check fails. This can lead to unintended information
disclosure via side-channel analysis.

To mitigate this problem insert a speculation barrier.

bug 2039126
CVE-2017-5753

Change-Id: I6b8af794ea2156f0342ea6cc925051f49dbb1d6e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640498
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-25 14:25:21 -08:00
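
Both speculation-barrier commits above apply the same Spectre-v1 mitigation:
after the architectural bounds check, make sure a speculatively executing CPU
cannot use an out-of-range index. nvgpu inserts a barrier; a closely related
form, shown standalone below, masks the index the way mainline Linux's
array_index_nospec() does. This is an illustration, not nvgpu's exact code.

    #include <stddef.h>
    #include <stdint.h>

    /* All-ones mask when index < size, zero otherwise, computed without a
     * branch so the masked index stays in range even under speculation. */
    static size_t index_mask_nospec(size_t index, size_t size)
    {
        return (size_t)0 - (size_t)(index < size);
    }

    static uint32_t read_entry(const uint32_t *table, size_t nr_entries,
                               size_t user_index)
    {
        if (user_index >= nr_entries)
            return 0;                     /* architectural bounds check */

        /* Clamp the index before the dependent load. */
        return table[user_index & index_mask_nospec(user_index, nr_entries)];
    }
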
Terje Bergstrom
9cbb954266 gpu: nvgpu: Report mailbox id and value on ucode timeout
When we detect a timeout waiting for a ctxsw ucode method to complete,
we print an error. The error does not detail the event we are waiting
for, which makes debugging difficult. Add the missing mailbox id and value
to the error.

Bug 2049965

Change-Id: I45204a2d6f1f39919a0133b1e0867213e1a5b671
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643709
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-23 11:19:10 -08:00
Terje Bergstrom
f3f14cdff5 gpu: nvgpu: Fold T19x code back to main code paths
Lots of code paths were split into T19x-specific code paths and structs
due to the split repository. Now that the repositories are merged, fold all of
them back into the main code paths and structs and remove the T19x-specific
Kconfig flag.

Change-Id: Id0d17a5f0610fc0b49f51ab6664e716dc8b222b6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640606
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-22 22:20:15 -08:00
seshendra Gadagottu
193a2ed38c gpu: nvgpu: add sw method for SET_BES_CROP_DEBUG4
Added sw method support for SET_BES_CROP_DEBUG4.
In this sw method, CLAMP_FP_BLEND_TO_MAXVAL and
CLAMP_FP_BLEND_TO_INF control whether overflowing FP
blend results clamp to FP maxval or to INF.

Added support for this sw method in gp10b/gp106/gv11b
and gv100.

Bug 2046636

Change-Id: I3a9e97587aca76718f7f504ea3b853f87409092a
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1641529
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-22 15:29:54 -08:00
Konsta Holtta
f6d898656a gpu: nvgpu: note railgate_allowed in do_idle
The idling and unidling of deterministic channels in the
do_idle/do_unidle path assume that each deterministic channel holds a
power reference. This is no longer the case if railgating has been
allowed for a channel via the deterministic options ioctl which also
causes the channel to drop the power ref that it holds otherwise during
its lifetime.

All this is happening inside the deterministic_busy rwsem, which also
guards the ioctl changing those deterministic option states.

Bug 200327089

Change-Id: I9ce312bbaa459b3cf4a7541fa369186b78c3afdc
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642310
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-22 08:29:05 -08:00
Konsta Holtta
3ccf5c85fb gpu: nvgpu: add g->sw_ready flag
Fix a race condition where we'd still be booting up the gpu and/or
initializing the driver but elsewhere assume that all is done already.

Some userspace APIs check g->gr.sw_ready to make sure that we're ready,
but this flag is set in the middle of bootup; there are
other things after gr initialization. Add a new flag that is enabled
after bootup is fully complete at the end of finalize_poweron, and
change the checks in user API paths to test the new flag only.

These checks are only in the ioctl paths for ctrl, dbg and tsg, and in
the ctrl device's opening path.

The gr.sw_ready flag is still left there to signify whether just gr has
had its bookkeeping initialized.

Bug 200370011

Change-Id: I2995500e06de46430d9b835de1e9d60b3f01744e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640124
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-20 02:19:02 -08:00
Alex Waterman
137006fe78 gpu: nvgpu: Update gk20a pde bit coverage function
The mm_gk20a.c function that returns the number of bits that a PDE covers
is very useful for determining the PDE size for all chips. Copy this into
the common VM code since it applies to all chips/platforms.

Bug 200105199

Change-Id: I437da4781be2fa7c540abe52b20f4c4321f6c649
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639730
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-19 17:29:09 -08:00
Konsta Holtta
f50c2af8a7 gpu: nvgpu: delete unused wfi in gk20a_fence
The boolean wfi field in struct gk20a_fence is not used for anything.
Delete it and a couple of function parameters that carried the flag.

Jira NVGPU-43

Change-Id: I399c8709102a3f944cab669ff806761aedaeb6d3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1636344
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-19 08:13:13 -08:00
Deepak Goyal
e0dbf3a784 gpu: nvgpu: gv11b: Enable perfmon.
t19x PMU ucode uses an RPC mechanism for
PERFMON commands.

- Declared "pmu_init_perfmon",
  "pmu_perfmon_start_sampling",
  "pmu_perfmon_stop_sampling" and
  "pmu_perfmon_get_samples" in pmu ops
  to differentiate between chips using the RPC and legacy
  cmd/msg mechanism.
- Defined and used PERFMON RPC commands for t19x
  - INIT
  - START
  - STOP
  - QUERY
- Added an RPC handler for PERFMON RPC commands.
- For querying GPU utilization/load, we need to send the PERFMON_QUERY
  RPC command for gv11b.
- Enabled perfmon for gv11b.

Bug 2039013

Change-Id: Ic32326f81d48f11bc772afb8fee2dee6e427a699
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1614114
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-18 23:40:02 -08:00
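
A sketch of the per-chip perfmon split described above: the same four ops are
implemented once over the legacy cmd/msg path and once over the t19x RPC path
(INIT/START/STOP/QUERY), and each chip installs the variant it needs. Struct
and function names here are illustrative.

    struct pmu;                                   /* opaque here */

    struct pmu_perfmon_ops {
        int (*init_perfmon)(struct pmu *pmu);
        int (*start_sampling)(struct pmu *pmu);
        int (*stop_sampling)(struct pmu *pmu);
        int (*get_samples)(struct pmu *pmu);
    };

    /* Legacy cmd/msg implementations (pre-t19x chips). */
    extern int perfmon_init_cmd(struct pmu *pmu);
    extern int perfmon_start_cmd(struct pmu *pmu);
    extern int perfmon_stop_cmd(struct pmu *pmu);
    extern int perfmon_get_samples_cmd(struct pmu *pmu);

    /* RPC implementations (gv11b): INIT / START / STOP / QUERY commands. */
    extern int perfmon_init_rpc(struct pmu *pmu);
    extern int perfmon_start_rpc(struct pmu *pmu);
    extern int perfmon_stop_rpc(struct pmu *pmu);
    extern int perfmon_query_rpc(struct pmu *pmu);

    static const struct pmu_perfmon_ops legacy_perfmon_ops = {
        .init_perfmon   = perfmon_init_cmd,
        .start_sampling = perfmon_start_cmd,
        .stop_sampling  = perfmon_stop_cmd,
        .get_samples    = perfmon_get_samples_cmd,
    };

    static const struct pmu_perfmon_ops gv11b_perfmon_ops = {
        .init_perfmon   = perfmon_init_rpc,
        .start_sampling = perfmon_start_rpc,
        .stop_sampling  = perfmon_stop_rpc,
        .get_samples    = perfmon_query_rpc,
    };
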
Richard Zhao
e53263ea86 gpu: nvgpu: unexport gk20a_ce_create/delete_context
There are no external references to them.

Jira VFND-4713

Change-Id: If053bbdbb37e9bd4789bfd7cccb1aef035fbf317
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639674
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-17 15:49:31 -08:00
Terje Bergstrom
2f6698b863 gpu: nvgpu: Make graphics context property of TSG
Move graphics context ownership to the TSG instead of the channel. Combine
channel_ctx_gk20a and gr_ctx_desc into one structure, because the split
between them was arbitrary. Move the context header to be a property of the
channel.

Bug 1842197

Change-Id: I410e3262f80b318d8528bcbec270b63a2d8d2ff9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1639532
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-17 12:29:09 -08:00