Commit Graph

100 Commits

Konsta Holtta
cb8d8337a6 gpu: nvgpu: disallow invalid syncpoint wait ids
Instead of ignoring a wait when a raw syncpoint prefence has an invalid
id, reject the submit with -EINVAL just like with syncpoints in syncfds.

Change-Id: I9b5c417bd1c7cd081c79659d088ac2c915de8c0e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1680281
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-23 17:18:24 -07:00
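
A minimal sketch of the stricter behaviour described in the commit above. The
helper and the syncpoint count below are illustrative assumptions; only the
reject-with -EINVAL behaviour comes from the commit text.

    #include <errno.h>
    #include <stdbool.h>

    #define TOY_NUM_SYNCPTS 192u    /* assumed value, for illustration only */

    /* Hypothetical stand-in for the real nvgpu/nvhost id validation. */
    static bool is_valid_syncpt_id(unsigned int id)
    {
            return id < TOY_NUM_SYNCPTS;
    }

    static int add_raw_syncpt_prefence_wait(unsigned int id)
    {
            if (!is_valid_syncpt_id(id))
                    return -EINVAL; /* previously the wait was silently ignored */

            /* ... otherwise emit the wait into the pushbuffer ... */
            return 0;
    }
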
Konsta Holtta
bac51e8081 gpu: nvgpu: allow syncfds as prefences on deterministic
Accept submits on deterministic channels even when the prefence is a
syncfd, but only if it has just one fence inside.

Because NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE is shared between pre- and
postfences, though, a postfence (SUBMIT_GPFIFO_FLAGS_FENCE_GET) is still
not allowed at the same time.

The sync framework is problematic for deterministic channels due to
certain allocations that are not controlled by nvgpu. However, that only
applies to postfences, yet we've disallowed FLAGS_SYNC_FENCE for
deterministic channels even when a postfence is not needed.

Bug 200390539

Change-Id: I099bbadc11cc2f093fb2c585f3bd909143238d57
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1680271
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-23 17:18:15 -07:00
Deepak Nibade
b5b4353ca6 gpu: nvgpu: set safe state for user managed syncpoints
The MAX/threshold value of a user managed syncpoint is not tracked by nvgpu,
so if the channel is reset by nvgpu there could be waiters still waiting on
some user syncpoint fence.

Fix this by setting a large, safe value on the user managed syncpoint when
aborting the channel and when closing the channel.

We currently increment the current value by 0x10000, which should be
sufficient to release any pending waiter.

Bug 200326065
Jira NVGPU-179

Change-Id: Ie6432369bb4c21bd922c14b8d5a74c1477116f0b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1678768
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-23 08:20:35 -07:00
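
A compact model of the "safe value" idea from the commit above. The type and
helper names are illustrative; only the 0x10000 bump over the current value is
taken from the commit text.

    #define USER_SYNCPT_SAFE_INCR 0x10000u

    struct toy_user_syncpt {
            unsigned int value;     /* current syncpoint value */
    };

    /* Called (in spirit) from the channel abort and close paths. */
    static void set_user_syncpt_safe_state(struct toy_user_syncpt *sp)
    {
            /*
             * The user-managed MAX/threshold is not tracked by nvgpu, so bump
             * the value far enough that any pending waiter is released.
             */
            sp->value += USER_SYNCPT_SAFE_INCR;
    }
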
Konsta Holtta
9f9035d10b gpu: nvgpu: remove fence param from channel_sync
The fence parameter that gets output from gk20a_channel_sync's wait()
and wait_fd() APIs is no longer used for anything. Delete it.

Jira NVGPU-527
Jira NVGPU-528
Bug 200390539

Change-Id: I659504062dc6aee83a0a0d9f5625372b4ae8c0e2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676734
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-16 17:12:03 -07:00
Konsta Holtta
69252b3fb6 gpu: nvgpu: remove support for foreign sema syncfds
Delete the proxy waiter for non-semaphore-backed syncfds in sema wait
path to simplify code, to remove dependencies to the sync framework (and
thus Linux) and to support upcoming refactorings. This feature has never
been used for actually foreign fences.

Jira NVGPU-43
Jira NVGPU-66

Change-Id: I2b539aefd2d096a7bf5f40e61d48de7a9b3dccae
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665119
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-16 17:11:03 -07:00
Alex Waterman
d4382ed094 gpu: nvgpu: Use asid only under CONFIG_SYNC in channel_sync_gk20a.c
This variable is only ever used under the CONFIG_SYNC config so
make sure that we only define/assign to it when CONFIG_SYNC is
enabled.

JIRA NVGPU-525

Change-Id: I27160adbd6a46f58e21f24ab19d37966ded5e7de
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673812
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-16 07:34:45 -07:00
Konsta Holtta
34323b5595 gpu: nvgpu: wait for all prefence semas on gpu
The pre-fence wait for semaphores in the submit path has supported a
fast path for fences that have only one underlying semaphore. The fast
path just inserts the wait on this sema to the pushbuffer directly. For
other fences, the path has been using a CPU wait indirection, signaling
another semaphore when we get the CPU-side callback.

Instead of only supporting prefences with one sema, unroll all the
individual semaphores and insert waits for each to a pushbuffer, like
we've already been doing with syncpoints. Now all sema-backed syncs get
the fast path. This simplifies the logic and makes it more explicit that
only foreign fences need the CPU wait.

There is no need to hold references to the sync fence or the semas
inside: this submitted job only needs the global read-only sema mapping
that is guaranteed to stay alive while the VM of this channel stays
alive, and the job does not outlive this channel.

Jira NVGPU-43
Jira NVGPU-66
Jira NVGPU-513

Change-Id: I7cfbb510001d998a864aed8d6afd1582b9adb80d
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1636345
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-16 07:34:01 -07:00
Konsta Holtta
86943d3d03 gpu: nvgpu: decouple sema and hw sema
struct nvgpu_semaphore represents (mainly) a threshold value that a sema
at some index will get and struct nvgpu_semaphore_int (aka "hw_sema")
represents the allocation (and write access) of a semaphore index and
the next value that the sema at that index can have. The threshold
object doesn't need a pointer to the sema allocation that is not even
guaranteed to exist for the whole threshold lifetime, so replace the
pointer by the position of the sema in the sema pool.

This requires some modifications to pass a hw sema around explicitly
because it now represents write access more explicitly.

Delete also the index field of semaphore_int because it can be directly
derived from the offset in the sema location and is thus unnecessary.

Jira NVGPU-512

Change-Id: I40be523fd68327e2f9928f10de4f771fe24d49ee
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658102
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-13 02:43:37 -07:00
seshendra Gadagottu
3df619f68a gpu: nvgpu: hal for syncpt_incr_per_release
Create a hal to indicate syncpt increments per release.
Legacy chips use 2 syncpt increments per release, and gv1xx
onwards uses 1 syncpt increment per release.

Bug 2066025

Change-Id: I5d6d0a5368ef561f8150fbb7120181f49f6e338b
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669817
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-12 10:40:17 -07:00
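
A sketch of the per-chip hal split described in the commit above. The function
names are assumptions; only the 2-versus-1 increment counts come from the
commit text.

    /* Illustrative hal entries, not the actual nvgpu fifo ops. */
    static unsigned int gk20a_syncpt_incrs_per_release(void)
    {
            return 2;       /* legacy chips: 2 syncpt increments per release */
    }

    static unsigned int gv11b_syncpt_incrs_per_release(void)
    {
            return 1;       /* gv1xx onwards: 1 syncpt increment per release */
    }
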
Konsta Holtta
b063870f90 gpu: nvgpu: BUG_ON for sema increment, not value
When adding a sema wait to a pushbuf, verify that the sema threshold has
been incremented from the original value by reading the incremented
field instead of value (which is set to nonzero by
nvgpu_semaphore_incr()). Value could be 0 even after an increment if new
semas weren't reset to 0.

Jira NVGPU-514

Change-Id: I295451fbc7eb9e597aea12d73074e99f74a6a899
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658100
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-07 09:45:04 -08:00
Deepak Nibade
df2100018d gpu: nvgpu: allocate separate client managed syncpoint for User
We currently allocate an nvgpu managed syncpoint in c->sync and share
that with user space.

But to avoid conflicts between user space and kernel space increments,
allocate a separate "client managed" syncpoint for user space in c->user_sync.

Add a new API, nvgpu_nvhost_get_syncpt_client_managed(), to request a client
managed syncpoint from nvhost.
Note that nvhost/nvgpu do not keep track of the MAX/threshold value of this
syncpoint.

Update gk20a_channel_syncpt_create() to receive a flag indicating whether a
user space syncpoint is required or not.

Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to allocate
double syncpoints per channel on that platform.

For gv11b, once we move to user space submits, support for c->sync will be
dropped, so we keep using only one syncpoint per channel.

Bug 200326065
Jira NVGPU-179

Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 13:53:28 -08:00
Deepak Nibade
aa1da74a75 Revert "gpu: nvgpu: support user fence updates"
This reverts commit 0c46f8a5e1.

We should not support tracking of the MAX/threshold value for a syncpoint
allocated by user space.
Hence, revert this patch.

Bug 200326065
Jira NVGPU-179

Change-Id: I2df8f8c13fdac91c0814b11a2b7dee30153409d4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665827
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 13:53:24 -08:00
Deepak Nibade
0c46f8a5e1 gpu: nvgpu: support user fence updates
Add support for user fence updates, i.e. increments added by user space
directly in the pushbuffer.

Add a submit IOCTL flag, NVGPU_SUBMIT_GPFIFO_FLAGS_USER_FENCE_UPDATE, to
indicate whether the user has added increments in the pushbuffer.
If so, the number of increments is received in fence.value from the user.

If the user is adding increments in the pushbuffer then we don't need to do
any job tracking in the kernel, so fail the submit if need_job_tracking
evaluates to true and FLAGS_USER_FENCE_UPDATE is set.
The user is responsible for ensuring all prerequisites for a fast submit and
for preventing kernel job tracking.

Since user space adds increments in the pushbuffer, just handle the threshold
bookkeeping in the kernel.

Bug 200326065
Jira NVGPU-179

Change-Id: Ic0f0b1aa69e3389a4c3305fb6a559c5113719e0f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1661854
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-26 03:48:14 -08:00
Deepak Nibade
8d5536271f gpu: nvgpu: add user API to get a syncpoint
Add a new user API, NVGPU_IOCTL_CHANNEL_GET_USER_SYNCPOINT, which exposes
the per-channel allocated syncpoint to user space.
The API also returns the current value of the syncpoint.
On supported platforms, this API also returns a RW semaphore address
(corresponding to the syncpoint shim) to user space.

Add a new characteristics flag, NVGPU_GPU_FLAGS_SUPPORT_USER_SYNCPOINT, to
indicate support for this new API.
Add a new flag, NVGPU_SUPPORT_USER_SYNCPOINT, for use by the core driver.

Set this flag for GV11B and GP10B for now.

Add a new API, (*syncpt_address), in struct gk20a_channel_sync to get the
GPU_VA address of a syncpoint.

Add a new API, nvgpu_nvhost_syncpt_read_maxval(), which reads and returns the
MAX value of the syncpoint.

Bug 200326065
Jira NVGPU-179

Change-Id: I9da6f17b85996f4fc6731c0bf94fca6f3181c3e0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658009
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-26 03:48:11 -08:00
Konsta Holtta
f50c2af8a7 gpu: nvgpu: delete unused wfi in gk20a_fence
The boolean wfi field in struct gk20a_fence is not used for anything.
Delete it and a couple of function parameters that carried the flag.

Jira NVGPU-43

Change-Id: I399c8709102a3f944cab669ff806761aedaeb6d3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1636344
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-19 08:13:13 -08:00
seshendra Gadagottu
a9ce91f910 gpu: nvgpu: add syncpoint read map
For the sync-point read map:
 1. Added an nvgpu_mem buffer to the gk20a struct; its memory is
    allocated in gk20a_finalize_poweron() and freed in gk20a_remove().
 2. Added "u64 syncpt_ro_map_gpu_va" in the vm_gk20a struct
    for the read map in the vm.

Added nvgpu_quiesce() in nvgpu_remove() before freeing the
syncpoint read map to ensure that nvgpu is idle.

JIRA GPUT19X-2

Change-Id: I7cbfec57f0992682dd833a1b0d92d694bcaf1eb3
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1514338
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:19:16 -07:00
Terje Bergstrom
7885500a42 gpu: nvgpu: Change license for common files to MIT
Change license of OS independent source code files to MIT.

JIRA NVGPU-218

Change-Id: I1474065f4b552112786974a16cdf076c5179540e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-09-26 11:37:32 -07:00
Terje Bergstrom
bc87e8989d gpu: nvgpu: Remove support for old kernel version
Remove support for pre-4.4 kernels. This allows deleting the checks
for kernel version, and usage of linux/version.h.

Change-Id: I4d6cb30512ea164d27549f4f4d096e5931bb1379
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1543499
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-08-22 12:44:41 -07:00
Debarshi Dutta
98186ec2c2 gpu: nvgpu: Add wrapper over atomic_t and atomic64_t
- added wrapper structs nvgpu_atomic_t and nvgpu_atomic64_t over
  atomic_t and atomic64_t
- added nvgpu_atomic_* and nvgpu_atomic64_* APIs to access the above
  wrappers.

JIRA NVGPU-121

Change-Id: I61667bb0a84c2fc475365abb79bffb42b8b4786a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1533044
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-08-17 14:26:47 -07:00
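
A user-space model of the wrapper pattern described in the commit above, using
C11 atomics as a stand-in for the kernel's atomic_t. The nvgpu_atomic_* names
follow the commit text; the exact signatures are assumptions.

    #include <stdatomic.h>

    typedef struct nvgpu_atomic {
            atomic_int v;           /* stand-in for the wrapped atomic_t */
    } nvgpu_atomic_t;

    static inline void nvgpu_atomic_set(nvgpu_atomic_t *a, int i)
    {
            atomic_store(&a->v, i);
    }

    static inline int nvgpu_atomic_read(nvgpu_atomic_t *a)
    {
            return atomic_load(&a->v);
    }

    static inline int nvgpu_atomic_inc_return(nvgpu_atomic_t *a)
    {
            return atomic_fetch_add(&a->v, 1) + 1;
    }

Hiding the underlying type behind a struct keeps callers from touching it with
raw Linux atomic_*() helpers, which is what makes the wrapper portable.
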
Richard Zhao
7d584bf868 gpu: nvgpu: rename hw_chid to chid
hw_chid is a relative id for vgpu. For native it's the same as the hw id.
Rename it to chid to avoid confusion.

Jira VFND-3796

Change-Id: I1c7924da1757330ace715a7c52ac61ec9dc7065c
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master/r/1509530
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-06-29 22:34:35 -07:00
Konsta Holtta
f6c921ec97 gpu: nvgpu: bring back tegra idle registration
To make do_idle work when nvgpu is built as a module, reverse the order
of call dependencies for do_idle. Don't provide visible
gk20a_do_{idle,unidle}() functions for the kernel but instead call the
kernel for registering and unregistering pointers to them when the
driver loads and unloads.

Refactor the internal __gk20a_do_{idle,unidle} functions to take a
struct gk20a * instead of struct device *, and use the callback api for
providing that g instead of retrieving the plat device from device tree.

Bug 200290850

Change-Id: Ibef8b069302e547b298069cbb97734f461a10cc3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1493774
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-06-15 05:23:19 -07:00
Deepak Nibade
0ad7f1d9aa gpu: nvgpu: use nvgpu specific nvhost APIs
Remove use of the Linux specific header files
<linux/nvhost.h> and <linux/nvhost_ioctl.h>
and use the nvgpu specific header file <nvgpu/nvhost.h>
instead.
This is needed to remove all Linux dependencies
from the nvgpu driver.

Replace all nvhost_*() calls with
nvgpu_nvhost_*() calls from the new nvgpu library.

Remove the platform device pointer host1x_dev
from struct gk20a and add struct
nvgpu_nvhost_dev instead.

Jira NVGPU-29

Change-Id: Ia7af70602cfc16f9ccc380752538c05a9cbb8a67
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1489726
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2017-06-08 06:37:15 -07:00
seshendra Gadagottu
c0822cb22e gpu: nvgpu: add chip specific sync point support
Added support for chip specific sync point implementation.
Relevant fifo hal functions are added and updated for
legacy chips.

JIRA GPUT19X-2

Change-Id: I9a9c36d71e15c384b5e5af460cd52012f94e0b04
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: http://git-master/r/1258232
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-26 14:07:12 -07:00
Stephen Warren
2e338c77ea gpu: nvgpu: remove duplicate \n from log messages
nvgpu_log/info/warn/err() internally add a \n to the end of the message.
Hence, callers should not include a \n at the end of the message. Doing
so results in duplicate \n being printed, which ends up creating empty
log messages. Remove the duplicate \n from all err/warn messages.

Bug 1928311

Change-Id: I99362c5327f36146f28ba63d4e68181589735c39
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Reviewed-on: http://git-master/r/1487232
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-26 03:34:30 -07:00
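
A small user-space model of the problem: the logger appends its own newline, so
a trailing \n in the caller's format string produces an extra blank log line.
nvgpu_err() is named in the commit; toy_err() below only models its behaviour.

    #include <stdarg.h>
    #include <stdio.h>

    static void toy_err(const char *fmt, ...)
    {
            va_list ap;

            va_start(ap, fmt);
            vprintf(fmt, ap);
            va_end(ap);
            putchar('\n');          /* the logger adds the newline itself */
    }

    int main(void)
    {
            toy_err("failed to map buffer\n");      /* wrong: extra empty line */
            toy_err("failed to map buffer");        /* right: single newline */
            return 0;
    }
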
Konsta Holtta
ee9733e587 gpu: nvgpu: expose deterministic submit support
Add these bits in the gpu characteristics flags:

NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_NO_JOBTRACKING - fast
submits with no in-kernel job tracking are supported.

NVGPU_GPU_FLAGS_SUPPORT_DETERMINISTIC_SUBMIT_FULL - deterministic
submits also with job tracking and num_inflight_jobs set are supported.

Either of these may get disabled if the particular channel or submit
still requires features that block these.

Make gk20a_channel_sync_needs_sync_framework() take a gk20a pointer
instead of a channel pointer so that it can be called without a channel.
It does not need any per-channel data.

Bug 200291300

Change-Id: I5f82510b6d39b53bcf6f1006dd83bdd9053963a0
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1456845
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-05-05 07:54:18 -07:00
Terje Bergstrom
71af78d2c2 gpu: nvgpu: Move has_syncpts to gk20a
Copy has_syncpts to struct gk20a at probe time, and access it from
gk20a instead of platform_gk20a.

JIRA NVGPU-16

Change-Id: I50329e3a5141a62e6e9828e97ea0747abc1ce1ee
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1463545
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-26 09:14:26 -07:00
Alex Waterman
84dadb1a9a gpu: nvgpu: Move semaphore impl to nvgpu_mem
Use struct nvgpu_mem for DMA allocations (and the corresponding
nvgpu_dma_alloc_sys()) instead of custom rolled code. This migrates
away from using linux scatter gather tables directly. Instead this
is hidden in the nvgpu_mem struct. With this change the semaphore.c
code no longer has any direct Linux dependencies.

JIRA NVGPU-12
JIRA NVGPU-30

Change-Id: I92167c98aac9b413ae87496744dcee051cd60207
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1464081
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
2017-04-25 14:26:00 -07:00
Deepak Nibade
8929ab7533 gpu: nvgpu: clean up linux list includes
Remove linux list includes <linux/list.h>
and include <nvgpu/list.h> since we now use
nvgpu list APIs instead of linux APIs

Jira NVGPU-13

Change-Id: I59bd433a9bc5c15d4c40e6fe4b18cf44246ba3b2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1462080
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-04-19 12:15:57 -07:00
Terje Bergstrom
a0fa2b0258 gpu: nvgpu: Add wrapper nvgpu/bug.h
Add wrapper header file nvgpu/bug.h. It #includes <linux/bug.h>
in Linux.

JIRA NVGPU-13

Change-Id: I7bf02ba554333f7cbd79d72bd1cb423c81ebcb49
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1461545
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-13 08:56:06 -07:00
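
The wrapper-header pattern is small enough to show in full. The include guard
below is an assumption; the <linux/bug.h> include is from the commit text.

    /* include/nvgpu/bug.h -- sketch of the Linux variant of the wrapper. */
    #ifndef NVGPU_BUG_H
    #define NVGPU_BUG_H

    #include <linux/bug.h>  /* other OS builds would supply their own backend */

    #endif /* NVGPU_BUG_H */
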
Terje Bergstrom
44d5fb76aa gpu: nvgpu: Add wrapper nvgpu/atomic.h
Add wrapper header file nvgpu/atomic.h. It #includes <linux/atomic.h>
on Linux.

JIRA NVGPU-13

Change-Id: I6f2b3a04c964e7664b1f61b6073b643629bd99c5
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1460792
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
2017-04-12 07:01:12 -07:00
Konsta Holtta
1a4647272f gpu: nvgpu: remove fence dependency tracking
In preparation for better abstraction in job synchronization, drop
support for the dependency fences tracked via submit pre-fences in
semaphore-based syncs. This has only worked for semaphores, not nvhost
syncpoints, and hasn't really been used. The dependency was printed in
the sync framework's sync pt value string.

Remove also the userspace-visible gk20a_sync_pt_info which is not used
and depends on this feature (providing a duration since the dependency
fence's timestamp).

Jira NVGPU-43

Change-Id: Ia2b26502a9dc8f5bef5470f94b1475001f621da1
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1456880
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-11 09:57:21 -07:00
Terje Bergstrom
3ba374a5d9 gpu: nvgpu: gk20a: Use new error macro
gk20a_err() and gk20a_warn() require a struct device pointer,
which is not portable across operating systems. The new nvgpu_err()
and nvgpu_warn() macros take struct gk20a pointer. Convert code
to use the more portable macros.

JIRA NVGPU-16

Change-Id: Ia51f36d94c5ce57a5a0ab83b3c83a6bce09e2d5c
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1331694
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-04-10 19:04:19 -07:00
Deepak Nibade
92ab6c3bbc gpu: nvgpu: use nvgpu list for pending semaphore waits
Use nvgpu list APIs instead of linux list APIs
to store pending semaphore waits

Jira NVGPU-13

Change-Id: I42fc6c6233e39f475a939ddd6a81c0cda851b6bf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1454693
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2017-04-09 23:54:32 -07:00
Alex Waterman
b69020bff5 gpu: nvgpu: Rename gk20a_mem_* functions
Rename the functions used for mem_desc access to nvgpu_mem_*.

JIRA NVGPU-12

Change-Id: Ibfdc1112d43f0a125e4487c250e3f977ffd2cd75
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1323325
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-04-06 18:14:48 -07:00
Alex Waterman
24e1c7e0a7 gpu: nvgpu: Use new kmem API functions (misc)
Use the new kmem API functions in misc gk20a code. Some additional
modifications were also made:

  o  Add a struct gk20a pointer to gk20a_fence to enable proper
     kmem free usage.
  o  Add gk20a pointer to alloc_session() in dbg_gpu_gk20a.c to
     use kmem API for allocating a session.
  o  Plumb a gk20a pointer through the fence creation and deletion.
  o  Use statically allocated buffers for names in file creation.

Bug 1799159
Bug 1823380

Change-Id: I3678080e3ffa1f9bcf6934e3f4819a1bc531689b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1318323
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-30 12:36:09 -07:00
Terje Bergstrom
3e39798997 gpu: nvgpu: Remove unnecessary use of dev_name()
Move the name field from struct gpu_ops up to struct gk20a. The field
is not a function op, so it doesn't belong in gpu_ops.

Replace all uses of dev_name() with use of g->name when possible.

JIRA NVGPU-16

Change-Id: Ic6e99e39258cbf3bb7c806962cbbd7de5126688f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1328534
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-28 12:05:03 -07:00
Konsta Holtta
33f637585e gpu: nvgpu: split nvhost dependency on plat interface
Add CONFIG_TEGRA_GK20A_NVHOST and remove the TEGRA_GRHOST ||
TEGRA_HOST1X dependency in CONFIG_TEGRA_GK20A to allow using the iGPU
without the nvhost driver. Use the new config to guard syncpt-related
code.

Also make TEGRA_ACR depend on GK20A too so that it aligns properly under
gk20a in menuconfig.

Bug 1853519

Change-Id: I9e9b0a7915d000aae7930821627b7a01d08d3f5c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1321303
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-23 08:04:23 -07:00
Alex Waterman
4a94c135f0 gpu: nvgpu: Use new kmem API functions (channel)
Use the new kmem API functions in the channel and channel
related code.

Also delete the usage of kasprintf() since that must be paired
with a kfree(). Since the kasprintf() doesn't use the nvgpu kmem
machinery (and is Linux specific) instead use a small buffer
statically allocated on the stack.

Bug 1799159
Bug 1823380

Change-Id: Ied0183f57372632264e55608f56539861cc0f24f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1318312
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-22 18:37:10 -07:00
Konsta Holtta
dd0f3a061b gpu: nvgpu: avoid double-free of incr cmd
The call site (gk20a_submit_prepare_syncs) owns the incr_cmd buffer
passed to __gk20a_channel_semaphore_incr. Delete the free in the error
path of the latter to avoid freeing the same buffer twice.

Bug 1853519

Change-Id: I9b90ce7ebb17ac63992938c7f9fe90bbd139f85f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1321117
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-03-16 09:17:22 -07:00
Konsta Holtta
f1072a28be gpu: nvgpu: add worker for watchdog and job cleanup
Implement a worker thread to replace the delayed works in channel
watchdog and job cleanups. Watchdog runs by polling the channel states
periodically, and job cleanup is performed on channels that are appended
on a work queue consumed by the worker thread. Handling both of these
two in the same thread makes it impossible for them to cause a deadlock,
as has previously happened.

The watchdog takes references to channels during checking and possibly
recovering channels. Jobs in the cleanup queue have an additional
reference taken which is released after the channel is processed. The
worker is woken up from periodic sleep when channels are added to the
queue.

Currently, the queue is only used for job cleanups, but it is extendable
for other per-channel works too. The worker can also process other
periodic actions dependent on channels.

Neither the semantics of timeout handling nor those of job cleanups are
yet significantly changed - this patch only serializes them into one
background thread.

Each job that needs cleanup is tracked and holds a reference to its
channel and a power reference, and timeouts can only be processed on
channels that are tracked, so the thread will always be idle if the
system is going to be suspended; there is currently no need to
explicitly suspend or stop it.

Bug 1848834
Bug 1851689
Bug 1814773
Bug 200270332
Jira NVGPU-21

Change-Id: I355101802f50841ea9bd8042a017f91c931d2dc7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1297183
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-03-02 17:51:03 -08:00
Deepak Nibade
8ee3aa4b31 gpu: nvgpu: use common nvgpu mutex/spinlock APIs
Instead of using the Linux APIs for mutexes and spinlocks
directly, use the new APIs defined in <nvgpu/lock.h>.

Replace the Linux specific mutex/spinlock declaration,
init, lock, and unlock APIs with the new APIs,
e.g.
struct mutex is replaced by struct nvgpu_mutex and
mutex_lock() is replaced by nvgpu_mutex_acquire().

Also include <nvgpu/lock.h> instead of including
<linux/mutex.h> and <linux/spinlock.h>.

Add explicit nvgpu/lock.h includes to the below
files to fix compilation failures:
gk20a/platform_gk20a.h
include/nvgpu/allocator.h
Jira NVGPU-13

Change-Id: I81a05d21ecdbd90c2076a9f0aefd0e40b215bd33
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1293187
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-02-22 04:15:02 -08:00
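
A user-space model of the locking abstraction this commit introduces, with
pthread_mutex_t standing in for the kernel's struct mutex. The
nvgpu_mutex_acquire() name is from the commit text; the init/release names
are assumptions.

    #include <pthread.h>

    struct nvgpu_mutex {
            pthread_mutex_t lock;   /* OS primitive hidden behind the wrapper */
    };

    static int nvgpu_mutex_init(struct nvgpu_mutex *m)
    {
            return pthread_mutex_init(&m->lock, NULL);
    }

    static void nvgpu_mutex_acquire(struct nvgpu_mutex *m)
    {
            pthread_mutex_lock(&m->lock);
    }

    static void nvgpu_mutex_release(struct nvgpu_mutex *m)
    {
            pthread_mutex_unlock(&m->lock);
    }

Code built on this wrapper compiles unchanged whichever OS primitive sits
inside struct nvgpu_mutex, which is the point of routing all locking through
<nvgpu/lock.h>.
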
Alex Waterman
e7a0c0ae8b gpu: nvgpu: Move from gk20a_ to nvgpu_ in semaphore code
Change the prefix in the semaphore code to 'nvgpu_' since this code
is global to all chips.

Bug 1799159

Change-Id: Ic1f3e13428882019e5d1f547acfe95271cc10da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1284628
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
2017-02-13 18:15:03 -08:00
Alex Waterman
aa36d3786a gpu: nvgpu: Organize semaphore_gk20a.[ch]
Move semaphore_gk20a.c to drivers/gpu/nvgpu/common/ since the semaphore
code is common to all chips.

Move the semaphore_gk20a.h header file to drivers/gpu/nvgpu/include/nvgpu
and rename it to semaphore.h. Also update all places where the header
is included to use the new path.

This revealed an odd location for the enum gk20a_mem_rw_flag. This should
be in the mm headers. As a result many places that did not need anything
semaphore related had to include the semaphore header file. Fixing this
oddity allowed the semaphore include to be removed from many C files that
did not need it.

Bug 1799159

Change-Id: Ie017219acf34c4c481747323b9f3ac33e76e064c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1284627
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2017-02-13 18:14:45 -08:00
Alex Waterman
9da40c79fc gpu: nvgpu: Store pending sema waits
Store pending sema waits so that they can be explicitly handled when
the driver dies. If the sema_wait is freed before the pending wait is
either handled or canceled, problems occur.

Internally the sync_fence_wait_async() function uses the kernel timers.
That uses a linked list of possible events, which means every so often
the kernel iterates through this list. If the list node that is in the
sync_fence_waiter struct is freed before it can be removed from the
pending timers list, then the kernel timers list can be corrupted. When
the kernel then iterates through this list, crashes and other related
problems can happen.

Bug 1816516
Bug 1807277

Change-Id: Iddc4be64583c19bfdd2d88b9098aafc6ae5c6475
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1250025
(cherry picked from commit 01889e21bd31dbd7ee85313e98079138ed1d63be)
Reviewed-on: http://git-master/r/1261920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2016-12-19 15:40:19 -08:00
Terje Bergstrom
d29afd2c9e gpu: nvgpu: Fix signed comparison bugs
Fix small problems related to signed versus unsigned comparisons
throughout the driver. Bump up the warning level to prevent such
problems from occurring in the future.

Change-Id: I8ff5efb419f664e8a2aedadd6515ae4d18502ae0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1252068
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2016-11-16 21:35:36 -08:00
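
A self-contained example of the comparison bug class fixed here (generic C,
not actual nvgpu code); a bumped-up warning level (e.g. -Wsign-compare)
flags it.

    #include <stdio.h>

    int main(void)
    {
            int err = -1;                   /* e.g. an error return code */
            unsigned int limit = 16;

            /*
             * 'err' is converted to unsigned int here, so -1 becomes a huge
             * value and the comparison takes the unexpected branch.
             */
            if (err < limit)
                    printf("err < limit\n");        /* never printed */
            else
                    printf("err >= limit due to the implicit conversion\n");

            return 0;
    }
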
Sachit Kadle
ab593b9ccd gpu: nvgpu: make deferred clean-up conditional
This change makes the invocation of the deferred job clean-up
mechanism conditional. For submissions that require job tracking,
deferred clean-up is only required if any of the following
conditions are met:

1) Channel's deterministic flag is not set
2) Rail-gating is enabled
3) Channel WDT is enabled
4) Buffer refcounting is enabled
5) Dependency on Sync Framework

In case deferred clean-up is not needed, we clean up
a single job tracking resource in the submit path. For
deterministic channels, we do not allow deferred clean-up to
occur and fail any submits that require it.

Bug 1795076

Change-Id: I4021dffe8a71aa58f12db6b58518d3f4021f3313
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1220920
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
(cherry picked from commit b09f7589d5ad3c496e7350f1ed583a4fe2db574a)
Reviewed-on: http://git-master/r/1223941
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-10-21 11:23:53 -07:00
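
The five conditions above fold naturally into a single predicate. The field
and struct names below are assumed for illustration; only the conditions
themselves come from the commit text.

    #include <stdbool.h>

    /* Toy stand-ins for the real channel/submit state. */
    struct toy_channel {
            bool deterministic;
            bool railgating_enabled;
            bool wdt_enabled;
    };

    struct toy_submit {
            bool buffer_refcounting;
            bool uses_sync_framework;
    };

    static bool needs_deferred_cleanup(const struct toy_channel *ch,
                                       const struct toy_submit *sub)
    {
            return !ch->deterministic ||            /* 1) not deterministic    */
                   ch->railgating_enabled ||        /* 2) rail-gating enabled  */
                   ch->wdt_enabled ||               /* 3) channel WDT enabled  */
                   sub->buffer_refcounting ||       /* 4) buffer refcounting   */
                   sub->uses_sync_framework;        /* 5) sync framework dep   */
    }
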
Sachit Kadle
63e8592e06 gpu: nvgpu: use inplace allocation in sync framework
This change is the first of a series of changes to
support the usage of pre-allocated job tracking resources
in the submit path. With this change, we still maintain a
dynamically-allocated joblist, but make the necessary changes
in the channel_sync & fence framework to use in-place
allocations. Specifically, we:

1) Update channel sync framework routines to take in
pre-allocated priv_cmd_entry(s) & gk20a_fence(s) rather
than dynamically allocating themselves

2) Move allocation of priv_cmd_entry(s) & gk20a_fence(s)
to gk20a_submit_prepare_syncs

3) Modify the fence framework to have separate allocation
and init APIs. We expose allocation as a separate API, so
the client can allocate the object before passing it into
the channel sync framework.

4) Fix clean_up logic in channel sync framework

Bug 1795076

Change-Id: I96db457683cd207fd029c31c45f548f98055e844
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1206725
(cherry picked from commit 9d196fd10db6c2f934c2a53b1fc0500eb4626624)
Reviewed-on: http://git-master/r/1223933
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2016-10-20 08:14:04 -07:00
Alex Waterman
cab7514f4b gpu: nvgpu: Fix invalid test for signaled sync_fences
Fix a check that was backwards for signaled sync_fences. This would cause
the code to not wait on some sync_fences that had not already signaled and
wait on other fences that had signaled.

Bug 1787348

Reviewed-on: http://git-master/r/1204710
(cherry picked from commit 75b94bb30f79c3a7a9992773dc8a93b507121006)
Change-Id: I00b0f8a373a9954a5ad9ab31aff6423e91574153
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221044
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-09-15 21:58:44 -07:00
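
A generic before/after of the inverted test described above, with toy helpers
standing in for the actual sync_fence code.

    #include <stdbool.h>

    struct toy_pt { bool signaled; };

    static bool pt_is_signaled(const struct toy_pt *pt) { return pt->signaled; }
    static void wait_for_pt(const struct toy_pt *pt) { (void)pt; /* block */ }

    static void wait_if_needed(const struct toy_pt *pt)
    {
            /* The buggy check waited when pt_is_signaled(pt) was true. */
            if (!pt_is_signaled(pt))
                    wait_for_pt(pt);        /* fixed: only not-yet-signaled fences */
    }
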
Alex Waterman
7b8cbd2be3 gpu: nvgpu: Greatly simplify the semaphore detection
Greatly simplify the gpu semaphore detection in sync_fences and make
it more robust. Instead of using a magic number, use the parent
timeline of sync_pts.

This will also work with multi-GPU setups using nvgpu since the
timeline ops pointer will be the same across all instances of
nvgpu.

Bug 1732449

Reviewed-on: http://git-master/r/1203834
(cherry picked from commit 66eeb577eae5d10741fd15f3659e843c70792cd6)
Change-Id: I4c6619d70b5531e2676e18d1330724e8f8b9bcb3
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221042
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-09-15 21:58:37 -07:00
Alex Waterman
9bd76b7fa0 gpu: nvgpu: Optimize sync fence creation
Only create sync-fences in the semaphore synchronization path
when they are actually needed (i.e. requested by userspace).

Bug 1795076

Reviewed-on: http://git-master/r/1201564
(cherry picked from commit dc52d424a839e6c064c02b7f02905dd6a59a50af)
Change-Id: Ieac6aef415678d4ea982683a955897c64959436e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1221041
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2016-09-15 21:58:36 -07:00