Add new API gr_gk20a_get_ctx_id() to extract the
context id from a GR context
Bug 200156699
Change-Id: If0e8887a9a6b139cd795bf03f5def64fd664d12b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/927130
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
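A minimal sketch of how such a ctx-id read API could look (the mapping
helpers, the offset constant and the ch_ctx field layout are illustrative
assumptions, not the actual nvgpu implementation):

    /* Hypothetical sketch: read the context id out of the GR context
     * image. map_gr_ctx()/unmap_gr_ctx(), mem_rd32() and CTX_ID_OFFSET
     * are assumed names. */
    int gr_gk20a_get_ctx_id(struct gk20a *g, struct channel_gk20a *c,
                            u32 *ctx_id)
    {
        void *ctx_ptr;

        ctx_ptr = map_gr_ctx(g, c->ch_ctx.gr_ctx);    /* assumed helper */
        if (!ctx_ptr)
            return -ENOMEM;

        /* The context id lives at a fixed offset in the ctxsw image. */
        *ctx_id = mem_rd32(ctx_ptr, CTX_ID_OFFSET);   /* assumed helper/offset */

        unmap_gr_ctx(g, c->ch_ctx.gr_ctx, ctx_ptr);   /* assumed helper */
        return 0;
    }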
Add the following APIs to suspend or resume a single SM:
gk20a_suspend_single_sm()
gk20a_resume_single_sm()
Also, update gk20a_suspend_all_sms() to make it
more generic by passing global_esr_mask and a
check_errors flag as parameters
Bug 200156699
Change-Id: If40f4bcae74a8132673b4dca10b7d9898f23c164
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/925884
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
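A rough sketch of how the per-SM suspend and the generalized all-SM variant
described above could relate (the register helpers and GPC/TPC count fields
below are placeholders, not the exact gk20a interface):

    /* Sketch only: stop_sm(), wait_for_sm_lockdown() and the gpc/tpc
     * count fields are assumed names. */
    void gk20a_suspend_single_sm(struct gk20a *g, u32 gpc, u32 tpc,
                                 u32 global_esr_mask, bool check_errors)
    {
        stop_sm(g, gpc, tpc);                    /* assumed: issue stop trigger */
        wait_for_sm_lockdown(g, gpc, tpc,        /* assumed: poll lockdown,     */
                             global_esr_mask,    /* optionally treating masked  */
                             check_errors);      /* ESR bits as errors          */
    }

    void gk20a_suspend_all_sms(struct gk20a *g, u32 global_esr_mask,
                               bool check_errors)
    {
        u32 gpc, tpc;

        for (gpc = 0; gpc < g->gr.gpc_count; gpc++)          /* assumed fields */
            for (tpc = 0; tpc < g->gr.tpc_count; tpc++)
                gk20a_suspend_single_sm(g, gpc, tpc,
                                        global_esr_mask, check_errors);
    }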
Support preprocessing of SM exceptions if the API
pointer pre_process_sm_exception() is defined
Also, expose some common APIs
Bug 200156699
Change-Id: I1303642c1c4403c520b62efb6fd83e95eaeb519b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/925883
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
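The hook might be consulted from the SM exception handler roughly as below
(a sketch; the exact parameter list of the hook and the early_exit/
ignore_debugger locals are assumptions):

    /* Sketch: let the chip-specific hook look at the SM exception first;
     * if it signals early exit, skip the common handling. */
    if (g->ops.gr.pre_process_sm_exception) {
        err = g->ops.gr.pre_process_sm_exception(g, gpc, tpc,
                                                 global_esr, warp_esr,
                                                 fault_ch, &early_exit,
                                                 &ignore_debugger);
        if (err)
            return err;
        if (early_exit)
            return 0;
    }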
In gk20a_channel_timeout_handler(), the following deadlock scenario
is possible:
thread 1:
- take the global lock g->ch_wdt_lock
- identify the timed out channel (as ch1)
- check the engine status, which is stuck
- identify the failing channel on the engine as ch2
- we need to trigger recovery with ch2
- as part of recovery, call channel_abort() for ch2
- in channel_abort(), we wait to cancel the timer wq
- but the timer wq for ch2 never completes due to thread 2
thread 2:
- ch2 has already timed out
- to process this, we wait for the global lock g->ch_wdt_lock
- this lock needs to be released by thread 1
To fix this, cancel the timer (through a flag) of ch2 (the failing
channel on the engine) before triggering recovery on that channel
Bug 200164753
Change-Id: Idb42d01c8440a53f43cb5e87e41f1c283f7e8fcf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/929924
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
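In outline, the fix amounts to something like the following in the timeout
handler (a sketch; the cancellation flag and the recovery call are assumed
names, not the exact nvgpu code):

    /* Sketch: cancel ch2's watchdog before recovery so its own timeout
     * worker bails out instead of blocking on g->ch_wdt_lock forever. */
    failing_ch->timeout.initialized = false;   /* assumed cancellation flag */
    trigger_recovery(g, failing_ch);           /* placeholder for the recovery call */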
In gk20a_channel_timeout_handler(), we currently disable
all engine activity before checking for fence completion
and before we identify the timed out channel.
But disabling all engine activity could be overkill for
this process.
Also, as part of disabling engine activity we preempt
the channel on the engine. But it is possible that the
channel preemption times out since the channel has already
timed out, and this can lead to races and deadlock.
Hence, instead of disabling all engine activity, just
disable context switching, which does the same trick
Bug 1716062
Change-Id: I596515ed670a2e134f7bcd9758488a4aa0bf16f7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/929421
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
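Schematically, the reworked handler brackets the check with a ctxsw
disable/enable pair instead of disabling engines (a sketch, assuming a
gr_gk20a_disable_ctxsw()/gr_gk20a_enable_ctxsw() pair is available; the
other helpers are placeholders):

    /* Sketch of the reworked timeout handler core. */
    gr_gk20a_disable_ctxsw(g);                    /* freeze ctxsw only */

    if (!fence_completed(ch)) {                   /* assumed helper */
        failing_ch = find_channel_on_engine(g);   /* assumed helper */
        trigger_recovery(g, failing_ch);          /* assumed helper */
    }

    gr_gk20a_enable_ctxsw(g);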
Interleave all high priority channels between all other channels.
This reduces the latency for high priority work when a lot of lower
priority work is present, imposing an upper bound on the latency.
Change the default high priority timeslice from 5.2 ms to 3.0 ms in
the process, to prevent long-running high priority apps from hogging
the GPU too much.
Introduce a new debugfs node to enable/disable high priority
channel interleaving. It is currently enabled by default.
Add a new runlist max-length register definition, used for allocating
a suitably sized runlist.
Limit the number of interleaved channels to 32.
This change reduces the maximum time a lower priority job runs
(one timeslice) before we check that high priority jobs are running.
Tested with gles2_context_priority (still passes)
Basic sanity testing is done with graphics_submit
(one app is high priority)
Also more functional testing using lots of parallel runs with:
NVRM_GPU_CHANNEL_PRIORITY=3 ./gles2_expensive_draw
--drawsperframe 20000 --triangles 50 --runtime 30 --finish
plus multiple:
NVRM_GPU_CHANNEL_PRIORITY=2 ./gles2_expensive_draw
--drawsperframe 20000 --triangles 50 --runtime 30 --finish
Prior to this change, the relative performance between
high priority work and normal priority work comes down
to the timeslice value. This means that when there are many
low priority channels, the high priority work will still
drop quite a lot. But with this change, the high priority
work will get roughly half of the entire GPU time, meaning
that after the initial drop in performance, it is less likely
to degrade further as more apps run on the system.
This change makes a large step towards real priority levels.
It is not perfect and there are no guarantees on anything,
but it is a step forward without any additional CPU overhead
or other complications. It will also serve as a baseline to
judge other algorithms against.
Support for priorities with TSG is future work.
Support for interleaving mid + high priority channels,
instead of just high, is also future work.
Bug 1419900
Change-Id: I0f7d0ce83b6598fe86000577d72e14d312fdad98
Signed-off-by: Peter Pipkorn <ppipkorn@nvidia.com>
Reviewed-on: http://git-master/r/805961
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
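The interleaving itself can be pictured with the following self-contained
sketch (plain arrays of channel ids for illustration; the real runlist
construction works on channel lists and hardware runlist entries):

    /* Sketch: build a runlist where every high priority channel appears
     * once between each pair of other channels, bounding how long a
     * lower priority channel can run before high priority work is
     * scheduled again. */
    #define MAX_INTERLEAVED 32

    static size_t build_interleaved_runlist(const u32 *high, size_t n_high,
                                            const u32 *normal, size_t n_normal,
                                            u32 *out)
    {
        size_t i, j, n = 0;

        if (n_high > MAX_INTERLEAVED)
            n_high = MAX_INTERLEAVED;   /* limit interleaved channels to 32 */

        for (i = 0; i < n_normal; i++) {
            for (j = 0; j < n_high; j++)
                out[n++] = high[j];
            out[n++] = normal[i];
        }
        /* Trailing pass so high priority work also follows the last one. */
        for (j = 0; j < n_high; j++)
            out[n++] = high[j];

        return n;
    }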
Add a timeout to detect dead semaphore acquires. The worst case is when
ACQUIRE_SWITCH is disabled: the semaphore acquire will poll and
consume full GPU timeslices.
The timeout value is set to half of the channel WDT.
Bug 1636800
Change-Id: Ida6ccc534006a191513edf47e7b82d4b5b758684
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/928827
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
On Maxwell, comptaglines are assigned per 128k, but the preferred big
page size for graphics is 64k. Bit 16 of the GPU VA is used to
determine which half of the comptagline is used.
This creates problems if user space wants to map a page multiple times
and to arbitrary GPU VAs. In one mapping the page might be mapped to
the lower half of a 128k comptagline, and in another mapping the page
might be mapped to the upper half.
Turn on the mode where the MSB of the comptagline in the PTE is used
instead of bit 16 for the comptagline lower/upper half selection.
Bug 1704834
Change-Id: If87e8f6ac0fc9c5624e80fa1ba2ceeb02781355b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/924322
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Disable all secure allocations on linsim by returning
an error from gk20a_tegra_secure_page_alloc().
With this failure, no more secure allocations will be
done from nvgpu.
Bug 200163671
Change-Id: I26604e45a684dde29c092dc34cc89259f5de5d91
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/928280
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
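A minimal sketch of the early-out described above (assuming
tegra_platform_is_linsim() as the platform check; the normal allocation
path is elided):

    /* Sketch: refuse secure page allocation on linsim so nvgpu never
     * attempts secure allocations there. */
    int gk20a_tegra_secure_page_alloc(struct platform_device *pdev)
    {
        if (tegra_platform_is_linsim())      /* assumed platform check */
            return -EINVAL;

        /* ... normal secure page allocation ... */
        return 0;
    }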
Fixed the following sparse warnings by restricting the scope of the
APIs to the file:
- gr_gk20a.c:7273: warning: symbol 'gr_gk20a_bpt_reg_info' was not declared.
Should it be static?
- gr_gm20b.c:1053: warning: symbol 'gr_gm20b_bpt_reg_info' was not declared.
Should it be static?
Bug 200088648
Change-Id: I63bba55b1432e4284c9074d2729a176f1767a83a
Signed-off-by: Amit Sharma <amisharma@nvidia.com>
Reviewed-on: http://git-master/r/842260
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
The current sequence in gk20a_disable_channel() is
- disable channel in gk20a_channel_abort()
- adjust pending fence in gk20a_channel_abort()
- preempt channel
But this leads to scenarios where the syncpoint has
min > max value
Hence, to fix this, change the sequence in gk20a_disable_channel() to
- disable channel in gk20a_channel_abort()
- preempt channel in gk20a_channel_abort()
- adjust pending fence in gk20a_channel_abort()
If gk20a_channel_abort() is called from another API where
preemption is not needed, use the channel_preempt
flag and do not preempt the channel in those cases
Bug 1683059
Change-Id: I4d46d4294cf8597ae5f05f79dfe1b95c4187f2e3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/921290
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
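In outline, the reordered abort might look like the following (a sketch;
only the channel_preempt parameter comes from this change, the step
helpers are placeholders):

    /* Sketch of the reordered sequence. */
    void gk20a_channel_abort(struct channel_gk20a *ch, bool channel_preempt)
    {
        disable_channel_hw(ch);          /* assumed helper: stop the channel */

        if (channel_preempt)
            preempt_channel(ch);         /* assumed helper: preempt off the engine */

        adjust_pending_fences(ch);       /* assumed helper: fix up syncpt min/max */
    }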
In gk20a_channel_abort(), we disable the channel and
adjust the fences, and then we call gk20a_channel_update()
only if we released the post-fence semaphore.
Instead, always call gk20a_channel_update() to ensure
that all the cleanup after fence completion is always done
Bug 1683059
Change-Id: Ife00ba43e808c0833264d79c98c74417ffadf802
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/842999
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
When closing a channel, disable and preempt it immediately instead of
waiting for it to finish all its work.
Bug 1683059
Change-Id: Ia5f5fc6a072dc3ddb1e9bf63534814ff0a60b5b4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/836746
Copy the necessary ctx patching prologue and epilogue from patch_write
into gr_gk20a_exec_ctx_ops, next to the mapping of the gr ctx and around
the loop that applies patches multiple times, in order to minimize the
number of maps/unmaps of the channel's patch context.
Bug 200075565
Change-Id: I7125be5c778192d639f0bbed1731bb900c7015da
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
(cherry picked from commit TODO-FILL-THIS-WHEN-MERGED)
Reviewed-on: http://git-master/r/839203
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
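Schematically, the change hoists the patch-context map/unmap out of the
loop, roughly as below (a sketch; gr_gk20a_ctx_patch_write_begin/end are
taken as the prologue/epilogue helpers, while the op array type and loop
body are illustrative):

    /* Sketch: map the patch ctx once, apply all patch writes, unmap once. */
    static int exec_patch_ops(struct gk20a *g, struct channel_ctx_gk20a *ch_ctx,
                              struct ctx_op *ops, u32 num_ops)  /* assumed type */
    {
        u32 i;
        int err;

        err = gr_gk20a_ctx_patch_write_begin(g, ch_ctx);   /* prologue: map */
        if (err)
            return err;

        for (i = 0; i < num_ops; i++)
            gr_gk20a_ctx_patch_write(g, ch_ctx, ops[i].offset, ops[i].value,
                                     true /* patch */);

        gr_gk20a_ctx_patch_write_end(g, ch_ctx);           /* epilogue: unmap */
        return 0;
    }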
Currently, we create a sync_fence (from nvhost_sync_create_fence())
for every submit, but not all submits request a sync_fence.
Also, the nvhost_sync_create_fence() API takes about one third of the
total submit path.
Hence, to optimize, allocate a sync_fence only when the user
explicitly asks for it using
(NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET &&
NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE)
Also, in the CDE path from gk20a_prepare_compressible_read(),
we reuse the existing fence stored in "state", which can
result in not returning a sync_fence_fd when the user asked
for it.
Hence, force allocation of a sync_fence when the job submission
comes from the CDE path.
Bug 200141116
Change-Id: Ia921701bf0e2432d6b8a5e8b7d91160e7f52db1e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/812845
(cherry picked from commit 5fd47015eeed00352cc8473eff969a66c94fee98)
Reviewed-on: http://git-master/r/837662
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
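The resulting decision about when to build a sync_fence can be summarized
as follows (a sketch; need_sync_fence and from_cde_path are illustrative
local names, while the two flags come from the nvgpu submit interface):

    /* Sketch: only build a sync_fence when the user asked for a fence fd
     * of the sync_fence flavour, or when the submit originates from the
     * CDE path, which relies on getting a sync_fence back. */
    bool need_sync_fence = false;

    if ((flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET) &&
        (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE))
        need_sync_fence = true;

    if (from_cde_path)      /* illustrative flag for CDE-originated submits */
        need_sync_fence = true;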
We currently call check_gp_put() and update_gp_get()
in the submit path, and together these take about 5 us:
check_gp_put() - 3.5 us
update_gp_get() - 1.5 us
This book-keeping can be moved to gk20a_channel_update()
to save some submit time.
Note that check_gp_put() needs to be done inside the submit
lock.
Bug 200141116
Change-Id: I276400111be0421eb673695e2f2899ff52e344b4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/839232
(cherry picked from commit 289617e8bf01bde9aab45dfa3a1c6a1241e6eb78)
Reviewed-on: http://git-master/r/839713
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
If we could not find the channel for a GR interrupt, we skipped
resetting the engine. Change the logic to do the reset anyway, and find
the channel via the FIFO engine status.
Bug 1706962
Change-Id: I8a0d283b36d88ea1cec541dbf90db7929104ad7c
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/839465
The order of SM locking and register reads has been changed.
Also, the functions have been implemented based on gk20a
and gm20b.
Change-Id: Iaf720d088130f84c4b2ca318d9860194c07966e1
Signed-off-by: sujeet baranwal <sbaranwal@nvidia.com>
Signed-off-by: ashutosh jain <ashutoshj@nvidia.com>
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/837236
This patch modifies TegraDRM to compile on the NVIDIA downstream kernel.
Currently, buffers are pinned to the virtual DRM device. In upstream this
is fine, since the buffers will be remapped into the TegraDRM maintained
IOMMU domain. However, the NVIDIA downstream kernel relies on the DMA
mapping API and hence requires that the buffers are pinned to the real
hardware device.
In addition, this patch modifies code to use downstream
power-management APIs if the downstream option is enabled.
Bug 1698151
Change-Id: I1cb7a1a0ad0ae767c48bfecb608d899e484d6b40
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/823640
Add IOCTL NVGPU_IOCTL_CHANNEL_WDT to disable/enable the
watchdog per channel.
Also, if the watchdog is disabled, we currently schedule
the worker with MAX timeout.
Instead, do not schedule any worker if the
watchdog is disabled.
Bug 1683059
Bug 1700277
Change-Id: I7f6bec84adeedb74e014ed6d1471317b854df84c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/837962
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
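The ioctl handling might look roughly like this (a sketch; the args
structure layout, the enable/disable constants, the wdt_enabled field and
the timeout-start helper are assumptions):

    /* Sketch: record the per-channel watchdog state from the ioctl. */
    static int gk20a_channel_set_wdt_status(struct channel_gk20a *ch,
                                            struct nvgpu_channel_wdt_args *args)
    {
        if (args->wdt_status == NVGPU_IOCTL_CHANNEL_ENABLE_WDT)   /* assumed */
            ch->wdt_enabled = true;
        else if (args->wdt_status == NVGPU_IOCTL_CHANNEL_DISABLE_WDT)
            ch->wdt_enabled = false;
        return 0;
    }

    /* On job submission: arm the timeout worker only if enabled, instead
     * of scheduling it with a MAX timeout. */
    if (ch->wdt_enabled)
        gk20a_channel_timeout_start(ch);   /* assumed helper */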
Add a new operation g->ops.mm.set_debug_mode and let other places
that set debug mode call this callback.
This prepares for adding a vgpu set-mmu-debug-mode hook.
JIRA VFND-1005
Bug 1594604
Change-Id: I1d227a0c0f96adb0035ae16ae1f4fbfa739bf0a7
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/833497
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
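The HAL indirection could be wired up along these lines (a sketch; the
native implementation and init function names are assumptions):

    /* Sketch: route all MMU debug-mode changes through the HAL op so a
     * vgpu backend can later plug in its own hook. */
    void gk20a_init_mm(struct gpu_ops *gops)
    {
        gops->mm.set_debug_mode = gk20a_mm_mmu_set_debug_mode;  /* assumed name */
    }

    /* Callers now use the op instead of writing registers directly: */
    g->ops.mm.set_debug_mode(g, enable);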
We currently set "aggressive_destroy" flag to destroy
sync object statically and for each sync object
Move this flag to per-platform structure so that it
can be set per-platform for all the sync objects
Also, set the default value of this flag as "false"
and set it to "true" once we have more than 64
channels in use
Bug 200141116
Change-Id: I1bc271df4f468a4087a06a27c7289ee0ec3ef29c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/822041
(cherry picked from commit 98741e7e88066648f4f14490c76b61dbff745103)
Reviewed-on: http://git-master/r/835800
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
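A rough illustration of the per-platform flag and the 64-channel threshold
(a sketch; the structure and field names here are assumptions):

    /* Sketch: aggressive sync destruction becomes a platform-level policy,
     * turned on once channel usage crosses a threshold. */
    struct gk20a_platform {
        bool aggressive_sync_destroy;   /* default false */
        /* ... */
    };

    /* When channels are opened/closed, re-evaluate the policy: */
    platform->aggressive_sync_destroy = (f->used_channels > 64);  /* assumed field */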
We currently allocate private command buffers (wait_cmd
and incr_cmd) before submitting a job, but we never
free them explicitly.
When the channel's private command queue is full, we
then try to recycle/remove free command buffers.
But this recycling happens in the submit path, and
hence that particular submit takes much longer.
Rework this as follows:
- add references to the command buffers to the job structure
- when the job completes, free the command buffers
explicitly
- remove the code to recycle buffers, since it should
no longer be needed
Note that command buffers need to be freed in the order of
their allocation. Ensure this with an error print before
freeing the command buffer entry.
Bug 200141116
Bug 1698667
Change-Id: Id4b69429d7ad966307e0d122a71ad55076684307
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/827638
(cherry picked from commit c6cefd69b71c9b70d6df5343b13dfcfb3fa99598)
Reviewed-on: http://git-master/r/835802
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
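A sketch of the completion-side freeing described above (the queue/entry
field names, the ordering check and the free helper are assumptions):

    /* Sketch: free a job's private command buffer entries when the job
     * completes, warning if they come back out of allocation order. */
    static void free_job_priv_cmdbufs(struct channel_gk20a *c,
                                      struct channel_gk20a_job *job)
    {
        /* Entries must be released in allocation order so the queue's
         * get pointer only moves forward. */
        if (job->wait_cmd && job->wait_cmd->off != c->priv_cmd_q.get)  /* assumed */
            gk20a_err(dev_from_gk20a(c->g),
                      "priv cmd entry freed out of order");

        free_priv_cmd_entry(c, job->wait_cmd);    /* assumed helper */
        free_priv_cmd_entry(c, job->incr_cmd);    /* assumed helper */
    }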
In the job submission path, we always take a refcount on all
the mapped buffers to safeguard against the case where user
space releases a buffer early.
But if user space itself does proper buffer management, the
kernel need not take refcounts on all the buffers - which is
also an overhead in the submit path.
Hence, provide a new submit flag
NVGPU_SUBMIT_GPFIFO_FLAGS_SKIP_BUFFER_REFCOUNTING to
optionally skip taking refcounts on the buffers.
Also, if we do not take refcounts, then there is no need to
drop any refcounts in gk20a_channel_update() either.
Bug 1698667
Bug 200141116
Change-Id: I81bb7a03240300b691c70bcec04ea1badd5934f4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/824718
(cherry picked from commit 8c8978fa303ec4e6db0233becdbdcbad4a248173)
Reviewed-on: http://git-master/r/835801
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
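The flag could be honoured in both paths roughly as follows (a sketch; the
buffer get/put helpers and job fields are assumed names):

    /* Sketch: skip taking buffer refs when the flag is set, and only drop
     * refs on completion if any were taken. */
    bool skip_refcounting =
        !!(flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SKIP_BUFFER_REFCOUNTING);
    struct mapped_buffer_node **mapped_buffers = NULL;    /* assumed type */
    int num_mapped_buffers = 0;

    if (!skip_refcounting)
        err = gk20a_vm_get_buffers(vm, &mapped_buffers,   /* assumed helper */
                                   &num_mapped_buffers);

    /* Completion side (gk20a_channel_update): only drop refs if taken. */
    if (job->num_mapped_buffers)
        gk20a_vm_put_buffers(vm, job->mapped_buffers,     /* assumed helper */
                             job->num_mapped_buffers);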
In the GPU job submit path gk20a_ioctl_channel_submit_gpfifo(),
we currently allocate a temporary gpfifo, copy the user space
gpfifo content into this temporary buffer, and then copy the
temp buffer content into the channel's gpfifo.
The allocation/copy/free of the temporary buffer adds overhead.
Rewrite this sequence so that gk20a_submit_channel_gpfifo()
can receive either a pre-filled gpfifo or a pointer to the
user-provided args, and then directly copy the user-provided
gpfifo into the channel's gpfifo.
Also, if command buffer tracing is enabled, we still need
to copy the user-provided gpfifo into a temporary buffer for
reading, but that should not cause overhead in real world use
cases.
Bug 200141116
Change-Id: I7166c9271da2694059da9853ab8839e98457b941
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/823386
(cherry picked from commit 3e0702db006c262dd8737a567b8e06f7ff005e2c)
Reviewed-on: http://git-master/r/835799
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
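The fast path then boils down to a copy_from_user() straight into the
channel's gpfifo ring (a sketch; the gpfifo field names are illustrative,
not the exact nvgpu layout):

    /* Sketch: copy the user-provided gpfifo entries directly into the
     * channel's gpfifo, splitting the copy if it wraps the ring. */
    u32 start = c->gpfifo.put;                      /* assumed fields */
    u32 end = start + num_entries;

    if (end > c->gpfifo.entry_num) {
        u32 first = c->gpfifo.entry_num - start;

        if (copy_from_user(gpfifo_cpu_va + start, user_gpfifo,
                           first * sizeof(struct nvgpu_gpfifo)))
            return -EFAULT;
        if (copy_from_user(gpfifo_cpu_va, user_gpfifo + first,
                           (num_entries - first) * sizeof(struct nvgpu_gpfifo)))
            return -EFAULT;
    } else {
        if (copy_from_user(gpfifo_cpu_va + start, user_gpfifo,
                           num_entries * sizeof(struct nvgpu_gpfifo)))
            return -EFAULT;
    }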
This patch adds wrappers so that nvhost functions can be used even if
the upstream host1x driver is used. In this case the functions
are re-routed to the corresponding functions in the host1x driver.
Bug 1698151
Change-Id: Id49ea78f925e9b14e54e67743cac42610b3ecbeb
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/823638