Commit Graph

376 Commits

Vinod G
010439ba08 gpu: nvgpu: add HALs to mmu fault descriptors.
MMU fault information for the client and GPC differs
across chips. Add a separate table for each chip and add
HAL functions to access those descriptors (see the sketch
below).

bug 2050564

Change-Id: If15a4757762569d60d4ce1a6a47b8c9a93c11cb0
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704105
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-03 23:57:12 -07:00
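
A minimal sketch of the per-chip descriptor-table-plus-HAL pattern this commit
describes; every type and function name below is hypothetical, not the actual
nvgpu definition.

    /* Hypothetical per-chip MMU fault descriptor table with HAL accessors. */
    struct mmu_fault_desc {
        unsigned int id;
        const char *name;
    };

    static const struct mmu_fault_desc gv100_client_descs[] = {
        { 0, "client_a" },
        { 1, "client_b" },
    };

    struct fault_hal {
        const char *(*client_desc)(unsigned int id);   /* per-chip hook */
    };

    static const char *gv100_fault_client_desc(unsigned int id)
    {
        if (id < sizeof(gv100_client_descs) / sizeof(gv100_client_descs[0]))
            return gv100_client_descs[id].name;
        return "unknown";
    }

    static const struct fault_hal gv100_fault_hal = {
        .client_desc = gv100_fault_client_desc,
    };
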
Seema Khowala
c9463fdbb3 gpu: nvgpu: add rc_type i/p param to gk20a_fifo_recover
Add the rc_types below, to be passed to gk20a_fifo_recover:
MMU_FAULT
PBDMA_FAULT
GR_FAULT
PREEMPT_TIMEOUT
CTXSW_TIMEOUT
RUNLIST_UPDATE_TIMEOUT
FORCE_RESET
SCHED_ERR
This makes it easy to know what triggered recovery (see the sketch below).

Bug 2065990

Change-Id: I202268c5f237be2180b438e8ba027fce684967b6
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662619
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-03 21:43:06 -07:00
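
Sketched below is how the rc_type list above might be modeled; the enum and
function names are assumptions chosen to mirror the commit message, not the
exact nvgpu identifiers.

    struct gpu;   /* opaque driver context for the sketch */

    enum fifo_rc_type {
        RC_TYPE_MMU_FAULT,
        RC_TYPE_PBDMA_FAULT,
        RC_TYPE_GR_FAULT,
        RC_TYPE_PREEMPT_TIMEOUT,
        RC_TYPE_CTXSW_TIMEOUT,
        RC_TYPE_RUNLIST_UPDATE_TIMEOUT,
        RC_TYPE_FORCE_RESET,
        RC_TYPE_SCHED_ERR,
    };

    /* The recovery entry point can then log what triggered it. */
    void fifo_recover(struct gpu *g, unsigned long engine_ids,
                      enum fifo_rc_type rc_type)
    {
        (void)g; (void)engine_ids; (void)rc_type;
        /* ... reset faulted engines and rebuild runlists ... */
    }
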
Seema Khowala
bf03799977 gpu: nvgpu: rename mutex to runlist_lock
Rename mutex to runlist_lock in the fifo_runlist_info_gk20a
struct. This improves code readability.

Bug 2065990
Bug 2043838

Change-Id: I716685e3fad538458181d2a9fe592410401862b9
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662587
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-03 21:42:57 -07:00
Deepak Nibade
fc1ebe57f5 gpu: nvgpu: add HALs to submit and wait for runlist
Add the two new HALs below:
gops.fifo.runlist_hw_submit() to submit a new runlist to hardware
gops.fifo.runlist_wait_pending() to wait until the runlist write is successful

Set the existing API gk20a_fifo_runlist_wait_pending() as the
gops.fifo.runlist_wait_pending HAL.

Add a new API gk20a_fifo_runlist_hw_submit() which submits the runlist to
hardware, and set it as the gops.fifo.runlist_hw_submit HAL (see the sketch
below).

Jira NVGPUT-20

Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700548
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-24 11:10:48 -07:00
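
The wiring of the two runlist HALs might look roughly like this; the ops-struct
layout is a simplification for illustration only.

    #include <stdint.h>

    struct gpu;

    /* Simplified view of the two new hooks (not the real nvgpu layout). */
    struct fifo_ops {
        void (*runlist_hw_submit)(struct gpu *g, uint32_t runlist_id,
                                  uint32_t count, uint32_t buffer_index);
        int  (*runlist_wait_pending)(struct gpu *g, uint32_t runlist_id);
    };

    /* Chip init code would then point the hooks at the gk20a implementations:
     *   ops->runlist_hw_submit    = gk20a_fifo_runlist_hw_submit;
     *   ops->runlist_wait_pending = gk20a_fifo_runlist_wait_pending;
     */
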
Sourab Gupta
abd5f68eef gpu: nvgpu: add usermode submission interface HAL
The patch adds the HAL interfaces for handling usermode
submission, particularly allocating channel-specific usermode
userd. These interfaces are currently implemented only on QNX
and are created accordingly. When Linux adds usermode
submission support, we can revisit them if any further changes
are needed.

Change-Id: I790e0ebdfaedcdc5f6bb624652b1af4549b7b062
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1683392
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-05 05:22:58 -07:00
Sourab Gupta
0b2ea2924b gpu: nvgpu: add gops.fifo.setup_sw
BAR1/userd setup is different for the RM server, so create a common function,
gk20a_init_fifo_setup_sw_common, shared by both paths (see the sketch below).

Jira VQRM-3058

Change-Id: I655b54e21ed5f15dcb8e7b01bd9cd129b35ae7a3
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665691
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 18:54:38 -07:00
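
One plausible shape for sharing setup between the native and RM-server paths;
all function names in this sketch are hypothetical.

    struct gpu;

    /* Hypothetical split: common setup plus a path-specific wrapper. */
    static int fifo_setup_sw_common(struct gpu *g)
    {
        (void)g;
        /* allocate runlists, channel state, userd backing, ... */
        return 0;
    }

    static int fifo_setup_sw_native(struct gpu *g)
    {
        int err = fifo_setup_sw_common(g);
        if (err)
            return err;
        /* native-only BAR1/userd mapping would happen here */
        return 0;
    }

    /* The RM server would install its own setup_sw that skips the BAR1 part. */
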
Richard Zhao
8d8ff9d34e gpu: nvgpu: add gops.fifo.set_error_notifier
RM Server overrides it for handling stall interrupts.

Jira VQRM-3058

Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679709
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 18:54:29 -07:00
Richard Zhao
bcab5c1486 gpu: nvgpu: add gops.fifo.check_tsg_ctxsw_timeout/check_ch_ctxsw_timeout
The RM server acts differently for the ctxsw timeout check: it doesn't check
GP_GET or accumulated timeouts, but notifies the guest and goes to recovery.

Jira VQRM-3058

Change-Id: I428aea34dc517311eb7e73feb556145e916309fb
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 18:54:11 -07:00
Richard Zhao
c5f03db98a gpu: nvgpu: add gops.fifo.ch_abort_clean_up
Channel abort clean-up is only needed by the native and vgpu drivers, not the
RM server, which expects the guest to clean up itself. The RM server should
not set the callback (see the sketch below).

Jira VQRM-3058

Change-Id: I11b49b6f2d51c871e31de16955d487dca82609cb
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679705
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 18:54:02 -07:00
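
The "optional callback" pattern described here, in sketch form (all names are
illustrative):

    struct channel;

    struct ch_ops {
        void (*ch_abort_clean_up)(struct channel *ch);  /* NULL on RM server */
    };

    static void channel_abort(const struct ch_ops *ops, struct channel *ch)
    {
        /* ... disable and preempt the channel ... */
        if (ops->ch_abort_clean_up)      /* native/vgpu only */
            ops->ch_abort_clean_up(ch);
    }
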
Sourab Gupta
5c27ac91fd gpu: nvgpu: make fifo/ch functions called by RM Server global
The patch declares a few channel/fifo HAL functions globally,
as required for QNX code compilation (they are referenced
elsewhere in QNX code). This is required as part of
bringing the nvgpu channel/FIFO HAL into QNX.

Jira VQRM-3058

Change-Id: Ia176535b64de981d2f7ddb20f62015a0da74fd2a
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662411
GVS: Gerrit_Virtual_Submit
Tested-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 18:53:53 -07:00
Thomas Fleury
6c33a010d8 gpu: nvgpu: add placeholder for IPA to PA
Add an __nvgpu_sgl_phys function that can be used to implement IPA
to PA translation in a subsequent change (see the sketch below).
Adapt existing function prototypes to add a pointer to the GPU context,
as we will need to check whether IPA to PA translation is needed.

JIRA EVLR-2442
Bug 200392719

Change-Id: I5a734c958c8277d1bf673c020dafb31263f142d6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673142
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-13 00:04:16 -07:00
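
A sketch of what the placeholder could look like; the struct layout and names
are simplified assumptions, and the real __nvgpu_sgl_phys operates on nvgpu's
own SGL type.

    #include <stdint.h>

    struct gpu;                       /* context, needed later for IPA vs PA */
    struct sgl_entry { uint64_t phys; };

    /* Placeholder: pass the physical address straight through for now.
     * A follow-up change can add IPA->PA translation when the context
     * says the GPU sees intermediate physical addresses. */
    static uint64_t sgl_phys(struct gpu *g, const struct sgl_entry *sgl)
    {
        (void)g;
        return sgl->phys;
    }
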
seshendra Gadagottu
3df619f68a gpu: nvgpu: hal for syncpt_incr_per_release
Create a HAL to indicate syncpt increments per release.
Legacy chips use 2 syncpt increments per release, and gv1xx
onwards uses 1 syncpt increment per release (see the sketch below).

Bug 2066025

Change-Id: I5d6d0a5368ef561f8150fbb7120181f49f6e338b
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669817
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-12 10:40:17 -07:00
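
The per-chip constant behind this HAL could be exposed as simply as the sketch
below (all names assumed):

    /* Legacy chips use 2 syncpt increments per release, gv1xx onwards 1. */
    static unsigned int legacy_syncpt_incr_per_release(void) { return 2; }
    static unsigned int gv11x_syncpt_incr_per_release(void)  { return 1; }

    /* Chip init picks the right one, e.g.:
     *   ops->syncpt_incr_per_release = gv11x_syncpt_incr_per_release;
     */
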
Konsta Holtta
cb6ed949e2 gpu: nvgpu: support per-channel wdt timeouts
Replace the padding in nvgpu_channel_wdt_args with a timeout value in
milliseconds, and add NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT to
signify the existence of this new field. When the new flag is included
in the value of wdt_status, the field is used to set a per-channel
timeout to override the per-GPU default.

Add NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP to disable the long debug
dump when a timed-out channel gets recovered by the watchdog. Printing
the dump to the serial console can easily take several seconds. (Note that
there is a separate NVGPU_TIMEOUT_FLAG_DISABLE_DUMP for ctxsw timeouts
in NVGPU_IOCTL_CHANNEL_SET_TIMEOUT_EX as well.)

The behaviour of NVGPU_IOCTL_CHANNEL_WDT is changed so that either
NVGPU_IOCTL_CHANNEL_ENABLE_WDT or NVGPU_IOCTL_CHANNEL_DISABLE_WDT has to
be set. The old behaviour was that other values were silently ignored.

The usage of the global default debugfs-controlled ch_wdt_timeout_ms is
changed so that its value takes effect only for newly opened channels
instead of in real time. Also, a zero value no longer means that the
watchdog is disabled; there is a separate flag for that after all (flag
handling is sketched below).

gk20a_fifo_recover_tsg used to ignore the value of "verbose" when no
engines were found. Correct this.

Bug 1982826
Bug 1985845
Jira NVGPU-73

Change-Id: Iea6213a646a66cb7c631ed7d7c91d8c2ba8a92a4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1510898
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-09 20:09:44 -08:00
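
A sketch of how the new flags could be interpreted on the kernel side; the
struct layout, flag names, and values below are illustrative, not the exact
uapi definitions.

    #include <stdint.h>

    struct channel_wdt_args {
        uint32_t wdt_status;    /* enable/disable plus optional flags */
        uint32_t timeout_ms;    /* used only if SET_TIMEOUT flag is present */
    };

    #define WDT_ENABLE            (1u << 0)
    #define WDT_DISABLE           (1u << 1)
    #define WDT_FLAG_SET_TIMEOUT  (1u << 2)
    #define WDT_FLAG_DISABLE_DUMP (1u << 3)

    struct channel_wdt_state {
        int enabled;
        int dump_on_timeout;
        uint32_t timeout_ms;
    };

    static int channel_set_wdt(struct channel_wdt_state *ch,
                               const struct channel_wdt_args *args,
                               uint32_t default_timeout_ms)
    {
        uint32_t s = args->wdt_status;

        if (!(s & WDT_ENABLE) && !(s & WDT_DISABLE))
            return -1;                  /* one of the two is now required */

        ch->enabled = (s & WDT_ENABLE) != 0;
        ch->dump_on_timeout = !(s & WDT_FLAG_DISABLE_DUMP);
        ch->timeout_ms = (s & WDT_FLAG_SET_TIMEOUT) ? args->timeout_ms
                                                    : default_timeout_ms;
        return 0;
    }
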
Alex Waterman
418f31cd91 gpu: nvgpu: Enable IO coherency on GV100
This reverts commit 848af2ce6d.

This is a revert of a revert, etc, etc. It re-enables IO coherence again.

JIRA EVLR-2333

Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669722
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-07 18:04:41 -08:00
Timo Alho
848af2ce6d Revert "Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"""
This reverts commit 89fbf39a05.

Bug 2075315

Change-Id: Id34a0376be5160b164931926ec600f77edf69667
Signed-off-by: Timo Alho <talho@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1668487
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
2018-03-05 08:39:57 -08:00
Alex Waterman
89fbf39a05 Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working""
This reverts commit 5a35a95654.

JIRA EVLR-2333

Change-Id: I923c32496c343d39d34f6d406c38a9f6ce7dc6e0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667167
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-02 22:10:14 -08:00
Deepak Nibade
37a15ce818 gpu: nvgpu: skip channel abort for deferred reset
In case deferred_reset_pending is set in gk20a_fifo_handle_mmu_fault() and in
gv11b_fifo_teardown_ch_tsg(), we skip resetting the engines and skip setting
the error notifier.
Then we call gk20a_channel_abort()/gk20a_fifo_abort_tsg(), which aborts the
channels and resets the syncpoint values to release all the waiters.

But since we don't set the error notifier, this could lead the user to assume a
successful submission without any error.

To fix this, disable the channel/TSG in case deferred_reset_pending is set and
skip the calls to gk20a_channel_abort()/gk20a_fifo_abort_tsg() (see the sketch
below).

Note that we finally abort the channel when it is being closed.

Bug 200363077

Change-Id: Ia48ca369701c14d1913d8f7b66ed466b7b840224
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1664319
(cherry picked from commit ac40d082e8)
Reviewed-on: https://git-master.nvidia.com/r/1666445
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 13:53:31 -08:00
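
The control flow of the fix, in schematic form; the helper names here are
hypothetical stand-ins, not the actual nvgpu functions.

    struct channel;

    void channel_disable(struct channel *ch);
    void channel_set_error_notifier(struct channel *ch);
    void channel_abort(struct channel *ch);   /* resets syncpts, frees waiters */

    void teardown_faulted_channel(struct channel *ch, int deferred_reset_pending)
    {
        if (deferred_reset_pending) {
            /* Engines are not reset yet: only disable the channel and skip
             * the abort, so waiters are not released without an error
             * notifier. The real abort happens when the channel is closed. */
            channel_disable(ch);
            return;
        }

        channel_set_error_notifier(ch);
        channel_abort(ch);
    }
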
Alex Waterman
5a35a95654 Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.

Bug 2070609

Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
2018-02-28 13:49:22 -08:00
Alex Waterman
1170687c33 gpu: nvgpu: Use coherent aperture flag
When using a coherent DMA API we must make sure to program
any aperture fields with the coherent aperture setting. To
do this the nvgpu_aperture_mask() function was modified to
take a third aperture mask argument, a coherent setting, so
that code can use this function to generate coherent aperture
settings (see the sketch below).

The aperture choice is somewhat tricky: the default version
of this function uses the state of the DMA API to determine
what aperture to use for SYSMEM: either coherent or
non-coherent internally. Thus a kernel user need only specify
the normal nvgpu_mem struct and the correct mask should be
chosen. Since many nvgpu_mem structs are not created
directly from the DMA API wrapper, it's easier to translate
SYSMEM to SYSMEM_COH after creation.

However, the GMMU mapping code will encounter buffers from
userspace with different coherency attributes than the DMA
API. Thus __nvgpu_aperture_mask() really respects the
aperture setting passed in regardless of the DMA API state.
This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT
since this is either passed in from userspace or set by the
kernel when using coherent DMA. The aperture field in attrs
is upgraded to coh if this flag is set.

This change also adds a coherent sysmem mask everywhere that
it can. There are a couple of places that do not have a coherent
register field defined yet. These need to eventually be
defined and added.

Lastly the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code since this function has
no Linux dependencies.

Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.

JIRA EVLR-2333

Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 16:03:43 -08:00
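
What a three-way aperture mask helper can look like in sketch form; this is a
simplification and does not reproduce the real nvgpu_aperture_mask() /
__nvgpu_aperture_mask() signatures.

    #include <stdint.h>

    enum aperture {
        APERTURE_INVALID,
        APERTURE_SYSMEM,
        APERTURE_SYSMEM_COH,
        APERTURE_VIDMEM,
    };

    static uint32_t aperture_mask(enum aperture ap,
                                  uint32_t sysmem_mask,
                                  uint32_t sysmem_coh_mask,
                                  uint32_t vidmem_mask)
    {
        switch (ap) {
        case APERTURE_SYSMEM:     return sysmem_mask;
        case APERTURE_SYSMEM_COH: return sysmem_coh_mask;  /* coherent DMA /
                                                              IO-coherent maps */
        case APERTURE_VIDMEM:     return vidmem_mask;
        default:                  return 0;
        }
    }
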
Alex Waterman
71f53272b2 gpu: nvgpu: FIFO sched fixes
Miscellaneous fixes for the sched code:

1. Make sure get_addr() on an SGL respects the use phys flag since
   the runlist needs physical addresses when NVLINK is in use.
2. Ensure the runlist is contiguous. Since the runlist memory is
   not virtually addressed the buffer must be physically contiguous.
3. Use all 64 bits of the runlist address in the runlist base addr
   register (and related fields).

JIRA EVLR-2333

Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654667
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 16:03:34 -08:00
Alex Waterman
b5d3cf444e gpu: nvgpu: Cleanup unused variables
There are numerous places where variables are assigned to but then
never used. This patch cleans up all these unused variables and
in some cases simplifies surrounding logic.

Also delete unused header includes and add necessary header includes.

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ice9ec2a0e97f262d0dcfebe22f83208dbea569d9
Reviewed-on: https://git-master.nvidia.com/r/1662548
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 21:53:38 -08:00
Seema Khowala
ffa5231d2c gpu: nvgpu: runlist info mutex not needed for runlist_state
The runlist_info mutex does not need to be acquired for the
runlist being enabled or disabled in fifo_sched_disable_r.

Bug 2043838

Change-Id: Ia9839ab7effbe7daf353c3a54f25a2b4914af5e8
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1630345
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-11 17:37:55 -08:00
David Nieto
1f71f475e2 DNI: gpu: nvgpu: Increase GV100 ctxsw timeouts
During bringup, and before nvlink is up, GV100 on the DDPX platform operates
with a very, very slow sysmem link. In order to get the sysmem test to pass
it is necessary to significantly increase most timeouts by an order of
magnitude.

Bug 2040544

Change-Id: I26858afde4ae80c70f86b47cfff674b6b00b5bf8
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1627417
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-05 13:54:37 -08:00
Terje Bergstrom
86691b59c6 gpu: nvgpu: Remove bare channel scheduling
Remove scheduling IOCTL implementations for bare channels. Also
removes code that constructs bare channels in runlist.

Bug 1842197

Change-Id: I6e833b38e24a2f2c45c7993edf939d365eaf41f0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1627326
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-02 13:53:09 -08:00
Terje Bergstrom
f19f22fcc8 gpu: nvgpu: Remove support for channel events
Remove support for events for bare channels. All users have already
moved to TSGs and TSG events.

Bug 1842197

Change-Id: Ib3ff68134ad9515ee761d0f0e19a3150a0b744ab
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1618906
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-12-28 10:01:32 -08:00
Deepak Nibade
6ec7da5eba gpu: nvgpu: use nvgpu list APIs instead of linux APIs
Use the nvgpu-specific list API nvgpu_list_for_each_entry() instead of calling
the Linux-specific list API list_for_each_entry() (a toy illustration of the
idea follows this entry).

Jira NVGPU-444

Change-Id: I3c1fd495ed9e8bebab1f23b6769944373b46059b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1612442
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-12-08 11:58:07 -08:00
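
A toy illustration of the kind of driver-owned intrusive-list walker such an
abstraction provides; this is not the real nvgpu_list_for_each_entry()
definition.

    #include <stddef.h>

    struct list_node { struct list_node *next, *prev; };

    /* Recover the containing struct from an embedded node. */
    #define node_entry(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    /* Walk every entry on a circular intrusive list headed by 'head'. */
    #define toy_list_for_each_entry(pos, head, type, member)            \
        for ((pos) = node_entry((head)->next, type, member);            \
             &(pos)->member != (head);                                  \
             (pos) = node_entry((pos)->member.next, type, member))
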
Deepak Nibade
830d3f10ca gpu: nvgpu: cleanup uapi header includes
With the recent rework in nvgpu most of the <uapi/linux/nvgpu.h> includes
are no longer needed, so remove them.

Remove the use of NVGPU_DBG_GPU_REG_OP_* in gk20a/gr_gk20a.c and use the
common definitions instead.

Remove the use of NVGPU_ALLOC_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE in
gp10b/fifo_gp10b.c by defining a new common flag,
NVGPU_GPFIFO_FLAGS_REPLAYABLE_FAULTS_ENABLE, and parsing it in the API
nvgpu_gpfifo_user_flags_to_common_flags().

Jira NVGPU-363

Change-Id: I8e653275ea3f443f24be7284d54f2115636aba3f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1606108
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-28 09:47:03 -08:00
Deepak Nibade
c6b9177cff gpu: nvgpu: define error_notifiers in common code
All the Linux-specific error_notifier codes are defined in the Linux-specific
header file <uapi/linux/nvgpu.h> and used throughout the common driver.

But since they are defined in a Linux-specific file, we need to move all the
uses of those error_notifiers into Linux-specific code only.

Hence define new error_notifiers in include/nvgpu/error_notifier.h and
use them in the common code.

Add a new API nvgpu_error_notifier_to_channel_notifier() to convert a common
error_notifier of the form NVGPU_ERR_NOTIFIER_* to a Linux-specific error
notifier of the form NVGPU_CHANNEL_* (see the sketch below).

Any future additions to error notifiers require updating both forms of
error notifiers.

Move all error-notifier-related metadata from channel_gk20a (common code)
to the Linux-specific structure nvgpu_channel_linux.
Update all accesses to this data to use the new structure instead of
channel_gk20a.

Move and rename the APIs below to a Linux-specific file and declare them
in error_notifier.h:
nvgpu_set_error_notifier_locked()
nvgpu_set_error_notifier()
nvgpu_is_error_notifier_set()

Add the new API below and use it in fifo_vgpu.c:
nvgpu_set_error_notifier_if_empty()

Include <nvgpu/error_notifier.h> wherever the new error_notifier codes are used.

NVGPU-426

Change-Id: Iaa5bfc150e6e9ec17d797d445c2d6407afe9f4bd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593361
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-27 09:23:11 -08:00
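
The translation helper might be a plain switch like the sketch below; the enum
entries and the numeric uapi values are placeholders, not the real definitions.

    /* Placeholder common codes and placeholder uapi values. */
    enum common_err_notifier {
        ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT,
        ERR_NOTIFIER_GR_SEMAPHORE_TIMEOUT,
    };

    static int error_notifier_to_channel_notifier(enum common_err_notifier err)
    {
        switch (err) {
        case ERR_NOTIFIER_FIFO_ERROR_MMU_ERR_FLT: return 31; /* placeholder */
        case ERR_NOTIFIER_GR_SEMAPHORE_TIMEOUT:   return 24; /* placeholder */
        default:                                  return -1;
        }
    }
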
Deepak Nibade
3ff666c4b9 gpu: nvgpu: deprecate TSG/CHANNEL_SET_PRIORITY IOCTLs
The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated, and user space should use
a combination of timeslice and interleave levels to decide priority.

Hence remove the IOCTLs and all corresponding APIs.

Jira NVGPU-393

Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 08:46:09 -08:00
Seema Khowala
592a31fd92 gpu: nvgpu: add rc input param to gk20a_fifo_handle_pbdma_intr
Add a new parameter rc to gk20a_fifo_handle_pbdma_intr so
that it can be called to handle a pbdma intr without doing
teardown. This is needed for t19x during polling of pbdma
preempt.

Bug 200277163
Bug 1945121

Change-Id: Ide0d3b6ed8c0862cb5332d112926b6933abd0815
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1584734
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 02:05:24 -08:00
Seema Khowala
ef6a296f52 gpu: nvgpu: get intr mask for an active_engine_id
This is needed for t19x during engine preempt-done polling.
E.g. a copy engine (CE) stall interrupt should not prevent GR
from finishing preemption. In order to check if the current
stall interrupt is valid for the engine being polled for
preemption completion, a function to provide the engine
intr mask is needed. With this, the polling code can make sure
there are no stall interrupts pending for the engine being
polled for preemption done. If stall interrupts
are pending for an engine, preemption will never finish.

Bug 200277163
Bug 1945121

Change-Id: Ie1ccac52c3e8d453a49084e195f2e7eaafb8f057
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1584065
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-15 02:05:20 -08:00
Terje Bergstrom
fd2cac59f3 gpu: nvgpu: Include UAPI explicitly
Add explicit #includes for <uapi/linux/nvgpu.h> for source code files
that depend on it.

JIRA NVGPU-259

Change-Id: I717d5f1493423fd3a7a34b6dd3380d33a9307a09
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-13 18:56:30 -08:00
Deepak Nibade
3cb65f57d5 gpu: nvgpu: define runlist level in common code
All the runlist levels NVGPU_RUNLIST_INTERLEAVE_LEVEL_* are declared in the
Linux-specific uapi header and used in common code.
But since common code should be Linux-independent, move these uses out of
common code.

Define new runlist levels NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* in common code
and use them wherever required.

Add a new API nvgpu_get_common_runlist_level() to get the common runlist level
of the form NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* from the Linux-specific
runlist level of the form NVGPU_RUNLIST_INTERLEAVE_LEVEL_*.

Jira NVGPU-259

Change-Id: Ic19239f0f8275683d5d1b981df530acd90e6dfbb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594327
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-08 09:09:54 -08:00
Deepak Nibade
e1120727e7 Revert "gpu: nvgpu: WAR: unbind_channel_verify_status ret val"
This reverts commit a8643b3a99.

A proper resolution is implemented in user space (nvrm_gpu) to idle all
channels of a TSG before unbinding a channel from it.
With that we should not see the NEXT bit set on a channel while unbinding.

Hence revert the WAR and again return an error if we find the NEXT bit set on
the channel. Also restore the error print.

Bug 200327095

Change-Id: Id58e00c4602a4a5c9f65e5ee1329b606f45993d7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1585991
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-02 09:25:29 -07:00
Alex Waterman
31e594befe gpu: nvgpu: Split ctxsw_trace API into non-Linux component
Split the ctxsw trace "core" API code into <nvgpu/ctxsw_trace.h>. This
is not perfect though, since there are some Linuxisms present in the HAL,
and as such that code has to be hidden behind the ctxsw tracing CONFIG. But
this patch should work for QNX such that it will allow the code to
build as long as CONFIG_GK20A_CTXSW_TRACE is not set.

Also fix the copyright notice in the ctxsw code present under
common/linux to be GPL.

JIRA NVGPU-287

Change-Id: I94715864caf335b7220185492e4629d713b025e0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589429
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-01 20:15:47 -07:00
Seema Khowala
7e59e0b09b gpu: nvgpu: save act_eng_bitmask in runlist_info
Issue:
Currently a bitmask of engine indices is being saved.
This gives wrong active engine ids for a given runlist,
and s/w will end up checking/polling the wrong engine_status
registers, as these registers are indexed by active
engine ids. Also reset_eng_bitmask will end up
holding the wrong value for the active engine ids to be reset.

Details of the runlists serving engine ids for gv11b:
runlist id 0: gr = 0, grcopy0 = 2, grcopy1 = 3
runlist id 1: async ce = 1

Incorrect values:
init_runlist:705  [DBG]  runlist 0 : eng bitmask 7 (eng 0, 1, 2)
init_runlist:705  [DBG]  runlist 1 : eng bitmask 8 (eng 3)

Fix:
Save the bitmask of active engine ids in runlist info.
Correct values:
init_runlist:705  [DBG]  runlist 0 : eng bitmask d (eng 0, 2, 3)
init_runlist:705  [DBG]  runlist 1 : eng bitmask 2 (eng 1)

Bug 200277163
Bug 1945121

Change-Id: Ia299aa0881c4a258080bb0daa3a542fef0d94e4f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1584066
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-01 15:26:28 -07:00
Seema Khowala
5d260a24a5 gpu: nvgpu: halt gr pipe & gr_reset do not work on fmodel
Do not halt the gr pipe as it will hang the simulator.
Also, during ch/tsg teardown, gr_reset without halting
the gr pipe crashes the simulator.

Bug 1958308

Change-Id: I4036b4a4999932f05a0f292d4fc51de21d3a030a
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566575
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-11-01 15:26:11 -07:00
Alex Waterman
4d2d890c01 gpu: nvgpu: Move ctxsw_trace_gk20a.c to common/linux
Migrate ctxsw_trace_gk20a.c to common/linux/ctxsw_trace.c. This
has been done because the ctxsw tracing code is currently too
tightly tied to the Linux OS due to usage of a couple of system calls:

  - poll()
  - mmap()

And general Linux driver framework code. As a result, pulling the
logic out of the FECS tracing code is simply too large a scope for
the time being.

Instead the code was just copied as much as possible. The HAL ops
for the FECS code were hidden behind the FECS tracing config so
that vm_area_struct is not used when QNX does not define said
config. All other non-HAL functions called by the FECS ctxsw
tracing code have now also been hidden by this config. This is not
pretty but for the time being it seems like the way to go.

JIRA NVGPU-287

Change-Id: Ib880ab237f4abd330dc66998692c86c4507149c2
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1586547
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-29 11:02:15 -07:00
Deepak Nibade
3afb2a88d5 gpu: nvgpu: skip channel status verification if TSG has timed out
In gk20a_fifo_tsg_unbind_channel(), we always verify channel status before
unbinding a channel from the TSG.
But in case the TSG has already timed out, we never re-enable it, so it does
not make sense to inspect channel status anyway.

So skip channel status verification in case the TSG has timed out.

Bug 200327095

Change-Id: Iccf601271290643c235c3f2656201549210a6886
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1586015
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-29 11:01:53 -07:00
Deepak Nibade
0d4272b657 gpu: nvgpu: don't re-enable TSG if timed out
In gk20a_fifo_tsg_unbind_channel(), we disable/preempt the TSG, unbind one
channel from the TSG, and then re-enable the rest of the channels in the TSG.

But it is possible that the TSG has already timed out due to some error and is
already disabled.
If we re-enable all channels in such a case, it can cause random issues right
after re-enabling the faulted channel.

Hence do not re-enable the TSG if it has timed out.

Since we disable all channels of a TSG if one channel encounters a fatal error,
it is safe to assume that the TSG has timed out if one channel has timed out.

Bug 1958308
Bug 200327095

Change-Id: I958ca6a2b408ff1338f2e551a79c072f1e203eda
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1585421
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-26 01:06:49 -07:00
Thomas Fleury
539c8bff4b gpu: nvgpu: use full system barrier in BAR1 test
The BAR1 test could occasionally fail when doing a CPU write through userd
and then reading back through BAR1. This is because nvgpu_smp_mb() only
guarantees ordering between cores.
Replace it with nvgpu_mb() to ensure the write will be visible to all
bus masters in the system (see the sketch below).

JIRA EVLR-1959
Bug 200352099

Change-Id: Id002e73d135e0805fca2f153a6de77e210a7b226
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1582928
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-24 22:07:32 -07:00
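
The ordering problem in sketch form; full_system_barrier() stands in for a
full barrier such as nvgpu_mb(), and the surrounding names are made up for
illustration.

    void full_system_barrier(void);   /* stands in for nvgpu_mb() */

    unsigned int bar1_write_then_readback(volatile unsigned int *userd_cpu_va,
                                          volatile unsigned int *bar1_va,
                                          unsigned int pattern)
    {
        *userd_cpu_va = pattern;
        /* An SMP-only barrier (nvgpu_smp_mb()) orders the store against
         * other CPU cores only; a full system barrier is needed so the
         * write is visible to every bus master before the BAR1 read below. */
        full_system_barrier();
        return *bar1_va;
    }
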
Alex Waterman
2a285d0607 gpu: nvgpu: Cleanup generic MM code in gk20a/mm_gk20a.c
Move much of the remaining generic MM code to a new common location:
common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
mostly consists of init and cleanup code to handle the common MM
data structures like the VIDMEM code, address spaces for various
engines, etc.

A few more indepth changes were made as well.

1. alloc_inst_block() has been added to the MM HAL. This used to be
   defined directly in the gk20a code but it used a register. As a
   result, if this register hypothetically changes in the future,
   it would need to become a HAL anyway. This patch preempts that
   and for now just defines all HALs to use the gk20a version.

2. Rename as much as possible: global functions are, for the most
   part, prepended with nvgpu (there are a few exceptions which I
   have yet to decide what to do with). Functions that are static
   are renamed to be as consistent with their functionality as
   possible since in some cases function effect and function name
   have diverged.

JIRA NVGPU-30

Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574499
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-24 15:16:49 -07:00
Terje Bergstrom
12e23c6aad gpu: nvgpu: Move function to query rl interleave
The function to query the interleave name depends on an IOCTL flag definition.
Move that code to fifo_gk20a.c to remove the Linux dependency in the header.

Change-Id: I6d6a80e550bf30973b2be09febc2347890b77d25
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1577249
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
2017-10-23 16:45:49 -07:00
Terje Bergstrom
e039dcbc9d gpu: nvgpu: Use nvgpu_rwsem as TSG channel lock
Use abstract nvgpu_rwsem as TSG channel list lock instead of the Linux
specific rw_semaphore.

JIRA NVGPU-259

Change-Id: I41a38b29d4651838b1962d69f102af1384e12cb6
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1579935
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-17 14:05:19 -07:00
seshendra Gadagottu
3d343c9eea gpu: nvgpu: enhance pbdma debug info
Enhanced the pbdma error output to print the pbdma interrupt
error.

Generated following hw definitions to dump relevant data:
pbdma_gp_shadow_0_r
pbdma_gp_shadow_1_r

Updated gk20a_dump_pbdma_status to dump this additional
info:
pbdma_gp_put_r
pbdma_gp_get_r
pbdma_gp_shadow_0_r
pbdma_gp_shadow_1_r

Bug 2003671

Change-Id: Iaa75d936e00470a2b8d1151f60dbeb741b3f9bce
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1572182
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 15:20:07 -07:00
Seema Khowala
ceaab0595c gpu: nvgpu: ctxsw_trace_tsg/channel_reset call order
For a non-fake mmu fault, both tsg and ch pointers
could be valid. If the tsg pointer is non-null, issue the
ctxsw_trace for the tsg instead of the channel only.

Change-Id: I161c40e8d43c7ae4d953ef4768ad75d4e993c87e
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1577915
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 13:43:22 -07:00
Seema Khowala
a8643b3a99 gpu: nvgpu: WAR: unbind_channel_verify_status ret val
Do not return an error if the channel to be removed has
NEXT set. This is a WAR until a proper fix is
identified and implemented.

Bug 200327095

Change-Id: Ia77f3b834e8e577ac2dad8281f1dd562079adcef
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1577133
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 13:43:10 -07:00
Deepak Nibade
6de92de60b gpu: nvgpu: fix channel status verify sequence
While unbinding a channel from a TSG, we first disable all the channels,
then examine the status of the channel being removed in
gk20a_fifo_tsg_unbind_channel_verify_status(), and if this API fails we
re-enable all the channels and kill the whole TSG.

And in gk20a_fifo_tsg_unbind_channel_verify_status() we first check ctx_reload
and fault status and then check NEXT status.
If the channel has NEXT set we bail out.

But since we have already changed the TSG ctx_reload status, re-enabling all
channels in the TSG might cause issues.

Hence fix this by correcting the sequence so that we first ensure that NEXT is
not set on the channel and only then alter the status.

Bug 200327095

Change-Id: I4f0786bc507fad5462d4cdd8d0ca91ea611ee3b5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1575905
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2017-10-13 13:42:58 -07:00
Konsta Holtta
9b20f2a15e gpu: nvgpu: report GR_SEMAPHORE_TIMEOUT on PBDMA sema timeout
As with GR's semaphore acquires that timeout, report
NVGPU_CHANNEL_GR_SEMAPHORE_TIMEOUT to userspace in the error notifier
also when a semaphore acquire timeout interrupt is received from PBDMA.
This timeout is used when the kernel watchdog timer is enabled.

Bug 1782480

Change-Id: I1ceb8632548c5e89febb2b80a5850116a2d4b670
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574293
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
2017-10-10 16:26:53 -07:00
Deepak Nibade
33f475d6d8 gpu: nvgpu: kill TSG if channel has NEXT set while closing
Currently, if a channel has the NEXT bit set while closing the channel, we
just print an error and continue the channel unbind sequence from the TSG.

But since a channel with NEXT set is active, killing it can potentially corrupt
the TSG context and cause unpredictable errors on the remaining channels/TSG.

Hence fix this by killing the whole TSG context if the channel being closed
has the NEXT bit set.

If gk20a_fifo_tsg_unbind_channel() returns an error, kill the TSG;
otherwise continue with the channel unbind sequence.

Bug 200327095

Change-Id: I2abf1a3db8ba6f105b6ca86e78006c7b2a7726cc
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1568566
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
2017-10-04 03:37:17 -07:00