All the Linux-specific error_notifier codes are defined in the Linux-specific
header file <uapi/linux/nvgpu.h> but are used throughout the common driver
code. Since they are defined in a Linux-specific file, all uses of those
error_notifiers must be confined to Linux-specific code.
Hence define new error_notifiers in include/nvgpu/error_notifier.h and
use them in the common code.
Add a new API nvgpu_error_notifier_to_channel_notifier() to convert a
common error_notifier of the form NVGPU_ERR_NOTIFIER_* to a Linux-specific
error notifier of the form NVGPU_CHANNEL_*.
Any future addition of an error notifier requires updating both forms.
Move all error notifier related metadata from channel_gk20a (common code)
to the Linux-specific structure nvgpu_channel_linux, and update all
accesses to this data to go through the new structure instead of
channel_gk20a.
Move and rename the below APIs to a Linux-specific file and declare them
in error_notifier.h:
nvgpu_set_error_notifier_locked()
nvgpu_set_error_notifier()
nvgpu_is_error_notifier_set()
Add the below new API and use it in fifo_vgpu.c:
nvgpu_set_error_notifier_if_empty()
Include <nvgpu/error_notifier.h> wherever new error_notifier codes are used
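A minimal sketch of the conversion helper, with an illustrative subset
of cases (the real switch must cover every NVGPU_ERR_NOTIFIER_* code):

    u32 nvgpu_error_notifier_to_channel_notifier(u32 error_notifier)
    {
        switch (error_notifier) {
        case NVGPU_ERR_NOTIFIER_FIFO_ERROR_IDLE_TIMEOUT:
            return NVGPU_CHANNEL_FIFO_ERROR_IDLE_TIMEOUT;
        case NVGPU_ERR_NOTIFIER_GR_ERROR_SW_NOTIFY:
            return NVGPU_CHANNEL_GR_ERROR_SW_NOTIFY;
        case NVGPU_ERR_NOTIFIER_PBDMA_ERROR:
            return NVGPU_CHANNEL_PBDMA_ERROR;
        default:
            /* unhandled code: the two lists are out of sync */
            WARN_ON(1);
            return error_notifier;
        }
    }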
NVGPU-426
Change-Id: Iaa5bfc150e6e9ec17d797d445c2d6407afe9f4bd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1593361
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For nvhost_sync_create_fence, num_pts corresponds to the number of
syncpoints in the array given to it, and the wrapper
nvgpu_nvhost_sync_create_fence only supports one syncpoint at a time.
Use 1 explicitly and make it impossible for the caller of this wrapper
to use something else by mistake.
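A sketch of the wrapper after this change (struct and parameter names
abbreviated from the nvhost interface; treat details as illustrative):

    struct sync_fence *nvgpu_nvhost_sync_create_fence(
            struct nvgpu_nvhost_dev *nvhost_dev,
            u32 id, u32 thresh, const char *name)
    {
        struct nvhost_ctrl_sync_fence_info pt = {
            .id = id,
            .thresh = thresh,
        };

        /* num_pts is pinned to 1: exactly one syncpoint per fence */
        return nvhost_sync_create_fence(nvhost_dev->host1x_pdev,
                        &pt, 1, name);
    }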
Change-Id: I2497c1dd4fed0906e3bb07e8f5ddd3a9346cb381
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1604339
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This CL is part of a phased set of changes to support NO LSPMU.
Add new pmu ops:
- setup_apertures
- update_lspmu_cmdline_args
These are called from the pmu op init_falcon_setup_hw.
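Roughly, the intended call flow inside init_falcon_setup_hw (wiring
illustrative; the ops are only invoked when the chip hooks them up):

    if (g->ops.pmu.setup_apertures != NULL)
        g->ops.pmu.setup_apertures(g);
    if (g->ops.pmu.update_lspmu_cmdline_args != NULL)
        g->ops.pmu.update_lspmu_cmdline_args(g);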
JIRA NVGPU-296
Change-Id: Idbcec5c93ca3150df5c9fb81d65b9fce778cecb8
Signed-off-by: Supriya <ssharatkumar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589004
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch removes the dependency on the header file "uapi/linux/nvgpu.h"
for regops_gk20a.c. The original structure and definitions in
uapi/linux/nvgpu.h are maintained for the userspace library libnvrm_gpu.
The following changes are made in this patch.
1) Defined common versions of the NVGPU_DBG_GPU_REG_OP* definitions inside
regops_gk20a.h.
2) Defined a common version of struct nvgpu_dbg_gpu_reg_op inside
regops_gk20a.h, naming it struct nvgpu_dbg_reg_op.
3) Constructed APIs to convert the NVGPU_DBG_GPU_REG_OP* definitions from
linux versions to common and vice versa.
4) Constructed APIs to convert from struct nvgpu_dbg_gpu_reg_op to
struct nvgpu_dbg_reg_op and vice versa.
5) The ioctl handler nvgpu_ioctl_channel_reg_ops first copies the data
from userspace into local storage based on struct nvgpu_dbg_gpu_reg_op,
converts it into struct nvgpu_dbg_reg_op using the APIs above, and after
executing the regops handler passes the data back to userspace by
copying it from struct nvgpu_dbg_reg_op back into struct
nvgpu_dbg_gpu_reg_op (see the sketch below).
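A condensed sketch of that handler flow (converter and handler names
follow the pattern above but are illustrative; error handling elided):

    /* in from userspace: Linux UAPI layout */
    copy_from_user(linux_ops, (void __user *)(uintptr_t)args->ops,
               ops_size);
    nvgpu_get_regops_data_linux(linux_ops, common_ops, args->num_ops);

    /* run the regops on the common representation */
    err = exec_regops_gk20a(dbg_s, common_ops, args->num_ops);

    /* out to userspace: convert back and copy */
    nvgpu_get_regops_data_common(common_ops, linux_ops, args->num_ops);
    copy_to_user((void __user *)(uintptr_t)args->ops, linux_ops,
             ops_size);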
JIRA NVGPU-417
Change-Id: I23bad48d2967a629a6308c7484f3741a89db6537
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596972
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the implementation of channel job update callbacks, which is based
on Linux-specific work_struct usage, to Linux-specific code.
This requires a bit of extra work to allocate OS-specific priv data for
channels, which is also done in this patch. The priv data will be used
more as more OS-specific features are moved.
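A sketch of the OS-private data shape this introduces (the work_struct
comes from this patch; the back-pointer and exact field names are
illustrative):

    /* os/linux: per-channel OS-specific data, allocated at channel
     * setup and reachable from the common channel_gk20a */
    struct nvgpu_channel_linux {
        struct channel_gk20a *ch;          /* back-pointer */
        struct work_struct update_fn_work; /* job update callback */
    };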
Jira NVGPU-259
Change-Id: I24bc0148a827f375b56a1c96044685affc2d1e8c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589321
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
If the associated buffer is not compressed, it would be invalid to call
the cde swizzler shader with zero lines. The fences in
PREPARE_COMPRESSIBLE_READ still need to be managed, so just do a dummy
submit with zero entries when the line count for the buffer is zero.
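Schematically (the submit helper and field names are hypothetical
stand-ins for the driver's gpfifo submit path):

    /* lines == 0: nothing to swizzle, but PREPARE_COMPRESSIBLE_READ's
     * pre/post fences must still be produced and consumed, so submit
     * zero gpfifo entries */
    if (param->lines == 0)
        return cde_dummy_submit(ch, NULL /* entries */, 0 /* num */,
                    flags, fence_in, fence_out);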
Bug 1856088
Change-Id: Ia68c2ffff21e5e8077d5c550b0ca44090f88bf80
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1590055
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-New fuse ops are added to set the NVGPU_SEC_PRIVSECURITY
and NVGPU_SEC_SECUREGPCCS bits in g->enabled_flags
during hal initialization
-For igpu non-simulation platforms, fuses are read
to decide whether the gpu should be allowed to boot
--Do not boot the gpu if priv_sec_en is set but wpr_enabled
is not set to 1 or vpr_auto_fetch_disable is not set to 0
--With priv_sec_en set, all falcons have to boot
in LS mode, and this needs wpr_enabled set to 1
AND vpr_auto_fetch_disable set to 0. In this case
the gmmu pulls wpr and vpr settings from the tegra mc
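The boot gating amounts to a check of this shape (variable names
illustrative; the values come from the fuses/mc registers):

    /* with priv_sec_en, LS falcon boot needs WPR enabled and VPR
     * auto-fetch not disabled; otherwise refuse to boot the gpu */
    if (priv_sec_en && (wpr_enabled != 1 || vpr_auto_fetch_disable != 0))
        return -EINVAL;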
Bug 2018223
Change-Id: Iceaa1b0b3214e9a3d6cef5d77a82e034302f748b
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595454
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Added nvgpu_flcn_mem_scrub_wait() to the
falcon interface layer to poll the imem/dmem
scrubbing status for completion, for up to 1 msec
with a status check interval of 10 usec
-Called nvgpu_flcn_mem_scrub_wait() in the
falcon reset interface to check the scrubbing
status upon falcon/engine reset
-Replaced the mem scrubbing wait code in
pmu_enable_hw() with a call to
nvgpu_flcn_mem_scrub_wait()
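A sketch of the poll loop using the driver's timeout helpers (status
accessor name illustrative; 100 retries x 10 usec = the 1 msec budget):

    int nvgpu_flcn_mem_scrub_wait(struct nvgpu_falcon *flcn)
    {
        struct nvgpu_timeout timeout;

        nvgpu_timeout_init(flcn->g, &timeout, 100,
                   NVGPU_TIMER_RETRY_TIMER);
        do {
            if (nvgpu_flcn_get_mem_scrubbing_status(flcn))
                return 0; /* imem/dmem scrub complete */
            nvgpu_udelay(10);
        } while (!nvgpu_timeout_expired(&timeout));

        return -ETIMEDOUT;
    }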
Bug 200346134
Change-Id: Iac68e24dea466f6dd5facc371947269db64d238d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598644
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created nvgpu_kill_task_pg_init() to set the
pmu state to PMU_STATE_EXIT, stop the thread,
and poll to confirm that the thread stopped.
- Check the PMU/SEC2 ACR secure boot completion
status & kill the pg init thread if ACR boot
exits with an error, i.e. fails to validate &
boot LS-PMU.
- Set the pmu state to PMU_STATE_OFF after the
thread kill during ACR boot failure.
Issue: the pg init task blocks if PMU boot fails,
causing the kernel to report "task nvgpu_pg_init_g:2120
blocked for more than 120 seconds".
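A sketch of the kill sequence (thread/state helper names follow the
nvgpu utility layer; details illustrative):

    void nvgpu_kill_task_pg_init(struct gk20a *g)
    {
        struct nvgpu_pmu *pmu = &g->pmu;

        /* ask the state machine task to exit */
        nvgpu_pmu_state_change(g, PMU_STATE_EXIT, true);

        if (nvgpu_thread_is_running(&pmu->pg_init.state_task)) {
            nvgpu_thread_stop(&pmu->pg_init.state_task);
            /* poll until the thread has actually stopped */
            while (nvgpu_thread_is_running(&pmu->pg_init.state_task))
                nvgpu_udelay(2);
        }
    }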
Bug 200346134
Change-Id: I5270426080dcd628ccca4df798005294c19767a0
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1582593
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a translation layer to convert from the NVGPU_AS_* flags to the
new set of NVGPU_VM_MAP_* and NVGPU_VM_AREA_ALLOC_* flags.
This allows the common MM code to not depend on the UAPI header
defined for Linux.
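A sketch of the translation helper, shown for two representative map
flags (helper name illustrative):

    static u32 nvgpu_vm_translate_as_flags(u32 as_flags)
    {
        u32 core_flags = 0;

        if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_FIXED_OFFSET)
            core_flags |= NVGPU_VM_MAP_FIXED_OFFSET;
        if (as_flags & NVGPU_AS_MAP_BUFFER_FLAGS_CACHEABLE)
            core_flags |= NVGPU_VM_MAP_CACHEABLE;

        return core_flags;
    }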
In addition to this change a couple of other small changes were
made:
1. Deprecate, print a warning, and ignore usage of the
NVGPU_AS_MAP_BUFFER_FLAGS_MAPPABLE_COMPBITS flag.
2. Move the t19x IO coherence flag from the t19x UAPI header
to the regular UAPI header.
JIRA NVGPU-293
Change-Id: I146402b0e8617294374e63e78f8826c57cd3b291
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599802
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Most of the VGPU code is Linux specific but lives in common code.
So until the VGPU code is properly abstracted and made OS-independent,
move all of the VGPU code to the Linux-specific directory.
Handle the corresponding Makefile changes.
Update all #includes to reflect the new paths.
Add the GPL license header to the newly added Linux files.
Jira NVGPU-387
Change-Id: Ic133e4c80e570bcc273f0dacf45283fefd678923
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599472
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Commit 81868a187f updated barrier
usage to use the nvgpu wrappers, and in doing so downgraded many
plain barriers {mb(), wmb(), rmb()} to the SMP versions of these
barriers.
The SMP versions of these barriers are only issued
when running on an SMP machine. In most of the cases mentioned
above this is fine, since the barriers are present to facilitate
proper ordering across CPUs. A single CPU is always coherent
with itself, so in the non-SMP case we don't need those barriers.
However, there are a few places (GMMU page table programming, IO
accessors, userd) where the barrier usage is for communicating
with, and establishing ordering for, the GPU. We need these
barriers on both SMP and non-SMP machines. Therefore we must use
the plain barrier versions there.
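In wrapper terms the distinction is:

    nvgpu_smp_wmb(); /* CPU<->CPU ordering; compiles out on !CONFIG_SMP */
    nvgpu_wmb();     /* CPU<->GPU ordering (PTEs, IO, userd); always issued */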
Change-Id: I376129840b7dc64af8f3f23f88057e4e81360f89
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1599744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The reg_op_is_* static inlines force a dependency on the UAPI in
regops_gk20a.h. Change the implementation to functions
in the .c file.
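The shape of the change on one representative helper (the REGOP()
shorthand and the GR-context type list are illustrative):

    /* regops_gk20a.h: declaration only; no UAPI types needed */
    bool reg_op_is_gr_ctx(u8 type);

    /* regops_gk20a.c: the body moves out of the header */
    bool reg_op_is_gr_ctx(u8 type)
    {
        return type == REGOP(TYPE_GR_CTX) ||
               type == REGOP(TYPE_GR_CTX_TPC) ||
               type == REGOP(TYPE_GR_CTX_SM) ||
               type == REGOP(TYPE_GR_CTX_CROP) ||
               type == REGOP(TYPE_GR_CTX_ZROP) ||
               type == REGOP(TYPE_GR_CTX_QUAD);
    }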
JIRA NVGPU-388
Change-Id: If5cae1ad011a26ee5ff23e1e39aac3d88fd5bb98
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598979
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove a forward declaration of dma_buf in <nvgpu/vm.h>. This
forward declaration is no longer necessary since all usage of
dma_bufs has been removed from common code!
JIRA NVGPU-224
JIRA NVGPU-30
Change-Id: I0948d8b99efc6429f7a6d122ef3655d670205d75
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598934
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For gp10b and previous chips, the RM server issues a fake mmu fault to
reset the channel, and the mmu fault event is sent to the vgpu client,
which causes the client to abort the channel or tsg.
But on gv11b, the RM server doesn't issue the fake mmu fault any more,
so the client must abort the channel or tsg right after the force reset
is finished.
Jira EVLR-1671
Change-Id: I11399fda84d31086ba1d4ffde5948e409cde2a28
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597378
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Simplify compbits alloc by making the alloc function re-callable for
the buffer, and making it return the comptags info. This simplifies
the calling code: alloc_or_get vs. get + alloc + get again.
Add tracking of whether the allocated compbits need clearing before
they can be used in PTEs. We do this since clearing is part of the gmmu
map call on vgpu, which can fail.
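Usage sketch of the new calling convention (allocator and field names
illustrative; the clear helper is hypothetical):

    struct gk20a_comptags comptags;

    /* one call: allocates on first use, returns existing info later */
    gk20a_alloc_or_get_comptags(g, buf, &g->gr.comp_tags, &comptags);

    /* PTEs may reference the comptags only after a successful clear */
    if (comptags.lines != 0 && comptags.needs_clear)
        err = clear_comptags_for_map(vm, &comptags); /* hypothetical */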
Bug 1902982
Change-Id: Ic4ab8d326910443b128e82491d302a1f49120f5b
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597130
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Clean up the comptag-related data structures and allocation logic. The
most important change is that we only ever try comptag allocation once
to prevent incorrect map aliasing.
If we were to retry the allocation on further map calls, the following
situation would become possible:
(1) Request a compressible kind mapping for a buffer. Comptag alloc fails
and we proceed with the incompressible kind fallback.
(2) Request another compressible kind mapping for the buffer. The comptag
alloc retry succeeds and now we use the compressible kind.
(3) After writes through the compressible kind mapping, the buffer is no
longer readable via the fallback incompressible kind mapping.
The other changes remove the unused comptag-related fields in
gk20a_comptags and nvgpu_mapped_buf, and retrieve comptags info
only for compressible buffers. We also make nvgpu_ctag_buffer_info and
nvgpu_vm_compute_compression private mm/vm.c definitions, since
they're not used elsewhere.
Bug 1902982
Change-Id: I0c9fe48ccc585a80dd2c05ec606a079c1c1d41f1
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1595153
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated; user space should
use a combination of timeslice and interleave levels to express priority.
Hence remove the IOCTLs and all corresponding APIs.
Jira NVGPU-393
Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This is needed for t19x during engine preempt done polling.
E.g. a copy engine (CE) stall interrupt should not prevent GR
from finishing preemption. To check whether a pending stall
interrupt is relevant to the engine being polled for
preemption completion, a function providing the engine's
intr mask is needed. With it, the polling code can make sure
there are no stall interrupts pending for the engine being
polled for preemption done. If stall interrupts
are pending for an engine, preemption will never finish.
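A sketch of how the polling code would consume the new mask (the op
and register accessor names are illustrative):

    /* only this engine's stall interrupts can block its preemption */
    u32 eng_intr_mask = g->ops.fifo.intr_0_engine_mask(g, engine_id);

    if (gk20a_readl(g, fifo_intr_0_r()) & eng_intr_mask) {
        /* a stall intr is pending for the polled engine: handle it
         * (or reset) instead of spinning on preempt-done forever */
    }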
Bug 200277163
Bug 1945121
Change-Id: Ie1ccac52c3e8d453a49084e195f2e7eaafb8f057
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1584065
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Check for interrupts or hangs while waiting for the preempt to complete.
During pbdma/eng preempt done polling, any stalling interrupts relating
to the runlist must be detected and handled in order for the preemption
to complete.
When PBDMA fault or CE fault occurs, the PBDMA will save out
automatically. TSG related to the context in which the fault occurred
will not be scheduled again until the fault is handled.
In the case of some other issue requiring the engine to be reset, TSG
will need to be manually preempted.
In all cases, a PBDMA interrupt may occur prior to the PBDMA being able to
switch out. SW must handle these interrupts according to the relevant handling
procedure before the PBDMA preempt can complete.
Opt for an engine reset instead of waiting for the preemption to finish
when any stall interrupt is pending during engine context preempt
completion.
Bug 200277163
Bug 1945121
Change-Id: Icaef79e3046d82987b8486d15cbfc8365aa26f2e
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1522914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Tested-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Modify the HAL clk->get_maxfreq() signature to match those of
clk->set_rate() and clk->get_rate(). This allows support of multiple
clocks.
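The signature change, schematically (types follow the
set_rate/get_rate pattern and are illustrative):

    /* before: one implicit clock */
    unsigned long (*get_maxfreq)(struct gk20a *g);

    /* after: per-domain, like set_rate()/get_rate() */
    unsigned long (*get_maxfreq)(struct gk20a *g, u32 api_domain);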
Implement the clk.get_maxfreq operation for vgpu and use it to
fill the max_freq field in the GPU characteristics query.
JIRA NVGPU-388
Change-Id: I93bfc2aa76e38b8a5e0ac55d87c4e26df6fea77f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1597329
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Program mc_elpg_enable and mss nvlink soc credits only
when bpmp is not running, or when bpmp is running but the underlying
platform is simulation (see the sketch after this list). For
simulation, bpmp does not execute the hot reset sequence. As part of
gpu unpowergate, bpmp will program mc_elpg_enable and also set mss
nvlink soc credits after bringing mss nvlink out of reset
-Remove updating mc_enable, as writes to this register have no
effect
-Remove the fifo_fb_iface_r read/write. This hack was added during
the initial bring-up of emulation platforms
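The gating condition from the first item, schematically (helper names
hypothetical):

    /* bpmp does this programming itself, except on simulation where
     * it does not run the hot reset sequence */
    if (!bpmp_running(g) || platform_is_simulation(g)) {
        program_mc_elpg_enable(g);
        program_mss_nvlink_soc_credits(g);
    }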
Bug 2018223
Bug 200269361
Change-Id: Ie09c259e48295a93c6d15376308186152db973fa
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1594495
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We use the Linux-specific graphics/compute preemption modes defined in
the uapi header (of the below form) all over the common code:
NVGPU_GRAPHICS_PREEMPTION_MODE_*
NVGPU_COMPUTE_PREEMPTION_MODE_*
Since common code should be independent of Linux-specific code, define
new modes of the below form in common code and use them everywhere:
NVGPU_PREEMPTION_MODE_GRAPHICS_*
NVGPU_PREEMPTION_MODE_COMPUTE_*
Add the required parser functions to convert the two forms into each
other.
For the Linux IOCTL NVGPU_IOCTL_CHANNEL_SET_PREEMPTION_MODE, we need to
convert the Linux-specific modes into common modes before passing them
to common code. And to report gpu characteristics to user space, we
first convert the common modes into Linux-specific modes and then pass
those up.
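One direction of the conversion, sketched on the graphics WFI/GFXP
modes (parser name illustrative; the compute modes convert the same
way):

    u32 nvgpu_get_common_gr_preempt_mode(u32 linux_mode)
    {
        switch (linux_mode) {
        case NVGPU_GRAPHICS_PREEMPTION_MODE_WFI:
            return NVGPU_PREEMPTION_MODE_GRAPHICS_WFI;
        case NVGPU_GRAPHICS_PREEMPTION_MODE_GFXP:
            return NVGPU_PREEMPTION_MODE_GRAPHICS_GFXP;
        default:
            return 0; /* unknown mode */
        }
    }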
Jira NVGPU-392
Change-Id: I8c62c6859bdc1baa5b44eb31c7020e42d2462c8c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1596930
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>