Implement empty stubs of the channel watchdog functions for when the
watchdog is disabled in the build. Add some forward declarations that
were missing. Now most call sites don't need #ifdefs for the build flag.
Add error checks for watchdog allocation failure.
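For illustration, the no-op pattern looks roughly like the sketch below;
the config symbol and function names here are placeholders, not
necessarily the exact nvgpu identifiers:

    #ifdef CONFIG_NVGPU_CHANNEL_WDT
    /* real declarations, implemented in the watchdog unit */
    void nvgpu_channel_wdt_start(struct nvgpu_channel *ch);
    void nvgpu_channel_wdt_stop(struct nvgpu_channel *ch);
    #else
    /* empty stubs so call sites compile without #ifdefs */
    static inline void nvgpu_channel_wdt_start(struct nvgpu_channel *ch)
    {
    }
    static inline void nvgpu_channel_wdt_stop(struct nvgpu_channel *ch)
    {
    }
    #endif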
Jira NVGPU-5494
Jira NVGPU-5493
Change-Id: I2d42e8ab4c5e045cd280b2e1f254396127bd154b
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2352050
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, nvgpu_writel_loop() writes to a register and immediately
checks whether the register value has been updated. It might take some
time for hardware registers to reflect the value written by software.
Modify nvgpu_writel_loop() to accept a number of retries for checking
whether the register value has been updated, and assert with
nvgpu_assert() if it has not.
Also, move nvgpu_writel_loop() to common code and use the generic
nvgpu_readl() and nvgpu_writel() APIs.
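The intended shape is roughly as follows (a sketch only; the parameter
name and exact loop structure are assumptions):

    void nvgpu_writel_loop(struct gk20a *g, u32 r, u32 v, u32 retries)
    {
        u32 i;

        for (i = 0U; i < retries; i++) {
            nvgpu_writel(g, r, v);
            if (nvgpu_readl(g, r) == v) {
                return;
            }
        }
        /* the HW never reflected the written value within the budget */
        nvgpu_assert(false);
    }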
JIRA NVGPU-5490
Change-Id: Iaaf24203a91eee3d05de7d0c7dea18113367de5f
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2348628
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Historically, nvgpu has supported a struct gk20a_dmabuf_priv and
associated it with a dmabuf instance. This was aided by Nvmap's
dma_buf_set_drv_data() and dma_buf_get_drvdata() APIs. gk20a_dmabuf_priv
is used to store comptag IDs (one per 64 KB) and can also store the
dmabuf attachments to avoid multiple attach/detach calls.
dma_buf_set_drv_data() allows Nvgpu to associate an instance of struct
gk20a_dmabuf_priv with the dmabuf instance and also provide a release
callback to delete the instance when the last reference to the dmabuf
is put. Nvmap accomplishes this by modifying the struct dma_buf_ops
definition in the kernel code to include set_drv_data and get_drv_data
callbacks.
The above approach won't work for upstream Kstable, and Nvmap plans to
remove these APIs from upcoming newer downstream kernels as well.
To implement the same functionality without depending on Nvmap, Nvgpu
implements a release chaining mechanism. The dmabuf's 'ops' pointer
points to a constant struct, so a full copy of the ops is made and the
copy's release pointer is replaced. struct gk20a_dmabuf_priv stores the
new copy and the dmabuf's 'ops' is changed to point to it. This allows
Nvgpu to retrieve the corresponding gk20a_dmabuf_priv instance using
container_of. Nvgpu's custom release callback invokes the original
release callback of the dmabuf's producer as its last step, thus
completing the full circle. If the driver is removed, Nvgpu restores
the dmabuf's 'ops' to its original state. To accomplish this, every
instance of struct nvgpu_os_linux maintains a linked list of the
gk20a_dma_buf instances. During driver removal, this linked list is
traversed, each dmabuf's 'ops' pointer is put back to its original
state, and the instance is freed.
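Conceptually, the release chaining looks like this (a simplified
sketch; field names and the struct layout are illustrative, not the
actual nvgpu definitions):

    struct gk20a_dmabuf_priv {
        struct dma_buf_ops local_ops;       /* copy, release overridden */
        const struct dma_buf_ops *prev_ops; /* exporter's original ops */
        /* ... comptags, cached attachments, list node, etc. ... */
    };

    static void gk20a_dmabuf_release(struct dma_buf *dmabuf)
    {
        struct gk20a_dmabuf_priv *priv =
            container_of(dmabuf->ops, struct gk20a_dmabuf_priv, local_ops);
        const struct dma_buf_ops *prev_ops = priv->prev_ops;

        /* free nvgpu's private state and restore the original ops */
        dmabuf->ops = prev_ops;
        /* ... free priv ... */

        /* chain to the producer's own release as the last step */
        prev_ops->release(dmabuf);
    }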
Nvgpu is a producer of dmabufs for vidmem and needs a way to check
whether a given dmabuf belongs to itself. It is no longer reliable to
depend on a comparison of the 'ops' pointer. Instead, struct
dma_buf_export_info allows a name to be set by the exporter, and this
can be compared against a memory location that belongs to Nvgpu.
Similarly, for sysmem dmabufs, Nvmap makes a comparable change in the
way it identifies whether a dmabuf belongs to itself.
Removed NVGPU_DMABUF_HAS_DRVDATA and moved to a unified mechanism for
both the downstream and upstream kernels.
Other changes in this patch include the following:
1) Deletion of dmabuf.c and moving its contents over to dmabuf_priv.c
2) Replacing gk20a_mm_pin_has_drvdata with nvgpu_mm_pin_privdata, and
   likewise for unpin.
Bug 2878569
Change-Id: Icf8e79b05a25ad5a85f478c3ee0fc1eb7747e22d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2341001
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Puneet Saxena <puneets@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make recovery and quiesce co-exist to support the quiesce state on
unrecoverable errors. Currently, the quiesce code is wrapped under
ifndef CONFIG_NVGPU_RECOVERY. Isolate the quiesce code from the
recovery config, thereby enabling it on all builds.
On Linux, the hung task checker (check_hung_uninterruptible_tasks() in
kernel/hung_task.c) complains that the quiesce thread is stuck for more
than 120 seconds:
INFO: task sw-quiesce:1068 blocked for more than 120 seconds.
A wait time of more than 120 seconds is expected, as the quiesce thread
waits until quiesce is triggered on a fatal unrecoverable error.
However, the INFO print upsets the kernel_warning_test (KWT) on Linux
builds. To fix the failing KWT, change the quiesce task to sleep
interruptibly instead of uninterruptibly, as the checker only looks at
uninterruptible tasks.
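In terms of the plain kernel wait API, the change amounts to something
like this (illustrative only; nvgpu's own condition-variable wrappers
and the actual variable names differ):

    /* before: uninterruptible sleep, reported by the hung task checker */
    wait_event(quiesce_wq, quiesce_triggered);

    /* after: interruptible sleep, ignored by
     * check_hung_uninterruptible_tasks() */
    wait_event_interruptible(quiesce_wq, quiesce_triggered);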
Bug 2919899
JIRA NVGPU-5479
Change-Id: Ibd1023506859d8371998b785e881ace52cb5f030
Signed-off-by: tkudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2342774
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Disable the not-so-useful syncpoint debug spew in the nvgpu debug dump.
The GPU has its own snapshot view of syncpoints, so visibility into
other syncpoint data is often not very helpful.
However, there are plausibly times when this would be necessary, for
example when debugging a sync issue between the GPU and some other SoC
engine. Therefore, the syncpoint debug spew can be re-enabled at
runtime if necessary.
JIRA NVGPU-5541
Change-Id: I7028e2d6027e41835b2fed4f2bbb366c16b99967
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2349185
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gk20a_debug_dump() function implicitly adds a newline since it uses
nvgpu_err() under the hood (for UART-destined prints). For the
seq_file-destined writes it does not, so there is an annoying
inconsistency.
Remove the newline that many of the gk20a_debug_dump() calls add and
add the newline to the (now) seq_printf() call. This reduces the length
of the debug dump logs and speeds them up - UART is _very_ slow after
all.
Also clean up some formatting issues I happened to notice in the
various debug prints.
JIRA NVGPU-5541
Change-Id: Iabf853d5c50214794fc4cbb602dfffabeb877132
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2347956
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
With an exclusively owned context and a channel per cde job, new cde
jobs are never launched on an active channel. A context is allocated,
then used for one job, and then released to the free pool when the
completion callback fires. There is no need to check for an empty job
list, so delete the check to avoid a dependency on channel joblist
internals from the cde code.
Long ago, cde contexts were reused before going idle, but the dynamic
allocation has existed for years now and each context/channel pair is
isolated.
Jira NVGPU-5492
Change-Id: I9047ef4cd029996ba58fec42ddd55bb52cf0d6a6
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2343243
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create an iova for the syncpoint shim region when the iommu is enabled
and nvlink is disabled. This iova is then used to create an nvgpu mem
with nvgpu_mem_create_from_phys(), which is then used to create GPU
mappings. Instead of creating another variable, g->syncpt_mem's priv is
used to store the sgt, which needs to be freed on deinit.
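For reference, creating an IOVA for a physical (non-struct-page) region
like the syncpoint shim via the standard DMA API looks roughly like the
snippet below; nvgpu goes through nvgpu_mem_create_from_phys() instead,
so treat this only as an illustration of the concept:

    dma_addr_t iova;

    iova = dma_map_resource(dev, shim_base_phys, shim_size,
                            DMA_BIDIRECTIONAL, 0);
    if (dma_mapping_error(dev, iova))
        return -ENOMEM;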
Jira NVGPU-5376
Change-Id: I0b5a8320fbbb68031912ae88cfe8c2c3804fb813
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332643
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add NVGPU_SUPPORT_COMPRESSION to indicate whether the compression
feature is supported in nvgpu. If not, set the cbc.init, cbc.ctrl and
cbc.alloc_comptags HALs to NULL.
Add the corresponding GPU characteristics flag and IOCTL mapping to
sync the compression support status with nvrm_gpu.
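The HAL nulling is conceptually just the following (a sketch; where
exactly this happens during HAL init is not shown):

    if (!nvgpu_is_enabled(g, NVGPU_SUPPORT_COMPRESSION)) {
        g->ops.cbc.init = NULL;
        g->ops.cbc.ctrl = NULL;
        g->ops.cbc.alloc_comptags = NULL;
    }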
JIRA NVGPU-4666
Change-Id: I2e685688ddac592b3bb918ee70c82ea5524d695a
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2338926
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Sync file support in Linux has been stabilized and the new config is
called CONFIG_SYNC_FILE. Even if perhaps not intended, both the
stabilized version and the legacy CONFIG_SYNC can coexist. As a first
step towards supporting the stabilized version, add
CONFIG_NVGPU_SYNCFD_ANDROID and CONFIG_NVGPU_SYNCFD_NONE as choice
configs, of which exactly one will be set. A later patch will extend
this with a choice for CONFIG_SYNC_FILE.
Jira NVGPU-5353
Change-Id: I67582b68d700b16c46e1cd090f1b938067a364e3
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2336118
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Aggressive sync destroy is used on some platforms where the number of
syncpoints is limited. It can cause sync objects to get allocated and
freed in the submit path and when jobs are cleaned up, so it requires
deferred cleanup. Such allocations do not belong in the job tracking of
a deterministic submit path.
Although this has been technically allowed before, deterministic
channels have likely not been a priority on those old platforms with
aggressive sync destroy set.
Update the virtualized gp10b platform data to match on a gp10b-vgpu
compat string instead of gk20a-vgpu. gk20a (Tegra T124) hasn't been
supported for a long time. Delete the aggressive sync destroy field
from this platform: it has enough syncpoints that they don't need to be
allocated dynamically, and having this property set for gp10b-vgpu has
likely been a mistake.
This is not a completely pure cherry-pick: also extend the gpu
characteristics to not advertise full deterministic submit support when
aggressive sync destroy is on. This platform flag cannot be adjusted by
the user, unlike many other flags.
Jira NVGPU-4548
Change-Id: I283f546d48b79ac94b943d88e5dce55710858330
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2322042
(cherry picked from commit b1ba2b997b2174e365bcb0782ef3e67260ff9e57)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328411
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, on platforms with the canRailgate device characteristic
disabled, suspend can block because deterministic channels hold busy
references. This patch first holds off any new jobs for deterministic
channels and then releases the busy references taken by those channels.
Following this, suspend waits (with a timeout) for the device to become
idle, i.e. for nvgpu's internal usage counter to reach zero. This
ensures there are no further jobs in progress and allows the system to
go into the suspend state.
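In outline, the suspend path now behaves like the sketch below; the
helper names are illustrative rather than the exact nvgpu functions:

    /* 1) hold off new deterministic jobs and drop their busy refs */
    nvgpu_channel_deterministic_idle(g);

    /* 2) wait, with a timeout, for the usage counter to reach zero */
    if (nvgpu_wait_for_idle(g) != 0) {
        /* still busy: let deterministic channels resume and abort */
        nvgpu_channel_deterministic_unidle(g);
        return -EBUSY;
    }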
Bug 200598228
Bug 2930266
Change-Id: Id02b4d41a9c2dd64303b2e2449dbed48c12aea4c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328489
(cherry picked from commit 9d1e07ca18)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2330159
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
dma_buf_kmap was introduced a decade ago to map a dma_buf partially,
page by page, back when 32-bit systems were fairly common. It was added
to avoid exhausting vmalloc space. Starting from kernel 5.6 it is
deprecated, as vmap calls should succeed with the larger vmalloc space
now available.
Use dma_buf_vmap/vunmap instead of dma_buf_kmap/kunmap for mapping the
notifier memory in gk20a_channel_wait_semaphore.
Also update the debug prints and add a speculation barrier to the start
of gk20a_channel_wait.
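With the pre-5.11 dma-buf API (where dma_buf_vmap() still returns a
plain pointer), the replacement pattern is roughly the following
sketch; 'offset' and 'notif' are illustrative names:

    void *va = dma_buf_vmap(dmabuf);

    if (va == NULL)
        return -ENOMEM;

    /* the notifier lives at 'offset' within the mapped buffer */
    notif = (void *)((u8 *)va + offset);
    /* ... read/update the notifier ... */

    dma_buf_vunmap(dmabuf, va);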
Bug 2925664
Change-Id: I49078fa81f050a57a5b66a793e62006dd66e3ba3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2326513
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The following commit updated the debug message in nvgpu_vm_find_mapping()
with respect to reuse of a mapping:
commit 2f00d9adfc4fc91a6b84b14cc513f9b855d39cad
Author: Sagar Kamble <skamble@nvidia.com>
gpu: nvgpu: fix null pointer access in nvgpu_vm_find_mapping
That reuse log is about the mapping, not the SGT. Fix the log and add a
comment detailing the different handling of the SGT for the dmabuf
drvdata cases.
Bug 2834141
Change-Id: I3630de1c45a2bf55ff18bdb426f0597efe83f72c
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328427
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Guard nvgpu_os_fence_syncpt_fdget() with an nvgpu_has_syncpoints()
check. Even when CONFIG_TEGRA_GK20A_NVHOST is set, the platform data bit
can be disabled independently; on Linux we have a runtime flag to
disable them, too. If nvgpu doesn't have syncpt support, don't try
reading syncpt-based sync files.
If a sema-only-backed channel sync is given a syncpoint-based prefence
fd, we can't wait for it with the current design that couples waits and
increments in one interface. This should eventually be fixed, but for
now the extra check at least guards another interesting case.
A sync file with a zero fence count can be trivially accepted as either
a valid syncpoint fence or a sema fence. If only semas are supported
and the syncpt check (which happens first) turned the empty fd into a
syncpt-based sync fence, the sema wait layer would wrongly reject it.
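The guard itself is simple in spirit (a sketch; the fdget signature and
the surrounding fallback logic are assumptions, not the exact nvgpu
code):

    /* without syncpt support, never interpret the fd as syncpt-backed */
    if (nvgpu_has_syncpoints(g)) {
        err = nvgpu_os_fence_syncpt_fdget(&os_fence, c, fd);
        if (err == 0)
            return 0;
    }
    /* otherwise fall through to the semaphore-backed sync file path */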
Jira NVGPU-4548
Change-Id: Ib40c2d9a6a25812c5e24eef52c1d1a4f81eeed83
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325733
(cherry picked from commit 877f99d7c9977dfea14480a1b0488c990b813d1d)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2326044
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvhost_syncpt_read_minval() only reads the min value that nvhost has
cached. It makes sense for host managed syncpoints, but the user
syncpoint that needs set_safe_state is client managed; its min and max
values are not tracked internally. Use nvhost_syncpt_read_ext_check() to
read the actual syncpoint value from HW and set the "safe state" (65536
increments) based on that.
The safe state is analogous to "set min equal to max" when max is
expected to be no more than the current value plus a big number. Using
the cached min value would make the safe state lose its meaning if
there have been more than that big number of increments since the
syncpoint was allocated.
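Sketch of the new flow; nvgpu reaches nvhost through its own wrapper
layer, and how the syncpoint is then advanced to the safe value is
elided, so treat the names below as illustrative:

    u32 val, safe_val;

    /* read the live HW value instead of nvhost's cached min */
    if (nvhost_syncpt_read_ext_check(pdev, id, &val) != 0)
        return;

    /* a value far beyond anything that can still be pending */
    safe_val = val + 65536U;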
Jira NVGPU-4548
Change-Id: I395be75f1696e48e344f5503420864efeb3621de
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323060
(cherry picked from commit ae571178ca63c4fa3e6bf70a4da0221393e975ee)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2326380
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Refactor user-managed syncpoints out of the channel sync infrastructure
that deals with jobs submitted via the kernel API. The user syncpt only
needs to expose the id and GPU address of the reserved syncpoint. None
of the rest (fences, priv cmdbufs) is needed for that, so it hasn't
been ideal to couple it with the user-allocated syncpts.
With user syncpts now provided by channel_user_syncpt, remove the
user_managed flag from the kernel sync API.
This allows all the kernel submit sync code to be conditionally
compiled in only when needed, and separates the user sync functionality
more clearly from the rest, behind a minimal API.
[this is squashed with commit 5111caea601a (gpu: nvgpu: guard user
syncpt with nvhost config) from
https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325009]
Jira NVGPU-4548
Change-Id: I99259fc9cbd30bbd478ed86acffcce12768502d3
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321768
(cherry picked from commit 1095ad353f5f1cf7ca180d0701bc02a607404f5e)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2319629
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
If the os fence is the only kind that's supported, fail a submit if the
user wants fences but doesn't explicitly request sync fences and thus
expects syncpoints. Syncpoint support is advertised to userspace in the
gpu characteristics, so userspace already has the knowledge to request
the correct sync type.
Do this check at the ioctl level. The in-kernel users of submits (cde,
copyengine) can work without syncpoints, and sync fences are used only
in userspace.
Also fail a submit if CONFIG_SYNC is not set and sync fences are
requested. Lack of kernel support doesn't prevent userspace from
wrongly requesting them anyway.
Clarify the deferred cleanup requirements. The sync framework is needed
only for post sync fences, but deferred cleanup is still always needed
with semaphores because the internal tracking is done with dynamically
allocated (although small) objects.
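At the ioctl level, the check is roughly the following; the UAPI flag
names follow nvgpu's submit interface naming but should be treated as
assumptions here, as should the syncpoint-support query:

    if ((args->flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET) != 0U &&
        (args->flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE) == 0U &&
        !nvgpu_has_syncpoints(g)) {
        /* only sync-file post fences are available; refuse the
         * implicit request for a syncpoint fence */
        return -EINVAL;
    }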
Jira NVGPU-4548
Change-Id: I2e5a6554930cb413b2bb46ddfe388e41390bc7e4
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321715
(cherry picked from commit d870956170906eae1088846ec05266c859669771)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2318157
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
g->ops.channel.set_syncpt() updates the inst block's allowed-syncpt
field for the syncpoint that is allocated for kernel-mode submits (if
any). This is different from the userspace-owned syncpoint used with
usermode submits, so don't call set_syncpt when creating the user
syncpt.
If the kernel sync object didn't exist, the call would even fail. The
allowed syncpt is written during nvgpu_channel_setup_bind() (if
usermode submit isn't requested), after the kernel syncpt is allocated.
Originally, a single syncpoint was shared between the kernel and
userspace; now that they are separate, this separate path for updating
the allowed syncpt would need further modifications to work with the
user sync, if it were used on platforms that care about the allowed
syncpt.
Jira NVGPU-4548
Change-Id: If800e704ef8a3289e04c8d75a623affd6779e309
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2320357
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>