The dmabuf internals that nvgpu relies upon for storing metadata for
compressible buffers changed in k6.1. For now, disable compression
on all k6.1+ kernels.
Additionally, fix numerous compilation issues due to the bit-rotted
compression config. All normal Tegra products support compression
and thus have this config enabled, but over the last several years
compression-dependent code crept in that was not protected under the
compression config.
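As a minimal sketch only (the CONFIG_NVGPU_COMPRESSION symbol and the
helper name below are illustrative assumptions, not the exact nvgpu
change), the version gate could look like:

  #include <linux/version.h>

  /*
   * Hypothetical helper: report whether compression can be used on
   * this kernel. On k6.1+ the dmabuf internals used for comptag
   * metadata changed, so compression is disabled there for now.
   */
  static bool nvgpu_compression_available(void)
  {
  #if defined(CONFIG_NVGPU_COMPRESSION) && \
      LINUX_VERSION_CODE < KERNEL_VERSION(6, 1, 0)
      return true;
  #else
      return false;
  #endif
  }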
Bug 3844023
Change-Id: Ie5b9b5a2bcf1a763806c087af99203d62d0cb6e0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2820846
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
Tested-by: Sagar Kamble <skamble@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Upstream commit 7938f4218168 ("dma-buf-map: Rename to iosys-map")
renames 'struct dma_buf_map' to 'struct iosys_map' and breaks building
the NVGPU driver with Linux v5.18-rc1. In the NVGPU driver there are
many places where 'dma_buf_map' is used, so to clean up the code and
minimise the impact of this change, add a gk20a_dmabuf_vmap() and a
gk20a_dmabuf_vunmap() helper function. These new functions support all
kernel versions and eliminate a lot of the KERNEL_VERSION ifdefs.
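A sketch of the wrapper idea (the version boundaries and return
conventions below are assumptions, not necessarily the exact nvgpu
implementation):

  #include <linux/version.h>
  #include <linux/dma-buf.h>

  /* Map a dmabuf into the kernel address space, hiding the
   * dma_buf_map -> iosys_map rename behind one helper. */
  static void *gk20a_dmabuf_vmap(struct dma_buf *dmabuf)
  {
  #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 18, 0)
      struct iosys_map map;

      if (dma_buf_vmap(dmabuf, &map))
          return NULL;
      return map.vaddr;
  #elif LINUX_VERSION_CODE >= KERNEL_VERSION(5, 11, 0)
      struct dma_buf_map map;

      if (dma_buf_vmap(dmabuf, &map))
          return NULL;
      return map.vaddr;
  #else
      return dma_buf_vmap(dmabuf);
  #endif
  }

  static void gk20a_dmabuf_vunmap(struct dma_buf *dmabuf, void *addr)
  {
  #if LINUX_VERSION_CODE >= KERNEL_VERSION(5, 18, 0)
      struct iosys_map map = IOSYS_MAP_INIT_VADDR(addr);

      dma_buf_vunmap(dmabuf, &map);
  #elif LINUX_VERSION_CODE >= KERNEL_VERSION(5, 11, 0)
      struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(addr);

      dma_buf_vunmap(dmabuf, &map);
  #else
      dma_buf_vunmap(dmabuf, addr);
  #endif
  }

Callers then deal only with a plain void * regardless of the kernel
version.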
Bug 3598986
Change-Id: Id0f904ec0662f20f3d699b74efd9542d12344228
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2693970
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
To let userspace query the comptags allocation status of a buffer,
comptags are now allocated only during buffer registration done by
nvrm_gpu. Earlier, they were allocated during map.
nvrm_gpu will send a metadata blob to be associated with the buffer.
This has to be stored in the dmabuf privdata for all buffers
registered by nvrm_gpu.
This patch moves the privdata allocation to the buffer registration
ioctl.
Remove g->mm.priv_lock as it is no longer needed. This lock was added
to protect the dmabuf private data setup. That private data is now
handled through dmabuf->ops, and the setup of dmabuf->ops is done
under dmabuf->lock.
To support legacy userspace, this patch still allocates comptags on
demand on map calls for unregistered buffers.
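A hypothetical sketch of the resulting map-time flow (the
gk20a_dmabuf_priv_* helper names are illustrative, not the actual
nvgpu functions):

  #include <linux/dma-buf.h>

  /*
   * Registered buffers already carry privdata (and comptags) from the
   * registration ioctl; only unregistered, legacy buffers fall back
   * to on-demand allocation at map time.
   */
  static struct gk20a_dmabuf_priv *nvgpu_map_get_priv(struct gk20a *g,
          struct dma_buf *dmabuf)
  {
      struct gk20a_dmabuf_priv *priv = gk20a_dmabuf_priv_lookup(dmabuf);

      if (priv == NULL) {
          /* Legacy userspace: no registration ioctl was issued. */
          priv = gk20a_dmabuf_priv_alloc(g, dmabuf);
      }

      return priv;
  }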
Bug 200586313
Change-Id: I88b2ca04c733dd02a84bcbf05060bddc00147790
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2480761
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu does map attachment with the DMA_BIDIRECTIONAL direction for
buffers irrespective of the GPU mapping type. For secure buffer
access, nvmap will allow map attachment of RO buffers only with the
DMA_TO_DEVICE direction.
nvgpu does an RO GPU mapping if the buffer is RO for the CPU or if the
user requests to map it as RO. In both cases the dma_buf map
attachment should be done with the DMA_TO_DEVICE direction, as the GPU
only reads the buffer through the mapped SGT.
Also, map the gpfifo buffer as read_only as it is intended to be read
only. The userd buffer is accessed by the GPU through iova and is not
GMMU mapped.
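A minimal sketch of the direction selection (the helper name is an
assumption; the actual change lives in nvgpu's map path):

  #include <linux/dma-buf.h>
  #include <linux/dma-direction.h>

  /* RO for the CPU, or an RO mapping requested by the user, means the
   * GPU will only read through this attachment. */
  static enum dma_data_direction nvgpu_attach_direction(bool buf_read_only,
          bool map_read_only)
  {
      if (buf_read_only || map_read_only)
          return DMA_TO_DEVICE;

      return DMA_BIDIRECTIONAL;
  }

The returned direction is then passed to dma_buf_map_attachment()
instead of hard-coding DMA_BIDIRECTIONAL.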
Bug 200731819
Change-Id: Ifc60973f298f7cacab16c5dedecbb40c5f33ed1d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2539312
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Instead of allocating priv data for all external buffers, allocate it
only on demand, when compression is requested either by CDE or via
libnvrm_gpu.
This allows allocators like nvidia-drm to use non-compressed buffers
without needing to avoid the core drm checks, e.g.
drm_gem_prime_import_dev(), which checks
if (dma_buf->ops == &drm_gem_prime_dmabuf_ops).
This patch also gets rid of the optimization of dma_buf attach/detach
calls. nvgpu now needs to call attach/detach every time the dmabuf fd
is imported.
Change-Id: Idefd269b32974106e85ff09e17ebc752b92f830c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2372213
Tested-by: Yogish Kulkarni <yogishk@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The stored fence in struct gk20a_buffer_state is a post fence of a
previous cde preparation job, if any. This stored fence is passed to
userspace via NVGPU_GPU_IOCTL_PREPARE_COMPRESSIBLE_READ in case a
preparation job was necessary to fulfill the request. As nothing else is
needed from the fence, make it just a struct nvgpu_user_fence.
Add nvgpu_user_fence_clone() for copying this user fence because it's
stored internally and returned to userspace. The refcounted os fence
needs special care. Now that the API is not so trivial anymore, add some
documentation.
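A rough sketch of the clone semantics (the field and helper names
below are assumptions for illustration, not the actual nvgpu
definitions):

  /* Copy the fence and take an extra reference on the refcounted os
   * fence so the copy handed to userspace can outlive the stored one. */
  struct nvgpu_user_fence nvgpu_user_fence_clone(struct nvgpu_user_fence *f)
  {
      struct nvgpu_user_fence copy = *f;

      if (copy.os_fence != NULL)
          nvgpu_os_fence_ref_get(copy.os_fence);

      return copy;
  }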
Jira NVGPU-5248
Jira NVGPU-5493
Change-Id: I8bc4d52eaab7c7cbc5573b331e72e1d853f9f057
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2359065
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Historically, nvgpu has supported a struct gk20a_dmabuf_priv
associated with a dmabuf instance. This was aided by Nvmap's
dma_buf_set_drvdata() and dma_buf_get_drvdata() APIs. gk20a_dmabuf_priv
is used to store comptag IDs (1 per 64 KB) and can also store the
dmabuf attachments to avoid multiple attach/detach calls.
dma_buf_set_drvdata() allows Nvgpu to associate an instance of struct
gk20a_dmabuf_priv with the instance of the dmabuf and also to provide
a release callback to delete the instance when the last reference to
the dmabuf is put. Nvmap accomplishes this by modifying the struct
dma_buf_ops definition in the kernel code to include set_drvdata and
get_drvdata callbacks.
The above approach won't work for upstream Kstable, and Nvmap plans to
remove these APIs from upcoming downstream kernels as well.
In order to implement the same functionality without depending on
Nvmap, Nvgpu will implement a release chaining mechanism. The dmabuf's
'ops' pointer points to a constant struct, so a whole copy of the ops
is made and the new copy's release pointer is altered.
struct gk20a_dmabuf_priv stores the new copy, and the dmabuf's 'ops'
is changed to point to it. This allows Nvgpu to retrieve the
corresponding gk20a_dmabuf_priv instance using container_of. Nvgpu's
custom release callback invokes the original release callback of the
dmabuf's producer as a last step, thus completing the full circle. In
case the driver is removed, Nvgpu restores the dmabuf's 'ops' back to
its original state. To accomplish this, every instance of struct
nvgpu_os_linux maintains a linked list of the gk20a_dmabuf_priv
instances. During driver removal, this list is traversed and the
corresponding dmabuf's 'ops' pointer is restored to its original
state, after which the instance is freed.
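A simplified sketch of the release chaining (the struct layout and
helper names are assumptions for illustration only; locking is
omitted):

  #include <linux/dma-buf.h>
  #include <linux/list.h>
  #include <linux/slab.h>

  struct gk20a_dmabuf_priv {
      struct dma_buf_ops local_ops;       /* writable copy of the ops */
      const struct dma_buf_ops *prev_ops; /* producer's original ops */
      struct list_head list_entry;        /* linked into nvgpu_os_linux */
      /* comptag and attachment state would live here as well */
  };

  static void gk20a_dmabuf_release(struct dma_buf *dmabuf)
  {
      struct gk20a_dmabuf_priv *priv = container_of(dmabuf->ops,
              struct gk20a_dmabuf_priv, local_ops);
      const struct dma_buf_ops *prev_ops = priv->prev_ops;

      /* Restore the producer's ops, drop our state, then chain. */
      dmabuf->ops = prev_ops;
      list_del(&priv->list_entry);
      kfree(priv);
      prev_ops->release(dmabuf);
  }

  static void gk20a_dmabuf_priv_install(struct dma_buf *dmabuf,
          struct gk20a_dmabuf_priv *priv)
  {
      priv->prev_ops = dmabuf->ops;
      priv->local_ops = *dmabuf->ops;     /* whole copy of the const ops */
      priv->local_ops.release = gk20a_dmabuf_release;
      dmabuf->ops = &priv->local_ops;     /* found later via container_of */
  }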
Nvgpu is a producer of dmabufs for vidmem and needs a way to check
whether a given dmabuf belongs to itself. It is no longer reliable to
depend on a comparison of the 'ops' pointer. Instead, struct
dma_buf_export_info allows a name to be set by the exporter, and this
can be compared against a memory location that belongs to Nvgpu. Nvmap
makes a similar change for sysmem dmabufs in the way it identifies
whether a dmabuf belongs to itself.
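A hedged sketch of that check (the exporter-name string shown is an
illustrative assumption):

  #include <linux/dma-buf.h>
  #include <linux/string.h>

  /* The exporter fills dma_buf_export_info.exp_name at export time;
   * compare it instead of comparing the (now rewritten) 'ops' pointer. */
  static bool nvgpu_owns_vidmem_dmabuf(struct dma_buf *dmabuf)
  {
      return dmabuf->exp_name != NULL &&
             strcmp(dmabuf->exp_name, "nvgpu-vidmem") == 0;
  }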
Removed NVGPU_DMABUF_HAS_DRVDATA and moved to a unified mechanism for
both downstream and upstream kernels.
Some of the other changes in this patch include the following:
1) Deletion of dmabuf.c and moving its contents over to dmabuf_priv.c
2) Replacing gk20a_mm_pin_has_drvdata with nvgpu_mm_pin_privdata, and
likewise for unpin.
Bug 2878569
Change-Id: Icf8e79b05a25ad5a85f478c3ee0fc1eb7747e22d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2341001
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Puneet Saxena <puneets@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>