dma_fence_wait_timeout() (along with a host of other jiffies-based wait
functions) returns zero both when the wait times out and when it
completes during the last jiffy before the timeout. As such, we can't
rely on the return value alone to distinguish success from timeout.
To avoid confusing callers by returning -EAGAIN before the timeout
period has actually elapsed, check the fence's signaled state once more
after the wait returns.
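A minimal sketch of the pattern (the wrapper function and its return
convention are illustrative, not the exact driver code):

  #include <linux/dma-fence.h>
  #include <linux/errno.h>

  /* Wait on a fence, treating a zero return as ambiguous. */
  static int example_fence_wait(struct dma_fence *fence, long timeout)
  {
          long ret = dma_fence_wait_timeout(fence, true, timeout);

          if (ret == 0) {
                  /*
                   * Zero means either timeout or completion during the
                   * last jiffy, so re-check the fence before reporting
                   * -EAGAIN to the caller.
                   */
                  return dma_fence_is_signaled(fence) ? 0 : -EAGAIN;
          }

          return ret < 0 ? ret : 0;
  }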
Bug 3955201
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Change-Id: Ib90cbd3d78bac773a724a523925ae5d1b70107c8
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2857405
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Add support for syncpoint pools. These are configuration-dependent
subsets of syncpoints that can only be allocated by explicitly
requesting an allocation from that pool.
In this patch, we add support for the GPU pool. On certain systems,
the GPU is for safety purposes limited to accessing a specific set
of syncpoints. Therefore, syncpoints allocated for GPU use need to
come from this pool.
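As a rough illustration of the intended usage (the pool enum and the
pool-aware allocation helper below are hypothetical; the actual
interface in the driver may differ):

  #include <linux/host1x.h>	/* host1x-next.h in this tree */

  /* Hypothetical pool identifiers. */
  enum example_syncpt_pool {
          EXAMPLE_SYNCPT_POOL_DEFAULT,
          EXAMPLE_SYNCPT_POOL_GPU,	/* syncpoints the GPU may access */
  };

  /* Hypothetical pool-aware allocator. */
  struct host1x_syncpt *example_syncpt_alloc_from_pool(struct host1x *host,
                                                       enum example_syncpt_pool pool,
                                                       unsigned long flags,
                                                       const char *name);

  /*
   * A GPU job would then allocate its syncpoint explicitly from the
   * GPU pool rather than from the default pool.
   */
  static struct host1x_syncpt *example_alloc_gpu_syncpt(struct host1x *host)
  {
          return example_syncpt_alloc_from_pool(host,
                                                EXAMPLE_SYNCPT_POOL_GPU,
                                                HOST1X_SYNCPT_CLIENT_MANAGED,
                                                "gpu-job");
  }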
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Change-Id: I385b8bdfdac8573b26f0b1a0feaf05de071148a1
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2826198
Reviewed-by: svc_kernel_abi <svc_kernel_abi@nvidia.com>
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Sanif Veeras <sveeras@nvidia.com>
Reviewed-by: Raghavendra Vishnu Kumar <rvk@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Tested-by: Sanif Veeras <sveeras@nvidia.com>
Add support for running as a guest system under a hypervisor, using
the Host1x hardware's virtualization capabilities.
In effect, this means touching only the 'vm' aperture, and only the
channels and syncpoints that are assigned to the VM.
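Conceptually, every channel and syncpoint access is gated on whether
the hypervisor assigned that resource to the guest. The structure and
field names below are hypothetical; the driver tracks this differently:

  #include <linux/types.h>

  /* Hypothetical bookkeeping of the resources assigned to this VM. */
  struct example_vm_info {
          unsigned int syncpt_base;	/* first syncpoint owned by the VM */
          unsigned int syncpt_count;	/* number of owned syncpoints */
  };

  static bool example_syncpt_owned_by_vm(const struct example_vm_info *vm,
                                         unsigned int id)
  {
          return id >= vm->syncpt_base &&
                 id < vm->syncpt_base + vm->syncpt_count;
  }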
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Change-Id: Ideec5b0b9a692aa3ee6b4a0240c5755c983cb7bd
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2811837
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently all fences have a 30 second timeout to ensure they are
cleaned up if they never complete otherwise. However, this
one-size-fits-all solution doesn't actually fit every case, such as
syncpoint waits where we want to allow timeouts longer than 30 seconds.
As such, we want to give the caller control over fence cancellation
(and perhaps eventually get rid of the internal timeout altogether).
Here we add this cancellation mechanism by adding a function that
enters the timeout path through a direct function call, and by changing
the syncpoint wait function to use it.
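A sketch of the shape of the change (the structure layout and function
names are illustrative only):

  #include <linux/timer.h>

  /* Illustrative fence; the driver's real structure differs. */
  struct example_fence {
          struct timer_list timer;	/* internal 30 second safety timer */
  };

  static void example_fence_timeout_path(struct example_fence *f)
  {
          /* Tear the fence down, as the timer callback would have. */
  }

  /*
   * Give the waiter direct control: stop the internal safety timer and
   * run the same timeout path via an explicit function call.
   */
  void example_fence_cancel(struct example_fence *f)
  {
          del_timer_sync(&f->timer);
          example_fence_timeout_path(f);
  }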
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Change-Id: I4600544afe21efdd3f7d06362bd124130ddec3db
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2786637
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
In anticipation of the removal of the intr API, move host1x_syncpt_wait()
to use DMA fences instead. As of this patch, waits have a maximum
timeout of 30 seconds because of the implicit timeout we have with
fences, but that limit will be lifted in a follow-up patch.
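A simplified sketch of what a fence-based wait looks like; the
host1x_fence_create() helper comes from the host1x fence code, though
its exact signature and the error handling here may differ from the
driver:

  #include <linux/dma-fence.h>
  #include <linux/err.h>
  #include <linux/host1x.h>	/* host1x-next.h in this tree */

  static int example_syncpt_wait(struct host1x_syncpt *sp, u32 thresh,
                                 long timeout)
  {
          struct dma_fence *fence;
          long ret;

          /* Build a fence for the syncpoint threshold... */
          fence = host1x_fence_create(sp, thresh);
          if (IS_ERR(fence))
                  return PTR_ERR(fence);

          /* ...and wait on it like any other DMA fence. */
          ret = dma_fence_wait_timeout(fence, true, timeout);
          dma_fence_put(fence);

          if (ret == 0)
                  return -EAGAIN;

          return ret < 0 ? ret : 0;
  }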
Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com>
Change-Id: I82a262b73861b35c4031983f4134d4b4006e3b16
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2786634
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Update the host1x driver to align with the latest upstream host1x
driver from Linux v5.19-rc1. Please note that the context bus support
is not included, because it needs to be built into the kernel.
Bug 3724727
Change-Id: Ic6fbe001462d160d1bb24f76038dd755c5550690
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2739538
Reviewed-by: svc_kernel_abi <svc_kernel_abi@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Add the upstream host1x driver with the 'Host1x/Tegra UAPI' series [0]
applied. This driver will be built as an external module for use with
the NVGPU driver on upstream Linux kernels.
The following modifications have been made to the series posted upstream:
1. Update the Makefile to always build the driver as a module.
2. Remove the checks for whether CONFIG_DRM_TEGRA_STAGING is enabled.
3. Rename include/linux/host1x.h to include/linux/host1x-next.h to
avoid conflicts with upstream headers when building as an external
module.
4. Rename include/uapi/linux/host1x.h to
include/uapi/linux/host1x-next.h to avoid conflicts with upstream
headers when building as an external module.
5. Rename the module that is built to be host1x-next.ko instead of
host1x.ko to avoid any depmod conflicts with the upstream driver.
[0] https://patchwork.ozlabs.org/project/linux-tegra/list/?series=215770
Bug 3156385
Change-Id: Ic60299546809097dd0e4a9a7157bce1491d9f794
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2435801
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit