Now that CTRL_FIFO support is planned to be enabled together with
NVS, there is no need for a separate enable flag for it.
CTRL_FIFO support is instead determined solely by the presence of
the NVGPU_SUPPORT_NVS enable flag.
For non-auto platforms, Control-Fifo can be disabled by restricting
access to /dev/nvsched_ctrl_fifo.
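As a rough sketch (not the actual driver code), the gating in the
control-fifo open path could look like the following, assuming nvgpu's
nvgpu_is_enabled() helper; the function name and error code here are
illustrative assumptions:

    /* Hypothetical sketch: control-fifo access is gated on the NVS
     * enable flag alone, with no separate CTRL_FIFO flag. */
    static int nvs_ctrl_fifo_open_sketch(struct gk20a *g)
    {
        if (!nvgpu_is_enabled(g, NVGPU_SUPPORT_NVS)) {
            return -ENOSYS;
        }

        /* proceed with normal open handling */
        return 0;
    }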
Jira NVGPU-8619
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I9dbec60e5668f38e1460c43800584e88b16a2550
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2814435
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Enables CONFIG_NVS_PRESENT for the safety build.
- Fixes MISRA violations.
- Renames sched.h to nvs_sched.h to avoid a conflict with the QNX system
sched.h file for the safety support.
- Disables the test_channel_close, test_tsg_unbind_channel,
test_channel_enable_disable_tsg, test_gv11b_fifo_preempt_tsg,
test_tsg_unbind_channel_check_hw_state and test_rc_deinit unit tests.
Jira NVGPU-8619
Change-Id: I7c983de2f4910fcb23687ec23368a060ce89c918
Signed-off-by: prsethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2763579
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add support for accessing the Event Queue for non-exclusive
users. This allows non-exclusive users to open Event Queues
before exclusive users do. Non-exclusive users can only
use the Event Queue in read-only mode.
Add VM_SHARED for Event Queues across all users instead of just
read-only users. Event queues are shared between multiple processes
and as such require VM_SHARED for all users (exclusive and
observers).
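For illustration, a minimal userspace sketch of how an observer
(non-exclusive user) might map the event queue read-only, assuming the
queue is exported as a dmabuf fd (how the fd is obtained is elided; the
function name is a placeholder):

    #include <stddef.h>
    #include <sys/mman.h>

    /* Hypothetical sketch: an observer maps the event queue dmabuf
     * read-only; MAP_SHARED matches the VM_SHARED requirement above. */
    void *map_event_queue_ro(int dmabuf_fd, size_t queue_size)
    {
        void *buf = mmap(NULL, queue_size, PROT_READ, MAP_SHARED,
                         dmabuf_fd, 0);

        return (buf == MAP_FAILED) ? NULL : buf;
    }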
Jira NVGPU-8608
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Id9733c2511ded6f06dd9feea880005bdc92e51a0
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2745083
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Added implementations for the following IOCTLs:
NVGPU_NVS_CTRL_FIFO_CREATE_QUEUE
NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE
The above ioctls are supported only for users with
R/W permissions.
1) NVGPU_NVS_CTRL_FIFO_CREATE_QUEUE constructs a memory region
via the nvgpu_dma_alloc_sys() API and creates the corresponding
GPU and kernel mappings. Upon successful creation, KMD exports
this buffer to userspace via a dmabuf fd that the UMD can use
to mmap it into its process address space (see the sketch after
this list).
2) Added plumbing to store VMAs corresponding to different users
of the event queue in the future.
3) Added necessary validation checks for the IOCTLs
4) NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE is used to clear the queues.
5) Using a global queue lock to protect access to the queues. This
could be made more fine-grained in the future when there
is more clarity on GSP's implementation and access of the queues.
6) Added plumbing to enable user subscription to queues.
NVGPU_NVS_CTRL_FIFO_RELEASE_QUEUE is used to unsubscribe
the user from the queue. Once the last user is deleted,
all the queues will be cleared. Users must ensure that
any mappings are removed before calling release queue.
7) Set the default queue_size for event queues to
PAGE_SIZE. This can be modified later. For event
queues, UMD shall fetch the queue_size.
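A minimal userspace sketch of the intended flow for R/W users follows.
The argument struct layout and field names below are placeholders, not
the actual uapi; the ioctl request number is passed in rather than
hardcoded:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    /* Placeholder argument layout; the real uapi struct differs. */
    struct nvs_create_queue_args {
        uint32_t direction;   /* e.g. send/receive/event */
        uint32_t reserved;
        uint64_t queue_size;  /* for event queues, filled in by KMD */
        int32_t  dmabuf_fd;   /* returned by KMD */
    };

    /* Hypothetical usage: create a queue, then mmap the returned
     * dmabuf fd into the caller's address space. */
    void *create_and_map_queue(int ctrl_fifo_fd, unsigned long create_req)
    {
        struct nvs_create_queue_args args = { 0 };
        void *buf;

        if (ioctl(ctrl_fifo_fd, create_req, &args) != 0)
            return NULL;

        buf = mmap(NULL, args.queue_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, args.dmabuf_fd, 0);

        return (buf == MAP_FAILED) ? NULL : buf;
    }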
Jira NVGPU-8129
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I31633174e960ec6feb77caede9d143b3b3c145d7
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2723198
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add a device node for management of nvs control fifo buffers for
scheduling domains. The current design consists of a master structure
struct nvgpu_nvs_domain_sched_ctrl for management of users as well
as control queues. Initially all users are added as non-exclusive users.
Subsequent changes will add support for IOCTLs to manage opening of
Send/Receive and Event buffers, querying characteristics, etc.
In subsequent changes, a user that tries to open a Send/Receive queue
will first try to reserve itself as an exclusive user, and only if that
succeeds can it proceed with creation of both Send/Receive queues.
Exclusive users will be reset to non-exclusive users just before they
close their device node handle.
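A minimal sketch of the exclusive-user reservation policy described
above, using a generic lock and flag rather than the actual
nvgpu_nvs_domain_sched_ctrl fields (all names below are illustrative):

    #include <linux/errno.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    /* Illustrative only; not the actual nvgpu data structures. */
    struct sched_ctrl_sketch {
        struct mutex lock;
        bool has_exclusive_user;
    };

    /* A user must win this reservation before it may create
     * Send/Receive queues. */
    static int reserve_exclusive_user(struct sched_ctrl_sketch *ctrl)
    {
        int err = 0;

        mutex_lock(&ctrl->lock);
        if (ctrl->has_exclusive_user)
            err = -EBUSY;
        else
            ctrl->has_exclusive_user = true;
        mutex_unlock(&ctrl->lock);

        return err;
    }

    /* Called just before the device node handle is closed. */
    static void reset_to_non_exclusive(struct sched_ctrl_sketch *ctrl)
    {
        mutex_lock(&ctrl->lock);
        ctrl->has_exclusive_user = false;
        mutex_unlock(&ctrl->lock);
    }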
Jira NVGPU-8128
Change-Id: I15a83f70cd49c685510a9fd5ea4476ebb3544378
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2691404
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Cast the return code of sprintf away when creating a scheduling domain
device name. While at it, use snprintf just in case.
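A minimal sketch of the resulting pattern, with a placeholder name
format (the actual device name format is not shown here):

    #include <stdio.h>

    /* Hypothetical sketch: bounded formatting with the return value
     * explicitly cast away, as described above. */
    static void format_domain_dev_name(char *name, size_t len,
                                       const char *domain_name)
    {
        (void) snprintf(name, len, "nvsched-%s", domain_name);
    }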
CID 497476
CID 497479
Bug 3512545
Change-Id: I4e26c29b889de4b709d582ec3fdde28c50fca5b9
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2681274
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Create device nodes for user-created scheduling domains. This helps
leverage filesystem-based access control: domains can be made
available to a limited set of users on a system.
The device nodes are dynamic: they can be removed while the driver is
running normally. This is a bit different from the nodes that exist
until the driver is unloaded, so the devno/domain mapping is stored in a
separate list. The usual container_of pattern would suffer from an
unavoidable race condition if a domain file were opened while the same
domain was being removed.
As usual, domain refcounting prevents a domain from being removed
while it is in use. The open device files now hold refs, so any open
domain file keeps the domain alive, in addition to the
userspace-invisible ref that is taken when a TSG is bound to a domain.
While at it, guard the query ioctl with the sched domain mutex, as
domains might technically get added or removed while the query runs.
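A rough sketch of the separate devno/domain mapping described above,
with illustrative names only (the actual nvgpu structures and ref
helpers differ):

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct nvgpu_nvs_domain; /* assumed domain type */

    /* Illustrative mapping entry; not the actual nvgpu layout. */
    struct domain_dev_node {
        struct list_head entry;
        dev_t devno;
        struct nvgpu_nvs_domain *domain;
    };

    static LIST_HEAD(domain_dev_list);
    static DEFINE_MUTEX(domain_dev_list_lock);

    /* Look up the domain for a devno under the list lock; a domain
     * reference would be taken here before unlocking (helper name
     * omitted), so a concurrent removal cannot free the domain while
     * the file is being opened. */
    static struct nvgpu_nvs_domain *domain_dev_open_get(dev_t devno)
    {
        struct domain_dev_node *node;
        struct nvgpu_nvs_domain *domain = NULL;

        mutex_lock(&domain_dev_list_lock);
        list_for_each_entry(node, &domain_dev_list, entry) {
            if (node->devno == devno) {
                domain = node->domain;
                break;
            }
        }
        mutex_unlock(&domain_dev_list_lock);

        return domain;
    }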
Jira NVGPU-6788
Change-Id: Ief2a09a442c4e70f1f2be8a32359341071d74659
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2651164
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
- Make the domain scheduler timeslice type nanoseconds to future-proof
the interface
- Return -ENOSYS from ioctls if the nvs code is not initialized
- Return the number of domains also when a user-supplied array is present
- Use domain id instead of name for TSG binding
- Improve documentation in the uapi headers
- Verify that reserved fields are zeroed
- Extend some internal logging
- Release the sched mutex on alloc error
- Add file mode checks in the nvs ioctls. The create and remove ioctls
require writable file permissions, while the query does not; this
allows filesystem-based access control of domain management via the
single dev node (see the sketch below).
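As an illustration of the file mode check, a minimal sketch using the
standard FMODE_WRITE flag; the function name and error code choice are
assumptions, not the exact nvgpu code:

    #include <linux/errno.h>
    #include <linux/fs.h>

    /* Hypothetical sketch: create/remove ioctls require a writable
     * file handle, so permissions on the dev node control who may
     * manage domains. */
    static int check_domain_mgmt_perm(struct file *filp)
    {
        if (!(filp->f_mode & FMODE_WRITE))
            return -EPERM;

        return 0;
    }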
Jira NVGPU-6788
Change-Id: I668eb5972a0ed1073e84a4ae30e3069bf0b59e16
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2639017
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Move away from the prototype call in channel wdt worker and create a
separate worker thread for the domain scheduler. The details of runlist
domains are still encapsulated in the runlist code; the domain scheduler
controls when to switch domains. Switching happens based on domain
timeslices or when the current domain is deleted.
The worker thread is paused on railgate and spun back up on poweron.
The scheduler data was also left dangling, so fix that by deinitializing
all nvs-related state when gk20a_remove_support() is called. The runlist
domains already get freed as part of fifo removal.
Jira NVGPU-6427
Change-Id: I64f42498f8789448d9becdd209b7878ef0fdb124
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2632579
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add ioctls for creating, removing and querying scheduling domains and
interface with the "nvsched" entity that will be the core scheduler.
Include the scheduler in the Linux build.
The core scheduler code will ultimately hold data on and control what
gets scheduled, but this intermediate layer in nvgpu-rm needs a bit of
bookkeeping to manage the userspace interface.
To keep changes isolated, this does not touch the internal runlist
domains yet. The core scheduler logic will eventually control the
runlist domains.
Jira NVGPU-6788
Change-Id: I7b4064edb6205acbac2d8c593dad019d517243ce
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2463625
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>