There is a race condition in the driver when the in_use variable
is accessed in both interrupt context and process context
to schedule a work item on the workqueue. When the race is
reproduced, the following error message is printed in the kernel log:
dce: dce_client_schedule_event_work:359 Failed to schedule Async event
Queue Full!
- This change checks the in_use variable in interrupt context
  before scheduling a work item on the workqueue, and switches all
  accesses to in_use over to atomic operations (a sketch follows
  below).
- It also removes unreachable code in the driver.
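A minimal sketch of the guard described above; the struct and field
names here are illustrative, not the driver's actual symbols:

#include <linux/atomic.h>
#include <linux/workqueue.h>
#include <linux/kernel.h>

struct dce_event_ctx {
	atomic_t in_use;		/* 0 = free, 1 = work already queued */
	struct work_struct work;	/* async event work item */
};

/* Called from interrupt context when an async event arrives. */
static void dce_schedule_event_work(struct dce_event_ctx *ctx)
{
	/*
	 * Claim the slot atomically so the interrupt path and the
	 * process-context path cannot both queue the same work item.
	 */
	if (atomic_cmpxchg(&ctx->in_use, 0, 1) != 0)
		return;	/* already scheduled; nothing to do */

	schedule_work(&ctx->work);
}

/* The work function runs in process context; release the slot once
 * the event has been handled. */
static void dce_event_work_fn(struct work_struct *work)
{
	struct dce_event_ctx *ctx =
		container_of(work, struct dce_event_ctx, work);

	/* ... handle the async event ... */

	atomic_set(&ctx->in_use, 0);
}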
Bug 3454371
Change-Id: I68ce3cd17769ec837a895a4147ae042e2730ae58
Signed-off-by: David Yu <davyu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2636749
(cherry picked from commit b8935cf34edb52b9803dae3ace767b116b8adbef)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2637714
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc_kernel_abi <svc_kernel_abi@nvidia.com>
Reviewed-by: Santosh Galma <galmar@nvidia.com>
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
Reviewed-by: Arun Swain <arswain@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Santosh Galma <galmar@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the current code, RPC ack events are handled by the thread issuing
the RPC, but async-IPC events are handled in the bottom half with
interrupts disabled. This may delay delivery of the RPC ack if
async-IPC handling is hogging the CPU. It can also lead to a race
that ends in deadlock: the RPC caller takes the lock, issues an RPC,
and waits for a reply, and in between KMD receives an async IPC and
tries to take the same lock. Since the IPC code runs with interrupts
disabled, the RPC-reply interrupt is never delivered, and both sides
wait forever.
This patch creates a workqueue to handle async IPC events (see the
sketch below).
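A minimal sketch of that split, with illustrative names throughout
(the driver's actual symbols and workqueue flags may differ): the
hard-IRQ handler only defers async events, and the work function runs
with interrupts enabled so an RPC reply can always get through.

#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

static struct workqueue_struct *dce_async_wq;	/* hypothetical name */

struct dce_ipc {
	struct work_struct async_work;	/* async-IPC deferral */
};

/* Hard-IRQ path: acknowledge and defer; never take RPC locks here. */
static irqreturn_t dce_irq_handler(int irq, void *data)
{
	struct dce_ipc *ipc = data;

	/* RPC acks still wake the waiting RPC thread directly;
	 * async-IPC events move to process context instead. */
	queue_work(dce_async_wq, &ipc->async_work);
	return IRQ_HANDLED;
}

/* Runs in process context with interrupts enabled, so the RPC-reply
 * interrupt can be delivered even while this holds the shared lock. */
static void dce_async_work_fn(struct work_struct *work)
{
	/* ... handle async IPC events ... */
}

static int dce_ipc_init(struct dce_ipc *ipc)
{
	dce_async_wq = alloc_workqueue("dce_async", 0, 0);
	if (!dce_async_wq)
		return -ENOMEM;

	INIT_WORK(&ipc->async_work, dce_async_work_fn);
	return 0;
}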
JIRA TDS-6381
Change-Id: If7d69ef50298ad9364e9e494a32cf483ecfb744e
Signed-off-by: Mahesh Kumar <mahkumar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-t23x/+/2485764
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Santosh Galma <galmar@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Arun Swain <arswain@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
For T23x, we have a separate R5-based cluster
named the Display Controller Engine (DCE) that runs
our Display RM code. This driver runs on the CPU
with the following functionality:
Via debugfs, for test and bring-up purposes (see the
sketch after this list):
1. Reads the DCE firmware image into DRAM.
2. Sets up the DCE AST to cover the DCE firmware image.
3. Sets up the R5 reset vector to point to the DCE firmware
   entry point.
4. Brings DCE out of reset.
5. Dumps various registers for debug.
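A rough sketch of that bring-up path, assuming the image is pulled in
with the standard request_firmware() helper; the struct fields and
every dce_* helper below are hypothetical:

#include <linux/firmware.h>
#include <linux/device.h>
#include <linux/string.h>
#include <linux/types.h>

/* Hypothetical device context; field names are illustrative only. */
struct dce_device {
	struct device *dev;
	void *fw_mem;		/* CPU mapping of the DRAM carveout */
	u64 fw_mem_iova;	/* address the R5 sees through the AST */
};

/* Device-specific register writes, stubbed out for this sketch. */
static void dce_ast_config(struct dce_device *d) { }
static void dce_set_reset_vector(struct dce_device *d, u64 iova) { }
static void dce_deassert_reset(struct dce_device *d) { }

static int dce_load_and_boot(struct dce_device *d)
{
	const struct firmware *fw;
	int ret;

	/* 1. Read the DCE firmware image into DRAM. */
	ret = request_firmware(&fw, "dce.bin", d->dev);
	if (ret)
		return ret;
	memcpy(d->fw_mem, fw->data, fw->size);
	release_firmware(fw);

	/* 2. Cover the image with the AST, and
	 * 3. point the R5 reset vector at the firmware entry point. */
	dce_ast_config(d);
	dce_set_reset_vector(d, d->fw_mem_iova);

	/* 4. Bring DCE out of reset. */
	dce_deassert_reset(d);
	return 0;
}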
In a production environment (see the sketch after
this list):
1. Manages interrupts to the CPU from DCE.
2. Uses the bootstrap command interface to define the Admin
   IPC.
3. Locks down the bootstrap command interface.
4. Uses Admin IPC to define the message IPC.
5. Uses Admin IPC to define the message IPC payload area.
6. Uses Admin IPC to set up the IPC channels.
7. Uses Admin IPC to define the crashdump area
   (optional).
8. Provides IPC interfaces for any DCE client running
   on CCPLEX, including Display RM.
9. Uses Admin IPC to set the logging level (optional).
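The production steps above amount to a fixed initialization order. A
sketch encoding only that ordering, reusing the hypothetical struct
dce_device from the previous sketch; every dce_* helper here is a
stub, not the driver's real API:

struct dce_device;	/* hypothetical context from the sketch above */

/* Stubs standing in for the real bring-up helpers. */
static int dce_irq_init(struct dce_device *d) { return 0; }
static int dce_bootstrap_define_admin_ipc(struct dce_device *d) { return 0; }
static void dce_bootstrap_lockdown(struct dce_device *d) { }
static int dce_admin_define_msg_ipc(struct dce_device *d) { return 0; }
static void dce_admin_define_crashdump(struct dce_device *d) { }
static int dce_client_ipc_register(struct dce_device *d) { return 0; }

static int dce_production_init(struct dce_device *d)
{
	int ret;

	ret = dce_irq_init(d);			/* 1. IRQs from DCE */
	if (ret)
		return ret;

	ret = dce_bootstrap_define_admin_ipc(d);	/* 2. Admin IPC */
	if (ret)
		return ret;

	dce_bootstrap_lockdown(d);		/* 3. lock bootstrap */

	ret = dce_admin_define_msg_ipc(d);	/* 4-6. message IPC,
						 * payload area, channels */
	if (ret)
		return ret;

	dce_admin_define_crashdump(d);		/* 7. optional crashdump */

	ret = dce_client_ipc_register(d);	/* 8. client interfaces */
	if (ret)
		return ret;

	/* 9. (optional) set the logging level via Admin IPC. */
	return 0;
}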
This patch puts a framework in place with the
following features:
1. Firmware Loading
2. AST Configuration
3. DCE Reset with EVP Programming
4. Logging Infra
5. Debugfs Support
6. Interrupt Handling
7. Mailbox Programming
8. IPC Programming
9. DCE Client Interface
10. Ftrace Support for debug purposes
Change-Id: Idd28cd9254706c7313f531fcadaa7024a5b344e7
Signed-off-by: Arun Swain <arswain@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-t23x/+/2289865
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
Reviewed-by: Santosh Galma <galmar@nvidia.com>
Reviewed-by: Mitch Luban <mluban@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Mahesh Kumar <mahkumar@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>