- Rename dce-os-device to dce-linux-device
- The dce-os-device.h header is OS specific and is only intended
  for internal use within the OS layer. Similarly, all of its
  exposed functions are OS specific.
- Therefore, instead of giving this header a name common to all
  OSs, make it internal by including "linux" in its naming
  convention.
- Similarly, rename the dce_os_device struct to dce_linux_device,
  and rename the corresponding functions from this header.
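A minimal before/after sketch of the rename; the struct member
and the init function shown are hypothetical, for illustration
only:

    /* Before: dce-os-device.h (name suggests a cross-OS interface) */
    struct dce_os_device { void *priv; /* hypothetical member */ };
    int dce_os_device_init(struct dce_os_device *d);

    /* After: dce-linux-device.h (clearly Linux-internal) */
    struct dce_linux_device { void *priv; /* hypothetical member */ };
    int dce_linux_device_init(struct dce_linux_device *d);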
JIRA TDS-16126
Change-Id: I74e2deb17f49065d242bd80d50c5a849b3dfa3a1
Signed-off-by: anupamg <anupamg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3256403
Reviewed-by: Arun Swain <arswain@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
- Change I2b8a24f9044bc08e10e5ff8cbf0c3f51fa53ff53 introduced an
  issue where different admin channel clients could access the
  admin message buffer concurrently.
- Fix this by adding a set of buffers per admin channel client.
- When an admin channel client wants to use a buffer, it has to
  request one using its client ID. A buffer is granted only if at
  least one buffer for that client is not in use.
- Admin channel clients must release the buffer once done with it
  so that it is available for other accesses by the same client.
- Do we need a mutex to protect this array?
  1) There is no issue if different clients get/put buffers
     concurrently, since each query operates on a separate
     per-client array.
  2) We assume that none of the clients is active during init.
     This is also documented as part of the function
     documentation.
  3) Will we ever have a use case where the same client does
     get/put concurrently?
  4) Is it possible for a client to be active during de-init?
- If the answer to 3 or 4 is yes, then we still need a mutex to
  protect the buffers.
- For now we assume there won't be concurrent operations during
  init/deinit or on the same client, so no mutex is introduced
  (see the sketch below).
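A minimal sketch of the scheme under the assumptions above; all
identifiers and sizes are illustrative, not the actual driver API:

    #include <linux/types.h>

    /* Illustrative sizes; the real values are not stated here. */
    #define DCE_NUM_CLIENTS     4
    #define DCE_BUFS_PER_CLIENT 2
    #define DCE_BUF_SIZE        256

    struct dce_admin_buf {
            bool in_use;
            u8 data[DCE_BUF_SIZE];
    };

    /* One row of buffers per admin channel client. */
    static struct dce_admin_buf bufs[DCE_NUM_CLIENTS][DCE_BUFS_PER_CLIENT];

    /*
     * No mutex: relies on the assumptions above (no client active
     * during init/deinit, no concurrent get/put from the same
     * client). Each client only ever touches its own row, so
     * different clients never contend.
     */
    static void *dce_admin_get_buf(u32 client_id)
    {
            u32 i;

            for (i = 0; i < DCE_BUFS_PER_CLIENT; i++) {
                    if (!bufs[client_id][i].in_use) {
                            bufs[client_id][i].in_use = true;
                            return bufs[client_id][i].data;
                    }
            }
            return NULL; /* all buffers for this client are in use */
    }

    static void dce_admin_put_buf(u32 client_id, void *data)
    {
            u32 i;

            for (i = 0; i < DCE_BUFS_PER_CLIENT; i++) {
                    if (bufs[client_id][i].data == data)
                            bufs[client_id][i].in_use = false;
            }
    }

If the answer to 3) or 4) above ever becomes yes, in_use would
have to become atomic or each per-client row would need a lock.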
JIRA TDS-16126
Change-Id: I2ab640dc7c8ee6dedc9179dbb726368c3cb7d65f
Signed-off-by: anupamg <anupamg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3249307
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
Reviewed-by: Arun Swain <arswain@nvidia.com>
- This is a follow-up CL to address a comment from
  I42bfe95aa81823dc077ae0964eb6288a1f25fc17.
- Certain utils functions are used only in single files, so make
  them static and move them into the files in which they are
  used.
- Rename these from dce_os*() to dce_*().
- Delete dce_get_fw_phy_addr() as it's unused.
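For illustration, the pattern looks roughly like this; the
helper, the struct and its body are hypothetical, not the actual
code:

    #include <linux/io.h>
    #include <linux/types.h>

    struct dce_device { void __iomem *regs; /* hypothetical */ };

    /* Before: declared in a shared dce-os utils header */
    u32 dce_os_readl(struct dce_device *d, u32 offset);

    /* After: static in its single caller's file, "os" infix dropped */
    static u32 dce_readl(struct dce_device *d, u32 offset)
    {
            return readl(d->regs + offset); /* hypothetical body */
    }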
JIRA TDS-16126
Change-Id: I6049ae1d381ac9c18acbcd3b2584d4d8ab3f2dc0
Signed-off-by: anupamg <anupamg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3248435
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
Reviewed-by: Arun Swain <arswain@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Modules covered in this CL:
1) dce-os-log
This is not a functional CL. It does the following:
1) Rename dce_<info/err/debug/warn> to
dce_os_<info/err/debug/warn>
2) Rename dce_log_msg() to dce_os_log_msg()
3) Rename DCE_<WARNING/ERROR/INFO/DEBUG> to
DCE_OS_<WARNING/ERROR/INFO/DEBUG>
4) Move dce-log.h to os/linux/include/dce-os-log.h
5) Stop using the old abstraction:
   a) Replace <os-dce-log.h> includes with <dce-os-log.h>
6) Delete the related, now-deprecated log files:
a) os/include/linux-kmd/os-dce-log.h
b) os/include/os-dce-log.h
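For illustration, a call site before and after this CL; the
call-site arguments are assumed:

    /* Before */
    #include <os-dce-log.h>
    dce_err(d, "bootstrap failed: %d", err);

    /* After */
    #include <dce-os-log.h>
    dce_os_err(d, "bootstrap failed: %d", err);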
JIRA TDS-16126
Change-Id: I75ebe98a785c298678d80371184efae6e46932ee
Signed-off-by: anupamg <anupamg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3228536
Reviewed-by: Arun Swain <arswain@nvidia.com>
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
DCE FW will run the DMA test, transferring 512 bytes between DRAM
and TCM for 400 iterations, and the ALU test, generating 100
prime numbers for 200 iterations. With DCE running above 600 MHz,
each test will take nearly 50 msec.
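A rough sketch of what the ALU test could look like on the
firmware side; the actual implementation is not part of this
change, and all names here are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define DCE_ALU_PRIME_COUNT 100u
    #define DCE_ALU_ITERATIONS  200u

    static bool dce_is_prime(uint32_t n)
    {
            uint32_t i;

            if (n < 2)
                    return false;
            for (i = 2; i * i <= n; i++)
                    if (n % i == 0)
                            return false;
            return true;
    }

    /* One run: find the first 100 primes, repeated 200 times. */
    static void dce_alu_test(void)
    {
            uint32_t iter, found, n;

            for (iter = 0; iter < DCE_ALU_ITERATIONS; iter++)
                    for (found = 0, n = 2; found < DCE_ALU_PRIME_COUNT; n++)
                            if (dce_is_prime(n))
                                    found++;
    }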
Jira TDS-16211
Change-Id: I34570acd4db6b8103bd2451833b280dc8e32131a
Signed-off-by: vinodg <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3192552
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
For T23x, we have a separate R5-based cluster named the Display
Controller Engine (DCE) to run our Display RM code. This driver
will run on the CPU with the following functionality:
Via debugfs, for test and bring-up purposes:
1. Reads the DCE firmware image into DRAM.
2. Sets up the DCE AST to cover the DCE firmware image.
3. Sets up the R5 reset vector to point to the DCE firmware
   entry point.
4. Brings DCE out of reset.
5. Dumps various registers for debug.
In a production environment:
1. Manages interrupts to the CPU from DCE.
2. Uses the bootstrap command interface to define the Admin
   IPC.
3. Locks down the bootstrap command interface.
4. Uses Admin IPC to define message IPC.
5. Uses Admin IPC to define the message IPC payload area.
6. Uses Admin IPC to set up IPC channels.
7. Uses Admin IPC to define the crashdump area (optional).
8. Provides IPC interfaces for any DCE Client running on
   CCPLEX, including Display RM.
9. Uses Admin IPC to set the logging level (optional).
This patch puts a framework in place with the following
features:
1. Firmware Loading
2. AST Configuration
3. DCE Reset with EVP Programming
4. Logging Infra
5. Debugfs Support
6. Interrupt Handling
7. Mailbox Programming
8. IPC Programming
9. DCE Client Interface
10. Ftrace Support for debug purposes
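As an illustration of feature 9 (DCE Client Interface), a CCPLEX
client might use the driver roughly as follows; every identifier
in this sketch is hypothetical, not the driver's actual API:

    #include <linux/errno.h>
    #include <linux/types.h>

    struct dce_client; /* opaque handle, hypothetical */

    struct dce_client *dce_client_register(u32 ipc_channel);
    int dce_client_send(struct dce_client *c, const void *msg, size_t len);
    int dce_client_recv(struct dce_client *c, void *buf, size_t len);

    /* A client sends a message and waits for the reply. */
    static int display_rm_ping(void)
    {
            struct dce_client *c;
            u8 reply[64];
            int ret;

            c = dce_client_register(0 /* IPC channel */);
            if (!c)
                    return -ENODEV;

            ret = dce_client_send(c, "ping", 4);
            if (ret)
                    return ret;

            return dce_client_recv(c, reply, sizeof(reply));
    }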
Change-Id: Idd28cd9254706c7313f531fcadaa7024a5b344e7
Signed-off-by: Arun Swain <arswain@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-t23x/+/2289865
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Mahesh Kumar <mahkumar@nvidia.com>
Reviewed-by: Santosh Galma <galmar@nvidia.com>
Reviewed-by: Mitch Luban <mluban@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Mahesh Kumar <mahkumar@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>