Compare commits


86 Commits

Author SHA1 Message Date
Besar Wicaksono
927a33af1c tegra: hwpm: fix userspace tmake files
Fix the paths used in the tmake files to build
the userspace library and test.

JIRA MSST-830

Change-Id: Ib4469794d66aa20ae343b367acdc4f43b5e3c4ab
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3363640
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-05-26 12:42:11 -07:00
Besar Wicaksono
efd031dcb0 tegra: hwpm: add user data mode test
Add a mode E user data test for these IPs:
- NVTHERM
- IPMU

JIRA MSST-831

Change-Id: Id8911fa9bbed47f1c5d1e82b075e60134e05ad2c
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3361434
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
2025-05-26 12:42:07 -07:00
Besar Wicaksono
dbbd871203 tegra: hwpm: add test for NVTHERM, CSN, IPMU
Add mode B and E tests for these IPs:
- NVTHERM
- CSN
- IPMU

JIRA MSST-831

Change-Id: If67ed0ab41f1ee4369311261ce73d5a0643326bb
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3341509
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-04-25 05:31:15 -07:00
Besar Wicaksono
4d309240ac tegra: hwpm: add userspace test for next4
Add unit test for next4.

JIRA MSST-831

Change-Id: If59fbff5f6d9a61fbcda8c0213f236d0acce8062
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3333470
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2025-04-25 05:31:07 -07:00
Besar Wicaksono
106bc61f86 tegra: hwpm: add cpu_ext_* enums
Add new CPU IP and resource enums to the
kernel driver and userspace library.
This extends support to chips with
more than 32 CPU instances (up to 128).

JIRA MSST-893

Change-Id: I33142c7fc8f268f8c436cc3b7cd97385da31b558
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3328654
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-04-25 05:31:02 -07:00
Besar Wicaksono
a4b7ab4486 tegra: hwpm: add csn/csnh enum
Add new enums for the CSN/CSNH IP and resource to the
kernel driver and userspace library.

JIRA MSST-869

Change-Id: I821010dca617596b86b0fec07f499cf1e6e3f258
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3325216
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-04-25 05:30:57 -07:00
Besar Wicaksono
8b415ed149 tegra: hwpm: add nvtherm enum
Add enums for the NVTHERM IP and resource to the
kernel driver and userspace library.

JIRA MSST-868

Change-Id: Iacb6e9c9205e4293af04e28f265dd535b6fd1783
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3322825
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2025-04-25 05:30:52 -07:00
Besar Wicaksono
7f1249c9e9 tegra: hwpm: add initial userspace lib
Initial change for libnvsochwpm userspace library.

Change-Id: I20b11f9d253b65583db97dfebd9ff78b4d33d50c
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3267999
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2025-04-25 05:30:47 -07:00
Besar Wicaksono
f1de425d35 tegra: hwpm: add membytes high address check
Add mem bytes high address validation to make sure
it matches the stream buffer high address.

Change-Id: I189f44037279dc8e9569d1affcee4e19c3194558
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3319873
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
2025-04-25 05:30:42 -07:00
vasukis
a02386ec6e tegra: hwpm: t264: Fix VI and ISP ip details
VI and ISP (camera modules) have overlapping MMIO
address regions. Hence, set the
islots_overlimit flag in the HWPM driver to indicate this.

Fix minor errors in VI and ISP enablement in the HWPM
driver.

Bug 5072985

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: I37d84c1ae6750202abd8caa9adb38a79f8b75537
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3323540
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2025-03-27 01:53:08 -07:00
Jon Hunter
742e79fa06 tegra: hwpm: Simplify CONFTEST presence check
The variable 'NV_BUILD_SYSTEM_TYPE' is an NVIDIA internal Makefile
variable used for building the Linux kernel. We should avoid using this
in drivers where possible because otherwise it will require external
users to set this.

CONFTEST itself is not internal and is distributed with the NVIDIA OOT
drivers. Rather than using 'NV_BUILD_SYSTEM_TYPE' to see if CONFTEST is
present, we can simply see if the 'srctree.conftest' variable is set
and avoid using 'NV_BUILD_SYSTEM_TYPE' at all.

Furthermore, given that the variable 'CONFIG_TEGRA_HWPM_CONFTEST' now
defines if CONFTEST is present and this will only be set in the Makefile
if 'CONFIG_TEGRA_HWPM_OOT' is set, then we don't need to check for both
of these variables in the source files to determine if we need to
include 'nvidia/conftest.h'.
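
For illustration, a minimal sketch of the resulting include logic (the exact
guard used in the sources may differ):

    /* The Makefile defines CONFIG_TEGRA_HWPM_CONFTEST only when conftest is
     * available, so a single guard is enough in the source files. */
    #ifdef CONFIG_TEGRA_HWPM_CONFTEST
    #include <nvidia/conftest.h>
    #endif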

Bug 5120925

Change-Id: If9f6cebc7cc38414fce10a445ed090ba345e5002
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3318049
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-03-15 05:59:36 -07:00
vasukis
4fbceccafa tegra: hwpm: t264: Reduce DG map reg timeout
A timeout of 100ms is provided to allow DG Map
register values to propagate to the router during
Perfmon enable/disable. This is excessive, as the HW
ideally takes only a few microseconds for the propagation.
Hence, reduce the timeout value to 10ms.

Bug 5072985

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Ie67a3325341824a451315d94afff3b5a1c0bb144
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3311261
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
2025-03-13 09:32:05 -07:00
Besar Wicaksono
c39de268c9 tegra: hwpm: os: linux: add explicit CONFTEST flag
CONFTEST is NVIDIA internal and not available when
building HWPM driver locally without NVIDIA build
system. This patch introduces a new explicit config
to enable/disable reference to CONFTEST.

Bug 5120925

Change-Id: I669855f04186041661362cd578514b887128ef44
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3307050
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-03-11 06:53:03 -07:00
Besar Wicaksono
547508653d tegra: hwpm: os: linux: update kernel vers. check
This is needed when building HWPM driver locally without
NVIDIA build system.

- driver.c: use kernel version check to select the correct
  signature of class:devnode
- mem_mgmt_utils.c:
  - use kernel version check to provide correct parameter
    of MODULE_IMPORT_NS macro
  - use kernel version check to select the correct signature
    of get_user_pages function
- mem_mgmt_utils.h: use kernel version check to select
  between iosys-map or dma-buf-map
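
For illustration, a minimal sketch of such a kernel version check for the
MODULE_IMPORT_NS case; the DMA_BUF namespace is an assumption based on the
dma-buf handling in mem_mgmt_utils.c, and the driver's actual guards may
differ:

    #include <linux/version.h>
    #include <linux/module.h>

    /* Linux v6.13 changed MODULE_IMPORT_NS to take a string literal. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 13, 0)
    MODULE_IMPORT_NS("DMA_BUF");
    #else
    MODULE_IMPORT_NS(DMA_BUF);
    #endif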

Bug 5120925

Change-Id: Ib33afc4d99056d5b872f0d4362e0e6c25eb7b64a
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3306471
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-03-11 06:52:59 -07:00
Besar Wicaksono
47aa6ea0b9 tegra: hwpm: disable inst/el dynamic alloc on overlimit
The driver uses a brute-force approach and iterates over
the static instance/element array when the corresponding
*_overlimit is set to true. However, during initialization
the driver may still allocate a dynamic array for the
instance/element, which will be unused (wasted).

This patch reuses the *_overlimit option to also disable
the dynamic allocation. This is also needed to configure
IPs whose instances lack a regular address stride
pattern, like the routers in different dielets in next4.

JIRA MSST-832

Change-Id: Ia616435c38f27962a1632e3d08eb3e9cfe9f0ba8
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3298868
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-03-11 06:52:51 -07:00
Besar Wicaksono
8d43841584 tegra: hwpm: common: fix IP fs info
Fix the shift value for incorporating the element fs
mask into the final IP floorsweeping mask.

The floorsweeping info contains the fs info
of all the elements in the IP. The patch fixes
an issue for IPs with more than two instances,
where the element fs info of the 3rd instance
onwards was not calculated correctly.

Change-Id: Idfa69171b3630ca62f684f7130400a55d451f2ff
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3307893
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2025-03-11 06:52:47 -07:00
vasukis
98c51d644b tegra: hwpm: t264: Enable VI and ISP compilation
Enable Camera (VI and ISP) IP file compilation
in HWPM driver for AV+L builds only.

Bug 4345706

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: If647e7e25ce7d1a853cc7c298780538e03392ec0
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3283197
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
2025-03-09 13:17:18 -07:00
vasukis
d9ed3fd02b tegra: hwpm: t264: Depreciate message to warning
When an IP is not enabled in the HWPM Makefile, an error
message is given out. This has a higher log_level,
which causes the Kernel Warning test in GVS to fail.
Hence, demote the log level to warning, as not enabling
an IP for HWPM profiling is not an error per se.

Bug 4345706

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Ib85931c06f42168e86aea5b0b2cb208f93216042
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3284317
Reviewed-by: Yifei Wan <ywan@nvidia.com>
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2025-01-26 07:51:46 -08:00
vasukis
474df3d0b4 tegra: hwpm: t264: Add ISP IP support
ISP is part of Camera IP. Add IP files to enable ISP IP.

Jira THWPM-90
Bug 4345706

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: I2458181b89234bcf50a674de7697dc961407922d
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3263621
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
2025-01-05 08:44:37 -08:00
vasukis
9785a43b05 tegra: hwpm: t264: Add VI IP support
VI is part of Camera IP. Add IP files to enable VI IP.

Jira THWPM-90
Bug 4345706

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: I7122d4b1b9e07181c4df0679c1d3d30bd222990c
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3262913
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Yifei Wan <ywan@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2025-01-05 08:44:13 -08:00
Jon Hunter
d61c003cdf tegra: hwpm: Fix build for Linux v6.13
The HWPM driver fails to build with Linux v6.13 because of the following
two issues:

1. In Linux v6.13, commit cdd30ebb1b9f ("module: Convert symbol
   namespace to string literal") updated the MODULE_IMPORT_NS macro to
   take a string literal as an argument. Use conftest to detect if
   MODULE_IMPORT_NS takes a string literal as an argument and update the
   HWPM driver accordingly.

2. The following build error is observed:

   In file included from os/linux/clk_rst_utils.c:17:
    include/linux/reset.h:30:49:
    error: implicit declaration of function ‘BIT’
    [-Werror=implicit-function-declaration]
   30 | #define RESET_CONTROL_FLAGS_BIT_ACQUIRED        BIT(2)
      |                                                 ^~~

   Fix the above by including the 'bits.h' header file.
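
A minimal sketch of the second fix (illustrative; the affected file is
os/linux/clk_rst_utils.c per the error above):

    #include <linux/bits.h>    /* provides the BIT() macro used by linux/reset.h */
    #include <linux/reset.h>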

Bug 4991705

Change-Id: I26cba920a0b0af251fd2f623ab9326ecafef5a5f
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3261738
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2024-12-12 03:28:48 -08:00
Vedashree Vidwans
c4e5fde336 tegra: hwpm: add ip_config debugfs flags
Change-Id: I4160b776947570df9ec81f4f34bdef6376b44be8
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3245391
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2024-11-28 18:58:39 -08:00
Vedashree Vidwans
2c2a933a9f tegra: hwpm: add debugfs node to skip alist
Currently, the HWPM driver checks the regops address to validate that the
address belongs to an IP allowlist that is reserved for profiling.
However, it is possible that the allowlist doesn't include all register
offsets that are required for profiling. This scenario is often
encountered during the early stages of bringup. This patch adds a debugfs
node to make the HWPM driver skip the allowlist check. This change will allow
users to dynamically skip the allowlist check when debugfs is available.
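
A hedged sketch of such a debugfs node follows; the node name, flag variable,
and helper are illustrative assumptions, not the driver's actual symbols:

    #include <linux/debugfs.h>

    /* Hypothetical flag consulted by the regops allowlist validation path. */
    static bool skip_alist_check;

    static void tegra_hwpm_debugfs_sketch(struct dentry *root)
    {
            /* Writing 1 to this node makes the driver skip the allowlist check. */
            debugfs_create_bool("skip_allowlist_check", 0644, root,
                                &skip_alist_check);
    }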

JIRA THWPM-65

Change-Id: Ic85a1c7fac6a95f7cde532f3bdf6040bbcc7f5f3
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3241080
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Vasuki Shankar <vasukis@nvidia.com>
2024-11-28 18:58:30 -08:00
vasukis
cda058bccc tegra: hwpm: next4: Add next4 chip support
Add HWPM driver support for Next4 chip.

Jira MSST-821

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Idc9c99653fa814a24fcab22735ae258f6f1a3f1c
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3250030
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2024-11-28 18:56:19 -08:00
Frankie Chang
f4b7bb9ead kleaf: add t264 Makefile to fix build error
A missing Makefile.t264.sources causes a kleaf build error.
Add it to 'drivers/tegra/hwpm/BUILD.bazel' to fix this build error.

Bug 4344670

Change-Id: Ib0870116c557a29d4c16e0f10057f26bbf24c79a
Signed-off-by: Frankie Chang <frankiec@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3246772
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-11-22 21:31:22 -08:00
vasukis
cfec15ebca tegra: hwpm: type-cast to prevent QNX build errors
Type cast variable to prevent build error on QNX SDP 7.1.

Bug 4893334

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Ica23d35aad973eb0dd092978e4a276ef765e1b34
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3251800
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2024-11-22 10:36:04 -08:00
vasukis
75a6c76cec tegra: hwpm: type-cast to prevent build errors
Type cast variable to prevent build error on QNX SDP 7.1.

Bug 4893334

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: I1158a8a19f582fa08c554a56e4ad63e74ee6d5a1
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3249137
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-11-20 22:02:09 -08:00
vasukis
bb4b1def61 tegra: hwpm: t264: Merge t264-hwpm files to hwpm
Merge the T264 private source code into the hwpm common code.
This is done now that the T264 source code can be made public.

Bug 4856428
Bug 4943517

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Ie830c5465f32f49978cb465d68785ab3dbaee984
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3219865
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
2024-11-06 01:56:13 -08:00
Vedashree Vidwans
b8a884d226 tegra: hwpm: add cpu ip and resource enums
Add IP and resource enums for CPU IP that support HWPM.

Bug 4730025
Bug 4748888

Change-Id: Ica0d247953500fc6d7eb21144a318f2dbcca2d96
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3198954
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2024-11-02 12:25:06 -07:00
Vedashree Vidwans
1a5bd8d683 tegra: hwpm: add ucf ip and resource enums
The UCF component comprises many sub-units such as MSW, OSW, SCB, etc.
Add IP and resource enums for the UCF sub-units that support HWPM.

Bug 4730025
Bug 4748888

Change-Id: Ib50bf9a32d807d05ed0a7f55a5aa08009227e105
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3187986
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2024-10-16 19:44:39 -07:00
vasukis
b6fa559660 tegra: hwpm: fix credit programming count logic
The num_entries variable indicates the number of credit
programming requests sent from the user space test app.
The current implementation loops through these requests in
a way that results in one additional iteration.
This change fixes the issue.
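
Purely as an illustration of the off-by-one pattern described above (the
structure, helper, and names below are hypothetical, not the driver's code):

    #include <linux/types.h>

    struct credit_req { u32 reg; u32 val; };    /* placeholder request layout */

    static void program_one_credit(struct credit_req *req)
    {
            /* write req->val to the secure register req->reg */
    }

    static void program_credits(struct credit_req *reqs, u32 num_entries)
    {
            u32 i;

            /* Before the fix the bound was "i <= num_entries", issuing one
             * request too many; iterate exactly num_entries times instead. */
            for (i = 0; i < num_entries; i++)
                    program_one_credit(&reqs[i]);
    }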

Bug 4571175

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Id9fd4315e5ef470c697bf2815e16b42e746edf45
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3212369
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-19 23:34:13 -07:00
Vedashree Vidwans
29c34c51d1 tegra: hwpm: add mcf ocu ip and resource enums
Add IP and resource enums for MCF OCU that support HWPM.

Bug 4730025
Bug 4748888

Change-Id: Ic0a15f60d8c1cbbb3bb46c79672f6a607087f508
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3211219
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2024-09-19 06:06:34 -07:00
Vedashree Vidwans
a8fc1ef30a tegra: hwpm: follow up svcacv fix
This is a follow-up to fix the svcacv warning in 3186843 about a missing
SPDX identifier.

Bug 4707244

Change-Id: If004830eb12e19bbdd8c6ef84818aca36ee5ebd7
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3210319
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2024-09-12 08:53:11 -07:00
vasukis
425b5f92ae tegra: hwpm: Linux: Setup trigger IOCTL Infra
Add IOCTL infra for cross trigger programming in the HWPM driver.
Cross triggering involves access to secure registers, which
cannot be issued by a user space application. Hence, implement
the cross trigger functionality in the HWPM kernel driver.

Bug 4571175

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Ia46227c4678d3ee282ebae8c58e116feaf4e59cb
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3147289
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-11 13:51:43 -07:00
vasukis
f672287ded tegra: hwpm: Support for Credit Programming
- Add HWPM driver support for credit programming. Credit
programming can be accomplished by reads and writes into
secure HWPM registers, which cannot come in as a reg_ops
request from a user space application.

- Implement an empty credit programming handler for T234 and
th500.

- Implement OS agnostic HALs for Credit programming.

Bug 4571175

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: I18dcff47dfe461bce3dcb6d78f39ff0156b4b0a5
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3127013
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-11 13:51:39 -07:00
vasukis
5d80b2edb5 tegra: hwpm: Linux: IOCTL for Credit Programming
- Add IOCTL infra for Credit programming in Linux based
OSs.

Bug 4571175

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: I1a5ff5aefcf8da6ad85507d71c0a9bd3b7f31f6d
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3136565
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2024-09-11 13:51:35 -07:00
Vedashree Vidwans
b5f2672134 tegra: hwpm: th500: update ip structure files
Update all IP files to include aperture_index variable in
hwpm_ip_aperture structures. This index will be used to
translate dynamic element index to static index.

Bug 4707244

Change-Id: I9999a7dc26c366381f37aea5f602a662d8707a8b
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3197913
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-06 11:59:01 -07:00
Vedashree Vidwans
eccff56167 tegra: hwpm: t234: update ip structure files
Update all IP files to include aperture_index variable in
hwpm_ip_aperture structures. This index will be used to
translate dynamic element index to static index.

Bug 4707244

Change-Id: Ic4adb1aadffb4e2039ef5b898ce8ed046881ecde
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3197912
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2024-09-06 11:58:58 -07:00
Vedashree Vidwans
48e85a9c07 tegra: hwpm: update logic to use static indexes
The HWPM driver uses nested structures and arrays of structures. The IP
structure setup logic allocates pointer arrays based on a dynamic list of
IPs and aperture addresses. This dynamic list is required to search a
given regops address in less time.
However, there is a chance that the number of pointers computed
dynamically is huge, and a huge amount of memory would be required for the
dynamic pointer array, which is impractical.
Thus, this patch modifies the IP structure setup and address-to-aperture
conversion logic to use static indexes if the pointer array size is
huge.
This patch modifies the relevant functions to always use static arrays
to access instance and aperture structures.

If the dynamic pointer array is allocated, the patch adds logic to
translate the dynamic index to a static index using inst_index_mask for
instances and the newly added aperture_index for element level structures.

Add/update a few log messages to improve the relayed information.

Bug 4707244

Change-Id: Ib4847e6575f82b628a3ce838ad69196a4bc08fed
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3186843
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-06 11:58:53 -07:00
Vishal Aslot
cdbd6e7a24 tegra: hwpm: th500: fixes and reorg of IPs
This patch fixes issues found during testing
and incorporates guidance provided by devtools. The following
is changed in this patch:

1. mcf_iobhx and mcf_ocu are merged into a single mcf_soc IP.
2a. c2c is changed from 2 instances to 1.
2b. Remove C2CS0/1 which are the broadcast apertures.
    Also remove the allowlist offset specific to broadcast
    aperture.
3. mss_hub is changed from 1 instance to 8.
4. mss_channel is changed from 1 instance to 32.
5. mc0 perfmux is added to mcf_clink.
6. mcf_core is changed from 1 instance to 8.
7. License headers updated where necessary.
8. c2c allowlist updated to have just the offsets common
   to all links.
9. Added a verbose comment explaining the design of
   th500_hwpm_force_enable_ips()
10. Added back validate_current_config module parameter
    as many systems still don't support fuses.
11. If all F's are read back for a regop in ip_readl(),
    return -ENODEV.

There is a corresponding patch to update the python scripts
that generated many of the C and header files.

Bug 4287384

Change-Id: I8e14b0165dfa1abb9f5e04de577a41f0eb278246
Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3134365
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Eric Lu <ericlu@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-02 21:52:45 -07:00
Vishal Aslot
fdbe788448 tegra:hwpm:th500: Force-enablement support for IPs
This patch adds support to selectively force-enable
TH500 IPs using module parameters.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I684169ad52da466b51e6b18634a997563390b0a4
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3026101
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-09-02 21:52:41 -07:00
Jon Hunter
11de2bc045 tegra: hwpm: Fix build for Linux v6.11
In Linux v6.11, the 'platform_driver' structure 'remove' callback was
updated to return void instead of 'int'. Update the Tegra HWPM driver
as necessary to fix this.
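
A minimal sketch of the kind of update involved (the function name is a
placeholder and the driver may use a conftest check rather than a raw kernel
version check):

    #include <linux/version.h>
    #include <linux/platform_device.h>

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 11, 0)
    static void tegra_hwpm_remove_sketch(struct platform_device *pdev)
    {
            /* driver teardown */
    }
    #else
    static int tegra_hwpm_remove_sketch(struct platform_device *pdev)
    {
            /* driver teardown */
            return 0;
    }
    #endif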

Bug 4749580

Change-Id: Ide44224bb3e5d0a000a252b4a8117ca203904a54
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3183043
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2024-07-31 08:06:12 -07:00
Jian-Min Liu
b8d1724bb0 Kleaf: add hwpm kernel module
1. Add BUILD.bazel file.
2. Add a build target for the kernel module and the required include folder
   srctree.* in the Makefile to fix the build issue.

Bug 4344670

Change-Id: I22560573aaa38ec5a2b14290a2ba48e1f2e5ab0c
Signed-off-by: Jian-Min Liu <jianminl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3066227
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Chun Ng <chunn@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-07-16 02:06:57 -07:00
Vedashree Vidwans
5705145f59 tegra: hwpm: th500: correct config flag name
Recently, TH500 HWPM config flag was renamed to CONFIG_TEGRA_HWPM_TH500.
Correct the config flag name in init.c and acpi.h files.

Jira THWPM-112

Change-Id: I9fcc40cd2529c0e5e6894bda95f6d8248e8b61cd
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3167472
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
2024-07-13 08:56:49 -07:00
Vedashree Vidwans
095fc3dea0 tegra: hwpm: add clk rate as chip variable
The LA clock rate is specific to a chip. Move the LA clock rate macro to a
chip-specific variable. Set the la_clk_rate variable to the correct value for
T234 and TH500 chips.

Jira THWPM-112

Change-Id: I962cf579aed33d91d0abbfb8a44fc4063dc8444c
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3140419
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2024-05-22 10:20:24 -07:00
vasukis
ab110e5f27 tegra: hwpm: Add OPT_HWPM_DISABLE mask definition
- Add OPT_HWPM_DISABLE fuse (offset 0xd18) mask for NEXT3
chip.

Jira THWPM-73

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Idc403276886fb2f00b18a69be2c285bc8b3da000
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3139627
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-05-21 17:15:07 -07:00
Vedashree Vidwans
5f4378574c tegra: hwpm: cleanup build logic in makefile
Currently, the conditions to compile the HWPM driver based on the build
config are not well defined in the Makefiles. Update the Makefiles to
- use external chip specific flags to include chip source files
- add copyright information
- revise IP force enablement logic, remove unused MINIMAL_IP_ENABLE flag
- follow a standard way of including source files and config flags.

Jira THWPM-109

Change-Id: I6d32b5b67d34c65b56fb9cb9d6a1c4cca7b11cc6
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3121175
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2024-05-05 20:11:48 -07:00
vasukis
1ff862c00a th500: hwpm: Fix EMC Fuse Mask calculation.
A recent change has led to an EMC fuse mask calculation regression,
which is corrected in this patch. The emc_fuse_disable mask
is set in such a way that each bit corresponds to 4 MSS channels.
For example, emc_fuse_disable mask=1100 corresponds to MSS_Channel0
to MSS_Channel7 being present, while MSS_Channel8 to MSS_Channel15
are floorswept. However, in the HWPM driver, a
floorswept IP element is indicated by '1'. Correct the logic to
reflect this.
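
A worked sketch of the mapping described above (illustrative only; the helper
name is hypothetical):

    #include <linux/bits.h>
    #include <linux/types.h>

    /* Expand the 4-bit emc_fuse_disable mask into a 16-channel floorsweep
     * mask where '1' means the MSS channel is floorswept. */
    static u16 emc_fuse_to_fs_mask(u8 emc_fuse_disable)
    {
            u16 fs_mask = 0;
            int i;

            for (i = 0; i < 4; i++)
                    if (emc_fuse_disable & BIT(i))
                            fs_mask |= 0xFU << (4 * i);

            return fs_mask;    /* e.g. 0b1100 -> 0xFF00: channels 8..15 floorswept */
    }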

Bug 4490868

Change-Id: Id83d9e1d983c3fbf8f58cef3a1ff45334d7eadd6
Signed-off-by: vasukis <vasukis@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3122752
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2024-05-02 08:45:53 -07:00
vasukis
89426a7e0a tegra: hwpm: Fix EMC Fuse Mask calculation.
A recent change has led to an EMC fuse mask calculation regression,
which is corrected in this patch. The emc_fuse_disable mask is
set in such a way that each bit corresponds to 4 MSS channels.
For example, emc_fuse_disable mask=1100 corresponds to MSS_Channel0
to MSS_Channel7 being present, while MSS_Channel8 to MSS_Channel15
are floorswept. However, in the HWPM driver, a
floorswept IP element is indicated by '1'. Correct the logic to
reflect this.

Bug 4490868

Signed-off-by: vasukis <vasukis@nvidia.com>
Change-Id: Ia3825db29715e04aa43822283b160252d00f0a81
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3099298
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2024-05-02 08:40:39 -07:00
Ahmad Chaudhry
be41a4158c tegra: hwpm: fix acpi compilation error
ACPI configs are not required for AAOS, as
AAOS is booted with a DTB and not ACPI.
Disabling CONFIG_ACPI results in a build failure because
it is undefined in the #if directive.
Adding a check for whether it is defined
resolves the issue and allows AAOS to build
successfully with CONFIG_ACPI disabled.
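
An illustrative sketch of the guard described above (the code wrapped by the
directive is a placeholder):

    /* Checking defined() first keeps the directive valid when CONFIG_ACPI is
     * not set at all, e.g. on AAOS builds. */
    #if defined(CONFIG_ACPI)
    /* ACPI match table / probe support */
    #endif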

Bug 4559177

Change-Id: I9f068c373d6dc57acb610a107eb8a2e90a0e944b
Signed-off-by: Ahmad Chaudhry <ahmadc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3115456
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2024-04-18 01:11:13 -07:00
Vedashree Vidwans
2068126fb2 tegra: hwpm: update makefile, add debug log mask
- Modify condition to include TH500 files for correct config. Since
TH500 is only supported with BaseOS and TinyLinux, the TH500 HWPM
config flag will be defined as part of the BaseOS/TinyLinux builds.
- Add new debug log mask for active debugs. This will allow us to enable
debug messages related to active debugs reducing amount of logs.
- Add condition to check ARCH_TEGRA config required for kernel specific
APIs.

Jira THWPM-69

Change-Id: I637bdbd2e5d72808611f63f4f719e5072f85ca34
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2978365
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Tested-by: Vasuki Shankar <vasukis@nvidia.com>
2024-04-17 08:44:17 -07:00
Vedashree Vidwans
6a90ec671c tegra: hwpm: th500: soc: read MC config fuse
On production board, MC config details are available through fuses. Add
function to read MC config fuse. Use the floorsweep fuse info to find
available elements.

Bug 3936487

Change-Id: I28e92c6186ba35fc19bfac67ed137b5c7fca645a
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3006813
(cherry picked from commit 228851f45b787c93044d9ff0daf28baecda73f82)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3115439
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2024-04-15 18:14:50 -07:00
Vedashree Vidwans
3a69716646 tegra: hwpm: fix conftest compilation error
HWPM code from the HWPM repo is not currently compiled with kernel
5.10. However, a CL to compile the HWPM repo for kernel 5.10 is required to
validate the latest changes on Pre-Si.
Since conftest is only available for kernel versions later than 5.10,
add a condition to include conftest only if HWPM is used as an OOT module.

Bug 4119327

Change-Id: I760164447ff5c340884212f83966af72f1ee27da
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3011333
Tested-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Vishal Aslot <vaslot@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2024-02-10 20:59:51 -08:00
vasukis
06039978a1 tegra: hwpm: Remove un-used NVDLA allow-list regs
HWPM allowlist defines additional allow-list register
offsets which are not used to profile NVDLA IP. Remove
these register offsets to be on par with what NVDLA ResMgr
expects.

Bug 4452024

Change-Id: Ifce31753f32b31592a1868840a8c45b113a578f5
Signed-off-by: vasukis <vasukis@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3061071
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2024-02-02 04:17:35 -08:00
Laxman Dewangan
f86c10ed60 Makefile: yocto: Add header_install rule for Yocto
Yocto makefile needs the installation of all public
headers. Add Makefile and rule to achieve this.

Bug 4365981

Change-Id: I986a5791246e83eb12a77d00998175f0630c796c
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3016433
Reviewed-by: Bitan Biswas <bbiswas@nvidia.com>
Reviewed-by: Mayank Pandey <maypandey@nvidia.com>
Tested-by: Bitan Biswas <bbiswas@nvidia.com>
2023-11-15 15:37:22 -08:00
Jon Hunter
f9360f364f tegra: hwpm: Use conftest for get_user_pages
The conftest script already has a test for checking which variant of the
get_user_pages() function is present in the kernel. So use the
definition generated by conftest to select which function variant is
used.

Bug 4276500

Change-Id: I29d216c8cead657c1daca4ce11b3dc3f74928467
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3015357
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-15 15:35:05 -08:00
Jon Hunter
13a7312154 tegra: hwpm: Remove class owner
The owner member of the class structure was removed in upstream Linux
v6.4 because it was never used. Therefore, just remove this from the
HWPM driver completely because it is not needed.

Bug 4276500

Change-Id: I50f7e59e08edbea26f7ceaa701e4abfe5cc71c71
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3015339
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-15 15:35:00 -08:00
Vishal Aslot
1b8fd6fc4b tegra: hwpm: th500: Add support for PCIE
This patch adds support for PCIE XTLQ, XTLRC,
and XALRC performance monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I0c07a6eb879b1bdc8d80bb085ef2bf58afbbd94b
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2990011
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-15 15:22:50 -08:00
Vedashree Vidwans
845a7137ae tegra: hwpm: add func to write sticky bits
Currently, HWPM requires raw readl/writel functions
to access sticky bits and as a workaround for IP registers.
- Move the raw readl/writel logic, along with the IO mapping
of the address, into a static function.
- Implement the wrapper functions that access sticky bits
and IP registers to use the created static functions.
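
A hedged sketch of such a static helper (the name and error handling are
illustrative, not the driver's actual code):

    #include <linux/io.h>
    #include <linux/types.h>

    /* Map the physical address, perform the raw read, and unmap again. */
    static u32 tegra_hwpm_raw_readl_sketch(phys_addr_t pa)
    {
            void __iomem *va = ioremap(pa, sizeof(u32));
            u32 val;

            if (!va)
                    return 0;
            val = readl(va);
            iounmap(va);
            return val;
    }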

Jira THWPM-86

Change-Id: Ib0b3229d4b8795d19aca142233622a166436e3bd
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3014028
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-15 09:33:12 -08:00
Vishal Aslot
d8fa381df1 tegra: hwpm: th500: Add support for MCF CORE
This patch adds support for MCF CORE performance
monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I75466b28f3539c4b77be274d512e97f4d3a8847c
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2985961
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-12 10:28:42 -08:00
Vishal Aslot
2e41e3a5bd tegra: hwpm: th500: Add support for MCF CLINK
This patch adds support for MCF CLINK performance
monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I6d28bb911b3d2b1623bce9a5d46dc0160570c8ec
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2986107
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-07 22:42:51 -08:00
Vishal Aslot
eb50361122 tegra: hwpm: th500: Add support for MCF C2C
This patch adds support for MCF C2C performance
monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I7240fd8765d5c99d590549a6e4f02ba1236d2f99
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2986118
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-07 22:42:47 -08:00
Vishal Aslot
2f26b5849e tegra: hwpm: th500: Add support for MCF SOC
This patch adds support for MCF SOC performance
monitoring in the driver. MCF SOC has two different
types of perfmuxes connected to the same perfmon:
one is the OCU type and the other is IBHX and OBHX.
IBHX is only accessible via MC16 aperture. Therefore,
this patch adds two separate IPs: OCU and IOBHX.
However, both are tied to the MCF SOC perfmon (mcfsoc0).

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: If15498a44e02270f9106337078931edbe043c254
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2986232
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-07 22:42:42 -08:00
Vishal Aslot
b689a36372 tegra: hwpm: th500: Add support for MSS HUB
This patch adds support for MSS HUB performance
monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I35b8c8c9bf1eb8b43dc1baeb10a9701fbd3f2dd9
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2987019
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-07 02:33:43 -08:00
Vishal Aslot
02864dec7a tegra: hwpm: th500: Update C2C and MSS Channel
This patch updates the IP structures for C2C and
MSS channels to include .fd and .dev_name fields.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I87aed08db3bb20c26bca9723fde7957f75d1b0f4
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3001695
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-07 02:33:38 -08:00
Vishal Aslot
bc6fdf1f18 tegra: hwpm: th500: Add support for C-NVLINK
This patch adds support for C-NVLINK performance
monitoring in the driver. C-NVLINK consists of
RX, TX, and CTRL apertures, each with its own
perfmux signals and perfmons. So this patch
breaks them up into three sets of perfmux-perfmon
data structures.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: Id8be4c965018125765f75a7b8bc8ab809bb7f976
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2999166
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-07 02:33:34 -08:00
Vishal Aslot
6e75fd7b50 tegra: hwpm: th500: Add support for CL2
This patch adds support for CL2 (LTS) performance
monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: Ieed663f0149bc52576fcf6d71de0e627b11fdc84
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2988343
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-05 03:04:09 -08:00
Vishal Aslot
095e1bafd8 tegra: hwpm: th500: Add support for SMMU
This patch adds support for SMMU performance
monitoring in the driver.

Bug 4287384

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I59e33a5ac6e8d860f4454fdf46476847aef42106
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2986919
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-05 03:04:05 -08:00
Vedashree Vidwans
9b9c743199 tegra: hwpm: th500: fix bug in disable triggers
Update the wait-for-PMA-idle condition to use the PMA perfmux structure to
read the PMA register.

Jira THWPM-109

Change-Id: Ia3bb204dc182025e229f258c0a3191dc0d74dad1
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2996277
Reviewed-by: Vishal Aslot <vaslot@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-20 17:10:43 -07:00
Jon Hunter
5be46c6927 tegra: hwpm: Use conftest for 'struct class' changes
In Linux v6.2, the 'struct class.devnode()' function was updated to take
a 'const struct device *' instead of a 'struct device *'. A test has
been added to the conftest script to check for this and so instead of
relying on kernel version, use the definition generated by conftest to
select the appropriate function to use.

This is beneficial for working with 3rd party Linux kernels that may
have back-ported upstream changes into their kernel and so the kernel
version checks do not work.
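
For illustration, a sketch of a conftest-guarded devnode callback; the
conftest define and function name are hypothetical stand-ins for whatever the
script actually generates:

    #include <linux/device.h>

    #if defined(NV_CLASS_DEVNODE_HAS_CONST_ARG)    /* hypothetical conftest define */
    static char *hwpm_devnode_sketch(const struct device *dev, umode_t *mode)
    #else
    static char *hwpm_devnode_sketch(struct device *dev, umode_t *mode)
    #endif
    {
            return NULL;    /* keep the default /dev node name */
    }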

Bug 4119327

Change-Id: I751b7401adee7b337192e255253b974cbd803642
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2991966
(cherry picked from commit 4b2fd8250d)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2995574
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-17 18:48:38 -07:00
Vishal Aslot
9890cbf901 tegra: hwpm: Update Makefile
This patch updates the Makefile so that the driver builds
correctly under BaseOS.

Bug 4266701

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: I4cf842212afb08badb9cb5f7287c1729fc4d1530
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2994464
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-13 02:33:46 -07:00
vasukis
f510c86528 tegra: hwpm: Add NVDEC IP debug node info
The HWPM resource manager in QNX will issue register read/write
ops to the exposed IP debug node. This is done via devctl
calls from the HWPM Res Mgr. Hence, update the NVDEC debug
node name in the IP source file.

Bug 4170733
DOS-SHR-7601

Change-Id: I817aa18be43534907d761c992b9953918a39525d
Signed-off-by: vasukis <vasukis@nvidia.com>
(cherry picked from commit 7ed60c287e1253b834bfe050952240e97549e320)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2991341
Reviewed-by: Vishal Aslot <vaslot@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-12 20:13:51 -07:00
vasukis
806dbdf6fb tegra: hwpm: Macros to indicate presence of IP fd
Add macros to indicate if IP debug fd is present
or not. This is used in HWPM resource manager to
communicate with IPs during register operations.

Jira THWPM-105

Change-Id: I24a11e8e563b9d1ad8aaa560fb507468819f06dc
Signed-off-by: vasukis <vasukis@nvidia.com>
(cherry picked from commit 0a1317656fb3a8e126d29cef2c01da58feafcb41)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2991333
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2023-10-12 20:13:47 -07:00
vasukis
44fa2b0ebb tegra: hwpm: Add NVDEC IP for HWPM profiling
- Set the 'CONFIG_T234_HWPM_IP_NVDEC' defconfig in HWPM
Makefile, so that nvdec IP related code can be compiled
alongside HWPM driver. This is required for enabling
NVDEC IP for profiling by HWPM.

- This change affects both L4T and AV+L configs.

DOS-SHR-7601

Change-Id: I654fc3024731660d20c874b1e31659bc28627191
Signed-off-by: vasukis <vasukis@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2983399
Reviewed-by: Vishal Aslot <vaslot@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-12 20:13:30 -07:00
Vedashree Vidwans
da3bda1364 tegra: hwpm: improve common function readability
- HALs get_rtr_int_idx and get_ip_max_idx return the chip specific
router index and number of IPs. This information is static for a chip
and doesn't require any input. Hence, update the HAL definition to not
require hwpm pointer as an argument. Update definition and references
for these HALs.
- Add new HAL to get PMA and RTR structure pointers. Implement and
update other chip specific functions to use new HAL.
- Add new timer macro to check a condition and timeout after given
retries. Update necessary code to use new timer macro.
- Correct validate_emc_config function to compute correct available mss
channel mask based on fuse value.
- Update tegra_hwpm_readl and tegra_hwpm_writel macros to assert error
value. This way error checks are added at one spot and not sprinkled all
over the driver code.
- Update get_mem_bytes_put_ptr() and membuf_overflow_status() to return
error as function return and accept arguments to return mem_head pointer
and overflow status respectively. Add overflow status macros to use
throughout driver. Update HAL definition and references accordingly.
- conftest is only compiled for the OOT config at the moment. Add an OOT
config check to include the conftest header.

Jira THWPM-109

Change-Id: I77d150e860fa344a1604d241e27718150fdb8647
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2982555
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Vishal Aslot <vaslot@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-05 04:30:46 -07:00
vasukis
bc1f044c15 tegra: hwpm: Add Video Engine IP debug node info
The HWPM resource manager in QNX will issue register read/write
ops to the exposed IP debug nodes. This is done via devctl
calls from the HWPM Res Mgr. Hence, update the IP debug node
names in the IP source files.

Bug 4170733
DOS-SHR-7601

Change-Id: I58a39305aa8d6fcbbe01494d1e18069a369ee46f
Signed-off-by: vasukis <vasukis@nvidia.com>
(cherry picked from commit d37297cb494bb6bfc3b531e38302de18d0fddfc5)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2985248
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-09-27 08:22:21 -07:00
Vishal Aslot
c630042921 tegra: hwpm: th500: Merge hwpm-th500 files in hwpm
This patch carefully merges approved TH500 files from kernel/hwpm-next
into this public repo.

Bug 4266701

Signed-off-by: Vishal Aslot <vaslot@nvidia.com>
Change-Id: Ia869b75e1652c214e32c53f0edb3d4bf709d72f4
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2972033
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-09-21 21:16:39 -07:00
Jon Hunter
54ce334474 tegra: hwpm: Add compilation flag for iosys-map.h
Determining whether the header file iosys-map.h is present in the kernel
is currently determined by the kernel version. However, for Linux v5.15,
iosys-map.h has been backported in order to support simple-framebuffer
for early display. Therefore, we cannot rely on the kernel version to
indicate whether iosys-map is present. This is also true for 3rd party
Linux kernels that backport changes as well. Fix this by adding a
compile time flag, that will be set accordingly by the conftest script
if this header is present.

Bug 4119327
Bug 4228080

Change-Id: I9de07a4615a6c9da504b36750c48e73e200da301
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2974080
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2023-09-14 13:28:18 -07:00
Shardar Mohammed
85732c9084 hwpm: remove unused vmas parameter from get_user_pages()
Remove unused vmas parameter from get_user_pages() based
on following change in core kernel.

=====
    Upstream commit "54d020692b34"

    mm/gup: remove unused vmas parameter from get_user_pages()

    Patch series "remove the vmas parameter from GUP APIs", v6.

    (pin_/get)_user_pages[_remote]() each provide an optional output parameter
    for an array of VMA objects associated with each page in the input range.

    These provide the means for VMAs to be returned, as long as mm->mmap_lock
    is never released during the GUP operation (i.e.  the internal flag
    FOLL_UNLOCKABLE is not specified).

    In addition, these VMAs can only be accessed with the mmap_lock held and
    become invalidated the moment it is released.

    The vast majority of invocations do not use this functionality and of
    those that do, all but one case retrieve a single VMA to perform checks
    upon.

    It is not egregious in the single VMA cases to simply replace the
    operation with a vma_lookup().  In these cases we duplicate the (fast)
    lookup on a slow path already under the mmap_lock, abstracted to a new
    get_user_page_vma_remote() inline helper function which also performs
    error checking and reference count maintenance.

    The special case is io_uring, where io_pin_pages() specifically needs to
    assert that the VMAs underlying the range do not result in broken
    long-term GUP file-backed mappings.

    As GUP now internally asserts that FOLL_LONGTERM mappings are not
    file-backed in a broken fashion (i.e.  requiring dirty tracking) - as
    implemented in "mm/gup: disallow FOLL_LONGTERM GUP-nonfast writing to
    file-backed mappings" - this logic is no longer required and so we can
    simply remove it altogether from io_uring.

    Eliminating the vmas parameter eliminates an entire class of dangling
    pointer errors that might have occurred should the lock have been
    incorrectly released.

    In addition, the API is simplified and now clearly expresses what it is
    intended for - applying the specified GUP flags and (if pinning) returning
    pinned pages.

    This change additionally opens the door to further potential improvements
    in GUP and the possible marrying of disparate code paths.

    I have run this series against gup_test with no issues.

    Thanks to Matthew Wilcox for suggesting this refactoring!

    This patch (of 6):

    No invocation of get_user_pages() use the vmas parameter, so remove it.

    The GUP API is confusing and caveated.  Recent changes have done much to
    improve that, however there is more we can do.  Exporting vmas is a prime
    target as the caller has to be extremely careful to preclude their use
    after the mmap_lock has expired or otherwise be left with dangling
    pointers.

    Removing the vmas parameter focuses the GUP functions upon their primary
    purpose - pinning (and outputting) pages as well as performing the actions
    implied by the input flags.

    This is part of a patch series aiming to remove the vmas parameter
    altogether.

    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
=====
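
For reference, an illustrative call-site update matching the change above;
v6.5 is the upstream release that dropped the vmas parameter, the names are
placeholders, and the driver itself selects the variant via conftest rather
than a raw version check:

    #include <linux/version.h>
    #include <linux/mm.h>

    /* Pin nr_pages user pages starting at 'start' (caller holds mmap_lock);
     * vmas was always NULL here, so only the argument count changes. */
    static long pin_user_buffer_sketch(unsigned long start, unsigned long nr_pages,
                                       struct page **pages)
    {
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 5, 0)
            return get_user_pages(start, nr_pages, FOLL_WRITE, pages);
    #else
            return get_user_pages(start, nr_pages, FOLL_WRITE, pages, NULL);
    #endif
    }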

Bug 4276500

Change-Id: Ie2833b7aa4e8fef1362694de6e8a27bba553e3d4
Signed-off-by: Shardar Mohammed <smohammed@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2978634
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-09-12 15:01:06 -07:00
Shardar Mohammed
f116216688 hwpm: Remove module owner parameter
Remove the module owner from the struct class based
on following change in core kernel

=====
    Upstream commit "6e30a66433af"

    driver core: class: remove struct module owner out of struct class

    The module owner field for a struct class was never actually used, so
    remove it as it is not doing anything at all.

    Cc: "Rafael J. Wysocki" <rafael@kernel.org>
    Link: https://lore.kernel.org/r/20230313181843.1207845-3-gregkh@linuxfoundation.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
=====

Bug 4276500

Change-Id: I0b68273e38f79ee6d903172b8f4d9d1807202abe
Signed-off-by: Shardar Mohammed <smohammed@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2978633
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-09-12 15:01:02 -07:00
vasukis
bbe13a4fa2 tegra: hwpm: Add support for next3 chip
- This patch adds support for the next3 chip in the hwpm kernel repo.
- Add a NULL check for fake registers before read/write operations.
- On the simulation platform, HWPM allocates memory to simulate the perfmux
and perfmon address spaces. Update the IP instance mask logic to assume the
perfmux is available.
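
A hedged sketch of the kind of guard the NULL check refers to (the
fake_registers field and the surrounding snippet are assumptions, not the
driver's exact code):

    /* Illustrative guard before touching simulated register memory. */
    if (element->fake_registers == NULL) {
            tegra_hwpm_err(hwpm, "fake register array not allocated");
            return -ENODEV;
    }
    *val = element->fake_registers[offset / sizeof(u32)];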

Jira THWPM-87
Jira THWPM-88

Change-Id: I6cdc882025d29268452c18b91873f4570f0d3462
Signed-off-by: vasukis <vasukis@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2924799
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Tested-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-08-25 11:15:27 -07:00
Vedashree Vidwans
034993b547 tegra: hwpm: add hwpm-next, k5.10 support
Kernel 5.10 support
- Use code from the HWPM repo with kernel 5.10 builds.
- Add HWPM source files as a built-in driver, since IP drivers like PVA and
DLA are built-in and depend on HWPM for IP registration.

HWPM next chips support
- Currently, only the HWPM code in the current (public) repo is included in
compilation on TOT. This patch includes the Makefile from the HWPM next repo
for next/future chips.

Jira THWPM-69

Change-Id: I8f2bbcabf0c01f2b2cbc722c481a1fe83490c76b
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2921358
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-08-25 11:15:23 -07:00
Laxman Dewangan
808f5666a0 tegra: hwpm: nvhwpm support on PROD builds
- CONFIG_TEGRA_LINUX_PROD indicates whether the build is a Prod or
Prod_Debug build. Use this config to compile the mock hwpm driver for
Prod builds, because IPs like PVA and NVDLA depend on the
tegra_soc_hwpm_ip_register and tegra_soc_hwpm_ip_unregister symbols
during boot (see the sketch after this list).

- The packaging files only look for the nvhwpm driver across all build
flavors. Hence, keep the same HWPM driver name regardless of how it is
built, fully supported or mock.
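
A hedged sketch of why the mock driver still satisfies its dependents: the
IP drivers only need the two symbols to resolve, so a mock build can export
no-op stubs (the signatures below are assumptions, not the shipped code):

    /* Illustrative stubs for Prod builds; full builds do HWPM bookkeeping. */
    void tegra_soc_hwpm_ip_register(struct tegra_soc_hwpm_ip_ops *ip_ops)
    {
            /* no-op in the mock driver */
    }
    EXPORT_SYMBOL(tegra_soc_hwpm_ip_register);

    void tegra_soc_hwpm_ip_unregister(struct tegra_soc_hwpm_ip_ops *ip_ops)
    {
            /* no-op in the mock driver */
    }
    EXPORT_SYMBOL(tegra_soc_hwpm_ip_unregister);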

Bug 4206386

Change-Id: Ic554a7e7a22d55adb802636fd669c7d1fcb82830
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
(cherry picked from commit e2f7b1a75312cfe486d9b256aefa263c151ccb68)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2948941
Tested-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-08-05 13:11:09 -07:00
Vedashree Vidwans
4a4774bc0a tegra: hwpm: fix bug in hwpm unregister
Currently, both register and unregister calls to HWPM mark the IP as
available. Fix this bug by updating tegra_hwpm_record_ip_ops() to accept
the IP's availability as a boolean argument.
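
A hedged sketch of the shape of that fix (only the added boolean comes from
the description above; the other parameters are assumptions):

    int tegra_hwpm_record_ip_ops(struct tegra_soc_hwpm *hwpm,
                                 struct tegra_hwpm_ip_ops *ip_ops,
                                 bool available);

    /* register path   */ tegra_hwpm_record_ip_ops(hwpm, ip_ops, true);
    /* unregister path */ tegra_hwpm_record_ip_ops(hwpm, ip_ops, false);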

Jira THWPM-8

Change-Id: I5a80ffa7ff20c1dc94528f20fd760a4f09721910
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2925492
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-06-30 17:56:18 -07:00
Vedashree Vidwans
fe31e92d6c tegra: hwpm: enable video unit profiling
Enable HWPM profiling for VIC, OFA and NVENC video units in external
builds.

Bug 4158291

Change-Id: I09589bbd70de2f1061dc91926f689266f36d062c
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2914401
(cherry picked from commit b76c2ace05b5621a6f0d1fcbd9456366029a56a7)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2924713
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-06-30 12:46:34 -07:00
Vedashree Vidwans
91d75567c0 tegra: hwpm: include all ip files
The config flags defined in the Kconfig file are not available/used with
OOT kernel builds. To support all kernel versions, HWPM compiles
independent of the CONFIG_TEGRA_SOC_HWPM flag. This also applies to the
IP config flags, which are likewise unsupported. Hence, include the HWPM
IP files irrespective of the IP config flag status.

For OOT builds, use tegra_is_hypervisor_mode() instead of the static
function defined in the HWPM driver.
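
A rough illustration of that last point (the wrapper and the in-tree helper
named here are hypothetical; only tegra_is_hypervisor_mode() is taken from
the description):

    static bool hwpm_running_in_hypervisor(void)
    {
    #ifdef CONFIG_TEGRA_HWPM_OOT
            /* OOT builds call the exported Tegra SoC helper. */
            return tegra_is_hypervisor_mode();
    #else
            /* In-tree builds kept a driver-local check. */
            return hwpm_local_hypervisor_check();  /* hypothetical helper */
    #endif
    }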

Bug 4061775

Change-Id: Ifab4ad5c7c652a4ad17820a82b363e92280fdd1a
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2918870
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-06-15 12:16:23 -07:00
198 changed files with 44164 additions and 1028 deletions

15
BUILD.bazel Normal file
View File

@@ -0,0 +1,15 @@
# SPDX-License-Identifier: GPL-2.0-only
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
package(
default_visibility = [
"//visibility:public",
],
)
filegroup(
name = "hwpm_headers",
srcs = glob([
"include/**/*.h", ]),
)

15
Makefile.yocto Normal file
View File

@@ -0,0 +1,15 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# The makefile to install public headers on desired path.
# Get the path of Makefile
ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
headers_install:
mkdir -p $(INSTALL_HDR_PATH); \
rsync -mrl --include='*/' --include='*\.h' --exclude='*' \
$(ROOT_DIR)/include $(INSTALL_HDR_PATH)
clean:
rm -rf $(INSTALL_HDR_PATH)

View File

@@ -0,0 +1,41 @@
# SPDX-License-Identifier: GPL-2.0-only
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
load("//build/kernel/kleaf:kernel.bzl", "kernel_module")
package(
default_visibility = [
"//visibility:public",
],
)
filegroup(
name = "headers",
srcs = glob([
] + [
"Makefile.hwpm.sources",
"Makefile.t234.sources",
"Makefile.t264.sources",
"Makefile.th500.sources",
"Makefile.common.sources",
"Makefile.linux.sources",
"Makefile.th500.soc.sources",
]),
)
kernel_module(
name = "hwpm",
srcs = glob([
"**/*.c",
"**/*.h",
]) + [
":headers",
"//hwpm:hwpm_headers",
"//nvidia-oot/scripts/conftest:conftest_headers",
],
outs = [
"nvhwpm.ko",
],
kernel_build = "//nvidia-build/kleaf:tegra_android",
)

View File

@@ -1,6 +1,6 @@
config TEGRA_SOC_HWPM
tristate "Tegra SOC HWPM driver"
default m
bool "Tegra SOC HWPM driver"
default y
help
The SOC HWPM driver enables performance monitoring for various Tegra
IPs.
@@ -10,4 +10,18 @@ config TEGRA_T234_HWPM
depends on TEGRA_SOC_HWPM && ARCH_TEGRA_23x_SOC
default y if (TEGRA_SOC_HWPM && ARCH_TEGRA_23x_SOC)
help
T23x performance monitoring driver.
T23x performance monitoring driver.
config TEGRA_TH500_HWPM
bool "Tegra TH500 HWPM driver"
depends on TEGRA_SOC_HWPM && ARCH_TEGRA_TH500_SOC
default y if (TEGRA_SOC_HWPM && ARCH_TEGRA_TH500_SOC)
help
TH500 performance monitoring driver.
config TEGRA_T264_HWPM
bool "Tegra T264 HWPM driver"
depends on TEGRA_SOC_HWPM && ARCH_TEGRA_T264_SOC
default y if (TEGRA_SOC_HWPM && ARCH_TEGRA_T264_SOC)
help
T264 performance monitoring driver.

View File

@@ -1,4 +1,24 @@
# Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
# -*- mode: makefile -*-
#
# Copyright (c) 2022-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM
#
@@ -9,41 +29,61 @@ ifeq ($(origin srctree.hwpm), undefined)
srctree.hwpm := $(abspath $(shell dirname $(lastword $(MAKEFILE_LIST))))/../../..
endif
CONFIG_TEGRA_SOC_HWPM := y
ccflags-y += -DCONFIG_TEGRA_SOC_HWPM
CONFIG_TEGRA_T234_HWPM := y
ccflags-y += -DCONFIG_TEGRA_T234_HWPM
NVHWPM_OBJ = m
ifdef CONFIG_TEGRA_KLEAF_BUILD
srctree.nvconftest := $(abspath $(NV_BUILD_KERNEL_NVCONFTEST_OUT))
endif
# For OOT builds, set required config flags
ifeq ($(CONFIG_TEGRA_OOT_MODULE),m)
NVHWPM_OBJ = m
CONFIG_TEGRA_HWPM_OOT := y
ccflags-y += -DCONFIG_TEGRA_HWPM_OOT
CONFIG_TEGRA_FUSE_UPSTREAM := y
ccflags-y += -DCONFIG_TEGRA_FUSE_UPSTREAM
LINUXINCLUDE += -I$(srctree.nvconftest)
LINUXINCLUDE += -I$(srctree.hwpm)/include
LINUXINCLUDE += -I$(srctree.hwpm)/drivers/tegra/hwpm/include
LINUXINCLUDE += -I$(srctree.hwpm)/drivers/tegra/hwpm
ifneq ($(srctree.nvconftest),)
ccflags-y += -DCONFIG_TEGRA_HWPM_CONFTEST
ccflags-y += -I$(srctree.nvconftest)
endif
else
ccflags-y += -I$(srctree.nvidia)/include
else # CONFIG_TEGRA_OOT_MODULE != m
NVHWPM_OBJ = y
endif # CONFIG_TEGRA_OOT_MODULE
# Include paths
ccflags-y += -I$(srctree.hwpm)/include
ccflags-y += -I$(srctree.hwpm)/drivers/tegra/hwpm/include
ccflags-y += -I$(srctree.hwpm)/drivers/tegra/hwpm
endif
# Validate build config to add HWPM module support
ifeq ($(NV_BUILD_CONFIGURATION_IS_SAFETY),1)
obj-${NVHWPM_OBJ} += tegra_hwpm_mock.o
nvhwpm-objs := tegra_hwpm_mock.o
else ifeq ($(CONFIG_TEGRA_LINUX_PROD),1)
nvhwpm-objs := tegra_hwpm_mock.o
else ifneq ($(CONFIG_ARCH_TEGRA),y)
nvhwpm-objs := tegra_hwpm_mock.o
else
# Add required objects to nvhwpm object variable
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.sources
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.hwpm.sources
endif
obj-${NVHWPM_OBJ} += nvhwpm.o
ifdef CONFIG_TEGRA_KLEAF_BUILD
KERNEL_SRC ?= /lib/modules/$(shell uname -r)/build
M ?= $(shell pwd)
modules modules_install:
make -C $(KERNEL_SRC) M=$(M) $(ccflags) CONFIG_TEGRA_OOT_MODULE=m $(@)
clean:
make -C $(KERNEL_SRC) M=$(M) CONFIG_TEGRA_OOT_MODULE=m clean
else
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) $(ccflags) CONFIG_TEGRA_OOT_MODULE=m modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) CONFIG_TEGRA_OOT_MODULE=m clean
endif

View File

@@ -1,9 +1,28 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM Common sources
#
# SPDX-License-Identifier: GPL-2.0
nvhwpm-common-objs += common/allowlist.o
nvhwpm-common-objs += common/aperture.o
nvhwpm-common-objs += common/ip.o

View File

@@ -0,0 +1,98 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM Sources
#
# Based on build config, set HWPM flags
# Flag indicates internal build config
ifeq ($(NV_BUILD_CONFIGURATION_IS_EXTERNAL), 0)
CONFIG_HWPM_BUILD_INTERNAL := y
endif
# T234 supported on all valid platforms
CONFIG_TEGRA_HWPM_T234 := y
# TH500 supported only on OOT config
ifeq ($(CONFIG_TEGRA_HWPM_OOT),y)
ifeq ($(NV_BUILD_CONFIGURATION_EXPOSING_TH50X), 1)
CONFIG_TEGRA_HWPM_TH500 := y
endif
endif
# Set HWPM next path and include sources as per build config
ifeq ($(CONFIG_TEGRA_HWPM_OOT),y)
srctree.hwpm-next := ${srctree.hwpm}
# Include next sources only if Makefile.hwpm-next.sources exists
ifneq ($(wildcard ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources),)
include ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources
nvhwpm-objs += ${nvhwpm-next-objs}
endif
else # Non-OOT kernel
ifeq ($(origin NV_SOURCE), undefined)
ifeq ($(origin TEGRA_TOP), undefined)
# No reference to hwpm-next repo
else
srctree.hwpm-next := ${TEGRA_TOP}/kernel/hwpm-next
endif
else
srctree.hwpm-next := ${NV_SOURCE}/kernel/hwpm-next
endif
ifneq ($(origin srctree.hwpm-next), undefined)
ifneq ($(wildcard ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources),)
include ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources
nvhwpm-objs += ${nvhwpm-next-objs}
endif
endif
endif # CONFIG_TEGRA_HWPM_OOT
# Include common files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.common.sources
nvhwpm-objs += ${nvhwpm-common-objs}
# Include linux files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.linux.sources
nvhwpm-objs += ${nvhwpm-linux-objs}
ifeq ($(CONFIG_TEGRA_HWPM_T234),y)
ccflags-y += -DCONFIG_TEGRA_HWPM_T234
# Include T234 files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.t234.sources
nvhwpm-objs += ${nvhwpm-t234-objs}
endif
ifeq ($(CONFIG_TEGRA_HWPM_TH500),y)
ccflags-y += -DCONFIG_TEGRA_HWPM_TH500
# Include TH500 files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.th500.sources
nvhwpm-objs += ${nvhwpm-th500-objs}
endif
# Include T264 files
CONFIG_TEGRA_T264_HWPM := y
ccflags-y += -DCONFIG_TEGRA_T264_HWPM
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.t264.sources
nvhwpm-objs += ${nvhwpm-t264-objs}

View File

@@ -1,9 +1,28 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM Linux Sources
#
# SPDX-License-Identifier: GPL-2.0
nvhwpm-linux-objs += os/linux/aperture_utils.o
nvhwpm-linux-objs += os/linux/clk_rst_utils.o
nvhwpm-linux-objs += os/linux/driver.o

View File

@@ -1,18 +0,0 @@
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Tegra SOC HWPM Sources
#
# Include common files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.common.sources
nvhwpm-objs += ${nvhwpm-common-objs}
# Include linux files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.linux.sources
nvhwpm-objs += ${nvhwpm-linux-objs}
ifeq ($(CONFIG_TEGRA_T234_HWPM),y)
# Include T234 files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.t234.sources
nvhwpm-objs += ${nvhwpm-t234-objs}
endif

View File

@@ -1,10 +1,29 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM T234 sources
#
# SPDX-License-Identifier: GPL-2.0
ifeq ($(CONFIG_TEGRA_T234_HWPM),y)
ifeq ($(CONFIG_TEGRA_HWPM_T234),y)
nvhwpm-t234-objs += hal/t234/t234_aperture.o
nvhwpm-t234-objs += hal/t234/t234_interface.o
nvhwpm-t234-objs += hal/t234/t234_ip.o
@@ -31,47 +50,18 @@ nvhwpm-t234-objs += hal/t234/ip/pma/t234_pma.o
# driver and share required function pointers
# - option 2: map perfmux register address in HWPM driver
# Option 1 is the preferred solution. However, IP drivers have yet to
# implement the interface. Such IPs can be force enabled from HWPM driver
# perspective (option 2). Marking an IP available forcefully requires the user
# to unpowergate the IP before running any HWPM experiments.
# implement the interface. HWPM driver implements option 2 for validation of such IPs.
# If an IP is forced to available status from the HWPM driver perspective, it is the user's
# responsibility to ensure that the IP is in fact present in the SOC config and
# unpowergated before running any HWPM experiments.
#
# Enable CONFIG_T234_HWPM_ALLOW_FORCE_ENABLE for internal builds.
# Note: We should work towards removing force enable flag dependency.
# Note: We should work towards removing force enabling IP.
#
ifeq ($(NV_BUILD_CONFIGURATION_IS_EXTERNAL),0)
ifeq ($(CONFIG_HWPM_BUILD_INTERNAL),y)
ccflags-y += -DCONFIG_T234_HWPM_ALLOW_FORCE_ENABLE
endif
#
# Currently, PVA, DLA and MSS channel are the IPs supported
# for performance metrics in external builds.
# Define CONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE flag.
#
ifeq ($(NV_BUILD_CONFIGURATION_IS_EXTERNAL),1)
CONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE=y
ccflags-y += -DCONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE
endif
ccflags-y += -DCONFIG_T234_HWPM_IP_NVDLA
nvhwpm-t234-objs += hal/t234/ip/nvdla/t234_nvdla.o
ccflags-y += -DCONFIG_T234_HWPM_IP_PVA
nvhwpm-t234-objs += hal/t234/ip/pva/t234_pva.o
ccflags-y += -DCONFIG_T234_HWPM_IP_MSS_CHANNEL
nvhwpm-t234-objs += hal/t234/ip/mss_channel/t234_mss_channel.o
ccflags-y += -DCONFIG_T234_HWPM_IP_NVENC
nvhwpm-t234-objs += hal/t234/ip/nvenc/t234_nvenc.o
ccflags-y += -DCONFIG_T234_HWPM_IP_OFA
nvhwpm-t234-objs += hal/t234/ip/ofa/t234_ofa.o
ccflags-y += -DCONFIG_T234_HWPM_IP_VIC
nvhwpm-t234-objs += hal/t234/ip/vic/t234_vic.o
# Include other IPs if minimal build is not enabled.
ifneq ($(CONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE),y)
# Include non-prod IPs if minimal build is not enabled for validation
ccflags-y += -DCONFIG_T234_HWPM_IP_DISPLAY
nvhwpm-t234-objs += hal/t234/ip/display/t234_display.o
@@ -101,7 +91,29 @@ nvhwpm-t234-objs += hal/t234/ip/scf/t234_scf.o
ccflags-y += -DCONFIG_T234_HWPM_IP_VI
nvhwpm-t234-objs += hal/t234/ip/vi/t234_vi.o
endif
# Below IPs are enabled for all builds
ccflags-y += -DCONFIG_T234_HWPM_IP_NVDLA
nvhwpm-t234-objs += hal/t234/ip/nvdla/t234_nvdla.o
ccflags-y += -DCONFIG_T234_HWPM_IP_PVA
nvhwpm-t234-objs += hal/t234/ip/pva/t234_pva.o
ccflags-y += -DCONFIG_T234_HWPM_IP_MSS_CHANNEL
nvhwpm-t234-objs += hal/t234/ip/mss_channel/t234_mss_channel.o
ccflags-y += -DCONFIG_T234_HWPM_IP_NVENC
nvhwpm-t234-objs += hal/t234/ip/nvenc/t234_nvenc.o
ccflags-y += -DCONFIG_T234_HWPM_IP_OFA
nvhwpm-t234-objs += hal/t234/ip/ofa/t234_ofa.o
ccflags-y += -DCONFIG_T234_HWPM_IP_VIC
nvhwpm-t234-objs += hal/t234/ip/vic/t234_vic.o
ccflags-y += -DCONFIG_T234_HWPM_IP_NVDEC
nvhwpm-t234-objs += hal/t234/ip/nvdec/t234_nvdec.o
endif

View File

@@ -0,0 +1,97 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM T264 sources
#
ifeq ($(CONFIG_TEGRA_HWPM_T234),y)
nvhwpm-t264-objs += hal/t264/t264_aperture.o
nvhwpm-t264-objs += hal/t264/t264_interface.o
nvhwpm-t264-objs += hal/t264/t264_ip.o
nvhwpm-t264-objs += hal/t264/t264_mem_mgmt.o
nvhwpm-t264-objs += hal/t264/t264_resource.o
nvhwpm-t264-objs += hal/t264/t264_regops_allowlist.o
#
# RTR/PMA are HWPM IPs and can be enabled by default
#
nvhwpm-t264-objs += hal/t264/ip/pma/t264_pma.o
nvhwpm-t264-objs += hal/t264/ip/rtr/t264_rtr.o
#
# One of the HWPM components is a perfmux. Perfmux registers belong to the
# IP domain. There are 2 ways of accessing perfmux registers
# - option 1: implement HWPM <-> IP interface. IP drivers register with HWPM
# driver and share required function pointers
# - option 2: map perfmux register address in HWPM driver
# Option 1 is the preferred solution. However, IP drivers have yet to
# implement the interface. HWPM driver implements option 2 for validation of such IPs.
# If an IP is forced to available status from the HWPM driver perspective, it is the user's
# responsibility to ensure that the IP is in fact present in the SOC config and
# unpowergated before running any HWPM experiments.
#
# Enable CONFIG_T264_HWPM_ALLOW_FORCE_ENABLE for internal builds.
# Note: We should work towards removing force enabling IP.
#
ifeq ($(CONFIG_HWPM_BUILD_INTERNAL),y)
ccflags-y += -DCONFIG_T264_HWPM_ALLOW_FORCE_ENABLE
endif
# Below IPs are enabled for all builds
ccflags-y += -DCONFIG_T264_HWPM_IP_PVA
nvhwpm-t264-objs += hal/t264/ip/pva/t264_pva.o
ccflags-y += -DCONFIG_T264_HWPM_IP_MSS_CHANNEL
nvhwpm-t264-objs += hal/t264/ip/mss_channel/t264_mss_channel.o
ccflags-y += -DCONFIG_T264_HWPM_IP_VIC
nvhwpm-t264-objs += hal/t264/ip/vic/t264_vic.o
ccflags-y += -DCONFIG_T264_HWPM_IP_MSS_HUBS
nvhwpm-t264-objs += hal/t264/ip/mss_hubs/t264_mss_hubs.o
ccflags-y += -DCONFIG_T264_HWPM_IP_OCU
nvhwpm-t264-objs += hal/t264/ip/ocu/t264_ocu.o
ccflags-y += -DCONFIG_T264_HWPM_IP_SMMU
nvhwpm-t264-objs += hal/t264/ip/smmu/t264_smmu.o
ccflags-y += -DCONFIG_T264_HWPM_IP_UCF_MSW
nvhwpm-t264-objs += hal/t264/ip/ucf_msw/t264_ucf_msw.o
ccflags-y += -DCONFIG_T264_HWPM_IP_UCF_PSW
nvhwpm-t264-objs += hal/t264/ip/ucf_psw/t264_ucf_psw.o
ccflags-y += -DCONFIG_T264_HWPM_IP_UCF_CSW
nvhwpm-t264-objs += hal/t264/ip/ucf_csw/t264_ucf_csw.o
ccflags-y += -DCONFIG_T264_HWPM_IP_CPU
nvhwpm-t264-objs += hal/t264/ip/cpu/t264_cpu.o
ccflags-y += -DCONFIG_T264_HWPM_IP_VI
nvhwpm-t264-objs += hal/t264/ip/vi/t264_vi.o
ccflags-y += -DCONFIG_T264_HWPM_IP_ISP
nvhwpm-t264-objs += hal/t264/ip/isp/t264_isp.o
endif

View File

@@ -0,0 +1,101 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM TH500 SOC sources
#
ifeq ($(CONFIG_TEGRA_HWPM_TH500),y)
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_aperture.o
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_mem_mgmt.o
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_regops_allowlist.o
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_resource.o
#
# Control IP config
# To disable an IP config in compilation, add condition for both
# IP config flag and IP specific .o file.
#
#
# RTR/PMA are HWPM IPs and can be enabled by default
#
nvhwpm-th500-soc-objs += hal/th500/soc/ip/rtr/th500_rtr.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pma/th500_pma.o
#
# One of the HWPM components is a perfmux. Perfmux registers belong to the
# IP domain. There are 2 ways of accessing perfmux registers
# - option 1: implement HWPM <-> IP interface. IP drivers register with HWPM
# driver and share required function pointers
# - option 2: map perfmux register address in HWPM driver
# Option 1 is the preferred solution. However, IP drivers have yet to
# implement the interface. HWPM driver implements option 2 for validation of such IPs.
# If an IP is forced to available status from the HWPM driver perspective, it is the user's
# responsibility to ensure that the IP is in fact present in the SOC config and
# unpowergated before running any HWPM experiments.
#
# Enable CONFIG_HWPM_TH500_ALLOW_FORCE_ENABLE for internal builds.
# Note: We should work towards removing force enabling IP.
#
ifeq ($(CONFIG_HWPM_BUILD_INTERNAL),y)
ccflags-y += -DCONFIG_TH500_HWPM_ALLOW_FORCE_ENABLE
ccflags-y += -DCONFIG_TH500_HWPM_IP_MSS_CHANNEL
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mss_channel/th500_mss_channel.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_C2C
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c2c/th500_c2c.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_SMMU
nvhwpm-th500-soc-objs += hal/th500/soc/ip/smmu/th500_smmu.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_CL2
nvhwpm-th500-soc-objs += hal/th500/soc/ip/cl2/th500_cl2.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_C_NVLINK
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c_nvlink/th500_nvlrx.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c_nvlink/th500_nvltx.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c_nvlink/th500_nvlctrl.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MSS_HUB
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mss_hub/th500_mss_hub.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_SOC
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_soc/th500_mcf_soc.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_C2C
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_c2c/th500_mcf_c2c.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_CLINK
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_clink/th500_mcf_clink.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_CORE
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_core/th500_mcf_core.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_PCIE
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pcie/th500_pcie_xalrc.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pcie/th500_pcie_xtlrc.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pcie/th500_pcie_xtlq.o
endif # CONFIG_HWPM_BUILD_INTERNAL=y
endif

View File

@@ -0,0 +1,34 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM TH500 sources
#
ifeq ($(CONFIG_TEGRA_HWPM_TH500),y)
nvhwpm-th500-objs += hal/th500/th500_interface.o
nvhwpm-th500-objs += hal/th500/th500_ip.o
# Include TH500 SOC files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.th500.soc.sources
nvhwpm-th500-objs += $(nvhwpm-th500-soc-objs)
endif

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -213,12 +213,35 @@ static int tegra_hwpm_alloc_dynamic_inst_element_array(
return 0;
}
/* This is for IP that is pre-configured with instance overlimit. */
if (inst_a_info->islots_overlimit == true) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP inst range(0x%llx-0x%llx) a_type = %d inst_slots %d"
"forced over limit, skip allocating dynamic array",
(unsigned long long)inst_a_info->range_start,
(unsigned long long)inst_a_info->range_end,
a_type, inst_a_info->inst_slots);
return 0;
}
ip_element_range = tegra_hwpm_safe_add_u64(
tegra_hwpm_safe_sub_u64(inst_a_info->range_end,
inst_a_info->range_start), 1ULL);
inst_a_info->inst_slots = tegra_hwpm_safe_cast_u64_to_u32(
ip_element_range / inst_a_info->inst_stride);
if (inst_a_info->inst_slots > TEGRA_HWPM_APERTURE_SLOTS_LIMIT) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP inst range(0x%llx-0x%llx) a_type = %d inst_slots %d"
"over limit, skip allocating dynamic array",
(unsigned long long)inst_a_info->range_start,
(unsigned long long)inst_a_info->range_end,
a_type, inst_a_info->inst_slots);
inst_a_info->islots_overlimit = true;
/* This is a valid case */
return 0;
}
inst_a_info->inst_arr = tegra_hwpm_kcalloc(
hwpm, inst_a_info->inst_slots, sizeof(struct hwpm_ip_inst *));
if (inst_a_info->inst_arr == NULL) {
@@ -268,14 +291,14 @@ fail:
static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 static_inst_idx, u32 a_type, u32 static_aperture_idx)
u32 s_inst_idx, u32 a_type, u32 s_element_idx)
{
int err = 0, ret = 0;
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[static_inst_idx];
&chip_ip->ip_inst_static_array[s_inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
struct hwpm_ip_aperture *element =
&e_info->element_static_array[static_aperture_idx];
&e_info->element_static_array[s_element_idx];
u64 element_offset = 0ULL;
u32 idx = 0U;
u32 reg_val = 0U;
@@ -284,6 +307,14 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
switch (iia_func) {
case TEGRA_HWPM_INIT_IP_STRUCTURES:
if (e_info->eslots_overlimit) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP %d s_inst_idx %d a_type %u s_element_idx %u"
"Skip using dynamic element array",
ip_idx, s_inst_idx, a_type, s_element_idx);
break;
}
/* Compute element offset from element range start */
element_offset = tegra_hwpm_safe_sub_u64(
element->start_abs_pa, e_info->range_start);
@@ -295,9 +326,10 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx static idx %d == dynamic idx %d",
ip_idx, static_inst_idx, a_type,
element->element_type, (unsigned long long)element->start_abs_pa,
static_aperture_idx, idx);
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa,
s_element_idx, idx);
/* Set element slot pointer */
e_info->element_arr[idx] = element;
@@ -320,10 +352,24 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
}
}
/* Validate perfmux availability by reading 1st alist offset */
ret = tegra_hwpm_regops_readl(hwpm, ip_inst, element,
tegra_hwpm_safe_add_u64(element->start_abs_pa,
element->alist[0U].reg_offset), &reg_val);
if (hwpm->fake_registers_enabled) {
/*
* In this case, HWPM will allocate memory to simulate
* IP perfmux address space. Hence, the perfmux will
* always be available.
* Indicate this by setting ret = 0.
*/
ret = 0;
} else {
/*
* Validate perfmux availability by reading 1st alist offset
*/
ret = tegra_hwpm_regops_readl(hwpm, ip_inst, element,
tegra_hwpm_safe_add_u64(element->start_abs_pa,
element->alist[0U].reg_offset),
&reg_val);
}
if (ret != 0) {
/*
* If an IP element is unavailable, perfmux register
@@ -349,7 +395,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_allowlist,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -362,7 +408,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
} else {
tegra_hwpm_err(hwpm, "IP %d"
" element type %d static_idx %d NULL alist",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
}
break;
case TEGRA_HWPM_COMBINE_ALIST:
@@ -371,7 +417,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_allowlist,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -382,7 +428,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_err(hwpm,
"IP %d element type %d static_idx %d"
" alist copy failed",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
return err;
}
break;
@@ -392,7 +438,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_reserve_resource,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reservable",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -401,7 +447,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d static_idx %d reserve failed",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
goto fail;
}
break;
@@ -412,7 +458,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_release_resource,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -421,7 +467,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (ret != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d release failed",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
}
break;
case TEGRA_HWPM_BIND_RESOURCES:
@@ -430,7 +476,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_bind,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -440,7 +486,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d zero regs failed",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
goto fail;
}
@@ -448,7 +494,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d enable failed",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
goto fail;
}
break;
@@ -458,7 +504,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_bind,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -467,8 +513,8 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
err = tegra_hwpm_element_disable(hwpm, element);
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d enable failed",
ip_idx, a_type, static_aperture_idx);
" type %d idx %d disable failed",
ip_idx, a_type, s_element_idx);
goto fail;
}
@@ -477,7 +523,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d zero regs failed",
ip_idx, a_type, static_aperture_idx);
ip_idx, a_type, s_element_idx);
goto fail;
}
break;
@@ -487,7 +533,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_release,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -507,13 +553,13 @@ fail:
static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 static_inst_idx, u32 a_type)
u32 s_inst_idx, u32 a_type)
{
u32 static_idx = 0U, idx = 0U;
u64 inst_element_range = 0ULL;
int err = 0;
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[static_inst_idx];
&chip_ip->ip_inst_static_array[s_inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
tegra_hwpm_fn(hwpm, " ");
@@ -523,7 +569,23 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
/* no a_type elements in this IP */
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"No a_type = %d elements in IP %d stat inst %d",
a_type, ip_idx, static_inst_idx);
a_type, ip_idx, s_inst_idx);
return 0;
}
/**
* This is for IP instance that is pre-configured with element
* overlimit.
*/
if (e_info->eslots_overlimit == true) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"iia_func %d IP %d static inst %d a_type %d"
" element range(0x%llx-0x%llx) element_slots %d "
"force over limit, skip allocating dynamic array",
iia_func, ip_idx, s_inst_idx, a_type,
(unsigned long long)e_info->range_start,
(unsigned long long)e_info->range_end,
e_info->element_slots);
return 0;
}
@@ -533,6 +595,20 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
e_info->element_slots = tegra_hwpm_safe_cast_u64_to_u32(
inst_element_range / e_info->element_stride);
if (e_info->element_slots > TEGRA_HWPM_APERTURE_SLOTS_LIMIT) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"iia_func %d IP %d static inst %d a_type %d"
" element range(0x%llx-0x%llx) element_slots %d "
"over limit, skip allocating dynamic array",
iia_func, ip_idx, s_inst_idx, a_type,
(unsigned long long)e_info->range_start,
(unsigned long long)e_info->range_end,
e_info->element_slots);
e_info->eslots_overlimit = true;
/* This is a valid case */
return 0;
}
e_info->element_arr = tegra_hwpm_kcalloc(
hwpm, e_info->element_slots,
sizeof(struct hwpm_ip_aperture *));
@@ -550,7 +626,7 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
"iia_func %d IP %d static inst %d a_type %d"
" element range(0x%llx-0x%llx) element_slots %d "
"num_element_per_inst %d",
iia_func, ip_idx, static_inst_idx, a_type,
iia_func, ip_idx, s_inst_idx, a_type,
(unsigned long long)e_info->range_start,
(unsigned long long)e_info->range_end,
e_info->element_slots, e_info->num_element_per_inst);
@@ -567,11 +643,11 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
static_idx++) {
err = tegra_hwpm_func_single_element(
hwpm, func_args, iia_func, ip_idx,
chip_ip, static_inst_idx, a_type, static_idx);
chip_ip, s_inst_idx, a_type, static_idx);
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d a_type %d idx %d func %d failed",
ip_idx, static_inst_idx, a_type,
ip_idx, s_inst_idx, a_type,
static_idx, iia_func);
goto fail;
}
@@ -591,7 +667,7 @@ fail:
static int tegra_hwpm_func_all_elements(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 static_inst_idx)
u32 s_inst_idx)
{
u32 a_type;
int err = 0;
@@ -600,11 +676,11 @@ static int tegra_hwpm_func_all_elements(struct tegra_soc_hwpm *hwpm,
for (a_type = 0U; a_type < TEGRA_HWPM_APERTURE_TYPE_MAX; a_type++) {
err = tegra_hwpm_func_all_elements_of_type(hwpm, func_args,
iia_func, ip_idx, chip_ip, static_inst_idx, a_type);
iia_func, ip_idx, chip_ip, s_inst_idx, a_type);
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d a_type %d func %d failed",
ip_idx, static_inst_idx, a_type, iia_func);
ip_idx, s_inst_idx, a_type, iia_func);
goto fail;
}
}
@@ -617,13 +693,13 @@ fail:
static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 static_inst_idx)
u32 s_inst_idx)
{
int err = 0;
u32 a_type, idx = 0U;
u64 inst_offset = 0ULL;
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[static_inst_idx];
&chip_ip->ip_inst_static_array[s_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info = NULL;
struct hwpm_ip_element_info *e_info = NULL;
@@ -637,8 +713,15 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
if (inst_a_info->range_end == 0ULL) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"No a_type = %d elements in IP %d",
a_type, ip_idx);
"No a_type = %d elements in IP %d",
a_type, ip_idx);
continue;
}
if (inst_a_info->islots_overlimit) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP %d s_inst_idx %d Skip using dynamic instance array",
ip_idx, s_inst_idx);
continue;
}
@@ -654,8 +737,10 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
"IP %d a_type %d inst range start 0x%llx"
"element range start 0x%llx"
" static inst idx %d == dynamic idx %d",
ip_idx, a_type, (unsigned long long)inst_a_info->range_start,
(unsigned long long)e_info->range_start, static_inst_idx, idx);
ip_idx, a_type,
(unsigned long long)inst_a_info->range_start,
(unsigned long long)e_info->range_start,
s_inst_idx, idx);
/* Set perfmux slot pointer */
inst_a_info->inst_arr[idx] = ip_inst;
@@ -674,17 +759,17 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d power mgmt disable failed",
ip_idx, static_inst_idx);
ip_idx, s_inst_idx);
goto fail;
}
}
/* Continue functionality for all apertures */
err = tegra_hwpm_func_all_elements(hwpm, func_args, iia_func,
ip_idx, chip_ip, static_inst_idx);
ip_idx, chip_ip, s_inst_idx);
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d inst %d func 0x%x failed",
ip_idx, static_inst_idx, iia_func);
ip_idx, s_inst_idx, iia_func);
goto fail;
}
@@ -707,7 +792,7 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d power mgmt enable failed",
ip_idx, static_inst_idx);
ip_idx, s_inst_idx);
goto fail;
}
}
@@ -721,22 +806,22 @@ static int tegra_hwpm_func_all_inst(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip)
{
int err = 0, ret = 0;
u32 inst_idx = 0U;
u32 s_inst_idx = 0U;
unsigned long reserved_insts = 0UL, idx = 0UL;
tegra_hwpm_fn(hwpm, " ");
for (inst_idx = 0U; inst_idx < chip_ip->num_instances; inst_idx++) {
for (s_inst_idx = 0U; s_inst_idx < chip_ip->num_instances; s_inst_idx++) {
err = tegra_hwpm_func_single_inst(hwpm, func_args, iia_func,
ip_idx, chip_ip, inst_idx);
ip_idx, chip_ip, s_inst_idx);
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d inst %d func 0x%x failed",
ip_idx, inst_idx, iia_func);
ip_idx, s_inst_idx, iia_func);
goto fail;
}
if (iia_func == TEGRA_HWPM_RESERVE_GIVEN_RESOURCE) {
reserved_insts |= BIT(inst_idx);
reserved_insts |= BIT(s_inst_idx);
}
}
@@ -827,7 +912,7 @@ int tegra_hwpm_func_single_ip(struct tegra_soc_hwpm *hwpm,
}
break;
case TEGRA_HWPM_RELEASE_RESOURCES:
if (ip_idx == active_chip->get_rtr_int_idx(hwpm)) {
if (ip_idx == active_chip->get_rtr_int_idx()) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_release_resource,
"Router will be released later");
return 0;
@@ -919,7 +1004,7 @@ int tegra_hwpm_func_all_ip(struct tegra_soc_hwpm *hwpm,
func_args->full_alist_idx = 0ULL;
}
for (ip_idx = 0U; ip_idx < active_chip->get_ip_max_idx(hwpm);
for (ip_idx = 0U; ip_idx < active_chip->get_ip_max_idx();
ip_idx++) {
err = tegra_hwpm_func_single_ip(

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -30,12 +30,15 @@
#include <tegra_hwpm.h>
#include <hal/t234/t234_init.h>
#include <hal/th500/th500_init.h>
#include <hal/t264/t264_init.h>
#ifdef CONFIG_TEGRA_NEXT1_HWPM
#include <tegra_hwpm_next1_init.h>
#endif
#ifdef CONFIG_TEGRA_NEXT2_HWPM
#include <tegra_hwpm_next2_init.h>
#ifdef CONFIG_TEGRA_NEXT4_HWPM
#include <tegra_hwpm_next4_init.h>
#endif
static int tegra_hwpm_init_chip_ip_structures(struct tegra_soc_hwpm *hwpm,
@@ -62,13 +65,48 @@ static int tegra_hwpm_init_chip_ip_structures(struct tegra_soc_hwpm *hwpm,
break;
}
break;
default:
#ifdef CONFIG_TEGRA_NEXT2_HWPM
err = tegra_hwpm_next2_init_chip_ip_structures(
hwpm, chip_id, chip_id_rev);
#else
tegra_hwpm_err(hwpm, "Chip 0x%x not supported", chip_id);
#ifdef CONFIG_TEGRA_HWPM_TH500
case 0x50:
switch (chip_id_rev) {
case 0x0:
err = th500_hwpm_init_chip_info(hwpm);
break;
default:
tegra_hwpm_err(hwpm, "Chip 0x%x rev 0x%x not supported",
chip_id, chip_id_rev);
break;
}
break;
#endif
#ifdef CONFIG_TEGRA_T264_HWPM
case 0x26:
switch (chip_id_rev) {
case 0x4:
err = t264_hwpm_init_chip_info(hwpm);
break;
default:
tegra_hwpm_err(hwpm, "Chip 0x%x rev 0x%x not supported",
chip_id, chip_id_rev);
break;
}
break;
#endif
#ifdef CONFIG_TEGRA_NEXT4_HWPM
case 0x41:
switch (chip_id_rev) {
case 0x0:
err = tegra_hwpm_next4_init_chip_ip_structures(
hwpm, chip_id, chip_id_rev);
break;
default:
tegra_hwpm_err(hwpm, "Chip 0x%x rev 0x%x not supported",
chip_id, chip_id_rev);
break;
}
break;
#endif
default:
tegra_hwpm_err(hwpm, "Chip 0x%x not supported", chip_id);
break;
}
@@ -94,6 +132,7 @@ int tegra_hwpm_init_sw_components(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_fn(hwpm, " ");
hwpm->dbg_mask = TEGRA_HWPM_DEFAULT_DBG_MASK;
hwpm->dbg_skip_alist = false;
err = tegra_hwpm_init_chip_ip_structures(hwpm, chip_id, chip_id_rev);
if (err != 0) {
@@ -115,6 +154,13 @@ int tegra_hwpm_setup_sw(struct tegra_soc_hwpm *hwpm)
int ret = 0;
tegra_hwpm_fn(hwpm, " ");
ret = hwpm->active_chip->force_enable_ips(hwpm);
if (ret != 0) {
tegra_hwpm_err(hwpm, "Failed to force enable IPs");
/* Do not fail because of force enable failure */
return 0;
}
ret = hwpm->active_chip->validate_current_config(hwpm);
if (ret != 0) {
tegra_hwpm_err(hwpm, "Failed to validate current config");
@@ -258,6 +304,11 @@ bool tegra_hwpm_validate_primary_hals(struct tegra_soc_hwpm *hwpm)
return false;
}
if (hwpm->active_chip->get_rtr_pma_perfmux_ptr == NULL) {
tegra_hwpm_err(hwpm, "get_rtr_pma_perfmux_ptr HAL uninitialized");
return false;
}
if (hwpm->active_chip->extract_ip_ops == NULL) {
tegra_hwpm_err(hwpm, "extract_ip_ops uninitialized");
return false;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -26,6 +26,7 @@
#include <tegra_hwpm_ip.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_common.h>
#include <tegra_hwpm_aperture.h>
#include <tegra_hwpm_static_analysis.h>
int tegra_hwpm_ip_handle_power_mgmt(struct tegra_soc_hwpm *hwpm,
@@ -56,13 +57,12 @@ int tegra_hwpm_ip_handle_power_mgmt(struct tegra_soc_hwpm *hwpm,
}
int tegra_hwpm_update_ip_inst_fs_mask(struct tegra_soc_hwpm *hwpm,
u32 ip_idx, u32 a_type, u32 inst_idx, bool available)
u32 ip_idx, u32 a_type, u32 s_inst_idx, bool available)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[inst_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
int ret = 0;
tegra_hwpm_fn(hwpm, " ");
@@ -100,13 +100,12 @@ int tegra_hwpm_update_ip_inst_fs_mask(struct tegra_soc_hwpm *hwpm,
static int tegra_hwpm_update_ip_ops_info(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_ip_ops *ip_ops,
u32 ip_idx, u32 a_type, u32 inst_idx, bool available)
u32 ip_idx, u32 a_type, u32 s_inst_idx, bool available)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[inst_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
/* Update IP ops info for the instance */
struct tegra_hwpm_ip_ops *ops = &ip_inst->ip_ops;
@@ -135,7 +134,7 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
int ret = 0;
bool found = false;
u32 idx = ip_idx;
u32 inst_idx = 0U, element_idx = 0U;
u32 s_inst_idx = 0U, s_element_idx = 0U;
u32 a_type = 0U;
enum tegra_hwpm_element_type element_type = HWPM_ELEMENT_INVALID;
@@ -144,7 +143,7 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
/* Find IP aperture containing phys_addr in allowlist */
found = tegra_hwpm_aperture_for_address(hwpm,
TEGRA_HWPM_MATCH_BASE_ADDRESS, base_address,
&idx, &inst_idx, &element_idx, &element_type);
&idx, &s_inst_idx, &s_element_idx, &element_type);
if (!found) {
tegra_hwpm_err(hwpm, "Base addr 0x%llx not in IP %d",
(unsigned long long)base_address, idx);
@@ -152,9 +151,10 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
}
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"Found addr 0x%llx IP %d inst_idx %d element_idx %d e_type %d",
(unsigned long long)base_address, idx, inst_idx,
element_idx, element_type);
"Found addr 0x%llx IP %d s_inst_idx %d "
"s_element_idx %d e_type %d",
(unsigned long long)base_address, idx, s_inst_idx,
s_element_idx, element_type);
switch (element_type) {
case HWPM_ELEMENT_PERFMON:
@@ -175,21 +175,21 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
if (ip_ops != NULL) {
/* Update IP ops */
ret = tegra_hwpm_update_ip_ops_info(hwpm, ip_ops,
ip_idx, a_type, inst_idx, available);
ip_idx, a_type, s_inst_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst_idx %d: Failed to update ip_ops",
ip_idx, inst_idx);
"IP %d s_inst_idx %d: Failed to update ip_ops",
ip_idx, s_inst_idx);
goto fail;
}
}
ret = tegra_hwpm_update_ip_inst_fs_mask(hwpm, ip_idx, a_type,
inst_idx, available);
s_inst_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst_idx %d: Failed to update fs_info",
ip_idx, inst_idx);
"IP %d s_inst_idx %d: Failed to update fs_info",
ip_idx, s_inst_idx);
goto fail;
}
@@ -200,7 +200,7 @@ fail:
int tegra_hwpm_get_fs_info(struct tegra_soc_hwpm *hwpm,
u32 ip_enum, u64 *fs_mask, u8 *ip_status)
{
u32 ip_idx = 0U, inst_idx = 0U, element_mask_shift = 0U;
u32 ip_idx = 0U, s_inst_idx = 0U, element_mask_shift = 0U;
u64 floorsweep = 0ULL;
struct tegra_soc_hwpm_chip *active_chip = NULL;
struct hwpm_ip *chip_ip = NULL;
@@ -213,12 +213,13 @@ int tegra_hwpm_get_fs_info(struct tegra_soc_hwpm *hwpm,
active_chip = hwpm->active_chip;
chip_ip = active_chip->chip_ips[ip_idx];
if (!(chip_ip->override_enable) && chip_ip->inst_fs_mask) {
for (inst_idx = 0U; inst_idx < chip_ip->num_instances;
inst_idx++) {
element_mask_shift = 0U;
for (s_inst_idx = 0U;
s_inst_idx < chip_ip->num_instances;
s_inst_idx++) {
ip_inst = &chip_ip->ip_inst_static_array[
inst_idx];
element_mask_shift = (inst_idx == 0U ? 0U :
ip_inst->num_core_elements_per_inst);
s_inst_idx];
if (ip_inst->hw_inst_mask &
chip_ip->inst_fs_mask) {
@@ -226,10 +227,15 @@ int tegra_hwpm_get_fs_info(struct tegra_soc_hwpm *hwpm,
ip_inst->element_fs_mask <<
element_mask_shift);
}
element_mask_shift += ip_inst->num_core_elements_per_inst;
}
*fs_mask = floorsweep;
*ip_status = TEGRA_HWPM_IP_STATUS_VALID;
tegra_hwpm_dbg(hwpm, hwpm_dbg_floorsweep_info,
"SOC hwpm IP %d is available", ip_enum);
return 0;
}
}
@@ -259,12 +265,17 @@ int tegra_hwpm_get_resource_info(struct tegra_soc_hwpm *hwpm,
if (!(chip_ip->override_enable)) {
*status = tegra_hwpm_safe_cast_u32_to_u8(
chip_ip->resource_status);
tegra_hwpm_dbg(hwpm, hwpm_dbg_floorsweep_info,
"SOC hwpm Resource %d is %d",
resource_enum, chip_ip->resource_status);
return 0;
}
}
*status = tegra_hwpm_safe_cast_u32_to_u8(
TEGRA_HWPM_RESOURCE_STATUS_INVALID);
tegra_hwpm_dbg(hwpm, hwpm_dbg_floorsweep_info,
"SOC hwpm Resource %d is unavailable", resource_enum);
return 0;
}
@@ -293,35 +304,30 @@ int tegra_hwpm_finalize_chip_info(struct tegra_soc_hwpm *hwpm)
return ret;
}
ret = hwpm->active_chip->force_enable_ips(hwpm);
if (ret != 0) {
tegra_hwpm_err(hwpm, "Failed to force enable IPs");
/* Do not fail because of force enable failure */
return 0;
}
return 0;
}
static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[*inst_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[*s_inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
struct hwpm_ip_aperture *element = e_info->element_arr[*element_idx];
struct hwpm_ip_aperture *element =
&e_info->element_static_array[*s_element_idx];
tegra_hwpm_fn(hwpm, " ");
if (element == NULL) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx inst_idx %d "
"a_type %d: element_idx %d not populated",
*ip_idx, (unsigned long long)find_addr, *inst_idx,
a_type, *element_idx);
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d: s_element_idx %d not populated",
*ip_idx, (unsigned long long)find_addr, *s_inst_idx,
a_type, *s_element_idx);
return false;
}
@@ -330,20 +336,21 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
if ((element->element_index_mask &
ip_inst->element_fs_mask) == 0U) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx inst_idx %d "
"a_type %d: element_idx %d: not available",
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d: s_element_idx %d: not available",
*ip_idx, (unsigned long long)find_addr,
*inst_idx, a_type, *element_idx);
*s_inst_idx, a_type, *s_element_idx);
return false;
}
/* Make sure phys addr belongs to this element */
if ((find_addr < element->start_abs_pa) ||
(find_addr > element->end_abs_pa)) {
tegra_hwpm_err(hwpm, "IP %d addr 0x%llx inst_idx %d "
"a_type %d element_idx %d: out of bounds",
*ip_idx, (unsigned long long)find_addr, *inst_idx,
a_type, *element_idx);
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d s_element_idx %d: out of bounds",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, *s_element_idx);
return false;
}
@@ -353,10 +360,18 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
}
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx inst_idx %d "
"a_type %d element_idx %d address not in alist",
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d s_element_idx %d address not in alist",
*ip_idx, (unsigned long long)find_addr,
*inst_idx, a_type, *element_idx);
*s_inst_idx, a_type, *s_element_idx);
if (hwpm->dbg_skip_alist) {
*element_type = element->element_type;
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"skipping allowlist check");
return true;
}
return false;
}
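A hedged sketch of the new dbg_skip_alist path above: when an address misses the element's allowlist the lookup normally fails, but the debug-only flag lets it pass so register access can be exercised without an allowlist entry. The alist_entry shape and exact-offset match below are hypothetical simplifications; the driver's allowlist format is richer.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct alist_entry {
    uint64_t reg_offset;   /* assumed: one allowed offset per entry */
};

/* Accept an offset only if it is in the allowlist, unless the debug-only
 * skip flag is set, in which case the check is bypassed. */
static bool offset_allowed(const struct alist_entry *alist, size_t alist_size,
                           uint64_t offset, bool dbg_skip_alist)
{
    size_t i;

    for (i = 0; i < alist_size; i++) {
        if (alist[i].reg_offset == offset)
            return true;
    }
    return dbg_skip_alist;  /* not in alist: allowed only when skipping checks */
}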
@@ -364,10 +379,10 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
/* Confirm that given addr is base address of this element */
if (find_addr != element->start_abs_pa) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"IP %d addr 0x%llx inst_idx %d "
"a_type %d element_idx %d: addr != start addr",
*ip_idx, (unsigned long long)find_addr, *inst_idx,
a_type, *element_idx);
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d s_element_idx %d addr != start addr",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, *s_element_idx);
return false;
}
*element_type = element->element_type;
@@ -380,73 +395,118 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
static bool tegra_hwpm_addr_in_all_elements(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[*inst_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[*s_inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
struct hwpm_ip_aperture *element = NULL;
u64 element_offset = 0ULL;
u32 idx;
u32 idx = 0U;
u32 dyn_idx = 0U;
tegra_hwpm_fn(hwpm, " ");
/* Make sure address falls in elements of a_type */
if (e_info->num_element_per_inst == 0U) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx: inst_idx %d no type %d elements",
*ip_idx, (unsigned long long)find_addr, *inst_idx, a_type);
"IP %d addr 0x%llx: s_inst_idx %d no type %d elements",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type);
return false;
}
if ((find_addr < e_info->range_start) ||
(find_addr > e_info->range_end)) {
/* Address not in this instance corresponding to a_type */
tegra_hwpm_dbg(hwpm, hwpm_verbose, "IP %d inst_idx %d: "
tegra_hwpm_dbg(hwpm, hwpm_verbose, "IP %d s_inst_idx %d: "
"addr 0x%llx not in type %d elements",
*ip_idx, *inst_idx, (unsigned long long)find_addr, a_type);
*ip_idx, *s_inst_idx,
(unsigned long long)find_addr, a_type);
return false;
}
/* Find element index to which address belongs to */
element_offset = tegra_hwpm_safe_sub_u64(
find_addr, e_info->range_start);
idx = tegra_hwpm_safe_cast_u64_to_u32(
element_offset / e_info->element_stride);
if (e_info->eslots_overlimit) {
/* Use brute force approach to find element index */
for (idx = 0U; idx < e_info->num_element_per_inst; idx++) {
element = &e_info->element_static_array[idx];
if ((find_addr >= element->start_abs_pa) &&
(find_addr <= element->end_abs_pa)) {
/* Found element with given address */
break;
}
}
/* Make sure element index is valid */
if (idx >= e_info->element_slots) {
tegra_hwpm_err(hwpm, "IP %d addr 0x%llx inst_idx %d a_type %d: "
"element_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr, *inst_idx, a_type, idx);
return false;
/* Make sure element index is valid */
if (idx >= e_info->num_element_per_inst) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d a_type %d: "
"s_element_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, idx);
return false;
}
} else {
/* Find element index to which address belongs to */
element_offset = tegra_hwpm_safe_sub_u64(
find_addr, e_info->range_start);
dyn_idx = tegra_hwpm_safe_cast_u64_to_u32(
element_offset / e_info->element_stride);
/* Make sure element index is valid */
if (dyn_idx >= e_info->element_slots) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d a_type %d: "
"dynamic element_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, dyn_idx);
return false;
}
/* Convert dynamic index to static index */
element = e_info->element_arr[dyn_idx];
if (element == NULL) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d a_type %d: "
"dynamic element_idx %d not populated",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, dyn_idx);
return false;
}
idx = element->aperture_index;
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"find_addr 0x%llx element dyn_idx %u static idx %u",
(unsigned long long)find_addr, dyn_idx, idx);
}
*element_idx = idx;
*s_element_idx = idx;
/* Process further and return */
return tegra_hwpm_addr_in_single_element(hwpm, iia_func,
find_addr, ip_idx, inst_idx, element_idx, element_type, a_type);
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
}
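A self-contained sketch of the element lookup split introduced above: when eslots_overlimit is set the static array is scanned directly, otherwise stride math produces a dynamic index that is translated to a static index through the new aperture_index field. Struct layouts are simplified assumptions, and addr is assumed to be inside the instance range with a nonzero stride, as the driver checks beforehand.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct element {
    uint64_t start_abs_pa;
    uint64_t end_abs_pa;
    uint32_t aperture_index;       /* position in the static array */
};

struct element_info {
    bool     eslots_overlimit;     /* stride math would exceed the dynamic array */
    uint64_t range_start;
    uint64_t element_stride;       /* assumed nonzero */
    uint32_t element_slots;        /* dynamic array size */
    uint32_t num_element_per_inst; /* static array size */
    struct element  *element_static_array;
    struct element **element_arr;  /* dynamic, stride-indexed array */
};

/* Map an address to a *static* element index: brute force when over the
 * slot limit, otherwise dynamic index plus aperture_index conversion. */
static bool find_static_element_idx(const struct element_info *e, uint64_t addr,
                                    uint32_t *s_idx)
{
    if (e->eslots_overlimit) {
        uint32_t i;

        for (i = 0U; i < e->num_element_per_inst; i++) {
            const struct element *el = &e->element_static_array[i];

            if (addr >= el->start_abs_pa && addr <= el->end_abs_pa) {
                *s_idx = i;
                return true;
            }
        }
        return false;
    } else {
        uint32_t dyn = (uint32_t)((addr - e->range_start) / e->element_stride);

        if (dyn >= e->element_slots || e->element_arr[dyn] == NULL)
            return false;
        *s_idx = e->element_arr[dyn]->aperture_index;
        return true;
    }
}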
static bool tegra_hwpm_addr_in_single_instance(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[*inst_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[*s_inst_idx];
tegra_hwpm_fn(hwpm, " ");
if (ip_inst == NULL) {
tegra_hwpm_dbg(hwpm, hwpm_verbose, "IP %d addr 0x%llx: "
"a_type %d inst_idx %d not populated",
*ip_idx, (unsigned long long)find_addr, a_type, *inst_idx);
"a_type %d s_inst_idx %d not populated",
*ip_idx, (unsigned long long)find_addr,
a_type, *s_inst_idx);
return false;
}
@@ -455,56 +515,103 @@ static bool tegra_hwpm_addr_in_single_instance(struct tegra_soc_hwpm *hwpm,
if ((chip_ip->inst_fs_mask & ip_inst->hw_inst_mask) == 0U) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx: "
"a_type %d inst_idx %d not available",
*ip_idx, (unsigned long long)find_addr, a_type, *inst_idx);
"a_type %d s_inst_idx %d not available",
*ip_idx, (unsigned long long)find_addr,
a_type, *s_inst_idx);
return false;
}
}
/* Process further and return */
return tegra_hwpm_addr_in_all_elements(hwpm, iia_func,
find_addr, ip_idx, inst_idx, element_idx, element_type, a_type);
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
}
static bool tegra_hwpm_addr_in_all_instances(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = NULL;
struct hwpm_ip_element_info *e_info = NULL;
bool found = false;
u64 inst_offset = 0ULL;
u32 idx = 0U;
u32 dyn_idx = 0U;
tegra_hwpm_fn(hwpm, " ");
/* Find instance to which address belongs to */
inst_offset = tegra_hwpm_safe_sub_u64(
find_addr, inst_a_info->range_start);
idx = tegra_hwpm_safe_cast_u64_to_u32(
inst_offset / inst_a_info->inst_stride);
if (inst_a_info->islots_overlimit) {
/* Use brute force approach to find instance index */
for (idx = 0U; idx < chip_ip->num_instances; idx++) {
ip_inst = &chip_ip->ip_inst_static_array[idx];
e_info = &ip_inst->element_info[a_type];
if ((find_addr >= e_info->range_start) &&
(find_addr <= e_info->range_end)) {
*s_inst_idx = idx;
/* Found element with given address */
found = tegra_hwpm_addr_in_single_instance(
hwpm, iia_func, find_addr, ip_idx,
s_inst_idx, s_element_idx,
element_type, a_type);
if (found) {
return found;
}
}
}
/* Make sure instance index is valid */
if (idx >= inst_a_info->inst_slots) {
tegra_hwpm_err(hwpm, "IP %d addr 0x%llx a_type %d: "
"inst_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr, a_type, idx);
return false;
/* Make sure instance index is valid */
if (idx >= chip_ip->num_instances) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"Addr 0x%llx not in IP %d a_type %d",
(unsigned long long)find_addr, *ip_idx, a_type);
return false;
}
} else {
/* Find instance to which address belongs to */
inst_offset = tegra_hwpm_safe_sub_u64(
find_addr, inst_a_info->range_start);
dyn_idx = tegra_hwpm_safe_cast_u64_to_u32(
inst_offset / inst_a_info->inst_stride);
/* Make sure instance index is valid */
if (dyn_idx >= inst_a_info->inst_slots) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx a_type %d: "
"dynamic inst_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr,
a_type, dyn_idx);
return false;
}
/* Convert dynamic inst index to static inst index */
ip_inst = inst_a_info->inst_arr[dyn_idx];
idx = tegra_hwpm_ffs(hwpm, ip_inst->hw_inst_mask);
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"IP %d find_addr 0x%llx inst dyn_idx %u static idx %u",
*ip_idx, (unsigned long long)find_addr, dyn_idx, idx);
*s_inst_idx = idx;
/* Process further and return */
return tegra_hwpm_addr_in_single_instance(hwpm, iia_func,
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
}
*inst_idx = idx;
/* Process further and return */
return tegra_hwpm_addr_in_single_instance(hwpm, iia_func,
find_addr, ip_idx, inst_idx, element_idx,
element_type, a_type);
/* Execution shouldn't reach here */
tegra_hwpm_err(hwpm, "Execution shouldn't reach here");
return false;
}
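At the instance level the dynamic-to-static conversion relies on hw_inst_mask being a single set bit whose position doubles as the index into ip_inst_static_array. A portable sketch of that ffs-style mapping follows (the driver uses its own tegra_hwpm_ffs helper; this is only an illustration and assumes a nonzero, single-bit mask).

#include <stdint.h>

/* Return the bit position of the (single) set bit, e.g. BIT(3) -> 3. */
static uint32_t inst_mask_to_static_idx(uint32_t hw_inst_mask)
{
    uint32_t idx = 0U;

    while (hw_inst_mask > 1U) {
        hw_inst_mask >>= 1U;
        idx++;
    }
    return idx;
}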
static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
@@ -515,7 +622,8 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_fn(hwpm, " ");
if (chip_ip == NULL) {
tegra_hwpm_err(hwpm, "IP %d not populated as expected", *ip_idx);
tegra_hwpm_err(hwpm,
"IP %d not populated as expected", *ip_idx);
return false;
}
@@ -549,6 +657,7 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
if ((find_addr < inst_a_info->range_start) ||
(find_addr > inst_a_info->range_end)) {
/* Address not in this IP for this a_type */
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx not in a_type %d elements",
@@ -558,7 +667,7 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
/* Process further and return */
found = tegra_hwpm_addr_in_all_instances(hwpm, iia_func,
find_addr, ip_idx, inst_idx, element_idx,
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
if (found) {
break;
@@ -576,7 +685,7 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type)
{
u32 idx;
@@ -585,7 +694,7 @@ static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_fn(hwpm, " ");
for (idx = 0U; idx < active_chip->get_ip_max_idx(hwpm); idx++) {
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
struct hwpm_ip *chip_ip = active_chip->chip_ips[idx];
if (chip_ip == NULL) {
@@ -601,7 +710,7 @@ static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
}
found = tegra_hwpm_addr_in_single_ip(hwpm, iia_func, find_addr,
&idx, inst_idx, element_idx, element_type);
&idx, s_inst_idx, s_element_idx, element_type);
if (found) {
*ip_idx = idx;
return true;
@@ -613,15 +722,15 @@ static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
bool tegra_hwpm_aperture_for_address(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
enum tegra_hwpm_element_type *element_type)
{
bool found = false;
tegra_hwpm_fn(hwpm, " ");
if ((ip_idx == NULL) || (inst_idx == NULL) ||
(element_idx == NULL) || (element_type == NULL)) {
if ((ip_idx == NULL) || (s_inst_idx == NULL) ||
(s_element_idx == NULL) || (element_type == NULL)) {
tegra_hwpm_err(hwpm, "NULL index pointer");
return false;
}
@@ -629,7 +738,7 @@ bool tegra_hwpm_aperture_for_address(struct tegra_soc_hwpm *hwpm,
if (iia_func == TEGRA_HWPM_FIND_GIVEN_ADDRESS) {
/* IP index is not known, search in all IPs */
found = tegra_hwpm_addr_in_all_ip(hwpm, iia_func, find_addr,
ip_idx, inst_idx, element_idx, element_type);
ip_idx, s_inst_idx, s_element_idx, element_type);
if (!found) {
tegra_hwpm_err(hwpm,
"Address 0x%llx not in any IP",
@@ -640,7 +749,7 @@ bool tegra_hwpm_aperture_for_address(struct tegra_soc_hwpm *hwpm,
if (iia_func == TEGRA_HWPM_MATCH_BASE_ADDRESS) {
found = tegra_hwpm_addr_in_single_ip(hwpm, iia_func, find_addr,
ip_idx, inst_idx, element_idx, element_type);
ip_idx, s_inst_idx, s_element_idx, element_type);
if (!found) {
tegra_hwpm_err(hwpm, "Address 0x%llx not in IP %d",
(unsigned long long)find_addr, *ip_idx);
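A simplified model of the two lookup modes handled by tegra_hwpm_aperture_for_address: TEGRA_HWPM_FIND_GIVEN_ADDRESS scans every IP because the caller has only an address, while TEGRA_HWPM_MATCH_BASE_ADDRESS checks just the IP index the caller supplies. Everything below is a stand-in for the per-IP cascade shown earlier, not the driver's code; enum and struct names are placeholders.

#include <stdbool.h>
#include <stdint.h>

enum lookup_mode {        /* stand-ins for TEGRA_HWPM_FIND_GIVEN_ADDRESS and */
    FIND_GIVEN_ADDRESS,   /* TEGRA_HWPM_MATCH_BASE_ADDRESS                   */
    MATCH_BASE_ADDRESS,
};

struct ip_range {
    uint64_t start;
    uint64_t end;
};

/* Placeholder for the per-IP instance/element cascade. */
static bool addr_in_ip(const struct ip_range *ip, uint64_t addr)
{
    return addr >= ip->start && addr <= ip->end;
}

/* FIND_GIVEN_ADDRESS: IP unknown, scan all IPs and report which one matched.
 * MATCH_BASE_ADDRESS: caller already knows *ip_idx, only that IP is checked. */
static bool aperture_for_address(const struct ip_range *ips, uint32_t num_ips,
                                 enum lookup_mode mode, uint64_t addr,
                                 uint32_t *ip_idx)
{
    if (mode == FIND_GIVEN_ADDRESS) {
        uint32_t i;

        for (i = 0U; i < num_ips; i++) {
            if (addr_in_ip(&ips[i], addr)) {
                *ip_idx = i;
                return true;
            }
        }
        return false;
    }
    return addr_in_ip(&ips[*ip_idx], addr);
}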


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -44,10 +44,10 @@ int tegra_hwpm_reserve_rtr(struct tegra_soc_hwpm *hwpm)
err = tegra_hwpm_func_single_ip(hwpm, NULL,
TEGRA_HWPM_RESERVE_GIVEN_RESOURCE,
active_chip->get_rtr_int_idx(hwpm));
active_chip->get_rtr_int_idx());
if (err != 0) {
tegra_hwpm_err(hwpm, "failed to reserve IP %d",
active_chip->get_rtr_int_idx(hwpm));
active_chip->get_rtr_int_idx());
return err;
}
return err;
@@ -62,10 +62,10 @@ int tegra_hwpm_release_rtr(struct tegra_soc_hwpm *hwpm)
err = tegra_hwpm_func_single_ip(hwpm, NULL,
TEGRA_HWPM_RELEASE_ROUTER,
active_chip->get_rtr_int_idx(hwpm));
active_chip->get_rtr_int_idx());
if (err != 0) {
tegra_hwpm_err(hwpm, "failed to release IP %d",
active_chip->get_rtr_int_idx(hwpm));
active_chip->get_rtr_int_idx());
return err;
}
return err;
@@ -79,7 +79,7 @@ int tegra_hwpm_reserve_resource(struct tegra_soc_hwpm *hwpm, u32 resource)
tegra_hwpm_fn(hwpm, " ");
tegra_hwpm_dbg(hwpm, hwpm_info,
tegra_hwpm_dbg(hwpm, hwpm_info | hwpm_dbg_reserve_resource,
"User requesting to reserve resource %d", resource);
/* Translate resource to ip_idx */


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_display.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_display_inst0_perfmon_element_static_array[
T234_HWPM_IP_DISPLAY_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_display_inst0_perfmux_element_static_array[
T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -84,10 +91,10 @@ static struct hwpm_ip_inst t234_display_inst_static_array[
T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_display_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_disp_base_r(),
.range_end = addr_map_disp_limit_r(),
.element_stride =
addr_map_disp_limit_r() -
.element_stride = addr_map_disp_limit_r() -
addr_map_disp_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
@@ -117,8 +124,7 @@ static struct hwpm_ip_inst t234_display_inst_static_array[
t234_display_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_disp_base_r(),
.range_end = addr_map_rpg_pm_disp_limit_r(),
.element_stride =
addr_map_rpg_pm_disp_limit_r() -
.element_stride = addr_map_rpg_pm_disp_limit_r() -
addr_map_rpg_pm_disp_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
@@ -129,7 +135,7 @@ static struct hwpm_ip_inst t234_display_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -148,6 +154,7 @@ struct hwpm_ip t234_hwpm_ip_display = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_disp_base_r(),
.range_end = addr_map_disp_limit_r(),
.inst_stride = addr_map_disp_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_DISPLAY_H
#define T234_HWPM_IP_DISPLAY_H
#if defined(CONFIG_T234_HWPM_IP_DISPLAY)
#define T234_HWPM_ACTIVE_IP_DISPLAY T234_HWPM_IP_DISPLAY,
#define T234_HWPM_ACTIVE_IP_DISPLAY T234_HWPM_IP_DISPLAY,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_DISPLAY_NUM_INSTANCES 1U
#define T234_HWPM_IP_DISPLAY_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_DISPLAY_NUM_INSTANCES 1U
#define T234_HWPM_IP_DISPLAY_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_display;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_isp.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_isp_inst0_perfmon_element_static_array[
T234_HWPM_IP_ISP_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_isp_inst0_perfmux_element_static_array[
T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -84,6 +91,7 @@ static struct hwpm_ip_inst t234_isp_inst_static_array[
T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_isp_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp_thi_limit_r(),
.element_stride = addr_map_isp_thi_limit_r() -
@@ -127,7 +135,7 @@ static struct hwpm_ip_inst t234_isp_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -135,6 +143,7 @@ static struct hwpm_ip_inst t234_isp_inst_static_array[
},
};
/* IP structure */
struct hwpm_ip t234_hwpm_ip_isp = {
.num_instances = T234_HWPM_IP_ISP_NUM_INSTANCES,
.ip_inst_static_array = t234_isp_inst_static_array,
@@ -145,6 +154,7 @@ struct hwpm_ip t234_hwpm_ip_isp = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp_thi_limit_r(),
.inst_stride = addr_map_isp_thi_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_ISP_H
#define T234_HWPM_IP_ISP_H
#if defined(CONFIG_T234_HWPM_IP_ISP)
#define T234_HWPM_ACTIVE_IP_ISP T234_HWPM_IP_ISP,
#define T234_HWPM_ACTIVE_IP_ISP T234_HWPM_IP_ISP,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_ISP_NUM_INSTANCES 1U
#define T234_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_ISP_NUM_INSTANCES 1U
#define T234_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_isp;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mgbe.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_mgbe_inst0_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst1_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -72,6 +79,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst2_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -92,6 +100,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst3_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -112,6 +121,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst0_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -131,6 +141,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst1_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -150,6 +161,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst2_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -169,6 +181,7 @@ static struct hwpm_ip_aperture t234_mgbe_inst3_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -201,6 +214,7 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe0_mac_rm_base_r(),
.range_end = addr_map_mgbe0_mac_rm_limit_r(),
.element_stride = addr_map_mgbe0_mac_rm_limit_r() -
@@ -244,7 +258,7 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -264,6 +278,7 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe1_mac_rm_base_r(),
.range_end = addr_map_mgbe1_mac_rm_limit_r(),
.element_stride = addr_map_mgbe1_mac_rm_limit_r() -
@@ -307,9 +322,11 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(2),
@@ -325,6 +342,7 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst2_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe2_mac_rm_base_r(),
.range_end = addr_map_mgbe2_mac_rm_limit_r(),
.element_stride = addr_map_mgbe2_mac_rm_limit_r() -
@@ -368,9 +386,11 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(3),
@@ -386,6 +406,7 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst3_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe3_mac_rm_base_r(),
.range_end = addr_map_mgbe3_mac_rm_limit_r(),
.element_stride = addr_map_mgbe3_mac_rm_limit_r() -
@@ -429,9 +450,11 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
@@ -446,6 +469,7 @@ struct hwpm_ip t234_hwpm_ip_mgbe = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe0_mac_rm_base_r(),
.range_end = addr_map_mgbe3_mac_rm_limit_r(),
.inst_stride = addr_map_mgbe0_mac_rm_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MGBE_H
#define T234_HWPM_IP_MGBE_H
#if defined(CONFIG_T234_HWPM_IP_MGBE)
#define T234_HWPM_ACTIVE_IP_MGBE T234_HWPM_IP_MGBE,
#define T234_HWPM_ACTIVE_IP_MGBE T234_HWPM_IP_MGBE,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MGBE_NUM_INSTANCES 4U
#define T234_HWPM_IP_MGBE_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_MGBE_NUM_INSTANCES 4U
#define T234_HWPM_IP_MGBE_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_mgbe;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_channel.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -48,6 +54,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -64,6 +71,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -80,6 +88,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -96,6 +105,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -112,6 +122,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -128,6 +139,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -144,6 +156,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -160,6 +173,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
@@ -176,6 +190,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
@@ -192,6 +207,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
@@ -208,6 +224,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
@@ -224,6 +241,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
@@ -240,6 +258,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
@@ -256,6 +275,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
@@ -272,6 +292,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
@@ -292,6 +313,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -307,6 +329,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -322,6 +345,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -337,6 +361,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -352,6 +377,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -367,6 +393,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -382,6 +409,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -397,6 +425,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -412,6 +441,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
@@ -427,6 +457,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
@@ -442,6 +473,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
@@ -457,6 +489,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
@@ -472,6 +505,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
@@ -487,6 +521,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
@@ -502,6 +537,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
@@ -517,6 +553,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
@@ -536,6 +573,7 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_broadcast_element_static_a
T234_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -614,7 +652,7 @@ static struct hwpm_ip_inst t234_mss_channel_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_CHANNEL_H
#define T234_HWPM_IP_MSS_CHANNEL_H
#if defined(CONFIG_T234_HWPM_IP_MSS_CHANNEL)
#define T234_HWPM_ACTIVE_IP_MSS_CHANNEL T234_HWPM_IP_MSS_CHANNEL,
#define T234_HWPM_ACTIVE_IP_MSS_CHANNEL T234_HWPM_IP_MSS_CHANNEL,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST 1U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t234_hwpm_ip_mss_channel;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_gpu_hub.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,8 +58,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_1_base_r(),
@@ -67,8 +74,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_2_base_r(),
@@ -82,8 +90,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.element_index = 2U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_3_base_r(),
@@ -97,8 +106,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.element_index = 3U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_4_base_r(),
@@ -112,8 +122,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.element_index = 4U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_5_base_r(),
@@ -127,8 +138,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.element_index = 5U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_6_base_r(),
@@ -142,8 +154,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.element_index = 6U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_7_base_r(),
@@ -157,8 +170,9 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.element_index = 7U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_8_base_r(),
@@ -189,6 +203,7 @@ static struct hwpm_ip_inst t234_mss_gpu_hub_inst_static_array[
T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mss_gpu_hub_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mss_nvlink_8_base_r(),
.range_end = addr_map_mss_nvlink_7_limit_r(),
.element_stride = addr_map_mss_nvlink_8_limit_r() -
@@ -232,7 +247,7 @@ static struct hwpm_ip_inst t234_mss_gpu_hub_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -251,6 +266,7 @@ struct hwpm_ip t234_hwpm_ip_mss_gpu_hub = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mss_nvlink_8_base_r(),
.range_end = addr_map_mss_nvlink_7_limit_r(),
.inst_stride = addr_map_mss_nvlink_7_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_GPU_HUB_H
#define T234_HWPM_IP_MSS_GPU_HUB_H
#if defined(CONFIG_T234_HWPM_IP_MSS_GPU_HUB)
#define T234_HWPM_ACTIVE_IP_MSS_GPU_HUB T234_HWPM_IP_MSS_GPU_HUB,
#define T234_HWPM_ACTIVE_IP_MSS_GPU_HUB T234_HWPM_IP_MSS_GPU_HUB,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_mss_gpu_hub;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,21 +19,27 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_iso_niso_hubs.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_msshub0",
.device_index = T234_MSSHUB0_PERFMON_DEVICE_NODE_INDEX,
@@ -48,8 +54,9 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmon_element_stati
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.element_index_mask = BIT(0),
.element_index = 1U,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_msshub1",
.device_index = T234_MSSHUB1_PERFMON_DEVICE_NODE_INDEX,
@@ -68,6 +75,7 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -78,12 +86,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc0_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -94,12 +102,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc1_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -110,12 +118,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc2_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -126,12 +134,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc3_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -142,12 +150,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc4_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -158,12 +166,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc5_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -174,12 +182,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc6_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -190,12 +198,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc7_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
@@ -215,6 +223,7 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_broadcast_element_sta
T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -293,7 +302,7 @@ static struct hwpm_ip_inst t234_mss_iso_niso_hub_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -352,4 +361,3 @@ struct hwpm_ip t234_hwpm_ip_mss_iso_niso_hubs = {
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,6 +19,11 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_ISO_NISO_HUBS_H
@@ -28,11 +33,11 @@
#define T234_HWPM_ACTIVE_IP_MSS_ISO_NISO_HUBS T234_HWPM_IP_MSS_ISO_NISO_HUBS,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_CORE_ELEMENT_PER_INST 9U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMON_PER_INST 2U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMUX_PER_INST 9U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_BROADCAST_PER_INST 1U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_CORE_ELEMENT_PER_INST 9U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMON_PER_INST 2U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMUX_PER_INST 9U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t234_hwpm_ip_mss_iso_niso_hubs;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,6 +19,11 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_mcf.h"
@@ -32,6 +37,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_MCF_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -48,6 +54,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -64,6 +71,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(0),
.element_index = 2U,
.dt_mmio = NULL,
@@ -84,6 +92,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
T234_HWPM_IP_MSS_MCF_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -99,6 +108,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -114,6 +124,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -129,6 +140,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -144,6 +156,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -159,6 +172,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -174,6 +188,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -189,6 +204,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -208,6 +224,7 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_broadcast_element_static_array
T234_HWPM_IP_MSS_MCF_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -286,7 +303,7 @@ static struct hwpm_ip_inst t234_mss_mcf_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_MCF_H
#define T234_HWPM_IP_MSS_MCF_H
#if defined(CONFIG_T234_HWPM_IP_MSS_MCF)
#define T234_HWPM_ACTIVE_IP_MSS_MCF T234_HWPM_IP_MSS_MCF,
#define T234_HWPM_ACTIVE_IP_MSS_MCF T234_HWPM_IP_MSS_MCF,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_MCF_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_MCF_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_MCF_NUM_PERFMON_PER_INST 3U
#define T234_HWPM_IP_MSS_MCF_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_MCF_NUM_BROADCAST_PER_INST 1U
#define T234_HWPM_IP_MSS_MCF_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_MCF_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_MCF_NUM_PERFMON_PER_INST 3U
#define T234_HWPM_IP_MSS_MCF_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_MCF_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t234_hwpm_ip_mss_mcf;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_nvdec.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_nvdec_inst0_perfmon_element_static_array[
T234_HWPM_IP_NVDEC_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_nvdec_inst0_perfmux_element_static_array[
T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -84,6 +91,7 @@ static struct hwpm_ip_inst t234_nvdec_inst_static_array[
T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_nvdec_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdec_base_r(),
.range_end = addr_map_nvdec_limit_r(),
.element_stride = addr_map_nvdec_limit_r() -
@@ -127,7 +135,7 @@ static struct hwpm_ip_inst t234_nvdec_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -146,6 +154,7 @@ struct hwpm_ip t234_hwpm_ip_nvdec = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdec_base_r(),
.range_end = addr_map_nvdec_limit_r(),
.inst_stride = addr_map_nvdec_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_NVDEC_H
#define T234_HWPM_IP_NVDEC_H
#if defined(CONFIG_T234_HWPM_IP_NVDEC)
#define T234_HWPM_ACTIVE_IP_NVDEC T234_HWPM_IP_NVDEC,
#define T234_HWPM_ACTIVE_IP_NVDEC T234_HWPM_IP_NVDEC,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_NVDEC_NUM_INSTANCES 1U
#define T234_HWPM_IP_NVDEC_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_NVDEC_NUM_INSTANCES 1U
#define T234_HWPM_IP_NVDEC_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_nvdec;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_nvdla.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_nvdla_inst0_perfmon_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_nvdla_inst1_perfmon_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -72,6 +79,7 @@ static struct hwpm_ip_aperture t234_nvdla_inst0_perfmux_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -91,6 +99,7 @@ static struct hwpm_ip_aperture t234_nvdla_inst1_perfmux_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -123,6 +132,7 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_nvdla_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdla0_base_r(),
.range_end = addr_map_nvdla0_limit_r(),
.element_stride = addr_map_nvdla0_limit_r() -
@@ -166,11 +176,11 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = 1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvdladebugfs/nvdla0/hwpm/ctrl",
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
@@ -186,6 +196,7 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_nvdla_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdla1_base_r(),
.range_end = addr_map_nvdla1_limit_r(),
.element_stride = addr_map_nvdla1_limit_r() -
@@ -229,11 +240,11 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = 1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvdladebugfs/nvdla1/hwpm/ctrl",
.dev_name = "",
},
};
@@ -248,6 +259,7 @@ struct hwpm_ip t234_hwpm_ip_nvdla = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdla0_base_r(),
.range_end = addr_map_nvdla1_limit_r(),
.inst_stride = addr_map_nvdla0_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_NVDLA_H
#define T234_HWPM_IP_NVDLA_H
#if defined(CONFIG_T234_HWPM_IP_NVDLA)
#define T234_HWPM_ACTIVE_IP_NVDLA T234_HWPM_IP_NVDLA,
#define T234_HWPM_ACTIVE_IP_NVDLA T234_HWPM_IP_NVDLA,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_NVDLA_NUM_INSTANCES 2U
#define T234_HWPM_IP_NVDLA_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_NVDLA_NUM_INSTANCES 2U
#define T234_HWPM_IP_NVDLA_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_nvdla;

View File

@@ -127,11 +127,11 @@ static struct hwpm_ip_inst t234_nvenc_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
.dev_name = "/dev/nvhost-debug/nvenc_hwpm",
},
};

View File

@@ -127,11 +127,11 @@ static struct hwpm_ip_inst t234_ofa_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
.dev_name = "/dev/nvhost-debug/ofa_hwpm",
},
};

View File

@@ -517,7 +517,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -580,7 +580,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -643,7 +643,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -706,7 +706,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -769,7 +769,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -832,7 +832,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -895,7 +895,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -958,7 +958,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -1021,7 +1021,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -1084,7 +1084,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -1147,7 +1147,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,

View File
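The instances in the sections above initialize their debug file descriptor to TEGRA_HWPM_IP_DEBUG_FD_INVALID rather than a bare -1. A minimal sketch of that sentinel convention, assuming the constant expands to -1 (the definition and helper below are illustrative, not taken from the driver):

#include <stdbool.h>
#include <stdio.h>

/* Assumed definition of the sentinel used in the initializers above. */
#define TEGRA_HWPM_IP_DEBUG_FD_INVALID (-1)

struct demo_ip_ops {
    int fd; /* debug file descriptor, if one has been opened */
};

/* Hypothetical helper: the check reads against a named constant
 * instead of a magic -1. */
static bool demo_ip_debug_fd_valid(const struct demo_ip_ops *ops)
{
    return ops->fd != TEGRA_HWPM_IP_DEBUG_FD_INVALID;
}

int main(void)
{
    struct demo_ip_ops ops = { .fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID };

    printf("fd valid: %s\n", demo_ip_debug_fd_valid(&ops) ? "yes" : "no");
    return 0;
}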

@@ -128,7 +128,7 @@ static struct hwpm_ip_inst t234_pma_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0x1U,

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_pva.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_pva_inst0_perfmon_element_static_array[
T234_HWPM_IP_PVA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -48,6 +54,7 @@ static struct hwpm_ip_aperture t234_pva_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -64,6 +71,7 @@ static struct hwpm_ip_aperture t234_pva_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(0),
.element_index = 2U,
.dt_mmio = NULL,
@@ -84,6 +92,7 @@ static struct hwpm_ip_aperture t234_pva_inst0_perfmux_element_static_array[
T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -116,6 +125,7 @@ static struct hwpm_ip_inst t234_pva_inst_static_array[
T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_pva_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva0_pm_limit_r(),
.element_stride = addr_map_pva0_pm_limit_r() -
@@ -159,11 +169,11 @@ static struct hwpm_ip_inst t234_pva_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = 1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvpvadebugfs/pva0/hwpm",
.dev_name = "",
},
};
@@ -178,6 +188,7 @@ struct hwpm_ip t234_hwpm_ip_pva = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva0_pm_limit_r(),
.inst_stride = addr_map_pva0_pm_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_PVA_H
#define T234_HWPM_IP_PVA_H
#if defined(CONFIG_T234_HWPM_IP_PVA)
#define T234_HWPM_ACTIVE_IP_PVA T234_HWPM_IP_PVA,
#define T234_HWPM_ACTIVE_IP_PVA T234_HWPM_IP_PVA,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_PVA_NUM_INSTANCES 1U
#define T234_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_PVA_NUM_PERFMON_PER_INST 3U
#define T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_PVA_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_PVA_NUM_INSTANCES 1U
#define T234_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_PVA_NUM_PERFMON_PER_INST 3U
#define T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_PVA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_pva;

View File

@@ -129,7 +129,7 @@ static struct hwpm_ip_inst t234_rtr_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0x1U,
@@ -191,7 +191,7 @@ static struct hwpm_ip_inst t234_rtr_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0x1U,

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -34,8 +34,9 @@
#define T234_HWPM_IP_RTR_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_RTR_STATIC_RTR_INST 0U
#define T234_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX 0U
#define T234_HWPM_IP_RTR_STATIC_PMA_INST 1U
#define T234_HWPM_IP_RTR_PERMUX_INDEX 0U
#define T234_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX 0U
extern struct hwpm_ip t234_hwpm_ip_rtr;

View File

@@ -106,7 +106,7 @@ static struct hwpm_ip_inst t234_scf_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0x1U,

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_vi.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_vi_inst0_perfmon_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_vi_inst1_perfmon_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -72,6 +79,7 @@ static struct hwpm_ip_aperture t234_vi_inst0_perfmux_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -91,6 +99,7 @@ static struct hwpm_ip_aperture t234_vi_inst1_perfmux_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -167,7 +176,7 @@ static struct hwpm_ip_inst t234_vi_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -187,6 +196,7 @@ static struct hwpm_ip_inst t234_vi_inst_static_array[
T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_vi_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi2_thi_base_r(),
.range_end = addr_map_vi2_thi_limit_r(),
.element_stride = addr_map_vi2_thi_limit_r() -
@@ -230,7 +240,7 @@ static struct hwpm_ip_inst t234_vi_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_VI_H
#define T234_HWPM_IP_VI_H
#if defined(CONFIG_T234_HWPM_IP_VI)
#define T234_HWPM_ACTIVE_IP_VI T234_HWPM_IP_VI,
#define T234_HWPM_ACTIVE_IP_VI T234_HWPM_IP_VI,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_VI_NUM_INSTANCES 2U
#define T234_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_VI_NUM_INSTANCES 2U
#define T234_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_vi;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,19 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_vic.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t234_vic_inst0_perfmon_element_static_array[
T234_HWPM_IP_VIC_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -52,6 +58,7 @@ static struct hwpm_ip_aperture t234_vic_inst0_perfmux_element_static_array[
T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -84,6 +91,7 @@ static struct hwpm_ip_inst t234_vic_inst_static_array[
T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_vic_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.element_stride = addr_map_vic_limit_r() -
@@ -127,7 +135,7 @@ static struct hwpm_ip_inst t234_vic_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
@@ -146,6 +154,7 @@ struct hwpm_ip t234_hwpm_ip_vic = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.inst_stride = addr_map_vic_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,20 +19,25 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_VIC_H
#define T234_HWPM_IP_VIC_H
#if defined(CONFIG_T234_HWPM_IP_VIC)
#define T234_HWPM_ACTIVE_IP_VIC T234_HWPM_IP_VIC,
#define T234_HWPM_ACTIVE_IP_VIC T234_HWPM_IP_VIC,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_VIC_NUM_INSTANCES 1U
#define T234_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_VIC_NUM_INSTANCES 1U
#define T234_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_vic;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -30,41 +30,56 @@
#include <hal/t234/hw/t234_pmasys_soc_hwpm.h>
#include <hal/t234/hw/t234_pmmsys_soc_hwpm.h>
int t234_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx()];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
if (rtr_perfmux_ptr != NULL) {
*rtr_perfmux_ptr = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX];
}
if (pma_perfmux_ptr != NULL) {
*pma_perfmux_ptr = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX];
}
return 0;
}
int t234_hwpm_check_status(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
u32 field_mask = 0U;
u32 field_val = 0U;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
/* Check ROUTER state */
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
if (pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()) {
tegra_hwpm_err(hwpm, "Router not ready value 0x%x", reg_val);
return -EINVAL;
}
/* Check ROUTER state */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val);
hwpm_assert_print(hwpm,
(pmmsys_sys0router_enginestatus_status_v(reg_val) ==
pmmsys_sys0router_enginestatus_status_empty_v()),
return -EINVAL, "Router not ready value 0x%x", reg_val);
/* Check PMA state */
field_mask = pmasys_enginestatus_status_m() |
@@ -72,19 +87,12 @@ int t234_hwpm_check_status(struct tegra_soc_hwpm *hwpm)
field_val = pmasys_enginestatus_status_empty_f() |
pmasys_enginestatus_rbufempty_empty_f();
err = tegra_hwpm_readl(hwpm, pma_perfmux,
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_enginestatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
hwpm_assert_print(hwpm, ((reg_val & field_mask) == field_val),
return -EINVAL, "PMA not ready value 0x%x", reg_val);
if ((reg_val & field_mask) != field_val) {
tegra_hwpm_err(hwpm, "PMA not ready value 0x%x", reg_val);
return -EINVAL;
}
return err;
return 0;
}
int t234_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm)
@@ -93,116 +101,48 @@ int t234_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm)
u32 reg_val = 0U;
u32 field_mask = 0U;
u32 field_val = 0U;
u32 retries = 10U;
u32 sleep_msecs = 100;
struct tegra_hwpm_timeout timeout;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA triggers */
err = tegra_hwpm_readl(hwpm, pma_perfmux,
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmasys_trigger_config_user_pma_pulse_m(),
pmasys_trigger_config_user_pma_pulse_disable_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
/* Wait for PERFMONs to idle */
err = tegra_hwpm_timeout_init(hwpm, &timeout, 10U);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm timeout init failed");
return err;
}
do {
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_perfmonstatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_msleep(sleep_msecs);
} while ((pmmsys_sys0router_perfmonstatus_merged_v(reg_val) != 0U) &&
(tegra_hwpm_timeout_expired(hwpm, &timeout) == 0));
if (pmmsys_sys0router_perfmonstatus_merged_v(reg_val) != 0U) {
tegra_hwpm_err(hwpm, "Timeout expired for "
"NV_PERF_PMMSYS_SYS0ROUTER_PERFMONSTATUS_MERGED_EMPTY");
return -ETIMEDOUT;
}
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_sys0router_perfmonstatus_r(), &reg_val,
(pmmsys_sys0router_perfmonstatus_merged_v(reg_val) != 0U),
"PMMSYS_SYS0ROUTER_PERFMONSTATUS_MERGED_EMPTY timed out");
/* Wait for ROUTER to idle */
err = tegra_hwpm_timeout_init(hwpm, &timeout, 10U);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm timeout init failed");
return err;
}
do {
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_msleep(sleep_msecs);
} while ((pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()) &&
(tegra_hwpm_timeout_expired(hwpm, &timeout) == 0));
if (pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()) {
tegra_hwpm_err(hwpm, "Timeout expired for "
"NV_PERF_PMMSYS_SYS0ROUTER_ENGINESTATUS_STATUS_EMPTY");
return -ETIMEDOUT;
}
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val,
(pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()),
"PMMSYS_SYS0ROUTER_ENGINESTATUS_STATUS_EMPTY timed out");
/* Wait for PMA to idle */
err = tegra_hwpm_timeout_init(hwpm, &timeout, 10U);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm timeout init failed");
return err;
}
field_mask = pmasys_enginestatus_status_m() |
pmasys_enginestatus_rbufempty_m();
field_val = pmasys_enginestatus_status_empty_f() |
pmasys_enginestatus_rbufempty_empty_f();
do {
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_enginestatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_msleep(sleep_msecs);
} while (((reg_val & field_mask) != field_val) &&
(tegra_hwpm_timeout_expired(hwpm, &timeout) == 0));
if ((reg_val & field_mask) != field_val) {
tegra_hwpm_err(hwpm, "Timeout expired for "
"NV_PERF_PMASYS_ENGINESTATUS");
return -ETIMEDOUT;
}
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, pma_perfmux,
pmasys_enginestatus_r(), &reg_val,
((reg_val & field_mask) != field_val),
"PMASYS_ENGINESTATUS timed out");
return err;
}
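t234_hwpm_disable_triggers() above folds three hand-rolled poll loops into tegra_hwpm_timeout_print(). That helper is not defined in this diff; the following is a self-contained sketch of the retry/poll/report pattern it appears to wrap, with every name and behaviour assumed for illustration:

#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in register read; the driver reads through an IP aperture. */
static unsigned int demo_readl(unsigned int offset)
{
    (void)offset;
    return 0U; /* pretend the status register already reports "empty" */
}

/* Poll a register until the caller's "still busy" predicate clears or the
 * retry budget runs out, then print the timeout message -- roughly the
 * shape of the helper invoked above. */
static int demo_poll_until_idle(unsigned int retries, unsigned int sleep_msecs,
        unsigned int offset, bool (*still_busy)(unsigned int reg_val),
        const char *timeout_msg)
{
    unsigned int reg_val;

    do {
        reg_val = demo_readl(offset);
        if (!still_busy(reg_val))
            return 0;
        usleep(sleep_msecs * 1000U);
    } while (retries-- > 0U);

    fprintf(stderr, "%s (last value 0x%x)\n", timeout_msg, reg_val);
    return -1; /* the driver returns -ETIMEDOUT here */
}

static bool demo_status_nonempty(unsigned int reg_val)
{
    return reg_val != 0U;
}

int main(void)
{
    return demo_poll_until_idle(10U, 100U, 0x0U, demo_status_nonempty,
        "PMMSYS_SYS0ROUTER_PERFMONSTATUS_MERGED_EMPTY timed out");
}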
@@ -211,45 +151,27 @@ int t234_hwpm_init_prod_values(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 val = 0U;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_controlb_r(), &val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_controlb_r(), &val);
val = set_field(val, pmasys_controlb_coalesce_timeout_cycles_m(),
pmasys_controlb_coalesce_timeout_cycles__prod_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_controlb_r(), val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_controlb_r(), val);
err = tegra_hwpm_readl(hwpm, pma_perfmux,
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0), &val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
val = set_field(val,
pmasys_channel_config_user_coalesce_timeout_cycles_m(),
pmasys_channel_config_user_coalesce_timeout_cycles__prod_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0), val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
/* CG enable is expected PROD value */
err = hwpm->active_chip->enable_cg(hwpm);
@@ -267,32 +189,20 @@ int t234_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm)
u32 field_mask = 0U;
u32 field_val = 0U;
u32 reg_val = 0U;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
err = tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_cg2_slcg_m(),
pmasys_cg2_slcg_disabled_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
field_mask = pmmsys_sys0router_cg2_slcg_perfmon_m() |
pmmsys_sys0router_cg2_slcg_router_m() |
@@ -300,19 +210,11 @@ int t234_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm)
field_val = pmmsys_sys0router_cg2_slcg_perfmon_disabled_f() |
pmmsys_sys0router_cg2_slcg_router_disabled_f() |
pmmsys_sys0router_cg2_slcg_disabled_f();
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, field_mask, field_val);
err = tegra_hwpm_writel(hwpm, rtr_perfmux,
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -323,32 +225,20 @@ int t234_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm)
u32 reg_val = 0U;
u32 field_mask = 0U;
u32 field_val = 0U;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
err = tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_cg2_slcg_m(),
pmasys_cg2_slcg_enabled_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
field_mask = pmmsys_sys0router_cg2_slcg_perfmon_m() |
pmmsys_sys0router_cg2_slcg_router_m() |
@@ -356,19 +246,11 @@ int t234_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm)
field_val = pmmsys_sys0router_cg2_slcg_perfmon__prod_f() |
pmmsys_sys0router_cg2_slcg_router__prod_f() |
pmmsys_sys0router_cg2_slcg__prod_f();
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, field_mask, field_val);
err = tegra_hwpm_writel(hwpm, rtr_perfmux,
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
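The refactored HAL functions above route error handling through hwpm_assert_print(), whose definition is outside this diff. A rough sketch of what such an assert-and-act macro could look like (purely an assumption, shown with stand-in types):

#include <stdio.h>

/* Stand-in context and error print; the driver has its own. */
struct demo_hwpm { const char *name; };

#define demo_hwpm_err(hwpm, fmt, ...) \
    fprintf(stderr, "%s: " fmt "\n", (hwpm)->name, ##__VA_ARGS__)

/* Hypothetical shape of an assert-print macro: when the condition fails,
 * log the message and run the caller-supplied action (for example
 * "return err" or "return -EINVAL"). */
#define demo_assert_print(hwpm, cond, action, fmt, ...)         \
    do {                                                        \
        if (!(cond)) {                                          \
            demo_hwpm_err(hwpm, fmt, ##__VA_ARGS__);            \
            action;                                             \
        }                                                       \
    } while (0)

static int demo_check(struct demo_hwpm *hwpm, int err)
{
    /* Mirrors the call sites above: bail out with the error code when
     * the previous step did not succeed. */
    demo_assert_print(hwpm, err == 0, return err,
        "get rtr pma perfmux failed");
    return 0;
}

int main(void)
{
    struct demo_hwpm hwpm = { .name = "demo" };

    printf("ok path: %d, error path: %d\n",
        demo_check(&hwpm, 0), demo_check(&hwpm, -22));
    return 0;
}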

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -31,6 +31,7 @@
#include <hal/t234/t234_internal.h>
static struct tegra_soc_hwpm_chip t234_chip_info = {
.la_clk_rate = 625000000,
.chip_ips = NULL,
/* HALs */
@@ -46,6 +47,7 @@ static struct tegra_soc_hwpm_chip t234_chip_info = {
.get_rtr_int_idx = t234_get_rtr_int_idx,
.get_ip_max_idx = t234_get_ip_max_idx,
.get_rtr_pma_perfmux_ptr = t234_hwpm_get_rtr_pma_perfmux_ptr,
.extract_ip_ops = t234_hwpm_extract_ip_ops,
.force_enable_ips = t234_hwpm_force_enable_ips,
@@ -56,6 +58,8 @@ static struct tegra_soc_hwpm_chip t234_chip_info = {
.init_prod_values = t234_hwpm_init_prod_values,
.disable_cg = t234_hwpm_disable_cg,
.enable_cg = t234_hwpm_enable_cg,
.credit_program = NULL,
.setup_trigger = NULL,
.reserve_rtr = tegra_hwpm_reserve_rtr,
.release_rtr = tegra_hwpm_release_rtr,
@@ -307,12 +311,12 @@ bool t234_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
return (config_ip != TEGRA_HWPM_IP_INACTIVE);
}
u32 t234_get_rtr_int_idx(struct tegra_soc_hwpm *hwpm)
u32 t234_get_rtr_int_idx(void)
{
return T234_HWPM_IP_RTR;
}
u32 t234_get_ip_max_idx(struct tegra_soc_hwpm *hwpm)
u32 t234_get_ip_max_idx(void)
{
return T234_HWPM_IP_MAX;
}
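t234_chip_info above is a table of per-chip HAL function pointers that common code reaches through hwpm->active_chip. A stripped-down sketch of that dispatch pattern (struct, function names, and return values here are invented for illustration):

#include <stdio.h>

/* Miniature chip-HAL table: common code holds a pointer to one of these
 * and never calls chip-specific functions directly. */
struct demo_chip_hal {
    unsigned int (*get_rtr_int_idx)(void);
    unsigned int (*get_ip_max_idx)(void);
};

static unsigned int demo_t234_get_rtr_int_idx(void) { return 5U; }
static unsigned int demo_t234_get_ip_max_idx(void)  { return 32U; }

/* Per-chip instance, analogous in shape to t234_chip_info above. */
static const struct demo_chip_hal demo_t234_chip_info = {
    .get_rtr_int_idx = demo_t234_get_rtr_int_idx,
    .get_ip_max_idx  = demo_t234_get_ip_max_idx,
};

int main(void)
{
    const struct demo_chip_hal *active_chip = &demo_t234_chip_info;
    unsigned int idx;

    /* Common code iterates IPs through the HAL, as in
     * "for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++)". */
    for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++)
        ;
    printf("router index = %u, ip count = %u\n",
        active_chip->get_rtr_int_idx(), idx);
    return 0;
}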

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -84,8 +84,11 @@ bool t234_hwpm_is_ip_active(struct tegra_soc_hwpm *hwpm,
bool t234_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
u32 res_enum, u32 *config_ip_index);
u32 t234_get_rtr_int_idx(struct tegra_soc_hwpm *hwpm);
u32 t234_get_ip_max_idx(struct tegra_soc_hwpm *hwpm);
u32 t234_get_rtr_int_idx(void);
u32 t234_get_ip_max_idx(void);
int t234_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr);
int t234_hwpm_extract_ip_ops(struct tegra_soc_hwpm *hwpm,
u32 resource_enum, u64 base_address,
@@ -111,7 +114,9 @@ int t234_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm);
int t234_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm);
int t234_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump);
u64 t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm);
bool t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm);
int t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr);
int t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status);
#endif /* T234_HWPM_INTERNAL_H */
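The prototypes above change t234_hwpm_get_mem_bytes_put_ptr() and t234_hwpm_membuf_overflow_status() to return an error code and pass the result back through an out parameter. A short caller-side sketch of that convention, using stub implementations that are assumptions rather than the driver's code:

#include <stdio.h>
#include <stdint.h>

struct demo_hwpm { uint64_t put_ptr; uint32_t overflow; };

/* Assumed stand-ins mirroring the new prototypes: status comes back in
 * the return value, data comes back through the out parameter. */
static int demo_get_mem_bytes_put_ptr(struct demo_hwpm *hwpm,
        uint64_t *mem_head_ptr)
{
    if (mem_head_ptr == NULL)
        return -22; /* -EINVAL in the driver */
    *mem_head_ptr = hwpm->put_ptr;
    return 0;
}

static int demo_membuf_overflow_status(struct demo_hwpm *hwpm,
        uint32_t *overflow_status)
{
    if (overflow_status == NULL)
        return -22;
    *overflow_status = hwpm->overflow;
    return 0;
}

int main(void)
{
    struct demo_hwpm hwpm = { .put_ptr = 0x1000ULL, .overflow = 0U };
    uint64_t head = 0ULL;
    uint32_t overflow = 0U;

    /* Callers can now distinguish "read failed" from a value of 0. */
    if (demo_get_mem_bytes_put_ptr(&hwpm, &head) == 0 &&
        demo_membuf_overflow_status(&hwpm, &overflow) == 0)
        printf("put = 0x%llx, overflow = %u\n",
            (unsigned long long)head, overflow);
    return 0;
}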

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -206,11 +206,12 @@ static int t234_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
defined(CONFIG_T234_HWPM_IP_MSS_MCF)
struct hwpm_ip *chip_ip = NULL;
struct hwpm_ip_inst *ip_inst = NULL;
u32 inst_idx = 0U;
u32 s_inst_idx = 0U;
u32 element_mask_max = 0U;
#endif
u32 emc_disable_fuse_val = 0U;
u32 emc_disable_fuse_val_mask = 0xFU;
u32 emc_disable_fuse_bit_idx = 0U;
u32 emc_element_floorsweep_mask = 0U;
u32 idx = 0U;
int err;
@@ -235,16 +236,16 @@ static int t234_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
* Convert floorsweep fuse value to available EMC elements.
*/
do {
if (emc_disable_fuse_val & 0x1U) {
emc_element_floorsweep_mask =
(emc_element_floorsweep_mask << 4U) | 0xFU;
if (!(emc_disable_fuse_val & (0x1U << emc_disable_fuse_bit_idx))) {
emc_element_floorsweep_mask |=
(0xFU << (emc_disable_fuse_bit_idx * 4U));
}
emc_disable_fuse_val = (emc_disable_fuse_val >> 1U);
emc_disable_fuse_bit_idx++;
emc_disable_fuse_val_mask = (emc_disable_fuse_val_mask >> 1U);
} while (emc_disable_fuse_val_mask != 0U);
/* Set fuse value in MSS IP instances */
for (idx = 0U; idx < active_chip->get_ip_max_idx(hwpm); idx++) {
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
switch (idx) {
#if defined(CONFIG_T234_HWPM_IP_MSS_CHANNEL)
case T234_HWPM_IP_MSS_CHANNEL:
@@ -259,10 +260,11 @@ static int t234_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
defined(CONFIG_T234_HWPM_IP_MSS_ISO_NISO_HUBS) || \
defined(CONFIG_T234_HWPM_IP_MSS_MCF)
chip_ip = active_chip->chip_ips[idx];
for (inst_idx = 0U; inst_idx < chip_ip->num_instances;
inst_idx++) {
for (s_inst_idx = 0U;
s_inst_idx < chip_ip->num_instances;
s_inst_idx++) {
ip_inst = &chip_ip->ip_inst_static_array[
inst_idx];
s_inst_idx];
/*
* Hence use max element mask to get correct
@@ -362,7 +364,7 @@ int t234_hwpm_validate_current_config(struct tegra_soc_hwpm *hwpm)
return 0;
}
for (idx = 0U; idx < active_chip->get_ip_max_idx(hwpm); idx++) {
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
chip_ip = active_chip->chip_ips[idx];
if ((hwpm_global_disable !=

View File
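The fuse-conversion loop in t234_hwpm_validate_emc_config() above expands each clear bit of the 4-bit EMC disable fuse into a 4-bit group of available elements. The sketch below reproduces that loop on a sample fuse value (the value 0x2 is made up for illustration):

#include <stdio.h>

int main(void)
{
    /* Example only: bit i set means EMC channel group i is fused out. */
    unsigned int emc_disable_fuse_val = 0x2U;   /* group 1 disabled */
    unsigned int emc_disable_fuse_val_mask = 0xFU;
    unsigned int emc_disable_fuse_bit_idx = 0U;
    unsigned int emc_element_floorsweep_mask = 0U;

    /* Same shape as the loop above: every clear fuse bit contributes
     * four element bits to the availability mask. */
    do {
        if (!(emc_disable_fuse_val &
                (0x1U << emc_disable_fuse_bit_idx))) {
            emc_element_floorsweep_mask |=
                (0xFU << (emc_disable_fuse_bit_idx * 4U));
        }
        emc_disable_fuse_bit_idx++;
        emc_disable_fuse_val_mask >>= 1U;
    } while (emc_disable_fuse_val_mask != 0U);

    /* 0x2 -> groups 0, 2, 3 available -> mask 0xFF0F */
    printf("floorsweep mask = 0x%x\n", emc_element_floorsweep_mask);
    return 0;
}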

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -33,41 +33,23 @@
int t234_hwpm_disable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = tegra_hwpm_writel(hwpm, pma_perfmux,
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0), 0);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0), 0);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0), 0);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0), 0);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -79,66 +61,46 @@ int t234_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
u32 outbase_hi = 0;
u32 outsize = 0;
u64 mem_bytes_addr = 0ULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_hwpm_mem_mgmt *mem_mgmt = hwpm->mem_mgmt;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_fn(hwpm, " ");
outbase_lo = mem_mgmt->stream_buf_va & pmasys_channel_outbase_ptr_m();
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0), outbase_lo);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose, "OUTBASE = 0x%x", outbase_lo);
outbase_hi = (mem_mgmt->stream_buf_va >> 32) &
pmasys_channel_outbaseupper_ptr_m();
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0), outbase_hi);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose, "OUTBASEUPPER = 0x%x", outbase_hi);
outsize = mem_mgmt->stream_buf_size &
pmasys_channel_outsize_numbytes_m();
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0), outsize);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose, "OUTSIZE = 0x%x", outsize);
mem_bytes_addr = mem_mgmt->mem_bytes_buf_va &
pmasys_channel_mem_bytes_addr_ptr_m();
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0), mem_bytes_addr);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"MEM_BYTES_ADDR = 0x%llx", (unsigned long long)mem_bytes_addr);
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_block_r(0),
pmasys_channel_mem_block_valid_f(
pmasys_channel_mem_block_valid_true_v()));
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -146,24 +108,18 @@ int t234_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
int t234_hwpm_invalidate_mem_config(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_channel_mem_block_r(0),
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_channel_mem_block_r(0),
pmasys_channel_mem_block_valid_f(
pmasys_channel_mem_block_valid_false_v()));
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -174,34 +130,24 @@ int t234_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm)
u32 reg_val = 0U;
u32 *mem_bytes_kernel_u32 =
(u32 *)(hwpm->mem_mgmt->mem_bytes_kernel);
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
*mem_bytes_kernel_u32 = TEGRA_HWPM_MEM_BYTES_INVALID;
err = tegra_hwpm_readl(hwpm, pma_perfmux,
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val,
pmasys_channel_control_user_update_bytes_m(),
pmasys_channel_control_user_update_bytes_doit_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -210,49 +156,31 @@ int t234_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA streaming */
err = tegra_hwpm_readl(hwpm, pma_perfmux,
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val,
pmasys_trigger_config_user_record_stream_m(),
pmasys_trigger_config_user_record_stream_disable_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_readl(hwpm, pma_perfmux,
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val,
pmasys_channel_control_user_stream_m(),
pmasys_channel_control_user_stream_disable_f());
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -261,81 +189,69 @@ int t234_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump)
{
int err = 0;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
if (mem_bump > (u64)U32_MAX) {
tegra_hwpm_err(hwpm, "mem_bump is out of bounds");
return -EINVAL;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bump_r(0), mem_bump);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
u64 t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm)
int t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr)
{
int err = 0;
u32 reg_val = 0U;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return 0ULL;
}
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
return (u64)reg_val;
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0), &reg_val);
*mem_head_ptr = (u64)reg_val;
return err;
}
bool t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm)
int t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status)
{
int err = 0;
u32 reg_val, field_val;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = tegra_hwpm_readl(hwpm, pma_perfmux,
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_status_secure_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
field_val = pmasys_channel_status_secure_membuf_status_v(
reg_val);
return (field_val ==
pmasys_channel_status_secure_membuf_status_overflowed_v());
*overflow_status = (field_val ==
pmasys_channel_status_secure_membuf_status_overflowed_v()) ?
TEGRA_HWPM_MEMBUF_OVERFLOWED : TEGRA_HWPM_MEMBUF_NOT_OVERFLOWED;
return err;
}
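The two signature changes above (t234_hwpm_get_mem_bytes_put_ptr() and t234_hwpm_membuf_overflow_status()) move the raw result into an out-parameter so a failed perfmux lookup or register read is reported through the int return value instead of being folded into the data. A minimal caller sketch, assuming only the new signatures and the TEGRA_HWPM_MEMBUF_* values shown in this diff; the function name and the -EIO mapping are illustrative:

/* Illustrative caller; the callee signatures, tegra_hwpm_dbg() usage and
 * TEGRA_HWPM_MEMBUF_* values come from this diff, the rest is hypothetical. */
static int example_check_stream_state(struct tegra_soc_hwpm *hwpm)
{
	u64 put_ptr = 0ULL;
	u32 overflow = 0U;
	int err;

	err = t234_hwpm_get_mem_bytes_put_ptr(hwpm, &put_ptr);
	if (err != 0)
		return err;	/* perfmux lookup/read failures now propagate */
	tegra_hwpm_dbg(hwpm, hwpm_verbose,
		"mem_head = 0x%llx", (unsigned long long)put_ptr);

	err = t234_hwpm_membuf_overflow_status(hwpm, &overflow);
	if (err != 0)
		return err;

	return (overflow == TEGRA_HWPM_MEMBUF_OVERFLOWED) ? -EIO : 0;
}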

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -252,11 +252,8 @@ struct allowlist t234_pva0_pm_alist[9] = {
{0x00008020, true},
};
struct allowlist t234_nvdla_alist[37] = {
{0x00001088, false},
struct allowlist t234_nvdla_alist[31] = {
{0x000010a8, false},
{0x0001a000, false},
{0x0001a004, false},
{0x0001a008, true},
{0x0001a00c, true},
{0x0001a010, true},
@@ -287,9 +284,6 @@ struct allowlist t234_nvdla_alist[37] = {
{0x0001a074, true},
{0x0001a078, true},
{0x0001a07c, true},
{0x00000008, true},
{0x00000a00, true},
{0x00000a20, true},
};
struct allowlist t234_mgbe_alist[2] = {

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -35,7 +35,7 @@ extern struct allowlist t234_isp_thi_alist[7];
extern struct allowlist t234_vic_alist[9];
extern struct allowlist t234_ofa_alist[8];
extern struct allowlist t234_pva0_pm_alist[9];
extern struct allowlist t234_nvdla_alist[37];
extern struct allowlist t234_nvdla_alist[31];
extern struct allowlist t234_mgbe_alist[2];
extern struct allowlist t234_nvdec_alist[8];
extern struct allowlist t234_nvenc_alist[9];
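Trimming t234_nvdla_alist from 37 to 31 entries has to be mirrored here, since both the definition and this extern declaration carry the array size. These tables gate register access (regops) requests: each entry pairs a register offset with a flag. A rough sketch of that kind of lookup, with hypothetical field names and helper; only the array names and sizes come from this diff:

/* Hypothetical illustration; the field names and the helper are not from
 * the driver, only the {offset, flag} shape of the entries is. */
struct example_allowlist_entry {
	u64 reg_offset;
	bool flag;
};

static bool example_offset_allowed(const struct example_allowlist_entry *alist,
		size_t alist_size, u64 offset)
{
	size_t i;

	for (i = 0U; i < alist_size; i++) {
		if (alist[i].reg_offset == offset)
			return true;
	}
	return false;
}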

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -33,7 +33,6 @@
int t234_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon)
{
int err = 0;
u32 reg_val;
tegra_hwpm_fn(hwpm, " ");
@@ -44,20 +43,12 @@ int t234_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa);
err = tegra_hwpm_readl(hwpm, perfmon,
tegra_hwpm_readl(hwpm, perfmon,
pmmsys_sys0_enginestatus_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmmsys_sys0_enginestatus_enable_m(),
pmmsys_sys0_enginestatus_enable_out_f());
err = tegra_hwpm_writel(hwpm, perfmon,
tegra_hwpm_writel(hwpm, perfmon,
pmmsys_sys0_enginestatus_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -65,7 +56,6 @@ int t234_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
int t234_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon)
{
int err = 0;
u32 reg_val;
tegra_hwpm_fn(hwpm, " ");
@@ -84,18 +74,10 @@ int t234_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa);
err = tegra_hwpm_readl(hwpm, perfmon, pmmsys_control_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_readl(hwpm, perfmon, pmmsys_control_r(0), &reg_val);
reg_val = set_field(reg_val, pmmsys_control_mode_m(),
pmmsys_control_mode_disable_f());
err = tegra_hwpm_writel(hwpm, perfmon, pmmsys_control_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_writel(hwpm, perfmon, pmmsys_control_r(0), reg_val);
return 0;
}
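Both perfmon paths above rely on set_field() together with the generated _m()/_f() macros for their read-modify-write sequences. For reference, a typical shape of that helper and of the resulting expression is sketched below; the set_field() definition is a common pattern and is assumed rather than copied from this tree:

/* Assumed, typical definition: clear the bits selected by mask and OR in
 * the pre-shifted field value. */
#define set_field(val, mask, field)	(((val) & ~(mask)) | (field))

/* The disable path above then reduces to (illustrative helper name): */
static inline u32 example_control_mode_disable(u32 reg_val)
{
	return set_field(reg_val, pmmsys_control_mode_m(),
			pmmsys_control_mode_disable_f());
}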

View File

@@ -0,0 +1,355 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef T264_ADDR_MAP_SOC_HWPM_H
#define T264_ADDR_MAP_SOC_HWPM_H
#define addr_map_rpg_grp_system_base_r() (0x1600000U)
#define addr_map_rpg_grp_system_limit_r() (0x16fffffU)
#define addr_map_rpg_grp_ucf_base_r() (0x8101600000U)
#define addr_map_rpg_grp_ucf_limit_r() (0x81016fffffU)
#define addr_map_rpg_grp_vision_base_r() (0x8181600000U)
#define addr_map_rpg_grp_vision_limit_r() (0x81816fffffU)
#define addr_map_rpg_grp_disp_usb_base_r() (0x8801600000U)
#define addr_map_rpg_grp_disp_usb_limit_r() (0x88016fffffU)
#define addr_map_rpg_grp_uphy0_base_r() (0xa801600000U)
#define addr_map_rpg_grp_uphy0_limit_r() (0xa8016fffffU)
#define addr_map_rpg_pm_hwpm_base_r() (0x1604000U)
#define addr_map_rpg_pm_hwpm_limit_r() (0x1604fffU)
#define addr_map_pma_base_r() (0x1610000U)
#define addr_map_pma_limit_r() (0x1611fffU)
#define addr_map_rtr_base_r() (0x1612000U)
#define addr_map_rtr_limit_r() (0x1612fffU)
#define addr_map_rpg_pm_mss0_base_r() (0x8101621000U)
#define addr_map_rpg_pm_mss0_limit_r() (0x8101621fffU)
#define addr_map_rpg_pm_mss1_base_r() (0x8101622000U)
#define addr_map_rpg_pm_mss1_limit_r() (0x8101622fffU)
#define addr_map_rpg_pm_mss2_base_r() (0x8101623000U)
#define addr_map_rpg_pm_mss2_limit_r() (0x8101623fffU)
#define addr_map_rpg_pm_mss3_base_r() (0x8101624000U)
#define addr_map_rpg_pm_mss3_limit_r() (0x8101624fffU)
#define addr_map_rpg_pm_mss4_base_r() (0x8101625000U)
#define addr_map_rpg_pm_mss4_limit_r() (0x8101625fffU)
#define addr_map_rpg_pm_mss5_base_r() (0x8101626000U)
#define addr_map_rpg_pm_mss5_limit_r() (0x8101626fffU)
#define addr_map_rpg_pm_mss6_base_r() (0x8101627000U)
#define addr_map_rpg_pm_mss6_limit_r() (0x8101627fffU)
#define addr_map_rpg_pm_mss7_base_r() (0x8101628000U)
#define addr_map_rpg_pm_mss7_limit_r() (0x8101628fffU)
#define addr_map_rpg_pm_mss8_base_r() (0x8101629000U)
#define addr_map_rpg_pm_mss8_limit_r() (0x8101629fffU)
#define addr_map_rpg_pm_mss9_base_r() (0x810162a000U)
#define addr_map_rpg_pm_mss9_limit_r() (0x810162afffU)
#define addr_map_rpg_pm_mss10_base_r() (0x810162b000U)
#define addr_map_rpg_pm_mss10_limit_r() (0x810162bfffU)
#define addr_map_rpg_pm_mss11_base_r() (0x810162c000U)
#define addr_map_rpg_pm_mss11_limit_r() (0x810162cfffU)
#define addr_map_rpg_pm_mss12_base_r() (0x810162d000U)
#define addr_map_rpg_pm_mss12_limit_r() (0x810162dfffU)
#define addr_map_rpg_pm_mss13_base_r() (0x810162e000U)
#define addr_map_rpg_pm_mss13_limit_r() (0x810162efffU)
#define addr_map_rpg_pm_mss14_base_r() (0x810162f000U)
#define addr_map_rpg_pm_mss14_limit_r() (0x810162ffffU)
#define addr_map_rpg_pm_mss15_base_r() (0x8101630000U)
#define addr_map_rpg_pm_mss15_limit_r() (0x8101630fffU)
#define addr_map_mcb_base_r() (0x8108020000U)
#define addr_map_mcb_limit_r() (0x810803ffffU)
#define addr_map_mc0_base_r() (0x8108040000U)
#define addr_map_mc0_limit_r() (0x810805ffffU)
#define addr_map_mc1_base_r() (0x8108060000U)
#define addr_map_mc1_limit_r() (0x810807ffffU)
#define addr_map_mc2_base_r() (0x8108080000U)
#define addr_map_mc2_limit_r() (0x810809ffffU)
#define addr_map_mc3_base_r() (0x81080a0000U)
#define addr_map_mc3_limit_r() (0x81080bffffU)
#define addr_map_mc4_base_r() (0x81080c0000U)
#define addr_map_mc4_limit_r() (0x81080dffffU)
#define addr_map_mc5_base_r() (0x81080e0000U)
#define addr_map_mc5_limit_r() (0x81080fffffU)
#define addr_map_mc6_base_r() (0x8108100000U)
#define addr_map_mc6_limit_r() (0x810811ffffU)
#define addr_map_mc7_base_r() (0x8108120000U)
#define addr_map_mc7_limit_r() (0x810813ffffU)
#define addr_map_mc8_base_r() (0x8108140000U)
#define addr_map_mc8_limit_r() (0x810815ffffU)
#define addr_map_mc9_base_r() (0x8108160000U)
#define addr_map_mc9_limit_r() (0x810817ffffU)
#define addr_map_mc10_base_r() (0x8108180000U)
#define addr_map_mc10_limit_r() (0x810819ffffU)
#define addr_map_mc11_base_r() (0x81081a0000U)
#define addr_map_mc11_limit_r() (0x81081bffffU)
#define addr_map_mc12_base_r() (0x81081c0000U)
#define addr_map_mc12_limit_r() (0x81081dffffU)
#define addr_map_mc13_base_r() (0x81081e0000U)
#define addr_map_mc13_limit_r() (0x81081fffffU)
#define addr_map_mc14_base_r() (0x8108200000U)
#define addr_map_mc14_limit_r() (0x810821ffffU)
#define addr_map_mc15_base_r() (0x8108220000U)
#define addr_map_mc15_limit_r() (0x810823ffffU)
#define addr_map_rpg_pm_pvac0_base_r() (0x8181605000U)
#define addr_map_rpg_pm_pvac0_limit_r() (0x8181605fffU)
#define addr_map_rpg_pm_pvav0_base_r() (0x8181606000U)
#define addr_map_rpg_pm_pvav0_limit_r() (0x8181606fffU)
#define addr_map_rpg_pm_pvav1_base_r() (0x8181607000U)
#define addr_map_rpg_pm_pvav1_limit_r() (0x8181607fffU)
#define addr_map_rpg_pm_pvap0_base_r() (0x818160e000U)
#define addr_map_rpg_pm_pvap0_limit_r() (0x818160efffU)
#define addr_map_rpg_pm_pvap1_base_r() (0x818160f000U)
#define addr_map_rpg_pm_pvap1_limit_r() (0x818160ffffU)
#define addr_map_pva0_pm_base_r() (0x818c200000U)
#define addr_map_pva0_pm_limit_r() (0x818c20ffffU)
#define addr_map_pva1_pm_base_r() (0x818cb00000U)
#define addr_map_pva1_pm_limit_r() (0x818cb0ffffU)
#define addr_map_rpg_pm_vic0_base_r() (0x8181604000U)
#define addr_map_rpg_pm_vic0_limit_r() (0x8181604fffU)
#define addr_map_vic_base_r() (0x8188050000U)
#define addr_map_vic_limit_r() (0x818808ffffU)
#define addr_map_rpg_pm_system_msshub0_base_r() (0x1600000U)
#define addr_map_rpg_pm_system_msshub0_limit_r() (0x1600fffU)
#define addr_map_rpg_pm_ucf_msshub0_base_r() (0x810163e000U)
#define addr_map_rpg_pm_ucf_msshub0_limit_r() (0x810163efffU)
#define addr_map_rpg_pm_ucf_msshub1_base_r() (0x810163f000U)
#define addr_map_rpg_pm_ucf_msshub1_limit_r() (0x810163ffffU)
#define addr_map_rpg_pm_ucf_msshub2_base_r() (0x810164f000U)
#define addr_map_rpg_pm_ucf_msshub2_limit_r() (0x810164ffffU)
#define addr_map_rpg_pm_vision_msshub0_base_r() (0x818160b000U)
#define addr_map_rpg_pm_vision_msshub0_limit_r() (0x818160bfffU)
#define addr_map_rpg_pm_vision_msshub1_base_r() (0x818160c000U)
#define addr_map_rpg_pm_vision_msshub1_limit_r() (0x818160cfffU)
#define addr_map_rpg_pm_disp_usb_msshub0_base_r() (0x8801601000U)
#define addr_map_rpg_pm_disp_usb_msshub0_limit_r() (0x8801601fffU)
#define addr_map_rpg_pm_uphy0_msshub0_base_r() (0xa801628000U)
#define addr_map_rpg_pm_uphy0_msshub0_limit_r() (0xa801628fffU)
#define addr_map_rpg_pm_uphy0_msshub1_base_r() (0xa801629000U)
#define addr_map_rpg_pm_uphy0_msshub1_limit_r() (0xa801629fffU)
#define addr_map_rpg_pm_ocu_base_r() (0xa801604000U)
#define addr_map_rpg_pm_ocu_limit_r() (0xa801604fffU)
#define addr_map_ocu_base_r() (0xa808740000U)
#define addr_map_ocu_limit_r() (0xa80874ffffU)
#define addr_map_rpg_pm_ucf_smmu0_base_r() (0x8101642000U)
#define addr_map_rpg_pm_ucf_smmu0_limit_r() (0x8101642fffU)
#define addr_map_rpg_pm_ucf_smmu1_base_r() (0x8101643000U)
#define addr_map_rpg_pm_ucf_smmu1_limit_r() (0x8101643fffU)
#define addr_map_rpg_pm_ucf_smmu3_base_r() (0x810164b000U)
#define addr_map_rpg_pm_ucf_smmu3_limit_r() (0x810164bfffU)
#define addr_map_rpg_pm_ucf_smmu2_base_r() (0x8101653000U)
#define addr_map_rpg_pm_ucf_smmu2_limit_r() (0x8101653fffU)
#define addr_map_rpg_pm_disp_usb_smmu0_base_r() (0x8801602000U)
#define addr_map_rpg_pm_disp_usb_smmu0_limit_r() (0x8801602fffU)
#define addr_map_smmu1_base_r() (0x8105a30000U)
#define addr_map_smmu1_limit_r() (0x8105a3ffffU)
#define addr_map_smmu2_base_r() (0x8106a30000U)
#define addr_map_smmu2_limit_r() (0x8106a3ffffU)
#define addr_map_smmu0_base_r() (0x810aa30000U)
#define addr_map_smmu0_limit_r() (0x810aa3ffffU)
#define addr_map_smmu4_base_r() (0x810ba30000U)
#define addr_map_smmu4_limit_r() (0x810ba3ffffU)
#define addr_map_smmu3_base_r() (0x8806a30000U)
#define addr_map_smmu3_limit_r() (0x8806a3ffffU)
#define addr_map_rpg_pm_ucf_msw0_base_r() (0x8101600000U)
#define addr_map_rpg_pm_ucf_msw0_limit_r() (0x8101600fffU)
#define addr_map_rpg_pm_ucf_msw1_base_r() (0x8101601000U)
#define addr_map_rpg_pm_ucf_msw1_limit_r() (0x8101601fffU)
#define addr_map_rpg_pm_ucf_msw2_base_r() (0x8101602000U)
#define addr_map_rpg_pm_ucf_msw2_limit_r() (0x8101602fffU)
#define addr_map_rpg_pm_ucf_msw3_base_r() (0x8101603000U)
#define addr_map_rpg_pm_ucf_msw3_limit_r() (0x8101603fffU)
#define addr_map_rpg_pm_ucf_msw4_base_r() (0x8101604000U)
#define addr_map_rpg_pm_ucf_msw4_limit_r() (0x8101604fffU)
#define addr_map_rpg_pm_ucf_msw5_base_r() (0x8101605000U)
#define addr_map_rpg_pm_ucf_msw5_limit_r() (0x8101605fffU)
#define addr_map_rpg_pm_ucf_msw6_base_r() (0x8101606000U)
#define addr_map_rpg_pm_ucf_msw6_limit_r() (0x8101606fffU)
#define addr_map_rpg_pm_ucf_msw7_base_r() (0x8101607000U)
#define addr_map_rpg_pm_ucf_msw7_limit_r() (0x8101607fffU)
#define addr_map_rpg_pm_ucf_msw8_base_r() (0x8101608000U)
#define addr_map_rpg_pm_ucf_msw8_limit_r() (0x8101608fffU)
#define addr_map_rpg_pm_ucf_msw9_base_r() (0x8101609000U)
#define addr_map_rpg_pm_ucf_msw9_limit_r() (0x8101609fffU)
#define addr_map_rpg_pm_ucf_msw10_base_r() (0x810160a000U)
#define addr_map_rpg_pm_ucf_msw10_limit_r() (0x810160afffU)
#define addr_map_rpg_pm_ucf_msw11_base_r() (0x810160b000U)
#define addr_map_rpg_pm_ucf_msw11_limit_r() (0x810160bfffU)
#define addr_map_rpg_pm_ucf_msw12_base_r() (0x810160c000U)
#define addr_map_rpg_pm_ucf_msw12_limit_r() (0x810160cfffU)
#define addr_map_rpg_pm_ucf_msw13_base_r() (0x810160d000U)
#define addr_map_rpg_pm_ucf_msw13_limit_r() (0x810160dfffU)
#define addr_map_rpg_pm_ucf_msw14_base_r() (0x810160e000U)
#define addr_map_rpg_pm_ucf_msw14_limit_r() (0x810160efffU)
#define addr_map_rpg_pm_ucf_msw15_base_r() (0x810160f000U)
#define addr_map_rpg_pm_ucf_msw15_limit_r() (0x810160ffffU)
#define addr_map_ucf_msn0_msw_base_r() (0x8128000000U)
#define addr_map_ucf_msn0_msw_limit_r() (0x8128000080U)
#define addr_map_ucf_msn1_msw_base_r() (0x8128200000U)
#define addr_map_ucf_msn1_msw_limit_r() (0x8128200080U)
#define addr_map_ucf_msn2_msw_base_r() (0x8128400000U)
#define addr_map_ucf_msn2_msw_limit_r() (0x8128400080U)
#define addr_map_ucf_msn3_msw_base_r() (0x8128600000U)
#define addr_map_ucf_msn3_msw_limit_r() (0x8128600080U)
#define addr_map_ucf_msn4_msw_base_r() (0x8128800000U)
#define addr_map_ucf_msn4_msw_limit_r() (0x8128800080U)
#define addr_map_ucf_msn5_msw_base_r() (0x8128a00000U)
#define addr_map_ucf_msn5_msw_limit_r() (0x8128a00080U)
#define addr_map_ucf_msn6_msw_base_r() (0x8128c00000U)
#define addr_map_ucf_msn6_msw_limit_r() (0x8128c00080U)
#define addr_map_ucf_msn7_msw_base_r() (0x8128e00000U)
#define addr_map_ucf_msn7_msw_limit_r() (0x8128e00080U)
#define addr_map_ucf_msn0_slice0_base_r() (0x812a040000U)
#define addr_map_ucf_msn0_slice0_limit_r() (0x812a040080U)
#define addr_map_ucf_msn0_slice1_base_r() (0x812a140000U)
#define addr_map_ucf_msn0_slice1_limit_r() (0x812a140080U)
#define addr_map_ucf_msn1_slice0_base_r() (0x812a240000U)
#define addr_map_ucf_msn1_slice0_limit_r() (0x812a240080U)
#define addr_map_ucf_msn1_slice1_base_r() (0x812a340000U)
#define addr_map_ucf_msn1_slice1_limit_r() (0x812a340080U)
#define addr_map_ucf_msn2_slice0_base_r() (0x812a440000U)
#define addr_map_ucf_msn2_slice0_limit_r() (0x812a440080U)
#define addr_map_ucf_msn2_slice1_base_r() (0x812a540000U)
#define addr_map_ucf_msn2_slice1_limit_r() (0x812a540080U)
#define addr_map_ucf_msn3_slice0_base_r() (0x812a640000U)
#define addr_map_ucf_msn3_slice0_limit_r() (0x812a640080U)
#define addr_map_ucf_msn3_slice1_base_r() (0x812a740000U)
#define addr_map_ucf_msn3_slice1_limit_r() (0x812a740080U)
#define addr_map_ucf_msn4_slice0_base_r() (0x812a840000U)
#define addr_map_ucf_msn4_slice0_limit_r() (0x812a840080U)
#define addr_map_ucf_msn4_slice1_base_r() (0x812a940000U)
#define addr_map_ucf_msn4_slice1_limit_r() (0x812a940080U)
#define addr_map_ucf_msn5_slice0_base_r() (0x812aa40000U)
#define addr_map_ucf_msn5_slice0_limit_r() (0x812aa40080U)
#define addr_map_ucf_msn5_slice1_base_r() (0x812ab40000U)
#define addr_map_ucf_msn5_slice1_limit_r() (0x812ab40080U)
#define addr_map_ucf_msn6_slice0_base_r() (0x812ac40000U)
#define addr_map_ucf_msn6_slice0_limit_r() (0x812ac40080U)
#define addr_map_ucf_msn6_slice1_base_r() (0x812ad40000U)
#define addr_map_ucf_msn6_slice1_limit_r() (0x812ad40080U)
#define addr_map_ucf_msn7_slice0_base_r() (0x812ae40000U)
#define addr_map_ucf_msn7_slice0_limit_r() (0x812ae40080U)
#define addr_map_ucf_msn7_slice1_base_r() (0x812af40000U)
#define addr_map_ucf_msn7_slice1_limit_r() (0x812af40080U)
#define addr_map_rpg_pm_ucf_psw0_base_r() (0x8101644000U)
#define addr_map_rpg_pm_ucf_psw0_limit_r() (0x8101644fffU)
#define addr_map_rpg_pm_ucf_psw1_base_r() (0x8101645000U)
#define addr_map_rpg_pm_ucf_psw1_limit_r() (0x8101645fffU)
#define addr_map_rpg_pm_ucf_psw2_base_r() (0x8101646000U)
#define addr_map_rpg_pm_ucf_psw2_limit_r() (0x8101646fffU)
#define addr_map_rpg_pm_ucf_psw3_base_r() (0x8101647000U)
#define addr_map_rpg_pm_ucf_psw3_limit_r() (0x8101647fffU)
#define addr_map_ucf_psn0_psw_base_r() (0x8130080000U)
#define addr_map_ucf_psn0_psw_limit_r() (0x8130080020U)
#define addr_map_ucf_psn1_psw_base_r() (0x8130480000U)
#define addr_map_ucf_psn1_psw_limit_r() (0x8130480020U)
#define addr_map_ucf_psn2_psw_base_r() (0x8130880000U)
#define addr_map_ucf_psn2_psw_limit_r() (0x8130880020U)
#define addr_map_ucf_psn3_psw_base_r() (0x8130c80000U)
#define addr_map_ucf_psn3_psw_limit_r() (0x8130c80020U)
#define addr_map_rpg_pm_ucf_vddmss0_base_r() (0x8101631000U)
#define addr_map_rpg_pm_ucf_vddmss0_limit_r() (0x8101631fffU)
#define addr_map_rpg_pm_ucf_vddmss1_base_r() (0x8101632000U)
#define addr_map_rpg_pm_ucf_vddmss1_limit_r() (0x8101632fffU)
#define addr_map_ucf_csw0_base_r() (0x8122000000U)
#define addr_map_ucf_csw0_limit_r() (0x8122000080U)
#define addr_map_ucf_csw1_base_r() (0x8122400000U)
#define addr_map_ucf_csw1_limit_r() (0x8122400080U)
#define addr_map_rpg_pm_cpu_core_base_r() (0x14100000U)
#define addr_map_rpg_pm_cpu_core_base_width_v() (0x00000014U)
#define addr_map_cpucore0_base_r() (0x8132030000U)
#define addr_map_cpucore0_base_size_v() (0x00001000U)
#define addr_map_cpucore1_base_r() (0x8132130000U)
#define addr_map_cpucore1_base_size_v() (0x00001000U)
#define addr_map_cpucore2_base_r() (0x8132230000U)
#define addr_map_cpucore2_base_size_v() (0x00001000U)
#define addr_map_cpucore3_base_r() (0x8132330000U)
#define addr_map_cpucore3_base_size_v() (0x00001000U)
#define addr_map_cpucore4_base_r() (0x8132430000U)
#define addr_map_cpucore4_base_size_v() (0x00001000U)
#define addr_map_cpucore5_base_r() (0x8132530000U)
#define addr_map_cpucore5_base_size_v() (0x00001000U)
#define addr_map_cpucore6_base_r() (0x8132630000U)
#define addr_map_cpucore6_base_size_v() (0x00001000U)
#define addr_map_cpucore7_base_r() (0x8132730000U)
#define addr_map_cpucore7_base_size_v() (0x00001000U)
#define addr_map_cpucore8_base_r() (0x8132830000U)
#define addr_map_cpucore8_base_size_v() (0x00001000U)
#define addr_map_cpucore9_base_r() (0x8132930000U)
#define addr_map_cpucore9_base_size_v() (0x00001000U)
#define addr_map_cpucore10_base_r() (0x8132a30000U)
#define addr_map_cpucore10_base_size_v() (0x00001000U)
#define addr_map_cpucore11_base_r() (0x8132b30000U)
#define addr_map_cpucore11_base_size_v() (0x00001000U)
#define addr_map_cpucore12_base_r() (0x8132c30000U)
#define addr_map_cpucore12_base_size_v() (0x00001000U)
#define addr_map_cpucore13_base_r() (0x8132d30000U)
#define addr_map_cpucore13_base_size_v() (0x00001000U)
#define addr_map_rpg_pm_vi0_base_r() (0x8181600000U)
#define addr_map_rpg_pm_vi0_limit_r() (0x8181600fffU)
#define addr_map_rpg_pm_vi1_base_r() (0x8181601000U)
#define addr_map_rpg_pm_vi1_limit_r() (0x8181601fffU)
#define addr_map_vi_thi_base_r() (0x8188700000U)
#define addr_map_vi_thi_limit_r() (0x81887fffffU)
#define addr_map_vi2_thi_base_r() (0x8188f00000U)
#define addr_map_vi2_thi_limit_r() (0x8188ffffffU)
#define addr_map_rpg_pm_isp0_base_r() (0x8181602000U)
#define addr_map_rpg_pm_isp0_limit_r() (0x8181602fffU)
#define addr_map_rpg_pm_isp1_base_r() (0x8181603000U)
#define addr_map_rpg_pm_isp1_limit_r() (0x8181603fffU)
#define addr_map_isp_thi_base_r() (0x8188b00000U)
#define addr_map_isp_thi_limit_r() (0x8188bfffffU)
#define addr_map_isp1_thi_base_r() (0x818ab00000U)
#define addr_map_isp1_thi_limit_r() (0x818abfffffU)
#define addr_map_pmc_misc_base_r() (0xc9c0000U)
#endif /* T264_ADDR_MAP_SOC_HWPM_H */
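Every *_base_r()/*_limit_r() pair above describes a fixed physical aperture, with the limit inclusive. A small sketch of the intended use, here checking whether an absolute address falls inside the PMA aperture; the helper name is illustrative and not part of the generated header:

/* Illustrative helper; only the two macros come from this header. */
static inline bool example_addr_in_pma(u64 abs_addr)
{
	return (abs_addr >= addr_map_pma_base_r()) &&
		(abs_addr <= addr_map_pma_limit_r());
}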

View File

@@ -0,0 +1,192 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef T264_PMASYS_SOC_HWPM_H
#define T264_PMASYS_SOC_HWPM_H
#define pmasys_channel_control_user_r(i,j)\
(0x1610a10U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_control_user_update_bytes_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_control_user_update_bytes_m() (0x1U << 16U)
#define pmasys_channel_control_user_update_bytes_doit_v() (0x00000001U)
#define pmasys_channel_control_user_update_bytes_doit_f() (0x10000U)
#define pmasys_channel_control_user_membuf_clear_status_m() (0x1U << 1U)
#define pmasys_channel_control_user_membuf_clear_status_doit_f() (0x2U)
#define pmasys_channel_mem_bump_r(i,j) (0x1610a14U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outbase_r(i,j) (0x1610a28U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outbase_ptr_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outbase_ptr_m() (0x7ffffffU << 5U)
#define pmasys_channel_outbase_ptr_v(r) (((r) >> 5U) & 0x7ffffffU)
#define pmasys_channel_outbase_ptr_init_f() (0x0U)
#define pmasys_channel_outbaseupper_r(i,j)\
(0x1610a2cU + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outbaseupper_ptr_f(v) (((v) & 0x1ffffffU) << 0U)
#define pmasys_channel_outbaseupper_ptr_m() (0x1ffffffU << 0U)
#define pmasys_channel_outbaseupper_ptr_v(r) (((r) >> 0U) & 0x1ffffffU)
#define pmasys_channel_outbaseupper_ptr_init_f() (0x0U)
#define pmasys_channel_outsize_r(i,j) (0x1610a30U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outsize_numbytes_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outsize_numbytes_m() (0x7ffffffU << 5U)
#define pmasys_channel_outsize_numbytes_init_f() (0x0U)
#define pmasys_channel_mem_head_r(i,j) (0x1610a34U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_mem_head_ptr_m() (0xfffffffU << 4U)
#define pmasys_channel_mem_head_ptr_init_f() (0x0U)
#define pmasys_channel_mem_bytes_r(i,j)\
(0x1610a38U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_mem_bytes_numbytes_m() (0xfffffffU << 4U)
#define pmasys_channel_mem_bytes_numbytes_init_f() (0x0U)
#define pmasys_channel_mem_bytes_addr_r(i,j)\
(0x1610a3cU + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_mem_bytes_addr_ptr_f(v) (((v) & 0x3fffffffU) << 2U)
#define pmasys_channel_mem_bytes_addr_ptr_m() (0x3fffffffU << 2U)
#define pmasys_channel_mem_bytes_addr_ptr_init_f() (0x0U)
#define pmasys_cblock_bpc_mem_block_r(i) (0x1611e04U + ((i)*32U))
#define pmasys_cblock_bpc_mem_block_base_m() (0xffffffffU << 0U)
#define pmasys_cblock_bpc_mem_blockupper_r(i) (0x1611e08U + ((i)*32U))
#define pmasys_cblock_bpc_mem_blockupper_valid_f(v) (((v) & 0x1U) << 31U)
#define pmasys_cblock_bpc_mem_blockupper_valid_false_v() (0x00000000U)
#define pmasys_cblock_bpc_mem_blockupper_valid_true_v() (0x00000001U)
#define pmasys_channel_config_user_r(i,j)\
(0x1610a24U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_config_user_stream_f(v) (((v) & 0x1U) << 0U)
#define pmasys_channel_config_user_stream_m() (0x1U << 0U)
#define pmasys_channel_config_user_stream_disable_f() (0x0U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_f(v)\
(((v) & 0x7U) << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_m() (0x7U << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_v()\
(0x00000004U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_f()\
(0x4000000U)
#define pmasys_channel_status_r(i,j) (0x1610a00U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_status_engine_status_m() (0x7U << 0U)
#define pmasys_channel_status_engine_status_empty_v() (0x00000000U)
#define pmasys_channel_status_engine_status_empty_f() (0x0U)
#define pmasys_channel_status_engine_status_active_v() (0x00000001U)
#define pmasys_channel_status_engine_status_paused_v() (0x00000002U)
#define pmasys_channel_status_engine_status_quiescent_v() (0x00000003U)
#define pmasys_channel_status_engine_status_stalled_v() (0x00000005U)
#define pmasys_channel_status_engine_status_faulted_v() (0x00000006U)
#define pmasys_channel_status_engine_status_halted_v() (0x00000007U)
#define pmasys_channel_status_membuf_status_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_status_membuf_status_m() (0x1U << 16U)
#define pmasys_channel_status_membuf_status_v(r) (((r) >> 16U) & 0x1U)
#define pmasys_channel_status_membuf_status_overflowed_v() (0x00000001U)
#define pmasys_channel_status_membuf_status_init_f() (0x0U)
#define pmasys_command_slice_trigger_start_mask0_r(i) (0x1611128U + ((i)*144U))
#define pmasys_command_slice_trigger_start_mask0_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_start_mask0_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_start_mask1_r(i) (0x161112cU + ((i)*144U))
#define pmasys_command_slice_trigger_start_mask1_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_start_mask1_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_stop_mask0_r(i) (0x1611130U + ((i)*144U))
#define pmasys_command_slice_trigger_stop_mask0_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_stop_mask0_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_stop_mask1_r(i) (0x1611134U + ((i)*144U))
#define pmasys_command_slice_trigger_stop_mask1_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_stop_mask1_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_config_user_r(i) (0x161111cU + ((i)*144U))
#define pmasys_command_slice_trigger_config_user_pma_pulse_f(v)\
(((v) & 0x1U) << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_m() (0x1U << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_f() (0x0U)
#define pmasys_command_slice_trigger_config_user_record_stream_f(v)\
(((v) & 0x1U) << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_m() (0x1U << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_f()\
(0x0U)
#define pmasys_streaming_capabilities1_r() (0x16109f4U)
#define pmasys_streaming_capabilities1_local_credits_f(v) (((v) & 0x1ffU) << 0U)
#define pmasys_streaming_capabilities1_local_credits_m() (0x1ffU << 0U)
#define pmasys_streaming_capabilities1_local_credits_init_v() (0x00000100U)
#define pmasys_streaming_capabilities1_total_credits_f(v) (((v) & 0x7ffU) << 9U)
#define pmasys_streaming_capabilities1_total_credits_m() (0x7ffU << 9U)
#define pmasys_streaming_capabilities1_total_credits_v(r) (((r) >> 9U) & 0x7ffU)
#define pmasys_streaming_capabilities1_total_credits_init_f() (0x20000U)
#define pmasys_command_slice_trigger_mask_secure0_r(i) (0x1611110U + ((i)*144U))
#define pmasys_command_slice_trigger_mask_secure0_engine_f(v)\
(((v) & 0xffffffffU) << 0U)
#define pmasys_command_slice_trigger_mask_secure0_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_record_select_secure_r(i) (0x1611180U + ((i)*144U))
#define pmasys_command_slice_record_select_secure_trigger_select_f(v)\
(((v) & 0x3fU) << 0U)
#define pmasys_command_slice_record_select_secure_trigger_select_m()\
(0x3fU << 0U)
#define pmasys_profiling_cg2_secure_r() (0x1610844U)
#define pmasys_profiling_cg2_secure_slcg_f(v) (((v) & 0x1U) << 0U)
#define pmasys_profiling_cg2_secure_slcg_m() (0x1U << 0U)
#define pmasys_profiling_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmasys_profiling_cg2_secure_slcg_enabled_f() (0x0U)
#define pmasys_profiling_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmasys_profiling_cg2_secure_slcg__prod_f() (0x0U)
#define pmasys_profiling_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmasys_profiling_cg2_secure_slcg_disabled_f() (0x1U)
#define pmasys_profiling_cg1_secure_r() (0x1610848U)
#define pmasys_profiling_cg1_secure_flcg_f(v) (((v) & 0x1U) << 31U)
#define pmasys_profiling_cg1_secure_flcg_m() (0x1U << 31U)
#define pmasys_profiling_cg1_secure_flcg_enabled_v() (0x00000001U)
#define pmasys_profiling_cg1_secure_flcg_enabled_f() (0x80000000U)
#define pmasys_profiling_cg1_secure_flcg__prod_v() (0x00000001U)
#define pmasys_profiling_cg1_secure_flcg__prod_f() (0x80000000U)
#define pmasys_profiling_cg1_secure_flcg_disabled_v() (0x00000000U)
#define pmasys_profiling_cg1_secure_flcg_disabled_f() (0x0U)
#endif /* T264_PMASYS_SOC_HWPM_H */
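These channel registers are indexed with two parameters (i, j), and the field macros follow the convention documented at the top of the file: _m() gives the in-place mask, _f() the pre-shifted value. A short sketch of the resulting field update, using the UPDATE_BYTES field defined above; the helper name is illustrative:

/* Illustrative only; both macros are defined in this header. */
static inline u32 example_request_mem_bytes_update(u32 reg_val)
{
	/* Clear UPDATE_BYTES, then set it to DOIT (0x1 << 16). */
	reg_val &= ~pmasys_channel_control_user_update_bytes_m();
	reg_val |= pmasys_channel_control_user_update_bytes_doit_f();
	return reg_val;
}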

View File

@@ -0,0 +1,170 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef T264_PMMSYS_SOC_HWPM_H
#define T264_PMMSYS_SOC_HWPM_H
#define pmmsys_perdomain_offset_v() (0x00001000U)
#define pmmsys_user_channel_register_stride_v() (0x00000020U)
#define pmmsys_num_user_command_slices_v() (0x00000002U)
#define pmmsys_num_cblocks_v() (0x00000001U)
#define pmmsys_num_streaming_channels_v() (0x00000002U)
#define pmmsys_num_channels_per_cblock_v() (0x00000002U)
#define pmmsys_cblock_stride_v() (0x00000020U)
#define pmmsys_channel_stride_v() (0x00000010U)
#define pmmsys_dg_bitmap_array_size_v() (0x00000008U)
#define pmmsys_control_r(i) (0x160009cU + ((i)*4096U))
#define pmmsys_control_mode_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_control_mode_m() (0x7U << 0U)
#define pmmsys_control_mode_disable_v() (0x00000000U)
#define pmmsys_control_mode_disable_f() (0x0U)
#define pmmsys_control_mode_a_v() (0x00000001U)
#define pmmsys_control_mode_b_v() (0x00000002U)
#define pmmsys_control_mode_c_v() (0x00000003U)
#define pmmsys_control_mode_e_v() (0x00000005U)
#define pmmsys_control_mode_null_v() (0x00000007U)
#define pmmsys_control_o() (0x9cU)
#define pmmsys_enginestatus_r(i) (0x16000c8U + ((i)*4096U))
#define pmmsys_enginestatus_enable_f(v) (((v) & 0x1U) << 8U)
#define pmmsys_enginestatus_enable_m() (0x1U << 8U)
#define pmmsys_enginestatus_enable_out_v() (0x00000001U)
#define pmmsys_enginestatus_enable_out_f() (0x100U)
#define pmmsys_enginestatus_o() (0xc8U)
#define pmmsys_secure_config_r(i) (0x160012cU + ((i)*4096U))
#define pmmsys_secure_config_o() (0x12cU)
#define pmmsys_secure_config_cmd_slice_id_f(v) (((v) & 0x1fU) << 0U)
#define pmmsys_secure_config_cmd_slice_id_m() (0x1fU << 0U)
#define pmmsys_secure_config_channel_id_f(v) (((v) & 0x3U) << 8U)
#define pmmsys_secure_config_channel_id_m() (0x3U << 8U)
#define pmmsys_secure_config_cblock_id_f(v) (((v) & 0xfU) << 11U)
#define pmmsys_secure_config_cblock_id_m() (0xfU << 11U)
#define pmmsys_secure_config_dg_idx_v(r) (((r) >> 16U) & 0xffU)
#define pmmsys_secure_config_mapped_f(v) (((v) & 0x1U) << 28U)
#define pmmsys_secure_config_mapped_m() (0x1U << 28U)
#define pmmsys_secure_config_mapped_false_f() (0x0U)
#define pmmsys_secure_config_mapped_true_f() (0x10000000U)
#define pmmsys_secure_config_use_prog_dg_idx_f(v) (((v) & 0x1U) << 30U)
#define pmmsys_secure_config_use_prog_dg_idx_m() (0x1U << 30U)
#define pmmsys_secure_config_use_prog_dg_idx_false_f() (0x0U)
#define pmmsys_secure_config_use_prog_dg_idx_true_f() (0x40000000U)
#define pmmsys_secure_config_command_pkt_decoder_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_secure_config_command_pkt_decoder_m() (0x1U << 31U)
#define pmmsys_secure_config_command_pkt_decoder_disable_f() (0x0U)
#define pmmsys_secure_config_command_pkt_decoder_enable_f() (0x80000000U)
#define pmmsys_router_user_dgmap_status_secure_r(i) (0x1612050U + ((i)*4U))
#define pmmsys_router_user_dgmap_status_secure__size_1_v() (0x00000008U)
#define pmmsys_router_user_dgmap_status_secure_dg_s() (1U)
#define pmmsys_router_user_dgmap_status_secure_dg_not_mapped_v() (0x00000000U)
#define pmmsys_router_user_dgmap_status_secure_dg_mapped_v() (0x00000001U)
#define pmmsys_router_enginestatus_r() (0x1612080U)
#define pmmsys_router_enginestatus_status_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_router_enginestatus_status_m() (0x7U << 0U)
#define pmmsys_router_enginestatus_status_v(r) (((r) >> 0U) & 0x7U)
#define pmmsys_router_enginestatus_status_empty_v() (0x00000000U)
#define pmmsys_router_enginestatus_status_active_v() (0x00000001U)
#define pmmsys_router_enginestatus_status_paused_v() (0x00000002U)
#define pmmsys_router_enginestatus_status_quiescent_v() (0x00000003U)
#define pmmsys_router_enginestatus_status_stalled_v() (0x00000005U)
#define pmmsys_router_enginestatus_status_faulted_v() (0x00000006U)
#define pmmsys_router_enginestatus_status_halted_v() (0x00000007U)
#define pmmsys_router_enginestatus_merged_perfmon_status_f(v)\
(((v) & 0x7U) << 8U)
#define pmmsys_router_enginestatus_merged_perfmon_status_m() (0x7U << 8U)
#define pmmsys_router_enginestatus_merged_perfmon_status_v(r)\
(((r) >> 8U) & 0x7U)
#define pmmsys_router_profiling_dg_cg1_secure_r() (0x1612094U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_m() (0x1U << 31U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg__prod_v() (0x00000001U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg__prod_f() (0x80000000U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_disabled_v() (0x00000000U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_disabled_f() (0x0U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_enabled_v() (0x00000001U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_enabled_f() (0x80000000U)
#define pmmsys_router_profiling_cg1_secure_r() (0x1612098U)
#define pmmsys_router_profiling_cg1_secure_flcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_profiling_cg1_secure_flcg_m() (0x1U << 31U)
#define pmmsys_router_profiling_cg1_secure_flcg__prod_v() (0x00000001U)
#define pmmsys_router_profiling_cg1_secure_flcg__prod_f() (0x80000000U)
#define pmmsys_router_profiling_cg1_secure_flcg_disabled_v() (0x00000000U)
#define pmmsys_router_profiling_cg1_secure_flcg_disabled_f() (0x0U)
#define pmmsys_router_profiling_cg1_secure_flcg_enabled_v() (0x00000001U)
#define pmmsys_router_profiling_cg1_secure_flcg_enabled_f() (0x80000000U)
#define pmmsys_router_perfmon_cg2_secure_r() (0x161209cU)
#define pmmsys_router_perfmon_cg2_secure_slcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_perfmon_cg2_secure_slcg_m() (0x1U << 31U)
#define pmmsys_router_perfmon_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmmsys_router_perfmon_cg2_secure_slcg__prod_f() (0x0U)
#define pmmsys_router_perfmon_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmmsys_router_perfmon_cg2_secure_slcg_disabled_f() (0x80000000U)
#define pmmsys_router_perfmon_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmmsys_router_perfmon_cg2_secure_slcg_enabled_f() (0x0U)
#define pmmsys_router_profiling_cg2_secure_r() (0x1612090U)
#define pmmsys_router_profiling_cg2_secure_slcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_profiling_cg2_secure_slcg_m() (0x1U << 31U)
#define pmmsys_router_profiling_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmmsys_router_profiling_cg2_secure_slcg__prod_f() (0x0U)
#define pmmsys_router_profiling_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmmsys_router_profiling_cg2_secure_slcg_disabled_f() (0x80000000U)
#define pmmsys_router_profiling_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmmsys_router_profiling_cg2_secure_slcg_enabled_f() (0x0U)
#define pmmsys_user_channel_config_secure_r(i,j)\
(0x16120b8U + ((i) * 32U)) + ((j) * 16U)
#define pmmsys_user_channel_config_secure_hs_credits_m() (0x1ffU << 0U)
#define pmmsys_user_channel_config_secure_hs_credits_init_f() (0x0U)
#endif /* T264_PMMSYS_SOC_HWPM_H */
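As the naming comment at the top of this header spells out, the _v(r) accessors return an unshifted field value meant for direct comparison against the *_v() constants. For example (illustrative helper, not from the driver):

/* Illustrative only; the two macros are defined in this header. */
static inline bool example_router_is_halted(u32 reg_val)
{
	return pmmsys_router_enginestatus_status_v(reg_val) ==
		pmmsys_router_enginestatus_status_halted_v();
}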

View File

File diff suppressed because it is too large.

View File

@@ -0,0 +1,107 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_CPU_H
#define T264_HWPM_IP_CPU_H
#if defined(CONFIG_T264_HWPM_IP_CPU)
#define T264_HWPM_ACTIVE_IP_CPU T264_HWPM_IP_CPU,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_CPU_NUM_INSTANCES 14U
#define T264_HWPM_IP_CPU_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_CPU_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_CPU_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_CPU_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_cpu;
#define addr_map_rpg_pm_cpu_core_size() BIT(0x00000014U)
#define addr_map_rpg_pm_cpu_core0_base_r() \
(addr_map_rpg_pm_cpu_core_base_r())
#define addr_map_rpg_pm_cpu_core0_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x00FFF)
#define addr_map_rpg_pm_cpu_core1_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x10000)
#define addr_map_rpg_pm_cpu_core1_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x10FFF)
#define addr_map_rpg_pm_cpu_core2_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x20000)
#define addr_map_rpg_pm_cpu_core2_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x20FFF)
#define addr_map_rpg_pm_cpu_core3_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x30000)
#define addr_map_rpg_pm_cpu_core3_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x30FFF)
#define addr_map_rpg_pm_cpu_core4_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x40000)
#define addr_map_rpg_pm_cpu_core4_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x40FFF)
#define addr_map_rpg_pm_cpu_core5_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x50000)
#define addr_map_rpg_pm_cpu_core5_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x50FFF)
#define addr_map_rpg_pm_cpu_core6_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x60000)
#define addr_map_rpg_pm_cpu_core6_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x60FFF)
#define addr_map_rpg_pm_cpu_core7_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x70000)
#define addr_map_rpg_pm_cpu_core7_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x70FFF)
#define addr_map_rpg_pm_cpu_core8_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x80000)
#define addr_map_rpg_pm_cpu_core8_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x80FFF)
#define addr_map_rpg_pm_cpu_core9_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x90000)
#define addr_map_rpg_pm_cpu_core9_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x90FFF)
#define addr_map_rpg_pm_cpu_core10_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xa0000)
#define addr_map_rpg_pm_cpu_core10_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xa0FFF)
#define addr_map_rpg_pm_cpu_core11_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xb0000)
#define addr_map_rpg_pm_cpu_core11_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xb0FFF)
#define addr_map_rpg_pm_cpu_core12_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xc0000)
#define addr_map_rpg_pm_cpu_core12_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xc0FFF)
#define addr_map_rpg_pm_cpu_core13_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xd0000)
#define addr_map_rpg_pm_cpu_core13_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xd0FFF)
#else
#define T264_HWPM_ACTIVE_IP_CPU
#endif
#endif /* T264_HWPM_IP_CPU_H */
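The per-core macros above all follow the same arithmetic: core N's perfmon aperture starts at addr_map_rpg_pm_cpu_core_base_r() plus N * 0x10000 and spans 0x1000 bytes. A compact equivalent, as a sketch; the helpers are illustrative and not part of the generated header:

/* Illustrative equivalents of the addr_map_rpg_pm_cpu_coreN_*_r() macros. */
static inline u64 example_cpu_core_pm_base(u32 core)
{
	/* Valid for core < T264_HWPM_IP_CPU_NUM_INSTANCES (14). */
	return (u64)addr_map_rpg_pm_cpu_core_base_r() +
		((u64)core * 0x10000ULL);
}

static inline u64 example_cpu_core_pm_limit(u32 core)
{
	return example_cpu_core_pm_base(core) + 0xFFFULL;
}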

View File

@@ -0,0 +1,301 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_isp.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_isp_inst0_perfmon_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_isp0",
.device_index = T264_ISP0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_isp0_base_r(),
.end_abs_pa = addr_map_rpg_pm_isp0_limit_r(),
.start_pa = addr_map_rpg_pm_isp0_base_r(),
.end_pa = addr_map_rpg_pm_isp0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_isp_inst1_perfmon_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_isp1",
.device_index = T264_ISP1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_isp1_base_r(),
.end_abs_pa = addr_map_rpg_pm_isp1_limit_r(),
.start_pa = addr_map_rpg_pm_isp1_base_r(),
.end_pa = addr_map_rpg_pm_isp1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_isp_inst0_perfmux_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_isp_thi_base_r(),
.end_abs_pa = addr_map_isp_thi_limit_r(),
.start_pa = addr_map_isp_thi_base_r(),
.end_pa = addr_map_isp_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_isp_alist,
.alist_size = ARRAY_SIZE(t264_isp_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_isp_inst1_perfmux_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_isp1_thi_base_r(),
.end_abs_pa = addr_map_isp1_thi_limit_r(),
.start_pa = addr_map_isp1_thi_base_r(),
.end_pa = addr_map_isp1_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_isp_alist,
.alist_size = ARRAY_SIZE(t264_isp_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_isp_inst_static_array[
T264_HWPM_IP_ISP_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_isp_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp_thi_limit_r(),
.element_stride = addr_map_isp_thi_limit_r() -
addr_map_isp_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST,
.element_static_array =
t264_isp_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_isp0_base_r(),
.range_end = addr_map_rpg_pm_isp0_limit_r(),
.element_stride = addr_map_rpg_pm_isp0_limit_r() -
addr_map_rpg_pm_isp0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_isp_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp1_thi_base_r(),
.range_end = addr_map_isp1_thi_limit_r(),
.element_stride = addr_map_isp1_thi_limit_r() -
addr_map_isp1_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST,
.element_static_array =
t264_isp_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_isp1_base_r(),
.range_end = addr_map_rpg_pm_isp1_limit_r(),
.element_stride = addr_map_rpg_pm_isp1_limit_r() -
addr_map_rpg_pm_isp1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_isp = {
.num_instances = T264_HWPM_IP_ISP_NUM_INSTANCES,
.ip_inst_static_array = t264_isp_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp1_thi_limit_r(),
.inst_stride = addr_map_isp_thi_limit_r() -
addr_map_isp_thi_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_isp0_base_r(),
.range_end = addr_map_rpg_pm_isp1_limit_r(),
.inst_stride = addr_map_rpg_pm_isp0_limit_r() -
addr_map_rpg_pm_isp0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
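Each element_info entry above pairs a contiguous address range with a fixed element_stride, so an element's slot can be derived from its base address. A hedged sketch of that indexing, with a hypothetical helper name (the driver's own lookup code is not shown here):

/*
 * Hedged sketch: hypothetical helper illustrating how range_start and
 * element_stride, as recorded above, map an element base address to a slot.
 */
static inline unsigned int hwpm_element_slot_idx(unsigned long long elem_base,
	unsigned long long range_start, unsigned long long element_stride)
{
	return (unsigned int)((elem_base - range_start) / element_stride);
}

/*
 * For the ISP perfmon aperture this would place inst 0 (rpg_pm_isp0) in
 * slot 0 and inst 1 (rpg_pm_isp1) in slot 1, provided the two windows are
 * spaced exactly one element_stride apart.
 */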


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_ISP_H
#define T264_HWPM_IP_ISP_H
#if defined(CONFIG_T264_HWPM_IP_ISP)
#define T264_HWPM_ACTIVE_IP_ISP T264_HWPM_IP_ISP,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_ISP_NUM_INSTANCES 2U
#define T264_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_ISP_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_isp;
#else
#define T264_HWPM_ACTIVE_IP_ISP
#endif
#endif /* T264_HWPM_IP_ISP_H */
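Because T264_HWPM_ACTIVE_IP_ISP expands to "T264_HWPM_IP_ISP," (with the trailing comma) when CONFIG_T264_HWPM_IP_ISP is set, and to nothing otherwise, the macro can be pasted directly into a list of active IPs without per-entry #ifdefs. A hedged sketch of such a list; the array name here is hypothetical:

/*
 * Hedged sketch: hypothetical list showing how the ACTIVE_IP macros compose.
 * Entries for IPs whose CONFIG_* option is disabled simply vanish, and the
 * trailing comma carried by each macro keeps the initializer valid.
 */
static const unsigned int t264_active_ip_sketch[] = {
	T264_HWPM_ACTIVE_IP_ISP
	T264_HWPM_ACTIVE_IP_MSS_CHANNEL
	T264_HWPM_ACTIVE_IP_OCU
};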


@@ -0,0 +1,714 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_mss_channel.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_mss_channel_inst0_perfmon_element_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta0",
.device_index = T264_MSS_CHANNEL_PARTA0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss0_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss0_limit_r(),
.start_pa = addr_map_rpg_pm_mss0_base_r(),
.end_pa = addr_map_rpg_pm_mss0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta1",
.device_index = T264_MSS_CHANNEL_PARTA1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss1_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss1_limit_r(),
.start_pa = addr_map_rpg_pm_mss1_base_r(),
.end_pa = addr_map_rpg_pm_mss1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta2",
.device_index = T264_MSS_CHANNEL_PARTA2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss2_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss2_limit_r(),
.start_pa = addr_map_rpg_pm_mss2_base_r(),
.end_pa = addr_map_rpg_pm_mss2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta3",
.device_index = T264_MSS_CHANNEL_PARTA3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss3_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss3_limit_r(),
.start_pa = addr_map_rpg_pm_mss3_base_r(),
.end_pa = addr_map_rpg_pm_mss3_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb0",
.device_index = T264_MSS_CHANNEL_PARTB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss4_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss4_limit_r(),
.start_pa = addr_map_rpg_pm_mss4_base_r(),
.end_pa = addr_map_rpg_pm_mss4_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb1",
.device_index = T264_MSS_CHANNEL_PARTB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss5_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss5_limit_r(),
.start_pa = addr_map_rpg_pm_mss5_base_r(),
.end_pa = addr_map_rpg_pm_mss5_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb2",
.device_index = T264_MSS_CHANNEL_PARTB2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss6_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss6_limit_r(),
.start_pa = addr_map_rpg_pm_mss6_base_r(),
.end_pa = addr_map_rpg_pm_mss6_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb3",
.device_index = T264_MSS_CHANNEL_PARTB3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss7_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss7_limit_r(),
.start_pa = addr_map_rpg_pm_mss7_base_r(),
.end_pa = addr_map_rpg_pm_mss7_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc0",
.device_index = T264_MSS_CHANNEL_PARTC0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss8_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss8_limit_r(),
.start_pa = addr_map_rpg_pm_mss8_base_r(),
.end_pa = addr_map_rpg_pm_mss8_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc1",
.device_index = T264_MSS_CHANNEL_PARTC1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss9_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss9_limit_r(),
.start_pa = addr_map_rpg_pm_mss9_base_r(),
.end_pa = addr_map_rpg_pm_mss9_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc2",
.device_index = T264_MSS_CHANNEL_PARTC2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss10_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss10_limit_r(),
.start_pa = addr_map_rpg_pm_mss10_base_r(),
.end_pa = addr_map_rpg_pm_mss10_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc3",
.device_index = T264_MSS_CHANNEL_PARTC3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss11_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss11_limit_r(),
.start_pa = addr_map_rpg_pm_mss11_base_r(),
.end_pa = addr_map_rpg_pm_mss11_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd0",
.device_index = T264_MSS_CHANNEL_PARTD0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss12_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss12_limit_r(),
.start_pa = addr_map_rpg_pm_mss12_base_r(),
.end_pa = addr_map_rpg_pm_mss12_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd1",
.device_index = T264_MSS_CHANNEL_PARTD1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss13_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss13_limit_r(),
.start_pa = addr_map_rpg_pm_mss13_base_r(),
.end_pa = addr_map_rpg_pm_mss13_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd2",
.device_index = T264_MSS_CHANNEL_PARTD2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss14_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss14_limit_r(),
.start_pa = addr_map_rpg_pm_mss14_base_r(),
.end_pa = addr_map_rpg_pm_mss14_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd3",
.device_index = T264_MSS_CHANNEL_PARTD3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss15_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss15_limit_r(),
.start_pa = addr_map_rpg_pm_mss15_base_r(),
.end_pa = addr_map_rpg_pm_mss15_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_channel_inst0_perfmux_element_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc0_base_r(),
.end_abs_pa = addr_map_mc0_limit_r(),
.start_pa = addr_map_mc0_base_r(),
.end_pa = addr_map_mc0_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc1_base_r(),
.end_abs_pa = addr_map_mc1_limit_r(),
.start_pa = addr_map_mc1_base_r(),
.end_pa = addr_map_mc1_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc2_base_r(),
.end_abs_pa = addr_map_mc2_limit_r(),
.start_pa = addr_map_mc2_base_r(),
.end_pa = addr_map_mc2_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc3_base_r(),
.end_abs_pa = addr_map_mc3_limit_r(),
.start_pa = addr_map_mc3_base_r(),
.end_pa = addr_map_mc3_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc4_base_r(),
.end_abs_pa = addr_map_mc4_limit_r(),
.start_pa = addr_map_mc4_base_r(),
.end_pa = addr_map_mc4_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc5_base_r(),
.end_abs_pa = addr_map_mc5_limit_r(),
.start_pa = addr_map_mc5_base_r(),
.end_pa = addr_map_mc5_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc6_base_r(),
.end_abs_pa = addr_map_mc6_limit_r(),
.start_pa = addr_map_mc6_base_r(),
.end_pa = addr_map_mc6_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc7_base_r(),
.end_abs_pa = addr_map_mc7_limit_r(),
.start_pa = addr_map_mc7_base_r(),
.end_pa = addr_map_mc7_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc8_base_r(),
.end_abs_pa = addr_map_mc8_limit_r(),
.start_pa = addr_map_mc8_base_r(),
.end_pa = addr_map_mc8_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc9_base_r(),
.end_abs_pa = addr_map_mc9_limit_r(),
.start_pa = addr_map_mc9_base_r(),
.end_pa = addr_map_mc9_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc10_base_r(),
.end_abs_pa = addr_map_mc10_limit_r(),
.start_pa = addr_map_mc10_base_r(),
.end_pa = addr_map_mc10_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc11_base_r(),
.end_abs_pa = addr_map_mc11_limit_r(),
.start_pa = addr_map_mc11_base_r(),
.end_pa = addr_map_mc11_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc12_base_r(),
.end_abs_pa = addr_map_mc12_limit_r(),
.start_pa = addr_map_mc12_base_r(),
.end_pa = addr_map_mc12_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc13_base_r(),
.end_abs_pa = addr_map_mc13_limit_r(),
.start_pa = addr_map_mc13_base_r(),
.end_pa = addr_map_mc13_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc14_base_r(),
.end_abs_pa = addr_map_mc14_limit_r(),
.start_pa = addr_map_mc14_base_r(),
.end_pa = addr_map_mc14_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc15_base_r(),
.end_abs_pa = addr_map_mc15_limit_r(),
.start_pa = addr_map_mc15_base_r(),
.end_pa = addr_map_mc15_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_channel_inst0_broadcast_element_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mcb_base_r(),
.end_abs_pa = addr_map_mcb_limit_r(),
.start_pa = addr_map_mcb_base_r(),
.end_pa = addr_map_mcb_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_mss_channel_inst_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_mss_channel_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc15_limit_r(),
.element_stride = addr_map_mc0_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST,
.element_static_array =
t264_mss_channel_inst0_broadcast_element_static_array,
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.element_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST,
.element_static_array =
t264_mss_channel_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_mss0_base_r(),
.range_end = addr_map_rpg_pm_mss15_limit_r(),
.element_stride = addr_map_rpg_pm_mss0_limit_r() -
addr_map_rpg_pm_mss0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_mss_channel = {
.num_instances = T264_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES,
.ip_inst_static_array = t264_mss_channel_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc15_limit_r(),
.inst_stride = addr_map_mc15_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.inst_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_mss0_base_r(),
.range_end = addr_map_rpg_pm_mss15_limit_r(),
.inst_stride = addr_map_rpg_pm_mss15_limit_r() -
addr_map_rpg_pm_mss0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
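The hw_inst_mask and element_index_mask fields above assign one bit per instance and per element; the element_fs_mask and inst_fs_mask fields start at 0U and are meant to accumulate the bits of whatever is actually present after floorsweeping. A hedged sketch of that accumulation (assumed semantics, not the driver's code):

/*
 * Hedged sketch: assumed semantics of element_fs_mask. If all 16 MSS channel
 * perfmuxes are present, OR-ing their element_index_mask bits yields 0xFFFFU.
 */
static unsigned int t264_mss_channel_fs_mask_all_present(void)
{
	unsigned int mask = 0U;
	unsigned int i;

	for (i = 0U; i < T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST; i++)
		mask |= (1U << i);
	return mask; /* 0xFFFFU */
}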


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_MSS_CHANNEL_H
#define T264_HWPM_IP_MSS_CHANNEL_H
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
#define T264_HWPM_ACTIVE_IP_MSS_CHANNEL T264_HWPM_IP_MSS_CHANNEL,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES 1U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST 16U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST 16U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST 16U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t264_hwpm_ip_mss_channel;
#else
#define T264_HWPM_ACTIVE_IP_MSS_CHANNEL
#endif
#endif /* T264_HWPM_IP_MSS_CHANNEL_H */


@@ -0,0 +1,483 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_mss_hubs.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_mss_hubs_inst0_perfmon_element_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_system_msshub0",
.device_index = T264_SYSTEM_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_system_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_system_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_system_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_system_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_system_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_disp_usb_msshub0",
.device_index = T264_DISP_USB_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_disp_usb_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_disp_usb_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_disp_usb_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_disp_usb_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_disp_usb_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = "perfmon_vision_msshub0",
.device_index = T264_VISION_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vision_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_vision_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_vision_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_vision_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = "perfmon_vision_msshub1",
.device_index = T264_VISION_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vision_msshub1_base_r(),
.end_abs_pa = addr_map_rpg_pm_vision_msshub1_limit_r(),
.start_pa = addr_map_rpg_pm_vision_msshub1_base_r(),
.end_pa = addr_map_rpg_pm_vision_msshub1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = "perfmon_ucf_msshub0",
.device_index = T264_UCF_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = "perfmon_ucf_msshub1",
.device_index = T264_UCF_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_msshub1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_msshub1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_msshub1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_msshub1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = "perfmon_ucf_msshub2",
.device_index = T264_UCF_MSS_HUB2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_msshub2_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_msshub2_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_msshub2_base_r(),
.end_pa = addr_map_rpg_pm_ucf_msshub2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = "perfmon_uphy0_msshub0",
.device_index = T264_UPHY0_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_uphy0_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_uphy0_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_uphy0_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_uphy0_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_uphy0_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = "perfmon_uphy0_msshub1",
.device_index = T264_UPHY0_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_uphy0_msshub1_base_r(),
.end_abs_pa = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.start_pa = addr_map_rpg_pm_uphy0_msshub1_base_r(),
.end_pa = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.base_pa = addr_map_rpg_grp_uphy0_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_hubs_inst0_perfmux_element_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc0_base_r(),
.end_abs_pa = addr_map_mc0_limit_r(),
.start_pa = addr_map_mc0_base_r(),
.end_pa = addr_map_mc0_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc1_base_r(),
.end_abs_pa = addr_map_mc1_limit_r(),
.start_pa = addr_map_mc1_base_r(),
.end_pa = addr_map_mc1_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc2_base_r(),
.end_abs_pa = addr_map_mc2_limit_r(),
.start_pa = addr_map_mc2_base_r(),
.end_pa = addr_map_mc2_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc3_base_r(),
.end_abs_pa = addr_map_mc3_limit_r(),
.start_pa = addr_map_mc3_base_r(),
.end_pa = addr_map_mc3_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc4_base_r(),
.end_abs_pa = addr_map_mc4_limit_r(),
.start_pa = addr_map_mc4_base_r(),
.end_pa = addr_map_mc4_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc5_base_r(),
.end_abs_pa = addr_map_mc5_limit_r(),
.start_pa = addr_map_mc5_base_r(),
.end_pa = addr_map_mc5_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc6_base_r(),
.end_abs_pa = addr_map_mc6_limit_r(),
.start_pa = addr_map_mc6_base_r(),
.end_pa = addr_map_mc6_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc7_base_r(),
.end_abs_pa = addr_map_mc7_limit_r(),
.start_pa = addr_map_mc7_base_r(),
.end_pa = addr_map_mc7_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc8_base_r(),
.end_abs_pa = addr_map_mc8_limit_r(),
.start_pa = addr_map_mc8_base_r(),
.end_pa = addr_map_mc8_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_hubs_inst0_broadcast_element_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mcb_base_r(),
.end_abs_pa = addr_map_mcb_limit_r(),
.start_pa = addr_map_mcb_base_r(),
.end_pa = addr_map_mcb_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_mss_hubs_inst_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_mss_hubs_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc8_limit_r(),
.element_stride = addr_map_mc0_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_BROADCAST_PER_INST,
.element_static_array =
t264_mss_hubs_inst0_broadcast_element_static_array,
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.element_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_PERFMON_PER_INST,
.element_static_array =
t264_mss_hubs_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_system_msshub0_base_r(),
.range_end = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.element_stride = addr_map_rpg_pm_system_msshub0_limit_r() -
addr_map_rpg_pm_system_msshub0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_mss_hubs = {
.num_instances = T264_HWPM_IP_MSS_HUBS_NUM_INSTANCES,
.ip_inst_static_array = t264_mss_hubs_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc8_limit_r(),
.inst_stride = addr_map_mc8_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.inst_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_system_msshub0_base_r(),
.range_end = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.inst_stride = addr_map_rpg_pm_uphy0_msshub1_limit_r() -
addr_map_rpg_pm_system_msshub0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
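dependent_fuse_mask above lists the fuse bits that gate this IP. A hedged sketch of how such a mask could be applied against a read-back fuse value; the check shown is an assumption about the semantics, and fuse_state is a hypothetical input:

/*
 * Hedged sketch: assumed semantics of dependent_fuse_mask. If any dependent
 * fuse bit is set in the chip's fuse state, the IP is treated as disabled.
 */
static inline bool t264_ip_allowed_by_fuses(unsigned int dependent_fuse_mask,
	unsigned int fuse_state)
{
	return (dependent_fuse_mask & fuse_state) == 0U;
}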


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_MSS_HUBS_H
#define T264_HWPM_IP_MSS_HUBS_H
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
#define T264_HWPM_ACTIVE_IP_MSS_HUBS T264_HWPM_IP_MSS_HUBS,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_MSS_HUBS_NUM_INSTANCES 1U
#define T264_HWPM_IP_MSS_HUBS_NUM_CORE_ELEMENT_PER_INST 9U
#define T264_HWPM_IP_MSS_HUBS_NUM_PERFMON_PER_INST 9U
#define T264_HWPM_IP_MSS_HUBS_NUM_PERFMUX_PER_INST 9U
#define T264_HWPM_IP_MSS_HUBS_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t264_hwpm_ip_mss_hubs;
#else
#define T264_HWPM_ACTIVE_IP_MSS_HUBS
#endif
#endif /* T264_HWPM_IP_MSS_HUBS_H */


@@ -0,0 +1,195 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_ocu.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_ocu_inst0_perfmon_element_static_array[
T264_HWPM_IP_OCU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ocu0",
.device_index = T264_OCU0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ocu_base_r(),
.end_abs_pa = addr_map_rpg_pm_ocu_limit_r(),
.start_pa = addr_map_rpg_pm_ocu_base_r(),
.end_pa = addr_map_rpg_pm_ocu_limit_r(),
.base_pa = addr_map_rpg_grp_uphy0_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ocu_inst0_perfmux_element_static_array[
T264_HWPM_IP_OCU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ocu_base_r(),
.end_abs_pa = addr_map_ocu_limit_r(),
.start_pa = addr_map_ocu_base_r(),
.end_pa = addr_map_ocu_limit_r(),
.base_pa = 0ULL,
.alist = t264_ocu_alist,
.alist_size = ARRAY_SIZE(t264_ocu_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_ocu_inst_static_array[
T264_HWPM_IP_OCU_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_OCU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_OCU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ocu_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ocu_base_r(),
.range_end = addr_map_ocu_limit_r(),
.element_stride = addr_map_ocu_limit_r() -
addr_map_ocu_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_OCU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_OCU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ocu_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ocu_base_r(),
.range_end = addr_map_rpg_pm_ocu_limit_r(),
.element_stride = addr_map_rpg_pm_ocu_limit_r() -
addr_map_rpg_pm_ocu_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_ocu = {
.num_instances = T264_HWPM_IP_OCU_NUM_INSTANCES,
.ip_inst_static_array = t264_ocu_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_ocu_base_r(),
.range_end = addr_map_ocu_limit_r(),
.inst_stride = addr_map_ocu_limit_r() -
addr_map_ocu_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ocu_base_r(),
.range_end = addr_map_rpg_pm_ocu_limit_r(),
.inst_stride = addr_map_rpg_pm_ocu_limit_r() -
addr_map_rpg_pm_ocu_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
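The start_abs_pa/end_abs_pa bounds recorded in the static arrays give enough information to match an absolute physical address back to an aperture. A hedged sketch of such a lookup over the single OCU perfmux entry (illustration only, not the driver's actual lookup path):

/*
 * Hedged sketch: matches a physical address against the lone OCU perfmux
 * aperture defined above; a real lookup would walk all element arrays.
 */
static struct hwpm_ip_aperture *t264_ocu_find_perfmux(unsigned long long pa)
{
	struct hwpm_ip_aperture *ap =
		&t264_ocu_inst0_perfmux_element_static_array[0];

	if ((pa >= ap->start_abs_pa) && (pa <= ap->end_abs_pa))
		return ap;
	return NULL;
}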


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_OCU_H
#define T264_HWPM_IP_OCU_H
#if defined(CONFIG_T264_HWPM_IP_OCU)
#define T264_HWPM_ACTIVE_IP_OCU T264_HWPM_IP_OCU,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_OCU_NUM_INSTANCES 1U
#define T264_HWPM_IP_OCU_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_OCU_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_OCU_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_OCU_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ocu;
#else
#define T264_HWPM_ACTIVE_IP_OCU
#endif
#endif /* T264_HWPM_IP_OCU_H */


@@ -0,0 +1,190 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "t264_pma.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_pma_inst0_perfmon_element_static_array[
T264_HWPM_IP_PMA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_hwpm",
.device_index = T264_HWPM_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_hwpm_base_r(),
.end_abs_pa = addr_map_rpg_pm_hwpm_limit_r(),
.start_pa = addr_map_rpg_pm_hwpm_base_r(),
.end_pa = addr_map_rpg_pm_hwpm_limit_r(),
.base_pa = addr_map_rpg_grp_system_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_pma_inst0_perfmux_element_static_array[
T264_HWPM_IP_PMA_NUM_PERFMUX_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMUX,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "pma",
.device_index = T264_PMA_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_pma_base_r(),
.end_abs_pa = addr_map_pma_limit_r(),
.start_pa = addr_map_pma_base_r(),
.end_pa = addr_map_pma_limit_r(),
.base_pa = addr_map_pma_base_r(),
.alist = t264_pma_res_pma_alist,
.alist_size = ARRAY_SIZE(t264_pma_res_pma_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_pma_inst_static_array[
T264_HWPM_IP_PMA_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_PMA_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_PMA_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_pma_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r(),
.element_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_PMA_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_PMA_NUM_PERFMON_PER_INST,
.element_static_array =
t264_pma_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_hwpm_base_r(),
.range_end = addr_map_rpg_pm_hwpm_limit_r(),
.element_stride = addr_map_rpg_pm_hwpm_limit_r() -
addr_map_rpg_pm_hwpm_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
},
.element_fs_mask = 0x1U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_pma = {
.num_instances = T264_HWPM_IP_PMA_NUM_INSTANCES,
.ip_inst_static_array = t264_pma_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r(),
.inst_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_hwpm_base_r(),
.range_end = addr_map_rpg_pm_hwpm_limit_r(),
.inst_stride = addr_map_rpg_pm_hwpm_limit_r() -
addr_map_rpg_pm_hwpm_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0x1U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_VALID,
.reserved = false,
};


@@ -0,0 +1,38 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_IP_PMA_H
#define T264_HWPM_IP_PMA_H
#define T264_HWPM_ACTIVE_IP_PMA T264_HWPM_IP_PMA,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_PMA_NUM_INSTANCES 1U
#define T264_HWPM_IP_PMA_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_PMA_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_PMA_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_PMA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_pma;
#endif /* T264_HWPM_IP_PMA_H */


@@ -0,0 +1,280 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_pva.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_pva_inst0_perfmon_element_static_array[
T264_HWPM_IP_PVA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_pvac0",
.device_index = T264_PVAC0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvac0_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvac0_limit_r(),
.start_pa = addr_map_rpg_pm_pvac0_base_r(),
.end_pa = addr_map_rpg_pm_pvac0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_pvav0",
.device_index = T264_PVAV0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvav0_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvav0_limit_r(),
.start_pa = addr_map_rpg_pm_pvav0_base_r(),
.end_pa = addr_map_rpg_pm_pvav0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(0),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_pvav1",
.device_index = T264_PVAV1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvav1_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvav1_limit_r(),
.start_pa = addr_map_rpg_pm_pvav1_base_r(),
.end_pa = addr_map_rpg_pm_pvav1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(0),
.element_index = 3U,
.dt_mmio = NULL,
.name = "perfmon_pvap0",
.device_index = T264_PVAP0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvap0_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvap0_limit_r(),
.start_pa = addr_map_rpg_pm_pvap0_base_r(),
.end_pa = addr_map_rpg_pm_pvap0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(0),
.element_index = 4U,
.dt_mmio = NULL,
.name = "perfmon_pvap1",
.device_index = T264_PVAP1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvap1_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvap1_limit_r(),
.start_pa = addr_map_rpg_pm_pvap1_base_r(),
.end_pa = addr_map_rpg_pm_pvap1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_pva_inst0_perfmux_element_static_array[
T264_HWPM_IP_PVA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_pva0_pm_base_r(),
.end_abs_pa = addr_map_pva0_pm_limit_r(),
.start_pa = addr_map_pva0_pm_base_r(),
.end_pa = addr_map_pva0_pm_limit_r(),
.base_pa = 0ULL,
.alist = t264_pva_pm_alist,
.alist_size = ARRAY_SIZE(t264_pva_pm_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_pva1_pm_base_r(),
.end_abs_pa = addr_map_pva1_pm_limit_r(),
.start_pa = addr_map_pva1_pm_base_r(),
.end_pa = addr_map_pva1_pm_limit_r(),
.base_pa = 0ULL,
.alist = t264_pva_pm_alist,
.alist_size = ARRAY_SIZE(t264_pva_pm_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_pva_inst_static_array[
T264_HWPM_IP_PVA_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_PVA_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_pva_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva1_pm_limit_r(),
.element_stride = addr_map_pva0_pm_limit_r() -
addr_map_pva0_pm_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_PVA_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_PVA_NUM_PERFMON_PER_INST,
.element_static_array =
t264_pva_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_pvac0_base_r(),
.range_end = addr_map_rpg_pm_pvap1_limit_r(),
.element_stride = addr_map_rpg_pm_pvac0_limit_r() -
addr_map_rpg_pm_pvac0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_VALID,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvpvadebugfs/pva0/hwpm",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_pva = {
.num_instances = T264_HWPM_IP_PVA_NUM_INSTANCES,
.ip_inst_static_array = t264_pva_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva1_pm_limit_r(),
.inst_stride = addr_map_pva1_pm_limit_r() -
addr_map_pva0_pm_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_pvac0_base_r(),
.range_end = addr_map_rpg_pm_pvap1_limit_r(),
.inst_stride = addr_map_rpg_pm_pvap1_limit_r() -
addr_map_rpg_pm_pvac0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_PVA_H
#define T264_HWPM_IP_PVA_H
#if defined(CONFIG_T264_HWPM_IP_PVA)
#define T264_HWPM_ACTIVE_IP_PVA T264_HWPM_IP_PVA,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_PVA_NUM_INSTANCES 1U
#define T264_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_PVA_NUM_PERFMON_PER_INST 5U
#define T264_HWPM_IP_PVA_NUM_PERFMUX_PER_INST 2U
#define T264_HWPM_IP_PVA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_pva;
#else
#define T264_HWPM_ACTIVE_IP_PVA
#endif
#endif /* T264_HWPM_IP_PVA_H */


@@ -0,0 +1,264 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "t264_rtr.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
#include <hal/t264/t264_perfmon_device_index.h>
/* RTR aperture should be placed in instance T264_HWPM_IP_RTR_STATIC_RTR_INST */
static struct hwpm_ip_aperture t264_rtr_inst0_perfmux_element_static_array[
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMUX,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "rtr",
.device_index = T264_RTR_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rtr_base_r(),
.end_abs_pa = addr_map_rtr_limit_r(),
.start_pa = addr_map_rtr_base_r(),
.end_pa = addr_map_rtr_limit_r(),
.base_pa = addr_map_rtr_base_r(),
.alist = t264_rtr_alist,
.alist_size = ARRAY_SIZE(t264_rtr_alist),
.fake_registers = NULL,
},
};
/* PMA from RTR perspective */
/* PMA aperture should be placed in instance T264_HWPM_IP_RTR_STATIC_PMA_INST */
static struct hwpm_ip_aperture t264_rtr_inst1_perfmux_element_static_array[
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMUX,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "pma",
.device_index = T264_PMA_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_pma_base_r(),
.end_abs_pa = addr_map_pma_limit_r(),
.start_pa = addr_map_pma_base_r(),
.end_pa = addr_map_pma_limit_r(),
.base_pa = addr_map_pma_base_r(),
.alist = t264_pma_res_cmd_slice_rtr_alist,
.alist_size = ARRAY_SIZE(t264_pma_res_cmd_slice_rtr_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_rtr_inst_static_array[
T264_HWPM_IP_RTR_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_RTR_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_rtr_inst0_perfmux_element_static_array,
.range_start = addr_map_rtr_base_r(),
.range_end = addr_map_rtr_limit_r(),
.element_stride = addr_map_rtr_limit_r() -
addr_map_rtr_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMON_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
},
.element_fs_mask = 0x1U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_RTR_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_rtr_inst1_perfmux_element_static_array,
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r(),
.element_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMON_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
},
.element_fs_mask = 0x1U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_rtr = {
.num_instances = T264_HWPM_IP_RTR_NUM_INSTANCES,
.ip_inst_static_array = t264_rtr_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/*
 * The PMA block is 0x2000 wide and the RTR block is 0x1000 wide.
 * Expected facts:
 * - PMA should be referred to as a single entity
 * - The RTR IP instance array should have 2 slots (PMA, RTR)
 *
 * To ensure that inst_slots is computed correctly as 2 slots,
 * the instance range for the perfmux aperture needs to be twice
 * the PMA block size (see the sketch after this structure).
 */
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r() +
(addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL),
/* Use the PMA stride since it is a larger block than RTR */
.inst_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = 0U,
.override_enable = false,
/* RTR is defined as a 2-instance IP corresponding to the router and PMA */
/* Set this mask to indicate that both instances are available */
.inst_fs_mask = 0x3U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_VALID,
.reserved = false,
};
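
The comment in the RTR perfmux aperture entry above implies that the driver derives inst_slots by dividing the covered range by inst_stride, which is why the range is set to twice the PMA block. A minimal sketch of that arithmetic, assuming slots = range size / stride rounded up; the helper name and rounding behaviour are illustrative assumptions, not the driver's actual code:

#include <stdint.h>

/*
 * Hypothetical helper for illustration only. With the RTR perfmux
 * aperture above, the range covers two PMA-sized blocks and
 * inst_stride is one PMA block, so this returns 2 slots
 * (one for PMA, one for RTR).
 */
static inline uint32_t hwpm_sketch_compute_inst_slots(uint64_t range_start,
		uint64_t range_end, uint64_t inst_stride)
{
	uint64_t range_size = range_end - range_start + 1ULL;

	/* Round up so a partially covered stride still gets a slot. */
	return (uint32_t)((range_size + inst_stride - 1ULL) / inst_stride);
}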


@@ -0,0 +1,43 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_IP_RTR_H
#define T264_HWPM_IP_RTR_H
#define T264_HWPM_ACTIVE_IP_RTR T264_HWPM_IP_RTR,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_RTR_NUM_INSTANCES 2U
#define T264_HWPM_IP_RTR_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_RTR_NUM_PERFMON_PER_INST 0U
#define T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_RTR_NUM_BROADCAST_PER_INST 0U
#define T264_HWPM_IP_RTR_STATIC_RTR_INST 0U
#define T264_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX 0U
#define T264_HWPM_IP_RTR_STATIC_PMA_INST 1U
#define T264_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX 0U
extern struct hwpm_ip t264_hwpm_ip_rtr;
#endif /* T264_HWPM_IP_RTR_H */
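
As a usage note, the static instance and perfmux index defines above match the placement comments in the RTR source file (router aperture in instance T264_HWPM_IP_RTR_STATIC_RTR_INST, PMA aperture in instance T264_HWPM_IP_RTR_STATIC_PMA_INST). A hedged sketch of how these defines could be used to look up the PMA perfmux; the helper is hypothetical, and it assumes element_info[] is indexed by aperture type in the PERFMUX, BROADCAST, PERFMON order shown in the source file:

/* Illustrative only; not the driver's API. */
static inline struct hwpm_ip_aperture *t264_rtr_sketch_get_pma_perfmux(void)
{
	struct hwpm_ip_inst *pma_inst =
		&t264_hwpm_ip_rtr.ip_inst_static_array[
			T264_HWPM_IP_RTR_STATIC_PMA_INST];

	/* Assumes TEGRA_HWPM_APERTURE_TYPE_PERFMUX indexes the first entry. */
	return &pma_inst->element_info[TEGRA_HWPM_APERTURE_TYPE_PERFMUX]
		.element_static_array[T264_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX];
}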


@@ -0,0 +1,615 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_smmu.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_smmu_inst0_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu0",
.device_index = T264_UCF_TCU0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst1_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu1",
.device_index = T264_UCF_TCU1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst2_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu3",
.device_index = T264_UCF_TCU3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu3_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu3_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu3_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu3_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst3_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu2",
.device_index = T264_UCF_TCU2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu2_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu2_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu2_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst4_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_dispusbtcu0",
.device_index = T264_DISP_USB_TCU0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_disp_usb_smmu0_base_r(),
.end_abs_pa = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.start_pa = addr_map_rpg_pm_disp_usb_smmu0_base_r(),
.end_pa = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst0_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu2_base_r(),
.end_abs_pa = addr_map_smmu2_limit_r(),
.start_pa = addr_map_smmu2_base_r(),
.end_pa = addr_map_smmu2_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst1_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu1_base_r(),
.end_abs_pa = addr_map_smmu1_limit_r(),
.start_pa = addr_map_smmu1_base_r(),
.end_pa = addr_map_smmu1_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst2_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu4_base_r(),
.end_abs_pa = addr_map_smmu4_limit_r(),
.start_pa = addr_map_smmu4_base_r(),
.end_pa = addr_map_smmu4_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst3_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu0_base_r(),
.end_abs_pa = addr_map_smmu0_limit_r(),
.start_pa = addr_map_smmu0_base_r(),
.end_pa = addr_map_smmu0_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst4_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu3_base_r(),
.end_abs_pa = addr_map_smmu3_limit_r(),
.start_pa = addr_map_smmu3_base_r(),
.end_pa = addr_map_smmu3_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_smmu_inst_static_array[
T264_HWPM_IP_SMMU_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu2_base_r(),
.range_end = addr_map_smmu2_limit_r(),
.element_stride = addr_map_smmu2_limit_r() -
addr_map_smmu2_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu0_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu0_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu0_limit_r() -
addr_map_rpg_pm_ucf_smmu0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu1_base_r(),
.range_end = addr_map_smmu1_limit_r(),
.element_stride = addr_map_smmu1_limit_r() -
addr_map_smmu1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu1_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu1_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu1_limit_r() -
addr_map_rpg_pm_ucf_smmu1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(2),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst2_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu4_base_r(),
.range_end = addr_map_smmu4_limit_r(),
.element_stride = addr_map_smmu4_limit_r() -
addr_map_smmu4_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst2_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu3_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu3_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu3_limit_r() -
addr_map_rpg_pm_ucf_smmu3_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(3),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst3_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu0_base_r(),
.range_end = addr_map_smmu0_limit_r(),
.element_stride = addr_map_smmu0_limit_r() -
addr_map_smmu0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst3_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu2_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu2_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu2_limit_r() -
addr_map_rpg_pm_ucf_smmu2_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(4),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst4_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu3_base_r(),
.range_end = addr_map_smmu3_limit_r(),
.element_stride = addr_map_smmu3_limit_r() -
addr_map_smmu3_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst4_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_disp_usb_smmu0_base_r(),
.range_end = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.element_stride = addr_map_rpg_pm_disp_usb_smmu0_limit_r() -
addr_map_rpg_pm_disp_usb_smmu0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_smmu = {
.num_instances = T264_HWPM_IP_SMMU_NUM_INSTANCES,
.ip_inst_static_array = t264_smmu_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu1_base_r(),
.range_end = addr_map_smmu3_limit_r(),
.inst_stride = addr_map_smmu1_limit_r() -
addr_map_smmu1_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ucf_smmu0_base_r(),
.range_end = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.inst_stride = addr_map_rpg_pm_ucf_smmu0_limit_r() -
addr_map_rpg_pm_ucf_smmu0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK |
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_SMMU_H
#define T264_HWPM_IP_SMMU_H
#if defined(CONFIG_T264_HWPM_IP_SMMU)
#define T264_HWPM_ACTIVE_IP_SMMU T264_HWPM_IP_SMMU,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_SMMU_NUM_INSTANCES 5U
#define T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_smmu;
#else
#define T264_HWPM_ACTIVE_IP_SMMU
#endif
#endif /* T264_HWPM_IP_SMMU_H */


@@ -0,0 +1,300 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_ucf_csw.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_ucf_csw_inst0_perfmon_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfcsw0",
.device_index = T264_UCF_CSW0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_vddmss0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_vddmss0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_csw_inst1_perfmon_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfcsw1",
.device_index = T264_UCF_CSW1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_vddmss1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_vddmss1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_csw_inst0_perfmux_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_csw0_base_r(),
.end_abs_pa = addr_map_ucf_csw0_limit_r(),
.start_pa = addr_map_ucf_csw0_base_r(),
.end_pa = addr_map_ucf_csw0_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_csw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_csw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_csw_inst1_perfmux_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_csw1_base_r(),
.end_abs_pa = addr_map_ucf_csw1_limit_r(),
.start_pa = addr_map_ucf_csw1_base_r(),
.end_pa = addr_map_ucf_csw1_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_csw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_csw_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_ucf_csw_inst_static_array[
T264_HWPM_IP_UCF_CSW_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_csw_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_csw0_base_r(),
.range_end = addr_map_ucf_csw0_limit_r(),
.element_stride = addr_map_ucf_csw0_limit_r() -
addr_map_ucf_csw0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_csw_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.range_end = addr_map_rpg_pm_ucf_vddmss0_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_vddmss0_limit_r() -
addr_map_rpg_pm_ucf_vddmss0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_csw_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_csw1_base_r(),
.range_end = addr_map_ucf_csw1_limit_r(),
.element_stride = addr_map_ucf_csw1_limit_r() -
addr_map_ucf_csw1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_csw_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_vddmss1_base_r(),
.range_end = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_vddmss1_limit_r() -
addr_map_rpg_pm_ucf_vddmss1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_ucf_csw = {
.num_instances = T264_HWPM_IP_UCF_CSW_NUM_INSTANCES,
.ip_inst_static_array = t264_ucf_csw_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_csw0_base_r(),
.range_end = addr_map_ucf_csw1_limit_r(),
.inst_stride = addr_map_ucf_csw0_limit_r() -
addr_map_ucf_csw0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.range_end = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.inst_stride = addr_map_rpg_pm_ucf_vddmss0_limit_r() -
addr_map_rpg_pm_ucf_vddmss0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK |
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_UCF_CSW_H
#define T264_HWPM_IP_UCF_CSW_H
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
#define T264_HWPM_ACTIVE_IP_UCF_CSW T264_HWPM_IP_UCF_CSW,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_UCF_CSW_NUM_INSTANCES 2U
#define T264_HWPM_IP_UCF_CSW_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_UCF_CSW_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ucf_csw;
#else
#define T264_HWPM_ACTIVE_IP_UCF_CSW
#endif
#endif /* T264_HWPM_IP_UCF_CSW_H */


File diff suppressed because it is too large.


@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_UCF_MSW_H
#define T264_HWPM_IP_UCF_MSW_H
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
#define T264_HWPM_ACTIVE_IP_UCF_MSW T264_HWPM_IP_UCF_MSW,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_UCF_MSW_NUM_INSTANCES 8U
#define T264_HWPM_IP_UCF_MSW_NUM_CORE_ELEMENT_PER_INST 2U
#define T264_HWPM_IP_UCF_MSW_NUM_PERFMON_PER_INST 2U
#define T264_HWPM_IP_UCF_MSW_NUM_PERFMUX_PER_INST 6U
#define T264_HWPM_IP_UCF_MSW_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ucf_msw;
#else
#define T264_HWPM_ACTIVE_IP_UCF_MSW
#endif
#endif /* T264_HWPM_IP_UCF_MSW_H */


@@ -0,0 +1,510 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_ucf_psw.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_ucf_psw_inst0_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw0",
.device_index = T264_UCF_PSW0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst1_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw1",
.device_index = T264_UCF_PSW1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst2_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw2",
.device_index = T264_UCF_PSW2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw2_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw2_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw2_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst3_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw3",
.device_index = T264_UCF_PSW3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw3_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw3_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw3_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw3_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst0_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn0_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn0_psw_limit_r(),
.start_pa = addr_map_ucf_psn0_psw_base_r(),
.end_pa = addr_map_ucf_psn0_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst1_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn1_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn1_psw_limit_r(),
.start_pa = addr_map_ucf_psn1_psw_base_r(),
.end_pa = addr_map_ucf_psn1_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst2_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn2_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn2_psw_limit_r(),
.start_pa = addr_map_ucf_psn2_psw_base_r(),
.end_pa = addr_map_ucf_psn2_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst3_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn3_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn3_psw_limit_r(),
.start_pa = addr_map_ucf_psn3_psw_base_r(),
.end_pa = addr_map_ucf_psn3_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_ucf_psw_inst_static_array[
T264_HWPM_IP_UCF_PSW_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn0_psw_base_r(),
.range_end = addr_map_ucf_psn0_psw_limit_r(),
.element_stride = addr_map_ucf_psn0_psw_limit_r() -
addr_map_ucf_psn0_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw0_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw0_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw0_limit_r() -
addr_map_rpg_pm_ucf_psw0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn1_psw_base_r(),
.range_end = addr_map_ucf_psn1_psw_limit_r(),
.element_stride = addr_map_ucf_psn1_psw_limit_r() -
addr_map_ucf_psn1_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw1_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw1_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw1_limit_r() -
addr_map_rpg_pm_ucf_psw1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(2),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst2_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn2_psw_base_r(),
.range_end = addr_map_ucf_psn2_psw_limit_r(),
.element_stride = addr_map_ucf_psn2_psw_limit_r() -
addr_map_ucf_psn2_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst2_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw2_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw2_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw2_limit_r() -
addr_map_rpg_pm_ucf_psw2_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(3),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst3_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn3_psw_base_r(),
.range_end = addr_map_ucf_psn3_psw_limit_r(),
.element_stride = addr_map_ucf_psn3_psw_limit_r() -
addr_map_ucf_psn3_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst3_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw3_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw3_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw3_limit_r() -
addr_map_rpg_pm_ucf_psw3_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_ucf_psw = {
.num_instances = T264_HWPM_IP_UCF_PSW_NUM_INSTANCES,
.ip_inst_static_array = t264_ucf_psw_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn0_psw_base_r(),
.range_end = addr_map_ucf_psn3_psw_limit_r(),
.inst_stride = addr_map_ucf_psn0_psw_limit_r() -
addr_map_ucf_psn0_psw_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ucf_psw0_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw3_limit_r(),
.inst_stride = addr_map_rpg_pm_ucf_psw0_limit_r() -
addr_map_rpg_pm_ucf_psw0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK |
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
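Each element_info entry above pairs an inclusive physical range [range_start, range_end] with a stride of limit - base + 1, and the "range should be in ascending order" note exists because lookup is positional: an address inside the range maps to a slot by integer division. The sketch below shows only that arithmetic, with made-up addresses rather than the real T264 map; the actual lookup helpers live elsewhere in the driver.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for addr_map_*_base_r()/_limit_r(); not the real T264 addresses. */
#define PSW_RANGE_START 0x01000000ULL
#define PSW_RANGE_END   0x013FFFFFULL /* four instances, 0x100000 bytes each */
#define PSW_NUM_INST    4ULL

static uint64_t psw_slot_for_addr(uint64_t addr)
{
	uint64_t stride = (PSW_RANGE_END - PSW_RANGE_START + 1ULL) / PSW_NUM_INST;

	/* Caller is expected to have validated that addr lies inside the range. */
	return (addr - PSW_RANGE_START) / stride;
}

int main(void)
{
	/* An address in the third instance's window resolves to slot 2. */
	printf("slot = %llu\n",
		(unsigned long long)psw_slot_for_addr(0x01200010ULL));
	return 0;
}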

View File

@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_UCF_PSW_H
#define T264_HWPM_IP_UCF_PSW_H
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
#define T264_HWPM_ACTIVE_IP_UCF_PSW T264_HWPM_IP_UCF_PSW,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_UCF_PSW_NUM_INSTANCES 4U
#define T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ucf_psw;
#else
#define T264_HWPM_ACTIVE_IP_UCF_PSW
#endif
#endif /* T264_HWPM_IP_UCF_PSW_H */

View File

@@ -0,0 +1,301 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_vi.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_vi_inst0_perfmon_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_vi0",
.device_index = T264_VI0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vi0_base_r(),
.end_abs_pa = addr_map_rpg_pm_vi0_limit_r(),
.start_pa = addr_map_rpg_pm_vi0_base_r(),
.end_pa = addr_map_rpg_pm_vi0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vi_inst1_perfmon_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_vi1",
.device_index = T264_VI1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vi1_base_r(),
.end_abs_pa = addr_map_rpg_pm_vi1_limit_r(),
.start_pa = addr_map_rpg_pm_vi1_base_r(),
.end_pa = addr_map_rpg_pm_vi1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vi_inst0_perfmux_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_vi_thi_base_r(),
.end_abs_pa = addr_map_vi_thi_limit_r(),
.start_pa = addr_map_vi_thi_base_r(),
.end_pa = addr_map_vi_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_vi_alist,
.alist_size = ARRAY_SIZE(t264_vi_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vi_inst1_perfmux_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_vi2_thi_base_r(),
.end_abs_pa = addr_map_vi2_thi_limit_r(),
.start_pa = addr_map_vi2_thi_base_r(),
.end_pa = addr_map_vi2_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_vi_alist,
.alist_size = ARRAY_SIZE(t264_vi_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_vi_inst_static_array[
T264_HWPM_IP_VI_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_vi_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi_thi_base_r(),
.range_end = addr_map_vi_thi_limit_r(),
.element_stride = addr_map_vi_thi_limit_r() -
addr_map_vi_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST,
.element_static_array =
t264_vi_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_vi0_base_r(),
.range_end = addr_map_rpg_pm_vi0_limit_r(),
.element_stride = addr_map_rpg_pm_vi0_limit_r() -
addr_map_rpg_pm_vi0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_vi_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi2_thi_base_r(),
.range_end = addr_map_vi2_thi_limit_r(),
.element_stride = addr_map_vi2_thi_limit_r() -
addr_map_vi2_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST,
.element_static_array =
t264_vi_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_vi1_base_r(),
.range_end = addr_map_rpg_pm_vi1_limit_r(),
.element_stride = addr_map_rpg_pm_vi1_limit_r() -
addr_map_rpg_pm_vi1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_vi = {
.num_instances = T264_HWPM_IP_VI_NUM_INSTANCES,
.ip_inst_static_array = t264_vi_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi_thi_base_r(),
.range_end = addr_map_vi2_thi_limit_r(),
.inst_stride = addr_map_vi_thi_limit_r() -
addr_map_vi_thi_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_vi0_base_r(),
.range_end = addr_map_rpg_pm_vi1_limit_r(),
.inst_stride = addr_map_rpg_pm_vi0_limit_r() -
addr_map_rpg_pm_vi0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK |
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};

View File

@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_VI_H
#define T264_HWPM_IP_VI_H
#if defined(CONFIG_T264_HWPM_IP_VI)
#define T264_HWPM_ACTIVE_IP_VI T264_HWPM_IP_VI,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_VI_NUM_INSTANCES 2U
#define T264_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_VI_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_VI_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_vi;
#else
#define T264_HWPM_ACTIVE_IP_VI
#endif
#endif /* T264_HWPM_IP_VI_H */

View File

@@ -0,0 +1,196 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_vic.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_vic_inst0_perfmon_element_static_array[
T264_HWPM_IP_VIC_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_vica0",
.device_index = T264_VICA0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vic0_base_r(),
.end_abs_pa = addr_map_rpg_pm_vic0_limit_r(),
.start_pa = addr_map_rpg_pm_vic0_base_r(),
.end_pa = addr_map_rpg_pm_vic0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vic_inst0_perfmux_element_static_array[
T264_HWPM_IP_VIC_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_vic_base_r(),
.end_abs_pa = addr_map_vic_limit_r(),
.start_pa = addr_map_vic_base_r(),
.end_pa = addr_map_vic_limit_r(),
.base_pa = 0ULL,
.alist = t264_vic_alist,
.alist_size = ARRAY_SIZE(t264_vic_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_vic_inst_static_array[
T264_HWPM_IP_VIC_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_VIC_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_vic_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.element_stride = addr_map_vic_limit_r() -
addr_map_vic_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_VIC_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_VIC_NUM_PERFMON_PER_INST,
.element_static_array =
t264_vic_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_vic0_base_r(),
.range_end = addr_map_rpg_pm_vic0_limit_r(),
.element_stride = addr_map_rpg_pm_vic0_limit_r() -
addr_map_rpg_pm_vic0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_vic = {
.num_instances = T264_HWPM_IP_VIC_NUM_INSTANCES,
.ip_inst_static_array = t264_vic_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.inst_stride = addr_map_vic_limit_r() -
addr_map_vic_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_vic0_base_r(),
.range_end = addr_map_rpg_pm_vic0_limit_r(),
.inst_stride = addr_map_rpg_pm_vic0_limit_r() -
addr_map_rpg_pm_vic0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};

View File

@@ -0,0 +1,48 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_VIC_H
#define T264_HWPM_IP_VIC_H
#if defined(CONFIG_T264_HWPM_IP_VIC)
#define T264_HWPM_ACTIVE_IP_VIC T264_HWPM_IP_VIC,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_VIC_NUM_INSTANCES 1U
#define T264_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_VIC_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_VIC_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_VIC_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_vic;
#else
#define T264_HWPM_ACTIVE_IP_VIC
#endif
#endif /* T264_HWPM_IP_VIC_H */

View File

@@ -0,0 +1,546 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_timers.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/ip/rtr/t264_rtr.h>
#include <hal/t264/hw/t264_pmasys_soc_hwpm.h>
#include <hal/t264/hw/t264_pmmsys_soc_hwpm.h>
#define T264_HWPM_ENGINE_INDEX_GPMA0 3U
#define T264_HWPM_ENGINE_INDEX_GPMA1 4U
#define T264_HWPM_ENGINE_INDEX_PMA 8U
int t264_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx()];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T264_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T264_HWPM_IP_RTR_STATIC_PMA_INST];
if (rtr_perfmux_ptr != NULL) {
*rtr_perfmux_ptr = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T264_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX];
}
if (pma_perfmux_ptr != NULL) {
*pma_perfmux_ptr = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T264_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX];
}
return 0;
}
int t264_hwpm_check_status(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Check ROUTER state */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_enginestatus_r(), &reg_val);
hwpm_assert_print(hwpm,
pmmsys_router_enginestatus_status_v(reg_val) ==
pmmsys_router_enginestatus_status_empty_v(),
return -EINVAL, "Router not ready value 0x%x", reg_val);
/* Check PMA state */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_status_r(0, 0), &reg_val);
hwpm_assert_print(hwpm,
(reg_val & pmasys_channel_status_engine_status_m()) ==
pmasys_channel_status_engine_status_empty_f(),
return -EINVAL, "PMA not ready value 0x%x", reg_val);
return 0;
}
int t264_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
u32 retries = 10U;
u32 sleep_msecs = 100U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA triggers */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_config_user_pma_pulse_m(),
pmasys_command_slice_trigger_config_user_pma_pulse_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), reg_val);
/* Reset TRIGGER_START_MASK registers */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask0_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_start_mask0_engine_m(),
pmasys_command_slice_trigger_start_mask0_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask0_r(0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask1_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_start_mask1_engine_m(),
pmasys_command_slice_trigger_start_mask1_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask1_r(0), reg_val);
/* Reset TRIGGER_STOP_MASK registers */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask0_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_stop_mask0_engine_m(),
pmasys_command_slice_trigger_stop_mask0_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask0_r(0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask1_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_stop_mask1_engine_m(),
pmasys_command_slice_trigger_stop_mask1_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask1_r(0), reg_val);
/* Wait for PERFMONs to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_router_enginestatus_r(), &reg_val,
(pmmsys_router_enginestatus_merged_perfmon_status_v(
reg_val) != 0U),
"PMMSYS_ROUTER_ENGINESTATUS_PERFMON_STATUS timed out");
/* Wait for ROUTER to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_router_enginestatus_r(), &reg_val,
(pmmsys_router_enginestatus_status_v(reg_val) !=
pmmsys_router_enginestatus_status_empty_v()),
"PMMSYS_ROUTER_ENGINESTATUS_STATUS timed out");
/* Wait for PMA to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, pma_perfmux,
pmasys_channel_status_r(0, 0), &reg_val,
((reg_val & pmasys_channel_status_engine_status_m()) !=
pmasys_channel_status_engine_status_empty_f()),
"PMASYS_CHANNEL_STATUS timed out");
return err;
}
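t264_hwpm_disable_triggers(), like the prod-value and clock-gating helpers that follow, repeats one read-modify-write idiom: read a register through an aperture, replace a single field with set_field(), and write the result back. The wrapper below is a hypothetical sketch of how that repetition could be captured; it assumes the surrounding driver headers (tegra_hwpm_readl, set_field, tegra_hwpm_writel) and is not code that exists in this change.

/* Hypothetical wrapper for the read/set_field/write idiom used repeatedly above. */
static void t264_hwpm_field_update(struct tegra_soc_hwpm *hwpm,
	struct hwpm_ip_aperture *aperture, u32 offset, u32 mask, u32 field_val)
{
	u32 reg_val = 0U;

	tegra_hwpm_readl(hwpm, aperture, offset, &reg_val);
	reg_val = set_field(reg_val, mask, field_val);
	tegra_hwpm_writel(hwpm, aperture, offset, reg_val);
}

/*
 * Example: the PMA pulse disable above would collapse to
 * t264_hwpm_field_update(hwpm, pma_perfmux,
 *         pmasys_command_slice_trigger_config_user_r(0),
 *         pmasys_command_slice_trigger_config_user_pma_pulse_m(),
 *         pmasys_command_slice_trigger_config_user_pma_pulse_disable_f());
 */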
int t264_hwpm_init_prod_values(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0, 0), &reg_val);
reg_val = set_field(reg_val,
pmasys_channel_config_user_coalesce_timeout_cycles_m(),
pmasys_channel_config_user_coalesce_timeout_cycles__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0, 0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg2_secure_slcg_m(),
pmasys_profiling_cg2_secure_slcg__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg1_secure_flcg_m(),
pmasys_profiling_cg1_secure_flcg__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_dg_cg1_secure_flcg_m(),
pmmsys_router_profiling_dg_cg1_secure_flcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg1_secure_flcg_m(),
pmmsys_router_profiling_cg1_secure_flcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_perfmon_cg2_secure_slcg_m(),
pmmsys_router_perfmon_cg2_secure_slcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg2_secure_slcg_m(),
pmmsys_router_profiling_cg2_secure_slcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), reg_val);
return 0;
}
int t264_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg2_secure_slcg_m(),
pmasys_profiling_cg2_secure_slcg_disabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg1_secure_flcg_m(),
pmasys_profiling_cg1_secure_flcg_disabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_dg_cg1_secure_flcg_m(),
pmmsys_router_profiling_dg_cg1_secure_flcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg1_secure_flcg_m(),
pmmsys_router_profiling_cg1_secure_flcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_perfmon_cg2_secure_slcg_m(),
pmmsys_router_perfmon_cg2_secure_slcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg2_secure_slcg_m(),
pmmsys_router_profiling_cg2_secure_slcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), reg_val);
return 0;
}
int t264_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg2_secure_slcg_m(),
pmasys_profiling_cg2_secure_slcg_enabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg1_secure_flcg_m(),
pmasys_profiling_cg1_secure_flcg_enabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_dg_cg1_secure_flcg_m(),
pmmsys_router_profiling_dg_cg1_secure_flcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg1_secure_flcg_m(),
pmmsys_router_profiling_cg1_secure_flcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_perfmon_cg2_secure_slcg_m(),
pmmsys_router_perfmon_cg2_secure_slcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg2_secure_slcg_m(),
pmmsys_router_profiling_cg2_secure_slcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), reg_val);
return 0;
}
int t264_hwpm_credit_program(struct tegra_soc_hwpm *hwpm,
u32 *num_credits, u8 cblock_idx, u8 pma_channel_idx,
uint16_t credit_cmd)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr, pma perfmux failed");
switch (credit_cmd) {
case TEGRA_HWPM_CMD_SET_HS_CREDITS:
/* Write credits information */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_user_channel_config_secure_r(
cblock_idx, pma_channel_idx),
&reg_val);
reg_val = set_field(reg_val,
pmmsys_user_channel_config_secure_hs_credits_m(),
*num_credits);
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_user_channel_config_secure_r(
cblock_idx, pma_channel_idx),
reg_val);
break;
case TEGRA_HWPM_CMD_GET_HS_CREDITS:
/* Read credits information */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_user_channel_config_secure_r(
cblock_idx, pma_channel_idx),
num_credits);
break;
case TEGRA_HWPM_CMD_GET_TOTAL_HS_CREDITS:
/* read the total HS Credits */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_streaming_capabilities1_r(), &reg_val);
*num_credits = pmasys_streaming_capabilities1_total_credits_v(
reg_val);
break;
case TEGRA_HWPM_CMD_GET_CHIPLET_HS_CREDITS_POOL:
/* Defined for future chips */
tegra_hwpm_err(hwpm,
"TEGRA_SOC_HWPM_CMD_GET_CHIPLET_HS_CREDIT_POOL"
" not supported");
err = -EINVAL;
break;
case TEGRA_HWPM_CMD_GET_HS_CREDITS_MAPPING:
/* Defined for future chips */
tegra_hwpm_err(hwpm,
"TEGRA_SOC_HWPM_CMD_GET_HS_CREDIT_MAPPING"
" not supported");
err = -EINVAL;
break;
default:
tegra_hwpm_err(hwpm, "Invalid Credit Programming State (%d)",
credit_cmd);
err = -EINVAL;
break;
}
return err;
}
int t264_hwpm_setup_trigger(struct tegra_soc_hwpm *hwpm,
u8 enable_cross_trigger, u8 session_type)
{
int err = 0;
u32 trigger_mask_secure0 = 0U;
u32 record_select_secure = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get pma perfmux failed");
/*
* Case 1: profiler, cross-trigger enabled, GPU->SoC
* - Action: enable incoming start-stop trigger from GPU PMA
* - GPU PMA Action: enable outgoing trigger from GPU PMA,
* trigger type doesn't matter on GPU side
*
* Case 2: sampler, cross-trigger enabled, GPU->SoC
* - Action: enable incoming periodic trigger from GPU PMA
* - GPU PMA Action: enable outgoing trigger from GPU PMA,
* trigger type doesn't matter on GPU side
*
* Case 3: profiler, cross-trigger enabled, SoC->GPU
* - Action: enable outgoing trigger from SoC PMA,
* trigger type doesn't matter on SoC side
* - GPU PMA Action: configure incoming start-stop trigger from SoC PMA
*
* Case 4: sampler, cross-trigger enabled, SoC->GPU
* - Action: enable outgoing trigger from SoC PMA,
* trigger type doesn't matter on SoC side
* - GPU PMA Action: configure incoming periodic trigger from SoC PMA
*
* Case 5: profiler, cross-trigger disabled
* - Action: enable own trigger from SoC PMA,
* trigger type doesn't matter
* - GPU PMA Action: enable own trigger from GPU PMA,
* trigger type doesn't matter
*
* Case 6: sampler, cross-trigger disabled
* - Action: enable own trigger from SoC PMA,
* trigger type doesn't matter
* - GPU PMA Action: enable own trigger from GPU PMA,
* trigger type doesn't matter
*/
if (!enable_cross_trigger) {
/*
* Handle Case-3 to Case-6
*/
trigger_mask_secure0 = BIT(T264_HWPM_ENGINE_INDEX_PMA);
record_select_secure = T264_HWPM_ENGINE_INDEX_PMA;
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_mask_secure0_r(0),
trigger_mask_secure0);
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_record_select_secure_r(0),
record_select_secure);
return err;
}
switch (session_type) {
case TEGRA_HWPM_CMD_PERIODIC_SESSION:
/*
* Handle Case-2
*/
trigger_mask_secure0 = BIT(T264_HWPM_ENGINE_INDEX_GPMA1);
record_select_secure = T264_HWPM_ENGINE_INDEX_GPMA1;
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_mask_secure0_r(0),
trigger_mask_secure0);
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_record_select_secure_r(0),
record_select_secure);
break;
case TEGRA_HWPM_CMD_START_STOP_SESSION:
/*
* Handle Case-1
*/
trigger_mask_secure0 = BIT(T264_HWPM_ENGINE_INDEX_GPMA0);
record_select_secure = T264_HWPM_ENGINE_INDEX_GPMA0;
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_mask_secure0_r(0),
trigger_mask_secure0);
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_record_select_secure_r(0),
record_select_secure);
break;
case TEGRA_HWPM_CMD_INVALID_SESSION:
default:
tegra_hwpm_err(hwpm, "Invalid Session type");
err = -EINVAL;
break;
}
return err;
}
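At the register level, the six-case comment above all collapses to choosing one engine index for the trigger mask and the record select. The two call sites sketched below use the HAL pointer and the command constants that appear in this file, but they are hypothetical illustrations of how the function is meant to be driven, not code from this change.

/* Hypothetical call sites for t264_hwpm_setup_trigger(); error handling kept minimal. */
static int example_gpu_to_soc_sampler(struct tegra_soc_hwpm *hwpm)
{
	/* Sampler with cross-trigger enabled: follow the GPU PMA's periodic trigger. */
	return hwpm->active_chip->setup_trigger(hwpm, 1U,
		TEGRA_HWPM_CMD_PERIODIC_SESSION);
}

static int example_soc_standalone(struct tegra_soc_hwpm *hwpm)
{
	/* Cross-trigger disabled: the SoC PMA triggers itself; session type is ignored. */
	return hwpm->active_chip->setup_trigger(hwpm, 0U,
		TEGRA_HWPM_CMD_PERIODIC_SESSION);
}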

View File

@@ -0,0 +1,31 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_INIT_H
#define T264_HWPM_INIT_H
struct tegra_soc_hwpm;
int t264_hwpm_init_chip_info(struct tegra_soc_hwpm *hwpm);
#endif /* T264_HWPM_INIT_H */

View File

@@ -0,0 +1,360 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_clk_rst.h>
#include <tegra_hwpm_common.h>
#include <tegra_hwpm_kmem.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_init.h>
#include <hal/t264/t264_internal.h>
static struct tegra_soc_hwpm_chip t264_chip_info = {
.la_clk_rate = 648000000,
.chip_ips = NULL,
/* HALs */
.validate_secondary_hals = t264_hwpm_validate_secondary_hals,
/* Clocks-Resets */
.clk_rst_prepare = tegra_hwpm_clk_rst_prepare,
.clk_rst_set_rate_enable = tegra_hwpm_clk_rst_set_rate_enable,
.clk_rst_disable = tegra_hwpm_clk_rst_disable,
.clk_rst_release = tegra_hwpm_clk_rst_release,
/* IP */
.is_ip_active = t264_hwpm_is_ip_active,
.is_resource_active = t264_hwpm_is_resource_active,
.get_rtr_int_idx = t264_get_rtr_int_idx,
.get_ip_max_idx = t264_get_ip_max_idx,
.get_rtr_pma_perfmux_ptr = t264_hwpm_get_rtr_pma_perfmux_ptr,
.extract_ip_ops = t264_hwpm_extract_ip_ops,
.force_enable_ips = t264_hwpm_force_enable_ips,
.validate_current_config = t264_hwpm_validate_current_config,
.get_fs_info = tegra_hwpm_get_fs_info,
.get_resource_info = tegra_hwpm_get_resource_info,
/* Clock gating */
.init_prod_values = t264_hwpm_init_prod_values,
.disable_cg = t264_hwpm_disable_cg,
.enable_cg = t264_hwpm_enable_cg,
/* Secure register programming */
.credit_program = t264_hwpm_credit_program,
.setup_trigger = t264_hwpm_setup_trigger,
/* Resource reservation */
.reserve_rtr = tegra_hwpm_reserve_rtr,
.release_rtr = tegra_hwpm_release_rtr,
/* Aperture */
.perfmon_enable = t264_hwpm_perfmon_enable,
.perfmon_disable = t264_hwpm_perfmon_disable,
.perfmux_disable = tegra_hwpm_perfmux_disable,
.disable_triggers = t264_hwpm_disable_triggers,
.check_status = t264_hwpm_check_status,
/* Memory management */
.disable_mem_mgmt = t264_hwpm_disable_mem_mgmt,
.enable_mem_mgmt = t264_hwpm_enable_mem_mgmt,
.invalidate_mem_config = t264_hwpm_invalidate_mem_config,
.stream_mem_bytes = t264_hwpm_stream_mem_bytes,
.disable_pma_streaming = t264_hwpm_disable_pma_streaming,
.update_mem_bytes_get_ptr = t264_hwpm_update_mem_bytes_get_ptr,
.get_mem_bytes_put_ptr = t264_hwpm_get_mem_bytes_put_ptr,
.membuf_overflow_status = t264_hwpm_membuf_overflow_status,
/* Allowlist */
.get_alist_buf_size = tegra_hwpm_get_alist_buf_size,
.zero_alist_regs = tegra_hwpm_zero_alist_regs,
.copy_alist = tegra_hwpm_copy_alist,
.check_alist = tegra_hwpm_check_alist,
};
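t264_chip_info is a dispatch table: chip-specific behaviour is only reached through the hwpm->active_chip function pointers, exactly as the helpers in the previous file do with get_rtr_pma_perfmux_ptr. A minimal hypothetical caller, to make the indirection explicit:

/* Hypothetical caller; not part of this change. */
static int example_check_chip_status(struct tegra_soc_hwpm *hwpm)
{
	if (hwpm->active_chip == NULL || hwpm->active_chip->check_status == NULL)
		return -EINVAL;

	return hwpm->active_chip->check_status(hwpm);
}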
bool t264_hwpm_validate_secondary_hals(struct tegra_soc_hwpm *hwpm)
{
tegra_hwpm_fn(hwpm, " ");
if (hwpm->active_chip->clk_rst_prepare == NULL) {
tegra_hwpm_err(hwpm, "clk_rst_prepare HAL uninitialized");
return false;
}
if (hwpm->active_chip->clk_rst_set_rate_enable == NULL) {
tegra_hwpm_err(hwpm,
"clk_rst_set_rate_enable HAL uninitialized");
return false;
}
if (hwpm->active_chip->clk_rst_disable == NULL) {
tegra_hwpm_err(hwpm, "clk_rst_disable HAL uninitialized");
return false;
}
if (hwpm->active_chip->clk_rst_release == NULL) {
tegra_hwpm_err(hwpm, "clk_rst_release HAL uninitialized");
return false;
}
if (hwpm->active_chip->credit_program == NULL) {
tegra_hwpm_err(hwpm, "credit_program HAL uninitialized");
return false;
}
if (hwpm->active_chip->setup_trigger == NULL) {
tegra_hwpm_err(hwpm, "setup_trigger HAL uninitialized");
return false;
}
return true;
}
bool t264_hwpm_is_ip_active(struct tegra_soc_hwpm *hwpm,
u32 ip_enum, u32 *config_ip_index)
{
u32 config_ip = TEGRA_HWPM_IP_INACTIVE;
switch (ip_enum) {
#if defined(CONFIG_T264_HWPM_IP_VIC)
case TEGRA_HWPM_IP_VIC:
config_ip = T264_HWPM_IP_VIC;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case TEGRA_HWPM_IP_MSS_CHANNEL:
config_ip = T264_HWPM_IP_MSS_CHANNEL;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_PVA)
case TEGRA_HWPM_IP_PVA:
config_ip = T264_HWPM_IP_PVA;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
case TEGRA_HWPM_IP_MSS_HUB:
config_ip = T264_HWPM_IP_MSS_HUBS;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_OCU)
case TEGRA_HWPM_IP_MCF_OCU:
config_ip = T264_HWPM_IP_OCU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_VI)
case TEGRA_HWPM_IP_VI:
config_ip = T264_HWPM_IP_VI;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_ISP)
case TEGRA_HWPM_IP_ISP:
config_ip = T264_HWPM_IP_ISP;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_SMMU)
case TEGRA_HWPM_IP_SMMU:
config_ip = T264_HWPM_IP_SMMU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
case TEGRA_HWPM_IP_UCF_MSW:
config_ip = T264_HWPM_IP_UCF_MSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
case TEGRA_HWPM_IP_UCF_PSW:
config_ip = T264_HWPM_IP_UCF_PSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
case TEGRA_HWPM_IP_UCF_CSW:
config_ip = T264_HWPM_IP_UCF_CSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_CPU)
case TEGRA_HWPM_IP_CPU:
config_ip = T264_HWPM_IP_CPU;
#endif
break;
default:
tegra_hwpm_err(hwpm, "Queried enum tegra_hwpm_ip %d invalid",
ip_enum);
break;
}
*config_ip_index = config_ip;
return (config_ip != TEGRA_HWPM_IP_INACTIVE);
}
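/*
* Same mapping for resource enums. The PMA and CMD_SLICE_RTR resources are
* not gated by a CONFIG option and always resolve to their internal indices.
*/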
bool t264_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
u32 res_enum, u32 *config_ip_index)
{
u32 config_ip = TEGRA_HWPM_IP_INACTIVE;
switch (res_enum) {
#if defined(CONFIG_T264_HWPM_IP_VIC)
case TEGRA_HWPM_RESOURCE_VIC:
config_ip = T264_HWPM_IP_VIC;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case TEGRA_HWPM_RESOURCE_MSS_CHANNEL:
config_ip = T264_HWPM_IP_MSS_CHANNEL;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_PVA)
case TEGRA_HWPM_RESOURCE_PVA:
config_ip = T264_HWPM_IP_PVA;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
case TEGRA_HWPM_RESOURCE_MSS_HUB:
config_ip = T264_HWPM_IP_MSS_HUBS;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_OCU)
case TEGRA_HWPM_RESOURCE_MCF_OCU:
config_ip = T264_HWPM_IP_OCU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_VI)
case TEGRA_HWPM_RESOURCE_VI:
config_ip = T264_HWPM_IP_VI;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_ISP)
case TEGRA_HWPM_RESOURCE_ISP:
config_ip = T264_HWPM_IP_ISP;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_SMMU)
case TEGRA_HWPM_RESOURCE_SMMU:
config_ip = T264_HWPM_IP_SMMU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
case TEGRA_HWPM_RESOURCE_UCF_MSW:
config_ip = T264_HWPM_IP_UCF_MSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
case TEGRA_HWPM_RESOURCE_UCF_PSW:
config_ip = T264_HWPM_IP_UCF_PSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
case TEGRA_HWPM_RESOURCE_UCF_CSW:
config_ip = T264_HWPM_IP_UCF_CSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_CPU)
case TEGRA_HWPM_RESOURCE_CPU:
config_ip = T264_HWPM_IP_CPU;
#endif
break;
case TEGRA_HWPM_RESOURCE_PMA:
config_ip = T264_HWPM_IP_PMA;
break;
case TEGRA_HWPM_RESOURCE_CMD_SLICE_RTR:
config_ip = T264_HWPM_IP_RTR;
break;
default:
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"Queried resource %d invalid",
res_enum);
break;
}
*config_ip_index = config_ip;
return (config_ip != TEGRA_HWPM_IP_INACTIVE);
}
u32 t264_get_rtr_int_idx(void)
{
return T264_HWPM_IP_RTR;
}
u32 t264_get_ip_max_idx(void)
{
return T264_HWPM_IP_MAX;
}
int t264_hwpm_init_chip_info(struct tegra_soc_hwpm *hwpm)
{
struct hwpm_ip **t264_active_ip_info;
/* Allocate array of pointers to hold active IP structures */
t264_chip_info.chip_ips = tegra_hwpm_kcalloc(hwpm,
T264_HWPM_IP_MAX, sizeof(struct hwpm_ip *));
if (t264_chip_info.chip_ips == NULL) {
tegra_hwpm_err(hwpm, "failed to allocate chip_ips array");
return -ENOMEM;
}
/* Add active chip structure link to hwpm super-structure */
hwpm->active_chip = &t264_chip_info;
/* Temporary pointer to make the assignments below legible */

t264_active_ip_info = t264_chip_info.chip_ips;
t264_active_ip_info[T264_HWPM_IP_PMA] = &t264_hwpm_ip_pma;
t264_active_ip_info[T264_HWPM_IP_RTR] = &t264_hwpm_ip_rtr;
#if defined(CONFIG_T264_HWPM_IP_VIC)
t264_active_ip_info[T264_HWPM_IP_VIC] = &t264_hwpm_ip_vic;
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
t264_active_ip_info[T264_HWPM_IP_MSS_CHANNEL] =
&t264_hwpm_ip_mss_channel;
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
t264_active_ip_info[T264_HWPM_IP_MSS_HUBS] =
&t264_hwpm_ip_mss_hubs;
#endif
#if defined(CONFIG_T264_HWPM_IP_PVA)
t264_active_ip_info[T264_HWPM_IP_PVA] = &t264_hwpm_ip_pva;
#endif
#if defined(CONFIG_T264_HWPM_IP_OCU)
t264_active_ip_info[T264_HWPM_IP_OCU] = &t264_hwpm_ip_ocu;
#endif
#if defined(CONFIG_T264_HWPM_IP_SMMU)
t264_active_ip_info[T264_HWPM_IP_SMMU] = &t264_hwpm_ip_smmu;
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
t264_active_ip_info[T264_HWPM_IP_UCF_MSW] = &t264_hwpm_ip_ucf_msw;
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
t264_active_ip_info[T264_HWPM_IP_UCF_PSW] = &t264_hwpm_ip_ucf_psw;
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
t264_active_ip_info[T264_HWPM_IP_UCF_CSW] = &t264_hwpm_ip_ucf_csw;
#endif
#if defined(CONFIG_T264_HWPM_IP_CPU)
t264_active_ip_info[T264_HWPM_IP_CPU] = &t264_hwpm_ip_cpu;
#endif
#if defined(CONFIG_T264_HWPM_IP_VI)
t264_active_ip_info[T264_HWPM_IP_VI] = &t264_hwpm_ip_vi;
#endif
#if defined(CONFIG_T264_HWPM_IP_ISP)
t264_active_ip_info[T264_HWPM_IP_ISP] = &t264_hwpm_ip_isp;
#endif
if (!tegra_hwpm_validate_primary_hals(hwpm)) {
return -EINVAL;
}
return 0;
}


@@ -0,0 +1,119 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_INTERNAL_H
#define T264_HWPM_INTERNAL_H
#include <hal/t264/ip/vic/t264_vic.h>
#include <hal/t264/ip/pva/t264_pva.h>
#include <hal/t264/ip/mss_channel/t264_mss_channel.h>
#include <hal/t264/ip/mss_hubs/t264_mss_hubs.h>
#include <hal/t264/ip/ocu/t264_ocu.h>
#include <hal/t264/ip/smmu/t264_smmu.h>
#include <hal/t264/ip/ucf_msw/t264_ucf_msw.h>
#include <hal/t264/ip/ucf_psw/t264_ucf_psw.h>
#include <hal/t264/ip/ucf_csw/t264_ucf_csw.h>
#include <hal/t264/ip/cpu/t264_cpu.h>
#include <hal/t264/ip/vi/t264_vi.h>
#include <hal/t264/ip/isp/t264_isp.h>
#include <hal/t264/ip/pma/t264_pma.h>
#include <hal/t264/ip/rtr/t264_rtr.h>
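/*
* T264_ACTIVE_IPS below is an X-macro list: DEFINE_SOC_HWPM_ACTIVE_IP is
* (re)defined to expand each entry to its bare name so the list generates
* the t264_hwpm_active_ips enum, then #undef-ed again so other users of the
* list can supply their own expansion.
*/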
#undef DEFINE_SOC_HWPM_ACTIVE_IP
#define DEFINE_SOC_HWPM_ACTIVE_IP(name) name
#define T264_HWPM_ACTIVE_IP_MAX T264_HWPM_IP_MAX
#define T264_ACTIVE_IPS \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_PMA) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_RTR) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_VI) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_ISP) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_VIC) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_PVA) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_MSS_CHANNEL) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_MSS_HUBS) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_OCU) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_SMMU) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_UCF_MSW) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_UCF_PSW) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_UCF_CSW) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_CPU) \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_MAX)
enum t264_hwpm_active_ips {
T264_ACTIVE_IPS
};
#undef DEFINE_SOC_HWPM_ACTIVE_IP
enum tegra_soc_hwpm_ip;
enum tegra_soc_hwpm_resource;
struct tegra_soc_hwpm;
struct hwpm_ip_aperture;
bool t264_hwpm_validate_secondary_hals(struct tegra_soc_hwpm *hwpm);
bool t264_hwpm_is_ip_active(struct tegra_soc_hwpm *hwpm,
u32 ip_enum, u32 *config_ip_index);
bool t264_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
u32 res_enum, u32 *config_ip_index);
u32 t264_get_rtr_int_idx(void);
u32 t264_get_ip_max_idx(void);
int t264_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr);
int t264_hwpm_check_status(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_validate_current_config(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_extract_ip_ops(struct tegra_soc_hwpm *hwpm,
u32 resource_enum, u64 base_address,
struct tegra_hwpm_ip_ops *ip_ops, bool available);
int t264_hwpm_force_enable_ips(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_init_prod_values(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_credit_program(struct tegra_soc_hwpm *hwpm,
u32 *num_credits, u8 cblock_idx, u8 pma_channel_idx,
uint16_t credit_cmd);
int t264_hwpm_setup_trigger(struct tegra_soc_hwpm *hwpm,
u8 enable_cross_trigger, u8 session_type);
int t264_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon);
int t264_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon);
int t264_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_disable_mem_mgmt(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_invalidate_mem_config(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump);
int t264_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr);
int t264_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status);
#endif /* T264_HWPM_INTERNAL_H */


@@ -0,0 +1,673 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_static_analysis.h>
#include <tegra_hwpm_common.h>
#include <tegra_hwpm_soc.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
/*
* This function is invoked by register_ip API.
* Convert the external resource enum to internal IP index.
* Extract given ip_ops and update corresponding IP structure.
*/
int t264_hwpm_extract_ip_ops(struct tegra_soc_hwpm *hwpm,
u32 resource_enum, u64 base_address,
struct tegra_hwpm_ip_ops *ip_ops, bool available)
{
int ret = 0;
u32 ip_idx = 0U;
tegra_hwpm_fn(hwpm, " ");
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"Extract IP ops for resource enum %d info", resource_enum);
/* Convert tegra_soc_hwpm_resource to internal enum */
if (!(t264_hwpm_is_resource_active(hwpm, resource_enum, &ip_idx))) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"SOC hwpm resource %d (base 0x%llx) is unconfigured",
resource_enum, (unsigned long long)base_address);
goto fail;
}
switch (ip_idx) {
#if defined(CONFIG_T264_HWPM_IP_VIC)
case T264_HWPM_IP_VIC:
#endif
#if defined(CONFIG_T264_HWPM_IP_PVA)
case T264_HWPM_IP_PVA:
#endif
#if defined(CONFIG_T264_HWPM_IP_OCU)
case T264_HWPM_IP_OCU:
#endif
#if defined(CONFIG_T264_HWPM_IP_SMMU)
case T264_HWPM_IP_SMMU:
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
case T264_HWPM_IP_UCF_MSW:
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
case T264_HWPM_IP_UCF_PSW:
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
case T264_HWPM_IP_UCF_CSW:
#endif
#if defined(CONFIG_T264_HWPM_IP_CPU)
case T264_HWPM_IP_CPU:
#endif
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"Failed to %s fs/ops for IP %d (base 0x%llx)",
available == true ? "set" : "reset",
ip_idx, (unsigned long long)base_address);
goto fail;
}
break;
#if defined(CONFIG_T264_HWPM_IP_VI)
case T264_HWPM_IP_VI:
#endif
#if defined(CONFIG_T264_HWPM_IP_ISP)
case T264_HWPM_IP_ISP:
#endif
if (tegra_hwpm_is_hypervisor_mode()) {
/*
* VI and ISP are enabled only on AV+L configuration
* as the camera driver is not supported on L4T.
*/
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"Failed to %s fs/ops for IP %d (base 0x%llx)",
available == true ? "set" : "reset",
ip_idx, (unsigned long long)base_address);
goto fail;
}
} else {
tegra_hwpm_err(hwpm, "Invalid IP %d for ip_ops", ip_idx);
}
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case T264_HWPM_IP_MSS_CHANNEL:
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
case T264_HWPM_IP_MSS_HUBS:
#endif
/* MSS channel and MSS hubs share MC channels */
/* Check base address in T264_HWPM_IP_MSS_CHANNEL */
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
ip_idx = T264_HWPM_IP_MSS_CHANNEL;
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
/*
* Return value of ENODEV will indicate that the base
* address doesn't belong to this IP.
*/
if (ret != -ENODEV) {
tegra_hwpm_err(hwpm,
"IP %d base 0x%llx:Failed to %s fs/ops",
ip_idx, (unsigned long long)base_address,
available == true ? "set" : "reset");
goto fail;
}
/*
* ret = -ENODEV indicates given address doesn't belong
* to IP. This means ip_ops will not be set for this IP.
* This shouldn't be a reason to fail this function.
* Hence, reset ret to 0.
*/
ret = 0;
}
#endif
/* Check base address in T264_HWPM_IP_MSS_HUBS */
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
ip_idx = T264_HWPM_IP_MSS_HUBS;
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
/*
* Return value of ENODEV will indicate that the base
* address doesn't belong to this IP.
*/
if (ret != -ENODEV) {
tegra_hwpm_err(hwpm,
"IP %d base 0x%llx:Failed to %s fs/ops",
ip_idx, (unsigned long long)base_address,
available == true ? "set" : "reset");
goto fail;
}
/*
* ret = -ENODEV indicates given address doesn't belong
* to IP. This means ip_ops will not be set for this IP.
* This shouldn't be a reason to fail this function.
* Hence, reset ret to 0.
*/
ret = 0;
}
#endif
break;
case T264_HWPM_IP_PMA:
case T264_HWPM_IP_RTR:
default:
tegra_hwpm_err(hwpm, "Invalid IP %d for ip_ops", ip_idx);
break;
}
fail:
return ret;
}
static int t264_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
struct hwpm_ip *chip_ip = NULL;
struct hwpm_ip_inst *ip_inst = NULL;
u32 inst_idx = 0U;
u32 element_mask_max = 0U;
#endif
u32 mss_disable_fuse_val = 0U;
u32 mss_disable_fuse_val_mask = 0xFU;
u32 mss_disable_fuse_bit_idx = 0U;
u32 emc_element_floorsweep_mask = 0U;
u32 idx = 0U;
int err;
tegra_hwpm_fn(hwpm, " ");
if (!tegra_hwpm_is_platform_silicon()) {
tegra_hwpm_err(hwpm,
"Fuse readl is not implemented yet. Skip for now ");
return 0;
}
#define TEGRA_FUSE_OPT_MSS_DISABLE 0x8c0U
err = tegra_hwpm_fuse_readl(hwpm,
TEGRA_FUSE_OPT_MSS_DISABLE, &mss_disable_fuse_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "emc_disable fuse read failed");
return err;
}
/*
* In floorsweep fuse value,
* each bit corresponds to 4 elements.
* Bit value 0 indicates those elements are
* available and bit value 1 indicates
* corresponding elements are floorswept.
*
* Convert floorsweep fuse value to available EMC elements.
*/
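/*
* For example, a fuse value of 0x2 (only bit 1 set) means EMC elements
* 4..7 are floorswept; bits 0, 2 and 3 are clear, so the loop below builds
* an availability mask of 0x0000ff0f.
*/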
do {
if (!(mss_disable_fuse_val & (0x1U << mss_disable_fuse_bit_idx))) {
emc_element_floorsweep_mask |=
(0xFU << (mss_disable_fuse_bit_idx * 4U));
}
mss_disable_fuse_bit_idx++;
mss_disable_fuse_val_mask = (mss_disable_fuse_val_mask >> 1U);
} while (mss_disable_fuse_val_mask != 0U);
/* Set fuse value in MSS IP instances */
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
switch (idx) {
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case T264_HWPM_IP_MSS_CHANNEL:
chip_ip = active_chip->chip_ips[idx];
for (inst_idx = 0U; inst_idx < chip_ip->num_instances;
inst_idx++) {
ip_inst = &chip_ip->ip_inst_static_array[
inst_idx];
/*
* Clamp the fuse-derived mask to the elements actually
* present in this instance so the driver stores correct
* fs info.
*/
element_mask_max = tegra_hwpm_safe_sub_u32(
tegra_hwpm_safe_cast_u64_to_u32(BIT(
ip_inst->num_core_elements_per_inst)),
1U);
ip_inst->fuse_fs_mask =
(emc_element_floorsweep_mask &
element_mask_max);
tegra_hwpm_dbg(hwpm, hwpm_info,
"ip %d, fuse_mask 0x%x",
idx, ip_inst->fuse_fs_mask);
}
break;
#endif
default:
continue;
}
}
return 0;
}
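/*
* Validate the current fuse/sticky-bit configuration and decide whether
* PERFMON override is required:
* - FA mode fuse set: all PERFMONs are enabled, no override needed.
* - OPT_HWPM_DISABLE fuse and the HWPM global-disable sticky bit both
*   clear: no override needed.
* - Otherwise, set override_enable on every IP whose dependent_fuse_mask
*   contains the global-disable bit (RTR is excluded via that mask).
*/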
int t264_hwpm_validate_current_config(struct tegra_soc_hwpm *hwpm)
{
u32 opt_hwpm_disable = 0U;
u32 fa_mode = 0U;
u32 hwpm_global_disable = 0U;
u32 idx = 0U;
int err;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = NULL;
tegra_hwpm_fn(hwpm, " ");
if (!tegra_hwpm_is_platform_silicon()) {
return 0;
}
err = t264_hwpm_validate_emc_config(hwpm);
if (err != 0) {
tegra_hwpm_err(hwpm, "failed to validate emc config");
return err;
}
#define TEGRA_FUSE_OPT_HWPM_DISABLE 0xc18
/* Read fuse_opt_hwpm_disable_0 fuse */
err = tegra_hwpm_fuse_readl(hwpm,
TEGRA_FUSE_OPT_HWPM_DISABLE, &opt_hwpm_disable);
if (err != 0) {
tegra_hwpm_err(hwpm, "opt_hwpm_disable fuse read failed");
return err;
}
#define TEGRA_FUSE_FA_MODE 0x48U
err = tegra_hwpm_fuse_readl(hwpm, TEGRA_FUSE_FA_MODE, &fa_mode);
if (err != 0) {
tegra_hwpm_err(hwpm, "fa mode fuse read failed");
return err;
}
/*
* Configure global control register to disable PCFIFO interlock
* By writing to MSS_HUB_HUBC_CONFIG_0 register
*/
#define TEGRA_HUB_HUBC_CONFIG0_OFFSET 0x6244U
#define TEGRA_HUB_HUBC_PCFIFO_INTERLOCK_DISABLED 0x1U
err = tegra_hwpm_write_sticky_bits(hwpm, addr_map_mcb_base_r(),
TEGRA_HUB_HUBC_CONFIG0_OFFSET,
TEGRA_HUB_HUBC_PCFIFO_INTERLOCK_DISABLED);
hwpm_assert_print(hwpm, err == 0, return err,
"PCFIFO Interlock disable failed");
#define TEGRA_HWPM_GLOBAL_DISABLE_OFFSET 0x300CU
#define TEGRA_HWPM_GLOBAL_DISABLE_DISABLED 0x0U
err = tegra_hwpm_read_sticky_bits(hwpm, addr_map_pmc_misc_base_r(),
TEGRA_HWPM_GLOBAL_DISABLE_OFFSET, &hwpm_global_disable);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm global disable read failed");
return err;
}
/*
* Do not enable override if the FA mode fuse is set. The FA_MODE fuse
* enables all PERFMONs regardless of the fuse, sticky bit or secure
* register settings.
*/
if (fa_mode != 0U) {
tegra_hwpm_dbg(hwpm, hwpm_info,
"fa mode fuse enabled, no override required, enable HWPM");
return 0;
}
/* Override enable depends on opt_hwpm_disable and global hwpm disable */
if ((opt_hwpm_disable == 0U) &&
(hwpm_global_disable == TEGRA_HWPM_GLOBAL_DISABLE_DISABLED)) {
tegra_hwpm_dbg(hwpm, hwpm_info,
"OPT_HWPM_DISABLE fuse and HWPM global disable are clear, no override required");
return 0;
}
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
chip_ip = active_chip->chip_ips[idx];
if ((hwpm_global_disable !=
TEGRA_HWPM_GLOBAL_DISABLE_DISABLED) ||
(opt_hwpm_disable != 0U)) {
/*
* Either HWPM_GLOBAL_DISABLE or OPT_HWPM_DISABLE disables all
* Perfmons in SOC HWPM. Hence, check whether either of them is set.
*/
if ((chip_ip->dependent_fuse_mask &
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK) != 0U) {
/*
* Check dependent_fuse_mask to prevent RTR from being overridden.
*/
chip_ip->override_enable = true;
} else {
tegra_hwpm_dbg(hwpm, hwpm_info,
"IP %d not overridden", idx);
}
}
}
return 0;
}
int t264_hwpm_force_enable_ips(struct tegra_soc_hwpm *hwpm)
{
int ret = 0;
tegra_hwpm_fn(hwpm, " ");
/* Force enable MSS channel IP for AV+L/Q */
if (tegra_hwpm_is_hypervisor_mode()) {
/*
* MSS CHANNEL
* The MSS channel driver cannot implement the HWPM <-> IP interface in
* AV+L and AV+Q configs. Since MSS channel is part of both POR and
* non-POR IPs, this force enable is not limited by minimal config or
* force enable flags.
*/
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc0_base_r(),
T264_HWPM_IP_MSS_CHANNEL, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_MSS_CHANNEL force enable failed");
return ret;
}
#endif
} else {
#if defined(CONFIG_T264_HWPM_ALLOW_FORCE_ENABLE)
if (tegra_hwpm_is_platform_vsp()) {
/* Static IP instances as per VSP netlist */
}
if (tegra_hwpm_is_platform_silicon()) {
/* Static IP instances corresponding to silicon */
#if defined(CONFIG_T264_HWPM_IP_OCU)
if (hwpm->ip_config[TEGRA_HWPM_IP_MCF_OCU]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ocu_base_r(),
T264_HWPM_IP_OCU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_OCU force enable failed");
return ret;
}
}
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
if (hwpm->ip_config[TEGRA_HWPM_IP_UCF_PSW]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn0_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn1_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn2_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn3_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_UCF_PSW */
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
if (hwpm->ip_config[TEGRA_HWPM_IP_UCF_CSW]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_csw0_base_r(),
T264_HWPM_IP_UCF_CSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_CSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_csw1_base_r(),
T264_HWPM_IP_UCF_CSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_CSW force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_UCF_CSW */
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
if (hwpm->ip_config[TEGRA_HWPM_IP_UCF_MSW]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc0_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc2_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc4_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc6_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc8_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc10_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc12_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc14_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_UCF_MSW */
#if defined(CONFIG_T264_HWPM_IP_CPU)
if (hwpm->ip_config[TEGRA_HWPM_IP_CPU]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore0_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore1_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore2_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore3_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore4_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore5_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore6_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore7_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore8_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore9_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore10_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore11_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore12_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore13_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_CPU */
}
#endif /* CONFIG_T264_HWPM_ALLOW_FORCE_ENABLE */
}
return ret;
}
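/*
* Note: the repeated per-instance calls in t264_hwpm_force_enable_ips()
* above could also be table driven. Illustrative sketch only (not part of
* the driver); the base addresses are the ones used above and ARRAY_SIZE
* (or an equivalent helper) is assumed to be available:
*
*   const u64 cpu_bases[] = {
*       addr_map_cpucore0_base_r(), addr_map_cpucore1_base_r(),
*       addr_map_cpucore2_base_r(), addr_map_cpucore3_base_r(),
*       addr_map_cpucore4_base_r(), addr_map_cpucore5_base_r(),
*       addr_map_cpucore6_base_r(), addr_map_cpucore7_base_r(),
*       addr_map_cpucore8_base_r(), addr_map_cpucore9_base_r(),
*       addr_map_cpucore10_base_r(), addr_map_cpucore11_base_r(),
*       addr_map_cpucore12_base_r(), addr_map_cpucore13_base_r(),
*   };
*   u32 i;
*
*   for (i = 0U; i < ARRAY_SIZE(cpu_bases); i++) {
*       ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL, cpu_bases[i],
*           T264_HWPM_IP_CPU, true);
*       if (ret != 0) {
*           tegra_hwpm_err(hwpm,
*               "T264_HWPM_IP_CPU force enable failed");
*           return ret;
*       }
*   }
*/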


@@ -0,0 +1,338 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_mem_mgmt.h>
#include <tegra_hwpm_timers.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/ip/rtr/t264_rtr.h>
#include <hal/t264/hw/t264_pmasys_soc_hwpm.h>
#include <hal/t264/hw/t264_pmmsys_soc_hwpm.h>
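/*
* PMA stream-buffer (memory management) programming for T264. Every routine
* below resolves the PMA perfmux via get_rtr_pma_perfmux_ptr() and operates
* on channel instance (0, 0) of the pmasys channel register group.
*/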
int t264_hwpm_disable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reset_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL, &pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Reset OUTBASE register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_outbase_ptr_m(),
pmasys_channel_outbase_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0, 0), reset_val);
/* Reset OUTBASEUPPER register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_outbaseupper_ptr_m(),
pmasys_channel_outbaseupper_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0, 0), reset_val);
/* Reset OUTSIZE register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_outsize_numbytes_m(),
pmasys_channel_outsize_numbytes_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0, 0), reset_val);
/* Reset MEM_BYTES_ADDR register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_mem_bytes_addr_ptr_m(),
pmasys_channel_mem_bytes_addr_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0, 0), reset_val);
/* Reset MEM_HEAD register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_mem_head_ptr_m(),
pmasys_channel_mem_head_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), reset_val);
/* Reset MEM_BYTES register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_mem_bytes_numbytes_m(),
pmasys_channel_mem_bytes_numbytes_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_r(0, 0), reset_val);
/* Reset MEMBUF_STATUS */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_control_user_membuf_clear_status_m(),
pmasys_channel_control_user_membuf_clear_status_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), reset_val);
return 0;
}
int t264_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 outbase_lo = 0U;
u32 outbase_hi = 0U;
u32 outsize = 0U;
u32 mem_bytes_addr = 0U;
u32 membuf_status = 0U;
u32 mem_head = 0U;
u32 bpc_mem_block = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_hwpm_mem_mgmt *mem_mgmt = hwpm->mem_mgmt;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL, &pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
outbase_lo = mem_mgmt->stream_buf_va & pmasys_channel_outbase_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0, 0), outbase_lo);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "OUTBASE = 0x%x", outbase_lo);
outbase_hi = (mem_mgmt->stream_buf_va >> 32) &
pmasys_channel_outbaseupper_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0, 0), outbase_hi);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "OUTBASEUPPER = 0x%x", outbase_hi);
outsize = mem_mgmt->stream_buf_size &
pmasys_channel_outsize_numbytes_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0, 0), outsize);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "OUTSIZE = 0x%x", outsize);
mem_bytes_addr = mem_mgmt->mem_bytes_buf_va &
pmasys_channel_mem_bytes_addr_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0, 0), mem_bytes_addr);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream,
"MEM_BYTES_ADDR = 0x%x", mem_bytes_addr);
/* Update MEM_HEAD to OUTBASE */
mem_head = mem_mgmt->stream_buf_va & pmasys_channel_mem_head_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), mem_head);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "MEM_HEAD = 0x%x", mem_head);
/* Reset MEMBUF_STATUS */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &membuf_status);
membuf_status = set_field(membuf_status,
pmasys_channel_control_user_membuf_clear_status_m(),
pmasys_channel_control_user_membuf_clear_status_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), membuf_status);
/* Update CBLOCK_BPC_MEM_BLOCK to OUTBASE to ensure BPC is bound */
bpc_mem_block = mem_mgmt->stream_buf_va &
pmasys_cblock_bpc_mem_block_base_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_cblock_bpc_mem_block_r(0), bpc_mem_block);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "bpc_mem_block = 0x%x",
bpc_mem_block);
/* Mark mem block valid */
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_cblock_bpc_mem_blockupper_r(0),
pmasys_cblock_bpc_mem_blockupper_valid_f(
pmasys_cblock_bpc_mem_blockupper_valid_true_v()));
return 0;
}
int t264_hwpm_invalidate_mem_config(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_cblock_bpc_mem_blockupper_r(0),
pmasys_cblock_bpc_mem_blockupper_valid_f(
pmasys_cblock_bpc_mem_blockupper_valid_false_v()));
return 0;
}
int t264_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
u32 *mem_bytes_kernel_u32 =
(u32 *)(hwpm->mem_mgmt->mem_bytes_kernel);
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
*mem_bytes_kernel_u32 = TEGRA_HWPM_MEM_BYTES_INVALID;
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &reg_val);
reg_val = set_field(reg_val,
pmasys_channel_control_user_update_bytes_m(),
pmasys_channel_control_user_update_bytes_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), reg_val);
return 0;
}
int t264_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA streaming */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_config_user_record_stream_m(),
pmasys_command_slice_trigger_config_user_record_stream_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &reg_val);
reg_val = set_field(reg_val,
pmasys_channel_config_user_stream_m(),
pmasys_channel_config_user_stream_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), reg_val);
return 0;
}
int t264_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
if (mem_bump > (u64)U32_MAX) {
tegra_hwpm_err(hwpm, "mem_bump is out of bounds");
return -EINVAL;
}
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bump_r(0, 0), mem_bump);
return 0;
}
int t264_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), &reg_val);
*mem_head_ptr = (u64)reg_val;
return err;
}
int t264_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status)
{
int err = 0;
u32 reg_val, field_val;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_status_r(0, 0), &reg_val);
field_val = pmasys_channel_status_membuf_status_v(
reg_val);
*overflow_status = (field_val ==
pmasys_channel_status_membuf_status_overflowed_v()) ?
TEGRA_HWPM_MEMBUF_OVERFLOWED : TEGRA_HWPM_MEMBUF_NOT_OVERFLOWED;
return err;
}


@@ -0,0 +1,110 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_PERFMON_DEVICE_INDEX_H
#define T264_HWPM_PERFMON_DEVICE_INDEX_H
enum t264_hwpm_perfmon_device_index {
T264_SYSTEM_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_HWPM_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE0_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE1_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE2_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE3_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE4_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE5_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE6_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE7_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE8_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE9_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE10_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE11_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE12_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE13_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW2_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW4_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW5_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW6_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW7_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW8_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW9_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW10_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW11_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW12_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW13_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW14_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW15_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA3_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB3_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC3_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_CSW0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_CSW1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW2_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSS_HUB2_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU2_PERFMON_DEVICE_NODE_INDEX,
T264_VI0_PERFMON_DEVICE_NODE_INDEX,
T264_VI1_PERFMON_DEVICE_NODE_INDEX,
T264_ISP0_PERFMON_DEVICE_NODE_INDEX,
T264_ISP1_PERFMON_DEVICE_NODE_INDEX,
T264_VICA0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAC0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAV0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAV1_PERFMON_DEVICE_NODE_INDEX,
T264_VISION_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_VISION_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
T264_PVAP0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAP1_PERFMON_DEVICE_NODE_INDEX,
T264_DISP_USB_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_DISP_USB_TCU0_PERFMON_DEVICE_NODE_INDEX,
T264_OCU0_PERFMON_DEVICE_NODE_INDEX,
T264_UPHY0_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_UPHY0_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
T264_PMA_DEVICE_NODE_INDEX,
T264_RTR_DEVICE_NODE_INDEX
};
#endif


@@ -0,0 +1,241 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "t264_regops_allowlist.h"
struct allowlist t264_perfmon_alist[67] = {
{0x00000000, true},
{0x00000004, true},
{0x00000008, true},
{0x0000000c, true},
{0x00000010, true},
{0x00000014, true},
{0x00000020, true},
{0x00000024, true},
{0x00000028, true},
{0x0000002c, true},
{0x00000030, true},
{0x00000034, true},
{0x00000040, true},
{0x00000044, true},
{0x00000048, true},
{0x0000004c, true},
{0x00000050, true},
{0x00000054, true},
{0x00000058, true},
{0x0000005c, true},
{0x00000060, true},
{0x00000064, true},
{0x00000068, true},
{0x0000006c, true},
{0x00000070, true},
{0x00000074, true},
{0x00000078, true},
{0x0000007c, true},
{0x00000080, true},
{0x00000084, true},
{0x00000088, true},
{0x0000008c, true},
{0x00000090, true},
{0x00000098, true},
{0x0000009c, true},
{0x000000a0, true},
{0x000000a4, true},
{0x000000a8, true},
{0x000000ac, true},
{0x000000b0, true},
{0x000000b4, true},
{0x000000b8, true},
{0x000000bc, true},
{0x000000c0, true},
{0x000000c4, true},
{0x000000c8, true},
{0x000000cc, true},
{0x000000d0, true},
{0x000000d4, true},
{0x000000d8, true},
{0x000000dc, true},
{0x000000e0, true},
{0x000000e4, true},
{0x000000e8, true},
{0x000000ec, true},
{0x000000f8, true},
{0x000000fc, true},
{0x00000100, true},
{0x00000108, true},
{0x00000110, true},
{0x00000114, true},
{0x00000118, true},
{0x0000011c, true},
{0x00000120, true},
{0x00000124, true},
{0x00000128, true},
{0x00000130, true},
};
struct allowlist t264_pma_res_cmd_slice_rtr_alist[41] = {
{0x00000858, false},
{0x00000a00, false},
{0x00000a10, false},
{0x00000a14, false},
{0x00000a20, false},
{0x00000a24, false},
{0x00000a28, false},
{0x00000a2c, false},
{0x00000a30, false},
{0x00000a34, false},
{0x00000a38, false},
{0x00000a3c, false},
{0x00001104, false},
{0x00001110, false},
{0x00001114, false},
{0x0000111c, false},
{0x00001120, false},
{0x00001124, false},
{0x00001128, false},
{0x0000112c, false},
{0x00001130, false},
{0x00001134, false},
{0x00001138, false},
{0x0000113c, false},
{0x00001140, false},
{0x00001144, false},
{0x00001148, false},
{0x0000114c, false},
{0x00001150, false},
{0x00001154, false},
{0x00001158, false},
{0x0000115c, false},
{0x00001160, false},
{0x00001164, false},
{0x00001168, false},
{0x0000116c, false},
{0x00001170, false},
{0x00001174, false},
{0x00001178, false},
{0x0000117c, false},
{0x00000818, false},
};
struct allowlist t264_pma_res_pma_alist[1] = {
{0x00000858, true},
};
struct allowlist t264_rtr_alist[2] = {
{0x00000080, false},
{0x000000a4, false},
};
struct allowlist t264_vic_alist[8] = {
{0x00001088, true},
{0x000010a8, true},
{0x0000cb94, true},
{0x0000cb80, true},
{0x0000cb84, true},
{0x0000cb88, true},
{0x0000cb8c, true},
{0x0000cb90, true},
};
struct allowlist t264_pva_pm_alist[10] = {
{0x0000800c, true},
{0x00008010, true},
{0x00008014, true},
{0x00008018, true},
{0x0000801c, true},
{0x00008020, true},
{0x00008024, true},
{0x00008028, true},
{0x0000802c, true},
{0x00008030, true},
};
struct allowlist t264_mss_channel_alist[2] = {
{0x00008914, true},
{0x00008918, true},
};
struct allowlist t264_mss_hub_alist[3] = {
{0x00006f3c, true},
{0x00006f34, true},
{0x00006f38, true},
};
struct allowlist t264_ocu_alist[1] = {
{0x00000058, true},
};
struct allowlist t264_smmu_alist[1] = {
{0x00005000, true},
};
struct allowlist t264_ucf_msw_cbridge_alist[1] = {
{0x0000891c, true},
};
struct allowlist t264_ucf_msn_msw0_alist[2] = {
{0x00000000, true},
{0x00000008, true},
};
struct allowlist t264_ucf_msn_msw1_alist[2] = {
{0x00000010, true},
{0x00000018, true},
};
struct allowlist t264_ucf_msw_slc_alist[1] = {
{0x00000000, true},
};
struct allowlist t264_ucf_psn_psw_alist[2] = {
{0x00000000, true},
{0x00000008, true},
};
struct allowlist t264_ucf_csw_alist[2] = {
{0x00000000, true},
{0x00000008, true},
};
struct allowlist t264_cpucore_alist[4] = {
{0x00000000, true},
{0x00000008, true},
{0x00000010, true},
{0x00000018, true},
};
struct allowlist t264_vi_alist[5] = {
{0x00030008, true},
{0x0003000c, true},
{0x00030010, true},
{0x00030014, true},
{0x00030018, true},
};
struct allowlist t264_isp_alist[5] = {
{0x00030008, true},
{0x0003000c, true},
{0x00030010, true},
{0x00030014, true},
{0x00030018, true},
};


@@ -0,0 +1,49 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_REGOPS_ALLOWLIST_H
#define T264_HWPM_REGOPS_ALLOWLIST_H
#include <tegra_hwpm.h>
extern struct allowlist t264_perfmon_alist[67];
extern struct allowlist t264_pma_res_cmd_slice_rtr_alist[41];
extern struct allowlist t264_pma_res_pma_alist[1];
extern struct allowlist t264_rtr_alist[2];
extern struct allowlist t264_vic_alist[8];
extern struct allowlist t264_pva_pm_alist[10];
extern struct allowlist t264_mss_channel_alist[2];
extern struct allowlist t264_mss_hub_alist[3];
extern struct allowlist t264_ocu_alist[1];
extern struct allowlist t264_smmu_alist[1];
extern struct allowlist t264_ucf_msw_cbridge_alist[1];
extern struct allowlist t264_ucf_msn_msw0_alist[2];
extern struct allowlist t264_ucf_msn_msw1_alist[2];
extern struct allowlist t264_ucf_msw_slc_alist[1];
extern struct allowlist t264_ucf_psn_psw_alist[2];
extern struct allowlist t264_ucf_csw_alist[2];
extern struct allowlist t264_cpucore_alist[4];
extern struct allowlist t264_vi_alist[5];
extern struct allowlist t264_isp_alist[5];
#endif /* T264_HWPM_REGOPS_ALLOWLIST_H */


@@ -0,0 +1,214 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_static_analysis.h>
#include <tegra_hwpm_timers.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/hw/t264_pmasys_soc_hwpm.h>
#include <hal/t264/hw/t264_pmmsys_soc_hwpm.h>
#define TEGRA_HWPM_CBLOCK_CHANNEL_TO_CMD_SLICE(cblock, channel) \
(((cblock) * pmmsys_num_channels_per_cblock_v()) + (channel))
#define TEGRA_HWPM_MAX_SUPPORTED_DGS 256U
#define TEGRA_HWPM_NUM_DG_STATUS_PER_REG \
(TEGRA_HWPM_MAX_SUPPORTED_DGS / \
pmmsys_router_user_dgmap_status_secure__size_1_v())
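/*
* Router DG-map status is spread across multiple registers: each register
* tracks TEGRA_HWPM_NUM_DG_STATUS_PER_REG device groups, so for a given
* dg_idx the register index is dg_idx / TEGRA_HWPM_NUM_DG_STATUS_PER_REG
* and the bit position is dg_idx % TEGRA_HWPM_NUM_DG_STATUS_PER_REG (see
* the status polling in the functions below).
*/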
int t264_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon)
{
u32 reg_val;
u32 cblock = 0U;
u32 channel = 0U;
u32 dg_idx = 0U;
u32 config_dgmap = 0U;
u32 dgmap_status_reg_idx = 0U, dgmap_status_reg_dgidx = 0U;
u32 retries = 10U;
u32 sleep_msecs = 10U;
int err = 0;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
NULL);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Enable */
tegra_hwpm_dbg(hwpm, hwpm_dbg_bind,
"Enabling PERFMON(0x%llx - 0x%llx)",
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa);
/*
* HWPM readl function expects register address relative to
* perfmon group base address.
* Hence use enginestatus offset + perfmon base_pa as the register
*/
tegra_hwpm_readl(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_enginestatus_o(),
perfmon->base_pa), &reg_val);
reg_val = set_field(reg_val, pmmsys_enginestatus_enable_m(),
pmmsys_enginestatus_enable_out_f());
tegra_hwpm_writel(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_enginestatus_o(),
perfmon->base_pa), reg_val);
/*
* HWPM readl function expects register address relative to
* perfmon group base address.
* Hence use secure_config offset + perfmon base_pa as the register
* The register also contains dg_idx programmed by HW that will be used
* to poll dg mapping in router.
*/
tegra_hwpm_readl(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
perfmon->base_pa), &config_dgmap);
dg_idx = pmmsys_secure_config_dg_idx_v(config_dgmap);
/* Configure DG map for this perfmon */
config_dgmap = set_field(config_dgmap,
pmmsys_secure_config_cmd_slice_id_m() |
pmmsys_secure_config_channel_id_m() |
pmmsys_secure_config_cblock_id_m() |
pmmsys_secure_config_mapped_m() |
pmmsys_secure_config_use_prog_dg_idx_m() |
pmmsys_secure_config_command_pkt_decoder_m(),
pmmsys_secure_config_cmd_slice_id_f(
TEGRA_HWPM_CBLOCK_CHANNEL_TO_CMD_SLICE(
cblock, channel)) |
pmmsys_secure_config_channel_id_f(channel) |
pmmsys_secure_config_cblock_id_f(cblock) |
pmmsys_secure_config_mapped_true_f() |
pmmsys_secure_config_use_prog_dg_idx_false_f() |
pmmsys_secure_config_command_pkt_decoder_enable_f());
tegra_hwpm_writel(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
perfmon->base_pa), config_dgmap);
/* Make sure that the DG map status is propagated to the router */
dgmap_status_reg_idx = dg_idx / TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
dgmap_status_reg_dgidx = dg_idx % TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_router_user_dgmap_status_secure_r(dgmap_status_reg_idx),
&reg_val,
(((reg_val >> dgmap_status_reg_dgidx) &
pmmsys_router_user_dgmap_status_secure_dg_s()) !=
pmmsys_router_user_dgmap_status_secure_dg_mapped_v()),
"Perfmon(0x%llx - 0x%llx) dgmap %d status update timed out",
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa, dg_idx);
return 0;
}
int t264_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon)
{
u32 reg_val;
u32 dg_idx = 0U;
u32 config_dgmap = 0U;
u32 dgmap_status_reg_idx = 0U, dgmap_status_reg_dgidx = 0U;
u32 retries = 10U;
u32 sleep_msecs = 10U;
int err = 0;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
if (perfmon->element_type == HWPM_ELEMENT_PERFMUX) {
/*
* Since HWPM elements use perfmon functions,
* skip disabling HWPM PERFMUX elements
*/
return 0;
}
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
NULL);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable */
tegra_hwpm_dbg(hwpm, hwpm_dbg_release_resource,
"Disabling PERFMON(0x%llx - 0x%llx)",
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa);
/*
* The HWPM readl function expects a register address relative to the
* perfmon group base address.
* Hence, use the sys0_control offset + perfmon base_pa as the register
* address.
*/
tegra_hwpm_readl(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_control_o(),
perfmon->base_pa), &reg_val);
reg_val = set_field(reg_val, pmmsys_control_mode_m(),
pmmsys_control_mode_disable_f());
tegra_hwpm_writel(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_control_o(),
perfmon->base_pa), reg_val);
/*
* The HWPM readl function expects a register address relative to the
* perfmon group base address.
* Hence, use the secure_config offset + perfmon base_pa as the register
* address.
* The register also contains the dg_idx programmed by HW, which is used
* to poll the DG mapping in the router.
*/
tegra_hwpm_readl(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
perfmon->base_pa), &config_dgmap);
dg_idx = pmmsys_secure_config_dg_idx_v(config_dgmap);
/* Reset DG map for this perfmon */
config_dgmap = set_field(config_dgmap,
pmmsys_secure_config_mapped_m(),
pmmsys_secure_config_mapped_false_f());
tegra_hwpm_writel(hwpm, perfmon,
tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
perfmon->base_pa), config_dgmap);
/* Make sure that the DG map status is propagated to the router */
dgmap_status_reg_idx = dg_idx / TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
dgmap_status_reg_dgidx = dg_idx % TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_router_user_dgmap_status_secure_r(dgmap_status_reg_idx),
&reg_val,
(((reg_val >> dgmap_status_reg_dgidx) &
pmmsys_router_user_dgmap_status_secure_dg_s()) !=
pmmsys_router_user_dgmap_status_secure_dg_not_mapped_v()),
"Perfmon(0x%llx - 0x%llx) dgmap %d status update timed out",
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa, dg_idx);
return 0;
}
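
Both the enable and disable paths finish by polling the router's secure DG-map status register until the new mapped state is visible. A standalone sketch of that index arithmetic and bounded poll, assuming 32 status bits per register (the real count comes from TEGRA_HWPM_NUM_DG_STATUS_PER_REG, which is not shown here) and using a dummy read_dgmap_status() in place of the real register access:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Assumption: 32 DG status bits packed per 32-bit status register. */
#define DG_STATUS_BITS_PER_REG 32U

/* Hypothetical stand-in for the real router register read. */
static uint32_t read_dgmap_status(uint32_t reg_idx)
{
        (void)reg_idx;
        return 0xffffffffU;     /* pretend every DG reports "mapped" */
}

/* Bounded poll: wait until dg_idx reports the expected mapped state. */
static int poll_dg_mapped(uint32_t dg_idx, uint32_t retries,
                          uint32_t sleep_msecs)
{
        uint32_t reg_idx = dg_idx / DG_STATUS_BITS_PER_REG;
        uint32_t bit_idx = dg_idx % DG_STATUS_BITS_PER_REG;

        while (retries-- > 0U) {
                uint32_t val = read_dgmap_status(reg_idx);

                if (((val >> bit_idx) & 0x1U) == 0x1U)
                        return 0;               /* state propagated */
                usleep(sleep_msecs * 1000U);    /* back off, then retry */
        }
        fprintf(stderr, "dgmap %u status update timed out\n", dg_idx);
        return -1;
}

int main(void)
{
        return poll_dg_mapped(37U, 10U, 10U);   /* dg 37 -> reg 1, bit 5 */
}
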


@@ -0,0 +1,578 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef TH500_ADDR_MAP_SOC_HWPM_H
#define TH500_ADDR_MAP_SOC_HWPM_H
#define addr_map_rpg_pm_base_r() (0x13e00000U)
#define addr_map_rpg_pm_limit_r() (0x13eeffffU)
#define addr_map_rpg_pm_sys0_base_r() (0x13e1e000U)
#define addr_map_rpg_pm_sys0_limit_r() (0x13e1efffU)
#define addr_map_pma_base_r() (0x13ef0000U)
#define addr_map_pma_limit_r() (0x13ef1fffU)
#define addr_map_rtr_base_r() (0x13ef2000U)
#define addr_map_rtr_limit_r() (0x13ef2fffU)
#define addr_map_rpg_pm_msschannel0_base_r() (0x13e1f000U)
#define addr_map_rpg_pm_msschannel0_limit_r() (0x13e1ffffU)
#define addr_map_rpg_pm_msschannel1_base_r() (0x13e20000U)
#define addr_map_rpg_pm_msschannel1_limit_r() (0x13e20fffU)
#define addr_map_rpg_pm_msschannel2_base_r() (0x13e21000U)
#define addr_map_rpg_pm_msschannel2_limit_r() (0x13e21fffU)
#define addr_map_rpg_pm_msschannel3_base_r() (0x13e22000U)
#define addr_map_rpg_pm_msschannel3_limit_r() (0x13e22fffU)
#define addr_map_rpg_pm_msschannel4_base_r() (0x13e23000U)
#define addr_map_rpg_pm_msschannel4_limit_r() (0x13e23fffU)
#define addr_map_rpg_pm_msschannel5_base_r() (0x13e24000U)
#define addr_map_rpg_pm_msschannel5_limit_r() (0x13e24fffU)
#define addr_map_rpg_pm_msschannel6_base_r() (0x13e25000U)
#define addr_map_rpg_pm_msschannel6_limit_r() (0x13e25fffU)
#define addr_map_rpg_pm_msschannel7_base_r() (0x13e26000U)
#define addr_map_rpg_pm_msschannel7_limit_r() (0x13e26fffU)
#define addr_map_rpg_pm_msschannel8_base_r() (0x13e27000U)
#define addr_map_rpg_pm_msschannel8_limit_r() (0x13e27fffU)
#define addr_map_rpg_pm_msschannel9_base_r() (0x13e28000U)
#define addr_map_rpg_pm_msschannel9_limit_r() (0x13e28fffU)
#define addr_map_rpg_pm_msschannel10_base_r() (0x13e29000U)
#define addr_map_rpg_pm_msschannel10_limit_r() (0x13e29fffU)
#define addr_map_rpg_pm_msschannel11_base_r() (0x13e2a000U)
#define addr_map_rpg_pm_msschannel11_limit_r() (0x13e2afffU)
#define addr_map_rpg_pm_msschannel12_base_r() (0x13e2b000U)
#define addr_map_rpg_pm_msschannel12_limit_r() (0x13e2bfffU)
#define addr_map_rpg_pm_msschannel13_base_r() (0x13e2c000U)
#define addr_map_rpg_pm_msschannel13_limit_r() (0x13e2cfffU)
#define addr_map_rpg_pm_msschannel14_base_r() (0x13e2d000U)
#define addr_map_rpg_pm_msschannel14_limit_r() (0x13e2dfffU)
#define addr_map_rpg_pm_msschannel15_base_r() (0x13e2e000U)
#define addr_map_rpg_pm_msschannel15_limit_r() (0x13e2efffU)
#define addr_map_rpg_pm_msschannel16_base_r() (0x13e2f000U)
#define addr_map_rpg_pm_msschannel16_limit_r() (0x13e2ffffU)
#define addr_map_rpg_pm_msschannel17_base_r() (0x13e30000U)
#define addr_map_rpg_pm_msschannel17_limit_r() (0x13e30fffU)
#define addr_map_rpg_pm_msschannel18_base_r() (0x13e31000U)
#define addr_map_rpg_pm_msschannel18_limit_r() (0x13e31fffU)
#define addr_map_rpg_pm_msschannel19_base_r() (0x13e32000U)
#define addr_map_rpg_pm_msschannel19_limit_r() (0x13e32fffU)
#define addr_map_rpg_pm_msschannel20_base_r() (0x13e33000U)
#define addr_map_rpg_pm_msschannel20_limit_r() (0x13e33fffU)
#define addr_map_rpg_pm_msschannel21_base_r() (0x13e34000U)
#define addr_map_rpg_pm_msschannel21_limit_r() (0x13e34fffU)
#define addr_map_rpg_pm_msschannel22_base_r() (0x13e35000U)
#define addr_map_rpg_pm_msschannel22_limit_r() (0x13e35fffU)
#define addr_map_rpg_pm_msschannel23_base_r() (0x13e36000U)
#define addr_map_rpg_pm_msschannel23_limit_r() (0x13e36fffU)
#define addr_map_rpg_pm_msschannel24_base_r() (0x13e37000U)
#define addr_map_rpg_pm_msschannel24_limit_r() (0x13e37fffU)
#define addr_map_rpg_pm_msschannel25_base_r() (0x13e38000U)
#define addr_map_rpg_pm_msschannel25_limit_r() (0x13e38fffU)
#define addr_map_rpg_pm_msschannel26_base_r() (0x13e39000U)
#define addr_map_rpg_pm_msschannel26_limit_r() (0x13e39fffU)
#define addr_map_rpg_pm_msschannel27_base_r() (0x13e3a000U)
#define addr_map_rpg_pm_msschannel27_limit_r() (0x13e3afffU)
#define addr_map_rpg_pm_msschannel28_base_r() (0x13e3b000U)
#define addr_map_rpg_pm_msschannel28_limit_r() (0x13e3bfffU)
#define addr_map_rpg_pm_msschannel29_base_r() (0x13e3c000U)
#define addr_map_rpg_pm_msschannel29_limit_r() (0x13e3cfffU)
#define addr_map_rpg_pm_msschannel30_base_r() (0x13e3d000U)
#define addr_map_rpg_pm_msschannel30_limit_r() (0x13e3dfffU)
#define addr_map_rpg_pm_msschannel31_base_r() (0x13e3e000U)
#define addr_map_rpg_pm_msschannel31_limit_r() (0x13e3efffU)
#define addr_map_mcb_base_r() (0x04020000U)
#define addr_map_mcb_limit_r() (0x0403ffffU)
#define addr_map_mc0_base_r() (0x04040000U)
#define addr_map_mc0_limit_r() (0x0405ffffU)
#define addr_map_mc1_base_r() (0x04060000U)
#define addr_map_mc1_limit_r() (0x0407ffffU)
#define addr_map_mc2_base_r() (0x04080000U)
#define addr_map_mc2_limit_r() (0x0409ffffU)
#define addr_map_mc3_base_r() (0x040a0000U)
#define addr_map_mc3_limit_r() (0x040bffffU)
#define addr_map_mc4_base_r() (0x040c0000U)
#define addr_map_mc4_limit_r() (0x040dffffU)
#define addr_map_mc5_base_r() (0x040e0000U)
#define addr_map_mc5_limit_r() (0x040fffffU)
#define addr_map_mc6_base_r() (0x04100000U)
#define addr_map_mc6_limit_r() (0x0411ffffU)
#define addr_map_mc7_base_r() (0x04120000U)
#define addr_map_mc7_limit_r() (0x0413ffffU)
#define addr_map_mc8_base_r() (0x04140000U)
#define addr_map_mc8_limit_r() (0x0415ffffU)
#define addr_map_mc9_base_r() (0x04160000U)
#define addr_map_mc9_limit_r() (0x0417ffffU)
#define addr_map_mc10_base_r() (0x04180000U)
#define addr_map_mc10_limit_r() (0x0419ffffU)
#define addr_map_mc11_base_r() (0x041a0000U)
#define addr_map_mc11_limit_r() (0x041bffffU)
#define addr_map_mc12_base_r() (0x041c0000U)
#define addr_map_mc12_limit_r() (0x041dffffU)
#define addr_map_mc13_base_r() (0x041e0000U)
#define addr_map_mc13_limit_r() (0x041fffffU)
#define addr_map_mc14_base_r() (0x04200000U)
#define addr_map_mc14_limit_r() (0x0421ffffU)
#define addr_map_mc15_base_r() (0x04220000U)
#define addr_map_mc15_limit_r() (0x0423ffffU)
#define addr_map_mc16_base_r() (0x04240000U)
#define addr_map_mc16_limit_r() (0x0425ffffU)
#define addr_map_mc17_base_r() (0x04260000U)
#define addr_map_mc17_limit_r() (0x0427ffffU)
#define addr_map_mc18_base_r() (0x04280000U)
#define addr_map_mc18_limit_r() (0x0429ffffU)
#define addr_map_mc19_base_r() (0x042a0000U)
#define addr_map_mc19_limit_r() (0x042bffffU)
#define addr_map_mc20_base_r() (0x042c0000U)
#define addr_map_mc20_limit_r() (0x042dffffU)
#define addr_map_mc21_base_r() (0x042e0000U)
#define addr_map_mc21_limit_r() (0x042fffffU)
#define addr_map_mc22_base_r() (0x04300000U)
#define addr_map_mc22_limit_r() (0x0431ffffU)
#define addr_map_mc23_base_r() (0x04320000U)
#define addr_map_mc23_limit_r() (0x0433ffffU)
#define addr_map_mc24_base_r() (0x04340000U)
#define addr_map_mc24_limit_r() (0x0435ffffU)
#define addr_map_mc25_base_r() (0x04360000U)
#define addr_map_mc25_limit_r() (0x0437ffffU)
#define addr_map_mc26_base_r() (0x04380000U)
#define addr_map_mc26_limit_r() (0x0439ffffU)
#define addr_map_mc27_base_r() (0x043a0000U)
#define addr_map_mc27_limit_r() (0x043bffffU)
#define addr_map_mc28_base_r() (0x043c0000U)
#define addr_map_mc28_limit_r() (0x043dffffU)
#define addr_map_mc29_base_r() (0x043e0000U)
#define addr_map_mc29_limit_r() (0x043fffffU)
#define addr_map_mc30_base_r() (0x04400000U)
#define addr_map_mc30_limit_r() (0x0441ffffU)
#define addr_map_mc31_base_r() (0x04420000U)
#define addr_map_mc31_limit_r() (0x0443ffffU)
#define addr_map_rpg_pm_ltc0s0_base_r() (0x13e3f000U)
#define addr_map_rpg_pm_ltc0s0_limit_r() (0x13e3ffffU)
#define addr_map_rpg_pm_ltc0s1_base_r() (0x13e40000U)
#define addr_map_rpg_pm_ltc0s1_limit_r() (0x13e40fffU)
#define addr_map_rpg_pm_ltc1s0_base_r() (0x13e41000U)
#define addr_map_rpg_pm_ltc1s0_limit_r() (0x13e41fffU)
#define addr_map_rpg_pm_ltc1s1_base_r() (0x13e42000U)
#define addr_map_rpg_pm_ltc1s1_limit_r() (0x13e42fffU)
#define addr_map_rpg_pm_ltc2s0_base_r() (0x13e43000U)
#define addr_map_rpg_pm_ltc2s0_limit_r() (0x13e43fffU)
#define addr_map_rpg_pm_ltc2s1_base_r() (0x13e44000U)
#define addr_map_rpg_pm_ltc2s1_limit_r() (0x13e44fffU)
#define addr_map_rpg_pm_ltc3s0_base_r() (0x13e45000U)
#define addr_map_rpg_pm_ltc3s0_limit_r() (0x13e45fffU)
#define addr_map_rpg_pm_ltc3s1_base_r() (0x13e46000U)
#define addr_map_rpg_pm_ltc3s1_limit_r() (0x13e46fffU)
#define addr_map_rpg_pm_ltc4s0_base_r() (0x13e47000U)
#define addr_map_rpg_pm_ltc4s0_limit_r() (0x13e47fffU)
#define addr_map_rpg_pm_ltc4s1_base_r() (0x13e48000U)
#define addr_map_rpg_pm_ltc4s1_limit_r() (0x13e48fffU)
#define addr_map_rpg_pm_ltc5s0_base_r() (0x13e49000U)
#define addr_map_rpg_pm_ltc5s0_limit_r() (0x13e49fffU)
#define addr_map_rpg_pm_ltc5s1_base_r() (0x13e4a000U)
#define addr_map_rpg_pm_ltc5s1_limit_r() (0x13e4afffU)
#define addr_map_rpg_pm_ltc6s0_base_r() (0x13e4b000U)
#define addr_map_rpg_pm_ltc6s0_limit_r() (0x13e4bfffU)
#define addr_map_rpg_pm_ltc6s1_base_r() (0x13e4c000U)
#define addr_map_rpg_pm_ltc6s1_limit_r() (0x13e4cfffU)
#define addr_map_rpg_pm_ltc7s0_base_r() (0x13e4d000U)
#define addr_map_rpg_pm_ltc7s0_limit_r() (0x13e4dfffU)
#define addr_map_rpg_pm_ltc7s1_base_r() (0x13e4e000U)
#define addr_map_rpg_pm_ltc7s1_limit_r() (0x13e4efffU)
#define addr_map_ltc0_base_r() (0x04e10000U)
#define addr_map_ltc0_limit_r() (0x04e1ffffU)
#define addr_map_ltc1_base_r() (0x04e20000U)
#define addr_map_ltc1_limit_r() (0x04e2ffffU)
#define addr_map_ltc2_base_r() (0x04e30000U)
#define addr_map_ltc2_limit_r() (0x04e3ffffU)
#define addr_map_ltc3_base_r() (0x04e40000U)
#define addr_map_ltc3_limit_r() (0x04e4ffffU)
#define addr_map_ltc4_base_r() (0x04e50000U)
#define addr_map_ltc4_limit_r() (0x04e5ffffU)
#define addr_map_ltc5_base_r() (0x04e60000U)
#define addr_map_ltc5_limit_r() (0x04e6ffffU)
#define addr_map_ltc6_base_r() (0x04e70000U)
#define addr_map_ltc6_limit_r() (0x04e7ffffU)
#define addr_map_ltc7_base_r() (0x04e80000U)
#define addr_map_ltc7_limit_r() (0x04e8ffffU)
#define addr_map_rpg_pm_mcfcore0_base_r() (0x13e4f000U)
#define addr_map_rpg_pm_mcfcore0_limit_r() (0x13e4ffffU)
#define addr_map_rpg_pm_mcfcore1_base_r() (0x13e50000U)
#define addr_map_rpg_pm_mcfcore1_limit_r() (0x13e50fffU)
#define addr_map_rpg_pm_mcfcore2_base_r() (0x13e51000U)
#define addr_map_rpg_pm_mcfcore2_limit_r() (0x13e51fffU)
#define addr_map_rpg_pm_mcfcore3_base_r() (0x13e52000U)
#define addr_map_rpg_pm_mcfcore3_limit_r() (0x13e52fffU)
#define addr_map_rpg_pm_mcfcore4_base_r() (0x13e53000U)
#define addr_map_rpg_pm_mcfcore4_limit_r() (0x13e53fffU)
#define addr_map_rpg_pm_mcfcore5_base_r() (0x13e54000U)
#define addr_map_rpg_pm_mcfcore5_limit_r() (0x13e54fffU)
#define addr_map_rpg_pm_mcfcore6_base_r() (0x13e55000U)
#define addr_map_rpg_pm_mcfcore6_limit_r() (0x13e55fffU)
#define addr_map_rpg_pm_mcfcore7_base_r() (0x13e56000U)
#define addr_map_rpg_pm_mcfcore7_limit_r() (0x13e56fffU)
#define addr_map_rpg_pm_mcfcore8_base_r() (0x13e57000U)
#define addr_map_rpg_pm_mcfcore8_limit_r() (0x13e57fffU)
#define addr_map_rpg_pm_mcfcore9_base_r() (0x13e58000U)
#define addr_map_rpg_pm_mcfcore9_limit_r() (0x13e58fffU)
#define addr_map_rpg_pm_mcfcore10_base_r() (0x13e59000U)
#define addr_map_rpg_pm_mcfcore10_limit_r() (0x13e59fffU)
#define addr_map_rpg_pm_mcfcore11_base_r() (0x13e5a000U)
#define addr_map_rpg_pm_mcfcore11_limit_r() (0x13e5afffU)
#define addr_map_rpg_pm_mcfcore12_base_r() (0x13e5b000U)
#define addr_map_rpg_pm_mcfcore12_limit_r() (0x13e5bfffU)
#define addr_map_rpg_pm_mcfcore13_base_r() (0x13e5c000U)
#define addr_map_rpg_pm_mcfcore13_limit_r() (0x13e5cfffU)
#define addr_map_rpg_pm_mcfcore14_base_r() (0x13e5d000U)
#define addr_map_rpg_pm_mcfcore14_limit_r() (0x13e5dfffU)
#define addr_map_rpg_pm_mcfcore15_base_r() (0x13e5e000U)
#define addr_map_rpg_pm_mcfcore15_limit_r() (0x13e5efffU)
#define addr_map_rpg_pm_mcfsys0_base_r() (0x13e5f000U)
#define addr_map_rpg_pm_mcfsys0_limit_r() (0x13e5ffffU)
#define addr_map_rpg_pm_mcfsys1_base_r() (0x13e60000U)
#define addr_map_rpg_pm_mcfsys1_limit_r() (0x13e60fffU)
#define addr_map_rpg_pm_mcfc2c0_base_r() (0x13e61000U)
#define addr_map_rpg_pm_mcfc2c0_limit_r() (0x13e61fffU)
#define addr_map_rpg_pm_mcfc2c1_base_r() (0x13e62000U)
#define addr_map_rpg_pm_mcfc2c1_limit_r() (0x13e62fffU)
#define addr_map_rpg_pm_mcfsoc0_base_r() (0x13e63000U)
#define addr_map_rpg_pm_mcfsoc0_limit_r() (0x13e63fffU)
#define addr_map_rpg_pm_smmu0_base_r() (0x13e64000U)
#define addr_map_rpg_pm_smmu0_limit_r() (0x13e64fffU)
#define addr_map_rpg_pm_smmu1_base_r() (0x13e65000U)
#define addr_map_rpg_pm_smmu1_limit_r() (0x13e65fffU)
#define addr_map_rpg_pm_smmu2_base_r() (0x13e66000U)
#define addr_map_rpg_pm_smmu2_limit_r() (0x13e66fffU)
#define addr_map_rpg_pm_smmu3_base_r() (0x13e67000U)
#define addr_map_rpg_pm_smmu3_limit_r() (0x13e67fffU)
#define addr_map_rpg_pm_smmu4_base_r() (0x13e68000U)
#define addr_map_rpg_pm_smmu4_limit_r() (0x13e68fffU)
#define addr_map_smmu0_base_r() (0x11a30000U)
#define addr_map_smmu0_limit_r() (0x11a3ffffU)
#define addr_map_smmu1_base_r() (0x12a30000U)
#define addr_map_smmu1_limit_r() (0x12a3ffffU)
#define addr_map_smmu2_base_r() (0x15a30000U)
#define addr_map_smmu2_limit_r() (0x15a3ffffU)
#define addr_map_smmu3_base_r() (0x16a30000U)
#define addr_map_smmu3_limit_r() (0x16a3ffffU)
#define addr_map_smmu4_base_r() (0x05a30000U)
#define addr_map_smmu4_limit_r() (0x05a3ffffU)
#define addr_map_rpg_pm_msshub0_base_r() (0x13e69000U)
#define addr_map_rpg_pm_msshub0_limit_r() (0x13e69fffU)
#define addr_map_rpg_pm_msshub1_base_r() (0x13e6a000U)
#define addr_map_rpg_pm_msshub1_limit_r() (0x13e6afffU)
#define addr_map_rpg_pm_msshub2_base_r() (0x13e6b000U)
#define addr_map_rpg_pm_msshub2_limit_r() (0x13e6bfffU)
#define addr_map_rpg_pm_msshub3_base_r() (0x13e6c000U)
#define addr_map_rpg_pm_msshub3_limit_r() (0x13e6cfffU)
#define addr_map_rpg_pm_msshub4_base_r() (0x13e6d000U)
#define addr_map_rpg_pm_msshub4_limit_r() (0x13e6dfffU)
#define addr_map_rpg_pm_msshub5_base_r() (0x13e6e000U)
#define addr_map_rpg_pm_msshub5_limit_r() (0x13e6efffU)
#define addr_map_rpg_pm_msshub6_base_r() (0x13e6f000U)
#define addr_map_rpg_pm_msshub6_limit_r() (0x13e6ffffU)
#define addr_map_rpg_pm_msshub7_base_r() (0x13e70000U)
#define addr_map_rpg_pm_msshub7_limit_r() (0x13e70fffU)
#define addr_map_rpg_pm_nvltx0_base_r() (0x13e71000U)
#define addr_map_rpg_pm_nvltx0_limit_r() (0x13e71fffU)
#define addr_map_rpg_pm_nvltx1_base_r() (0x13e72000U)
#define addr_map_rpg_pm_nvltx1_limit_r() (0x13e72fffU)
#define addr_map_rpg_pm_nvltx2_base_r() (0x13e73000U)
#define addr_map_rpg_pm_nvltx2_limit_r() (0x13e73fffU)
#define addr_map_rpg_pm_nvltx3_base_r() (0x13e74000U)
#define addr_map_rpg_pm_nvltx3_limit_r() (0x13e74fffU)
#define addr_map_rpg_pm_nvltx4_base_r() (0x13e75000U)
#define addr_map_rpg_pm_nvltx4_limit_r() (0x13e75fffU)
#define addr_map_rpg_pm_nvltx5_base_r() (0x13e76000U)
#define addr_map_rpg_pm_nvltx5_limit_r() (0x13e76fffU)
#define addr_map_rpg_pm_nvltx6_base_r() (0x13e77000U)
#define addr_map_rpg_pm_nvltx6_limit_r() (0x13e77fffU)
#define addr_map_rpg_pm_nvltx7_base_r() (0x13e78000U)
#define addr_map_rpg_pm_nvltx7_limit_r() (0x13e78fffU)
#define addr_map_rpg_pm_nvltx8_base_r() (0x13e79000U)
#define addr_map_rpg_pm_nvltx8_limit_r() (0x13e79fffU)
#define addr_map_rpg_pm_nvltx9_base_r() (0x13e7a000U)
#define addr_map_rpg_pm_nvltx9_limit_r() (0x13e7afffU)
#define addr_map_rpg_pm_nvltx10_base_r() (0x13e7b000U)
#define addr_map_rpg_pm_nvltx10_limit_r() (0x13e7bfffU)
#define addr_map_rpg_pm_nvltx11_base_r() (0x13e7c000U)
#define addr_map_rpg_pm_nvltx11_limit_r() (0x13e7cfffU)
#define addr_map_rpg_pm_nvlrx0_base_r() (0x13e7d000U)
#define addr_map_rpg_pm_nvlrx0_limit_r() (0x13e7dfffU)
#define addr_map_rpg_pm_nvlrx1_base_r() (0x13e7e000U)
#define addr_map_rpg_pm_nvlrx1_limit_r() (0x13e7efffU)
#define addr_map_rpg_pm_nvlrx2_base_r() (0x13e7f000U)
#define addr_map_rpg_pm_nvlrx2_limit_r() (0x13e7ffffU)
#define addr_map_rpg_pm_nvlrx3_base_r() (0x13e80000U)
#define addr_map_rpg_pm_nvlrx3_limit_r() (0x13e80fffU)
#define addr_map_rpg_pm_nvlrx4_base_r() (0x13e81000U)
#define addr_map_rpg_pm_nvlrx4_limit_r() (0x13e81fffU)
#define addr_map_rpg_pm_nvlrx5_base_r() (0x13e82000U)
#define addr_map_rpg_pm_nvlrx5_limit_r() (0x13e82fffU)
#define addr_map_rpg_pm_nvlrx6_base_r() (0x13e83000U)
#define addr_map_rpg_pm_nvlrx6_limit_r() (0x13e83fffU)
#define addr_map_rpg_pm_nvlrx7_base_r() (0x13e84000U)
#define addr_map_rpg_pm_nvlrx7_limit_r() (0x13e84fffU)
#define addr_map_rpg_pm_nvlrx8_base_r() (0x13e85000U)
#define addr_map_rpg_pm_nvlrx8_limit_r() (0x13e85fffU)
#define addr_map_rpg_pm_nvlrx9_base_r() (0x13e86000U)
#define addr_map_rpg_pm_nvlrx9_limit_r() (0x13e86fffU)
#define addr_map_rpg_pm_nvlrx10_base_r() (0x13e87000U)
#define addr_map_rpg_pm_nvlrx10_limit_r() (0x13e87fffU)
#define addr_map_rpg_pm_nvlrx11_base_r() (0x13e88000U)
#define addr_map_rpg_pm_nvlrx11_limit_r() (0x13e88fffU)
#define addr_map_rpg_pm_nvlctrl0_base_r() (0x13e8b000U)
#define addr_map_rpg_pm_nvlctrl0_limit_r() (0x13e8bfffU)
#define addr_map_rpg_pm_nvlctrl1_base_r() (0x13e8c000U)
#define addr_map_rpg_pm_nvlctrl1_limit_r() (0x13e8cfffU)
#define addr_map_nvlw0_ctrl_base_r() (0x03b80000U)
#define addr_map_nvlw0_ctrl_limit_r() (0x03b81fffU)
#define addr_map_nvlw1_ctrl_base_r() (0x03bc0000U)
#define addr_map_nvlw1_ctrl_limit_r() (0x03bc1fffU)
#define addr_map_nvlw0_nvldl0_base_r() (0x03b90000U)
#define addr_map_nvlw0_nvldl0_limit_r() (0x03b94fffU)
#define addr_map_nvlw0_nvltlc0_base_r() (0x03b95000U)
#define addr_map_nvlw0_nvltlc0_limit_r() (0x03b96fffU)
#define addr_map_nvlw0_nvldl1_base_r() (0x03b98000U)
#define addr_map_nvlw0_nvldl1_limit_r() (0x03b9cfffU)
#define addr_map_nvlw0_nvltlc1_base_r() (0x03b9d000U)
#define addr_map_nvlw0_nvltlc1_limit_r() (0x03b9efffU)
#define addr_map_nvlw0_nvldl2_base_r() (0x03ba0000U)
#define addr_map_nvlw0_nvldl2_limit_r() (0x03ba4fffU)
#define addr_map_nvlw0_nvltlc2_base_r() (0x03ba5000U)
#define addr_map_nvlw0_nvltlc2_limit_r() (0x03ba6fffU)
#define addr_map_nvlw0_nvldl3_base_r() (0x03ba8000U)
#define addr_map_nvlw0_nvldl3_limit_r() (0x03bacfffU)
#define addr_map_nvlw0_nvltlc3_base_r() (0x03bad000U)
#define addr_map_nvlw0_nvltlc3_limit_r() (0x03baefffU)
#define addr_map_nvlw0_nvldl4_base_r() (0x03bb0000U)
#define addr_map_nvlw0_nvldl4_limit_r() (0x03bb4fffU)
#define addr_map_nvlw0_nvltlc4_base_r() (0x03bb5000U)
#define addr_map_nvlw0_nvltlc4_limit_r() (0x03bb6fffU)
#define addr_map_nvlw0_nvldl5_base_r() (0x03bb8000U)
#define addr_map_nvlw0_nvldl5_limit_r() (0x03bbcfffU)
#define addr_map_nvlw0_nvltlc5_base_r() (0x03bbd000U)
#define addr_map_nvlw0_nvltlc5_limit_r() (0x03bbefffU)
#define addr_map_nvlw1_nvldl0_base_r() (0x03bd0000U)
#define addr_map_nvlw1_nvldl0_limit_r() (0x03bd4fffU)
#define addr_map_nvlw1_nvltlc0_base_r() (0x03bd5000U)
#define addr_map_nvlw1_nvltlc0_limit_r() (0x03bd6fffU)
#define addr_map_nvlw1_nvldl1_base_r() (0x03bd8000U)
#define addr_map_nvlw1_nvldl1_limit_r() (0x03bdcfffU)
#define addr_map_nvlw1_nvltlc1_base_r() (0x03bdd000U)
#define addr_map_nvlw1_nvltlc1_limit_r() (0x03bdefffU)
#define addr_map_nvlw1_nvldl2_base_r() (0x03be0000U)
#define addr_map_nvlw1_nvldl2_limit_r() (0x03be4fffU)
#define addr_map_nvlw1_nvltlc2_base_r() (0x03be5000U)
#define addr_map_nvlw1_nvltlc2_limit_r() (0x03be6fffU)
#define addr_map_nvlw1_nvldl3_base_r() (0x03be8000U)
#define addr_map_nvlw1_nvldl3_limit_r() (0x03becfffU)
#define addr_map_nvlw1_nvltlc3_base_r() (0x03bed000U)
#define addr_map_nvlw1_nvltlc3_limit_r() (0x03beefffU)
#define addr_map_nvlw1_nvldl4_base_r() (0x03bf0000U)
#define addr_map_nvlw1_nvldl4_limit_r() (0x03bf4fffU)
#define addr_map_nvlw1_nvltlc4_base_r() (0x03bf5000U)
#define addr_map_nvlw1_nvltlc4_limit_r() (0x03bf6fffU)
#define addr_map_nvlw1_nvldl5_base_r() (0x03bf8000U)
#define addr_map_nvlw1_nvldl5_limit_r() (0x03bfcfffU)
#define addr_map_nvlw1_nvltlc5_base_r() (0x03bfd000U)
#define addr_map_nvlw1_nvltlc5_limit_r() (0x03bfefffU)
#define addr_map_nvlw0_nvldl_multi_base_r() (0x03b88000U)
#define addr_map_nvlw0_nvldl_multi_limit_r() (0x03b8cfffU)
#define addr_map_nvlw0_nvltlc_multi_base_r() (0x03b8d000U)
#define addr_map_nvlw0_nvltlc_multi_limit_r() (0x03b8efffU)
#define addr_map_nvlw1_nvldl_multi_base_r() (0x03bc8000U)
#define addr_map_nvlw1_nvldl_multi_limit_r() (0x03bccfffU)
#define addr_map_nvlw1_nvltlc_multi_base_r() (0x03bcd000U)
#define addr_map_nvlw1_nvltlc_multi_limit_r() (0x03bcefffU)
#define addr_map_rpg_pm_xalrc0_base_r() (0x13e00000U)
#define addr_map_rpg_pm_xalrc0_limit_r() (0x13e00fffU)
#define addr_map_rpg_pm_xalrc1_base_r() (0x13e01000U)
#define addr_map_rpg_pm_xalrc1_limit_r() (0x13e01fffU)
#define addr_map_rpg_pm_xalrc2_base_r() (0x13e02000U)
#define addr_map_rpg_pm_xalrc2_limit_r() (0x13e02fffU)
#define addr_map_rpg_pm_xalrc3_base_r() (0x13e03000U)
#define addr_map_rpg_pm_xalrc3_limit_r() (0x13e03fffU)
#define addr_map_rpg_pm_xalrc4_base_r() (0x13e04000U)
#define addr_map_rpg_pm_xalrc4_limit_r() (0x13e04fffU)
#define addr_map_rpg_pm_xalrc5_base_r() (0x13e05000U)
#define addr_map_rpg_pm_xalrc5_limit_r() (0x13e05fffU)
#define addr_map_rpg_pm_xalrc6_base_r() (0x13e06000U)
#define addr_map_rpg_pm_xalrc6_limit_r() (0x13e06fffU)
#define addr_map_rpg_pm_xalrc7_base_r() (0x13e07000U)
#define addr_map_rpg_pm_xalrc7_limit_r() (0x13e07fffU)
#define addr_map_rpg_pm_xalrc8_base_r() (0x13e08000U)
#define addr_map_rpg_pm_xalrc8_limit_r() (0x13e08fffU)
#define addr_map_rpg_pm_xalrc9_base_r() (0x13e09000U)
#define addr_map_rpg_pm_xalrc9_limit_r() (0x13e09fffU)
#define addr_map_rpg_pm_xtlrc0_base_r() (0x13e0a000U)
#define addr_map_rpg_pm_xtlrc0_limit_r() (0x13e0afffU)
#define addr_map_rpg_pm_xtlrc1_base_r() (0x13e0b000U)
#define addr_map_rpg_pm_xtlrc1_limit_r() (0x13e0bfffU)
#define addr_map_rpg_pm_xtlrc2_base_r() (0x13e0c000U)
#define addr_map_rpg_pm_xtlrc2_limit_r() (0x13e0cfffU)
#define addr_map_rpg_pm_xtlrc3_base_r() (0x13e0d000U)
#define addr_map_rpg_pm_xtlrc3_limit_r() (0x13e0dfffU)
#define addr_map_rpg_pm_xtlrc4_base_r() (0x13e0e000U)
#define addr_map_rpg_pm_xtlrc4_limit_r() (0x13e0efffU)
#define addr_map_rpg_pm_xtlrc5_base_r() (0x13e0f000U)
#define addr_map_rpg_pm_xtlrc5_limit_r() (0x13e0ffffU)
#define addr_map_rpg_pm_xtlrc6_base_r() (0x13e10000U)
#define addr_map_rpg_pm_xtlrc6_limit_r() (0x13e10fffU)
#define addr_map_rpg_pm_xtlrc7_base_r() (0x13e11000U)
#define addr_map_rpg_pm_xtlrc7_limit_r() (0x13e11fffU)
#define addr_map_rpg_pm_xtlrc8_base_r() (0x13e12000U)
#define addr_map_rpg_pm_xtlrc8_limit_r() (0x13e12fffU)
#define addr_map_rpg_pm_xtlrc9_base_r() (0x13e13000U)
#define addr_map_rpg_pm_xtlrc9_limit_r() (0x13e13fffU)
#define addr_map_rpg_pm_xtlq0_base_r() (0x13e14000U)
#define addr_map_rpg_pm_xtlq0_limit_r() (0x13e14fffU)
#define addr_map_rpg_pm_xtlq1_base_r() (0x13e15000U)
#define addr_map_rpg_pm_xtlq1_limit_r() (0x13e15fffU)
#define addr_map_rpg_pm_xtlq2_base_r() (0x13e16000U)
#define addr_map_rpg_pm_xtlq2_limit_r() (0x13e16fffU)
#define addr_map_rpg_pm_xtlq3_base_r() (0x13e17000U)
#define addr_map_rpg_pm_xtlq3_limit_r() (0x13e17fffU)
#define addr_map_rpg_pm_xtlq4_base_r() (0x13e18000U)
#define addr_map_rpg_pm_xtlq4_limit_r() (0x13e18fffU)
#define addr_map_rpg_pm_xtlq5_base_r() (0x13e19000U)
#define addr_map_rpg_pm_xtlq5_limit_r() (0x13e19fffU)
#define addr_map_rpg_pm_xtlq6_base_r() (0x13e1a000U)
#define addr_map_rpg_pm_xtlq6_limit_r() (0x13e1afffU)
#define addr_map_rpg_pm_xtlq7_base_r() (0x13e1b000U)
#define addr_map_rpg_pm_xtlq7_limit_r() (0x13e1bfffU)
#define addr_map_rpg_pm_xtlq8_base_r() (0x13e1c000U)
#define addr_map_rpg_pm_xtlq8_limit_r() (0x13e1cfffU)
#define addr_map_rpg_pm_xtlq9_base_r() (0x13e1d000U)
#define addr_map_rpg_pm_xtlq9_limit_r() (0x13e1dfffU)
#define addr_map_pcie_c0_ctl0_xalrc_base_r() (0x14080000U)
#define addr_map_pcie_c0_ctl0_xalrc_limit_r() (0x1408ffffU)
#define addr_map_pcie_c0_ctl1_xtlq_base_r() (0x14090000U)
#define addr_map_pcie_c0_ctl1_xtlq_limit_r() (0x1409ffffU)
#define addr_map_pcie_c1_ctl0_xalrc_base_r() (0x140a0000U)
#define addr_map_pcie_c1_ctl0_xalrc_limit_r() (0x140affffU)
#define addr_map_pcie_c1_ctl1_xtlq_base_r() (0x140b0000U)
#define addr_map_pcie_c1_ctl1_xtlq_limit_r() (0x140bffffU)
#define addr_map_pcie_c2_ctl0_xalrc_base_r() (0x140c0000U)
#define addr_map_pcie_c2_ctl0_xalrc_limit_r() (0x140cffffU)
#define addr_map_pcie_c2_ctl1_xtlq_base_r() (0x140d0000U)
#define addr_map_pcie_c2_ctl1_xtlq_limit_r() (0x140dffffU)
#define addr_map_pcie_c3_ctl0_xalrc_base_r() (0x140e0000U)
#define addr_map_pcie_c3_ctl0_xalrc_limit_r() (0x140effffU)
#define addr_map_pcie_c3_ctl1_xtlq_base_r() (0x140f0000U)
#define addr_map_pcie_c3_ctl1_xtlq_limit_r() (0x140fffffU)
#define addr_map_pcie_c4_ctl0_xalrc_base_r() (0x14100000U)
#define addr_map_pcie_c4_ctl0_xalrc_limit_r() (0x1410ffffU)
#define addr_map_pcie_c4_ctl1_xtlq_base_r() (0x14110000U)
#define addr_map_pcie_c4_ctl1_xtlq_limit_r() (0x1411ffffU)
#define addr_map_pcie_c5_ctl0_xalrc_base_r() (0x14120000U)
#define addr_map_pcie_c5_ctl0_xalrc_limit_r() (0x1412ffffU)
#define addr_map_pcie_c5_ctl1_xtlq_base_r() (0x14130000U)
#define addr_map_pcie_c5_ctl1_xtlq_limit_r() (0x1413ffffU)
#define addr_map_pcie_c6_ctl0_xalrc_base_r() (0x14140000U)
#define addr_map_pcie_c6_ctl0_xalrc_limit_r() (0x1414ffffU)
#define addr_map_pcie_c6_ctl1_xtlq_base_r() (0x14150000U)
#define addr_map_pcie_c6_ctl1_xtlq_limit_r() (0x1415ffffU)
#define addr_map_pcie_c7_ctl0_xalrc_base_r() (0x14160000U)
#define addr_map_pcie_c7_ctl0_xalrc_limit_r() (0x1416ffffU)
#define addr_map_pcie_c7_ctl1_xtlq_base_r() (0x14170000U)
#define addr_map_pcie_c7_ctl1_xtlq_limit_r() (0x1417ffffU)
#define addr_map_pcie_c8_ctl0_xalrc_base_r() (0x14180000U)
#define addr_map_pcie_c8_ctl0_xalrc_limit_r() (0x1418ffffU)
#define addr_map_pcie_c8_ctl1_xtlq_base_r() (0x14190000U)
#define addr_map_pcie_c8_ctl1_xtlq_limit_r() (0x1419ffffU)
#define addr_map_pcie_c9_ctl0_xalrc_base_r() (0x141a0000U)
#define addr_map_pcie_c9_ctl0_xalrc_limit_r() (0x141affffU)
#define addr_map_pcie_c9_ctl1_xtlq_base_r() (0x141b0000U)
#define addr_map_pcie_c9_ctl1_xtlq_limit_r() (0x141bffffU)
#define addr_map_pcie_c0_ctl0_xtlrc_base_r() (0x14083000U)
#define addr_map_pcie_c0_ctl0_xtlrc_limit_r() (0x14083fffU)
#define addr_map_pcie_c1_ctl0_xtlrc_base_r() (0x140a3000U)
#define addr_map_pcie_c1_ctl0_xtlrc_limit_r() (0x140a3fffU)
#define addr_map_pcie_c2_ctl0_xtlrc_base_r() (0x140c3000U)
#define addr_map_pcie_c2_ctl0_xtlrc_limit_r() (0x140c3fffU)
#define addr_map_pcie_c3_ctl0_xtlrc_base_r() (0x140e3000U)
#define addr_map_pcie_c3_ctl0_xtlrc_limit_r() (0x140e3fffU)
#define addr_map_pcie_c4_ctl0_xtlrc_base_r() (0x14103000U)
#define addr_map_pcie_c4_ctl0_xtlrc_limit_r() (0x14103fffU)
#define addr_map_pcie_c5_ctl0_xtlrc_base_r() (0x14123000U)
#define addr_map_pcie_c5_ctl0_xtlrc_limit_r() (0x14123fffU)
#define addr_map_pcie_c6_ctl0_xtlrc_base_r() (0x14143000U)
#define addr_map_pcie_c6_ctl0_xtlrc_limit_r() (0x14143fffU)
#define addr_map_pcie_c7_ctl0_xtlrc_base_r() (0x14163000U)
#define addr_map_pcie_c7_ctl0_xtlrc_limit_r() (0x14163fffU)
#define addr_map_pcie_c8_ctl0_xtlrc_base_r() (0x14183000U)
#define addr_map_pcie_c8_ctl0_xtlrc_limit_r() (0x14183fffU)
#define addr_map_pcie_c9_ctl0_xtlrc_base_r() (0x141a3000U)
#define addr_map_pcie_c9_ctl0_xtlrc_limit_r() (0x141a3fffU)
#define addr_map_rpg_pm_ctc0_base_r() (0x13e8d000U)
#define addr_map_rpg_pm_ctc0_limit_r() (0x13e8dfffU)
#define addr_map_rpg_pm_ctc1_base_r() (0x13e8e000U)
#define addr_map_rpg_pm_ctc1_limit_r() (0x13e8efffU)
#define addr_map_c2c0_base_r() (0x13fe2000U)
#define addr_map_c2c0_limit_r() (0x13fe2fffU)
#define addr_map_c2c1_base_r() (0x13fe3000U)
#define addr_map_c2c1_limit_r() (0x13fe3fffU)
#define addr_map_c2c2_base_r() (0x13fe4000U)
#define addr_map_c2c2_limit_r() (0x13fe4fffU)
#define addr_map_c2c3_base_r() (0x13fe5000U)
#define addr_map_c2c3_limit_r() (0x13fe5fffU)
#define addr_map_c2c4_base_r() (0x13fe6000U)
#define addr_map_c2c4_limit_r() (0x13fe6fffU)
#define addr_map_c2c5_base_r() (0x13fe7000U)
#define addr_map_c2c5_limit_r() (0x13fe7fffU)
#define addr_map_c2c6_base_r() (0x13fe8000U)
#define addr_map_c2c6_limit_r() (0x13fe8fffU)
#define addr_map_c2c7_base_r() (0x13fe9000U)
#define addr_map_c2c7_limit_r() (0x13fe9fffU)
#define addr_map_c2c8_base_r() (0x13fea000U)
#define addr_map_c2c8_limit_r() (0x13feafffU)
#define addr_map_c2c9_base_r() (0x13feb000U)
#define addr_map_c2c9_limit_r() (0x13febfffU)
#define addr_map_c2cs0_base_r() (0x13fe0000U)
#define addr_map_c2cs0_limit_r() (0x13fe0fffU)
#define addr_map_c2cs1_base_r() (0x13fe1000U)
#define addr_map_c2cs1_limit_r() (0x13fe1fffU)
#define addr_map_pmc_misc_base_r() (0x0c3a0000U)
#endif
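
The *_base_r()/*_limit_r() pairs above describe inclusive physical-address apertures. A small standalone sketch of how such a pair can be used to size an aperture and test containment; the addr_map_rtr_* values are copied from the header above, while aperture_size() and aperture_contains() are illustrative helpers, not driver code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Values copied from the TH500 address map above. */
#define addr_map_rtr_base_r()  (0x13ef2000U)
#define addr_map_rtr_limit_r() (0x13ef2fffU)

/* Illustrative helpers, not part of the driver. */
static uint64_t aperture_size(uint64_t base, uint64_t limit)
{
        return limit - base + 1ULL;             /* limits are inclusive */
}

static bool aperture_contains(uint64_t base, uint64_t limit, uint64_t pa)
{
        return pa >= base && pa <= limit;
}

int main(void)
{
        uint64_t base = addr_map_rtr_base_r();
        uint64_t limit = addr_map_rtr_limit_r();

        printf("rtr aperture: 0x%llx..0x%llx (%llu bytes)\n",
               (unsigned long long)base, (unsigned long long)limit,
               (unsigned long long)aperture_size(base, limit));
        printf("contains 0x13ef2050: %d\n",
               aperture_contains(base, limit, 0x13ef2050ULL));
        return 0;
}
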


@@ -0,0 +1,147 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef TH500_PMASYS_SOC_HWPM_H
#define TH500_PMASYS_SOC_HWPM_H
#define pmasys_cg2_r() (0x13ef1f44U)
#define pmasys_cg2_slcg_f(v) (((v) & 0x1U) << 0U)
#define pmasys_cg2_slcg_m() (0x1U << 0U)
#define pmasys_cg2_slcg_enabled_v() (0x00000000U)
#define pmasys_cg2_slcg_enabled_f() (0x0U)
#define pmasys_cg2_slcg_disabled_v() (0x00000001U)
#define pmasys_cg2_slcg_disabled_f() (0x1U)
#define pmasys_cg2_slcg__prod_v() (0x00000000U)
#define pmasys_cg2_slcg__prod_f() (0x0U)
#define pmasys_channel_control_user_r(i)\
(0x13ef0a20U + ((i)*384U))
#define pmasys_channel_control_user_update_bytes_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_control_user_update_bytes_m() (0x1U << 16U)
#define pmasys_channel_control_user_update_bytes_doit_v() (0x00000001U)
#define pmasys_channel_control_user_update_bytes_doit_f() (0x10000U)
#define pmasys_channel_mem_blockupper_r(i)\
(0x13ef0a3cU + ((i)*384U))
#define pmasys_channel_mem_blockupper_valid_f(v) (((v) & 0x1U) << 31U)
#define pmasys_channel_mem_blockupper_valid_false_v() (0x00000000U)
#define pmasys_channel_mem_blockupper_valid_true_v() (0x00000001U)
#define pmasys_channel_mem_bump_r(i)\
(0x13ef0a24U + ((i)*384U))
#define pmasys_channel_mem_block_r(i)\
(0x13ef0a38U + ((i)*384U))
#define pmasys_channel_mem_block__size_1_v() (0x00000001U)
#define pmasys_channel_mem_block_base_f(v) (((v) & 0xffffffffU) << 0U)
#define pmasys_channel_mem_block_base_m() (0xffffffffU << 0U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles_f(v)\
(((v) & 0x7U) << 24U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles_m() (0x7U << 24U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles__prod_v() (0x00000004U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles__prod_f() (0x4000000U)
#define pmasys_channel_outbase_r(i)\
(0x13ef0a48U + ((i)*384U))
#define pmasys_channel_outbase_ptr_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outbase_ptr_m() (0x7ffffffU << 5U)
#define pmasys_channel_outbase_ptr_v(r) (((r) >> 5U) & 0x7ffffffU)
#define pmasys_channel_outbaseupper_r(i)\
(0x13ef0a4cU + ((i)*384U))
#define pmasys_channel_outbaseupper_ptr_f(v) (((v) & 0x1ffffffU) << 0U)
#define pmasys_channel_outbaseupper_ptr_m() (0x1ffffffU << 0U)
#define pmasys_channel_outbaseupper_ptr_v(r) (((r) >> 0U) & 0x1ffffffU)
#define pmasys_channel_outsize_r(i)\
(0x13ef0a50U + ((i)*384U))
#define pmasys_channel_outsize_numbytes_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outsize_numbytes_m() (0x7ffffffU << 5U)
#define pmasys_channel_mem_head_r(i)\
(0x13ef0a54U + ((i)*384U))
#define pmasys_channel_mem_bytes_addr_r(i)\
(0x13ef0a5cU + ((i)*384U))
#define pmasys_channel_mem_bytes_addr_ptr_f(v) (((v) & 0x3fffffffU) << 2U)
#define pmasys_channel_mem_bytes_addr_ptr_m() (0x3fffffffU << 2U)
#define pmasys_channel_config_user_r(i)\
(0x13ef0a44U + ((i)*384U))
#define pmasys_channel_config_user_stream_f(v) (((v) & 0x1U) << 0U)
#define pmasys_channel_config_user_stream_m() (0x1U << 0U)
#define pmasys_channel_config_user_stream_disable_f() (0x0U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_f(v)\
(((v) & 0x7U) << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_m() (0x7U << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_v()\
(0x00000004U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_f()\
(0x4000000U)
#define pmasys_channel_status_r(i)\
(0x13ef0a00U + ((i)*384U))
#define pmasys_channel_status_engine_status_m() (0x7U << 0U)
#define pmasys_channel_status_engine_status_empty_v() (0x00000000U)
#define pmasys_channel_status_engine_status_empty_f() (0x0U)
#define pmasys_channel_status_engine_status_active_v() (0x00000001U)
#define pmasys_channel_status_engine_status_paused_v() (0x00000002U)
#define pmasys_channel_status_engine_status_quiescent_v() (0x00000003U)
#define pmasys_channel_status_engine_status_stalled_v() (0x00000005U)
#define pmasys_channel_status_engine_status_faulted_v() (0x00000006U)
#define pmasys_channel_status_engine_status_halted_v() (0x00000007U)
#define pmasys_channel_status_membuf_status_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_status_membuf_status_m() (0x1U << 16U)
#define pmasys_channel_status_membuf_status_v(r) (((r) >> 16U) & 0x1U)
#define pmasys_channel_status_membuf_status_overflowed_v() (0x00000001U)
#define pmasys_command_slice_trigger_config_user_r(i)\
(0x13ef0afcU + ((i)*384U))
#define pmasys_command_slice_trigger_config_user_pma_pulse_f(v)\
(((v) & 0x1U) << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_m() (0x1U << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_f() (0x0U)
#define pmasys_command_slice_trigger_config_user_record_stream_f(v)\
(((v) & 0x1U) << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_m() (0x1U << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_f()\
(0x0U)
#endif
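
The pmasys_channel_*_r(i) macros above index per-channel register blocks with a 384-byte stride, and the _f()/_v() pairs shift field values in and out per the naming convention at the top of the file. A short standalone sketch (macro bodies copied from the header above; main() is purely illustrative):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

/* Copied from the TH500 pmasys header above. */
#define pmasys_channel_outbase_r(i)     (0x13ef0a48U + ((i)*384U))
#define pmasys_channel_outbase_ptr_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outbase_ptr_v(r) (((r) >> 5U) & 0x7ffffffU)

int main(void)
{
        /* 384-byte stride between channel register blocks. */
        printf("channel 0 outbase: 0x%08x\n", pmasys_channel_outbase_r(0U));
        printf("channel 1 outbase: 0x%08x\n", pmasys_channel_outbase_r(1U));

        /* _f() shifts a field value into place; _v() extracts it back. */
        u32 reg = pmasys_channel_outbase_ptr_f(0x1234U);
        printf("ptr round trip: 0x%x\n", pmasys_channel_outbase_ptr_v(reg));
        return 0;
}
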


@@ -0,0 +1,113 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef TH500_PMMSYS_SOC_HWPM_H
#define TH500_PMMSYS_SOC_HWPM_H
#define pmmsys_perdomain_offset_v() (0x00001000U)
#define pmmsys_control_r(i)\
(0x13e0009cU + ((i)*4096U))
#define pmmsys_control_mode_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_control_mode_m() (0x7U << 0U)
#define pmmsys_control_mode_disable_v() (0x00000000U)
#define pmmsys_control_mode_disable_f() (0x0U)
#define pmmsys_control_mode_a_v() (0x00000001U)
#define pmmsys_control_mode_b_v() (0x00000002U)
#define pmmsys_control_mode_c_v() (0x00000003U)
#define pmmsys_control_mode_e_v() (0x00000005U)
#define pmmsys_control_mode_null_v() (0x00000007U)
#define pmmsys_sys0_enginestatus_r(i)\
(0x13e000c8U + ((i)*4096U))
#define pmmsys_sys0router_enginestatus_r() (0x13ef2050U)
#define pmmsys_sys0router_enginestatus_status_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_sys0router_enginestatus_status_m() (0x7U << 0U)
#define pmmsys_sys0router_enginestatus_status_v(r) (((r) >> 0U) & 0x7U)
#define pmmsys_sys0router_enginestatus_status_empty_v() (0x00000000U)
#define pmmsys_sys0router_enginestatus_status_active_v() (0x00000001U)
#define pmmsys_sys0router_enginestatus_status_paused_v() (0x00000002U)
#define pmmsys_sys0router_enginestatus_status_quiescent_v() (0x00000003U)
#define pmmsys_sys0router_enginestatus_status_stalled_v() (0x00000005U)
#define pmmsys_sys0router_enginestatus_status_faulted_v() (0x00000006U)
#define pmmsys_sys0router_enginestatus_status_halted_v() (0x00000007U)
#define pmmsys_sys0router_cg1_secure_r() (0x13ef2054U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_m() (0x1U << 31U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon__prod_v() (0x00000001U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon__prod_f() (0x80000000U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_disabled_v() (0x00000000U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_disabled_f() (0x0U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_enabled_v() (0x00000001U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_enabled_f() (0x80000000U)
#define pmmsys_sys0router_cg2_r() (0x13ef2040U)
#define pmmsys_sys0router_cg2_slcg_m() (0x1U << 31U)
#define pmmsys_sys0router_cg2_slcg_disabled_v() (0x00000001U)
#define pmmsys_sys0router_cg2_slcg_disabled_f() (0x80000000U)
#define pmmsys_sys0router_cg2_slcg_enabled_f() (0x0U)
#define pmmsys_sys0router_perfmon_cg2_secure_r() (0x13ef2058U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_m() (0x1U << 31U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg__prod_f() (0x0U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_disabled_f() (0x80000000U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_enabled_f() (0x0U)
#define pmmsys_sys0_enginestatus_r(i)\
(0x13e000c8U + ((i)*4096U))
#define pmmsys_sys0_enginestatus_enable_f(v) (((v) & 0x1U) << 8U)
#define pmmsys_sys0_enginestatus_enable_m() (0x1U << 8U)
#define pmmsys_sys0_enginestatus_enable_out_v() (0x00000001U)
#define pmmsys_sys0_enginestatus_enable_out_f() (0x100U)
#define pmmsys_sysrouter_enginestatus_r() (0x13ef2050U)
#define pmmsys_sysrouter_enginestatus_merged_perfmon_status_f(v)\
(((v) & 0x7U) << 8U)
#define pmmsys_sysrouter_enginestatus_merged_perfmon_status_m() (0x7U << 8U)
#define pmmsys_sysrouter_enginestatus_merged_perfmon_status_v(r)\
(((r) >> 8U) & 0x7U)
#endif
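
Following the naming convention in the header comment, selecting a perfmon mode is a mask-then-field update of pmmsys_control_r(i): clear the field with _m(), then OR in either _f(value) or a pre-shifted _<z>_f() constant. A hedged standalone sketch (macro bodies copied from the header above; set_field() repeats the earlier illustrative helper and is not the driver's own definition):

#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

/* Copied from the TH500 pmmsys header above. */
#define pmmsys_control_r(i)             (0x13e0009cU + ((i)*4096U))
#define pmmsys_control_mode_f(v)        (((v) & 0x7U) << 0U)
#define pmmsys_control_mode_m()         (0x7U << 0U)
#define pmmsys_control_mode_e_v()       (0x00000005U)
#define pmmsys_control_mode_disable_f() (0x0U)

/* Illustrative helper, same shape as the earlier set_field() sketch. */
static inline u32 set_field(u32 reg, u32 mask, u32 val)
{
        return (reg & ~mask) | (val & mask);
}

int main(void)
{
        u32 reg = 0x00000007U;  /* pretend current value: mode = NULL */

        /* Select mode E: clear the mode field, then OR in the new value. */
        reg = set_field(reg, pmmsys_control_mode_m(),
                        pmmsys_control_mode_f(pmmsys_control_mode_e_v()));
        printf("control @0x%08x <- 0x%08x\n", pmmsys_control_r(0U), reg);

        /* Disabling uses the pre-shifted _disable_f() constant directly. */
        reg = set_field(reg, pmmsys_control_mode_m(),
                        pmmsys_control_mode_disable_f());
        printf("control @0x%08x <- 0x%08x\n", pmmsys_control_r(0U), reg);
        return 0;
}
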

Some files were not shown because too many files have changed in this diff.