Compare commits

..

8 Commits

Author SHA1 Message Date
Jon Hunter
1e751f52f0 tegra: hwpm: Use conftest for get_user_pages
The conftest script already has a test for checking which variant of the
get_user_pages() function is present in the kernel. So use the
definition generated by conftest to select which function variant is
used.
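
A minimal sketch of the selection this enables (the conftest-generated macro
name NV_GET_USER_PAGES_HAS_VMAS_ARG below is an assumption, not the actual
symbol produced by the script):

    #include <linux/mm.h>

    /* Sketch only: pin the pages backing a user buffer, using the
     * get_user_pages() variant that conftest detected. Caller must hold
     * mmap_read_lock(current->mm). */
    static long hwpm_pin_user_buf(unsigned long uva, unsigned long nr_pages,
                                  struct page **pages)
    {
    #if defined(NV_GET_USER_PAGES_HAS_VMAS_ARG)
            /* Older kernels still take the (unused) vmas argument. */
            return get_user_pages(uva, nr_pages, FOLL_WRITE, pages, NULL);
    #else
            return get_user_pages(uva, nr_pages, FOLL_WRITE, pages);
    #endif
    }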

Bug 4276500

Change-Id: I29d216c8cead657c1daca4ce11b3dc3f74928467
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3015357
(cherry picked from commit f9360f364f)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3017317
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-20 05:09:26 -08:00
Jon Hunter
ee08de6166 tegra: hwpm: Remove class owner
The owner member of the class structure was removed in upstream Linux
v6.4 because it was never used. Therefore, remove it from the HWPM
driver completely.
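
As a rough illustration (the class name and symbol are placeholders, not the
actual HWPM source), the change amounts to dropping the initializer for the
removed field:

    #include <linux/device.h>

    /* Placeholder sketch of a statically defined class after the change. */
    static struct class hwpm_class = {
            .name  = "nvhwpm",
            /* .owner = THIS_MODULE  -- dropped; the field was removed in
             * upstream v6.4 and was never read by the driver core. */
    };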

Bug 4276500

Change-Id: I50f7e59e08edbea26f7ceaa701e4abfe5cc71c71
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3015339
(cherry picked from commit 13a7312154)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3017316
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
2023-11-20 05:09:21 -08:00
Shardar Mohammed
c7c63cd0fe hwpm: Remove module owner parameter
Remove the module owner from the struct class, based on the following
change in the core kernel:

=====
    Upstream commit "6e30a66433af"

    driver core: class: remove struct module owner out of struct class

    The module owner field for a struct class was never actually used, so
    remove it as it is not doing anything at all.

    Cc: "Rafael J. Wysocki" <rafael@kernel.org>
    Link: https://lore.kernel.org/r/20230313181843.1207845-3-gregkh@linuxfoundation.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
=====

Bug 4276500

Change-Id: I0b68273e38f79ee6d903172b8f4d9d1807202abe
Signed-off-by: Shardar Mohammed <smohammed@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2978633
(cherry picked from commit f116216688)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3015718
Tested-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-17 00:24:27 -08:00
Shardar Mohammed
4f84731a0a hwpm: remove unused vmas parameter from get_user_pages()
Remove the unused vmas parameter from get_user_pages(), based on the
following change in the core kernel.

=====
    Upstream commit "54d020692b34"

    mm/gup: remove unused vmas parameter from get_user_pages()

    Patch series "remove the vmas parameter from GUP APIs", v6.

    (pin_/get)_user_pages[_remote]() each provide an optional output parameter
    for an array of VMA objects associated with each page in the input range.

    These provide the means for VMAs to be returned, as long as mm->mmap_lock
    is never released during the GUP operation (i.e.  the internal flag
    FOLL_UNLOCKABLE is not specified).

    In addition, these VMAs can only be accessed with the mmap_lock held and
    become invalidated the moment it is released.

    The vast majority of invocations do not use this functionality and of
    those that do, all but one case retrieve a single VMA to perform checks
    upon.

    It is not egregious in the single VMA cases to simply replace the
    operation with a vma_lookup().  In these cases we duplicate the (fast)
    lookup on a slow path already under the mmap_lock, abstracted to a new
    get_user_page_vma_remote() inline helper function which also performs
    error checking and reference count maintenance.

    The special case is io_uring, where io_pin_pages() specifically needs to
    assert that the VMAs underlying the range do not result in broken
    long-term GUP file-backed mappings.

    As GUP now internally asserts that FOLL_LONGTERM mappings are not
    file-backed in a broken fashion (i.e.  requiring dirty tracking) - as
    implemented in "mm/gup: disallow FOLL_LONGTERM GUP-nonfast writing to
    file-backed mappings" - this logic is no longer required and so we can
    simply remove it altogether from io_uring.

    Eliminating the vmas parameter eliminates an entire class of dangling
    pointer errors that might have occurred should the lock have been
    incorrectly released.

    In addition, the API is simplified and now clearly expresses what it is
    intended for - applying the specified GUP flags and (if pinning) returning
    pinned pages.

    This change additionally opens the door to further potential improvements
    in GUP and the possible marrying of disparate code paths.

    I have run this series against gup_test with no issues.

    Thanks to Matthew Wilcox for suggesting this refactoring!

    This patch (of 6):

    No invocation of get_user_pages() uses the vmas parameter, so remove it.

    The GUP API is confusing and caveated.  Recent changes have done much to
    improve that, however there is more we can do.  Exporting vmas is a prime
    target as the caller has to be extremely careful to preclude their use
    after the mmap_lock has expired or otherwise be left with dangling
    pointers.

    Removing the vmas parameter focuses the GUP functions upon their primary
    purpose - pinning (and outputting) pages as well as performing the actions
    implied by the input flags.

    This is part of a patch series aiming to remove the vmas parameter
    altogether.

    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
=====

Bug 4276500

Change-Id: Ie2833b7aa4e8fef1362694de6e8a27bba553e3d4
Signed-off-by: Shardar Mohammed <smohammed@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2978634
(cherry picked from commit 85732c9084)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/3015717
Tested-by: Jonathan Hunter <jonathanh@nvidia.com>
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-11-17 00:24:21 -08:00
Jon Hunter
4b2fd8250d tegra: hwpm: Use conftest for 'struct class' changes
In Linux v6.2, the 'struct class.devnode()' function was updated to take
a 'const struct device *' instead of a 'struct device *'. A test has
been added to the conftest script to check for this, so use the
definition generated by conftest, rather than the kernel version, to
select the appropriate function prototype.

This is beneficial when working with 3rd-party Linux kernels that may
have back-ported upstream changes, where kernel version checks do not
work.
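
A hedged sketch of the conftest-based selection (the macro name
NV_CLASS_DEVNODE_HAS_CONST_DEV_ARG is an assumption, and the devnode body is
only a placeholder):

    #include <linux/device.h>

    /* Pick the devnode() prototype matching the running kernel. */
    #if defined(NV_CLASS_DEVNODE_HAS_CONST_DEV_ARG)
    static char *hwpm_devnode(const struct device *dev, umode_t *mode)
    #else
    static char *hwpm_devnode(struct device *dev, umode_t *mode)
    #endif
    {
            if (mode)
                    *mode = 0600;   /* placeholder permissions */
            return NULL;            /* keep the default /dev name */
    }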

Bug 4119327

Change-Id: I751b7401adee7b337192e255253b974cbd803642
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2991966
Reviewed-by: Vedashree Vidwans <vvidwans@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-10-10 11:54:25 -07:00
Jon Hunter
971645b49c tegra: hwpm: Add compilation flag for iosys-map.h
Whether the header file iosys-map.h is present in the kernel is
currently determined by kernel version. However, for Linux v5.15,
iosys-map.h has been backported in order to support simple-framebuffer
for early display. Therefore, we cannot rely on the kernel version to
indicate whether iosys-map.h is present; the same is true for 3rd-party
Linux kernels that backport changes. Fix this by adding a compile-time
flag that the conftest script sets when the header is present.
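
A minimal sketch of how such a flag would be consumed (the flag name
NV_LINUX_IOSYS_MAP_H_PRESENT is an assumption about what the conftest script
emits, and the fallback include is illustrative only):

    /* Include whichever mapping helper header the kernel provides. */
    #if defined(NV_LINUX_IOSYS_MAP_H_PRESENT)
    #include <linux/iosys-map.h>    /* v5.18+, or kernels that backported it */
    #else
    #include <linux/dma-buf-map.h>  /* older name of the same helper */
    #endif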

Bug 4119327
Bug 4228080

Change-Id: I9de07a4615a6c9da504b36750c48e73e200da301
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2974080
(cherry picked from commit 54ce334474)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2946966
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
2023-09-19 06:54:29 -07:00
Vedashree Vidwans
d69752c349 tegra: hwpm: enable video unit profiling
Enable HWPM profiling for VIC, OFA and NVENC video units in external
builds.

Bug 4158291

Change-Id: I09589bbd70de2f1061dc91926f689266f36d062c
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2914401
(cherry picked from commit f8c37a91ff73c951426a679c1b87684c2e38b916)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2928956
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2023-07-06 14:56:59 -07:00
Vedashree Vidwans
6b78463b8a tegra: hwpm: include all ip files
The config flags defined in the Kconfig file are not available or used
with OOT kernel builds. To support all kernel versions, HWPM compiles
independently of the CONFIG_TEGRA_SOC_HWPM flag. The same applies to the
IP config flags, which are likewise not supported. Hence, include the
HWPM IP files irrespective of IP config flag status.

For OOT builds, use tegra_is_hypervisor_mode() instead of the static
function defined in the HWPM driver.
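
A rough sketch of the OOT fallback described above (the header path and the
CONFIG_TEGRA_HWPM_OOT guard are assumptions; the in-tree branch is a
placeholder, not the driver's real detection logic):

    #include <linux/types.h>
    #ifdef CONFIG_TEGRA_HWPM_OOT
    #include <soc/tegra/virt/hv-ivc.h>  /* assumed home of tegra_is_hypervisor_mode() */
    #endif

    static bool tegra_hwpm_is_virtualized(void)
    {
    #ifdef CONFIG_TEGRA_HWPM_OOT
            /* OOT builds call the helper exported by the SoC support code. */
            return tegra_is_hypervisor_mode();
    #else
            /* In-tree builds keep the driver-internal check (placeholder). */
            return false;
    #endif
    }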

Bug 4061775

Change-Id: Ifab4ad5c7c652a4ad17820a82b363e92280fdd1a
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2918870
(cherry picked from commit 91d75567c0)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-hwpm/+/2928930
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Vasuki Shankar <vasukis@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
2023-07-06 14:56:54 -07:00
198 changed files with 1027 additions and 44163 deletions

View File

@@ -1,15 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
package(
default_visibility = [
"//visibility:public",
],
)
filegroup(
name = "hwpm_headers",
srcs = glob([
"include/**/*.h", ]),
)

View File

@@ -1,15 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# The makefile to install public headers on desired path.
# Get the path of Makefile
ROOT_DIR:=$(shell dirname $(realpath $(firstword $(MAKEFILE_LIST))))
headers_install:
mkdir -p $(INSTALL_HDR_PATH); \
rsync -mrl --include='*/' --include='*\.h' --exclude='*' \
$(ROOT_DIR)/include $(INSTALL_HDR_PATH)
clean:
rm -rf $(INSTALL_HDR_PATH)

View File

@@ -1,41 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
load("//build/kernel/kleaf:kernel.bzl", "kernel_module")
package(
default_visibility = [
"//visibility:public",
],
)
filegroup(
name = "headers",
srcs = glob([
] + [
"Makefile.hwpm.sources",
"Makefile.t234.sources",
"Makefile.t264.sources",
"Makefile.th500.sources",
"Makefile.common.sources",
"Makefile.linux.sources",
"Makefile.th500.soc.sources",
]),
)
kernel_module(
name = "hwpm",
srcs = glob([
"**/*.c",
"**/*.h",
]) + [
":headers",
"//hwpm:hwpm_headers",
"//nvidia-oot/scripts/conftest:conftest_headers",
],
outs = [
"nvhwpm.ko",
],
kernel_build = "//nvidia-build/kleaf:tegra_android",
)

View File

@@ -1,6 +1,6 @@
config TEGRA_SOC_HWPM
bool "Tegra SOC HWPM driver"
default y
tristate "Tegra SOC HWPM driver"
default m
help
The SOC HWPM driver enables performance monitoring for various Tegra
IPs.
@@ -10,18 +10,4 @@ config TEGRA_T234_HWPM
depends on TEGRA_SOC_HWPM && ARCH_TEGRA_23x_SOC
default y if (TEGRA_SOC_HWPM && ARCH_TEGRA_23x_SOC)
help
T23x performance monitoring driver.
config TEGRA_TH500_HWPM
bool "Tegra TH500 HWPM driver"
depends on TEGRA_SOC_HWPM && ARCH_TEGRA_TH500_SOC
default y if (TEGRA_SOC_HWPM && ARCH_TEGRA_TH500_SOC)
help
TH500 performance monitoring driver.
config TEGRA_T264_HWPM
bool "Tegra T264 HWPM driver"
depends on TEGRA_SOC_HWPM && ARCH_TEGRA_T264_SOC
default y if (TEGRA_SOC_HWPM && ARCH_TEGRA_T264_SOC)
help
T264 performance monitoring driver.
T23x performance monitoring driver.

View File

@@ -1,24 +1,4 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2022-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
# Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
#
# Tegra SOC HWPM
#
@@ -29,61 +9,41 @@ ifeq ($(origin srctree.hwpm), undefined)
srctree.hwpm := $(abspath $(shell dirname $(lastword $(MAKEFILE_LIST))))/../../..
endif
ifdef CONFIG_TEGRA_KLEAF_BUILD
srctree.nvconftest := $(abspath $(NV_BUILD_KERNEL_NVCONFTEST_OUT))
endif
CONFIG_TEGRA_SOC_HWPM := y
ccflags-y += -DCONFIG_TEGRA_SOC_HWPM
CONFIG_TEGRA_T234_HWPM := y
ccflags-y += -DCONFIG_TEGRA_T234_HWPM
NVHWPM_OBJ = m
# For OOT builds, set required config flags
ifeq ($(CONFIG_TEGRA_OOT_MODULE),m)
NVHWPM_OBJ = m
CONFIG_TEGRA_HWPM_OOT := y
ccflags-y += -DCONFIG_TEGRA_HWPM_OOT
CONFIG_TEGRA_FUSE_UPSTREAM := y
ccflags-y += -DCONFIG_TEGRA_FUSE_UPSTREAM
ifneq ($(srctree.nvconftest),)
ccflags-y += -DCONFIG_TEGRA_HWPM_CONFTEST
ccflags-y += -I$(srctree.nvconftest)
endif
LINUXINCLUDE += -I$(srctree.nvconftest)
LINUXINCLUDE += -I$(srctree.hwpm)/include
LINUXINCLUDE += -I$(srctree.hwpm)/drivers/tegra/hwpm/include
LINUXINCLUDE += -I$(srctree.hwpm)/drivers/tegra/hwpm
else # CONFIG_TEGRA_OOT_MODULE != m
NVHWPM_OBJ = y
endif # CONFIG_TEGRA_OOT_MODULE
# Include paths
else
ccflags-y += -I$(srctree.nvidia)/include
ccflags-y += -I$(srctree.hwpm)/include
ccflags-y += -I$(srctree.hwpm)/drivers/tegra/hwpm/include
ccflags-y += -I$(srctree.hwpm)/drivers/tegra/hwpm
# Validate build config to add HWPM module support
endif
ifeq ($(NV_BUILD_CONFIGURATION_IS_SAFETY),1)
nvhwpm-objs := tegra_hwpm_mock.o
else ifeq ($(CONFIG_TEGRA_LINUX_PROD),1)
nvhwpm-objs := tegra_hwpm_mock.o
else ifneq ($(CONFIG_ARCH_TEGRA),y)
nvhwpm-objs := tegra_hwpm_mock.o
obj-${NVHWPM_OBJ} += tegra_hwpm_mock.o
else
# Add required objects to nvhwpm object variable
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.hwpm.sources
endif
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.sources
obj-${NVHWPM_OBJ} += nvhwpm.o
ifdef CONFIG_TEGRA_KLEAF_BUILD
KERNEL_SRC ?= /lib/modules/$(shell uname -r)/build
M ?= $(shell pwd)
modules modules_install:
make -C $(KERNEL_SRC) M=$(M) $(ccflags) CONFIG_TEGRA_OOT_MODULE=m $(@)
clean:
make -C $(KERNEL_SRC) M=$(M) CONFIG_TEGRA_OOT_MODULE=m clean
else
all:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) $(ccflags) CONFIG_TEGRA_OOT_MODULE=m modules
clean:
make -C /lib/modules/$(shell uname -r)/build M=$(PWD) CONFIG_TEGRA_OOT_MODULE=m clean
endif

View File

@@ -1,28 +1,9 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM Common sources
#
# SPDX-License-Identifier: GPL-2.0
nvhwpm-common-objs += common/allowlist.o
nvhwpm-common-objs += common/aperture.o
nvhwpm-common-objs += common/ip.o

View File

@@ -1,98 +0,0 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM Sources
#
# Based on build config, set HWPM flags
# Flag indicates internal build config
ifeq ($(NV_BUILD_CONFIGURATION_IS_EXTERNAL), 0)
CONFIG_HWPM_BUILD_INTERNAL := y
endif
# T234 supported on all valid platforms
CONFIG_TEGRA_HWPM_T234 := y
# TH500 supported only on OOT config
ifeq ($(CONFIG_TEGRA_HWPM_OOT),y)
ifeq ($(NV_BUILD_CONFIGURATION_EXPOSING_TH50X), 1)
CONFIG_TEGRA_HWPM_TH500 := y
endif
endif
# Set HWPM next path and include sources as per build config
ifeq ($(CONFIG_TEGRA_HWPM_OOT),y)
srctree.hwpm-next := ${srctree.hwpm}
# Include next sources only if Makefile.hwpm-next.sources exists
ifneq ($(wildcard ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources),)
include ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources
nvhwpm-objs += ${nvhwpm-next-objs}
endif
else # Non-OOT kernel
ifeq ($(origin NV_SOURCE), undefined)
ifeq ($(origin TEGRA_TOP), undefined)
# No reference to hwpm-next repo
else
srctree.hwpm-next := ${TEGRA_TOP}/kernel/hwpm-next
endif
else
srctree.hwpm-next := ${NV_SOURCE}/kernel/hwpm-next
endif
ifneq ($(origin srctree.hwpm-next), undefined)
ifneq ($(wildcard ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources),)
include ${srctree.hwpm-next}/drivers/tegra/hwpm/Makefile.hwpm-next.sources
nvhwpm-objs += ${nvhwpm-next-objs}
endif
endif
endif # CONFIG_TEGRA_HWPM_OOT
# Include common files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.common.sources
nvhwpm-objs += ${nvhwpm-common-objs}
# Include linux files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.linux.sources
nvhwpm-objs += ${nvhwpm-linux-objs}
ifeq ($(CONFIG_TEGRA_HWPM_T234),y)
ccflags-y += -DCONFIG_TEGRA_HWPM_T234
# Include T234 files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.t234.sources
nvhwpm-objs += ${nvhwpm-t234-objs}
endif
ifeq ($(CONFIG_TEGRA_HWPM_TH500),y)
ccflags-y += -DCONFIG_TEGRA_HWPM_TH500
# Include TH500 files
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.th500.sources
nvhwpm-objs += ${nvhwpm-th500-objs}
endif
# Include T264 files
CONFIG_TEGRA_T264_HWPM := y
ccflags-y += -DCONFIG_TEGRA_T264_HWPM
include ${srctree.hwpm}/drivers/tegra/hwpm/Makefile.t264.sources
nvhwpm-objs += ${nvhwpm-t264-objs}

View File

@@ -1,28 +1,9 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM Linux Sources
#
# SPDX-License-Identifier: GPL-2.0
nvhwpm-linux-objs += os/linux/aperture_utils.o
nvhwpm-linux-objs += os/linux/clk_rst_utils.o
nvhwpm-linux-objs += os/linux/driver.o

View File

@@ -0,0 +1,18 @@
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Tegra SOC HWPM Sources
#
# Include common files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.common.sources
nvhwpm-objs += ${nvhwpm-common-objs}
# Include linux files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.linux.sources
nvhwpm-objs += ${nvhwpm-linux-objs}
ifeq ($(CONFIG_TEGRA_T234_HWPM),y)
# Include T234 files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.t234.sources
nvhwpm-objs += ${nvhwpm-t234-objs}
endif

View File

@@ -1,29 +1,10 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM T234 sources
#
ifeq ($(CONFIG_TEGRA_HWPM_T234),y)
# SPDX-License-Identifier: GPL-2.0
ifeq ($(CONFIG_TEGRA_T234_HWPM),y)
nvhwpm-t234-objs += hal/t234/t234_aperture.o
nvhwpm-t234-objs += hal/t234/t234_interface.o
nvhwpm-t234-objs += hal/t234/t234_ip.o
@@ -50,18 +31,47 @@ nvhwpm-t234-objs += hal/t234/ip/pma/t234_pma.o
# driver and share required function pointers
# - option 2: map perfmux register address in HWPM driver
# Option 1 is the preferred solution. However, IP drivers have yet to
# implement the interface. HWPM driver implements option 2 for validation of such IPs.
# If an IP is forced to available status from HWPM driver perspective, it is the user's
# responsibility to ensure that the IP is in fact present in the SOC config and
# unpowergated before running any HWPM experiments.
# implement the interface. Such IPs can be force enabled from HWPM driver
# perspective (option 2). Marking an IP available forcefully requires the user
# to unpowergate the IP before running any HWPM experiments.
#
# Enable CONFIG_T234_HWPM_ALLOW_FORCE_ENABLE for internal builds.
# Note: We should work towards removing force enabling IP.
# Note: We should work towards removing force enable flag dependency.
#
ifeq ($(CONFIG_HWPM_BUILD_INTERNAL),y)
ifeq ($(NV_BUILD_CONFIGURATION_IS_EXTERNAL),0)
ccflags-y += -DCONFIG_T234_HWPM_ALLOW_FORCE_ENABLE
endif
# Include non-prod IPs if minimal build is not enabled for validation
#
# Currently, PVA, DLA and MSS channel are the IPs supported
# for performance metrics in external builds.
# Define CONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE flag.
#
ifeq ($(NV_BUILD_CONFIGURATION_IS_EXTERNAL),1)
CONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE=y
ccflags-y += -DCONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE
endif
ccflags-y += -DCONFIG_T234_HWPM_IP_NVDLA
nvhwpm-t234-objs += hal/t234/ip/nvdla/t234_nvdla.o
ccflags-y += -DCONFIG_T234_HWPM_IP_PVA
nvhwpm-t234-objs += hal/t234/ip/pva/t234_pva.o
ccflags-y += -DCONFIG_T234_HWPM_IP_MSS_CHANNEL
nvhwpm-t234-objs += hal/t234/ip/mss_channel/t234_mss_channel.o
ccflags-y += -DCONFIG_T234_HWPM_IP_NVENC
nvhwpm-t234-objs += hal/t234/ip/nvenc/t234_nvenc.o
ccflags-y += -DCONFIG_T234_HWPM_IP_OFA
nvhwpm-t234-objs += hal/t234/ip/ofa/t234_ofa.o
ccflags-y += -DCONFIG_T234_HWPM_IP_VIC
nvhwpm-t234-objs += hal/t234/ip/vic/t234_vic.o
# Include other IPs if minimal build is not enabled.
ifneq ($(CONFIG_TEGRA_HWPM_MINIMAL_IP_ENABLE),y)
ccflags-y += -DCONFIG_T234_HWPM_IP_DISPLAY
nvhwpm-t234-objs += hal/t234/ip/display/t234_display.o
@@ -91,29 +101,7 @@ nvhwpm-t234-objs += hal/t234/ip/scf/t234_scf.o
ccflags-y += -DCONFIG_T234_HWPM_IP_VI
nvhwpm-t234-objs += hal/t234/ip/vi/t234_vi.o
endif
# Below IPs are enabled for all builds
ccflags-y += -DCONFIG_T234_HWPM_IP_NVDLA
nvhwpm-t234-objs += hal/t234/ip/nvdla/t234_nvdla.o
ccflags-y += -DCONFIG_T234_HWPM_IP_PVA
nvhwpm-t234-objs += hal/t234/ip/pva/t234_pva.o
ccflags-y += -DCONFIG_T234_HWPM_IP_MSS_CHANNEL
nvhwpm-t234-objs += hal/t234/ip/mss_channel/t234_mss_channel.o
ccflags-y += -DCONFIG_T234_HWPM_IP_NVENC
nvhwpm-t234-objs += hal/t234/ip/nvenc/t234_nvenc.o
ccflags-y += -DCONFIG_T234_HWPM_IP_OFA
nvhwpm-t234-objs += hal/t234/ip/ofa/t234_ofa.o
ccflags-y += -DCONFIG_T234_HWPM_IP_VIC
nvhwpm-t234-objs += hal/t234/ip/vic/t234_vic.o
ccflags-y += -DCONFIG_T234_HWPM_IP_NVDEC
nvhwpm-t234-objs += hal/t234/ip/nvdec/t234_nvdec.o
endif
endif

View File

@@ -1,97 +0,0 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM T264 sources
#
ifeq ($(CONFIG_TEGRA_HWPM_T234),y)
nvhwpm-t264-objs += hal/t264/t264_aperture.o
nvhwpm-t264-objs += hal/t264/t264_interface.o
nvhwpm-t264-objs += hal/t264/t264_ip.o
nvhwpm-t264-objs += hal/t264/t264_mem_mgmt.o
nvhwpm-t264-objs += hal/t264/t264_resource.o
nvhwpm-t264-objs += hal/t264/t264_regops_allowlist.o
#
# RTR/PMA are HWPM IPs and can be enabled by default
#
nvhwpm-t264-objs += hal/t264/ip/pma/t264_pma.o
nvhwpm-t264-objs += hal/t264/ip/rtr/t264_rtr.o
#
# One of the HWPM components is a perfmux. Perfmux registers belong to the
# IP domain. There are 2 ways of accessing perfmux registers
# - option 1: implement HWPM <-> IP interface. IP drivers register with HWPM
# driver and share required function pointers
# - option 2: map perfmux register address in HWPM driver
# Option 1 is the preferred solution. However, IP drivers have yet to
# implement the interface. HWPM driver implements option 2 for validation of such IPs.
# If an IP is forced to available status from HWPM driver perspective, it is the user's
# responsibility to ensure that the IP is in fact present in the SOC config and
# unpowergated before running any HWPM experiments.
#
# Enable CONFIG_T264_HWPM_ALLOW_FORCE_ENABLE for internal builds.
# Note: We should work towards removing force enabling IP.
#
ifeq ($(CONFIG_HWPM_BUILD_INTERNAL),y)
ccflags-y += -DCONFIG_T264_HWPM_ALLOW_FORCE_ENABLE
endif
# Below IPs are enabled for all builds
ccflags-y += -DCONFIG_T264_HWPM_IP_PVA
nvhwpm-t264-objs += hal/t264/ip/pva/t264_pva.o
ccflags-y += -DCONFIG_T264_HWPM_IP_MSS_CHANNEL
nvhwpm-t264-objs += hal/t264/ip/mss_channel/t264_mss_channel.o
ccflags-y += -DCONFIG_T264_HWPM_IP_VIC
nvhwpm-t264-objs += hal/t264/ip/vic/t264_vic.o
ccflags-y += -DCONFIG_T264_HWPM_IP_MSS_HUBS
nvhwpm-t264-objs += hal/t264/ip/mss_hubs/t264_mss_hubs.o
ccflags-y += -DCONFIG_T264_HWPM_IP_OCU
nvhwpm-t264-objs += hal/t264/ip/ocu/t264_ocu.o
ccflags-y += -DCONFIG_T264_HWPM_IP_SMMU
nvhwpm-t264-objs += hal/t264/ip/smmu/t264_smmu.o
ccflags-y += -DCONFIG_T264_HWPM_IP_UCF_MSW
nvhwpm-t264-objs += hal/t264/ip/ucf_msw/t264_ucf_msw.o
ccflags-y += -DCONFIG_T264_HWPM_IP_UCF_PSW
nvhwpm-t264-objs += hal/t264/ip/ucf_psw/t264_ucf_psw.o
ccflags-y += -DCONFIG_T264_HWPM_IP_UCF_CSW
nvhwpm-t264-objs += hal/t264/ip/ucf_csw/t264_ucf_csw.o
ccflags-y += -DCONFIG_T264_HWPM_IP_CPU
nvhwpm-t264-objs += hal/t264/ip/cpu/t264_cpu.o
ccflags-y += -DCONFIG_T264_HWPM_IP_VI
nvhwpm-t264-objs += hal/t264/ip/vi/t264_vi.o
ccflags-y += -DCONFIG_T264_HWPM_IP_ISP
nvhwpm-t264-objs += hal/t264/ip/isp/t264_isp.o
endif

View File

@@ -1,101 +0,0 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM TH500 SOC sources
#
ifeq ($(CONFIG_TEGRA_HWPM_TH500),y)
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_aperture.o
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_mem_mgmt.o
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_regops_allowlist.o
nvhwpm-th500-soc-objs += hal/th500/soc/th500_soc_resource.o
#
# Control IP config
# To disable an IP config in compilation, add condition for both
# IP config flag and IP specific .o file.
#
#
# RTR/PMA are HWPM IPs and can be enabled by default
#
nvhwpm-th500-soc-objs += hal/th500/soc/ip/rtr/th500_rtr.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pma/th500_pma.o
#
# One of the HWPM components is a perfmux. Perfmux registers belong to the
# IP domain. There are 2 ways of accessing perfmux registers
# - option 1: implement HWPM <-> IP interface. IP drivers register with HWPM
# driver and share required function pointers
# - option 2: map perfmux register address in HWPM driver
# Option 1 is the preferred solution. However, IP drivers have yet to
# implement the interface. HWPM driver implements option 2 for validation of such IPs.
# If an IP is forced to available status from HWPM driver perspective, it is the user's
# responsibility to ensure that the IP is in fact present in the SOC config and
# unpowergated before running any HWPM experiments.
#
# Enable CONFIG_HWPM_TH500_ALLOW_FORCE_ENABLE for internal builds.
# Note: We should work towards removing force enabling IP.
#
ifeq ($(CONFIG_HWPM_BUILD_INTERNAL),y)
ccflags-y += -DCONFIG_TH500_HWPM_ALLOW_FORCE_ENABLE
ccflags-y += -DCONFIG_TH500_HWPM_IP_MSS_CHANNEL
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mss_channel/th500_mss_channel.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_C2C
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c2c/th500_c2c.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_SMMU
nvhwpm-th500-soc-objs += hal/th500/soc/ip/smmu/th500_smmu.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_CL2
nvhwpm-th500-soc-objs += hal/th500/soc/ip/cl2/th500_cl2.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_C_NVLINK
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c_nvlink/th500_nvlrx.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c_nvlink/th500_nvltx.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/c_nvlink/th500_nvlctrl.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MSS_HUB
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mss_hub/th500_mss_hub.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_SOC
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_soc/th500_mcf_soc.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_C2C
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_c2c/th500_mcf_c2c.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_CLINK
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_clink/th500_mcf_clink.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_MCF_CORE
nvhwpm-th500-soc-objs += hal/th500/soc/ip/mcf_core/th500_mcf_core.o
ccflags-y += -DCONFIG_TH500_HWPM_IP_PCIE
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pcie/th500_pcie_xalrc.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pcie/th500_pcie_xtlrc.o
nvhwpm-th500-soc-objs += hal/th500/soc/ip/pcie/th500_pcie_xtlq.o
endif # CONFIG_HWPM_BUILD_INTERNAL=y
endif

View File

@@ -1,34 +0,0 @@
# -*- mode: makefile -*-
#
# Copyright (c) 2023-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
# Tegra SOC HWPM TH500 sources
#
ifeq ($(CONFIG_TEGRA_HWPM_TH500),y)
nvhwpm-th500-objs += hal/th500/th500_interface.o
nvhwpm-th500-objs += hal/th500/th500_ip.o
# Include TH500 SOC files
include $(srctree.hwpm)/drivers/tegra/hwpm/Makefile.th500.soc.sources
nvhwpm-th500-objs += $(nvhwpm-th500-soc-objs)
endif

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -213,35 +213,12 @@ static int tegra_hwpm_alloc_dynamic_inst_element_array(
return 0;
}
/* This is for IP that is pre-configured with instance overlimit. */
if (inst_a_info->islots_overlimit == true) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP inst range(0x%llx-0x%llx) a_type = %d inst_slots %d"
"forced over limit, skip allocating dynamic array",
(unsigned long long)inst_a_info->range_start,
(unsigned long long)inst_a_info->range_end,
a_type, inst_a_info->inst_slots);
return 0;
}
ip_element_range = tegra_hwpm_safe_add_u64(
tegra_hwpm_safe_sub_u64(inst_a_info->range_end,
inst_a_info->range_start), 1ULL);
inst_a_info->inst_slots = tegra_hwpm_safe_cast_u64_to_u32(
ip_element_range / inst_a_info->inst_stride);
if (inst_a_info->inst_slots > TEGRA_HWPM_APERTURE_SLOTS_LIMIT) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP inst range(0x%llx-0x%llx) a_type = %d inst_slots %d"
"over limit, skip allocating dynamic array",
(unsigned long long)inst_a_info->range_start,
(unsigned long long)inst_a_info->range_end,
a_type, inst_a_info->inst_slots);
inst_a_info->islots_overlimit = true;
/* This is a valid case */
return 0;
}
inst_a_info->inst_arr = tegra_hwpm_kcalloc(
hwpm, inst_a_info->inst_slots, sizeof(struct hwpm_ip_inst *));
if (inst_a_info->inst_arr == NULL) {
@@ -291,14 +268,14 @@ fail:
static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 s_inst_idx, u32 a_type, u32 s_element_idx)
u32 static_inst_idx, u32 a_type, u32 static_aperture_idx)
{
int err = 0, ret = 0;
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
&chip_ip->ip_inst_static_array[static_inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
struct hwpm_ip_aperture *element =
&e_info->element_static_array[s_element_idx];
&e_info->element_static_array[static_aperture_idx];
u64 element_offset = 0ULL;
u32 idx = 0U;
u32 reg_val = 0U;
@@ -307,14 +284,6 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
switch (iia_func) {
case TEGRA_HWPM_INIT_IP_STRUCTURES:
if (e_info->eslots_overlimit) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP %d s_inst_idx %d a_type %u s_element_idx %u"
"Skip using dynamic element array",
ip_idx, s_inst_idx, a_type, s_element_idx);
break;
}
/* Compute element offset from element range start */
element_offset = tegra_hwpm_safe_sub_u64(
element->start_abs_pa, e_info->range_start);
@@ -326,10 +295,9 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx static idx %d == dynamic idx %d",
ip_idx, s_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa,
s_element_idx, idx);
ip_idx, static_inst_idx, a_type,
element->element_type, (unsigned long long)element->start_abs_pa,
static_aperture_idx, idx);
/* Set element slot pointer */
e_info->element_arr[idx] = element;
@@ -352,24 +320,10 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
}
}
if (hwpm->fake_registers_enabled) {
/*
* In this case, HWPM will allocate memory to simulate
* IP perfmux address space. Hence, the perfmux will
* always be available.
* Indicate this by setting ret = 0.
*/
ret = 0;
} else {
/*
* Validate perfmux availability by reading 1st alist offset
*/
ret = tegra_hwpm_regops_readl(hwpm, ip_inst, element,
tegra_hwpm_safe_add_u64(element->start_abs_pa,
element->alist[0U].reg_offset),
&reg_val);
}
/* Validate perfmux availability by reading 1st alist offset */
ret = tegra_hwpm_regops_readl(hwpm, ip_inst, element,
tegra_hwpm_safe_add_u64(element->start_abs_pa,
element->alist[0U].reg_offset), &reg_val);
if (ret != 0) {
/*
* If an IP element is unavailable, perfmux register
@@ -395,7 +349,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_allowlist,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -408,7 +362,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
} else {
tegra_hwpm_err(hwpm, "IP %d"
" element type %d static_idx %d NULL alist",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
}
break;
case TEGRA_HWPM_COMBINE_ALIST:
@@ -417,7 +371,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_allowlist,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -428,7 +382,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_err(hwpm,
"IP %d element type %d static_idx %d"
" alist copy failed",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
return err;
}
break;
@@ -438,7 +392,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_reserve_resource,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reservable",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -447,7 +401,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d static_idx %d reserve failed",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
goto fail;
}
break;
@@ -458,7 +412,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_release_resource,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -467,7 +421,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (ret != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d release failed",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
}
break;
case TEGRA_HWPM_BIND_RESOURCES:
@@ -476,7 +430,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_bind,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -486,7 +440,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d zero regs failed",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
goto fail;
}
@@ -494,7 +448,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d enable failed",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
goto fail;
}
break;
@@ -504,7 +458,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_bind,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -513,8 +467,8 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
err = tegra_hwpm_element_disable(hwpm, element);
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d disable failed",
ip_idx, a_type, s_element_idx);
" type %d idx %d enable failed",
ip_idx, a_type, static_aperture_idx);
goto fail;
}
@@ -523,7 +477,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d element"
" type %d idx %d zero regs failed",
ip_idx, a_type, s_element_idx);
ip_idx, a_type, static_aperture_idx);
goto fail;
}
break;
@@ -533,7 +487,7 @@ static int tegra_hwpm_func_single_element(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_release,
"IP %d inst %d a_type %d element type %d"
" start_addr 0x%llx not reserved",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
element->element_type,
(unsigned long long)element->start_abs_pa);
return 0;
@@ -553,13 +507,13 @@ fail:
static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 s_inst_idx, u32 a_type)
u32 static_inst_idx, u32 a_type)
{
u32 static_idx = 0U, idx = 0U;
u64 inst_element_range = 0ULL;
int err = 0;
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
&chip_ip->ip_inst_static_array[static_inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
tegra_hwpm_fn(hwpm, " ");
@@ -569,23 +523,7 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
/* no a_type elements in this IP */
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"No a_type = %d elements in IP %d stat inst %d",
a_type, ip_idx, s_inst_idx);
return 0;
}
/**
* This is for IP instance that is pre-configured with element
* overlimit.
*/
if (e_info->eslots_overlimit == true) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"iia_func %d IP %d static inst %d a_type %d"
" element range(0x%llx-0x%llx) element_slots %d "
"force over limit, skip allocating dynamic array",
iia_func, ip_idx, s_inst_idx, a_type,
(unsigned long long)e_info->range_start,
(unsigned long long)e_info->range_end,
e_info->element_slots);
a_type, ip_idx, static_inst_idx);
return 0;
}
@@ -595,20 +533,6 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
e_info->element_slots = tegra_hwpm_safe_cast_u64_to_u32(
inst_element_range / e_info->element_stride);
if (e_info->element_slots > TEGRA_HWPM_APERTURE_SLOTS_LIMIT) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"iia_func %d IP %d static inst %d a_type %d"
" element range(0x%llx-0x%llx) element_slots %d "
"over limit, skip allocating dynamic array",
iia_func, ip_idx, s_inst_idx, a_type,
(unsigned long long)e_info->range_start,
(unsigned long long)e_info->range_end,
e_info->element_slots);
e_info->eslots_overlimit = true;
/* This is a valid case */
return 0;
}
e_info->element_arr = tegra_hwpm_kcalloc(
hwpm, e_info->element_slots,
sizeof(struct hwpm_ip_aperture *));
@@ -626,7 +550,7 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
"iia_func %d IP %d static inst %d a_type %d"
" element range(0x%llx-0x%llx) element_slots %d "
"num_element_per_inst %d",
iia_func, ip_idx, s_inst_idx, a_type,
iia_func, ip_idx, static_inst_idx, a_type,
(unsigned long long)e_info->range_start,
(unsigned long long)e_info->range_end,
e_info->element_slots, e_info->num_element_per_inst);
@@ -643,11 +567,11 @@ static int tegra_hwpm_func_all_elements_of_type(struct tegra_soc_hwpm *hwpm,
static_idx++) {
err = tegra_hwpm_func_single_element(
hwpm, func_args, iia_func, ip_idx,
chip_ip, s_inst_idx, a_type, static_idx);
chip_ip, static_inst_idx, a_type, static_idx);
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d a_type %d idx %d func %d failed",
ip_idx, s_inst_idx, a_type,
ip_idx, static_inst_idx, a_type,
static_idx, iia_func);
goto fail;
}
@@ -667,7 +591,7 @@ fail:
static int tegra_hwpm_func_all_elements(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 s_inst_idx)
u32 static_inst_idx)
{
u32 a_type;
int err = 0;
@@ -676,11 +600,11 @@ static int tegra_hwpm_func_all_elements(struct tegra_soc_hwpm *hwpm,
for (a_type = 0U; a_type < TEGRA_HWPM_APERTURE_TYPE_MAX; a_type++) {
err = tegra_hwpm_func_all_elements_of_type(hwpm, func_args,
iia_func, ip_idx, chip_ip, s_inst_idx, a_type);
iia_func, ip_idx, chip_ip, static_inst_idx, a_type);
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d a_type %d func %d failed",
ip_idx, s_inst_idx, a_type, iia_func);
ip_idx, static_inst_idx, a_type, iia_func);
goto fail;
}
}
@@ -693,13 +617,13 @@ fail:
static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_func_args *func_args,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip,
u32 s_inst_idx)
u32 static_inst_idx)
{
int err = 0;
u32 a_type, idx = 0U;
u64 inst_offset = 0ULL;
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
&chip_ip->ip_inst_static_array[static_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info = NULL;
struct hwpm_ip_element_info *e_info = NULL;
@@ -713,15 +637,8 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
if (inst_a_info->range_end == 0ULL) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"No a_type = %d elements in IP %d",
a_type, ip_idx);
continue;
}
if (inst_a_info->islots_overlimit) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_driver_init,
"IP %d s_inst_idx %d Skip using dynamic instance array",
ip_idx, s_inst_idx);
"No a_type = %d elements in IP %d",
a_type, ip_idx);
continue;
}
@@ -737,10 +654,8 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
"IP %d a_type %d inst range start 0x%llx"
"element range start 0x%llx"
" static inst idx %d == dynamic idx %d",
ip_idx, a_type,
(unsigned long long)inst_a_info->range_start,
(unsigned long long)e_info->range_start,
s_inst_idx, idx);
ip_idx, a_type, (unsigned long long)inst_a_info->range_start,
(unsigned long long)e_info->range_start, static_inst_idx, idx);
/* Set perfmux slot pointer */
inst_a_info->inst_arr[idx] = ip_inst;
@@ -759,17 +674,17 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d power mgmt disable failed",
ip_idx, s_inst_idx);
ip_idx, static_inst_idx);
goto fail;
}
}
/* Continue functionality for all apertures */
err = tegra_hwpm_func_all_elements(hwpm, func_args, iia_func,
ip_idx, chip_ip, s_inst_idx);
ip_idx, chip_ip, static_inst_idx);
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d inst %d func 0x%x failed",
ip_idx, s_inst_idx, iia_func);
ip_idx, static_inst_idx, iia_func);
goto fail;
}
@@ -792,7 +707,7 @@ static int tegra_hwpm_func_single_inst(struct tegra_soc_hwpm *hwpm,
if (err != 0) {
tegra_hwpm_err(hwpm,
"IP %d inst %d power mgmt enable failed",
ip_idx, s_inst_idx);
ip_idx, static_inst_idx);
goto fail;
}
}
@@ -806,22 +721,22 @@ static int tegra_hwpm_func_all_inst(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func, u32 ip_idx, struct hwpm_ip *chip_ip)
{
int err = 0, ret = 0;
u32 s_inst_idx = 0U;
u32 inst_idx = 0U;
unsigned long reserved_insts = 0UL, idx = 0UL;
tegra_hwpm_fn(hwpm, " ");
for (s_inst_idx = 0U; s_inst_idx < chip_ip->num_instances; s_inst_idx++) {
for (inst_idx = 0U; inst_idx < chip_ip->num_instances; inst_idx++) {
err = tegra_hwpm_func_single_inst(hwpm, func_args, iia_func,
ip_idx, chip_ip, s_inst_idx);
ip_idx, chip_ip, inst_idx);
if (err != 0) {
tegra_hwpm_err(hwpm, "IP %d inst %d func 0x%x failed",
ip_idx, s_inst_idx, iia_func);
ip_idx, inst_idx, iia_func);
goto fail;
}
if (iia_func == TEGRA_HWPM_RESERVE_GIVEN_RESOURCE) {
reserved_insts |= BIT(s_inst_idx);
reserved_insts |= BIT(inst_idx);
}
}
@@ -912,7 +827,7 @@ int tegra_hwpm_func_single_ip(struct tegra_soc_hwpm *hwpm,
}
break;
case TEGRA_HWPM_RELEASE_RESOURCES:
if (ip_idx == active_chip->get_rtr_int_idx()) {
if (ip_idx == active_chip->get_rtr_int_idx(hwpm)) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_release_resource,
"Router will be released later");
return 0;
@@ -1004,7 +919,7 @@ int tegra_hwpm_func_all_ip(struct tegra_soc_hwpm *hwpm,
func_args->full_alist_idx = 0ULL;
}
for (ip_idx = 0U; ip_idx < active_chip->get_ip_max_idx();
for (ip_idx = 0U; ip_idx < active_chip->get_ip_max_idx(hwpm);
ip_idx++) {
err = tegra_hwpm_func_single_ip(

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -30,15 +30,12 @@
#include <tegra_hwpm.h>
#include <hal/t234/t234_init.h>
#include <hal/th500/th500_init.h>
#include <hal/t264/t264_init.h>
#ifdef CONFIG_TEGRA_NEXT1_HWPM
#include <tegra_hwpm_next1_init.h>
#endif
#ifdef CONFIG_TEGRA_NEXT4_HWPM
#include <tegra_hwpm_next4_init.h>
#ifdef CONFIG_TEGRA_NEXT2_HWPM
#include <tegra_hwpm_next2_init.h>
#endif
static int tegra_hwpm_init_chip_ip_structures(struct tegra_soc_hwpm *hwpm,
@@ -65,48 +62,13 @@ static int tegra_hwpm_init_chip_ip_structures(struct tegra_soc_hwpm *hwpm,
break;
}
break;
#ifdef CONFIG_TEGRA_HWPM_TH500
case 0x50:
switch (chip_id_rev) {
case 0x0:
err = th500_hwpm_init_chip_info(hwpm);
break;
default:
tegra_hwpm_err(hwpm, "Chip 0x%x rev 0x%x not supported",
chip_id, chip_id_rev);
break;
}
break;
#endif
#ifdef CONFIG_TEGRA_T264_HWPM
case 0x26:
switch (chip_id_rev) {
case 0x4:
err = t264_hwpm_init_chip_info(hwpm);
break;
default:
tegra_hwpm_err(hwpm, "Chip 0x%x rev 0x%x not supported",
chip_id, chip_id_rev);
break;
}
break;
#endif
#ifdef CONFIG_TEGRA_NEXT4_HWPM
case 0x41:
switch (chip_id_rev) {
case 0x0:
err = tegra_hwpm_next4_init_chip_ip_structures(
hwpm, chip_id, chip_id_rev);
break;
default:
tegra_hwpm_err(hwpm, "Chip 0x%x rev 0x%x not supported",
chip_id, chip_id_rev);
break;
}
break;
#endif
default:
#ifdef CONFIG_TEGRA_NEXT2_HWPM
err = tegra_hwpm_next2_init_chip_ip_structures(
hwpm, chip_id, chip_id_rev);
#else
tegra_hwpm_err(hwpm, "Chip 0x%x not supported", chip_id);
#endif
break;
}
@@ -132,7 +94,6 @@ int tegra_hwpm_init_sw_components(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_fn(hwpm, " ");
hwpm->dbg_mask = TEGRA_HWPM_DEFAULT_DBG_MASK;
hwpm->dbg_skip_alist = false;
err = tegra_hwpm_init_chip_ip_structures(hwpm, chip_id, chip_id_rev);
if (err != 0) {
@@ -154,13 +115,6 @@ int tegra_hwpm_setup_sw(struct tegra_soc_hwpm *hwpm)
int ret = 0;
tegra_hwpm_fn(hwpm, " ");
ret = hwpm->active_chip->force_enable_ips(hwpm);
if (ret != 0) {
tegra_hwpm_err(hwpm, "Failed to force enable IPs");
/* Do not fail because of force enable failure */
return 0;
}
ret = hwpm->active_chip->validate_current_config(hwpm);
if (ret != 0) {
tegra_hwpm_err(hwpm, "Failed to validate current config");
@@ -304,11 +258,6 @@ bool tegra_hwpm_validate_primary_hals(struct tegra_soc_hwpm *hwpm)
return false;
}
if (hwpm->active_chip->get_rtr_pma_perfmux_ptr == NULL) {
tegra_hwpm_err(hwpm, "get_rtr_pma_perfmux_ptr HAL uninitialized");
return false;
}
if (hwpm->active_chip->extract_ip_ops == NULL) {
tegra_hwpm_err(hwpm, "extract_ip_ops uninitialized");
return false;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -26,7 +26,6 @@
#include <tegra_hwpm_ip.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_common.h>
#include <tegra_hwpm_aperture.h>
#include <tegra_hwpm_static_analysis.h>
int tegra_hwpm_ip_handle_power_mgmt(struct tegra_soc_hwpm *hwpm,
@@ -57,12 +56,13 @@ int tegra_hwpm_ip_handle_power_mgmt(struct tegra_soc_hwpm *hwpm,
}
int tegra_hwpm_update_ip_inst_fs_mask(struct tegra_soc_hwpm *hwpm,
u32 ip_idx, u32 a_type, u32 s_inst_idx, bool available)
u32 ip_idx, u32 a_type, u32 inst_idx, bool available)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[ip_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[inst_idx];
int ret = 0;
tegra_hwpm_fn(hwpm, " ");
@@ -100,12 +100,13 @@ int tegra_hwpm_update_ip_inst_fs_mask(struct tegra_soc_hwpm *hwpm,
static int tegra_hwpm_update_ip_ops_info(struct tegra_soc_hwpm *hwpm,
struct tegra_hwpm_ip_ops *ip_ops,
u32 ip_idx, u32 a_type, u32 s_inst_idx, bool available)
u32 ip_idx, u32 a_type, u32 inst_idx, bool available)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[ip_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[s_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[inst_idx];
/* Update IP ops info for the instance */
struct tegra_hwpm_ip_ops *ops = &ip_inst->ip_ops;
@@ -134,7 +135,7 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
int ret = 0;
bool found = false;
u32 idx = ip_idx;
u32 s_inst_idx = 0U, s_element_idx = 0U;
u32 inst_idx = 0U, element_idx = 0U;
u32 a_type = 0U;
enum tegra_hwpm_element_type element_type = HWPM_ELEMENT_INVALID;
@@ -143,7 +144,7 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
/* Find IP aperture containing phys_addr in allowlist */
found = tegra_hwpm_aperture_for_address(hwpm,
TEGRA_HWPM_MATCH_BASE_ADDRESS, base_address,
&idx, &s_inst_idx, &s_element_idx, &element_type);
&idx, &inst_idx, &element_idx, &element_type);
if (!found) {
tegra_hwpm_err(hwpm, "Base addr 0x%llx not in IP %d",
(unsigned long long)base_address, idx);
@@ -151,10 +152,9 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
}
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"Found addr 0x%llx IP %d s_inst_idx %d "
"s_element_idx %d e_type %d",
(unsigned long long)base_address, idx, s_inst_idx,
s_element_idx, element_type);
"Found addr 0x%llx IP %d inst_idx %d element_idx %d e_type %d",
(unsigned long long)base_address, idx, inst_idx,
element_idx, element_type);
switch (element_type) {
case HWPM_ELEMENT_PERFMON:
@@ -175,21 +175,21 @@ int tegra_hwpm_set_fs_info_ip_ops(struct tegra_soc_hwpm *hwpm,
if (ip_ops != NULL) {
/* Update IP ops */
ret = tegra_hwpm_update_ip_ops_info(hwpm, ip_ops,
ip_idx, a_type, s_inst_idx, available);
ip_idx, a_type, inst_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"IP %d s_inst_idx %d: Failed to update ip_ops",
ip_idx, s_inst_idx);
"IP %d inst_idx %d: Failed to update ip_ops",
ip_idx, inst_idx);
goto fail;
}
}
ret = tegra_hwpm_update_ip_inst_fs_mask(hwpm, ip_idx, a_type,
s_inst_idx, available);
inst_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"IP %d s_inst_idx %d: Failed to update fs_info",
ip_idx, s_inst_idx);
"IP %d inst_idx %d: Failed to update fs_info",
ip_idx, inst_idx);
goto fail;
}
@@ -200,7 +200,7 @@ fail:
int tegra_hwpm_get_fs_info(struct tegra_soc_hwpm *hwpm,
u32 ip_enum, u64 *fs_mask, u8 *ip_status)
{
u32 ip_idx = 0U, s_inst_idx = 0U, element_mask_shift = 0U;
u32 ip_idx = 0U, inst_idx = 0U, element_mask_shift = 0U;
u64 floorsweep = 0ULL;
struct tegra_soc_hwpm_chip *active_chip = NULL;
struct hwpm_ip *chip_ip = NULL;
@@ -213,13 +213,12 @@ int tegra_hwpm_get_fs_info(struct tegra_soc_hwpm *hwpm,
active_chip = hwpm->active_chip;
chip_ip = active_chip->chip_ips[ip_idx];
if (!(chip_ip->override_enable) && chip_ip->inst_fs_mask) {
element_mask_shift = 0U;
for (s_inst_idx = 0U;
s_inst_idx < chip_ip->num_instances;
s_inst_idx++) {
for (inst_idx = 0U; inst_idx < chip_ip->num_instances;
inst_idx++) {
ip_inst = &chip_ip->ip_inst_static_array[
s_inst_idx];
inst_idx];
element_mask_shift = (inst_idx == 0U ? 0U :
ip_inst->num_core_elements_per_inst);
if (ip_inst->hw_inst_mask &
chip_ip->inst_fs_mask) {
@@ -227,15 +226,10 @@ int tegra_hwpm_get_fs_info(struct tegra_soc_hwpm *hwpm,
ip_inst->element_fs_mask <<
element_mask_shift);
}
element_mask_shift += ip_inst->num_core_elements_per_inst;
}
*fs_mask = floorsweep;
*ip_status = TEGRA_HWPM_IP_STATUS_VALID;
tegra_hwpm_dbg(hwpm, hwpm_dbg_floorsweep_info,
"SOC hwpm IP %d is available", ip_enum);
return 0;
}
}
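
In the hunk above, tegra_hwpm_get_fs_info() builds the per-IP floorsweep mask by OR-ing in each available instance's element_fs_mask at a shift derived from the per-instance element count (the removed and added lines compute that shift slightly differently). As a rough illustration only, assuming a simple cumulative shift (not necessarily the driver's exact bookkeeping), the packing idea looks like this:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-instance view: which elements survived floorsweeping. */
struct inst_fs {
	uint64_t element_fs_mask;	/* bit per element in this instance */
	uint32_t num_elements;		/* elements owned by this instance  */
	int      available;		/* instance present on this chip    */
};

/* Pack per-instance element masks into one per-IP mask (cumulative shift). */
static uint64_t build_ip_fs_mask(const struct inst_fs *inst, size_t num_inst)
{
	uint64_t fs_mask = 0ULL;
	uint32_t shift = 0U;
	size_t i;

	for (i = 0; i < num_inst; i++) {
		if (inst[i].available)
			fs_mask |= inst[i].element_fs_mask << shift;
		shift += inst[i].num_elements;	/* next instance's bit window */
	}
	return fs_mask;
}

int main(void)
{
	struct inst_fs insts[2] = {
		{ .element_fs_mask = 0x3, .num_elements = 2, .available = 1 },
		{ .element_fs_mask = 0x1, .num_elements = 2, .available = 1 },
	};

	printf("0x%llx\n", (unsigned long long)build_ip_fs_mask(insts, 2));
	return 0;
}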
@@ -265,17 +259,12 @@ int tegra_hwpm_get_resource_info(struct tegra_soc_hwpm *hwpm,
if (!(chip_ip->override_enable)) {
*status = tegra_hwpm_safe_cast_u32_to_u8(
chip_ip->resource_status);
tegra_hwpm_dbg(hwpm, hwpm_dbg_floorsweep_info,
"SOC hwpm Resource %d is %d",
resource_enum, chip_ip->resource_status);
return 0;
}
}
*status = tegra_hwpm_safe_cast_u32_to_u8(
TEGRA_HWPM_RESOURCE_STATUS_INVALID);
tegra_hwpm_dbg(hwpm, hwpm_dbg_floorsweep_info,
"SOC hwpm Resource %d is unavailable", resource_enum);
return 0;
}
@@ -304,30 +293,35 @@ int tegra_hwpm_finalize_chip_info(struct tegra_soc_hwpm *hwpm)
return ret;
}
ret = hwpm->active_chip->force_enable_ips(hwpm);
if (ret != 0) {
tegra_hwpm_err(hwpm, "Failed to force enable IPs");
/* Do not fail because of force enable failure */
return 0;
}
return 0;
}
static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[*s_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[*inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
struct hwpm_ip_aperture *element =
&e_info->element_static_array[*s_element_idx];
tegra_hwpm_fn(hwpm, " ");
struct hwpm_ip_aperture *element = e_info->element_arr[*element_idx];
if (element == NULL) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d: s_element_idx %d not populated",
*ip_idx, (unsigned long long)find_addr, *s_inst_idx,
a_type, *s_element_idx);
"IP %d addr 0x%llx inst_idx %d "
"a_type %d: element_idx %d not populated",
*ip_idx, (unsigned long long)find_addr, *inst_idx,
a_type, *element_idx);
return false;
}
@@ -336,21 +330,20 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
if ((element->element_index_mask &
ip_inst->element_fs_mask) == 0U) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d: s_element_idx %d: not available",
"IP %d addr 0x%llx inst_idx %d "
"a_type %d: element_idx %d: not available",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, *s_element_idx);
*inst_idx, a_type, *element_idx);
return false;
}
/* Make sure phys addr belongs to this element */
if ((find_addr < element->start_abs_pa) ||
(find_addr > element->end_abs_pa)) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d s_element_idx %d: out of bounds",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, *s_element_idx);
tegra_hwpm_err(hwpm, "IP %d addr 0x%llx inst_idx %d "
"a_type %d element_idx %d: out of bounds",
*ip_idx, (unsigned long long)find_addr, *inst_idx,
a_type, *element_idx);
return false;
}
@@ -360,18 +353,10 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
}
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d s_element_idx %d address not in alist",
"IP %d addr 0x%llx inst_idx %d "
"a_type %d element_idx %d address not in alist",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, *s_element_idx);
if (hwpm->dbg_skip_alist) {
*element_type = element->element_type;
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"skipping allowlist check");
return true;
}
*inst_idx, a_type, *element_idx);
return false;
}
@@ -379,10 +364,10 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
/* Confirm that given addr is base address of this element */
if (find_addr != element->start_abs_pa) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"IP %d addr 0x%llx s_inst_idx %d "
"a_type %d s_element_idx %d addr != start addr",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, *s_element_idx);
"IP %d addr 0x%llx inst_idx %d "
"a_type %d element_idx %d: addr != start addr",
*ip_idx, (unsigned long long)find_addr, *inst_idx,
a_type, *element_idx);
return false;
}
*element_type = element->element_type;
@@ -395,118 +380,73 @@ static bool tegra_hwpm_addr_in_single_element(struct tegra_soc_hwpm *hwpm,
static bool tegra_hwpm_addr_in_all_elements(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[*s_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[*inst_idx];
struct hwpm_ip_element_info *e_info = &ip_inst->element_info[a_type];
struct hwpm_ip_aperture *element = NULL;
u64 element_offset = 0ULL;
u32 idx = 0U;
u32 dyn_idx = 0U;
tegra_hwpm_fn(hwpm, " ");
u32 idx;
/* Make sure address falls in elements of a_type */
if (e_info->num_element_per_inst == 0U) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx: s_inst_idx %d no type %d elements",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type);
"IP %d addr 0x%llx: inst_idx %d no type %d elements",
*ip_idx, (unsigned long long)find_addr, *inst_idx, a_type);
return false;
}
if ((find_addr < e_info->range_start) ||
(find_addr > e_info->range_end)) {
/* Address not in this instance corresponding to a_type */
tegra_hwpm_dbg(hwpm, hwpm_verbose, "IP %d s_inst_idx %d: "
tegra_hwpm_dbg(hwpm, hwpm_verbose, "IP %d inst_idx %d: "
"addr 0x%llx not in type %d elements",
*ip_idx, *s_inst_idx,
(unsigned long long)find_addr, a_type);
*ip_idx, *inst_idx, (unsigned long long)find_addr, a_type);
return false;
}
if (e_info->eslots_overlimit) {
/* Use brute force approach to find element index */
for (idx = 0U; idx < e_info->num_element_per_inst; idx++) {
element = &e_info->element_static_array[idx];
if ((find_addr >= element->start_abs_pa) &&
(find_addr <= element->end_abs_pa)) {
/* Found element with given address */
break;
}
}
/* Find element index to which address belongs to */
element_offset = tegra_hwpm_safe_sub_u64(
find_addr, e_info->range_start);
idx = tegra_hwpm_safe_cast_u64_to_u32(
element_offset / e_info->element_stride);
/* Make sure element index is valid */
if (idx >= e_info->num_element_per_inst) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d a_type %d: "
"s_element_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, idx);
return false;
}
} else {
/* Find element index to which address belongs to */
element_offset = tegra_hwpm_safe_sub_u64(
find_addr, e_info->range_start);
dyn_idx = tegra_hwpm_safe_cast_u64_to_u32(
element_offset / e_info->element_stride);
/* Make sure element index is valid */
if (dyn_idx >= e_info->element_slots) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d a_type %d: "
"dynamic element_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, dyn_idx);
return false;
}
/* Convert dynamic index to static index */
element = e_info->element_arr[dyn_idx];
if (element == NULL) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx s_inst_idx %d a_type %d: "
"dynamic element_idx %d not populated",
*ip_idx, (unsigned long long)find_addr,
*s_inst_idx, a_type, dyn_idx);
return false;
}
idx = element->aperture_index;
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"find_addr 0x%llx element dyn_idx %u static idx %u",
(unsigned long long)find_addr, dyn_idx, idx);
/* Make sure element index is valid */
if (idx >= e_info->element_slots) {
tegra_hwpm_err(hwpm, "IP %d addr 0x%llx inst_idx %d a_type %d: "
"element_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr, *inst_idx, a_type, idx);
return false;
}
*s_element_idx = idx;
*element_idx = idx;
/* Process further and return */
return tegra_hwpm_addr_in_single_element(hwpm, iia_func,
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
find_addr, ip_idx, inst_idx, element_idx, element_type, a_type);
}
static bool tegra_hwpm_addr_in_single_instance(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst *ip_inst =
&chip_ip->ip_inst_static_array[*s_inst_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = inst_a_info->inst_arr[*inst_idx];
tegra_hwpm_fn(hwpm, " ");
if (ip_inst == NULL) {
tegra_hwpm_dbg(hwpm, hwpm_verbose, "IP %d addr 0x%llx: "
"a_type %d s_inst_idx %d not populated",
*ip_idx, (unsigned long long)find_addr,
a_type, *s_inst_idx);
"a_type %d inst_idx %d not populated",
*ip_idx, (unsigned long long)find_addr, a_type, *inst_idx);
return false;
}
@@ -515,103 +455,56 @@ static bool tegra_hwpm_addr_in_single_instance(struct tegra_soc_hwpm *hwpm,
if ((chip_ip->inst_fs_mask & ip_inst->hw_inst_mask) == 0U) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_regops,
"IP %d addr 0x%llx: "
"a_type %d s_inst_idx %d not available",
*ip_idx, (unsigned long long)find_addr,
a_type, *s_inst_idx);
"a_type %d inst_idx %d not available",
*ip_idx, (unsigned long long)find_addr, a_type, *inst_idx);
return false;
}
}
/* Process further and return */
return tegra_hwpm_addr_in_all_elements(hwpm, iia_func,
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
find_addr, ip_idx, inst_idx, element_idx, element_type, a_type);
}
static bool tegra_hwpm_addr_in_all_instances(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type, u32 a_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[*ip_idx];
struct hwpm_ip_inst_per_aperture_info *inst_a_info =
&chip_ip->inst_aperture_info[a_type];
struct hwpm_ip_inst *ip_inst = NULL;
struct hwpm_ip_element_info *e_info = NULL;
bool found = false;
u64 inst_offset = 0ULL;
u32 idx = 0U;
u32 dyn_idx = 0U;
tegra_hwpm_fn(hwpm, " ");
if (inst_a_info->islots_overlimit) {
/* Use brute force approach to find instance index */
for (idx = 0U; idx < chip_ip->num_instances; idx++) {
ip_inst = &chip_ip->ip_inst_static_array[idx];
e_info = &ip_inst->element_info[a_type];
if ((find_addr >= e_info->range_start) &&
(find_addr <= e_info->range_end)) {
*s_inst_idx = idx;
/* Found element with given address */
found = tegra_hwpm_addr_in_single_instance(
hwpm, iia_func, find_addr, ip_idx,
s_inst_idx, s_element_idx,
element_type, a_type);
if (found) {
return found;
}
}
}
/* Find instance to which address belongs to */
inst_offset = tegra_hwpm_safe_sub_u64(
find_addr, inst_a_info->range_start);
idx = tegra_hwpm_safe_cast_u64_to_u32(
inst_offset / inst_a_info->inst_stride);
/* Make sure instance index is valid */
if (idx >= chip_ip->num_instances) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"Addr 0x%llx not in IP %d a_type %d",
(unsigned long long)find_addr, *ip_idx, a_type);
return false;
}
} else {
/* Find instance to which address belongs to */
inst_offset = tegra_hwpm_safe_sub_u64(
find_addr, inst_a_info->range_start);
dyn_idx = tegra_hwpm_safe_cast_u64_to_u32(
inst_offset / inst_a_info->inst_stride);
/* Make sure instance index is valid */
if (dyn_idx >= inst_a_info->inst_slots) {
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx a_type %d: "
"dynamic inst_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr,
a_type, dyn_idx);
return false;
}
/* Convert dynamic inst index to static inst index */
ip_inst = inst_a_info->inst_arr[dyn_idx];
idx = tegra_hwpm_ffs(hwpm, ip_inst->hw_inst_mask);
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"IP %d find_addr 0x%llx inst dyn_idx %u static idx %u",
*ip_idx, (unsigned long long)find_addr, dyn_idx, idx);
*s_inst_idx = idx;
/* Process further and return */
return tegra_hwpm_addr_in_single_instance(hwpm, iia_func,
find_addr, ip_idx, s_inst_idx, s_element_idx,
element_type, a_type);
/* Make sure instance index is valid */
if (idx >= inst_a_info->inst_slots) {
tegra_hwpm_err(hwpm, "IP %d addr 0x%llx a_type %d: "
"inst_idx %d out of bounds",
*ip_idx, (unsigned long long)find_addr, a_type, idx);
return false;
}
/* Execution shouldn't reach here */
tegra_hwpm_err(hwpm, "Execution shouldn't reach here");
return false;
*inst_idx = idx;
/* Process further and return */
return tegra_hwpm_addr_in_single_instance(hwpm, iia_func,
find_addr, ip_idx, inst_idx, element_idx,
element_type, a_type);
}
static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
@@ -622,8 +515,7 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_fn(hwpm, " ");
if (chip_ip == NULL) {
tegra_hwpm_err(hwpm,
"IP %d not populated as expected", *ip_idx);
tegra_hwpm_err(hwpm, "IP %d not populated as expected", *ip_idx);
return false;
}
@@ -657,7 +549,6 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
if ((find_addr < inst_a_info->range_start) ||
(find_addr > inst_a_info->range_end)) {
/* Address not in this IP for this a_type */
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"IP %d addr 0x%llx not in a_type %d elements",
@@ -667,7 +558,7 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
/* Process further and return */
found = tegra_hwpm_addr_in_all_instances(hwpm, iia_func,
find_addr, ip_idx, s_inst_idx, s_element_idx,
find_addr, ip_idx, inst_idx, element_idx,
element_type, a_type);
if (found) {
break;
@@ -685,7 +576,7 @@ static bool tegra_hwpm_addr_in_single_ip(struct tegra_soc_hwpm *hwpm,
static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type)
{
u32 idx;
@@ -694,7 +585,7 @@ static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
tegra_hwpm_fn(hwpm, " ");
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
for (idx = 0U; idx < active_chip->get_ip_max_idx(hwpm); idx++) {
struct hwpm_ip *chip_ip = active_chip->chip_ips[idx];
if (chip_ip == NULL) {
@@ -710,7 +601,7 @@ static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
}
found = tegra_hwpm_addr_in_single_ip(hwpm, iia_func, find_addr,
&idx, s_inst_idx, s_element_idx, element_type);
&idx, inst_idx, element_idx, element_type);
if (found) {
*ip_idx = idx;
return true;
@@ -722,15 +613,15 @@ static bool tegra_hwpm_addr_in_all_ip(struct tegra_soc_hwpm *hwpm,
bool tegra_hwpm_aperture_for_address(struct tegra_soc_hwpm *hwpm,
enum tegra_hwpm_funcs iia_func,
u64 find_addr, u32 *ip_idx, u32 *s_inst_idx, u32 *s_element_idx,
u64 find_addr, u32 *ip_idx, u32 *inst_idx, u32 *element_idx,
enum tegra_hwpm_element_type *element_type)
{
bool found = false;
tegra_hwpm_fn(hwpm, " ");
if ((ip_idx == NULL) || (s_inst_idx == NULL) ||
(s_element_idx == NULL) || (element_type == NULL)) {
if ((ip_idx == NULL) || (inst_idx == NULL) ||
(element_idx == NULL) || (element_type == NULL)) {
tegra_hwpm_err(hwpm, "NULL index pointer");
return false;
}
@@ -738,7 +629,7 @@ bool tegra_hwpm_aperture_for_address(struct tegra_soc_hwpm *hwpm,
if (iia_func == TEGRA_HWPM_FIND_GIVEN_ADDRESS) {
/* IP index is not known, search in all IPs */
found = tegra_hwpm_addr_in_all_ip(hwpm, iia_func, find_addr,
ip_idx, s_inst_idx, s_element_idx, element_type);
ip_idx, inst_idx, element_idx, element_type);
if (!found) {
tegra_hwpm_err(hwpm,
"Address 0x%llx not in any IP",
@@ -749,7 +640,7 @@ bool tegra_hwpm_aperture_for_address(struct tegra_soc_hwpm *hwpm,
if (iia_func == TEGRA_HWPM_MATCH_BASE_ADDRESS) {
found = tegra_hwpm_addr_in_single_ip(hwpm, iia_func, find_addr,
ip_idx, s_inst_idx, s_element_idx, element_type);
ip_idx, inst_idx, element_idx, element_type);
if (!found) {
tegra_hwpm_err(hwpm, "Address 0x%llx not in IP %d",
(unsigned long long)find_addr, *ip_idx);
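
The hunks in this file replace the earlier two-path lookup (a brute-force scan when slots were over the limit, otherwise a dynamic-to-static index conversion) with a single stride-based computation: the offset of the target address from the aperture's range start, divided by the per-entry stride, is used directly as the index into the instance or element array, followed by a bounds check and a NULL check on the slot. A minimal self-contained sketch of that lookup follows; the struct and function names are simplified placeholders, not the driver's real types.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the per-aperture info used in the hunks above. */
struct aperture_info {
	uint64_t range_start;	/* first address covered by this aperture */
	uint64_t range_end;	/* last address covered by this aperture  */
	uint64_t stride;	/* fixed size of each slot in the range   */
	size_t   num_slots;	/* number of entries in slot_arr          */
	void   **slot_arr;	/* slot pointers; NULL => not populated   */
};

/*
 * Stride-based lookup: compute the slot index from the address offset,
 * then validate the index and the populated slot, mirroring what the new
 * code does for both instances and elements.
 */
static void *lookup_slot(const struct aperture_info *info, uint64_t addr)
{
	uint64_t offset;
	size_t idx;

	if ((addr < info->range_start) || (addr > info->range_end))
		return NULL;			/* address not in this aperture */

	offset = addr - info->range_start;
	idx = (size_t)(offset / info->stride);	/* index from fixed stride */

	if (idx >= info->num_slots)
		return NULL;			/* out of bounds */

	return info->slot_arr[idx];		/* may still be NULL (unpopulated) */
}

int main(void)
{
	int a = 1, b = 2;
	void *slots[2] = { &a, &b };
	struct aperture_info info = {
		.range_start = 0x1000, .range_end = 0x1fff,
		.stride = 0x800, .num_slots = 2, .slot_arr = slots,
	};

	printf("slot for 0x1900: %p\n", lookup_slot(&info, 0x1900));
	return 0;
}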


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -44,10 +44,10 @@ int tegra_hwpm_reserve_rtr(struct tegra_soc_hwpm *hwpm)
err = tegra_hwpm_func_single_ip(hwpm, NULL,
TEGRA_HWPM_RESERVE_GIVEN_RESOURCE,
active_chip->get_rtr_int_idx());
active_chip->get_rtr_int_idx(hwpm));
if (err != 0) {
tegra_hwpm_err(hwpm, "failed to reserve IP %d",
active_chip->get_rtr_int_idx());
active_chip->get_rtr_int_idx(hwpm));
return err;
}
return err;
@@ -62,10 +62,10 @@ int tegra_hwpm_release_rtr(struct tegra_soc_hwpm *hwpm)
err = tegra_hwpm_func_single_ip(hwpm, NULL,
TEGRA_HWPM_RELEASE_ROUTER,
active_chip->get_rtr_int_idx());
active_chip->get_rtr_int_idx(hwpm));
if (err != 0) {
tegra_hwpm_err(hwpm, "failed to release IP %d",
active_chip->get_rtr_int_idx());
active_chip->get_rtr_int_idx(hwpm));
return err;
}
return err;
@@ -79,7 +79,7 @@ int tegra_hwpm_reserve_resource(struct tegra_soc_hwpm *hwpm, u32 resource)
tegra_hwpm_fn(hwpm, " ");
tegra_hwpm_dbg(hwpm, hwpm_info | hwpm_dbg_reserve_resource,
tegra_hwpm_dbg(hwpm, hwpm_info,
"User requesting to reserve resource %d", resource);
/* Translate resource to ip_idx */


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_display.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_display_inst0_perfmon_element_static_array[
T234_HWPM_IP_DISPLAY_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_display_inst0_perfmux_element_static_array[
T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -91,10 +84,10 @@ static struct hwpm_ip_inst t234_display_inst_static_array[
T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_display_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_disp_base_r(),
.range_end = addr_map_disp_limit_r(),
.element_stride = addr_map_disp_limit_r() -
.element_stride =
addr_map_disp_limit_r() -
addr_map_disp_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
@@ -124,7 +117,8 @@ static struct hwpm_ip_inst t234_display_inst_static_array[
t234_display_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_disp_base_r(),
.range_end = addr_map_rpg_pm_disp_limit_r(),
.element_stride = addr_map_rpg_pm_disp_limit_r() -
.element_stride =
addr_map_rpg_pm_disp_limit_r() -
addr_map_rpg_pm_disp_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
@@ -135,7 +129,7 @@ static struct hwpm_ip_inst t234_display_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -154,7 +148,6 @@ struct hwpm_ip t234_hwpm_ip_display = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_disp_base_r(),
.range_end = addr_map_disp_limit_r(),
.inst_stride = addr_map_disp_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_DISPLAY_H
#define T234_HWPM_IP_DISPLAY_H
#if defined(CONFIG_T234_HWPM_IP_DISPLAY)
#define T234_HWPM_ACTIVE_IP_DISPLAY T234_HWPM_IP_DISPLAY,
#define T234_HWPM_ACTIVE_IP_DISPLAY T234_HWPM_IP_DISPLAY,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_DISPLAY_NUM_INSTANCES 1U
#define T234_HWPM_IP_DISPLAY_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_DISPLAY_NUM_INSTANCES 1U
#define T234_HWPM_IP_DISPLAY_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_DISPLAY_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_display;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_isp.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_isp_inst0_perfmon_element_static_array[
T234_HWPM_IP_ISP_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_isp_inst0_perfmux_element_static_array[
T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -91,7 +84,6 @@ static struct hwpm_ip_inst t234_isp_inst_static_array[
T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_isp_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp_thi_limit_r(),
.element_stride = addr_map_isp_thi_limit_r() -
@@ -135,7 +127,7 @@ static struct hwpm_ip_inst t234_isp_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -143,7 +135,6 @@ static struct hwpm_ip_inst t234_isp_inst_static_array[
},
};
/* IP structure */
struct hwpm_ip t234_hwpm_ip_isp = {
.num_instances = T234_HWPM_IP_ISP_NUM_INSTANCES,
.ip_inst_static_array = t234_isp_inst_static_array,
@@ -154,7 +145,6 @@ struct hwpm_ip t234_hwpm_ip_isp = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp_thi_limit_r(),
.inst_stride = addr_map_isp_thi_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_ISP_H
#define T234_HWPM_IP_ISP_H
#if defined(CONFIG_T234_HWPM_IP_ISP)
#define T234_HWPM_ACTIVE_IP_ISP T234_HWPM_IP_ISP,
#define T234_HWPM_ACTIVE_IP_ISP T234_HWPM_IP_ISP,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_ISP_NUM_INSTANCES 1U
#define T234_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_ISP_NUM_INSTANCES 1U
#define T234_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_ISP_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_isp;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mgbe.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_mgbe_inst0_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst1_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -79,7 +72,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst2_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -100,7 +92,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst3_perfmon_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -121,7 +112,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst0_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -141,7 +131,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst1_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -161,7 +150,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst2_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -181,7 +169,6 @@ static struct hwpm_ip_aperture t234_mgbe_inst3_perfmux_element_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -214,7 +201,6 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe0_mac_rm_base_r(),
.range_end = addr_map_mgbe0_mac_rm_limit_r(),
.element_stride = addr_map_mgbe0_mac_rm_limit_r() -
@@ -258,7 +244,7 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -278,7 +264,6 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe1_mac_rm_base_r(),
.range_end = addr_map_mgbe1_mac_rm_limit_r(),
.element_stride = addr_map_mgbe1_mac_rm_limit_r() -
@@ -322,11 +307,9 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(2),
@@ -342,7 +325,6 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst2_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe2_mac_rm_base_r(),
.range_end = addr_map_mgbe2_mac_rm_limit_r(),
.element_stride = addr_map_mgbe2_mac_rm_limit_r() -
@@ -386,11 +368,9 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(3),
@@ -406,7 +386,6 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mgbe_inst3_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe3_mac_rm_base_r(),
.range_end = addr_map_mgbe3_mac_rm_limit_r(),
.element_stride = addr_map_mgbe3_mac_rm_limit_r() -
@@ -450,11 +429,9 @@ static struct hwpm_ip_inst t234_mgbe_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
@@ -469,7 +446,6 @@ struct hwpm_ip t234_hwpm_ip_mgbe = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mgbe0_mac_rm_base_r(),
.range_end = addr_map_mgbe3_mac_rm_limit_r(),
.inst_stride = addr_map_mgbe0_mac_rm_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MGBE_H
#define T234_HWPM_IP_MGBE_H
#if defined(CONFIG_T234_HWPM_IP_MGBE)
#define T234_HWPM_ACTIVE_IP_MGBE T234_HWPM_IP_MGBE,
#define T234_HWPM_ACTIVE_IP_MGBE T234_HWPM_IP_MGBE,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MGBE_NUM_INSTANCES 4U
#define T234_HWPM_IP_MGBE_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_MGBE_NUM_INSTANCES 4U
#define T234_HWPM_IP_MGBE_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_MGBE_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_mgbe;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_channel.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -54,7 +48,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -71,7 +64,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -88,7 +80,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -105,7 +96,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -122,7 +112,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -139,7 +128,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -156,7 +144,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -173,7 +160,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
@@ -190,7 +176,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
@@ -207,7 +192,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
@@ -224,7 +208,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
@@ -241,7 +224,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
@@ -258,7 +240,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
@@ -275,7 +256,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
@@ -292,7 +272,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmon_element_static_arr
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
@@ -313,7 +292,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -329,7 +307,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -345,7 +322,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -361,7 +337,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -377,7 +352,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -393,7 +367,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -409,7 +382,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -425,7 +397,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -441,7 +412,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
@@ -457,7 +427,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
@@ -473,7 +442,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
@@ -489,7 +457,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
@@ -505,7 +472,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
@@ -521,7 +487,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
@@ -537,7 +502,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
@@ -553,7 +517,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
@@ -573,7 +536,6 @@ static struct hwpm_ip_aperture t234_mss_channel_inst0_broadcast_element_static_a
T234_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -652,7 +614,7 @@ static struct hwpm_ip_inst t234_mss_channel_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_CHANNEL_H
#define T234_HWPM_IP_MSS_CHANNEL_H
#if defined(CONFIG_T234_HWPM_IP_MSS_CHANNEL)
#define T234_HWPM_ACTIVE_IP_MSS_CHANNEL T234_HWPM_IP_MSS_CHANNEL,
#define T234_HWPM_ACTIVE_IP_MSS_CHANNEL T234_HWPM_IP_MSS_CHANNEL,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST 1U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST 16U
#define T234_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t234_hwpm_ip_mss_channel;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_gpu_hub.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,9 +52,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_1_base_r(),
@@ -74,9 +67,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 1U,
.element_index = 2U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_2_base_r(),
@@ -90,9 +82,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 2U,
.element_index = 3U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_3_base_r(),
@@ -106,9 +97,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 3U,
.element_index = 4U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_4_base_r(),
@@ -122,9 +112,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 4U,
.element_index = 5U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_5_base_r(),
@@ -138,9 +127,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 5U,
.element_index = 6U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_6_base_r(),
@@ -154,9 +142,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 6U,
.element_index = 7U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_7_base_r(),
@@ -170,9 +157,8 @@ static struct hwpm_ip_aperture t234_mss_gpu_hub_inst0_perfmux_element_static_arr
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 7U,
.element_index = 8U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mss_nvlink_8_base_r(),
@@ -203,7 +189,6 @@ static struct hwpm_ip_inst t234_mss_gpu_hub_inst_static_array[
T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_mss_gpu_hub_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mss_nvlink_8_base_r(),
.range_end = addr_map_mss_nvlink_7_limit_r(),
.element_stride = addr_map_mss_nvlink_8_limit_r() -
@@ -247,7 +232,7 @@ static struct hwpm_ip_inst t234_mss_gpu_hub_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -266,7 +251,6 @@ struct hwpm_ip t234_hwpm_ip_mss_gpu_hub = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mss_nvlink_8_base_r(),
.range_end = addr_map_mss_nvlink_7_limit_r(),
.inst_stride = addr_map_mss_nvlink_7_limit_r() -


@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_GPU_HUB_H
#define T234_HWPM_IP_MSS_GPU_HUB_H
#if defined(CONFIG_T234_HWPM_IP_MSS_GPU_HUB)
#define T234_HWPM_ACTIVE_IP_MSS_GPU_HUB T234_HWPM_IP_MSS_GPU_HUB,
#define T234_HWPM_ACTIVE_IP_MSS_GPU_HUB T234_HWPM_IP_MSS_GPU_HUB,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_GPU_HUB_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_mss_gpu_hub;


@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,27 +19,21 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_iso_niso_hubs.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_msshub0",
.device_index = T234_MSSHUB0_PERFMON_DEVICE_NODE_INDEX,
@@ -54,9 +48,8 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmon_element_stati
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_msshub1",
.device_index = T234_MSSHUB1_PERFMON_DEVICE_NODE_INDEX,
@@ -75,7 +68,6 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -86,12 +78,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc0_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -102,12 +94,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc1_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -118,12 +110,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc2_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -134,12 +126,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc3_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -150,12 +142,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc4_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -166,12 +158,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc5_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -182,12 +174,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc6_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -198,12 +190,12 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_perfmux_element_stati
.end_pa = addr_map_mc7_limit_r(),
.base_pa = 0ULL,
.alist = t234_mc0to7_res_mss_iso_niso_hub_alist,
.alist_size = ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.alist_size =
ARRAY_SIZE(t234_mc0to7_res_mss_iso_niso_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
@@ -223,7 +215,6 @@ static struct hwpm_ip_aperture t234_mss_iso_niso_hub_inst0_broadcast_element_sta
T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -302,7 +293,7 @@ static struct hwpm_ip_inst t234_mss_iso_niso_hub_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -361,3 +352,4 @@ struct hwpm_ip t234_hwpm_ip_mss_iso_niso_hubs = {
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
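The aperture tables above lean on the kernel-style BIT() and ARRAY_SIZE() helpers for the element index masks and allowlist sizes. As a reminder, the sketch below uses the conventional definitions of those two macros (shown here for illustration only; the driver presumably picks them up from its own or the kernel's headers) and shows one plausible way per-element BIT() masks could be folded into a floorsweep-style mask such as element_fs_mask — it mirrors the shape of the tables above, not their exact driver semantics.

    #include <stdio.h>
    #include <stddef.h>

    /* Conventional definitions, for illustration only. */
    #define BIT(nr)         (1UL << (nr))
    #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

    int main(void)
    {
            /* Stand-in for an allowlist table such as
             * t234_mc0to7_res_mss_iso_niso_hub_alist. */
            unsigned int example_alist[] = { 0x0u, 0x4u, 0x8u };
            unsigned long element_fs_mask = 0;
            unsigned int i;

            /* Fold per-element BIT(index) masks into one mask; one plausible
             * way values like element_index_mask could combine. */
            for (i = 0; i < ARRAY_SIZE(example_alist); i++)
                    element_fs_mask |= BIT(i);

            printf("alist_size=%zu mask=0x%lx\n",
                   ARRAY_SIZE(example_alist), element_fs_mask);
            return 0;
    }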

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,11 +19,6 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_ISO_NISO_HUBS_H
@@ -33,11 +28,11 @@
#define T234_HWPM_ACTIVE_IP_MSS_ISO_NISO_HUBS T234_HWPM_IP_MSS_ISO_NISO_HUBS,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_CORE_ELEMENT_PER_INST 9U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMON_PER_INST 2U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_PERFMUX_PER_INST 9U
#define T234_HWPM_IP_MSS_ISO_NISO_HUBS_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t234_hwpm_ip_mss_iso_niso_hubs;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,11 +19,6 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_mss_mcf.h"
@@ -37,7 +32,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmon_element_static_array[
T234_HWPM_IP_MSS_MCF_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -54,7 +48,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -71,7 +64,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(0),
.element_index = 2U,
.dt_mmio = NULL,
@@ -92,7 +84,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
T234_HWPM_IP_MSS_MCF_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -108,7 +99,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
@@ -124,7 +114,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
@@ -140,7 +129,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
@@ -156,7 +144,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
@@ -172,7 +159,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
@@ -188,7 +174,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
@@ -204,7 +189,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_perfmux_element_static_array[
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
@@ -224,7 +208,6 @@ static struct hwpm_ip_aperture t234_mss_mcf_inst0_broadcast_element_static_array
T234_HWPM_IP_MSS_MCF_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -303,7 +286,7 @@ static struct hwpm_ip_inst t234_mss_mcf_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_MSS_MCF_H
#define T234_HWPM_IP_MSS_MCF_H
#if defined(CONFIG_T234_HWPM_IP_MSS_MCF)
#define T234_HWPM_ACTIVE_IP_MSS_MCF T234_HWPM_IP_MSS_MCF,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_MSS_MCF_NUM_INSTANCES 1U
#define T234_HWPM_IP_MSS_MCF_NUM_CORE_ELEMENT_PER_INST 8U
#define T234_HWPM_IP_MSS_MCF_NUM_PERFMON_PER_INST 3U
#define T234_HWPM_IP_MSS_MCF_NUM_PERFMUX_PER_INST 8U
#define T234_HWPM_IP_MSS_MCF_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t234_hwpm_ip_mss_mcf;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_nvdec.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_nvdec_inst0_perfmon_element_static_array[
T234_HWPM_IP_NVDEC_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_nvdec_inst0_perfmux_element_static_array[
T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -91,7 +84,6 @@ static struct hwpm_ip_inst t234_nvdec_inst_static_array[
T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_nvdec_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdec_base_r(),
.range_end = addr_map_nvdec_limit_r(),
.element_stride = addr_map_nvdec_limit_r() -
@@ -135,7 +127,7 @@ static struct hwpm_ip_inst t234_nvdec_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -154,7 +146,6 @@ struct hwpm_ip t234_hwpm_ip_nvdec = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdec_base_r(),
.range_end = addr_map_nvdec_limit_r(),
.inst_stride = addr_map_nvdec_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_NVDEC_H
#define T234_HWPM_IP_NVDEC_H
#if defined(CONFIG_T234_HWPM_IP_NVDEC)
#define T234_HWPM_ACTIVE_IP_NVDEC T234_HWPM_IP_NVDEC,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_NVDEC_NUM_INSTANCES 1U
#define T234_HWPM_IP_NVDEC_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_NVDEC_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_nvdec;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_nvdla.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_nvdla_inst0_perfmon_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_nvdla_inst1_perfmon_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -79,7 +72,6 @@ static struct hwpm_ip_aperture t234_nvdla_inst0_perfmux_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -99,7 +91,6 @@ static struct hwpm_ip_aperture t234_nvdla_inst1_perfmux_element_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -132,7 +123,6 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_nvdla_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdla0_base_r(),
.range_end = addr_map_nvdla0_limit_r(),
.element_stride = addr_map_nvdla0_limit_r() -
@@ -176,11 +166,11 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = 1,
},
.element_fs_mask = 0U,
.dev_name = "",
.dev_name = "/dev/nvdladebugfs/nvdla0/hwpm/ctrl",
},
{
.hw_inst_mask = BIT(1),
@@ -196,7 +186,6 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_nvdla_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdla1_base_r(),
.range_end = addr_map_nvdla1_limit_r(),
.element_stride = addr_map_nvdla1_limit_r() -
@@ -240,11 +229,11 @@ static struct hwpm_ip_inst t234_nvdla_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = 1,
},
.element_fs_mask = 0U,
.dev_name = "",
.dev_name = "/dev/nvdladebugfs/nvdla1/hwpm/ctrl",
},
};
@@ -259,7 +248,6 @@ struct hwpm_ip t234_hwpm_ip_nvdla = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_nvdla0_base_r(),
.range_end = addr_map_nvdla1_limit_r(),
.inst_stride = addr_map_nvdla0_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_NVDLA_H
#define T234_HWPM_IP_NVDLA_H
#if defined(CONFIG_T234_HWPM_IP_NVDLA)
#define T234_HWPM_ACTIVE_IP_NVDLA T234_HWPM_IP_NVDLA,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_NVDLA_NUM_INSTANCES 2U
#define T234_HWPM_IP_NVDLA_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_NVDLA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_nvdla;

View File

@@ -127,11 +127,11 @@ static struct hwpm_ip_inst t234_nvenc_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_VALID,
.fd = -1,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvhost-debug/nvenc_hwpm",
.dev_name = "",
},
};

View File

@@ -127,11 +127,11 @@ static struct hwpm_ip_inst t234_ofa_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_VALID,
.fd = -1,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvhost-debug/ofa_hwpm",
.dev_name = "",
},
};

View File

@@ -517,7 +517,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -580,7 +580,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -643,7 +643,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -706,7 +706,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -769,7 +769,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -832,7 +832,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -895,7 +895,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -958,7 +958,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -1021,7 +1021,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -1084,7 +1084,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -1147,7 +1147,7 @@ static struct hwpm_ip_inst t234_pcie_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,

View File

@@ -128,7 +128,7 @@ static struct hwpm_ip_inst t234_pma_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0x1U,

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_pva.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_pva_inst0_perfmon_element_static_array[
T234_HWPM_IP_PVA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -54,7 +48,6 @@ static struct hwpm_ip_aperture t234_pva_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
@@ -71,7 +64,6 @@ static struct hwpm_ip_aperture t234_pva_inst0_perfmon_element_static_array[
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(0),
.element_index = 2U,
.dt_mmio = NULL,
@@ -92,7 +84,6 @@ static struct hwpm_ip_aperture t234_pva_inst0_perfmux_element_static_array[
T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -125,7 +116,6 @@ static struct hwpm_ip_inst t234_pva_inst_static_array[
T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_pva_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva0_pm_limit_r(),
.element_stride = addr_map_pva0_pm_limit_r() -
@@ -169,11 +159,11 @@ static struct hwpm_ip_inst t234_pva_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = 1,
},
.element_fs_mask = 0U,
.dev_name = "",
.dev_name = "/dev/nvpvadebugfs/pva0/hwpm",
},
};
@@ -188,7 +178,6 @@ struct hwpm_ip t234_hwpm_ip_pva = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva0_pm_limit_r(),
.inst_stride = addr_map_pva0_pm_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_PVA_H
#define T234_HWPM_IP_PVA_H
#if defined(CONFIG_T234_HWPM_IP_PVA)
#define T234_HWPM_ACTIVE_IP_PVA T234_HWPM_IP_PVA,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_PVA_NUM_INSTANCES 1U
#define T234_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_PVA_NUM_PERFMON_PER_INST 3U
#define T234_HWPM_IP_PVA_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_PVA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_pva;

View File

@@ -129,7 +129,7 @@ static struct hwpm_ip_inst t234_rtr_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0x1U,
@@ -191,7 +191,7 @@ static struct hwpm_ip_inst t234_rtr_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0x1U,

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -34,9 +34,8 @@
#define T234_HWPM_IP_RTR_NUM_BROADCAST_PER_INST 0U
#define T234_HWPM_IP_RTR_STATIC_RTR_INST 0U
#define T234_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX 0U
#define T234_HWPM_IP_RTR_STATIC_PMA_INST 1U
#define T234_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX 0U
#define T234_HWPM_IP_RTR_PERMUX_INDEX 0U
extern struct hwpm_ip t234_hwpm_ip_rtr;

View File

@@ -106,7 +106,7 @@ static struct hwpm_ip_inst t234_scf_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0x1U,

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_vi.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_vi_inst0_perfmon_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_vi_inst1_perfmon_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -79,7 +72,6 @@ static struct hwpm_ip_aperture t234_vi_inst0_perfmux_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -99,7 +91,6 @@ static struct hwpm_ip_aperture t234_vi_inst1_perfmux_element_static_array[
T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -176,7 +167,7 @@ static struct hwpm_ip_inst t234_vi_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -196,7 +187,6 @@ static struct hwpm_ip_inst t234_vi_inst_static_array[
T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_vi_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi2_thi_base_r(),
.range_end = addr_map_vi2_thi_limit_r(),
.element_stride = addr_map_vi2_thi_limit_r() -
@@ -240,7 +230,7 @@ static struct hwpm_ip_inst t234_vi_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_VI_H
#define T234_HWPM_IP_VI_H
#if defined(CONFIG_T234_HWPM_IP_VI)
#define T234_HWPM_ACTIVE_IP_VI T234_HWPM_IP_VI,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_VI_NUM_INSTANCES 2U
#define T234_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_VI_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_vi;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,19 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t234_vic.h"
#include <tegra_hwpm.h>
#include <hal/t234/t234_regops_allowlist.h>
#include <hal/t234/t234_perfmon_device_index.h>
#include <hal/t234/hw/t234_addr_map_soc_hwpm.h>
#include <hal/t234/t234_perfmon_device_index.h>
static struct hwpm_ip_aperture t234_vic_inst0_perfmon_element_static_array[
T234_HWPM_IP_VIC_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -58,7 +52,6 @@ static struct hwpm_ip_aperture t234_vic_inst0_perfmux_element_static_array[
T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
@@ -91,7 +84,6 @@ static struct hwpm_ip_inst t234_vic_inst_static_array[
T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST,
.element_static_array =
t234_vic_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.element_stride = addr_map_vic_limit_r() -
@@ -135,7 +127,7 @@ static struct hwpm_ip_inst t234_vic_inst_static_array[
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
.fd = -1,
},
.element_fs_mask = 0U,
@@ -154,7 +146,6 @@ struct hwpm_ip t234_hwpm_ip_vic = {
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.inst_stride = addr_map_vic_limit_r() -

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -19,25 +19,20 @@
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T234_HWPM_IP_VIC_H
#define T234_HWPM_IP_VIC_H
#if defined(CONFIG_T234_HWPM_IP_VIC)
#define T234_HWPM_ACTIVE_IP_VIC T234_HWPM_IP_VIC,
/* This data should ideally be available in HW headers */
#define T234_HWPM_IP_VIC_NUM_INSTANCES 1U
#define T234_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_PERFMON_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_PERFMUX_PER_INST 1U
#define T234_HWPM_IP_VIC_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t234_hwpm_ip_vic;

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -30,56 +30,41 @@
#include <hal/t234/hw/t234_pmasys_soc_hwpm.h>
#include <hal/t234/hw/t234_pmmsys_soc_hwpm.h>
int t234_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx()];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
if (rtr_perfmux_ptr != NULL) {
*rtr_perfmux_ptr = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX];
}
if (pma_perfmux_ptr != NULL) {
*pma_perfmux_ptr = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX];
}
return 0;
}
int t234_hwpm_check_status(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
u32 field_mask = 0U;
u32 field_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Check ROUTER state */
tegra_hwpm_readl(hwpm, rtr_perfmux,
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val);
hwpm_assert_print(hwpm,
(pmmsys_sys0router_enginestatus_status_v(reg_val) ==
pmmsys_sys0router_enginestatus_status_empty_v()),
return -EINVAL, "Router not ready value 0x%x", reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
if (pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()) {
tegra_hwpm_err(hwpm, "Router not ready value 0x%x", reg_val);
return -EINVAL;
}
/* Check PMA state */
field_mask = pmasys_enginestatus_status_m() |
@@ -87,12 +72,19 @@ int t234_hwpm_check_status(struct tegra_soc_hwpm *hwpm)
field_val = pmasys_enginestatus_status_empty_f() |
pmasys_enginestatus_rbufempty_empty_f();
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_enginestatus_r(), &reg_val);
hwpm_assert_print(hwpm, ((reg_val & field_mask) == field_val),
return -EINVAL, "PMA not ready value 0x%x", reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
return 0;
if ((reg_val & field_mask) != field_val) {
tegra_hwpm_err(hwpm, "PMA not ready value 0x%x", reg_val);
return -EINVAL;
}
return err;
}
int t234_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm)
@@ -101,48 +93,116 @@ int t234_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm)
u32 reg_val = 0U;
u32 field_mask = 0U;
u32 field_val = 0U;
u32 retries = 10U;
u32 sleep_msecs = 100;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_hwpm_timeout timeout;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA triggers */
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmasys_trigger_config_user_pma_pulse_m(),
pmasys_trigger_config_user_pma_pulse_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
/* Wait for PERFMONs to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_sys0router_perfmonstatus_r(), &reg_val,
(pmmsys_sys0router_perfmonstatus_merged_v(reg_val) != 0U),
"PMMSYS_SYS0ROUTER_PERFMONSTATUS_MERGED_EMPTY timed out");
err = tegra_hwpm_timeout_init(hwpm, &timeout, 10U);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm timeout init failed");
return err;
}
do {
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_perfmonstatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_msleep(sleep_msecs);
} while ((pmmsys_sys0router_perfmonstatus_merged_v(reg_val) != 0U) &&
(tegra_hwpm_timeout_expired(hwpm, &timeout) == 0));
if (pmmsys_sys0router_perfmonstatus_merged_v(reg_val) != 0U) {
tegra_hwpm_err(hwpm, "Timeout expired for "
"NV_PERF_PMMSYS_SYS0ROUTER_PERFMONSTATUS_MERGED_EMPTY");
return -ETIMEDOUT;
}
/* Wait for ROUTER to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val,
(pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()),
"PMMSYS_SYS0ROUTER_ENGINESTATUS_STATUS_EMPTY timed out");
err = tegra_hwpm_timeout_init(hwpm, &timeout, 10U);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm timeout init failed");
return err;
}
do {
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_enginestatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_msleep(sleep_msecs);
} while ((pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()) &&
(tegra_hwpm_timeout_expired(hwpm, &timeout) == 0));
if (pmmsys_sys0router_enginestatus_status_v(reg_val) !=
pmmsys_sys0router_enginestatus_status_empty_v()) {
tegra_hwpm_err(hwpm, "Timeout expired for "
"NV_PERF_PMMSYS_SYS0ROUTER_ENGINESTATUS_STATUS_EMPTY");
return -ETIMEDOUT;
}
/* Wait for PMA to idle */
err = tegra_hwpm_timeout_init(hwpm, &timeout, 10U);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm timeout init failed");
return err;
}
field_mask = pmasys_enginestatus_status_m() |
pmasys_enginestatus_rbufempty_m();
field_val = pmasys_enginestatus_status_empty_f() |
pmasys_enginestatus_rbufempty_empty_f();
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, pma_perfmux,
pmasys_enginestatus_r(), &reg_val,
((reg_val & field_mask) != field_val),
"PMASYS_ENGINESTATUS timed out");
do {
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_enginestatus_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
tegra_hwpm_msleep(sleep_msecs);
} while (((reg_val & field_mask) != field_val) &&
(tegra_hwpm_timeout_expired(hwpm, &timeout) == 0));
if ((reg_val & field_mask) != field_val) {
tegra_hwpm_err(hwpm, "Timeout expired for "
"NV_PERF_PMASYS_ENGINESTATUS");
return -ETIMEDOUT;
}
return err;
}
@@ -151,27 +211,45 @@ int t234_hwpm_init_prod_values(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_controlb_r(), &val);
err = tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_controlb_r(), &val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
val = set_field(val, pmasys_controlb_coalesce_timeout_cycles_m(),
pmasys_controlb_coalesce_timeout_cycles__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_controlb_r(), val);
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_controlb_r(), val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0), &val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
val = set_field(val,
pmasys_channel_config_user_coalesce_timeout_cycles_m(),
pmasys_channel_config_user_coalesce_timeout_cycles__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0), val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
/* CG enable is expected PROD value */
err = hwpm->active_chip->enable_cg(hwpm);
@@ -189,20 +267,32 @@ int t234_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm)
u32 field_mask = 0U;
u32 field_val = 0U;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
err = tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmasys_cg2_slcg_m(),
pmasys_cg2_slcg_disabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
field_mask = pmmsys_sys0router_cg2_slcg_perfmon_m() |
pmmsys_sys0router_cg2_slcg_router_m() |
@@ -210,11 +300,19 @@ int t234_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm)
field_val = pmmsys_sys0router_cg2_slcg_perfmon_disabled_f() |
pmmsys_sys0router_cg2_slcg_router_disabled_f() |
pmmsys_sys0router_cg2_slcg_disabled_f();
tegra_hwpm_readl(hwpm, rtr_perfmux,
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, field_mask, field_val);
tegra_hwpm_writel(hwpm, rtr_perfmux,
err = tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -225,20 +323,32 @@ int t234_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm)
u32 reg_val = 0U;
u32 field_mask = 0U;
u32 field_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *rtr_perfmux = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
err = tegra_hwpm_readl(hwpm, pma_perfmux, pmasys_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmasys_cg2_slcg_m(),
pmasys_cg2_slcg_enabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
field_mask = pmmsys_sys0router_cg2_slcg_perfmon_m() |
pmmsys_sys0router_cg2_slcg_router_m() |
@@ -246,11 +356,19 @@ int t234_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm)
field_val = pmmsys_sys0router_cg2_slcg_perfmon__prod_f() |
pmmsys_sys0router_cg2_slcg_router__prod_f() |
pmmsys_sys0router_cg2_slcg__prod_f();
tegra_hwpm_readl(hwpm, rtr_perfmux,
err = tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, field_mask, field_val);
tegra_hwpm_writel(hwpm, rtr_perfmux,
err = tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_sys0router_cg2_r(), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
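The hunks above trade between a one-line helper invocation (hwpm_assert_print() / tegra_hwpm_timeout_print()) and an explicit sequence that reads a status register, bails out if the read itself fails, sleeps, and retries until the ready condition holds or a timeout expires. A minimal, self-contained sketch of that shape is below; it uses plain C with made-up names (read_status(), poll_until()) and a stubbed register read instead of the driver's tegra_hwpm_readl()/tegra_hwpm_timeout_*() helpers, so treat it as an illustration of the pattern rather than driver code.

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Stub for a register read; a real driver would do MMIO here. */
    static int read_status(unsigned int *val)
    {
            static unsigned int fake = 3; /* counts down to 0 ("idle") */
            if (fake > 0)
                    fake--;
            *val = fake;
            return 0; /* 0 = read succeeded */
    }

    /* Poll until (val & mask) == want, or until 'retries' attempts are spent. */
    static int poll_until(unsigned int mask, unsigned int want,
                          unsigned int retries, unsigned int sleep_ms)
    {
            unsigned int val = 0;

            do {
                    int err = read_status(&val);
                    if (err != 0)
                            return err;   /* propagate the read failure */
                    if ((val & mask) == want)
                            return 0;     /* condition reached */
                    usleep(sleep_ms * 1000);
            } while (retries-- > 0);

            return -1; /* timed out, analogous to -ETIMEDOUT */
    }

    int main(void)
    {
            printf("poll result: %d\n", poll_until(0xFU, 0x0U, 10, 1));
            return 0;
    }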

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -31,7 +31,6 @@
#include <hal/t234/t234_internal.h>
static struct tegra_soc_hwpm_chip t234_chip_info = {
.la_clk_rate = 625000000,
.chip_ips = NULL,
/* HALs */
@@ -47,7 +46,6 @@ static struct tegra_soc_hwpm_chip t234_chip_info = {
.get_rtr_int_idx = t234_get_rtr_int_idx,
.get_ip_max_idx = t234_get_ip_max_idx,
.get_rtr_pma_perfmux_ptr = t234_hwpm_get_rtr_pma_perfmux_ptr,
.extract_ip_ops = t234_hwpm_extract_ip_ops,
.force_enable_ips = t234_hwpm_force_enable_ips,
@@ -58,8 +56,6 @@ static struct tegra_soc_hwpm_chip t234_chip_info = {
.init_prod_values = t234_hwpm_init_prod_values,
.disable_cg = t234_hwpm_disable_cg,
.enable_cg = t234_hwpm_enable_cg,
.credit_program = NULL,
.setup_trigger = NULL,
.reserve_rtr = tegra_hwpm_reserve_rtr,
.release_rtr = tegra_hwpm_release_rtr,
@@ -311,12 +307,12 @@ bool t234_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
return (config_ip != TEGRA_HWPM_IP_INACTIVE);
}
u32 t234_get_rtr_int_idx(void)
u32 t234_get_rtr_int_idx(struct tegra_soc_hwpm *hwpm)
{
return T234_HWPM_IP_RTR;
}
u32 t234_get_ip_max_idx(void)
u32 t234_get_ip_max_idx(struct tegra_soc_hwpm *hwpm)
{
return T234_HWPM_IP_MAX;
}
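The HAL table and the two index helpers above show the usual trade-off for callbacks kept in an ops structure: either the hooks take no arguments (fine while they only return constants) or they take the device context so later implementations can consult per-device state. A small standalone sketch of the context-taking style, with made-up names and illustrative constants, is:

    #include <stdio.h>

    struct soc_hwpm;                       /* opaque device context */

    struct chip_ops {
            /* Each hook receives the context, even if a particular chip
             * implementation (constant returns below) ignores it. */
            unsigned int (*get_rtr_idx)(struct soc_hwpm *hwpm);
            unsigned int (*get_ip_max)(struct soc_hwpm *hwpm);
    };

    static unsigned int demo_get_rtr_idx(struct soc_hwpm *hwpm)
    {
            (void)hwpm;
            return 7U;   /* illustrative constant, not a real chip index */
    }

    static unsigned int demo_get_ip_max(struct soc_hwpm *hwpm)
    {
            (void)hwpm;
            return 32U;  /* illustrative constant */
    }

    static const struct chip_ops demo_ops = {
            .get_rtr_idx = demo_get_rtr_idx,
            .get_ip_max  = demo_get_ip_max,
    };

    int main(void)
    {
            printf("rtr=%u max=%u\n",
                   demo_ops.get_rtr_idx(NULL), demo_ops.get_ip_max(NULL));
            return 0;
    }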

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -84,11 +84,8 @@ bool t234_hwpm_is_ip_active(struct tegra_soc_hwpm *hwpm,
bool t234_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
u32 res_enum, u32 *config_ip_index);
u32 t234_get_rtr_int_idx(void);
u32 t234_get_ip_max_idx(void);
int t234_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr);
u32 t234_get_rtr_int_idx(struct tegra_soc_hwpm *hwpm);
u32 t234_get_ip_max_idx(struct tegra_soc_hwpm *hwpm);
int t234_hwpm_extract_ip_ops(struct tegra_soc_hwpm *hwpm,
u32 resource_enum, u64 base_address,
@@ -114,9 +111,7 @@ int t234_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm);
int t234_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm);
int t234_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump);
int t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr);
int t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status);
u64 t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm);
bool t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm);
#endif /* T234_HWPM_INTERNAL_H */
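The prototype changes above move between two common interface styles for the memory-buffer helpers: returning the value directly (u64/bool), or returning an int status and handing the value back through an out-parameter so callers can check for errors. A short sketch of both styles, using hypothetical names and values, is:

    #include <stdint.h>
    #include <stdio.h>

    /* Style A: the value is the return; failure must be signalled some
     * other way, or the call is assumed not to fail. */
    static uint64_t get_put_ptr_direct(void)
    {
            return 0x1000ULL; /* illustrative value */
    }

    /* Style B: an int status is returned and the value goes through an
     * out-parameter, so callers can handle errors explicitly. */
    static int get_put_ptr_checked(uint64_t *put_ptr)
    {
            if (put_ptr == NULL)
                    return -1;
            *put_ptr = 0x1000ULL;
            return 0;
    }

    int main(void)
    {
            uint64_t p = get_put_ptr_direct();
            uint64_t q = 0;
            int err = get_put_ptr_checked(&q);

            printf("direct=0x%llx checked(err=%d)=0x%llx\n",
                   (unsigned long long)p, err, (unsigned long long)q);
            return 0;
    }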

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -206,12 +206,11 @@ static int t234_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
defined(CONFIG_T234_HWPM_IP_MSS_MCF)
struct hwpm_ip *chip_ip = NULL;
struct hwpm_ip_inst *ip_inst = NULL;
u32 s_inst_idx = 0U;
u32 inst_idx = 0U;
u32 element_mask_max = 0U;
#endif
u32 emc_disable_fuse_val = 0U;
u32 emc_disable_fuse_val_mask = 0xFU;
u32 emc_disable_fuse_bit_idx = 0U;
u32 emc_element_floorsweep_mask = 0U;
u32 idx = 0U;
int err;
@@ -236,16 +235,16 @@ static int t234_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
* Convert floorsweep fuse value to available EMC elements.
*/
do {
if (!(emc_disable_fuse_val & (0x1U << emc_disable_fuse_bit_idx))) {
emc_element_floorsweep_mask |=
(0xFU << (emc_disable_fuse_bit_idx * 4U));
if (emc_disable_fuse_val & 0x1U) {
emc_element_floorsweep_mask =
(emc_element_floorsweep_mask << 4U) | 0xFU;
}
emc_disable_fuse_bit_idx++;
emc_disable_fuse_val = (emc_disable_fuse_val >> 1U);
emc_disable_fuse_val_mask = (emc_disable_fuse_val_mask >> 1U);
} while (emc_disable_fuse_val_mask != 0U);
/* Set fuse value in MSS IP instances */
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
for (idx = 0U; idx < active_chip->get_ip_max_idx(hwpm); idx++) {
switch (idx) {
#if defined(CONFIG_T234_HWPM_IP_MSS_CHANNEL)
case T234_HWPM_IP_MSS_CHANNEL:
@@ -260,11 +259,10 @@ static int t234_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
defined(CONFIG_T234_HWPM_IP_MSS_ISO_NISO_HUBS) || \
defined(CONFIG_T234_HWPM_IP_MSS_MCF)
chip_ip = active_chip->chip_ips[idx];
for (s_inst_idx = 0U;
s_inst_idx < chip_ip->num_instances;
s_inst_idx++) {
for (inst_idx = 0U; inst_idx < chip_ip->num_instances;
inst_idx++) {
ip_inst = &chip_ip->ip_inst_static_array[
s_inst_idx];
inst_idx];
/*
* Hence use max element mask to get correct
@@ -364,7 +362,7 @@ int t234_hwpm_validate_current_config(struct tegra_soc_hwpm *hwpm)
return 0;
}
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
for (idx = 0U; idx < active_chip->get_ip_max_idx(hwpm); idx++) {
chip_ip = active_chip->chip_ips[idx];
if ((hwpm_global_disable !=

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -33,23 +33,41 @@
int t234_hwpm_disable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0), 0);
tegra_hwpm_writel(hwpm, pma_perfmux,
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0), 0);
tegra_hwpm_writel(hwpm, pma_perfmux,
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0), 0);
tegra_hwpm_writel(hwpm, pma_perfmux,
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0), 0);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -61,46 +79,66 @@ int t234_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
u32 outbase_hi = 0;
u32 outsize = 0;
u64 mem_bytes_addr = 0ULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
struct tegra_hwpm_mem_mgmt *mem_mgmt = hwpm->mem_mgmt;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_fn(hwpm, " ");
outbase_lo = mem_mgmt->stream_buf_va & pmasys_channel_outbase_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0), outbase_lo);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose, "OUTBASE = 0x%x", outbase_lo);
outbase_hi = (mem_mgmt->stream_buf_va >> 32) &
pmasys_channel_outbaseupper_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0), outbase_hi);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose, "OUTBASEUPPER = 0x%x", outbase_hi);
outsize = mem_mgmt->stream_buf_size &
pmasys_channel_outsize_numbytes_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0), outsize);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose, "OUTSIZE = 0x%x", outsize);
mem_bytes_addr = mem_mgmt->mem_bytes_buf_va &
pmasys_channel_mem_bytes_addr_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0), mem_bytes_addr);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_dbg(hwpm, hwpm_verbose,
"MEM_BYTES_ADDR = 0x%llx", (unsigned long long)mem_bytes_addr);
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_block_r(0),
pmasys_channel_mem_block_valid_f(
pmasys_channel_mem_block_valid_true_v()));
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -108,18 +146,24 @@ int t234_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
int t234_hwpm_invalidate_mem_config(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_channel_mem_block_r(0),
err = tegra_hwpm_writel(hwpm, pma_perfmux, pmasys_channel_mem_block_r(0),
pmasys_channel_mem_block_valid_f(
pmasys_channel_mem_block_valid_false_v()));
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -130,24 +174,34 @@ int t234_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm)
u32 reg_val = 0U;
u32 *mem_bytes_kernel_u32 =
(u32 *)(hwpm->mem_mgmt->mem_bytes_kernel);
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
*mem_bytes_kernel_u32 = TEGRA_HWPM_MEM_BYTES_INVALID;
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val,
pmasys_channel_control_user_update_bytes_m(),
pmasys_channel_control_user_update_bytes_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -156,31 +210,49 @@ int t234_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA streaming */
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val,
pmasys_trigger_config_user_record_stream_m(),
pmasys_trigger_config_user_record_stream_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_trigger_config_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val,
pmasys_channel_control_user_stream_m(),
pmasys_channel_control_user_stream_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -189,69 +261,81 @@ int t234_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
if (mem_bump > (u64)U32_MAX) {
tegra_hwpm_err(hwpm, "mem_bump is out of bounds");
return -EINVAL;
}
tegra_hwpm_writel(hwpm, pma_perfmux,
err = tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bump_r(0), mem_bump);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
int t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr)
u64 t234_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return 0ULL;
}
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0), &reg_val);
*mem_head_ptr = (u64)reg_val;
return err;
return (u64)reg_val;
}
int t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status)
bool t234_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val, field_val;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx(hwpm)];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T234_HWPM_IP_RTR_STATIC_PMA_INST];
struct hwpm_ip_aperture *pma_perfmux = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T234_HWPM_IP_RTR_PERMUX_INDEX];
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
err = tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_status_secure_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
field_val = pmasys_channel_status_secure_membuf_status_v(
reg_val);
*overflow_status = (field_val ==
pmasys_channel_status_secure_membuf_status_overflowed_v()) ?
TEGRA_HWPM_MEMBUF_OVERFLOWED : TEGRA_HWPM_MEMBUF_NOT_OVERFLOWED;
return err;
return (field_val ==
pmasys_channel_status_secure_membuf_status_overflowed_v());
}
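Throughout this file the register accessors are treated as fallible: every tegra_hwpm_readl()/tegra_hwpm_writel() result is checked and propagated. A condensed sketch of that read-modify-write pattern is below, assuming the driver's headers are in scope; the accessors, set_field() and tegra_hwpm_err() are the ones used in the hunks above, while the helper name and parameter types are illustrative.

/* Sketch only: checked read-modify-write of one aperture register. */
static int hwpm_rmw_checked(struct tegra_soc_hwpm *hwpm,
	struct hwpm_ip_aperture *aperture,
	u32 reg_offset, u32 field_mask, u32 field_val)
{
	u32 reg_val = 0U;
	int err;

	err = tegra_hwpm_readl(hwpm, aperture, reg_offset, &reg_val);
	if (err != 0) {
		tegra_hwpm_err(hwpm, "hwpm read failed");
		return err;
	}

	reg_val = set_field(reg_val, field_mask, field_val);

	err = tegra_hwpm_writel(hwpm, aperture, reg_offset, reg_val);
	if (err != 0)
		tegra_hwpm_err(hwpm, "hwpm write failed");

	return err;
}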

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -252,8 +252,11 @@ struct allowlist t234_pva0_pm_alist[9] = {
{0x00008020, true},
};
struct allowlist t234_nvdla_alist[31] = {
struct allowlist t234_nvdla_alist[37] = {
{0x00001088, false},
{0x000010a8, false},
{0x0001a000, false},
{0x0001a004, false},
{0x0001a008, true},
{0x0001a00c, true},
{0x0001a010, true},
@@ -284,6 +287,9 @@ struct allowlist t234_nvdla_alist[31] = {
{0x0001a074, true},
{0x0001a078, true},
{0x0001a07c, true},
{0x00000008, true},
{0x00000a00, true},
{0x00000a20, true},
};
struct allowlist t234_mgbe_alist[2] = {

View File

@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -35,7 +35,7 @@ extern struct allowlist t234_isp_thi_alist[7];
extern struct allowlist t234_vic_alist[9];
extern struct allowlist t234_ofa_alist[8];
extern struct allowlist t234_pva0_pm_alist[9];
extern struct allowlist t234_nvdla_alist[31];
extern struct allowlist t234_nvdla_alist[37];
extern struct allowlist t234_mgbe_alist[2];
extern struct allowlist t234_nvdec_alist[8];
extern struct allowlist t234_nvenc_alist[9];
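The NVDLA allowlist declaration changes from 31 to 37 entries to match the additions in the previous file; each entry pairs a register offset with a boolean flag. Below is a hypothetical lookup over such a table. The struct definition is a stand-in with made-up field names (reg_offset, flag), not the driver's allowlist header, and the helper is illustrative only.

#include <linux/types.h>

/* Hypothetical mirror of an allowlist entry; field names are assumed. */
struct hwpm_alist_entry {
	u64 reg_offset;
	bool flag;
};

/* Sketch only: linear scan for a register offset in an allowlist. */
static bool hwpm_offset_allowlisted(const struct hwpm_alist_entry *alist,
	size_t num_entries, u64 offset)
{
	size_t i;

	for (i = 0; i < num_entries; i++) {
		if (alist[i].reg_offset == offset)
			return true;
	}
	return false;
}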

View File

@@ -1,6 +1,6 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
@@ -33,6 +33,7 @@
int t234_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon)
{
int err = 0;
u32 reg_val;
tegra_hwpm_fn(hwpm, " ");
@@ -43,12 +44,20 @@ int t234_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa);
tegra_hwpm_readl(hwpm, perfmon,
err = tegra_hwpm_readl(hwpm, perfmon,
pmmsys_sys0_enginestatus_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmmsys_sys0_enginestatus_enable_m(),
pmmsys_sys0_enginestatus_enable_out_f());
tegra_hwpm_writel(hwpm, perfmon,
err = tegra_hwpm_writel(hwpm, perfmon,
pmmsys_sys0_enginestatus_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}
@@ -56,6 +65,7 @@ int t234_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
int t234_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon)
{
int err = 0;
u32 reg_val;
tegra_hwpm_fn(hwpm, " ");
@@ -74,10 +84,18 @@ int t234_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
(unsigned long long)perfmon->start_abs_pa,
(unsigned long long)perfmon->end_abs_pa);
tegra_hwpm_readl(hwpm, perfmon, pmmsys_control_r(0), &reg_val);
err = tegra_hwpm_readl(hwpm, perfmon, pmmsys_control_r(0), &reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm read failed");
return err;
}
reg_val = set_field(reg_val, pmmsys_control_mode_m(),
pmmsys_control_mode_disable_f());
tegra_hwpm_writel(hwpm, perfmon, pmmsys_control_r(0), reg_val);
err = tegra_hwpm_writel(hwpm, perfmon, pmmsys_control_r(0), reg_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm write failed");
return err;
}
return 0;
}

View File

@@ -1,355 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef T264_ADDR_MAP_SOC_HWPM_H
#define T264_ADDR_MAP_SOC_HWPM_H
#define addr_map_rpg_grp_system_base_r() (0x1600000U)
#define addr_map_rpg_grp_system_limit_r() (0x16fffffU)
#define addr_map_rpg_grp_ucf_base_r() (0x8101600000U)
#define addr_map_rpg_grp_ucf_limit_r() (0x81016fffffU)
#define addr_map_rpg_grp_vision_base_r() (0x8181600000U)
#define addr_map_rpg_grp_vision_limit_r() (0x81816fffffU)
#define addr_map_rpg_grp_disp_usb_base_r() (0x8801600000U)
#define addr_map_rpg_grp_disp_usb_limit_r() (0x88016fffffU)
#define addr_map_rpg_grp_uphy0_base_r() (0xa801600000U)
#define addr_map_rpg_grp_uphy0_limit_r() (0xa8016fffffU)
#define addr_map_rpg_pm_hwpm_base_r() (0x1604000U)
#define addr_map_rpg_pm_hwpm_limit_r() (0x1604fffU)
#define addr_map_pma_base_r() (0x1610000U)
#define addr_map_pma_limit_r() (0x1611fffU)
#define addr_map_rtr_base_r() (0x1612000U)
#define addr_map_rtr_limit_r() (0x1612fffU)
#define addr_map_rpg_pm_mss0_base_r() (0x8101621000U)
#define addr_map_rpg_pm_mss0_limit_r() (0x8101621fffU)
#define addr_map_rpg_pm_mss1_base_r() (0x8101622000U)
#define addr_map_rpg_pm_mss1_limit_r() (0x8101622fffU)
#define addr_map_rpg_pm_mss2_base_r() (0x8101623000U)
#define addr_map_rpg_pm_mss2_limit_r() (0x8101623fffU)
#define addr_map_rpg_pm_mss3_base_r() (0x8101624000U)
#define addr_map_rpg_pm_mss3_limit_r() (0x8101624fffU)
#define addr_map_rpg_pm_mss4_base_r() (0x8101625000U)
#define addr_map_rpg_pm_mss4_limit_r() (0x8101625fffU)
#define addr_map_rpg_pm_mss5_base_r() (0x8101626000U)
#define addr_map_rpg_pm_mss5_limit_r() (0x8101626fffU)
#define addr_map_rpg_pm_mss6_base_r() (0x8101627000U)
#define addr_map_rpg_pm_mss6_limit_r() (0x8101627fffU)
#define addr_map_rpg_pm_mss7_base_r() (0x8101628000U)
#define addr_map_rpg_pm_mss7_limit_r() (0x8101628fffU)
#define addr_map_rpg_pm_mss8_base_r() (0x8101629000U)
#define addr_map_rpg_pm_mss8_limit_r() (0x8101629fffU)
#define addr_map_rpg_pm_mss9_base_r() (0x810162a000U)
#define addr_map_rpg_pm_mss9_limit_r() (0x810162afffU)
#define addr_map_rpg_pm_mss10_base_r() (0x810162b000U)
#define addr_map_rpg_pm_mss10_limit_r() (0x810162bfffU)
#define addr_map_rpg_pm_mss11_base_r() (0x810162c000U)
#define addr_map_rpg_pm_mss11_limit_r() (0x810162cfffU)
#define addr_map_rpg_pm_mss12_base_r() (0x810162d000U)
#define addr_map_rpg_pm_mss12_limit_r() (0x810162dfffU)
#define addr_map_rpg_pm_mss13_base_r() (0x810162e000U)
#define addr_map_rpg_pm_mss13_limit_r() (0x810162efffU)
#define addr_map_rpg_pm_mss14_base_r() (0x810162f000U)
#define addr_map_rpg_pm_mss14_limit_r() (0x810162ffffU)
#define addr_map_rpg_pm_mss15_base_r() (0x8101630000U)
#define addr_map_rpg_pm_mss15_limit_r() (0x8101630fffU)
#define addr_map_mcb_base_r() (0x8108020000U)
#define addr_map_mcb_limit_r() (0x810803ffffU)
#define addr_map_mc0_base_r() (0x8108040000U)
#define addr_map_mc0_limit_r() (0x810805ffffU)
#define addr_map_mc1_base_r() (0x8108060000U)
#define addr_map_mc1_limit_r() (0x810807ffffU)
#define addr_map_mc2_base_r() (0x8108080000U)
#define addr_map_mc2_limit_r() (0x810809ffffU)
#define addr_map_mc3_base_r() (0x81080a0000U)
#define addr_map_mc3_limit_r() (0x81080bffffU)
#define addr_map_mc4_base_r() (0x81080c0000U)
#define addr_map_mc4_limit_r() (0x81080dffffU)
#define addr_map_mc5_base_r() (0x81080e0000U)
#define addr_map_mc5_limit_r() (0x81080fffffU)
#define addr_map_mc6_base_r() (0x8108100000U)
#define addr_map_mc6_limit_r() (0x810811ffffU)
#define addr_map_mc7_base_r() (0x8108120000U)
#define addr_map_mc7_limit_r() (0x810813ffffU)
#define addr_map_mc8_base_r() (0x8108140000U)
#define addr_map_mc8_limit_r() (0x810815ffffU)
#define addr_map_mc9_base_r() (0x8108160000U)
#define addr_map_mc9_limit_r() (0x810817ffffU)
#define addr_map_mc10_base_r() (0x8108180000U)
#define addr_map_mc10_limit_r() (0x810819ffffU)
#define addr_map_mc11_base_r() (0x81081a0000U)
#define addr_map_mc11_limit_r() (0x81081bffffU)
#define addr_map_mc12_base_r() (0x81081c0000U)
#define addr_map_mc12_limit_r() (0x81081dffffU)
#define addr_map_mc13_base_r() (0x81081e0000U)
#define addr_map_mc13_limit_r() (0x81081fffffU)
#define addr_map_mc14_base_r() (0x8108200000U)
#define addr_map_mc14_limit_r() (0x810821ffffU)
#define addr_map_mc15_base_r() (0x8108220000U)
#define addr_map_mc15_limit_r() (0x810823ffffU)
#define addr_map_rpg_pm_pvac0_base_r() (0x8181605000U)
#define addr_map_rpg_pm_pvac0_limit_r() (0x8181605fffU)
#define addr_map_rpg_pm_pvav0_base_r() (0x8181606000U)
#define addr_map_rpg_pm_pvav0_limit_r() (0x8181606fffU)
#define addr_map_rpg_pm_pvav1_base_r() (0x8181607000U)
#define addr_map_rpg_pm_pvav1_limit_r() (0x8181607fffU)
#define addr_map_rpg_pm_pvap0_base_r() (0x818160e000U)
#define addr_map_rpg_pm_pvap0_limit_r() (0x818160efffU)
#define addr_map_rpg_pm_pvap1_base_r() (0x818160f000U)
#define addr_map_rpg_pm_pvap1_limit_r() (0x818160ffffU)
#define addr_map_pva0_pm_base_r() (0x818c200000U)
#define addr_map_pva0_pm_limit_r() (0x818c20ffffU)
#define addr_map_pva1_pm_base_r() (0x818cb00000U)
#define addr_map_pva1_pm_limit_r() (0x818cb0ffffU)
#define addr_map_rpg_pm_vic0_base_r() (0x8181604000U)
#define addr_map_rpg_pm_vic0_limit_r() (0x8181604fffU)
#define addr_map_vic_base_r() (0x8188050000U)
#define addr_map_vic_limit_r() (0x818808ffffU)
#define addr_map_rpg_pm_system_msshub0_base_r() (0x1600000U)
#define addr_map_rpg_pm_system_msshub0_limit_r() (0x1600fffU)
#define addr_map_rpg_pm_ucf_msshub0_base_r() (0x810163e000U)
#define addr_map_rpg_pm_ucf_msshub0_limit_r() (0x810163efffU)
#define addr_map_rpg_pm_ucf_msshub1_base_r() (0x810163f000U)
#define addr_map_rpg_pm_ucf_msshub1_limit_r() (0x810163ffffU)
#define addr_map_rpg_pm_ucf_msshub2_base_r() (0x810164f000U)
#define addr_map_rpg_pm_ucf_msshub2_limit_r() (0x810164ffffU)
#define addr_map_rpg_pm_vision_msshub0_base_r() (0x818160b000U)
#define addr_map_rpg_pm_vision_msshub0_limit_r() (0x818160bfffU)
#define addr_map_rpg_pm_vision_msshub1_base_r() (0x818160c000U)
#define addr_map_rpg_pm_vision_msshub1_limit_r() (0x818160cfffU)
#define addr_map_rpg_pm_disp_usb_msshub0_base_r() (0x8801601000U)
#define addr_map_rpg_pm_disp_usb_msshub0_limit_r() (0x8801601fffU)
#define addr_map_rpg_pm_uphy0_msshub0_base_r() (0xa801628000U)
#define addr_map_rpg_pm_uphy0_msshub0_limit_r() (0xa801628fffU)
#define addr_map_rpg_pm_uphy0_msshub1_base_r() (0xa801629000U)
#define addr_map_rpg_pm_uphy0_msshub1_limit_r() (0xa801629fffU)
#define addr_map_rpg_pm_ocu_base_r() (0xa801604000U)
#define addr_map_rpg_pm_ocu_limit_r() (0xa801604fffU)
#define addr_map_ocu_base_r() (0xa808740000U)
#define addr_map_ocu_limit_r() (0xa80874ffffU)
#define addr_map_rpg_pm_ucf_smmu0_base_r() (0x8101642000U)
#define addr_map_rpg_pm_ucf_smmu0_limit_r() (0x8101642fffU)
#define addr_map_rpg_pm_ucf_smmu1_base_r() (0x8101643000U)
#define addr_map_rpg_pm_ucf_smmu1_limit_r() (0x8101643fffU)
#define addr_map_rpg_pm_ucf_smmu3_base_r() (0x810164b000U)
#define addr_map_rpg_pm_ucf_smmu3_limit_r() (0x810164bfffU)
#define addr_map_rpg_pm_ucf_smmu2_base_r() (0x8101653000U)
#define addr_map_rpg_pm_ucf_smmu2_limit_r() (0x8101653fffU)
#define addr_map_rpg_pm_disp_usb_smmu0_base_r() (0x8801602000U)
#define addr_map_rpg_pm_disp_usb_smmu0_limit_r() (0x8801602fffU)
#define addr_map_smmu1_base_r() (0x8105a30000U)
#define addr_map_smmu1_limit_r() (0x8105a3ffffU)
#define addr_map_smmu2_base_r() (0x8106a30000U)
#define addr_map_smmu2_limit_r() (0x8106a3ffffU)
#define addr_map_smmu0_base_r() (0x810aa30000U)
#define addr_map_smmu0_limit_r() (0x810aa3ffffU)
#define addr_map_smmu4_base_r() (0x810ba30000U)
#define addr_map_smmu4_limit_r() (0x810ba3ffffU)
#define addr_map_smmu3_base_r() (0x8806a30000U)
#define addr_map_smmu3_limit_r() (0x8806a3ffffU)
#define addr_map_rpg_pm_ucf_msw0_base_r() (0x8101600000U)
#define addr_map_rpg_pm_ucf_msw0_limit_r() (0x8101600fffU)
#define addr_map_rpg_pm_ucf_msw1_base_r() (0x8101601000U)
#define addr_map_rpg_pm_ucf_msw1_limit_r() (0x8101601fffU)
#define addr_map_rpg_pm_ucf_msw2_base_r() (0x8101602000U)
#define addr_map_rpg_pm_ucf_msw2_limit_r() (0x8101602fffU)
#define addr_map_rpg_pm_ucf_msw3_base_r() (0x8101603000U)
#define addr_map_rpg_pm_ucf_msw3_limit_r() (0x8101603fffU)
#define addr_map_rpg_pm_ucf_msw4_base_r() (0x8101604000U)
#define addr_map_rpg_pm_ucf_msw4_limit_r() (0x8101604fffU)
#define addr_map_rpg_pm_ucf_msw5_base_r() (0x8101605000U)
#define addr_map_rpg_pm_ucf_msw5_limit_r() (0x8101605fffU)
#define addr_map_rpg_pm_ucf_msw6_base_r() (0x8101606000U)
#define addr_map_rpg_pm_ucf_msw6_limit_r() (0x8101606fffU)
#define addr_map_rpg_pm_ucf_msw7_base_r() (0x8101607000U)
#define addr_map_rpg_pm_ucf_msw7_limit_r() (0x8101607fffU)
#define addr_map_rpg_pm_ucf_msw8_base_r() (0x8101608000U)
#define addr_map_rpg_pm_ucf_msw8_limit_r() (0x8101608fffU)
#define addr_map_rpg_pm_ucf_msw9_base_r() (0x8101609000U)
#define addr_map_rpg_pm_ucf_msw9_limit_r() (0x8101609fffU)
#define addr_map_rpg_pm_ucf_msw10_base_r() (0x810160a000U)
#define addr_map_rpg_pm_ucf_msw10_limit_r() (0x810160afffU)
#define addr_map_rpg_pm_ucf_msw11_base_r() (0x810160b000U)
#define addr_map_rpg_pm_ucf_msw11_limit_r() (0x810160bfffU)
#define addr_map_rpg_pm_ucf_msw12_base_r() (0x810160c000U)
#define addr_map_rpg_pm_ucf_msw12_limit_r() (0x810160cfffU)
#define addr_map_rpg_pm_ucf_msw13_base_r() (0x810160d000U)
#define addr_map_rpg_pm_ucf_msw13_limit_r() (0x810160dfffU)
#define addr_map_rpg_pm_ucf_msw14_base_r() (0x810160e000U)
#define addr_map_rpg_pm_ucf_msw14_limit_r() (0x810160efffU)
#define addr_map_rpg_pm_ucf_msw15_base_r() (0x810160f000U)
#define addr_map_rpg_pm_ucf_msw15_limit_r() (0x810160ffffU)
#define addr_map_ucf_msn0_msw_base_r() (0x8128000000U)
#define addr_map_ucf_msn0_msw_limit_r() (0x8128000080U)
#define addr_map_ucf_msn1_msw_base_r() (0x8128200000U)
#define addr_map_ucf_msn1_msw_limit_r() (0x8128200080U)
#define addr_map_ucf_msn2_msw_base_r() (0x8128400000U)
#define addr_map_ucf_msn2_msw_limit_r() (0x8128400080U)
#define addr_map_ucf_msn3_msw_base_r() (0x8128600000U)
#define addr_map_ucf_msn3_msw_limit_r() (0x8128600080U)
#define addr_map_ucf_msn4_msw_base_r() (0x8128800000U)
#define addr_map_ucf_msn4_msw_limit_r() (0x8128800080U)
#define addr_map_ucf_msn5_msw_base_r() (0x8128a00000U)
#define addr_map_ucf_msn5_msw_limit_r() (0x8128a00080U)
#define addr_map_ucf_msn6_msw_base_r() (0x8128c00000U)
#define addr_map_ucf_msn6_msw_limit_r() (0x8128c00080U)
#define addr_map_ucf_msn7_msw_base_r() (0x8128e00000U)
#define addr_map_ucf_msn7_msw_limit_r() (0x8128e00080U)
#define addr_map_ucf_msn0_slice0_base_r() (0x812a040000U)
#define addr_map_ucf_msn0_slice0_limit_r() (0x812a040080U)
#define addr_map_ucf_msn0_slice1_base_r() (0x812a140000U)
#define addr_map_ucf_msn0_slice1_limit_r() (0x812a140080U)
#define addr_map_ucf_msn1_slice0_base_r() (0x812a240000U)
#define addr_map_ucf_msn1_slice0_limit_r() (0x812a240080U)
#define addr_map_ucf_msn1_slice1_base_r() (0x812a340000U)
#define addr_map_ucf_msn1_slice1_limit_r() (0x812a340080U)
#define addr_map_ucf_msn2_slice0_base_r() (0x812a440000U)
#define addr_map_ucf_msn2_slice0_limit_r() (0x812a440080U)
#define addr_map_ucf_msn2_slice1_base_r() (0x812a540000U)
#define addr_map_ucf_msn2_slice1_limit_r() (0x812a540080U)
#define addr_map_ucf_msn3_slice0_base_r() (0x812a640000U)
#define addr_map_ucf_msn3_slice0_limit_r() (0x812a640080U)
#define addr_map_ucf_msn3_slice1_base_r() (0x812a740000U)
#define addr_map_ucf_msn3_slice1_limit_r() (0x812a740080U)
#define addr_map_ucf_msn4_slice0_base_r() (0x812a840000U)
#define addr_map_ucf_msn4_slice0_limit_r() (0x812a840080U)
#define addr_map_ucf_msn4_slice1_base_r() (0x812a940000U)
#define addr_map_ucf_msn4_slice1_limit_r() (0x812a940080U)
#define addr_map_ucf_msn5_slice0_base_r() (0x812aa40000U)
#define addr_map_ucf_msn5_slice0_limit_r() (0x812aa40080U)
#define addr_map_ucf_msn5_slice1_base_r() (0x812ab40000U)
#define addr_map_ucf_msn5_slice1_limit_r() (0x812ab40080U)
#define addr_map_ucf_msn6_slice0_base_r() (0x812ac40000U)
#define addr_map_ucf_msn6_slice0_limit_r() (0x812ac40080U)
#define addr_map_ucf_msn6_slice1_base_r() (0x812ad40000U)
#define addr_map_ucf_msn6_slice1_limit_r() (0x812ad40080U)
#define addr_map_ucf_msn7_slice0_base_r() (0x812ae40000U)
#define addr_map_ucf_msn7_slice0_limit_r() (0x812ae40080U)
#define addr_map_ucf_msn7_slice1_base_r() (0x812af40000U)
#define addr_map_ucf_msn7_slice1_limit_r() (0x812af40080U)
#define addr_map_rpg_pm_ucf_psw0_base_r() (0x8101644000U)
#define addr_map_rpg_pm_ucf_psw0_limit_r() (0x8101644fffU)
#define addr_map_rpg_pm_ucf_psw1_base_r() (0x8101645000U)
#define addr_map_rpg_pm_ucf_psw1_limit_r() (0x8101645fffU)
#define addr_map_rpg_pm_ucf_psw2_base_r() (0x8101646000U)
#define addr_map_rpg_pm_ucf_psw2_limit_r() (0x8101646fffU)
#define addr_map_rpg_pm_ucf_psw3_base_r() (0x8101647000U)
#define addr_map_rpg_pm_ucf_psw3_limit_r() (0x8101647fffU)
#define addr_map_ucf_psn0_psw_base_r() (0x8130080000U)
#define addr_map_ucf_psn0_psw_limit_r() (0x8130080020U)
#define addr_map_ucf_psn1_psw_base_r() (0x8130480000U)
#define addr_map_ucf_psn1_psw_limit_r() (0x8130480020U)
#define addr_map_ucf_psn2_psw_base_r() (0x8130880000U)
#define addr_map_ucf_psn2_psw_limit_r() (0x8130880020U)
#define addr_map_ucf_psn3_psw_base_r() (0x8130c80000U)
#define addr_map_ucf_psn3_psw_limit_r() (0x8130c80020U)
#define addr_map_rpg_pm_ucf_vddmss0_base_r() (0x8101631000U)
#define addr_map_rpg_pm_ucf_vddmss0_limit_r() (0x8101631fffU)
#define addr_map_rpg_pm_ucf_vddmss1_base_r() (0x8101632000U)
#define addr_map_rpg_pm_ucf_vddmss1_limit_r() (0x8101632fffU)
#define addr_map_ucf_csw0_base_r() (0x8122000000U)
#define addr_map_ucf_csw0_limit_r() (0x8122000080U)
#define addr_map_ucf_csw1_base_r() (0x8122400000U)
#define addr_map_ucf_csw1_limit_r() (0x8122400080U)
#define addr_map_rpg_pm_cpu_core_base_r() (0x14100000U)
#define addr_map_rpg_pm_cpu_core_base_width_v() (0x00000014U)
#define addr_map_cpucore0_base_r() (0x8132030000U)
#define addr_map_cpucore0_base_size_v() (0x00001000U)
#define addr_map_cpucore1_base_r() (0x8132130000U)
#define addr_map_cpucore1_base_size_v() (0x00001000U)
#define addr_map_cpucore2_base_r() (0x8132230000U)
#define addr_map_cpucore2_base_size_v() (0x00001000U)
#define addr_map_cpucore3_base_r() (0x8132330000U)
#define addr_map_cpucore3_base_size_v() (0x00001000U)
#define addr_map_cpucore4_base_r() (0x8132430000U)
#define addr_map_cpucore4_base_size_v() (0x00001000U)
#define addr_map_cpucore5_base_r() (0x8132530000U)
#define addr_map_cpucore5_base_size_v() (0x00001000U)
#define addr_map_cpucore6_base_r() (0x8132630000U)
#define addr_map_cpucore6_base_size_v() (0x00001000U)
#define addr_map_cpucore7_base_r() (0x8132730000U)
#define addr_map_cpucore7_base_size_v() (0x00001000U)
#define addr_map_cpucore8_base_r() (0x8132830000U)
#define addr_map_cpucore8_base_size_v() (0x00001000U)
#define addr_map_cpucore9_base_r() (0x8132930000U)
#define addr_map_cpucore9_base_size_v() (0x00001000U)
#define addr_map_cpucore10_base_r() (0x8132a30000U)
#define addr_map_cpucore10_base_size_v() (0x00001000U)
#define addr_map_cpucore11_base_r() (0x8132b30000U)
#define addr_map_cpucore11_base_size_v() (0x00001000U)
#define addr_map_cpucore12_base_r() (0x8132c30000U)
#define addr_map_cpucore12_base_size_v() (0x00001000U)
#define addr_map_cpucore13_base_r() (0x8132d30000U)
#define addr_map_cpucore13_base_size_v() (0x00001000U)
#define addr_map_rpg_pm_vi0_base_r() (0x8181600000U)
#define addr_map_rpg_pm_vi0_limit_r() (0x8181600fffU)
#define addr_map_rpg_pm_vi1_base_r() (0x8181601000U)
#define addr_map_rpg_pm_vi1_limit_r() (0x8181601fffU)
#define addr_map_vi_thi_base_r() (0x8188700000U)
#define addr_map_vi_thi_limit_r() (0x81887fffffU)
#define addr_map_vi2_thi_base_r() (0x8188f00000U)
#define addr_map_vi2_thi_limit_r() (0x8188ffffffU)
#define addr_map_rpg_pm_isp0_base_r() (0x8181602000U)
#define addr_map_rpg_pm_isp0_limit_r() (0x8181602fffU)
#define addr_map_rpg_pm_isp1_base_r() (0x8181603000U)
#define addr_map_rpg_pm_isp1_limit_r() (0x8181603fffU)
#define addr_map_isp_thi_base_r() (0x8188b00000U)
#define addr_map_isp_thi_limit_r() (0x8188bfffffU)
#define addr_map_isp1_thi_base_r() (0x818ab00000U)
#define addr_map_isp1_thi_limit_r() (0x818abfffffU)
#define addr_map_pmc_misc_base_r() (0xc9c0000U)
#endif /* T264_ADDR_MAP_SOC_HWPM_H */
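Every aperture in this generated map is described by a base/limit pair, so its size is simply limit - base + 1. A small standalone check follows, assuming the header above is included; the program itself is illustrative, not driver code.

#include <stdio.h>

int main(void)
{
	/* PMA: 0x1610000..0x1611fff = 0x2000 bytes (8 KiB). */
	unsigned long long pma_size =
		addr_map_pma_limit_r() - addr_map_pma_base_r() + 1ULL;
	/* RTR: 0x1612000..0x1612fff = 0x1000 bytes (4 KiB). */
	unsigned long long rtr_size =
		addr_map_rtr_limit_r() - addr_map_rtr_base_r() + 1ULL;

	printf("pma: 0x%llx bytes, rtr: 0x%llx bytes\n", pma_size, rtr_size);
	return 0;
}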

View File

@@ -1,192 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef T264_PMASYS_SOC_HWPM_H
#define T264_PMASYS_SOC_HWPM_H
#define pmasys_channel_control_user_r(i,j)\
(0x1610a10U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_control_user_update_bytes_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_control_user_update_bytes_m() (0x1U << 16U)
#define pmasys_channel_control_user_update_bytes_doit_v() (0x00000001U)
#define pmasys_channel_control_user_update_bytes_doit_f() (0x10000U)
#define pmasys_channel_control_user_membuf_clear_status_m() (0x1U << 1U)
#define pmasys_channel_control_user_membuf_clear_status_doit_f() (0x2U)
#define pmasys_channel_mem_bump_r(i,j) (0x1610a14U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outbase_r(i,j) (0x1610a28U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outbase_ptr_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outbase_ptr_m() (0x7ffffffU << 5U)
#define pmasys_channel_outbase_ptr_v(r) (((r) >> 5U) & 0x7ffffffU)
#define pmasys_channel_outbase_ptr_init_f() (0x0U)
#define pmasys_channel_outbaseupper_r(i,j)\
(0x1610a2cU + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outbaseupper_ptr_f(v) (((v) & 0x1ffffffU) << 0U)
#define pmasys_channel_outbaseupper_ptr_m() (0x1ffffffU << 0U)
#define pmasys_channel_outbaseupper_ptr_v(r) (((r) >> 0U) & 0x1ffffffU)
#define pmasys_channel_outbaseupper_ptr_init_f() (0x0U)
#define pmasys_channel_outsize_r(i,j) (0x1610a30U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_outsize_numbytes_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outsize_numbytes_m() (0x7ffffffU << 5U)
#define pmasys_channel_outsize_numbytes_init_f() (0x0U)
#define pmasys_channel_mem_head_r(i,j) (0x1610a34U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_mem_head_ptr_m() (0xfffffffU << 4U)
#define pmasys_channel_mem_head_ptr_init_f() (0x0U)
#define pmasys_channel_mem_bytes_r(i,j)\
(0x1610a38U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_mem_bytes_numbytes_m() (0xfffffffU << 4U)
#define pmasys_channel_mem_bytes_numbytes_init_f() (0x0U)
#define pmasys_channel_mem_bytes_addr_r(i,j)\
(0x1610a3cU + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_mem_bytes_addr_ptr_f(v) (((v) & 0x3fffffffU) << 2U)
#define pmasys_channel_mem_bytes_addr_ptr_m() (0x3fffffffU << 2U)
#define pmasys_channel_mem_bytes_addr_ptr_init_f() (0x0U)
#define pmasys_cblock_bpc_mem_block_r(i) (0x1611e04U + ((i)*32U))
#define pmasys_cblock_bpc_mem_block_base_m() (0xffffffffU << 0U)
#define pmasys_cblock_bpc_mem_blockupper_r(i) (0x1611e08U + ((i)*32U))
#define pmasys_cblock_bpc_mem_blockupper_valid_f(v) (((v) & 0x1U) << 31U)
#define pmasys_cblock_bpc_mem_blockupper_valid_false_v() (0x00000000U)
#define pmasys_cblock_bpc_mem_blockupper_valid_true_v() (0x00000001U)
#define pmasys_channel_config_user_r(i,j)\
(0x1610a24U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_config_user_stream_f(v) (((v) & 0x1U) << 0U)
#define pmasys_channel_config_user_stream_m() (0x1U << 0U)
#define pmasys_channel_config_user_stream_disable_f() (0x0U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_f(v)\
(((v) & 0x7U) << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_m() (0x7U << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_v()\
(0x00000004U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_f()\
(0x4000000U)
#define pmasys_channel_status_r(i,j) (0x1610a00U + ((i) * 128U)) + ((j) * 64U)
#define pmasys_channel_status_engine_status_m() (0x7U << 0U)
#define pmasys_channel_status_engine_status_empty_v() (0x00000000U)
#define pmasys_channel_status_engine_status_empty_f() (0x0U)
#define pmasys_channel_status_engine_status_active_v() (0x00000001U)
#define pmasys_channel_status_engine_status_paused_v() (0x00000002U)
#define pmasys_channel_status_engine_status_quiescent_v() (0x00000003U)
#define pmasys_channel_status_engine_status_stalled_v() (0x00000005U)
#define pmasys_channel_status_engine_status_faulted_v() (0x00000006U)
#define pmasys_channel_status_engine_status_halted_v() (0x00000007U)
#define pmasys_channel_status_membuf_status_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_status_membuf_status_m() (0x1U << 16U)
#define pmasys_channel_status_membuf_status_v(r) (((r) >> 16U) & 0x1U)
#define pmasys_channel_status_membuf_status_overflowed_v() (0x00000001U)
#define pmasys_channel_status_membuf_status_init_f() (0x0U)
#define pmasys_command_slice_trigger_start_mask0_r(i) (0x1611128U + ((i)*144U))
#define pmasys_command_slice_trigger_start_mask0_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_start_mask0_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_start_mask1_r(i) (0x161112cU + ((i)*144U))
#define pmasys_command_slice_trigger_start_mask1_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_start_mask1_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_stop_mask0_r(i) (0x1611130U + ((i)*144U))
#define pmasys_command_slice_trigger_stop_mask0_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_stop_mask0_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_stop_mask1_r(i) (0x1611134U + ((i)*144U))
#define pmasys_command_slice_trigger_stop_mask1_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_trigger_stop_mask1_engine_init_f() (0x0U)
#define pmasys_command_slice_trigger_config_user_r(i) (0x161111cU + ((i)*144U))
#define pmasys_command_slice_trigger_config_user_pma_pulse_f(v)\
(((v) & 0x1U) << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_m() (0x1U << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_f() (0x0U)
#define pmasys_command_slice_trigger_config_user_record_stream_f(v)\
(((v) & 0x1U) << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_m() (0x1U << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_f()\
(0x0U)
#define pmasys_streaming_capabilities1_r() (0x16109f4U)
#define pmasys_streaming_capabilities1_local_credits_f(v) (((v) & 0x1ffU) << 0U)
#define pmasys_streaming_capabilities1_local_credits_m() (0x1ffU << 0U)
#define pmasys_streaming_capabilities1_local_credits_init_v() (0x00000100U)
#define pmasys_streaming_capabilities1_total_credits_f(v) (((v) & 0x7ffU) << 9U)
#define pmasys_streaming_capabilities1_total_credits_m() (0x7ffU << 9U)
#define pmasys_streaming_capabilities1_total_credits_v(r) (((r) >> 9U) & 0x7ffU)
#define pmasys_streaming_capabilities1_total_credits_init_f() (0x20000U)
#define pmasys_command_slice_trigger_mask_secure0_r(i) (0x1611110U + ((i)*144U))
#define pmasys_command_slice_trigger_mask_secure0_engine_f(v)\
(((v) & 0xffffffffU) << 0U)
#define pmasys_command_slice_trigger_mask_secure0_engine_m() (0xffffffffU << 0U)
#define pmasys_command_slice_record_select_secure_r(i) (0x1611180U + ((i)*144U))
#define pmasys_command_slice_record_select_secure_trigger_select_f(v)\
(((v) & 0x3fU) << 0U)
#define pmasys_command_slice_record_select_secure_trigger_select_m()\
(0x3fU << 0U)
#define pmasys_profiling_cg2_secure_r() (0x1610844U)
#define pmasys_profiling_cg2_secure_slcg_f(v) (((v) & 0x1U) << 0U)
#define pmasys_profiling_cg2_secure_slcg_m() (0x1U << 0U)
#define pmasys_profiling_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmasys_profiling_cg2_secure_slcg_enabled_f() (0x0U)
#define pmasys_profiling_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmasys_profiling_cg2_secure_slcg__prod_f() (0x0U)
#define pmasys_profiling_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmasys_profiling_cg2_secure_slcg_disabled_f() (0x1U)
#define pmasys_profiling_cg1_secure_r() (0x1610848U)
#define pmasys_profiling_cg1_secure_flcg_f(v) (((v) & 0x1U) << 31U)
#define pmasys_profiling_cg1_secure_flcg_m() (0x1U << 31U)
#define pmasys_profiling_cg1_secure_flcg_enabled_v() (0x00000001U)
#define pmasys_profiling_cg1_secure_flcg_enabled_f() (0x80000000U)
#define pmasys_profiling_cg1_secure_flcg__prod_v() (0x00000001U)
#define pmasys_profiling_cg1_secure_flcg__prod_f() (0x80000000U)
#define pmasys_profiling_cg1_secure_flcg_disabled_v() (0x00000000U)
#define pmasys_profiling_cg1_secure_flcg_disabled_f() (0x0U)
#endif /* T264_PMASYS_SOC_HWPM_H */
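Following the naming convention documented at the top of this header, a _v(r) helper extracts a field from a raw register value and a _<z>_v() helper gives the constant to compare it against. A minimal decode sketch using only macros defined above; the helper name is illustrative, and reg_val would normally come from a read of pmasys_channel_status_r(i, j).

/* Sketch only: decode the membuf overflow flag from a raw status word. */
static bool pmasys_channel_membuf_overflowed(u32 reg_val)
{
	return pmasys_channel_status_membuf_status_v(reg_val) ==
		pmasys_channel_status_membuf_status_overflowed_v();
}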

View File

@@ -1,170 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef T264_PMMSYS_SOC_HWPM_H
#define T264_PMMSYS_SOC_HWPM_H
#define pmmsys_perdomain_offset_v() (0x00001000U)
#define pmmsys_user_channel_register_stride_v() (0x00000020U)
#define pmmsys_num_user_command_slices_v() (0x00000002U)
#define pmmsys_num_cblocks_v() (0x00000001U)
#define pmmsys_num_streaming_channels_v() (0x00000002U)
#define pmmsys_num_channels_per_cblock_v() (0x00000002U)
#define pmmsys_cblock_stride_v() (0x00000020U)
#define pmmsys_channel_stride_v() (0x00000010U)
#define pmmsys_dg_bitmap_array_size_v() (0x00000008U)
#define pmmsys_control_r(i) (0x160009cU + ((i)*4096U))
#define pmmsys_control_mode_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_control_mode_m() (0x7U << 0U)
#define pmmsys_control_mode_disable_v() (0x00000000U)
#define pmmsys_control_mode_disable_f() (0x0U)
#define pmmsys_control_mode_a_v() (0x00000001U)
#define pmmsys_control_mode_b_v() (0x00000002U)
#define pmmsys_control_mode_c_v() (0x00000003U)
#define pmmsys_control_mode_e_v() (0x00000005U)
#define pmmsys_control_mode_null_v() (0x00000007U)
#define pmmsys_control_o() (0x9cU)
#define pmmsys_enginestatus_r(i) (0x16000c8U + ((i)*4096U))
#define pmmsys_enginestatus_enable_f(v) (((v) & 0x1U) << 8U)
#define pmmsys_enginestatus_enable_m() (0x1U << 8U)
#define pmmsys_enginestatus_enable_out_v() (0x00000001U)
#define pmmsys_enginestatus_enable_out_f() (0x100U)
#define pmmsys_enginestatus_o() (0xc8U)
#define pmmsys_secure_config_r(i) (0x160012cU + ((i)*4096U))
#define pmmsys_secure_config_o() (0x12cU)
#define pmmsys_secure_config_cmd_slice_id_f(v) (((v) & 0x1fU) << 0U)
#define pmmsys_secure_config_cmd_slice_id_m() (0x1fU << 0U)
#define pmmsys_secure_config_channel_id_f(v) (((v) & 0x3U) << 8U)
#define pmmsys_secure_config_channel_id_m() (0x3U << 8U)
#define pmmsys_secure_config_cblock_id_f(v) (((v) & 0xfU) << 11U)
#define pmmsys_secure_config_cblock_id_m() (0xfU << 11U)
#define pmmsys_secure_config_dg_idx_v(r) (((r) >> 16U) & 0xffU)
#define pmmsys_secure_config_mapped_f(v) (((v) & 0x1U) << 28U)
#define pmmsys_secure_config_mapped_m() (0x1U << 28U)
#define pmmsys_secure_config_mapped_false_f() (0x0U)
#define pmmsys_secure_config_mapped_true_f() (0x10000000U)
#define pmmsys_secure_config_use_prog_dg_idx_f(v) (((v) & 0x1U) << 30U)
#define pmmsys_secure_config_use_prog_dg_idx_m() (0x1U << 30U)
#define pmmsys_secure_config_use_prog_dg_idx_false_f() (0x0U)
#define pmmsys_secure_config_use_prog_dg_idx_true_f() (0x40000000U)
#define pmmsys_secure_config_command_pkt_decoder_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_secure_config_command_pkt_decoder_m() (0x1U << 31U)
#define pmmsys_secure_config_command_pkt_decoder_disable_f() (0x0U)
#define pmmsys_secure_config_command_pkt_decoder_enable_f() (0x80000000U)
#define pmmsys_router_user_dgmap_status_secure_r(i) (0x1612050U + ((i)*4U))
#define pmmsys_router_user_dgmap_status_secure__size_1_v() (0x00000008U)
#define pmmsys_router_user_dgmap_status_secure_dg_s() (1U)
#define pmmsys_router_user_dgmap_status_secure_dg_not_mapped_v() (0x00000000U)
#define pmmsys_router_user_dgmap_status_secure_dg_mapped_v() (0x00000001U)
#define pmmsys_router_enginestatus_r() (0x1612080U)
#define pmmsys_router_enginestatus_status_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_router_enginestatus_status_m() (0x7U << 0U)
#define pmmsys_router_enginestatus_status_v(r) (((r) >> 0U) & 0x7U)
#define pmmsys_router_enginestatus_status_empty_v() (0x00000000U)
#define pmmsys_router_enginestatus_status_active_v() (0x00000001U)
#define pmmsys_router_enginestatus_status_paused_v() (0x00000002U)
#define pmmsys_router_enginestatus_status_quiescent_v() (0x00000003U)
#define pmmsys_router_enginestatus_status_stalled_v() (0x00000005U)
#define pmmsys_router_enginestatus_status_faulted_v() (0x00000006U)
#define pmmsys_router_enginestatus_status_halted_v() (0x00000007U)
#define pmmsys_router_enginestatus_merged_perfmon_status_f(v)\
(((v) & 0x7U) << 8U)
#define pmmsys_router_enginestatus_merged_perfmon_status_m() (0x7U << 8U)
#define pmmsys_router_enginestatus_merged_perfmon_status_v(r)\
(((r) >> 8U) & 0x7U)
#define pmmsys_router_profiling_dg_cg1_secure_r() (0x1612094U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_m() (0x1U << 31U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg__prod_v() (0x00000001U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg__prod_f() (0x80000000U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_disabled_v() (0x00000000U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_disabled_f() (0x0U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_enabled_v() (0x00000001U)
#define pmmsys_router_profiling_dg_cg1_secure_flcg_enabled_f() (0x80000000U)
#define pmmsys_router_profiling_cg1_secure_r() (0x1612098U)
#define pmmsys_router_profiling_cg1_secure_flcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_profiling_cg1_secure_flcg_m() (0x1U << 31U)
#define pmmsys_router_profiling_cg1_secure_flcg__prod_v() (0x00000001U)
#define pmmsys_router_profiling_cg1_secure_flcg__prod_f() (0x80000000U)
#define pmmsys_router_profiling_cg1_secure_flcg_disabled_v() (0x00000000U)
#define pmmsys_router_profiling_cg1_secure_flcg_disabled_f() (0x0U)
#define pmmsys_router_profiling_cg1_secure_flcg_enabled_v() (0x00000001U)
#define pmmsys_router_profiling_cg1_secure_flcg_enabled_f() (0x80000000U)
#define pmmsys_router_perfmon_cg2_secure_r() (0x161209cU)
#define pmmsys_router_perfmon_cg2_secure_slcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_perfmon_cg2_secure_slcg_m() (0x1U << 31U)
#define pmmsys_router_perfmon_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmmsys_router_perfmon_cg2_secure_slcg__prod_f() (0x0U)
#define pmmsys_router_perfmon_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmmsys_router_perfmon_cg2_secure_slcg_disabled_f() (0x80000000U)
#define pmmsys_router_perfmon_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmmsys_router_perfmon_cg2_secure_slcg_enabled_f() (0x0U)
#define pmmsys_router_profiling_cg2_secure_r() (0x1612090U)
#define pmmsys_router_profiling_cg2_secure_slcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_router_profiling_cg2_secure_slcg_m() (0x1U << 31U)
#define pmmsys_router_profiling_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmmsys_router_profiling_cg2_secure_slcg__prod_f() (0x0U)
#define pmmsys_router_profiling_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmmsys_router_profiling_cg2_secure_slcg_disabled_f() (0x80000000U)
#define pmmsys_router_profiling_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmmsys_router_profiling_cg2_secure_slcg_enabled_f() (0x0U)
#define pmmsys_user_channel_config_secure_r(i,j)\
(0x16120b8U + ((i) * 32U)) + ((j) * 16U)
#define pmmsys_user_channel_config_secure_hs_credits_m() (0x1ffU << 0U)
#define pmmsys_user_channel_config_secure_hs_credits_init_f() (0x0U)
#endif /* T264_PMMSYS_SOC_HWPM_H */
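The per-domain PMMSYS registers sit at a 0x1000 stride, which is why pmmsys_control_r(i) adds i * 4096 to the base offset and why pmmsys_perdomain_offset_v() reports 0x1000. The sketch below composes a control value that forces the mode field to DISABLE, using only macros defined above; the helper itself is illustrative, and the raw value would normally come from a register read of pmmsys_control_r(i).

/* Sketch only: clear the 3-bit mode field and select DISABLE (0). */
static u32 pmmsys_control_set_mode_disable(u32 reg_val)
{
	reg_val &= ~pmmsys_control_mode_m();
	reg_val |= pmmsys_control_mode_disable_f();
	return reg_val;
}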

View File

File diff suppressed because it is too large.

View File

@@ -1,107 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_CPU_H
#define T264_HWPM_IP_CPU_H
#if defined(CONFIG_T264_HWPM_IP_CPU)
#define T264_HWPM_ACTIVE_IP_CPU T264_HWPM_IP_CPU,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_CPU_NUM_INSTANCES 14U
#define T264_HWPM_IP_CPU_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_CPU_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_CPU_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_CPU_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_cpu;
#define addr_map_rpg_pm_cpu_core_size() BIT(0x00000014U)
#define addr_map_rpg_pm_cpu_core0_base_r() \
(addr_map_rpg_pm_cpu_core_base_r())
#define addr_map_rpg_pm_cpu_core0_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x00FFF)
#define addr_map_rpg_pm_cpu_core1_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x10000)
#define addr_map_rpg_pm_cpu_core1_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x10FFF)
#define addr_map_rpg_pm_cpu_core2_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x20000)
#define addr_map_rpg_pm_cpu_core2_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x20FFF)
#define addr_map_rpg_pm_cpu_core3_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x30000)
#define addr_map_rpg_pm_cpu_core3_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x30FFF)
#define addr_map_rpg_pm_cpu_core4_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x40000)
#define addr_map_rpg_pm_cpu_core4_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x40FFF)
#define addr_map_rpg_pm_cpu_core5_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x50000)
#define addr_map_rpg_pm_cpu_core5_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x50FFF)
#define addr_map_rpg_pm_cpu_core6_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x60000)
#define addr_map_rpg_pm_cpu_core6_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x60FFF)
#define addr_map_rpg_pm_cpu_core7_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x70000)
#define addr_map_rpg_pm_cpu_core7_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x70FFF)
#define addr_map_rpg_pm_cpu_core8_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x80000)
#define addr_map_rpg_pm_cpu_core8_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x80FFF)
#define addr_map_rpg_pm_cpu_core9_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x90000)
#define addr_map_rpg_pm_cpu_core9_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0x90FFF)
#define addr_map_rpg_pm_cpu_core10_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xa0000)
#define addr_map_rpg_pm_cpu_core10_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xa0FFF)
#define addr_map_rpg_pm_cpu_core11_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xb0000)
#define addr_map_rpg_pm_cpu_core11_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xb0FFF)
#define addr_map_rpg_pm_cpu_core12_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xc0000)
#define addr_map_rpg_pm_cpu_core12_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xc0FFF)
#define addr_map_rpg_pm_cpu_core13_base_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xd0000)
#define addr_map_rpg_pm_cpu_core13_limit_r() \
(addr_map_rpg_pm_cpu_core_base_r() + 0xd0FFF)
#else
#define T264_HWPM_ACTIVE_IP_CPU
#endif
#endif /* T264_HWPM_IP_CPU_H */
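The per-core base/limit macros above all reduce to a fixed 0x10000 stride with a 0x1000-byte window per core. A hedged sketch of that pattern follows; the helper names are hypothetical and not part of the driver, and the return type of addr_map_rpg_pm_cpu_core_base_r() is assumed to be a 64-bit physical address.

/*
 * Illustrative helpers only (not part of the driver): core N's perfmon
 * aperture is the common base plus N * 0x10000, with a 0xFFF-byte span.
 */
static inline u64 t264_cpu_core_pm_base(unsigned int core)
{
	return addr_map_rpg_pm_cpu_core_base_r() + (u64)core * 0x10000ULL;
}

static inline u64 t264_cpu_core_pm_limit(unsigned int core)
{
	return t264_cpu_core_pm_base(core) + 0xFFFULL;
}

/* e.g. t264_cpu_core_pm_base(13) equals addr_map_rpg_pm_cpu_core13_base_r() */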


@@ -1,301 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_isp.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_isp_inst0_perfmon_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_isp0",
.device_index = T264_ISP0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_isp0_base_r(),
.end_abs_pa = addr_map_rpg_pm_isp0_limit_r(),
.start_pa = addr_map_rpg_pm_isp0_base_r(),
.end_pa = addr_map_rpg_pm_isp0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_isp_inst1_perfmon_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_isp1",
.device_index = T264_ISP1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_isp1_base_r(),
.end_abs_pa = addr_map_rpg_pm_isp1_limit_r(),
.start_pa = addr_map_rpg_pm_isp1_base_r(),
.end_pa = addr_map_rpg_pm_isp1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_isp_inst0_perfmux_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_isp_thi_base_r(),
.end_abs_pa = addr_map_isp_thi_limit_r(),
.start_pa = addr_map_isp_thi_base_r(),
.end_pa = addr_map_isp_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_isp_alist,
.alist_size = ARRAY_SIZE(t264_isp_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_isp_inst1_perfmux_element_static_array[
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_isp1_thi_base_r(),
.end_abs_pa = addr_map_isp1_thi_limit_r(),
.start_pa = addr_map_isp1_thi_base_r(),
.end_pa = addr_map_isp1_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_isp_alist,
.alist_size = ARRAY_SIZE(t264_isp_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_isp_inst_static_array[
T264_HWPM_IP_ISP_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_isp_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp_thi_limit_r(),
.element_stride = addr_map_isp_thi_limit_r() -
addr_map_isp_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST,
.element_static_array =
t264_isp_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_isp0_base_r(),
.range_end = addr_map_rpg_pm_isp0_limit_r(),
.element_stride = addr_map_rpg_pm_isp0_limit_r() -
addr_map_rpg_pm_isp0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_isp_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp1_thi_base_r(),
.range_end = addr_map_isp1_thi_limit_r(),
.element_stride = addr_map_isp1_thi_limit_r() -
addr_map_isp1_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST,
.element_static_array =
t264_isp_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_isp1_base_r(),
.range_end = addr_map_rpg_pm_isp1_limit_r(),
.element_stride = addr_map_rpg_pm_isp1_limit_r() -
addr_map_rpg_pm_isp1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_isp = {
.num_instances = T264_HWPM_IP_ISP_NUM_INSTANCES,
.ip_inst_static_array = t264_isp_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_isp_thi_base_r(),
.range_end = addr_map_isp1_thi_limit_r(),
.inst_stride = addr_map_isp_thi_limit_r() -
addr_map_isp_thi_base_r() + 1ULL,
.inst_slots = 0U,
.islots_overlimit = true,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_isp0_base_r(),
.range_end = addr_map_rpg_pm_isp1_limit_r(),
.inst_stride = addr_map_rpg_pm_isp0_limit_r() -
addr_map_rpg_pm_isp0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK |
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
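The element_info entries above record each aperture range in ascending order together with a uniform element_stride. The sketch below illustrates why that matters, using an assumed indexing scheme rather than the driver's actual lookup code.

/*
 * Illustration only (assumed indexing, not taken from the driver): with an
 * ascending range and a uniform stride, a register address inside the
 * aperture maps to an element slot by simple arithmetic.
 */
static inline int element_slot_of(u64 addr, u64 range_start, u64 range_end,
	u64 element_stride)
{
	if (element_stride == 0ULL || addr < range_start || addr > range_end)
		return -1;	/* address outside the aperture */
	return (int)((addr - range_start) / element_stride);
}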


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_ISP_H
#define T264_HWPM_IP_ISP_H
#if defined(CONFIG_T264_HWPM_IP_ISP)
#define T264_HWPM_ACTIVE_IP_ISP T264_HWPM_IP_ISP,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_ISP_NUM_INSTANCES 2U
#define T264_HWPM_IP_ISP_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_ISP_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_ISP_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_ISP_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_isp;
#else
#define T264_HWPM_ACTIVE_IP_ISP
#endif
#endif /* T264_HWPM_IP_ISP_H */


@@ -1,714 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_mss_channel.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_mss_channel_inst0_perfmon_element_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta0",
.device_index = T264_MSS_CHANNEL_PARTA0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss0_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss0_limit_r(),
.start_pa = addr_map_rpg_pm_mss0_base_r(),
.end_pa = addr_map_rpg_pm_mss0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta1",
.device_index = T264_MSS_CHANNEL_PARTA1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss1_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss1_limit_r(),
.start_pa = addr_map_rpg_pm_mss1_base_r(),
.end_pa = addr_map_rpg_pm_mss1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta2",
.device_index = T264_MSS_CHANNEL_PARTA2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss2_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss2_limit_r(),
.start_pa = addr_map_rpg_pm_mss2_base_r(),
.end_pa = addr_map_rpg_pm_mss2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_parta3",
.device_index = T264_MSS_CHANNEL_PARTA3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss3_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss3_limit_r(),
.start_pa = addr_map_rpg_pm_mss3_base_r(),
.end_pa = addr_map_rpg_pm_mss3_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb0",
.device_index = T264_MSS_CHANNEL_PARTB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss4_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss4_limit_r(),
.start_pa = addr_map_rpg_pm_mss4_base_r(),
.end_pa = addr_map_rpg_pm_mss4_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb1",
.device_index = T264_MSS_CHANNEL_PARTB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss5_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss5_limit_r(),
.start_pa = addr_map_rpg_pm_mss5_base_r(),
.end_pa = addr_map_rpg_pm_mss5_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb2",
.device_index = T264_MSS_CHANNEL_PARTB2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss6_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss6_limit_r(),
.start_pa = addr_map_rpg_pm_mss6_base_r(),
.end_pa = addr_map_rpg_pm_mss6_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partb3",
.device_index = T264_MSS_CHANNEL_PARTB3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss7_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss7_limit_r(),
.start_pa = addr_map_rpg_pm_mss7_base_r(),
.end_pa = addr_map_rpg_pm_mss7_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc0",
.device_index = T264_MSS_CHANNEL_PARTC0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss8_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss8_limit_r(),
.start_pa = addr_map_rpg_pm_mss8_base_r(),
.end_pa = addr_map_rpg_pm_mss8_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc1",
.device_index = T264_MSS_CHANNEL_PARTC1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss9_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss9_limit_r(),
.start_pa = addr_map_rpg_pm_mss9_base_r(),
.end_pa = addr_map_rpg_pm_mss9_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc2",
.device_index = T264_MSS_CHANNEL_PARTC2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss10_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss10_limit_r(),
.start_pa = addr_map_rpg_pm_mss10_base_r(),
.end_pa = addr_map_rpg_pm_mss10_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partc3",
.device_index = T264_MSS_CHANNEL_PARTC3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss11_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss11_limit_r(),
.start_pa = addr_map_rpg_pm_mss11_base_r(),
.end_pa = addr_map_rpg_pm_mss11_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd0",
.device_index = T264_MSS_CHANNEL_PARTD0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss12_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss12_limit_r(),
.start_pa = addr_map_rpg_pm_mss12_base_r(),
.end_pa = addr_map_rpg_pm_mss12_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd1",
.device_index = T264_MSS_CHANNEL_PARTD1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss13_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss13_limit_r(),
.start_pa = addr_map_rpg_pm_mss13_base_r(),
.end_pa = addr_map_rpg_pm_mss13_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd2",
.device_index = T264_MSS_CHANNEL_PARTD2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss14_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss14_limit_r(),
.start_pa = addr_map_rpg_pm_mss14_base_r(),
.end_pa = addr_map_rpg_pm_mss14_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
.name = "perfmon_msschannel_partd3",
.device_index = T264_MSS_CHANNEL_PARTD3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_mss15_base_r(),
.end_abs_pa = addr_map_rpg_pm_mss15_limit_r(),
.start_pa = addr_map_rpg_pm_mss15_base_r(),
.end_pa = addr_map_rpg_pm_mss15_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_channel_inst0_perfmux_element_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc0_base_r(),
.end_abs_pa = addr_map_mc0_limit_r(),
.start_pa = addr_map_mc0_base_r(),
.end_pa = addr_map_mc0_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc1_base_r(),
.end_abs_pa = addr_map_mc1_limit_r(),
.start_pa = addr_map_mc1_base_r(),
.end_pa = addr_map_mc1_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc2_base_r(),
.end_abs_pa = addr_map_mc2_limit_r(),
.start_pa = addr_map_mc2_base_r(),
.end_pa = addr_map_mc2_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc3_base_r(),
.end_abs_pa = addr_map_mc3_limit_r(),
.start_pa = addr_map_mc3_base_r(),
.end_pa = addr_map_mc3_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc4_base_r(),
.end_abs_pa = addr_map_mc4_limit_r(),
.start_pa = addr_map_mc4_base_r(),
.end_pa = addr_map_mc4_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc5_base_r(),
.end_abs_pa = addr_map_mc5_limit_r(),
.start_pa = addr_map_mc5_base_r(),
.end_pa = addr_map_mc5_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc6_base_r(),
.end_abs_pa = addr_map_mc6_limit_r(),
.start_pa = addr_map_mc6_base_r(),
.end_pa = addr_map_mc6_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc7_base_r(),
.end_abs_pa = addr_map_mc7_limit_r(),
.start_pa = addr_map_mc7_base_r(),
.end_pa = addr_map_mc7_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc8_base_r(),
.end_abs_pa = addr_map_mc8_limit_r(),
.start_pa = addr_map_mc8_base_r(),
.end_pa = addr_map_mc8_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 9U,
.element_index_mask = BIT(9),
.element_index = 10U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc9_base_r(),
.end_abs_pa = addr_map_mc9_limit_r(),
.start_pa = addr_map_mc9_base_r(),
.end_pa = addr_map_mc9_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 10U,
.element_index_mask = BIT(10),
.element_index = 11U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc10_base_r(),
.end_abs_pa = addr_map_mc10_limit_r(),
.start_pa = addr_map_mc10_base_r(),
.end_pa = addr_map_mc10_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 11U,
.element_index_mask = BIT(11),
.element_index = 12U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc11_base_r(),
.end_abs_pa = addr_map_mc11_limit_r(),
.start_pa = addr_map_mc11_base_r(),
.end_pa = addr_map_mc11_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 12U,
.element_index_mask = BIT(12),
.element_index = 13U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc12_base_r(),
.end_abs_pa = addr_map_mc12_limit_r(),
.start_pa = addr_map_mc12_base_r(),
.end_pa = addr_map_mc12_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 13U,
.element_index_mask = BIT(13),
.element_index = 14U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc13_base_r(),
.end_abs_pa = addr_map_mc13_limit_r(),
.start_pa = addr_map_mc13_base_r(),
.end_pa = addr_map_mc13_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 14U,
.element_index_mask = BIT(14),
.element_index = 15U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc14_base_r(),
.end_abs_pa = addr_map_mc14_limit_r(),
.start_pa = addr_map_mc14_base_r(),
.end_pa = addr_map_mc14_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 15U,
.element_index_mask = BIT(15),
.element_index = 16U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc15_base_r(),
.end_abs_pa = addr_map_mc15_limit_r(),
.start_pa = addr_map_mc15_base_r(),
.end_pa = addr_map_mc15_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_channel_inst0_broadcast_element_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mcb_base_r(),
.end_abs_pa = addr_map_mcb_limit_r(),
.start_pa = addr_map_mcb_base_r(),
.end_pa = addr_map_mcb_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_channel_alist,
.alist_size = ARRAY_SIZE(t264_mss_channel_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_mss_channel_inst_static_array[
T264_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_mss_channel_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc15_limit_r(),
.element_stride = addr_map_mc0_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST,
.element_static_array =
t264_mss_channel_inst0_broadcast_element_static_array,
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.element_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST,
.element_static_array =
t264_mss_channel_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_mss0_base_r(),
.range_end = addr_map_rpg_pm_mss15_limit_r(),
.element_stride = addr_map_rpg_pm_mss0_limit_r() -
addr_map_rpg_pm_mss0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_mss_channel = {
.num_instances = T264_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES,
.ip_inst_static_array = t264_mss_channel_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc15_limit_r(),
.inst_stride = addr_map_mc15_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.inst_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_mss0_base_r(),
.range_end = addr_map_rpg_pm_mss15_limit_r(),
.inst_stride = addr_map_rpg_pm_mss15_limit_r() -
addr_map_rpg_pm_mss0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
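The dependent_fuse_mask above combines the OPT_HWPM_DISABLE and HWPM_GLOBAL_DISABLE fuse masks. A minimal sketch of the assumed gating follows; the helper is hypothetical and not taken from the driver.

/*
 * Sketch of the assumed fuse gating (hypothetical helper): the IP is only
 * usable when none of the fuses named in its dependent_fuse_mask report
 * HWPM as disabled.
 */
static bool hwpm_ip_fuses_allow(const struct hwpm_ip *ip, u32 blown_fuses)
{
	return (blown_fuses & ip->dependent_fuse_mask) == 0U;
}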


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_MSS_CHANNEL_H
#define T264_HWPM_IP_MSS_CHANNEL_H
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
#define T264_HWPM_ACTIVE_IP_MSS_CHANNEL T264_HWPM_IP_MSS_CHANNEL,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_MSS_CHANNEL_NUM_INSTANCES 1U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_CORE_ELEMENT_PER_INST 16U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMON_PER_INST 16U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_PERFMUX_PER_INST 16U
#define T264_HWPM_IP_MSS_CHANNEL_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t264_hwpm_ip_mss_channel;
#else
#define T264_HWPM_ACTIVE_IP_MSS_CHANNEL
#endif
#endif /* T264_HWPM_IP_MSS_CHANNEL_H */


@@ -1,483 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_mss_hubs.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_mss_hubs_inst0_perfmon_element_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_system_msshub0",
.device_index = T264_SYSTEM_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_system_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_system_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_system_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_system_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_system_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_disp_usb_msshub0",
.device_index = T264_DISP_USB_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_disp_usb_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_disp_usb_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_disp_usb_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_disp_usb_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_disp_usb_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = "perfmon_vision_msshub0",
.device_index = T264_VISION_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vision_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_vision_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_vision_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_vision_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = "perfmon_vision_msshub1",
.device_index = T264_VISION_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vision_msshub1_base_r(),
.end_abs_pa = addr_map_rpg_pm_vision_msshub1_limit_r(),
.start_pa = addr_map_rpg_pm_vision_msshub1_base_r(),
.end_pa = addr_map_rpg_pm_vision_msshub1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = "perfmon_ucf_msshub0",
.device_index = T264_UCF_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = "perfmon_ucf_msshub1",
.device_index = T264_UCF_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_msshub1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_msshub1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_msshub1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_msshub1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = "perfmon_ucf_msshub2",
.device_index = T264_UCF_MSS_HUB2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_msshub2_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_msshub2_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_msshub2_base_r(),
.end_pa = addr_map_rpg_pm_ucf_msshub2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = "perfmon_uphy0_msshub0",
.device_index = T264_UPHY0_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_uphy0_msshub0_base_r(),
.end_abs_pa = addr_map_rpg_pm_uphy0_msshub0_limit_r(),
.start_pa = addr_map_rpg_pm_uphy0_msshub0_base_r(),
.end_pa = addr_map_rpg_pm_uphy0_msshub0_limit_r(),
.base_pa = addr_map_rpg_grp_uphy0_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = "perfmon_uphy0_msshub1",
.device_index = T264_UPHY0_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_uphy0_msshub1_base_r(),
.end_abs_pa = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.start_pa = addr_map_rpg_pm_uphy0_msshub1_base_r(),
.end_pa = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.base_pa = addr_map_rpg_grp_uphy0_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_hubs_inst0_perfmux_element_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc0_base_r(),
.end_abs_pa = addr_map_mc0_limit_r(),
.start_pa = addr_map_mc0_base_r(),
.end_pa = addr_map_mc0_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(1),
.element_index = 2U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc1_base_r(),
.end_abs_pa = addr_map_mc1_limit_r(),
.start_pa = addr_map_mc1_base_r(),
.end_pa = addr_map_mc1_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 2U,
.element_index_mask = BIT(2),
.element_index = 3U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc2_base_r(),
.end_abs_pa = addr_map_mc2_limit_r(),
.start_pa = addr_map_mc2_base_r(),
.end_pa = addr_map_mc2_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 3U,
.element_index_mask = BIT(3),
.element_index = 4U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc3_base_r(),
.end_abs_pa = addr_map_mc3_limit_r(),
.start_pa = addr_map_mc3_base_r(),
.end_pa = addr_map_mc3_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 4U,
.element_index_mask = BIT(4),
.element_index = 5U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc4_base_r(),
.end_abs_pa = addr_map_mc4_limit_r(),
.start_pa = addr_map_mc4_base_r(),
.end_pa = addr_map_mc4_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 5U,
.element_index_mask = BIT(5),
.element_index = 6U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc5_base_r(),
.end_abs_pa = addr_map_mc5_limit_r(),
.start_pa = addr_map_mc5_base_r(),
.end_pa = addr_map_mc5_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 6U,
.element_index_mask = BIT(6),
.element_index = 7U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc6_base_r(),
.end_abs_pa = addr_map_mc6_limit_r(),
.start_pa = addr_map_mc6_base_r(),
.end_pa = addr_map_mc6_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 7U,
.element_index_mask = BIT(7),
.element_index = 8U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc7_base_r(),
.end_abs_pa = addr_map_mc7_limit_r(),
.start_pa = addr_map_mc7_base_r(),
.end_pa = addr_map_mc7_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 8U,
.element_index_mask = BIT(8),
.element_index = 9U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mc8_base_r(),
.end_abs_pa = addr_map_mc8_limit_r(),
.start_pa = addr_map_mc8_base_r(),
.end_pa = addr_map_mc8_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_mss_hubs_inst0_broadcast_element_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_BROADCAST_PER_INST] = {
{
.element_type = IP_ELEMENT_BROADCAST,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_mcb_base_r(),
.end_abs_pa = addr_map_mcb_limit_r(),
.start_pa = addr_map_mcb_base_r(),
.end_pa = addr_map_mcb_limit_r(),
.base_pa = 0ULL,
.alist = t264_mss_hub_alist,
.alist_size = ARRAY_SIZE(t264_mss_hub_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_mss_hubs_inst_static_array[
T264_HWPM_IP_MSS_HUBS_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_mss_hubs_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc8_limit_r(),
.element_stride = addr_map_mc0_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_BROADCAST_PER_INST,
.element_static_array =
t264_mss_hubs_inst0_broadcast_element_static_array,
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.element_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_MSS_HUBS_NUM_PERFMON_PER_INST,
.element_static_array =
t264_mss_hubs_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_system_msshub0_base_r(),
.range_end = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.element_stride = addr_map_rpg_pm_system_msshub0_limit_r() -
addr_map_rpg_pm_system_msshub0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_mss_hubs = {
.num_instances = T264_HWPM_IP_MSS_HUBS_NUM_INSTANCES,
.ip_inst_static_array = t264_mss_hubs_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_mc0_base_r(),
.range_end = addr_map_mc8_limit_r(),
.inst_stride = addr_map_mc8_limit_r() -
addr_map_mc0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = addr_map_mcb_base_r(),
.range_end = addr_map_mcb_limit_r(),
.inst_stride = addr_map_mcb_limit_r() -
addr_map_mcb_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_system_msshub0_base_r(),
.range_end = addr_map_rpg_pm_uphy0_msshub1_limit_r(),
.inst_stride = addr_map_rpg_pm_uphy0_msshub1_limit_r() -
addr_map_rpg_pm_system_msshub0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_MSS_HUBS_H
#define T264_HWPM_IP_MSS_HUBS_H
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
#define T264_HWPM_ACTIVE_IP_MSS_HUBS T264_HWPM_IP_MSS_HUBS,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_MSS_HUBS_NUM_INSTANCES 1U
#define T264_HWPM_IP_MSS_HUBS_NUM_CORE_ELEMENT_PER_INST 9U
#define T264_HWPM_IP_MSS_HUBS_NUM_PERFMON_PER_INST 9U
#define T264_HWPM_IP_MSS_HUBS_NUM_PERFMUX_PER_INST 9U
#define T264_HWPM_IP_MSS_HUBS_NUM_BROADCAST_PER_INST 1U
extern struct hwpm_ip t264_hwpm_ip_mss_hubs;
#else
#define T264_HWPM_ACTIVE_IP_MSS_HUBS
#endif
#endif /* T264_HWPM_IP_MSS_HUBS_H */


@@ -1,195 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_ocu.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_ocu_inst0_perfmon_element_static_array[
T264_HWPM_IP_OCU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ocu0",
.device_index = T264_OCU0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ocu_base_r(),
.end_abs_pa = addr_map_rpg_pm_ocu_limit_r(),
.start_pa = addr_map_rpg_pm_ocu_base_r(),
.end_pa = addr_map_rpg_pm_ocu_limit_r(),
.base_pa = addr_map_rpg_grp_uphy0_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ocu_inst0_perfmux_element_static_array[
T264_HWPM_IP_OCU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ocu_base_r(),
.end_abs_pa = addr_map_ocu_limit_r(),
.start_pa = addr_map_ocu_base_r(),
.end_pa = addr_map_ocu_limit_r(),
.base_pa = 0ULL,
.alist = t264_ocu_alist,
.alist_size = ARRAY_SIZE(t264_ocu_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_ocu_inst_static_array[
T264_HWPM_IP_OCU_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_OCU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_OCU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ocu_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ocu_base_r(),
.range_end = addr_map_ocu_limit_r(),
.element_stride = addr_map_ocu_limit_r() -
addr_map_ocu_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_OCU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_OCU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ocu_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ocu_base_r(),
.range_end = addr_map_rpg_pm_ocu_limit_r(),
.element_stride = addr_map_rpg_pm_ocu_limit_r() -
addr_map_rpg_pm_ocu_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_ocu = {
.num_instances = T264_HWPM_IP_OCU_NUM_INSTANCES,
.ip_inst_static_array = t264_ocu_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_ocu_base_r(),
.range_end = addr_map_ocu_limit_r(),
.inst_stride = addr_map_ocu_limit_r() -
addr_map_ocu_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ocu_base_r(),
.range_end = addr_map_rpg_pm_ocu_limit_r(),
.inst_stride = addr_map_rpg_pm_ocu_limit_r() -
addr_map_rpg_pm_ocu_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK | TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};

@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_OCU_H
#define T264_HWPM_IP_OCU_H
#if defined(CONFIG_T264_HWPM_IP_OCU)
#define T264_HWPM_ACTIVE_IP_OCU T264_HWPM_IP_OCU,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_OCU_NUM_INSTANCES 1U
#define T264_HWPM_IP_OCU_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_OCU_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_OCU_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_OCU_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ocu;
#else
#define T264_HWPM_ACTIVE_IP_OCU
#endif
#endif /* T264_HWPM_IP_OCU_H */

@@ -1,190 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "t264_pma.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_pma_inst0_perfmon_element_static_array[
T264_HWPM_IP_PMA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_hwpm",
.device_index = T264_HWPM_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_hwpm_base_r(),
.end_abs_pa = addr_map_rpg_pm_hwpm_limit_r(),
.start_pa = addr_map_rpg_pm_hwpm_base_r(),
.end_pa = addr_map_rpg_pm_hwpm_limit_r(),
.base_pa = addr_map_rpg_grp_system_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_pma_inst0_perfmux_element_static_array[
T264_HWPM_IP_PMA_NUM_PERFMUX_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMUX,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "pma",
.device_index = T264_PMA_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_pma_base_r(),
.end_abs_pa = addr_map_pma_limit_r(),
.start_pa = addr_map_pma_base_r(),
.end_pa = addr_map_pma_limit_r(),
.base_pa = addr_map_pma_base_r(),
.alist = t264_pma_res_pma_alist,
.alist_size = ARRAY_SIZE(t264_pma_res_pma_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_pma_inst_static_array[
T264_HWPM_IP_PMA_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_PMA_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_PMA_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_pma_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r(),
.element_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_PMA_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_PMA_NUM_PERFMON_PER_INST,
.element_static_array =
t264_pma_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_hwpm_base_r(),
.range_end = addr_map_rpg_pm_hwpm_limit_r(),
.element_stride = addr_map_rpg_pm_hwpm_limit_r() -
addr_map_rpg_pm_hwpm_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
},
.element_fs_mask = 0x1U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_pma = {
.num_instances = T264_HWPM_IP_PMA_NUM_INSTANCES,
.ip_inst_static_array = t264_pma_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r(),
.inst_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_hwpm_base_r(),
.range_end = addr_map_rpg_pm_hwpm_limit_r(),
.inst_stride = addr_map_rpg_pm_hwpm_limit_r() -
addr_map_rpg_pm_hwpm_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0x1U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_VALID,
.reserved = false,
};

@@ -1,38 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_IP_PMA_H
#define T264_HWPM_IP_PMA_H
#define T264_HWPM_ACTIVE_IP_PMA T264_HWPM_IP_PMA,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_PMA_NUM_INSTANCES 1U
#define T264_HWPM_IP_PMA_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_PMA_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_PMA_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_PMA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_pma;
#endif /* T264_HWPM_IP_PMA_H */

@@ -1,280 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_pva.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_pva_inst0_perfmon_element_static_array[
T264_HWPM_IP_PVA_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_pvac0",
.device_index = T264_PVAC0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvac0_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvac0_limit_r(),
.start_pa = addr_map_rpg_pm_pvac0_base_r(),
.end_pa = addr_map_rpg_pm_pvac0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = "perfmon_pvav0",
.device_index = T264_PVAV0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvav0_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvav0_limit_r(),
.start_pa = addr_map_rpg_pm_pvav0_base_r(),
.end_pa = addr_map_rpg_pm_pvav0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 2U,
.element_index_mask = BIT(0),
.element_index = 2U,
.dt_mmio = NULL,
.name = "perfmon_pvav1",
.device_index = T264_PVAV1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvav1_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvav1_limit_r(),
.start_pa = addr_map_rpg_pm_pvav1_base_r(),
.end_pa = addr_map_rpg_pm_pvav1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 3U,
.element_index_mask = BIT(0),
.element_index = 3U,
.dt_mmio = NULL,
.name = "perfmon_pvap0",
.device_index = T264_PVAP0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvap0_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvap0_limit_r(),
.start_pa = addr_map_rpg_pm_pvap0_base_r(),
.end_pa = addr_map_rpg_pm_pvap0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 4U,
.element_index_mask = BIT(0),
.element_index = 4U,
.dt_mmio = NULL,
.name = "perfmon_pvap1",
.device_index = T264_PVAP1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_pvap1_base_r(),
.end_abs_pa = addr_map_rpg_pm_pvap1_limit_r(),
.start_pa = addr_map_rpg_pm_pvap1_base_r(),
.end_pa = addr_map_rpg_pm_pvap1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_pva_inst0_perfmux_element_static_array[
T264_HWPM_IP_PVA_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_pva0_pm_base_r(),
.end_abs_pa = addr_map_pva0_pm_limit_r(),
.start_pa = addr_map_pva0_pm_base_r(),
.end_pa = addr_map_pva0_pm_limit_r(),
.base_pa = 0ULL,
.alist = t264_pva_pm_alist,
.alist_size = ARRAY_SIZE(t264_pva_pm_alist),
.fake_registers = NULL,
},
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 1U,
.element_index_mask = BIT(0),
.element_index = 1U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_pva1_pm_base_r(),
.end_abs_pa = addr_map_pva1_pm_limit_r(),
.start_pa = addr_map_pva1_pm_base_r(),
.end_pa = addr_map_pva1_pm_limit_r(),
.base_pa = 0ULL,
.alist = t264_pva_pm_alist,
.alist_size = ARRAY_SIZE(t264_pva_pm_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_pva_inst_static_array[
T264_HWPM_IP_PVA_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_PVA_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_pva_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva1_pm_limit_r(),
.element_stride = addr_map_pva0_pm_limit_r() -
addr_map_pva0_pm_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_PVA_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_PVA_NUM_PERFMON_PER_INST,
.element_static_array =
t264_pva_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_pvac0_base_r(),
.range_end = addr_map_rpg_pm_pvap1_limit_r(),
.element_stride = addr_map_rpg_pm_pvac0_limit_r() -
addr_map_rpg_pm_pvac0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
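/*
* Note: unlike the other IPs in this set, this instance appears to be
* reached through a debug device node, so fd is marked valid and
* dev_name points at the PVA debugfs HWPM path below.
*/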
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_VALID,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvpvadebugfs/pva0/hwpm",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_pva = {
.num_instances = T264_HWPM_IP_PVA_NUM_INSTANCES,
.ip_inst_static_array = t264_pva_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_pva0_pm_base_r(),
.range_end = addr_map_pva1_pm_limit_r(),
.inst_stride = addr_map_pva1_pm_limit_r() -
addr_map_pva0_pm_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_pvac0_base_r(),
.range_end = addr_map_rpg_pm_pvap1_limit_r(),
.inst_stride = addr_map_rpg_pm_pvap1_limit_r() -
addr_map_rpg_pm_pvac0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};

@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_PVA_H
#define T264_HWPM_IP_PVA_H
#if defined(CONFIG_T264_HWPM_IP_PVA)
#define T264_HWPM_ACTIVE_IP_PVA T264_HWPM_IP_PVA,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_PVA_NUM_INSTANCES 1U
#define T264_HWPM_IP_PVA_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_PVA_NUM_PERFMON_PER_INST 5U
#define T264_HWPM_IP_PVA_NUM_PERFMUX_PER_INST 2U
#define T264_HWPM_IP_PVA_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_pva;
#else
#define T264_HWPM_ACTIVE_IP_PVA
#endif
#endif /* T264_HWPM_IP_PVA_H */

@@ -1,264 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "t264_rtr.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
#include <hal/t264/t264_perfmon_device_index.h>
/* RTR aperture should be placed in instance T264_HWPM_IP_RTR_STATIC_RTR_INST */
static struct hwpm_ip_aperture t264_rtr_inst0_perfmux_element_static_array[
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMUX,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "rtr",
.device_index = T264_RTR_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rtr_base_r(),
.end_abs_pa = addr_map_rtr_limit_r(),
.start_pa = addr_map_rtr_base_r(),
.end_pa = addr_map_rtr_limit_r(),
.base_pa = addr_map_rtr_base_r(),
.alist = t264_rtr_alist,
.alist_size = ARRAY_SIZE(t264_rtr_alist),
.fake_registers = NULL,
},
};
/* PMA from RTR perspective */
/* PMA aperture should be placed in instance T264_HWPM_IP_RTR_STATIC_PMA_INST */
static struct hwpm_ip_aperture t264_rtr_inst1_perfmux_element_static_array[
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMUX,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "pma",
.device_index = T264_PMA_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_pma_base_r(),
.end_abs_pa = addr_map_pma_limit_r(),
.start_pa = addr_map_pma_base_r(),
.end_pa = addr_map_pma_limit_r(),
.base_pa = addr_map_pma_base_r(),
.alist = t264_pma_res_cmd_slice_rtr_alist,
.alist_size = ARRAY_SIZE(t264_pma_res_cmd_slice_rtr_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_rtr_inst_static_array[
T264_HWPM_IP_RTR_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_RTR_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_rtr_inst0_perfmux_element_static_array,
.range_start = addr_map_rtr_base_r(),
.range_end = addr_map_rtr_limit_r(),
.element_stride = addr_map_rtr_limit_r() -
addr_map_rtr_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMON_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
},
.element_fs_mask = 0x1U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_RTR_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_rtr_inst1_perfmux_element_static_array,
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r(),
.element_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_RTR_NUM_PERFMON_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = -1,
},
.element_fs_mask = 0x1U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_rtr = {
.num_instances = T264_HWPM_IP_RTR_NUM_INSTANCES,
.ip_inst_static_array = t264_rtr_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/*
* PMA block is 0x2000 wide and RTR block is 0x1000 wide
* Expected facts:
* - PMA should be referred to as a single entity
* - RTR IP instance array should have 2 slots (PMA, RTR)
*
* To ensure that inst_slots is computed correctly
* as 2 slots, the instance range for the perfmux aperture
* needs to be twice the PMA block size.
*/
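/*
* Worked example (a sketch, assuming inst_slots is derived by
* dividing the aperture range size by inst_stride, rounded up):
* range size = 2 * 0x2000 = 0x4000
* inst_stride = 0x2000 (the PMA block size)
* inst_slots = 0x4000 / 0x2000 = 2 (one slot each for PMA and RTR)
*/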
.range_start = addr_map_pma_base_r(),
.range_end = addr_map_pma_limit_r() +
(addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL),
/* Use the PMA stride as PMA is the larger block of the two */
.inst_stride = addr_map_pma_limit_r() -
addr_map_pma_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = 0U,
.override_enable = false,
/* RTR is defined as a 2-instance IP corresponding to the router and PMA */
/* Set this mask to indicate that both instances are available */
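/* 0x3 = BIT(0) | BIT(1), matching the hw_inst_mask values of the two instances above */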
.inst_fs_mask = 0x3U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_VALID,
.reserved = false,
};

@@ -1,43 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_IP_RTR_H
#define T264_HWPM_IP_RTR_H
#define T264_HWPM_ACTIVE_IP_RTR T264_HWPM_IP_RTR,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_RTR_NUM_INSTANCES 2U
#define T264_HWPM_IP_RTR_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_RTR_NUM_PERFMON_PER_INST 0U
#define T264_HWPM_IP_RTR_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_RTR_NUM_BROADCAST_PER_INST 0U
#define T264_HWPM_IP_RTR_STATIC_RTR_INST 0U
#define T264_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX 0U
#define T264_HWPM_IP_RTR_STATIC_PMA_INST 1U
#define T264_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX 0U
extern struct hwpm_ip t264_hwpm_ip_rtr;
#endif /* T264_HWPM_IP_RTR_H */

@@ -1,615 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_smmu.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_smmu_inst0_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu0",
.device_index = T264_UCF_TCU0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst1_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu1",
.device_index = T264_UCF_TCU1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst2_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu3",
.device_index = T264_UCF_TCU3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu3_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu3_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu3_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu3_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst3_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucftcu2",
.device_index = T264_UCF_TCU2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_smmu2_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_smmu2_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_smmu2_base_r(),
.end_pa = addr_map_rpg_pm_ucf_smmu2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst4_perfmon_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_dispusbtcu0",
.device_index = T264_DISP_USB_TCU0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_disp_usb_smmu0_base_r(),
.end_abs_pa = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.start_pa = addr_map_rpg_pm_disp_usb_smmu0_base_r(),
.end_pa = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst0_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu2_base_r(),
.end_abs_pa = addr_map_smmu2_limit_r(),
.start_pa = addr_map_smmu2_base_r(),
.end_pa = addr_map_smmu2_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst1_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu1_base_r(),
.end_abs_pa = addr_map_smmu1_limit_r(),
.start_pa = addr_map_smmu1_base_r(),
.end_pa = addr_map_smmu1_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst2_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu4_base_r(),
.end_abs_pa = addr_map_smmu4_limit_r(),
.start_pa = addr_map_smmu4_base_r(),
.end_pa = addr_map_smmu4_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst3_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu0_base_r(),
.end_abs_pa = addr_map_smmu0_limit_r(),
.start_pa = addr_map_smmu0_base_r(),
.end_pa = addr_map_smmu0_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_smmu_inst4_perfmux_element_static_array[
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_smmu3_base_r(),
.end_abs_pa = addr_map_smmu3_limit_r(),
.start_pa = addr_map_smmu3_base_r(),
.end_pa = addr_map_smmu3_limit_r(),
.base_pa = 0ULL,
.alist = t264_smmu_alist,
.alist_size = ARRAY_SIZE(t264_smmu_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_smmu_inst_static_array[
T264_HWPM_IP_SMMU_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu2_base_r(),
.range_end = addr_map_smmu2_limit_r(),
.element_stride = addr_map_smmu2_limit_r() -
addr_map_smmu2_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu0_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu0_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu0_limit_r() -
addr_map_rpg_pm_ucf_smmu0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu1_base_r(),
.range_end = addr_map_smmu1_limit_r(),
.element_stride = addr_map_smmu1_limit_r() -
addr_map_smmu1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu1_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu1_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu1_limit_r() -
addr_map_rpg_pm_ucf_smmu1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(2),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst2_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu4_base_r(),
.range_end = addr_map_smmu4_limit_r(),
.element_stride = addr_map_smmu4_limit_r() -
addr_map_smmu4_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst2_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu3_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu3_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu3_limit_r() -
addr_map_rpg_pm_ucf_smmu3_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(3),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst3_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu0_base_r(),
.range_end = addr_map_smmu0_limit_r(),
.element_stride = addr_map_smmu0_limit_r() -
addr_map_smmu0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst3_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_smmu2_base_r(),
.range_end = addr_map_rpg_pm_ucf_smmu2_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_smmu2_limit_r() -
addr_map_rpg_pm_ucf_smmu2_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(4),
.num_core_elements_per_inst =
T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_smmu_inst4_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu3_base_r(),
.range_end = addr_map_smmu3_limit_r(),
.element_stride = addr_map_smmu3_limit_r() -
addr_map_smmu3_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST,
.element_static_array =
t264_smmu_inst4_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_disp_usb_smmu0_base_r(),
.range_end = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.element_stride = addr_map_rpg_pm_disp_usb_smmu0_limit_r() -
addr_map_rpg_pm_disp_usb_smmu0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_smmu = {
.num_instances = T264_HWPM_IP_SMMU_NUM_INSTANCES,
.ip_inst_static_array = t264_smmu_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_smmu1_base_r(),
.range_end = addr_map_smmu3_limit_r(),
.inst_stride = addr_map_smmu1_limit_r() -
addr_map_smmu1_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ucf_smmu0_base_r(),
.range_end = addr_map_rpg_pm_disp_usb_smmu0_limit_r(),
.inst_stride = addr_map_rpg_pm_ucf_smmu0_limit_r() -
addr_map_rpg_pm_ucf_smmu0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK | TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};

@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_SMMU_H
#define T264_HWPM_IP_SMMU_H
#if defined(CONFIG_T264_HWPM_IP_SMMU)
#define T264_HWPM_ACTIVE_IP_SMMU T264_HWPM_IP_SMMU,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_SMMU_NUM_INSTANCES 5U
#define T264_HWPM_IP_SMMU_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_SMMU_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_SMMU_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_SMMU_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_smmu;
#else
#define T264_HWPM_ACTIVE_IP_SMMU
#endif
#endif /* T264_HWPM_IP_SMMU_H */

@@ -1,300 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_ucf_csw.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_ucf_csw_inst0_perfmon_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfcsw0",
.device_index = T264_UCF_CSW0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_vddmss0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_vddmss0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_csw_inst1_perfmon_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfcsw1",
.device_index = T264_UCF_CSW1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_vddmss1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_vddmss1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_csw_inst0_perfmux_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_csw0_base_r(),
.end_abs_pa = addr_map_ucf_csw0_limit_r(),
.start_pa = addr_map_ucf_csw0_base_r(),
.end_pa = addr_map_ucf_csw0_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_csw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_csw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_csw_inst1_perfmux_element_static_array[
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_csw1_base_r(),
.end_abs_pa = addr_map_ucf_csw1_limit_r(),
.start_pa = addr_map_ucf_csw1_base_r(),
.end_pa = addr_map_ucf_csw1_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_csw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_csw_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_ucf_csw_inst_static_array[
T264_HWPM_IP_UCF_CSW_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_csw_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_csw0_base_r(),
.range_end = addr_map_ucf_csw0_limit_r(),
.element_stride = addr_map_ucf_csw0_limit_r() -
addr_map_ucf_csw0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_csw_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.range_end = addr_map_rpg_pm_ucf_vddmss0_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_vddmss0_limit_r() -
addr_map_rpg_pm_ucf_vddmss0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_csw_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_csw1_base_r(),
.range_end = addr_map_ucf_csw1_limit_r(),
.element_stride = addr_map_ucf_csw1_limit_r() -
addr_map_ucf_csw1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_csw_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_vddmss1_base_r(),
.range_end = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_vddmss1_limit_r() -
addr_map_rpg_pm_ucf_vddmss1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_ucf_csw = {
.num_instances = T264_HWPM_IP_UCF_CSW_NUM_INSTANCES,
.ip_inst_static_array = t264_ucf_csw_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_csw0_base_r(),
.range_end = addr_map_ucf_csw1_limit_r(),
.inst_stride = addr_map_ucf_csw0_limit_r() -
addr_map_ucf_csw0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ucf_vddmss0_base_r(),
.range_end = addr_map_rpg_pm_ucf_vddmss1_limit_r(),
.inst_stride = addr_map_rpg_pm_ucf_vddmss0_limit_r() -
addr_map_rpg_pm_ucf_vddmss0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK | TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};
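Several of the tables above note that ranges must be in ascending order, and each aperture records a uniform stride (limit - base + 1). A minimal sketch of the arithmetic such a layout enables, assuming the driver resolves slots this way (the helper below is hypothetical and not part of the driver):

/*
 * Hypothetical helper, for illustration only: with entries laid out in
 * ascending order at a uniform stride starting at range_start, an
 * address inside the aperture maps to a 0-based slot index by simple
 * division. This is the property the "ascending order" notes protect.
 */
static inline u32 hwpm_example_slot_index(u64 addr, u64 range_start,
u64 stride)
{
return (u32)((addr - range_start) / stride);
}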


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_UCF_CSW_H
#define T264_HWPM_IP_UCF_CSW_H
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
#define T264_HWPM_ACTIVE_IP_UCF_CSW T264_HWPM_IP_UCF_CSW,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_UCF_CSW_NUM_INSTANCES 2U
#define T264_HWPM_IP_UCF_CSW_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_UCF_CSW_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_UCF_CSW_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_UCF_CSW_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ucf_csw;
#else
#define T264_HWPM_ACTIVE_IP_UCF_CSW
#endif
#endif /* T264_HWPM_IP_UCF_CSW_H */
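The T264_HWPM_ACTIVE_IP_* macro above expands to the IP identifier followed by a trailing comma when the CONFIG option is set, and to nothing otherwise. A sketch of how such macros are typically consumed, assuming the identifiers are integer constants (the table below is illustrative, not the driver's actual list):

/*
 * Illustrative only: because the trailing comma is baked into each
 * ACTIVE_IP macro, enabled IPs are collected at compile time and
 * disabled ones simply drop out of the initializer.
 */
static const u32 example_active_ips[] = {
T264_HWPM_ACTIVE_IP_UCF_CSW
T264_HWPM_ACTIVE_IP_VIC
};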


File diff suppressed because it is too large.


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_UCF_MSW_H
#define T264_HWPM_IP_UCF_MSW_H
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
#define T264_HWPM_ACTIVE_IP_UCF_MSW T264_HWPM_IP_UCF_MSW,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_UCF_MSW_NUM_INSTANCES 8U
#define T264_HWPM_IP_UCF_MSW_NUM_CORE_ELEMENT_PER_INST 2U
#define T264_HWPM_IP_UCF_MSW_NUM_PERFMON_PER_INST 2U
#define T264_HWPM_IP_UCF_MSW_NUM_PERFMUX_PER_INST 6U
#define T264_HWPM_IP_UCF_MSW_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ucf_msw;
#else
#define T264_HWPM_ACTIVE_IP_UCF_MSW
#endif
#endif /* T264_HWPM_IP_UCF_MSW_H */


@@ -1,510 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_ucf_psw.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_ucf_psw_inst0_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw0",
.device_index = T264_UCF_PSW0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw0_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw0_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw0_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw0_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst1_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw1",
.device_index = T264_UCF_PSW1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw1_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw1_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw1_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw1_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst2_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw2",
.device_index = T264_UCF_PSW2_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw2_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw2_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw2_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw2_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst3_perfmon_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_ucfpsw3",
.device_index = T264_UCF_PSW3_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_ucf_psw3_base_r(),
.end_abs_pa = addr_map_rpg_pm_ucf_psw3_limit_r(),
.start_pa = addr_map_rpg_pm_ucf_psw3_base_r(),
.end_pa = addr_map_rpg_pm_ucf_psw3_limit_r(),
.base_pa = addr_map_rpg_grp_ucf_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst0_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn0_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn0_psw_limit_r(),
.start_pa = addr_map_ucf_psn0_psw_base_r(),
.end_pa = addr_map_ucf_psn0_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst1_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn1_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn1_psw_limit_r(),
.start_pa = addr_map_ucf_psn1_psw_base_r(),
.end_pa = addr_map_ucf_psn1_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst2_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn2_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn2_psw_limit_r(),
.start_pa = addr_map_ucf_psn2_psw_base_r(),
.end_pa = addr_map_ucf_psn2_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_ucf_psw_inst3_perfmux_element_static_array[
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_ucf_psn3_psw_base_r(),
.end_abs_pa = addr_map_ucf_psn3_psw_limit_r(),
.start_pa = addr_map_ucf_psn3_psw_base_r(),
.end_pa = addr_map_ucf_psn3_psw_limit_r(),
.base_pa = 0ULL,
.alist = t264_ucf_psn_psw_alist,
.alist_size = ARRAY_SIZE(t264_ucf_psn_psw_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_ucf_psw_inst_static_array[
T264_HWPM_IP_UCF_PSW_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn0_psw_base_r(),
.range_end = addr_map_ucf_psn0_psw_limit_r(),
.element_stride = addr_map_ucf_psn0_psw_limit_r() -
addr_map_ucf_psn0_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw0_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw0_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw0_limit_r() -
addr_map_rpg_pm_ucf_psw0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn1_psw_base_r(),
.range_end = addr_map_ucf_psn1_psw_limit_r(),
.element_stride = addr_map_ucf_psn1_psw_limit_r() -
addr_map_ucf_psn1_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw1_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw1_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw1_limit_r() -
addr_map_rpg_pm_ucf_psw1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(2),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst2_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn2_psw_base_r(),
.range_end = addr_map_ucf_psn2_psw_limit_r(),
.element_stride = addr_map_ucf_psn2_psw_limit_r() -
addr_map_ucf_psn2_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst2_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw2_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw2_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw2_limit_r() -
addr_map_rpg_pm_ucf_psw2_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(3),
.num_core_elements_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_ucf_psw_inst3_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn3_psw_base_r(),
.range_end = addr_map_ucf_psn3_psw_limit_r(),
.element_stride = addr_map_ucf_psn3_psw_limit_r() -
addr_map_ucf_psn3_psw_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST,
.element_static_array =
t264_ucf_psw_inst3_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_ucf_psw3_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw3_limit_r(),
.element_stride = addr_map_rpg_pm_ucf_psw3_limit_r() -
addr_map_rpg_pm_ucf_psw3_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_ucf_psw = {
.num_instances = T264_HWPM_IP_UCF_PSW_NUM_INSTANCES,
.ip_inst_static_array = t264_ucf_psw_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_ucf_psn0_psw_base_r(),
.range_end = addr_map_ucf_psn3_psw_limit_r(),
.inst_stride = addr_map_ucf_psn0_psw_limit_r() -
addr_map_ucf_psn0_psw_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_ucf_psw0_base_r(),
.range_end = addr_map_rpg_pm_ucf_psw3_limit_r(),
.inst_stride = addr_map_rpg_pm_ucf_psw0_limit_r() -
addr_map_rpg_pm_ucf_psw0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK | TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_UCF_PSW_H
#define T264_HWPM_IP_UCF_PSW_H
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
#define T264_HWPM_ACTIVE_IP_UCF_PSW T264_HWPM_IP_UCF_PSW,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_UCF_PSW_NUM_INSTANCES 4U
#define T264_HWPM_IP_UCF_PSW_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_UCF_PSW_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_UCF_PSW_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_UCF_PSW_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_ucf_psw;
#else
#define T264_HWPM_ACTIVE_IP_UCF_PSW
#endif
#endif /* T264_HWPM_IP_UCF_PSW_H */


@@ -1,301 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_vi.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_vi_inst0_perfmon_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_vi0",
.device_index = T264_VI0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vi0_base_r(),
.end_abs_pa = addr_map_rpg_pm_vi0_limit_r(),
.start_pa = addr_map_rpg_pm_vi0_base_r(),
.end_pa = addr_map_rpg_pm_vi0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vi_inst1_perfmon_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_vi1",
.device_index = T264_VI1_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vi1_base_r(),
.end_abs_pa = addr_map_rpg_pm_vi1_limit_r(),
.start_pa = addr_map_rpg_pm_vi1_base_r(),
.end_pa = addr_map_rpg_pm_vi1_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vi_inst0_perfmux_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_vi_thi_base_r(),
.end_abs_pa = addr_map_vi_thi_limit_r(),
.start_pa = addr_map_vi_thi_base_r(),
.end_pa = addr_map_vi_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_vi_alist,
.alist_size = ARRAY_SIZE(t264_vi_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vi_inst1_perfmux_element_static_array[
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_vi2_thi_base_r(),
.end_abs_pa = addr_map_vi2_thi_limit_r(),
.start_pa = addr_map_vi2_thi_base_r(),
.end_pa = addr_map_vi2_thi_limit_r(),
.base_pa = 0ULL,
.alist = t264_vi_alist,
.alist_size = ARRAY_SIZE(t264_vi_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_vi_inst_static_array[
T264_HWPM_IP_VI_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_vi_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi_thi_base_r(),
.range_end = addr_map_vi_thi_limit_r(),
.element_stride = addr_map_vi_thi_limit_r() -
addr_map_vi_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST,
.element_static_array =
t264_vi_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_vi0_base_r(),
.range_end = addr_map_rpg_pm_vi0_limit_r(),
.element_stride = addr_map_rpg_pm_vi0_limit_r() -
addr_map_rpg_pm_vi0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
{
.hw_inst_mask = BIT(1),
.num_core_elements_per_inst =
T264_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_vi_inst1_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi2_thi_base_r(),
.range_end = addr_map_vi2_thi_limit_r(),
.element_stride = addr_map_vi2_thi_limit_r() -
addr_map_vi2_thi_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_VI_NUM_PERFMON_PER_INST,
.element_static_array =
t264_vi_inst1_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_vi1_base_r(),
.range_end = addr_map_rpg_pm_vi1_limit_r(),
.element_stride = addr_map_rpg_pm_vi1_limit_r() -
addr_map_rpg_pm_vi1_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_vi = {
.num_instances = T264_HWPM_IP_VI_NUM_INSTANCES,
.ip_inst_static_array = t264_vi_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_vi_thi_base_r(),
.range_end = addr_map_vi2_thi_limit_r(),
.inst_stride = addr_map_vi_thi_limit_r() -
addr_map_vi_thi_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_vi0_base_r(),
.range_end = addr_map_rpg_pm_vi1_limit_r(),
.inst_stride = addr_map_rpg_pm_vi0_limit_r() -
addr_map_rpg_pm_vi0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK | TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_VI_H
#define T264_HWPM_IP_VI_H
#if defined(CONFIG_T264_HWPM_IP_VI)
#define T264_HWPM_ACTIVE_IP_VI T264_HWPM_IP_VI,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_VI_NUM_INSTANCES 2U
#define T264_HWPM_IP_VI_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_VI_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_VI_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_VI_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_vi;
#else
#define T264_HWPM_ACTIVE_IP_VI
#endif
#endif /* T264_HWPM_IP_VI_H */


@@ -1,196 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#include "t264_vic.h"
#include <tegra_hwpm.h>
#include <hal/t264/t264_regops_allowlist.h>
#include <hal/t264/t264_perfmon_device_index.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
static struct hwpm_ip_aperture t264_vic_inst0_perfmon_element_static_array[
T264_HWPM_IP_VIC_NUM_PERFMON_PER_INST] = {
{
.element_type = HWPM_ELEMENT_PERFMON,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = "perfmon_vica0",
.device_index = T264_VICA0_PERFMON_DEVICE_NODE_INDEX,
.start_abs_pa = addr_map_rpg_pm_vic0_base_r(),
.end_abs_pa = addr_map_rpg_pm_vic0_limit_r(),
.start_pa = addr_map_rpg_pm_vic0_base_r(),
.end_pa = addr_map_rpg_pm_vic0_limit_r(),
.base_pa = addr_map_rpg_grp_vision_base_r(),
.alist = t264_perfmon_alist,
.alist_size = ARRAY_SIZE(t264_perfmon_alist),
.fake_registers = NULL,
},
};
static struct hwpm_ip_aperture t264_vic_inst0_perfmux_element_static_array[
T264_HWPM_IP_VIC_NUM_PERFMUX_PER_INST] = {
{
.element_type = IP_ELEMENT_PERFMUX,
.aperture_index = 0U,
.element_index_mask = BIT(0),
.element_index = 0U,
.dt_mmio = NULL,
.name = {'\0'},
.start_abs_pa = addr_map_vic_base_r(),
.end_abs_pa = addr_map_vic_limit_r(),
.start_pa = addr_map_vic_base_r(),
.end_pa = addr_map_vic_limit_r(),
.base_pa = 0ULL,
.alist = t264_vic_alist,
.alist_size = ARRAY_SIZE(t264_vic_alist),
.fake_registers = NULL,
},
};
/* IP instance array */
static struct hwpm_ip_inst t264_vic_inst_static_array[
T264_HWPM_IP_VIC_NUM_INSTANCES] = {
{
.hw_inst_mask = BIT(0),
.num_core_elements_per_inst =
T264_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST,
.element_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
.num_element_per_inst =
T264_HWPM_IP_VIC_NUM_PERFMUX_PER_INST,
.element_static_array =
t264_vic_inst0_perfmux_element_static_array,
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.element_stride = addr_map_vic_limit_r() -
addr_map_vic_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.num_element_per_inst =
T264_HWPM_IP_VIC_NUM_BROADCAST_PER_INST,
.element_static_array = NULL,
.range_start = 0ULL,
.range_end = 0ULL,
.element_stride = 0ULL,
.element_slots = 0U,
.element_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.num_element_per_inst =
T264_HWPM_IP_VIC_NUM_PERFMON_PER_INST,
.element_static_array =
t264_vic_inst0_perfmon_element_static_array,
.range_start = addr_map_rpg_pm_vic0_base_r(),
.range_end = addr_map_rpg_pm_vic0_limit_r(),
.element_stride = addr_map_rpg_pm_vic0_limit_r() -
addr_map_rpg_pm_vic0_base_r() + 1ULL,
.element_slots = 0U,
.element_arr = NULL,
},
},
.ip_ops = {
.ip_dev = NULL,
.hwpm_ip_pm = NULL,
.hwpm_ip_reg_op = NULL,
.fd = TEGRA_HWPM_IP_DEBUG_FD_INVALID,
},
.element_fs_mask = 0U,
.dev_name = "/dev/nvhost-debug/vic_hwpm",
},
};
/* IP structure */
struct hwpm_ip t264_hwpm_ip_vic = {
.num_instances = T264_HWPM_IP_VIC_NUM_INSTANCES,
.ip_inst_static_array = t264_vic_inst_static_array,
.inst_aperture_info = {
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMUX
*/
{
/* NOTE: range should be in ascending order */
.range_start = addr_map_vic_base_r(),
.range_end = addr_map_vic_limit_r(),
.inst_stride = addr_map_vic_limit_r() -
addr_map_vic_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_BROADCAST
*/
{
.range_start = 0ULL,
.range_end = 0ULL,
.inst_stride = 0ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
/*
* Instance info corresponding to
* TEGRA_HWPM_APERTURE_TYPE_PERFMON
*/
{
.range_start = addr_map_rpg_pm_vic0_base_r(),
.range_end = addr_map_rpg_pm_vic0_limit_r(),
.inst_stride = addr_map_rpg_pm_vic0_limit_r() -
addr_map_rpg_pm_vic0_base_r() + 1ULL,
.inst_slots = 0U,
.inst_arr = NULL,
},
},
.dependent_fuse_mask = TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK |
TEGRA_HWPM_FUSE_OPT_HWPM_DISABLE_MASK,
.override_enable = false,
.inst_fs_mask = 0U,
.resource_status = TEGRA_HWPM_RESOURCE_STATUS_INVALID,
.reserved = false,
};


@@ -1,48 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* This is a generated file. Do not edit.
*
* Steps to regenerate:
* python3 ip_files_generator.py <soc_chip> <IP_name> [<dir_name>]
*/
#ifndef T264_HWPM_IP_VIC_H
#define T264_HWPM_IP_VIC_H
#if defined(CONFIG_T264_HWPM_IP_VIC)
#define T264_HWPM_ACTIVE_IP_VIC T264_HWPM_IP_VIC,
/* This data should ideally be available in HW headers */
#define T264_HWPM_IP_VIC_NUM_INSTANCES 1U
#define T264_HWPM_IP_VIC_NUM_CORE_ELEMENT_PER_INST 1U
#define T264_HWPM_IP_VIC_NUM_PERFMON_PER_INST 1U
#define T264_HWPM_IP_VIC_NUM_PERFMUX_PER_INST 1U
#define T264_HWPM_IP_VIC_NUM_BROADCAST_PER_INST 0U
extern struct hwpm_ip t264_hwpm_ip_vic;
#else
#define T264_HWPM_ACTIVE_IP_VIC
#endif
#endif /* T264_HWPM_IP_VIC_H */


@@ -1,546 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_timers.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/ip/rtr/t264_rtr.h>
#include <hal/t264/hw/t264_pmasys_soc_hwpm.h>
#include <hal/t264/hw/t264_pmmsys_soc_hwpm.h>
#define T264_HWPM_ENGINE_INDEX_GPMA0 3U
#define T264_HWPM_ENGINE_INDEX_GPMA1 4U
#define T264_HWPM_ENGINE_INDEX_PMA 8U
int t264_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = active_chip->chip_ips[
active_chip->get_rtr_int_idx()];
struct hwpm_ip_inst *ip_inst_rtr = &chip_ip->ip_inst_static_array[
T264_HWPM_IP_RTR_STATIC_RTR_INST];
struct hwpm_ip_inst *ip_inst_pma = &chip_ip->ip_inst_static_array[
T264_HWPM_IP_RTR_STATIC_PMA_INST];
if (rtr_perfmux_ptr != NULL) {
*rtr_perfmux_ptr = &ip_inst_rtr->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T264_HWPM_IP_RTR_STATIC_RTR_PERFMUX_INDEX];
}
if (pma_perfmux_ptr != NULL) {
*pma_perfmux_ptr = &ip_inst_pma->element_info[
TEGRA_HWPM_APERTURE_TYPE_PERFMUX].element_static_array[
T264_HWPM_IP_RTR_STATIC_PMA_PERFMUX_INDEX];
}
return 0;
}
int t264_hwpm_check_status(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Check ROUTER state */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_enginestatus_r(), &reg_val);
hwpm_assert_print(hwpm,
pmmsys_router_enginestatus_status_v(reg_val) ==
pmmsys_router_enginestatus_status_empty_v(),
return -EINVAL, "Router not ready value 0x%x", reg_val);
/* Check PMA state */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_status_r(0, 0), &reg_val);
hwpm_assert_print(hwpm,
(reg_val & pmasys_channel_status_engine_status_m()) ==
pmasys_channel_status_engine_status_empty_f(),
return -EINVAL, "PMA not ready value 0x%x", reg_val);
return 0;
}
int t264_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
u32 retries = 10U;
u32 sleep_msecs = 100U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA triggers */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_config_user_pma_pulse_m(),
pmasys_command_slice_trigger_config_user_pma_pulse_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), reg_val);
/* Reset TRIGGER_START_MASK registers */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask0_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_start_mask0_engine_m(),
pmasys_command_slice_trigger_start_mask0_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask0_r(0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask1_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_start_mask1_engine_m(),
pmasys_command_slice_trigger_start_mask1_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_start_mask1_r(0), reg_val);
/* Reset TRIGGER_STOP_MASK registers */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask0_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_stop_mask0_engine_m(),
pmasys_command_slice_trigger_stop_mask0_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask0_r(0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask1_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_stop_mask1_engine_m(),
pmasys_command_slice_trigger_stop_mask1_engine_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_stop_mask1_r(0), reg_val);
/* Wait for PERFMONs to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_router_enginestatus_r(), &reg_val,
(pmmsys_router_enginestatus_merged_perfmon_status_v(
reg_val) != 0U),
"PMMSYS_ROUTER_ENGINESTATUS_PERFMON_STATUS timed out");
/* Wait for ROUTER to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
pmmsys_router_enginestatus_r(), &reg_val,
(pmmsys_router_enginestatus_status_v(reg_val) !=
pmmsys_router_enginestatus_status_empty_v()),
"PMMSYS_ROUTER_ENGINESTATUS_STATUS timed out");
/* Wait for PMA to idle */
tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, pma_perfmux,
pmasys_channel_status_r(0, 0), &reg_val,
((reg_val & pmasys_channel_status_engine_status_m()) !=
pmasys_channel_status_engine_status_empty_f()),
"PMASYS_CHANNEL_STATUS timed out");
return err;
}
int t264_hwpm_init_prod_values(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0, 0), &reg_val);
reg_val = set_field(reg_val,
pmasys_channel_config_user_coalesce_timeout_cycles_m(),
pmasys_channel_config_user_coalesce_timeout_cycles__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_config_user_r(0, 0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg2_secure_slcg_m(),
pmasys_profiling_cg2_secure_slcg__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg1_secure_flcg_m(),
pmasys_profiling_cg1_secure_flcg__prod_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_dg_cg1_secure_flcg_m(),
pmmsys_router_profiling_dg_cg1_secure_flcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg1_secure_flcg_m(),
pmmsys_router_profiling_cg1_secure_flcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_perfmon_cg2_secure_slcg_m(),
pmmsys_router_perfmon_cg2_secure_slcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg2_secure_slcg_m(),
pmmsys_router_profiling_cg2_secure_slcg__prod_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), reg_val);
return 0;
}
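Every programming step in the functions above and below follows the same read, set_field(), write sequence. The set_field() helper itself is not shown in this diff; a sketch of the mask-and-insert semantics it is assumed to have (illustration only):

/*
 * Assumed semantics of set_field() as used in this file (illustration
 * only): clear the bits selected by mask, then OR in the already
 * shifted field value.
 */
static inline u32 example_set_field(u32 val, u32 mask, u32 field)
{
return (val & ~mask) | (field & mask);
}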
int t264_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg2_secure_slcg_m(),
pmasys_profiling_cg2_secure_slcg_disabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg1_secure_flcg_m(),
pmasys_profiling_cg1_secure_flcg_disabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_dg_cg1_secure_flcg_m(),
pmmsys_router_profiling_dg_cg1_secure_flcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg1_secure_flcg_m(),
pmmsys_router_profiling_cg1_secure_flcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_perfmon_cg2_secure_slcg_m(),
pmmsys_router_perfmon_cg2_secure_slcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg2_secure_slcg_m(),
pmmsys_router_profiling_cg2_secure_slcg_disabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), reg_val);
return 0;
}
int t264_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg2_secure_slcg_m(),
pmasys_profiling_cg2_secure_slcg_enabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val, pmasys_profiling_cg1_secure_flcg_m(),
pmasys_profiling_cg1_secure_flcg_enabled_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_dg_cg1_secure_flcg_m(),
pmmsys_router_profiling_dg_cg1_secure_flcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_dg_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg1_secure_flcg_m(),
pmmsys_router_profiling_cg1_secure_flcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg1_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_perfmon_cg2_secure_slcg_m(),
pmmsys_router_perfmon_cg2_secure_slcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_perfmon_cg2_secure_r(), reg_val);
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), &reg_val);
reg_val = set_field(reg_val,
pmmsys_router_profiling_cg2_secure_slcg_m(),
pmmsys_router_profiling_cg2_secure_slcg_enabled_f());
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_router_profiling_cg2_secure_r(), reg_val);
return 0;
}
int t264_hwpm_credit_program(struct tegra_soc_hwpm *hwpm,
u32 *num_credits, u8 cblock_idx, u8 pma_channel_idx,
uint16_t credit_cmd)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct hwpm_ip_aperture *rtr_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr, pma perfmux failed");
switch (credit_cmd) {
case TEGRA_HWPM_CMD_SET_HS_CREDITS:
/* Write credits information */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_user_channel_config_secure_r(
cblock_idx, pma_channel_idx),
&reg_val);
reg_val = set_field(reg_val,
pmmsys_user_channel_config_secure_hs_credits_m(),
*num_credits);
tegra_hwpm_writel(hwpm, rtr_perfmux,
pmmsys_user_channel_config_secure_r(
cblock_idx, pma_channel_idx),
reg_val);
break;
case TEGRA_HWPM_CMD_GET_HS_CREDITS:
/* Read credits information */
tegra_hwpm_readl(hwpm, rtr_perfmux,
pmmsys_user_channel_config_secure_r(
cblock_idx, pma_channel_idx),
num_credits);
break;
case TEGRA_HWPM_CMD_GET_TOTAL_HS_CREDITS:
/* read the total HS Credits */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_streaming_capabilities1_r(), &reg_val);
*num_credits = pmasys_streaming_capabilities1_total_credits_v(
reg_val);
break;
case TEGRA_HWPM_CMD_GET_CHIPLET_HS_CREDITS_POOL:
/* Defined for future chips */
tegra_hwpm_err(hwpm,
"TEGRA_SOC_HWPM_CMD_GET_CHIPLET_HS_CREDIT_POOL"
" not supported");
err = -EINVAL;
break;
case TEGRA_HWPM_CMD_GET_HS_CREDITS_MAPPING:
/* Defined for future chips */
tegra_hwpm_err(hwpm,
"TEGRA_SOC_HWPM_CMD_GET_HS_CREDIT_MAPPING"
" not supported");
err = -EINVAL;
break;
default:
tegra_hwpm_err(hwpm, "Invalid Credit Programming State (%d)",
credit_cmd);
err = -EINVAL;
break;
}
return err;
}
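A hypothetical caller of the command interface above, querying the total high-speed credit pool (illustration only; the hwpm pointer comes from the surrounding context and the index arguments are placeholders):

/* Illustration only: read the total HS credit pool. */
u32 total_credits = 0U;
int ret = t264_hwpm_credit_program(hwpm, &total_credits, 0U, 0U,
TEGRA_HWPM_CMD_GET_TOTAL_HS_CREDITS);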
int t264_hwpm_setup_trigger(struct tegra_soc_hwpm *hwpm,
u8 enable_cross_trigger, u8 session_type)
{
int err = 0;
u32 trigger_mask_secure0 = 0U;
u32 record_select_secure = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get pma perfmux failed");
/*
* Case 1: profiler, cross-trigger enabled, GPU->SoC
* - Action: enable incoming start-stop trigger from GPU PMA
* - GPU PMA Action: enable outgoing trigger from GPU PMA,
* trigger type doesn't matter on GPU side
*
* Case 2: sampler, cross-trigger enabled, GPU->SoC
* - Action: enable incoming periodic trigger from GPU PMA
* - GPU PMA Action: enable outgoing trigger from GPU PMA,
* trigger type doesn't matter on GPU side
*
* Case 3: profiler, cross-trigger enabled, SoC->GPU
* - Action: enable outgoing trigger from SoC PMA,
* trigger type doesn't matter on SoC side
* - GPU PMA Action: configure incoming start-stop trigger from SoC PMA
*
* Case 4: sampler, cross-trigger enabled, SoC->GPU
* - Action: enable outgoing trigger from SoC PMA,
* trigger type doesn't matter on SoC side
* - GPU PMA Action: configure incoming periodic trigger from SoC PMA
*
* Case 5: profiler, cross-trigger disabled
* - Action: enable own trigger from SoC PMA,
* trigger type doesn't matter
* - GPU PMA Action: enable own trigger from GPU PMA,
* trigger type doesn't matter
*
* Case 6: sampler, cross-trigger disabled
* - Action: enable own trigger from SoC PMA,
* trigger type doesn't matter
* - GPU PMA Action: enable own trigger from GPU PMA,
* trigger type doesn't matter
*/
if (!enable_cross_trigger) {
/*
* Handle Case-3 to Case-6
*/
trigger_mask_secure0 = BIT(T264_HWPM_ENGINE_INDEX_PMA);
record_select_secure = T264_HWPM_ENGINE_INDEX_PMA;
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_mask_secure0_r(0),
trigger_mask_secure0);
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_record_select_secure_r(0),
record_select_secure);
return err;
}
switch (session_type) {
case TEGRA_HWPM_CMD_PERIODIC_SESSION:
/*
* Handle Case-2 (sampler: incoming periodic trigger from GPU PMA)
*/
trigger_mask_secure0 = BIT(T264_HWPM_ENGINE_INDEX_GPMA1);
record_select_secure = T264_HWPM_ENGINE_INDEX_GPMA1;
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_mask_secure0_r(0),
trigger_mask_secure0);
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_record_select_secure_r(0),
record_select_secure);
break;
case TEGRA_HWPM_CMD_START_STOP_SESSION:
/*
* Handle Case-1 (profiler: incoming start-stop trigger from GPU PMA)
*/
trigger_mask_secure0 = BIT(T264_HWPM_ENGINE_INDEX_GPMA0);
record_select_secure = T264_HWPM_ENGINE_INDEX_GPMA0;
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_mask_secure0_r(0),
trigger_mask_secure0);
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_record_select_secure_r(0),
record_select_secure);
break;
case TEGRA_HWPM_CMD_INVALID_SESSION:
default:
tegra_hwpm_err(hwpm, "Invalid Session type");
err = -EINVAL;
break;
}
return err;
}
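Hypothetical calls mapping the documented cases to the two arguments (illustration only; hwpm, err and session_type come from the caller's context):

/* Illustration only. */
err = t264_hwpm_setup_trigger(hwpm, 1U, TEGRA_HWPM_CMD_START_STOP_SESSION); /* Case 1 */
err = t264_hwpm_setup_trigger(hwpm, 1U, TEGRA_HWPM_CMD_PERIODIC_SESSION); /* Case 2 */
err = t264_hwpm_setup_trigger(hwpm, 0U, session_type); /* Cases 3 to 6: SoC PMA uses its own trigger */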


@@ -1,31 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_INIT_H
#define T264_HWPM_INIT_H
struct tegra_soc_hwpm;
int t264_hwpm_init_chip_info(struct tegra_soc_hwpm *hwpm);
#endif /* T264_HWPM_INIT_H */


@@ -1,360 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_clk_rst.h>
#include <tegra_hwpm_common.h>
#include <tegra_hwpm_kmem.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_init.h>
#include <hal/t264/t264_internal.h>
static struct tegra_soc_hwpm_chip t264_chip_info = {
.la_clk_rate = 648000000,
.chip_ips = NULL,
/* HALs */
.validate_secondary_hals = t264_hwpm_validate_secondary_hals,
/* Clocks-Resets */
.clk_rst_prepare = tegra_hwpm_clk_rst_prepare,
.clk_rst_set_rate_enable = tegra_hwpm_clk_rst_set_rate_enable,
.clk_rst_disable = tegra_hwpm_clk_rst_disable,
.clk_rst_release = tegra_hwpm_clk_rst_release,
/* IP */
.is_ip_active = t264_hwpm_is_ip_active,
.is_resource_active = t264_hwpm_is_resource_active,
.get_rtr_int_idx = t264_get_rtr_int_idx,
.get_ip_max_idx = t264_get_ip_max_idx,
.get_rtr_pma_perfmux_ptr = t264_hwpm_get_rtr_pma_perfmux_ptr,
.extract_ip_ops = t264_hwpm_extract_ip_ops,
.force_enable_ips = t264_hwpm_force_enable_ips,
.validate_current_config = t264_hwpm_validate_current_config,
.get_fs_info = tegra_hwpm_get_fs_info,
.get_resource_info = tegra_hwpm_get_resource_info,
/* Clock gating */
.init_prod_values = t264_hwpm_init_prod_values,
.disable_cg = t264_hwpm_disable_cg,
.enable_cg = t264_hwpm_enable_cg,
/* Secure register programming */
.credit_program = t264_hwpm_credit_program,
.setup_trigger = t264_hwpm_setup_trigger,
/* Resource reservation */
.reserve_rtr = tegra_hwpm_reserve_rtr,
.release_rtr = tegra_hwpm_release_rtr,
/* Aperture */
.perfmon_enable = t264_hwpm_perfmon_enable,
.perfmon_disable = t264_hwpm_perfmon_disable,
.perfmux_disable = tegra_hwpm_perfmux_disable,
.disable_triggers = t264_hwpm_disable_triggers,
.check_status = t264_hwpm_check_status,
/* Memory management */
.disable_mem_mgmt = t264_hwpm_disable_mem_mgmt,
.enable_mem_mgmt = t264_hwpm_enable_mem_mgmt,
.invalidate_mem_config = t264_hwpm_invalidate_mem_config,
.stream_mem_bytes = t264_hwpm_stream_mem_bytes,
.disable_pma_streaming = t264_hwpm_disable_pma_streaming,
.update_mem_bytes_get_ptr = t264_hwpm_update_mem_bytes_get_ptr,
.get_mem_bytes_put_ptr = t264_hwpm_get_mem_bytes_put_ptr,
.membuf_overflow_status = t264_hwpm_membuf_overflow_status,
/* Allowlist */
.get_alist_buf_size = tegra_hwpm_get_alist_buf_size,
.zero_alist_regs = tegra_hwpm_zero_alist_regs,
.copy_alist = tegra_hwpm_copy_alist,
.check_alist = tegra_hwpm_check_alist,
};
bool t264_hwpm_validate_secondary_hals(struct tegra_soc_hwpm *hwpm)
{
tegra_hwpm_fn(hwpm, " ");
if (hwpm->active_chip->clk_rst_prepare == NULL) {
tegra_hwpm_err(hwpm, "clk_rst_prepare HAL uninitialized");
return false;
}
if (hwpm->active_chip->clk_rst_set_rate_enable == NULL) {
tegra_hwpm_err(hwpm,
"clk_rst_set_rate_enable HAL uninitialized");
return false;
}
if (hwpm->active_chip->clk_rst_disable == NULL) {
tegra_hwpm_err(hwpm, "clk_rst_disable HAL uninitialized");
return false;
}
if (hwpm->active_chip->clk_rst_release == NULL) {
tegra_hwpm_err(hwpm, "clk_rst_release HAL uninitialized");
return false;
}
if (hwpm->active_chip->credit_program == NULL) {
tegra_hwpm_err(hwpm, "credit_program HAL uninitialized");
return false;
}
if (hwpm->active_chip->setup_trigger == NULL) {
tegra_hwpm_err(hwpm, "setup_trigger HAL uninitialized");
return false;
}
return true;
}
bool t264_hwpm_is_ip_active(struct tegra_soc_hwpm *hwpm,
u32 ip_enum, u32 *config_ip_index)
{
u32 config_ip = TEGRA_HWPM_IP_INACTIVE;
switch (ip_enum) {
#if defined(CONFIG_T264_HWPM_IP_VIC)
case TEGRA_HWPM_IP_VIC:
config_ip = T264_HWPM_IP_VIC;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case TEGRA_HWPM_IP_MSS_CHANNEL:
config_ip = T264_HWPM_IP_MSS_CHANNEL;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_PVA)
case TEGRA_HWPM_IP_PVA:
config_ip = T264_HWPM_IP_PVA;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
case TEGRA_HWPM_IP_MSS_HUB:
config_ip = T264_HWPM_IP_MSS_HUBS;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_OCU)
case TEGRA_HWPM_IP_MCF_OCU:
config_ip = T264_HWPM_IP_OCU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_VI)
case TEGRA_HWPM_IP_VI:
config_ip = T264_HWPM_IP_VI;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_ISP)
case TEGRA_HWPM_IP_ISP:
config_ip = T264_HWPM_IP_ISP;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_SMMU)
case TEGRA_HWPM_IP_SMMU:
config_ip = T264_HWPM_IP_SMMU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
case TEGRA_HWPM_IP_UCF_MSW:
config_ip = T264_HWPM_IP_UCF_MSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
case TEGRA_HWPM_IP_UCF_PSW:
config_ip = T264_HWPM_IP_UCF_PSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
case TEGRA_HWPM_IP_UCF_CSW:
config_ip = T264_HWPM_IP_UCF_CSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_CPU)
case TEGRA_HWPM_IP_CPU:
config_ip = T264_HWPM_IP_CPU;
#endif
break;
default:
tegra_hwpm_err(hwpm, "Queried enum tegra_hwpm_ip %d invalid",
ip_enum);
break;
}
*config_ip_index = config_ip;
return (config_ip != TEGRA_HWPM_IP_INACTIVE);
}
bool t264_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
u32 res_enum, u32 *config_ip_index)
{
u32 config_ip = TEGRA_HWPM_IP_INACTIVE;
switch (res_enum) {
#if defined(CONFIG_T264_HWPM_IP_VIC)
case TEGRA_HWPM_RESOURCE_VIC:
config_ip = T264_HWPM_IP_VIC;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case TEGRA_HWPM_RESOURCE_MSS_CHANNEL:
config_ip = T264_HWPM_IP_MSS_CHANNEL;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_PVA)
case TEGRA_HWPM_RESOURCE_PVA:
config_ip = T264_HWPM_IP_PVA;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
case TEGRA_HWPM_RESOURCE_MSS_HUB:
config_ip = T264_HWPM_IP_MSS_HUBS;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_OCU)
case TEGRA_HWPM_RESOURCE_MCF_OCU:
config_ip = T264_HWPM_IP_OCU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_VI)
case TEGRA_HWPM_RESOURCE_VI:
config_ip = T264_HWPM_IP_VI;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_ISP)
case TEGRA_HWPM_RESOURCE_ISP:
config_ip = T264_HWPM_IP_ISP;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_SMMU)
case TEGRA_HWPM_RESOURCE_SMMU:
config_ip = T264_HWPM_IP_SMMU;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
case TEGRA_HWPM_RESOURCE_UCF_MSW:
config_ip = T264_HWPM_IP_UCF_MSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
case TEGRA_HWPM_RESOURCE_UCF_PSW:
config_ip = T264_HWPM_IP_UCF_PSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
case TEGRA_HWPM_RESOURCE_UCF_CSW:
config_ip = T264_HWPM_IP_UCF_CSW;
#endif
break;
#if defined(CONFIG_T264_HWPM_IP_CPU)
case TEGRA_HWPM_RESOURCE_CPU:
config_ip = T264_HWPM_IP_CPU;
#endif
break;
case TEGRA_HWPM_RESOURCE_PMA:
config_ip = T264_HWPM_IP_PMA;
break;
case TEGRA_HWPM_RESOURCE_CMD_SLICE_RTR:
config_ip = T264_HWPM_IP_RTR;
break;
default:
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"Queried resource %d invalid",
res_enum);
break;
}
*config_ip_index = config_ip;
return (config_ip != TEGRA_HWPM_IP_INACTIVE);
}
u32 t264_get_rtr_int_idx(void)
{
return T264_HWPM_IP_RTR;
}
u32 t264_get_ip_max_idx(void)
{
return T264_HWPM_IP_MAX;
}
int t264_hwpm_init_chip_info(struct tegra_soc_hwpm *hwpm)
{
struct hwpm_ip **t264_active_ip_info;
/* Allocate array of pointers to hold active IP structures */
t264_chip_info.chip_ips = tegra_hwpm_kcalloc(hwpm,
T264_HWPM_IP_MAX, sizeof(struct hwpm_ip *));
if (t264_chip_info.chip_ips == NULL) {
tegra_hwpm_err(hwpm, "failed to allocate chip_ips array");
return -ENOMEM;
}
/* Add active chip structure link to hwpm super-structure */
hwpm->active_chip = &t264_chip_info;
/* Temporary pointer to make below assignments legible */
t264_active_ip_info = t264_chip_info.chip_ips;
t264_active_ip_info[T264_HWPM_IP_PMA] = &t264_hwpm_ip_pma;
t264_active_ip_info[T264_HWPM_IP_RTR] = &t264_hwpm_ip_rtr;
#if defined(CONFIG_T264_HWPM_IP_VIC)
t264_active_ip_info[T264_HWPM_IP_VIC] = &t264_hwpm_ip_vic;
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
t264_active_ip_info[T264_HWPM_IP_MSS_CHANNEL] =
&t264_hwpm_ip_mss_channel;
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
t264_active_ip_info[T264_HWPM_IP_MSS_HUBS] =
&t264_hwpm_ip_mss_hubs;
#endif
#if defined(CONFIG_T264_HWPM_IP_PVA)
t264_active_ip_info[T264_HWPM_IP_PVA] = &t264_hwpm_ip_pva;
#endif
#if defined(CONFIG_T264_HWPM_IP_OCU)
t264_active_ip_info[T264_HWPM_IP_OCU] = &t264_hwpm_ip_ocu;
#endif
#if defined(CONFIG_T264_HWPM_IP_SMMU)
t264_active_ip_info[T264_HWPM_IP_SMMU] = &t264_hwpm_ip_smmu;
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
t264_active_ip_info[T264_HWPM_IP_UCF_MSW] = &t264_hwpm_ip_ucf_msw;
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
t264_active_ip_info[T264_HWPM_IP_UCF_PSW] = &t264_hwpm_ip_ucf_psw;
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
t264_active_ip_info[T264_HWPM_IP_UCF_CSW] = &t264_hwpm_ip_ucf_csw;
#endif
#if defined(CONFIG_T264_HWPM_IP_CPU)
t264_active_ip_info[T264_HWPM_IP_CPU] = &t264_hwpm_ip_cpu;
#endif
#if defined(CONFIG_T264_HWPM_IP_VI)
t264_active_ip_info[T264_HWPM_IP_VI] = &t264_hwpm_ip_vi;
#endif
#if defined(CONFIG_T264_HWPM_IP_ISP)
t264_active_ip_info[T264_HWPM_IP_ISP] = &t264_hwpm_ip_isp;
#endif
if (!tegra_hwpm_validate_primary_hals(hwpm)) {
return -EINVAL;
}
return 0;
}


@@ -1,119 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_INTERNAL_H
#define T264_HWPM_INTERNAL_H
#include <hal/t264/ip/vic/t264_vic.h>
#include <hal/t264/ip/pva/t264_pva.h>
#include <hal/t264/ip/mss_channel/t264_mss_channel.h>
#include <hal/t264/ip/mss_hubs/t264_mss_hubs.h>
#include <hal/t264/ip/ocu/t264_ocu.h>
#include <hal/t264/ip/smmu/t264_smmu.h>
#include <hal/t264/ip/ucf_msw/t264_ucf_msw.h>
#include <hal/t264/ip/ucf_psw/t264_ucf_psw.h>
#include <hal/t264/ip/ucf_csw/t264_ucf_csw.h>
#include <hal/t264/ip/cpu/t264_cpu.h>
#include <hal/t264/ip/vi/t264_vi.h>
#include <hal/t264/ip/isp/t264_isp.h>
#include <hal/t264/ip/pma/t264_pma.h>
#include <hal/t264/ip/rtr/t264_rtr.h>
#undef DEFINE_SOC_HWPM_ACTIVE_IP
#define DEFINE_SOC_HWPM_ACTIVE_IP(name) name
#define T264_HWPM_ACTIVE_IP_MAX T264_HWPM_IP_MAX
#define T264_ACTIVE_IPS \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_PMA), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_RTR), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_VI), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_ISP), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_VIC), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_PVA), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_MSS_CHANNEL), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_MSS_HUBS), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_OCU), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_SMMU), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_UCF_MSW), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_UCF_PSW), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_UCF_CSW), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_CPU), \
DEFINE_SOC_HWPM_ACTIVE_IP(T264_HWPM_ACTIVE_IP_MAX)
enum t264_hwpm_active_ips {
T264_ACTIVE_IPS
};
#undef DEFINE_SOC_HWPM_ACTIVE_IP
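/*
 * Illustrative note: with DEFINE_SOC_HWPM_ACTIVE_IP(name) defined as "name",
 * the T264_ACTIVE_IPS list above expands to the enumerators
 * T264_HWPM_ACTIVE_IP_PMA, T264_HWPM_ACTIVE_IP_RTR, ...,
 * T264_HWPM_ACTIVE_IP_MAX of enum t264_hwpm_active_ips. The same list can be
 * re-expanded elsewhere by redefining the macro (the usual X-macro pattern).
 */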
enum tegra_soc_hwpm_ip;
enum tegra_soc_hwpm_resource;
struct tegra_soc_hwpm;
struct hwpm_ip_aperture;
bool t264_hwpm_validate_secondary_hals(struct tegra_soc_hwpm *hwpm);
bool t264_hwpm_is_ip_active(struct tegra_soc_hwpm *hwpm,
u32 ip_enum, u32 *config_ip_index);
bool t264_hwpm_is_resource_active(struct tegra_soc_hwpm *hwpm,
u32 res_enum, u32 *config_ip_index);
u32 t264_get_rtr_int_idx(void);
u32 t264_get_ip_max_idx(void);
int t264_hwpm_get_rtr_pma_perfmux_ptr(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture **rtr_perfmux_ptr,
struct hwpm_ip_aperture **pma_perfmux_ptr);
int t264_hwpm_check_status(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_validate_current_config(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_extract_ip_ops(struct tegra_soc_hwpm *hwpm,
u32 resource_enum, u64 base_address,
struct tegra_hwpm_ip_ops *ip_ops, bool available);
int t264_hwpm_force_enable_ips(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_disable_triggers(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_init_prod_values(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_credit_program(struct tegra_soc_hwpm *hwpm,
u32 *num_credits, u8 cblock_idx, u8 pma_channel_idx,
uint16_t credit_cmd);
int t264_hwpm_setup_trigger(struct tegra_soc_hwpm *hwpm,
u8 enable_cross_trigger, u8 session_type);
int t264_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon);
int t264_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
struct hwpm_ip_aperture *perfmon);
int t264_hwpm_disable_cg(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_enable_cg(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_disable_mem_mgmt(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_invalidate_mem_config(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm);
int t264_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump);
int t264_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr);
int t264_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status);
#endif /* T264_HWPM_INTERNAL_H */


@@ -1,673 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_static_analysis.h>
#include <tegra_hwpm_common.h>
#include <tegra_hwpm_soc.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/hw/t264_addr_map_soc_hwpm.h>
/*
* This function is invoked by register_ip API.
* Convert the external resource enum to internal IP index.
* Extract given ip_ops and update corresponding IP structure.
*/
int t264_hwpm_extract_ip_ops(struct tegra_soc_hwpm *hwpm,
u32 resource_enum, u64 base_address,
struct tegra_hwpm_ip_ops *ip_ops, bool available)
{
int ret = 0;
u32 ip_idx = 0U;
tegra_hwpm_fn(hwpm, " ");
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"Extract IP ops for resource enum %d info", resource_enum);
/* Convert tegra_soc_hwpm_resource to internal enum */
if (!(t264_hwpm_is_resource_active(hwpm, resource_enum, &ip_idx))) {
tegra_hwpm_dbg(hwpm, hwpm_dbg_ip_register,
"SOC hwpm resource %d (base 0x%llx) is unconfigured",
resource_enum, (unsigned long long)base_address);
goto fail;
}
switch (ip_idx) {
#if defined(CONFIG_T264_HWPM_IP_VIC)
case T264_HWPM_IP_VIC:
#endif
#if defined(CONFIG_T264_HWPM_IP_PVA)
case T264_HWPM_IP_PVA:
#endif
#if defined(CONFIG_T264_HWPM_IP_OCU)
case T264_HWPM_IP_OCU:
#endif
#if defined(CONFIG_T264_HWPM_IP_SMMU)
case T264_HWPM_IP_SMMU:
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
case T264_HWPM_IP_UCF_MSW:
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
case T264_HWPM_IP_UCF_PSW:
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
case T264_HWPM_IP_UCF_CSW:
#endif
#if defined(CONFIG_T264_HWPM_IP_CPU)
case T264_HWPM_IP_CPU:
#endif
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"Failed to %s fs/ops for IP %d (base 0x%llx)",
available == true ? "set" : "reset",
ip_idx, (unsigned long long)base_address);
goto fail;
}
break;
#if defined(CONFIG_T264_HWPM_IP_VI)
case T264_HWPM_IP_VI:
#endif
#if defined(CONFIG_T264_HWPM_IP_ISP)
case T264_HWPM_IP_ISP:
#endif
if (tegra_hwpm_is_hypervisor_mode()) {
/*
* VI and ISP are enabled only on AV+L configuration
* as the camera driver is not supported on L4T.
*/
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"Failed to %s fs/ops for IP %d (base 0x%llx)",
available == true ? "set" : "reset",
ip_idx, (unsigned long long)base_address);
goto fail;
}
} else {
tegra_hwpm_err(hwpm, "Invalid IP %d for ip_ops", ip_idx);
}
break;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case T264_HWPM_IP_MSS_CHANNEL:
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
case T264_HWPM_IP_MSS_HUBS:
#endif
/* MSS channel and MSS hubs share MC channels */
/* Check base address in T264_HWPM_IP_MSS_CHANNEL */
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
ip_idx = T264_HWPM_IP_MSS_CHANNEL;
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
/*
* Return value of ENODEV will indicate that the base
* address doesn't belong to this IP.
*/
if (ret != -ENODEV) {
tegra_hwpm_err(hwpm,
"IP %d base 0x%llx:Failed to %s fs/ops",
ip_idx, (unsigned long long)base_address,
available == true ? "set" : "reset");
goto fail;
}
/*
* ret = -ENODEV indicates given address doesn't belong
* to IP. This means ip_ops will not be set for this IP.
* This shouldn't be a reason to fail this function.
* Hence, reset ret to 0.
*/
ret = 0;
}
#endif
/* Check base address in T264_HWPM_IP_MSS_HUBS */
#if defined(CONFIG_T264_HWPM_IP_MSS_HUBS)
ip_idx = T264_HWPM_IP_MSS_HUBS;
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, ip_ops,
base_address, ip_idx, available);
if (ret != 0) {
/*
* Return value of ENODEV will indicate that the base
* address doesn't belong to this IP.
*/
if (ret != -ENODEV) {
tegra_hwpm_err(hwpm,
"IP %d base 0x%llx:Failed to %s fs/ops",
ip_idx, (unsigned long long)base_address,
available == true ? "set" : "reset");
goto fail;
}
/*
* ret = -ENODEV indicates given address doesn't belong
* to IP. This means ip_ops will not be set for this IP.
* This shouldn't be a reason to fail this function.
* Hence, reset ret to 0.
*/
ret = 0;
}
#endif
break;
case T264_HWPM_IP_PMA:
case T264_HWPM_IP_RTR:
default:
tegra_hwpm_err(hwpm, "Invalid IP %d for ip_ops", ip_idx);
break;
}
fail:
return ret;
}
static int t264_hwpm_validate_emc_config(struct tegra_soc_hwpm *hwpm)
{
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
struct hwpm_ip *chip_ip = NULL;
struct hwpm_ip_inst *ip_inst = NULL;
u32 inst_idx = 0U;
u32 element_mask_max = 0U;
#endif
u32 mss_disable_fuse_val = 0U;
u32 mss_disable_fuse_val_mask = 0xFU;
u32 mss_disable_fuse_bit_idx = 0U;
u32 emc_element_floorsweep_mask = 0U;
u32 idx = 0U;
int err;
tegra_hwpm_fn(hwpm, " ");
if (!tegra_hwpm_is_platform_silicon()) {
tegra_hwpm_err(hwpm,
"Fuse readl is not implemented yet. Skip for now ");
return 0;
}
#define TEGRA_FUSE_OPT_MSS_DISABLE 0x8c0U
err = tegra_hwpm_fuse_readl(hwpm,
TEGRA_FUSE_OPT_MSS_DISABLE, &mss_disable_fuse_val);
if (err != 0) {
tegra_hwpm_err(hwpm, "emc_disable fuse read failed");
return err;
}
/*
 * In the floorsweep fuse value, each bit corresponds to 4 EMC elements.
 * A bit value of 0 indicates those elements are available; a value of 1
 * indicates the corresponding elements are floorswept.
 *
 * Convert the floorsweep fuse value into a mask of available EMC
 * elements.
 */
do {
if (!(mss_disable_fuse_val & (0x1U << mss_disable_fuse_bit_idx))) {
emc_element_floorsweep_mask |=
(0xFU << (mss_disable_fuse_bit_idx * 4U));
}
mss_disable_fuse_bit_idx++;
mss_disable_fuse_val_mask = (mss_disable_fuse_val_mask >> 1U);
} while (mss_disable_fuse_val_mask != 0U);
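/*
 * Worked example (illustrative values): mss_disable_fuse_val = 0x2 means
 * fuse bit 1 is set, so elements 4..7 are floorswept. Bits 0, 2 and 3 are
 * clear, so the loop sets nibbles 0, 2 and 3 and
 * emc_element_floorsweep_mask ends up as 0xFF0F, i.e. the mask of
 * available EMC elements.
 */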
/* Set fuse value in MSS IP instances */
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
switch (idx) {
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
case T264_HWPM_IP_MSS_CHANNEL:
#endif
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
chip_ip = active_chip->chip_ips[idx];
for (inst_idx = 0U; inst_idx < chip_ip->num_instances;
inst_idx++) {
ip_inst = &chip_ip->ip_inst_static_array[
inst_idx];
/*
 * The fuse mask covers every EMC element on the chip, while each
 * instance implements only num_core_elements_per_inst of them.
 * Hence use the per-instance max element mask to extract the
 * correct fs info for the HWPM driver.
 */
element_mask_max = tegra_hwpm_safe_sub_u32(
tegra_hwpm_safe_cast_u64_to_u32(BIT(
ip_inst->num_core_elements_per_inst)),
1U);
ip_inst->fuse_fs_mask =
(emc_element_floorsweep_mask &
element_mask_max);
tegra_hwpm_dbg(hwpm, hwpm_info,
"ip %d, fuse_mask 0x%x",
idx, ip_inst->fuse_fs_mask);
}
break;
#endif
default:
continue;
}
}
return 0;
}
int t264_hwpm_validate_current_config(struct tegra_soc_hwpm *hwpm)
{
u32 opt_hwpm_disable = 0U;
u32 fa_mode = 0U;
u32 hwpm_global_disable = 0U;
u32 idx = 0U;
int err;
struct tegra_soc_hwpm_chip *active_chip = hwpm->active_chip;
struct hwpm_ip *chip_ip = NULL;
tegra_hwpm_fn(hwpm, " ");
if (!tegra_hwpm_is_platform_silicon()) {
return 0;
}
err = t264_hwpm_validate_emc_config(hwpm);
if (err != 0) {
tegra_hwpm_err(hwpm, "failed to validate emc config");
return err;
}
#define TEGRA_FUSE_OPT_HWPM_DISABLE 0xc18U
/* Read fuse_opt_hwpm_disable_0 fuse */
err = tegra_hwpm_fuse_readl(hwpm,
TEGRA_FUSE_OPT_HWPM_DISABLE, &opt_hwpm_disable);
if (err != 0) {
tegra_hwpm_err(hwpm, "opt_hwpm_disable fuse read failed");
return err;
}
#define TEGRA_FUSE_FA_MODE 0x48U
err = tegra_hwpm_fuse_readl(hwpm, TEGRA_FUSE_FA_MODE, &fa_mode);
if (err != 0) {
tegra_hwpm_err(hwpm, "fa mode fuse read failed");
return err;
}
/*
* Configure global control register to disable PCFIFO interlock
* By writing to MSS_HUB_HUBC_CONFIG_0 register
*/
#define TEGRA_HUB_HUBC_CONFIG0_OFFSET 0x6244U
#define TEGRA_HUB_HUBC_PCFIFO_INTERLOCK_DISABLED 0x1U
err = tegra_hwpm_write_sticky_bits(hwpm, addr_map_mcb_base_r(),
TEGRA_HUB_HUBC_CONFIG0_OFFSET,
TEGRA_HUB_HUBC_PCFIFO_INTERLOCK_DISABLED);
hwpm_assert_print(hwpm, err == 0, return err,
"PCFIFO Interlock disable failed");
#define TEGRA_HWPM_GLOBAL_DISABLE_OFFSET 0x300CU
#define TEGRA_HWPM_GLOBAL_DISABLE_DISABLED 0x0U
err = tegra_hwpm_read_sticky_bits(hwpm, addr_map_pmc_misc_base_r(),
TEGRA_HWPM_GLOBAL_DISABLE_OFFSET, &hwpm_global_disable);
if (err != 0) {
tegra_hwpm_err(hwpm, "hwpm global disable read failed");
return err;
}
/*
* Do not enable override if the FA mode fuse is set. The FA_MODE fuse
* enables all PERFMONs regardless of the fuse, sticky bit or secure
* register settings.
*/
if (fa_mode != 0U) {
tegra_hwpm_dbg(hwpm, hwpm_info,
"fa mode fuse enabled, no override required, enable HWPM");
return 0;
}
/* Override enable depends on opt_hwpm_disable and global hwpm disable */
if ((opt_hwpm_disable == 0U) &&
(hwpm_global_disable == TEGRA_HWPM_GLOBAL_DISABLE_DISABLED)) {
tegra_hwpm_dbg(hwpm, hwpm_info,
"OPT_HWPM_DISABLE fuses are disabled, no override required");
return 0;
}
for (idx = 0U; idx < active_chip->get_ip_max_idx(); idx++) {
chip_ip = active_chip->chip_ips[idx];
if ((hwpm_global_disable !=
TEGRA_HWPM_GLOBAL_DISABLE_DISABLED) ||
(opt_hwpm_disable != 0U)) {
/*
* Both HWPM_GLOBAL_DISABLE and OPT_HWPM_DISABLE disable all
* Perfmons in SOC HWPM. Hence, check whether either of them is set.
*/
if ((chip_ip->dependent_fuse_mask &
TEGRA_HWPM_FUSE_HWPM_GLOBAL_DISABLE_MASK) != 0U) {
/*
* Check to prevent RTR from being overridden
*/
chip_ip->override_enable = true;
} else {
tegra_hwpm_dbg(hwpm, hwpm_info,
"IP %d not overridden", idx);
}
}
}
return 0;
}
int t264_hwpm_force_enable_ips(struct tegra_soc_hwpm *hwpm)
{
int ret = 0;
tegra_hwpm_fn(hwpm, " ");
/* Force enable MSS channel IP for AV+L/Q */
if (tegra_hwpm_is_hypervisor_mode()) {
/*
* MSS CHANNEL
* The MSS channel driver cannot implement the HWPM <-> IP interface in
* the AV+L and AV+Q configs. Since MSS channel is part of both POR and
* non-POR IPs, this force enable is not limited by minimal config or
* force enable flags.
*/
#if defined(CONFIG_T264_HWPM_IP_MSS_CHANNEL)
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc0_base_r(),
T264_HWPM_IP_MSS_CHANNEL, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_MSS_CHANNEL force enable failed");
return ret;
}
#endif
} else {
#if defined(CONFIG_T264_HWPM_ALLOW_FORCE_ENABLE)
if (tegra_hwpm_is_platform_vsp()) {
/* Static IP instances as per VSP netlist */
}
if (tegra_hwpm_is_platform_silicon()) {
/* Static IP instances corresponding to silicon */
#if defined(CONFIG_T264_HWPM_IP_OCU)
if (hwpm->ip_config[TEGRA_HWPM_IP_MCF_OCU]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ocu_base_r(),
T264_HWPM_IP_OCU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_OCU force enable failed");
return ret;
}
}
#endif
#if defined(CONFIG_T264_HWPM_IP_UCF_PSW)
if (hwpm->ip_config[TEGRA_HWPM_IP_UCF_PSW]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn0_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn1_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn2_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_psn3_psw_base_r(),
T264_HWPM_IP_UCF_PSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_PSW force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_UCF_PSW */
#if defined(CONFIG_T264_HWPM_IP_UCF_CSW)
if (hwpm->ip_config[TEGRA_HWPM_IP_UCF_CSW]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_csw0_base_r(),
T264_HWPM_IP_UCF_CSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_CSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_ucf_csw1_base_r(),
T264_HWPM_IP_UCF_CSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_CSW force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_UCF_CSW */
#if defined(CONFIG_T264_HWPM_IP_UCF_MSW)
if (hwpm->ip_config[TEGRA_HWPM_IP_UCF_MSW]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc0_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc2_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc4_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc6_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc8_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc10_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc12_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_mc14_base_r(),
T264_HWPM_IP_UCF_MSW, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_UCF_MSW force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_UCF_MSW */
#if defined(CONFIG_T264_HWPM_IP_CPU)
if (hwpm->ip_config[TEGRA_HWPM_IP_CPU]) {
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore0_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore1_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore2_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore3_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore4_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore5_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore6_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore7_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore8_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore9_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore10_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore11_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore12_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL,
addr_map_cpucore13_base_r(),
T264_HWPM_IP_CPU, true);
if (ret != 0) {
tegra_hwpm_err(hwpm,
"T264_HWPM_IP_CPU force enable failed");
return ret;
}
}
#endif /* CONFIG_T264_HWPM_IP_CPU */
}
#endif /* CONFIG_T264_HWPM_ALLOW_FORCE_ENABLE */
}
return ret;
}
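/*
 * Illustrative sketch (not part of the original file): the repeated
 * per-core CPU force-enable calls above could be table-driven. The helper
 * name is hypothetical; error handling mirrors the calls above.
 */
#if defined(CONFIG_T264_HWPM_IP_CPU)
static int example_force_enable_cpu_cores(struct tegra_soc_hwpm *hwpm)
{
    const u64 cpu_bases[] = {
        addr_map_cpucore0_base_r(), addr_map_cpucore1_base_r(),
        addr_map_cpucore2_base_r(), addr_map_cpucore3_base_r(),
        addr_map_cpucore4_base_r(), addr_map_cpucore5_base_r(),
        addr_map_cpucore6_base_r(), addr_map_cpucore7_base_r(),
        addr_map_cpucore8_base_r(), addr_map_cpucore9_base_r(),
        addr_map_cpucore10_base_r(), addr_map_cpucore11_base_r(),
        addr_map_cpucore12_base_r(), addr_map_cpucore13_base_r(),
    };
    u32 i;
    int ret;

    for (i = 0U; i < (u32)(sizeof(cpu_bases) / sizeof(cpu_bases[0])); i++) {
        ret = tegra_hwpm_set_fs_info_ip_ops(hwpm, NULL, cpu_bases[i],
                T264_HWPM_IP_CPU, true);
        if (ret != 0) {
            tegra_hwpm_err(hwpm,
                "T264_HWPM_IP_CPU force enable failed");
            return ret;
        }
    }
    return 0;
}
#endif /* CONFIG_T264_HWPM_IP_CPU */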


@@ -1,338 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_mem_mgmt.h>
#include <tegra_hwpm_timers.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/ip/rtr/t264_rtr.h>
#include <hal/t264/hw/t264_pmasys_soc_hwpm.h>
#include <hal/t264/hw/t264_pmmsys_soc_hwpm.h>
int t264_hwpm_disable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reset_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL, &pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Reset OUTBASE register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_outbase_ptr_m(),
pmasys_channel_outbase_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0, 0), reset_val);
/* Reset OUTBASEUPPER register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_outbaseupper_ptr_m(),
pmasys_channel_outbaseupper_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0, 0), reset_val);
/* Reset OUTSIZE register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_outsize_numbytes_m(),
pmasys_channel_outsize_numbytes_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0, 0), reset_val);
/* Reset MEM_BYTES_ADDR register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_mem_bytes_addr_ptr_m(),
pmasys_channel_mem_bytes_addr_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0, 0), reset_val);
/* Reset MEM_HEAD register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_mem_head_ptr_m(),
pmasys_channel_mem_head_ptr_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), reset_val);
/* Reset MEM_BYTES register */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_mem_bytes_numbytes_m(),
pmasys_channel_mem_bytes_numbytes_init_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_r(0, 0), reset_val);
/* Reset MEMBUF_STATUS */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &reset_val);
reset_val = set_field(reset_val,
pmasys_channel_control_user_membuf_clear_status_m(),
pmasys_channel_control_user_membuf_clear_status_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), reset_val);
return 0;
}
int t264_hwpm_enable_mem_mgmt(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 outbase_lo = 0U;
u32 outbase_hi = 0U;
u32 outsize = 0U;
u32 mem_bytes_addr = 0U;
u32 membuf_status = 0U;
u32 mem_head = 0U;
u32 bpc_mem_block = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
struct tegra_hwpm_mem_mgmt *mem_mgmt = hwpm->mem_mgmt;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL, &pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
outbase_lo = mem_mgmt->stream_buf_va & pmasys_channel_outbase_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbase_r(0, 0), outbase_lo);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "OUTBASE = 0x%x", outbase_lo);
outbase_hi = (mem_mgmt->stream_buf_va >> 32) &
pmasys_channel_outbaseupper_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outbaseupper_r(0, 0), outbase_hi);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "OUTBASEUPPER = 0x%x", outbase_hi);
outsize = mem_mgmt->stream_buf_size &
pmasys_channel_outsize_numbytes_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_outsize_r(0, 0), outsize);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "OUTSIZE = 0x%x", outsize);
mem_bytes_addr = mem_mgmt->mem_bytes_buf_va &
pmasys_channel_mem_bytes_addr_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bytes_addr_r(0, 0), mem_bytes_addr);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream,
"MEM_BYTES_ADDR = 0x%x", mem_bytes_addr);
/* Update MEM_HEAD to OUTBASE */
mem_head = mem_mgmt->stream_buf_va & pmasys_channel_mem_head_ptr_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), mem_head);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "MEM_HEAD = 0x%x", mem_head);
/* Reset MEMBUF_STATUS */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &membuf_status);
membuf_status = set_field(membuf_status,
pmasys_channel_control_user_membuf_clear_status_m(),
pmasys_channel_control_user_membuf_clear_status_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), membuf_status);
/* Update CBLOCK_BPC_MEM_BLOCK to OUTBASE to ensure BPC is bound */
bpc_mem_block = mem_mgmt->stream_buf_va &
pmasys_cblock_bpc_mem_block_base_m();
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_cblock_bpc_mem_block_r(0), bpc_mem_block);
tegra_hwpm_dbg(hwpm, hwpm_dbg_alloc_pma_stream, "bpc_mem_block = 0x%x",
bpc_mem_block);
/* Mark mem block valid */
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_cblock_bpc_mem_blockupper_r(0),
pmasys_cblock_bpc_mem_blockupper_valid_f(
pmasys_cblock_bpc_mem_blockupper_valid_true_v()));
return 0;
}
int t264_hwpm_invalidate_mem_config(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_cblock_bpc_mem_blockupper_r(0),
pmasys_cblock_bpc_mem_blockupper_valid_f(
pmasys_cblock_bpc_mem_blockupper_valid_false_v()));
return 0;
}
int t264_hwpm_stream_mem_bytes(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
u32 *mem_bytes_kernel_u32 =
(u32 *)(hwpm->mem_mgmt->mem_bytes_kernel);
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
*mem_bytes_kernel_u32 = TEGRA_HWPM_MEM_BYTES_INVALID;
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &reg_val);
reg_val = set_field(reg_val,
pmasys_channel_control_user_update_bytes_m(),
pmasys_channel_control_user_update_bytes_doit_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), reg_val);
return 0;
}
int t264_hwpm_disable_pma_streaming(struct tegra_soc_hwpm *hwpm)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
/* Disable PMA streaming */
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), &reg_val);
reg_val = set_field(reg_val,
pmasys_command_slice_trigger_config_user_record_stream_m(),
pmasys_command_slice_trigger_config_user_record_stream_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_command_slice_trigger_config_user_r(0), reg_val);
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), &reg_val);
reg_val = set_field(reg_val,
pmasys_channel_config_user_stream_m(),
pmasys_channel_config_user_stream_disable_f());
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_control_user_r(0, 0), reg_val);
return 0;
}
int t264_hwpm_update_mem_bytes_get_ptr(struct tegra_soc_hwpm *hwpm,
u64 mem_bump)
{
int err = 0;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
if (mem_bump > (u64)U32_MAX) {
tegra_hwpm_err(hwpm, "mem_bump is out of bounds");
return -EINVAL;
}
tegra_hwpm_writel(hwpm, pma_perfmux,
pmasys_channel_mem_bump_r(0, 0), (u32)mem_bump);
return 0;
}
int t264_hwpm_get_mem_bytes_put_ptr(struct tegra_soc_hwpm *hwpm,
u64 *mem_head_ptr)
{
int err = 0;
u32 reg_val = 0U;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_mem_head_r(0, 0), &reg_val);
*mem_head_ptr = (u64)reg_val;
return err;
}
int t264_hwpm_membuf_overflow_status(struct tegra_soc_hwpm *hwpm,
u32 *overflow_status)
{
int err = 0;
u32 reg_val, field_val;
struct hwpm_ip_aperture *pma_perfmux = NULL;
tegra_hwpm_fn(hwpm, " ");
err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, NULL,
&pma_perfmux);
hwpm_assert_print(hwpm, err == 0, return err,
"get rtr pma perfmux failed");
tegra_hwpm_readl(hwpm, pma_perfmux,
pmasys_channel_status_r(0, 0), &reg_val);
field_val = pmasys_channel_status_membuf_status_v(
reg_val);
*overflow_status = (field_val ==
pmasys_channel_status_membuf_status_overflowed_v()) ?
TEGRA_HWPM_MEMBUF_OVERFLOWED : TEGRA_HWPM_MEMBUF_NOT_OVERFLOWED;
return err;
}
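/*
 * Illustrative flow (not part of the original file): a consumer of the
 * streamed records would typically combine the HALs above roughly like
 * this. The helper name and the bytes_consumed parameter are hypothetical,
 * and the poll on the mem_bytes buffer that normally follows
 * t264_hwpm_stream_mem_bytes() is elided.
 */
static int example_drain_stream(struct tegra_soc_hwpm *hwpm,
    u64 bytes_consumed)
{
    u64 put_ptr = 0ULL;
    u32 overflow = 0U;
    int err;

    /* Ask PMA to publish the current streamed byte count. */
    err = t264_hwpm_stream_mem_bytes(hwpm);
    if (err != 0)
        return err;

    /* Read back the hardware put pointer (MEM_HEAD) for the consumer. */
    err = t264_hwpm_get_mem_bytes_put_ptr(hwpm, &put_ptr);
    if (err != 0)
        return err;

    /* Check for overflow before trusting the data. */
    err = t264_hwpm_membuf_overflow_status(hwpm, &overflow);
    if (err != 0)
        return err;
    if (overflow == TEGRA_HWPM_MEMBUF_OVERFLOWED)
        return -EIO;

    /* Report how many bytes have been drained (MEM_BUMP). */
    return t264_hwpm_update_mem_bytes_get_ptr(hwpm, bytes_consumed);
}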


@@ -1,110 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_PERFMON_DEVICE_INDEX_H
#define T264_HWPM_PERFMON_DEVICE_INDEX_H
enum t264_hwpm_perfmon_device_index {
T264_SYSTEM_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_HWPM_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE0_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE1_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE2_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE3_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE4_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE5_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE6_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE7_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE8_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE9_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE10_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE11_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE12_PERFMON_DEVICE_NODE_INDEX,
T264_CPU_CORE13_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW2_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW4_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW5_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW6_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW7_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW8_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW9_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW10_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW11_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW12_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW13_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW14_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSW15_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTA3_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTB3_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTC3_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD0_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD1_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD2_PERFMON_DEVICE_NODE_INDEX,
T264_MSS_CHANNEL_PARTD3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_CSW0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_CSW1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW0_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW1_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW2_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_PSW3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU3_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_MSS_HUB2_PERFMON_DEVICE_NODE_INDEX,
T264_UCF_TCU2_PERFMON_DEVICE_NODE_INDEX,
T264_VI0_PERFMON_DEVICE_NODE_INDEX,
T264_VI1_PERFMON_DEVICE_NODE_INDEX,
T264_ISP0_PERFMON_DEVICE_NODE_INDEX,
T264_ISP1_PERFMON_DEVICE_NODE_INDEX,
T264_VICA0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAC0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAV0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAV1_PERFMON_DEVICE_NODE_INDEX,
T264_VISION_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_VISION_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
T264_PVAP0_PERFMON_DEVICE_NODE_INDEX,
T264_PVAP1_PERFMON_DEVICE_NODE_INDEX,
T264_DISP_USB_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_DISP_USB_TCU0_PERFMON_DEVICE_NODE_INDEX,
T264_OCU0_PERFMON_DEVICE_NODE_INDEX,
T264_UPHY0_MSS_HUB0_PERFMON_DEVICE_NODE_INDEX,
T264_UPHY0_MSS_HUB1_PERFMON_DEVICE_NODE_INDEX,
T264_PMA_DEVICE_NODE_INDEX,
T264_RTR_DEVICE_NODE_INDEX
};
#endif


@@ -1,241 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include "t264_regops_allowlist.h"
struct allowlist t264_perfmon_alist[67] = {
{0x00000000, true},
{0x00000004, true},
{0x00000008, true},
{0x0000000c, true},
{0x00000010, true},
{0x00000014, true},
{0x00000020, true},
{0x00000024, true},
{0x00000028, true},
{0x0000002c, true},
{0x00000030, true},
{0x00000034, true},
{0x00000040, true},
{0x00000044, true},
{0x00000048, true},
{0x0000004c, true},
{0x00000050, true},
{0x00000054, true},
{0x00000058, true},
{0x0000005c, true},
{0x00000060, true},
{0x00000064, true},
{0x00000068, true},
{0x0000006c, true},
{0x00000070, true},
{0x00000074, true},
{0x00000078, true},
{0x0000007c, true},
{0x00000080, true},
{0x00000084, true},
{0x00000088, true},
{0x0000008c, true},
{0x00000090, true},
{0x00000098, true},
{0x0000009c, true},
{0x000000a0, true},
{0x000000a4, true},
{0x000000a8, true},
{0x000000ac, true},
{0x000000b0, true},
{0x000000b4, true},
{0x000000b8, true},
{0x000000bc, true},
{0x000000c0, true},
{0x000000c4, true},
{0x000000c8, true},
{0x000000cc, true},
{0x000000d0, true},
{0x000000d4, true},
{0x000000d8, true},
{0x000000dc, true},
{0x000000e0, true},
{0x000000e4, true},
{0x000000e8, true},
{0x000000ec, true},
{0x000000f8, true},
{0x000000fc, true},
{0x00000100, true},
{0x00000108, true},
{0x00000110, true},
{0x00000114, true},
{0x00000118, true},
{0x0000011c, true},
{0x00000120, true},
{0x00000124, true},
{0x00000128, true},
{0x00000130, true},
};
struct allowlist t264_pma_res_cmd_slice_rtr_alist[41] = {
{0x00000858, false},
{0x00000a00, false},
{0x00000a10, false},
{0x00000a14, false},
{0x00000a20, false},
{0x00000a24, false},
{0x00000a28, false},
{0x00000a2c, false},
{0x00000a30, false},
{0x00000a34, false},
{0x00000a38, false},
{0x00000a3c, false},
{0x00001104, false},
{0x00001110, false},
{0x00001114, false},
{0x0000111c, false},
{0x00001120, false},
{0x00001124, false},
{0x00001128, false},
{0x0000112c, false},
{0x00001130, false},
{0x00001134, false},
{0x00001138, false},
{0x0000113c, false},
{0x00001140, false},
{0x00001144, false},
{0x00001148, false},
{0x0000114c, false},
{0x00001150, false},
{0x00001154, false},
{0x00001158, false},
{0x0000115c, false},
{0x00001160, false},
{0x00001164, false},
{0x00001168, false},
{0x0000116c, false},
{0x00001170, false},
{0x00001174, false},
{0x00001178, false},
{0x0000117c, false},
{0x00000818, false},
};
struct allowlist t264_pma_res_pma_alist[1] = {
{0x00000858, true},
};
struct allowlist t264_rtr_alist[2] = {
{0x00000080, false},
{0x000000a4, false},
};
struct allowlist t264_vic_alist[8] = {
{0x00001088, true},
{0x000010a8, true},
{0x0000cb94, true},
{0x0000cb80, true},
{0x0000cb84, true},
{0x0000cb88, true},
{0x0000cb8c, true},
{0x0000cb90, true},
};
struct allowlist t264_pva_pm_alist[10] = {
{0x0000800c, true},
{0x00008010, true},
{0x00008014, true},
{0x00008018, true},
{0x0000801c, true},
{0x00008020, true},
{0x00008024, true},
{0x00008028, true},
{0x0000802c, true},
{0x00008030, true},
};
struct allowlist t264_mss_channel_alist[2] = {
{0x00008914, true},
{0x00008918, true},
};
struct allowlist t264_mss_hub_alist[3] = {
{0x00006f3c, true},
{0x00006f34, true},
{0x00006f38, true},
};
struct allowlist t264_ocu_alist[1] = {
{0x00000058, true},
};
struct allowlist t264_smmu_alist[1] = {
{0x00005000, true},
};
struct allowlist t264_ucf_msw_cbridge_alist[1] = {
{0x0000891c, true},
};
struct allowlist t264_ucf_msn_msw0_alist[2] = {
{0x00000000, true},
{0x00000008, true},
};
struct allowlist t264_ucf_msn_msw1_alist[2] = {
{0x00000010, true},
{0x00000018, true},
};
struct allowlist t264_ucf_msw_slc_alist[1] = {
{0x00000000, true},
};
struct allowlist t264_ucf_psn_psw_alist[2] = {
{0x00000000, true},
{0x00000008, true},
};
struct allowlist t264_ucf_csw_alist[2] = {
{0x00000000, true},
{0x00000008, true},
};
struct allowlist t264_cpucore_alist[4] = {
{0x00000000, true},
{0x00000008, true},
{0x00000010, true},
{0x00000018, true},
};
struct allowlist t264_vi_alist[5] = {
{0x00030008, true},
{0x0003000c, true},
{0x00030010, true},
{0x00030014, true},
{0x00030018, true},
};
struct allowlist t264_isp_alist[5] = {
{0x00030008, true},
{0x0003000c, true},
{0x00030010, true},
{0x00030014, true},
{0x00030018, true},
};


@@ -1,49 +0,0 @@
/* SPDX-License-Identifier: MIT */
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef T264_HWPM_REGOPS_ALLOWLIST_H
#define T264_HWPM_REGOPS_ALLOWLIST_H
#include <tegra_hwpm.h>
extern struct allowlist t264_perfmon_alist[67];
extern struct allowlist t264_pma_res_cmd_slice_rtr_alist[41];
extern struct allowlist t264_pma_res_pma_alist[1];
extern struct allowlist t264_rtr_alist[2];
extern struct allowlist t264_vic_alist[8];
extern struct allowlist t264_pva_pm_alist[10];
extern struct allowlist t264_mss_channel_alist[2];
extern struct allowlist t264_mss_hub_alist[3];
extern struct allowlist t264_ocu_alist[1];
extern struct allowlist t264_smmu_alist[1];
extern struct allowlist t264_ucf_msw_cbridge_alist[1];
extern struct allowlist t264_ucf_msn_msw0_alist[2];
extern struct allowlist t264_ucf_msn_msw1_alist[2];
extern struct allowlist t264_ucf_msw_slc_alist[1];
extern struct allowlist t264_ucf_psn_psw_alist[2];
extern struct allowlist t264_ucf_csw_alist[2];
extern struct allowlist t264_cpucore_alist[4];
extern struct allowlist t264_vi_alist[5];
extern struct allowlist t264_isp_alist[5];
#endif /* T264_HWPM_REGOPS_ALLOWLIST_H */


@@ -1,214 +0,0 @@
// SPDX-License-Identifier: MIT
/*
* SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#include <tegra_hwpm_static_analysis.h>
#include <tegra_hwpm_timers.h>
#include <tegra_hwpm_log.h>
#include <tegra_hwpm_io.h>
#include <tegra_hwpm.h>
#include <hal/t264/t264_internal.h>
#include <hal/t264/hw/t264_pmasys_soc_hwpm.h>
#include <hal/t264/hw/t264_pmmsys_soc_hwpm.h>
#define TEGRA_HWPM_CBLOCK_CHANNEL_TO_CMD_SLICE(cblock, channel) \
	(((cblock) * pmmsys_num_channels_per_cblock_v()) + (channel))

#define TEGRA_HWPM_MAX_SUPPORTED_DGS 256U
#define TEGRA_HWPM_NUM_DG_STATUS_PER_REG \
	(TEGRA_HWPM_MAX_SUPPORTED_DGS / \
	pmmsys_router_user_dgmap_status_secure__size_1_v())

int t264_hwpm_perfmon_enable(struct tegra_soc_hwpm *hwpm,
	struct hwpm_ip_aperture *perfmon)
{
	u32 reg_val;
	u32 cblock = 0U;
	u32 channel = 0U;
	u32 dg_idx = 0U;
	u32 config_dgmap = 0U;
	u32 dgmap_status_reg_idx = 0U, dgmap_status_reg_dgidx = 0U;
	u32 retries = 10U;
	u32 sleep_msecs = 10U;
	int err = 0;
	struct hwpm_ip_aperture *rtr_perfmux = NULL;

	tegra_hwpm_fn(hwpm, " ");

	err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
		NULL);
	hwpm_assert_print(hwpm, err == 0, return err,
		"get rtr pma perfmux failed");

	/* Enable */
	tegra_hwpm_dbg(hwpm, hwpm_dbg_bind,
		"Enabling PERFMON(0x%llx - 0x%llx)",
		(unsigned long long)perfmon->start_abs_pa,
		(unsigned long long)perfmon->end_abs_pa);

	/*
	 * HWPM readl function expects register address relative to
	 * perfmon group base address.
	 * Hence use enginestatus offset + perfmon base_pa as the register
	 */
	tegra_hwpm_readl(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_enginestatus_o(),
			perfmon->base_pa), &reg_val);
	reg_val = set_field(reg_val, pmmsys_enginestatus_enable_m(),
		pmmsys_enginestatus_enable_out_f());
	tegra_hwpm_writel(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_enginestatus_o(),
			perfmon->base_pa), reg_val);

	/*
	 * HWPM readl function expects register address relative to
	 * perfmon group base address.
	 * Hence use secure_config offset + perfmon base_pa as the register
	 * The register also contains dg_idx programmed by HW that will be used
	 * to poll dg mapping in router.
	 */
	tegra_hwpm_readl(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
			perfmon->base_pa), &config_dgmap);
	dg_idx = pmmsys_secure_config_dg_idx_v(config_dgmap);

	/* Configure DG map for this perfmon */
	config_dgmap = set_field(config_dgmap,
		pmmsys_secure_config_cmd_slice_id_m() |
		pmmsys_secure_config_channel_id_m() |
		pmmsys_secure_config_cblock_id_m() |
		pmmsys_secure_config_mapped_m() |
		pmmsys_secure_config_use_prog_dg_idx_m() |
		pmmsys_secure_config_command_pkt_decoder_m(),
		pmmsys_secure_config_cmd_slice_id_f(
			TEGRA_HWPM_CBLOCK_CHANNEL_TO_CMD_SLICE(
				cblock, channel)) |
		pmmsys_secure_config_channel_id_f(channel) |
		pmmsys_secure_config_cblock_id_f(cblock) |
		pmmsys_secure_config_mapped_true_f() |
		pmmsys_secure_config_use_prog_dg_idx_false_f() |
		pmmsys_secure_config_command_pkt_decoder_enable_f());
	tegra_hwpm_writel(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
			perfmon->base_pa), config_dgmap);

	/* Make sure that the DG map status is propagated to the router */
	dgmap_status_reg_idx = dg_idx / TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
	dgmap_status_reg_dgidx = dg_idx % TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
	tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
		pmmsys_router_user_dgmap_status_secure_r(dgmap_status_reg_idx),
		&reg_val,
		(((reg_val >> dgmap_status_reg_dgidx) &
			pmmsys_router_user_dgmap_status_secure_dg_s()) !=
			pmmsys_router_user_dgmap_status_secure_dg_mapped_v()),
		"Perfmon(0x%llx - 0x%llx) dgmap %d status update timed out",
		(unsigned long long)perfmon->start_abs_pa,
		(unsigned long long)perfmon->end_abs_pa, dg_idx);

	return 0;
}

int t264_hwpm_perfmon_disable(struct tegra_soc_hwpm *hwpm,
	struct hwpm_ip_aperture *perfmon)
{
	u32 reg_val;
	u32 dg_idx = 0U;
	u32 config_dgmap = 0U;
	u32 dgmap_status_reg_idx = 0U, dgmap_status_reg_dgidx = 0U;
	u32 retries = 10U;
	u32 sleep_msecs = 10U;
	int err = 0;
	struct hwpm_ip_aperture *rtr_perfmux = NULL;

	tegra_hwpm_fn(hwpm, " ");

	if (perfmon->element_type == HWPM_ELEMENT_PERFMUX) {
		/*
		 * Since HWPM elements use perfmon functions,
		 * skip disabling HWPM PERFMUX elements
		 */
		return 0;
	}

	err = hwpm->active_chip->get_rtr_pma_perfmux_ptr(hwpm, &rtr_perfmux,
		NULL);
	hwpm_assert_print(hwpm, err == 0, return err,
		"get rtr pma perfmux failed");

	/* Disable */
	tegra_hwpm_dbg(hwpm, hwpm_dbg_release_resource,
		"Disabling PERFMON(0x%llx - 0x%llx)",
		(unsigned long long)perfmon->start_abs_pa,
		(unsigned long long)perfmon->end_abs_pa);

	/*
	 * HWPM readl function expects register address relative to
	 * perfmon group base address.
	 * Hence use sys0_control offset + perfmon base_pa as the register
	 */
	tegra_hwpm_readl(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_control_o(),
			perfmon->base_pa), &reg_val);
	reg_val = set_field(reg_val, pmmsys_control_mode_m(),
		pmmsys_control_mode_disable_f());
	tegra_hwpm_writel(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_control_o(),
			perfmon->base_pa), reg_val);

	/*
	 * HWPM readl function expects register address relative to
	 * perfmon group base address.
	 * Hence use secure_config offset + perfmon base_pa as the register
	 * The register also contains dg_idx programmed by HW that will be used
	 * to poll dg mapping in router.
	 */
	tegra_hwpm_readl(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
			perfmon->base_pa), &config_dgmap);
	dg_idx = pmmsys_secure_config_dg_idx_v(config_dgmap);

	/* Reset DG map for this perfmon */
	config_dgmap = set_field(config_dgmap,
		pmmsys_secure_config_mapped_m(),
		pmmsys_secure_config_mapped_false_f());
	tegra_hwpm_writel(hwpm, perfmon,
		tegra_hwpm_safe_add_u64(pmmsys_secure_config_o(),
			perfmon->base_pa), config_dgmap);

	/* Make sure that the DG map status is propagated to the router */
	dgmap_status_reg_idx = dg_idx / TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
	dgmap_status_reg_dgidx = dg_idx % TEGRA_HWPM_NUM_DG_STATUS_PER_REG;
	tegra_hwpm_timeout_print(hwpm, retries, sleep_msecs, rtr_perfmux,
		pmmsys_router_user_dgmap_status_secure_r(dgmap_status_reg_idx),
		&reg_val,
		(((reg_val >> dgmap_status_reg_dgidx) &
			pmmsys_router_user_dgmap_status_secure_dg_s()) !=
			pmmsys_router_user_dgmap_status_secure_dg_not_mapped_v()),
		"Perfmon(0x%llx - 0x%llx) dgmap %d status update timed out",
		(unsigned long long)perfmon->start_abs_pa,
		(unsigned long long)perfmon->end_abs_pa, dg_idx);

	return 0;
}
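
The divide/modulo split at the end of both functions maps a datagroup index onto a router status register and a bit position within it. Below is a standalone sketch of just that arithmetic; the register count of 8 (giving 256 / 8 = 32 datagroups per register) is an assumption for illustration, since the real divisor comes from pmmsys_router_user_dgmap_status_secure__size_1_v() and may differ.

#include <stdio.h>

#define TEGRA_HWPM_MAX_SUPPORTED_DGS	256U
/* Assumption for illustration: 8 status registers in the router array. */
#define DGMAP_STATUS_REG_COUNT		8U
#define NUM_DG_STATUS_PER_REG \
	(TEGRA_HWPM_MAX_SUPPORTED_DGS / DGMAP_STATUS_REG_COUNT)

int main(void)
{
	unsigned int dg_idx = 45U;
	/* Same split as in t264_hwpm_perfmon_enable()/disable(). */
	unsigned int reg_idx = dg_idx / NUM_DG_STATUS_PER_REG;	/* 45 / 32 = 1 */
	unsigned int bit_idx = dg_idx % NUM_DG_STATUS_PER_REG;	/* 45 % 32 = 13 */

	printf("dg %u -> status reg %u, bit %u\n", dg_idx, reg_idx, bit_idx);
	return 0;
}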

@@ -1,578 +0,0 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef TH500_ADDR_MAP_SOC_HWPM_H
#define TH500_ADDR_MAP_SOC_HWPM_H
#define addr_map_rpg_pm_base_r() (0x13e00000U)
#define addr_map_rpg_pm_limit_r() (0x13eeffffU)
#define addr_map_rpg_pm_sys0_base_r() (0x13e1e000U)
#define addr_map_rpg_pm_sys0_limit_r() (0x13e1efffU)
#define addr_map_pma_base_r() (0x13ef0000U)
#define addr_map_pma_limit_r() (0x13ef1fffU)
#define addr_map_rtr_base_r() (0x13ef2000U)
#define addr_map_rtr_limit_r() (0x13ef2fffU)
#define addr_map_rpg_pm_msschannel0_base_r() (0x13e1f000U)
#define addr_map_rpg_pm_msschannel0_limit_r() (0x13e1ffffU)
#define addr_map_rpg_pm_msschannel1_base_r() (0x13e20000U)
#define addr_map_rpg_pm_msschannel1_limit_r() (0x13e20fffU)
#define addr_map_rpg_pm_msschannel2_base_r() (0x13e21000U)
#define addr_map_rpg_pm_msschannel2_limit_r() (0x13e21fffU)
#define addr_map_rpg_pm_msschannel3_base_r() (0x13e22000U)
#define addr_map_rpg_pm_msschannel3_limit_r() (0x13e22fffU)
#define addr_map_rpg_pm_msschannel4_base_r() (0x13e23000U)
#define addr_map_rpg_pm_msschannel4_limit_r() (0x13e23fffU)
#define addr_map_rpg_pm_msschannel5_base_r() (0x13e24000U)
#define addr_map_rpg_pm_msschannel5_limit_r() (0x13e24fffU)
#define addr_map_rpg_pm_msschannel6_base_r() (0x13e25000U)
#define addr_map_rpg_pm_msschannel6_limit_r() (0x13e25fffU)
#define addr_map_rpg_pm_msschannel7_base_r() (0x13e26000U)
#define addr_map_rpg_pm_msschannel7_limit_r() (0x13e26fffU)
#define addr_map_rpg_pm_msschannel8_base_r() (0x13e27000U)
#define addr_map_rpg_pm_msschannel8_limit_r() (0x13e27fffU)
#define addr_map_rpg_pm_msschannel9_base_r() (0x13e28000U)
#define addr_map_rpg_pm_msschannel9_limit_r() (0x13e28fffU)
#define addr_map_rpg_pm_msschannel10_base_r() (0x13e29000U)
#define addr_map_rpg_pm_msschannel10_limit_r() (0x13e29fffU)
#define addr_map_rpg_pm_msschannel11_base_r() (0x13e2a000U)
#define addr_map_rpg_pm_msschannel11_limit_r() (0x13e2afffU)
#define addr_map_rpg_pm_msschannel12_base_r() (0x13e2b000U)
#define addr_map_rpg_pm_msschannel12_limit_r() (0x13e2bfffU)
#define addr_map_rpg_pm_msschannel13_base_r() (0x13e2c000U)
#define addr_map_rpg_pm_msschannel13_limit_r() (0x13e2cfffU)
#define addr_map_rpg_pm_msschannel14_base_r() (0x13e2d000U)
#define addr_map_rpg_pm_msschannel14_limit_r() (0x13e2dfffU)
#define addr_map_rpg_pm_msschannel15_base_r() (0x13e2e000U)
#define addr_map_rpg_pm_msschannel15_limit_r() (0x13e2efffU)
#define addr_map_rpg_pm_msschannel16_base_r() (0x13e2f000U)
#define addr_map_rpg_pm_msschannel16_limit_r() (0x13e2ffffU)
#define addr_map_rpg_pm_msschannel17_base_r() (0x13e30000U)
#define addr_map_rpg_pm_msschannel17_limit_r() (0x13e30fffU)
#define addr_map_rpg_pm_msschannel18_base_r() (0x13e31000U)
#define addr_map_rpg_pm_msschannel18_limit_r() (0x13e31fffU)
#define addr_map_rpg_pm_msschannel19_base_r() (0x13e32000U)
#define addr_map_rpg_pm_msschannel19_limit_r() (0x13e32fffU)
#define addr_map_rpg_pm_msschannel20_base_r() (0x13e33000U)
#define addr_map_rpg_pm_msschannel20_limit_r() (0x13e33fffU)
#define addr_map_rpg_pm_msschannel21_base_r() (0x13e34000U)
#define addr_map_rpg_pm_msschannel21_limit_r() (0x13e34fffU)
#define addr_map_rpg_pm_msschannel22_base_r() (0x13e35000U)
#define addr_map_rpg_pm_msschannel22_limit_r() (0x13e35fffU)
#define addr_map_rpg_pm_msschannel23_base_r() (0x13e36000U)
#define addr_map_rpg_pm_msschannel23_limit_r() (0x13e36fffU)
#define addr_map_rpg_pm_msschannel24_base_r() (0x13e37000U)
#define addr_map_rpg_pm_msschannel24_limit_r() (0x13e37fffU)
#define addr_map_rpg_pm_msschannel25_base_r() (0x13e38000U)
#define addr_map_rpg_pm_msschannel25_limit_r() (0x13e38fffU)
#define addr_map_rpg_pm_msschannel26_base_r() (0x13e39000U)
#define addr_map_rpg_pm_msschannel26_limit_r() (0x13e39fffU)
#define addr_map_rpg_pm_msschannel27_base_r() (0x13e3a000U)
#define addr_map_rpg_pm_msschannel27_limit_r() (0x13e3afffU)
#define addr_map_rpg_pm_msschannel28_base_r() (0x13e3b000U)
#define addr_map_rpg_pm_msschannel28_limit_r() (0x13e3bfffU)
#define addr_map_rpg_pm_msschannel29_base_r() (0x13e3c000U)
#define addr_map_rpg_pm_msschannel29_limit_r() (0x13e3cfffU)
#define addr_map_rpg_pm_msschannel30_base_r() (0x13e3d000U)
#define addr_map_rpg_pm_msschannel30_limit_r() (0x13e3dfffU)
#define addr_map_rpg_pm_msschannel31_base_r() (0x13e3e000U)
#define addr_map_rpg_pm_msschannel31_limit_r() (0x13e3efffU)
#define addr_map_mcb_base_r() (0x04020000U)
#define addr_map_mcb_limit_r() (0x0403ffffU)
#define addr_map_mc0_base_r() (0x04040000U)
#define addr_map_mc0_limit_r() (0x0405ffffU)
#define addr_map_mc1_base_r() (0x04060000U)
#define addr_map_mc1_limit_r() (0x0407ffffU)
#define addr_map_mc2_base_r() (0x04080000U)
#define addr_map_mc2_limit_r() (0x0409ffffU)
#define addr_map_mc3_base_r() (0x040a0000U)
#define addr_map_mc3_limit_r() (0x040bffffU)
#define addr_map_mc4_base_r() (0x040c0000U)
#define addr_map_mc4_limit_r() (0x040dffffU)
#define addr_map_mc5_base_r() (0x040e0000U)
#define addr_map_mc5_limit_r() (0x040fffffU)
#define addr_map_mc6_base_r() (0x04100000U)
#define addr_map_mc6_limit_r() (0x0411ffffU)
#define addr_map_mc7_base_r() (0x04120000U)
#define addr_map_mc7_limit_r() (0x0413ffffU)
#define addr_map_mc8_base_r() (0x04140000U)
#define addr_map_mc8_limit_r() (0x0415ffffU)
#define addr_map_mc9_base_r() (0x04160000U)
#define addr_map_mc9_limit_r() (0x0417ffffU)
#define addr_map_mc10_base_r() (0x04180000U)
#define addr_map_mc10_limit_r() (0x0419ffffU)
#define addr_map_mc11_base_r() (0x041a0000U)
#define addr_map_mc11_limit_r() (0x041bffffU)
#define addr_map_mc12_base_r() (0x041c0000U)
#define addr_map_mc12_limit_r() (0x041dffffU)
#define addr_map_mc13_base_r() (0x041e0000U)
#define addr_map_mc13_limit_r() (0x041fffffU)
#define addr_map_mc14_base_r() (0x04200000U)
#define addr_map_mc14_limit_r() (0x0421ffffU)
#define addr_map_mc15_base_r() (0x04220000U)
#define addr_map_mc15_limit_r() (0x0423ffffU)
#define addr_map_mc16_base_r() (0x04240000U)
#define addr_map_mc16_limit_r() (0x0425ffffU)
#define addr_map_mc17_base_r() (0x04260000U)
#define addr_map_mc17_limit_r() (0x0427ffffU)
#define addr_map_mc18_base_r() (0x04280000U)
#define addr_map_mc18_limit_r() (0x0429ffffU)
#define addr_map_mc19_base_r() (0x042a0000U)
#define addr_map_mc19_limit_r() (0x042bffffU)
#define addr_map_mc20_base_r() (0x042c0000U)
#define addr_map_mc20_limit_r() (0x042dffffU)
#define addr_map_mc21_base_r() (0x042e0000U)
#define addr_map_mc21_limit_r() (0x042fffffU)
#define addr_map_mc22_base_r() (0x04300000U)
#define addr_map_mc22_limit_r() (0x0431ffffU)
#define addr_map_mc23_base_r() (0x04320000U)
#define addr_map_mc23_limit_r() (0x0433ffffU)
#define addr_map_mc24_base_r() (0x04340000U)
#define addr_map_mc24_limit_r() (0x0435ffffU)
#define addr_map_mc25_base_r() (0x04360000U)
#define addr_map_mc25_limit_r() (0x0437ffffU)
#define addr_map_mc26_base_r() (0x04380000U)
#define addr_map_mc26_limit_r() (0x0439ffffU)
#define addr_map_mc27_base_r() (0x043a0000U)
#define addr_map_mc27_limit_r() (0x043bffffU)
#define addr_map_mc28_base_r() (0x043c0000U)
#define addr_map_mc28_limit_r() (0x043dffffU)
#define addr_map_mc29_base_r() (0x043e0000U)
#define addr_map_mc29_limit_r() (0x043fffffU)
#define addr_map_mc30_base_r() (0x04400000U)
#define addr_map_mc30_limit_r() (0x0441ffffU)
#define addr_map_mc31_base_r() (0x04420000U)
#define addr_map_mc31_limit_r() (0x0443ffffU)
#define addr_map_rpg_pm_ltc0s0_base_r() (0x13e3f000U)
#define addr_map_rpg_pm_ltc0s0_limit_r() (0x13e3ffffU)
#define addr_map_rpg_pm_ltc0s1_base_r() (0x13e40000U)
#define addr_map_rpg_pm_ltc0s1_limit_r() (0x13e40fffU)
#define addr_map_rpg_pm_ltc1s0_base_r() (0x13e41000U)
#define addr_map_rpg_pm_ltc1s0_limit_r() (0x13e41fffU)
#define addr_map_rpg_pm_ltc1s1_base_r() (0x13e42000U)
#define addr_map_rpg_pm_ltc1s1_limit_r() (0x13e42fffU)
#define addr_map_rpg_pm_ltc2s0_base_r() (0x13e43000U)
#define addr_map_rpg_pm_ltc2s0_limit_r() (0x13e43fffU)
#define addr_map_rpg_pm_ltc2s1_base_r() (0x13e44000U)
#define addr_map_rpg_pm_ltc2s1_limit_r() (0x13e44fffU)
#define addr_map_rpg_pm_ltc3s0_base_r() (0x13e45000U)
#define addr_map_rpg_pm_ltc3s0_limit_r() (0x13e45fffU)
#define addr_map_rpg_pm_ltc3s1_base_r() (0x13e46000U)
#define addr_map_rpg_pm_ltc3s1_limit_r() (0x13e46fffU)
#define addr_map_rpg_pm_ltc4s0_base_r() (0x13e47000U)
#define addr_map_rpg_pm_ltc4s0_limit_r() (0x13e47fffU)
#define addr_map_rpg_pm_ltc4s1_base_r() (0x13e48000U)
#define addr_map_rpg_pm_ltc4s1_limit_r() (0x13e48fffU)
#define addr_map_rpg_pm_ltc5s0_base_r() (0x13e49000U)
#define addr_map_rpg_pm_ltc5s0_limit_r() (0x13e49fffU)
#define addr_map_rpg_pm_ltc5s1_base_r() (0x13e4a000U)
#define addr_map_rpg_pm_ltc5s1_limit_r() (0x13e4afffU)
#define addr_map_rpg_pm_ltc6s0_base_r() (0x13e4b000U)
#define addr_map_rpg_pm_ltc6s0_limit_r() (0x13e4bfffU)
#define addr_map_rpg_pm_ltc6s1_base_r() (0x13e4c000U)
#define addr_map_rpg_pm_ltc6s1_limit_r() (0x13e4cfffU)
#define addr_map_rpg_pm_ltc7s0_base_r() (0x13e4d000U)
#define addr_map_rpg_pm_ltc7s0_limit_r() (0x13e4dfffU)
#define addr_map_rpg_pm_ltc7s1_base_r() (0x13e4e000U)
#define addr_map_rpg_pm_ltc7s1_limit_r() (0x13e4efffU)
#define addr_map_ltc0_base_r() (0x04e10000U)
#define addr_map_ltc0_limit_r() (0x04e1ffffU)
#define addr_map_ltc1_base_r() (0x04e20000U)
#define addr_map_ltc1_limit_r() (0x04e2ffffU)
#define addr_map_ltc2_base_r() (0x04e30000U)
#define addr_map_ltc2_limit_r() (0x04e3ffffU)
#define addr_map_ltc3_base_r() (0x04e40000U)
#define addr_map_ltc3_limit_r() (0x04e4ffffU)
#define addr_map_ltc4_base_r() (0x04e50000U)
#define addr_map_ltc4_limit_r() (0x04e5ffffU)
#define addr_map_ltc5_base_r() (0x04e60000U)
#define addr_map_ltc5_limit_r() (0x04e6ffffU)
#define addr_map_ltc6_base_r() (0x04e70000U)
#define addr_map_ltc6_limit_r() (0x04e7ffffU)
#define addr_map_ltc7_base_r() (0x04e80000U)
#define addr_map_ltc7_limit_r() (0x04e8ffffU)
#define addr_map_rpg_pm_mcfcore0_base_r() (0x13e4f000U)
#define addr_map_rpg_pm_mcfcore0_limit_r() (0x13e4ffffU)
#define addr_map_rpg_pm_mcfcore1_base_r() (0x13e50000U)
#define addr_map_rpg_pm_mcfcore1_limit_r() (0x13e50fffU)
#define addr_map_rpg_pm_mcfcore2_base_r() (0x13e51000U)
#define addr_map_rpg_pm_mcfcore2_limit_r() (0x13e51fffU)
#define addr_map_rpg_pm_mcfcore3_base_r() (0x13e52000U)
#define addr_map_rpg_pm_mcfcore3_limit_r() (0x13e52fffU)
#define addr_map_rpg_pm_mcfcore4_base_r() (0x13e53000U)
#define addr_map_rpg_pm_mcfcore4_limit_r() (0x13e53fffU)
#define addr_map_rpg_pm_mcfcore5_base_r() (0x13e54000U)
#define addr_map_rpg_pm_mcfcore5_limit_r() (0x13e54fffU)
#define addr_map_rpg_pm_mcfcore6_base_r() (0x13e55000U)
#define addr_map_rpg_pm_mcfcore6_limit_r() (0x13e55fffU)
#define addr_map_rpg_pm_mcfcore7_base_r() (0x13e56000U)
#define addr_map_rpg_pm_mcfcore7_limit_r() (0x13e56fffU)
#define addr_map_rpg_pm_mcfcore8_base_r() (0x13e57000U)
#define addr_map_rpg_pm_mcfcore8_limit_r() (0x13e57fffU)
#define addr_map_rpg_pm_mcfcore9_base_r() (0x13e58000U)
#define addr_map_rpg_pm_mcfcore9_limit_r() (0x13e58fffU)
#define addr_map_rpg_pm_mcfcore10_base_r() (0x13e59000U)
#define addr_map_rpg_pm_mcfcore10_limit_r() (0x13e59fffU)
#define addr_map_rpg_pm_mcfcore11_base_r() (0x13e5a000U)
#define addr_map_rpg_pm_mcfcore11_limit_r() (0x13e5afffU)
#define addr_map_rpg_pm_mcfcore12_base_r() (0x13e5b000U)
#define addr_map_rpg_pm_mcfcore12_limit_r() (0x13e5bfffU)
#define addr_map_rpg_pm_mcfcore13_base_r() (0x13e5c000U)
#define addr_map_rpg_pm_mcfcore13_limit_r() (0x13e5cfffU)
#define addr_map_rpg_pm_mcfcore14_base_r() (0x13e5d000U)
#define addr_map_rpg_pm_mcfcore14_limit_r() (0x13e5dfffU)
#define addr_map_rpg_pm_mcfcore15_base_r() (0x13e5e000U)
#define addr_map_rpg_pm_mcfcore15_limit_r() (0x13e5efffU)
#define addr_map_rpg_pm_mcfsys0_base_r() (0x13e5f000U)
#define addr_map_rpg_pm_mcfsys0_limit_r() (0x13e5ffffU)
#define addr_map_rpg_pm_mcfsys1_base_r() (0x13e60000U)
#define addr_map_rpg_pm_mcfsys1_limit_r() (0x13e60fffU)
#define addr_map_rpg_pm_mcfc2c0_base_r() (0x13e61000U)
#define addr_map_rpg_pm_mcfc2c0_limit_r() (0x13e61fffU)
#define addr_map_rpg_pm_mcfc2c1_base_r() (0x13e62000U)
#define addr_map_rpg_pm_mcfc2c1_limit_r() (0x13e62fffU)
#define addr_map_rpg_pm_mcfsoc0_base_r() (0x13e63000U)
#define addr_map_rpg_pm_mcfsoc0_limit_r() (0x13e63fffU)
#define addr_map_rpg_pm_smmu0_base_r() (0x13e64000U)
#define addr_map_rpg_pm_smmu0_limit_r() (0x13e64fffU)
#define addr_map_rpg_pm_smmu1_base_r() (0x13e65000U)
#define addr_map_rpg_pm_smmu1_limit_r() (0x13e65fffU)
#define addr_map_rpg_pm_smmu2_base_r() (0x13e66000U)
#define addr_map_rpg_pm_smmu2_limit_r() (0x13e66fffU)
#define addr_map_rpg_pm_smmu3_base_r() (0x13e67000U)
#define addr_map_rpg_pm_smmu3_limit_r() (0x13e67fffU)
#define addr_map_rpg_pm_smmu4_base_r() (0x13e68000U)
#define addr_map_rpg_pm_smmu4_limit_r() (0x13e68fffU)
#define addr_map_smmu0_base_r() (0x11a30000U)
#define addr_map_smmu0_limit_r() (0x11a3ffffU)
#define addr_map_smmu1_base_r() (0x12a30000U)
#define addr_map_smmu1_limit_r() (0x12a3ffffU)
#define addr_map_smmu2_base_r() (0x15a30000U)
#define addr_map_smmu2_limit_r() (0x15a3ffffU)
#define addr_map_smmu3_base_r() (0x16a30000U)
#define addr_map_smmu3_limit_r() (0x16a3ffffU)
#define addr_map_smmu4_base_r() (0x05a30000U)
#define addr_map_smmu4_limit_r() (0x05a3ffffU)
#define addr_map_rpg_pm_msshub0_base_r() (0x13e69000U)
#define addr_map_rpg_pm_msshub0_limit_r() (0x13e69fffU)
#define addr_map_rpg_pm_msshub1_base_r() (0x13e6a000U)
#define addr_map_rpg_pm_msshub1_limit_r() (0x13e6afffU)
#define addr_map_rpg_pm_msshub2_base_r() (0x13e6b000U)
#define addr_map_rpg_pm_msshub2_limit_r() (0x13e6bfffU)
#define addr_map_rpg_pm_msshub3_base_r() (0x13e6c000U)
#define addr_map_rpg_pm_msshub3_limit_r() (0x13e6cfffU)
#define addr_map_rpg_pm_msshub4_base_r() (0x13e6d000U)
#define addr_map_rpg_pm_msshub4_limit_r() (0x13e6dfffU)
#define addr_map_rpg_pm_msshub5_base_r() (0x13e6e000U)
#define addr_map_rpg_pm_msshub5_limit_r() (0x13e6efffU)
#define addr_map_rpg_pm_msshub6_base_r() (0x13e6f000U)
#define addr_map_rpg_pm_msshub6_limit_r() (0x13e6ffffU)
#define addr_map_rpg_pm_msshub7_base_r() (0x13e70000U)
#define addr_map_rpg_pm_msshub7_limit_r() (0x13e70fffU)
#define addr_map_rpg_pm_nvltx0_base_r() (0x13e71000U)
#define addr_map_rpg_pm_nvltx0_limit_r() (0x13e71fffU)
#define addr_map_rpg_pm_nvltx1_base_r() (0x13e72000U)
#define addr_map_rpg_pm_nvltx1_limit_r() (0x13e72fffU)
#define addr_map_rpg_pm_nvltx2_base_r() (0x13e73000U)
#define addr_map_rpg_pm_nvltx2_limit_r() (0x13e73fffU)
#define addr_map_rpg_pm_nvltx3_base_r() (0x13e74000U)
#define addr_map_rpg_pm_nvltx3_limit_r() (0x13e74fffU)
#define addr_map_rpg_pm_nvltx4_base_r() (0x13e75000U)
#define addr_map_rpg_pm_nvltx4_limit_r() (0x13e75fffU)
#define addr_map_rpg_pm_nvltx5_base_r() (0x13e76000U)
#define addr_map_rpg_pm_nvltx5_limit_r() (0x13e76fffU)
#define addr_map_rpg_pm_nvltx6_base_r() (0x13e77000U)
#define addr_map_rpg_pm_nvltx6_limit_r() (0x13e77fffU)
#define addr_map_rpg_pm_nvltx7_base_r() (0x13e78000U)
#define addr_map_rpg_pm_nvltx7_limit_r() (0x13e78fffU)
#define addr_map_rpg_pm_nvltx8_base_r() (0x13e79000U)
#define addr_map_rpg_pm_nvltx8_limit_r() (0x13e79fffU)
#define addr_map_rpg_pm_nvltx9_base_r() (0x13e7a000U)
#define addr_map_rpg_pm_nvltx9_limit_r() (0x13e7afffU)
#define addr_map_rpg_pm_nvltx10_base_r() (0x13e7b000U)
#define addr_map_rpg_pm_nvltx10_limit_r() (0x13e7bfffU)
#define addr_map_rpg_pm_nvltx11_base_r() (0x13e7c000U)
#define addr_map_rpg_pm_nvltx11_limit_r() (0x13e7cfffU)
#define addr_map_rpg_pm_nvlrx0_base_r() (0x13e7d000U)
#define addr_map_rpg_pm_nvlrx0_limit_r() (0x13e7dfffU)
#define addr_map_rpg_pm_nvlrx1_base_r() (0x13e7e000U)
#define addr_map_rpg_pm_nvlrx1_limit_r() (0x13e7efffU)
#define addr_map_rpg_pm_nvlrx2_base_r() (0x13e7f000U)
#define addr_map_rpg_pm_nvlrx2_limit_r() (0x13e7ffffU)
#define addr_map_rpg_pm_nvlrx3_base_r() (0x13e80000U)
#define addr_map_rpg_pm_nvlrx3_limit_r() (0x13e80fffU)
#define addr_map_rpg_pm_nvlrx4_base_r() (0x13e81000U)
#define addr_map_rpg_pm_nvlrx4_limit_r() (0x13e81fffU)
#define addr_map_rpg_pm_nvlrx5_base_r() (0x13e82000U)
#define addr_map_rpg_pm_nvlrx5_limit_r() (0x13e82fffU)
#define addr_map_rpg_pm_nvlrx6_base_r() (0x13e83000U)
#define addr_map_rpg_pm_nvlrx6_limit_r() (0x13e83fffU)
#define addr_map_rpg_pm_nvlrx7_base_r() (0x13e84000U)
#define addr_map_rpg_pm_nvlrx7_limit_r() (0x13e84fffU)
#define addr_map_rpg_pm_nvlrx8_base_r() (0x13e85000U)
#define addr_map_rpg_pm_nvlrx8_limit_r() (0x13e85fffU)
#define addr_map_rpg_pm_nvlrx9_base_r() (0x13e86000U)
#define addr_map_rpg_pm_nvlrx9_limit_r() (0x13e86fffU)
#define addr_map_rpg_pm_nvlrx10_base_r() (0x13e87000U)
#define addr_map_rpg_pm_nvlrx10_limit_r() (0x13e87fffU)
#define addr_map_rpg_pm_nvlrx11_base_r() (0x13e88000U)
#define addr_map_rpg_pm_nvlrx11_limit_r() (0x13e88fffU)
#define addr_map_rpg_pm_nvlctrl0_base_r() (0x13e8b000U)
#define addr_map_rpg_pm_nvlctrl0_limit_r() (0x13e8bfffU)
#define addr_map_rpg_pm_nvlctrl1_base_r() (0x13e8c000U)
#define addr_map_rpg_pm_nvlctrl1_limit_r() (0x13e8cfffU)
#define addr_map_nvlw0_ctrl_base_r() (0x03b80000U)
#define addr_map_nvlw0_ctrl_limit_r() (0x03b81fffU)
#define addr_map_nvlw1_ctrl_base_r() (0x03bc0000U)
#define addr_map_nvlw1_ctrl_limit_r() (0x03bc1fffU)
#define addr_map_nvlw0_nvldl0_base_r() (0x03b90000U)
#define addr_map_nvlw0_nvldl0_limit_r() (0x03b94fffU)
#define addr_map_nvlw0_nvltlc0_base_r() (0x03b95000U)
#define addr_map_nvlw0_nvltlc0_limit_r() (0x03b96fffU)
#define addr_map_nvlw0_nvldl1_base_r() (0x03b98000U)
#define addr_map_nvlw0_nvldl1_limit_r() (0x03b9cfffU)
#define addr_map_nvlw0_nvltlc1_base_r() (0x03b9d000U)
#define addr_map_nvlw0_nvltlc1_limit_r() (0x03b9efffU)
#define addr_map_nvlw0_nvldl2_base_r() (0x03ba0000U)
#define addr_map_nvlw0_nvldl2_limit_r() (0x03ba4fffU)
#define addr_map_nvlw0_nvltlc2_base_r() (0x03ba5000U)
#define addr_map_nvlw0_nvltlc2_limit_r() (0x03ba6fffU)
#define addr_map_nvlw0_nvldl3_base_r() (0x03ba8000U)
#define addr_map_nvlw0_nvldl3_limit_r() (0x03bacfffU)
#define addr_map_nvlw0_nvltlc3_base_r() (0x03bad000U)
#define addr_map_nvlw0_nvltlc3_limit_r() (0x03baefffU)
#define addr_map_nvlw0_nvldl4_base_r() (0x03bb0000U)
#define addr_map_nvlw0_nvldl4_limit_r() (0x03bb4fffU)
#define addr_map_nvlw0_nvltlc4_base_r() (0x03bb5000U)
#define addr_map_nvlw0_nvltlc4_limit_r() (0x03bb6fffU)
#define addr_map_nvlw0_nvldl5_base_r() (0x03bb8000U)
#define addr_map_nvlw0_nvldl5_limit_r() (0x03bbcfffU)
#define addr_map_nvlw0_nvltlc5_base_r() (0x03bbd000U)
#define addr_map_nvlw0_nvltlc5_limit_r() (0x03bbefffU)
#define addr_map_nvlw1_nvldl0_base_r() (0x03bd0000U)
#define addr_map_nvlw1_nvldl0_limit_r() (0x03bd4fffU)
#define addr_map_nvlw1_nvltlc0_base_r() (0x03bd5000U)
#define addr_map_nvlw1_nvltlc0_limit_r() (0x03bd6fffU)
#define addr_map_nvlw1_nvldl1_base_r() (0x03bd8000U)
#define addr_map_nvlw1_nvldl1_limit_r() (0x03bdcfffU)
#define addr_map_nvlw1_nvltlc1_base_r() (0x03bdd000U)
#define addr_map_nvlw1_nvltlc1_limit_r() (0x03bdefffU)
#define addr_map_nvlw1_nvldl2_base_r() (0x03be0000U)
#define addr_map_nvlw1_nvldl2_limit_r() (0x03be4fffU)
#define addr_map_nvlw1_nvltlc2_base_r() (0x03be5000U)
#define addr_map_nvlw1_nvltlc2_limit_r() (0x03be6fffU)
#define addr_map_nvlw1_nvldl3_base_r() (0x03be8000U)
#define addr_map_nvlw1_nvldl3_limit_r() (0x03becfffU)
#define addr_map_nvlw1_nvltlc3_base_r() (0x03bed000U)
#define addr_map_nvlw1_nvltlc3_limit_r() (0x03beefffU)
#define addr_map_nvlw1_nvldl4_base_r() (0x03bf0000U)
#define addr_map_nvlw1_nvldl4_limit_r() (0x03bf4fffU)
#define addr_map_nvlw1_nvltlc4_base_r() (0x03bf5000U)
#define addr_map_nvlw1_nvltlc4_limit_r() (0x03bf6fffU)
#define addr_map_nvlw1_nvldl5_base_r() (0x03bf8000U)
#define addr_map_nvlw1_nvldl5_limit_r() (0x03bfcfffU)
#define addr_map_nvlw1_nvltlc5_base_r() (0x03bfd000U)
#define addr_map_nvlw1_nvltlc5_limit_r() (0x03bfefffU)
#define addr_map_nvlw0_nvldl_multi_base_r() (0x03b88000U)
#define addr_map_nvlw0_nvldl_multi_limit_r() (0x03b8cfffU)
#define addr_map_nvlw0_nvltlc_multi_base_r() (0x03b8d000U)
#define addr_map_nvlw0_nvltlc_multi_limit_r() (0x03b8efffU)
#define addr_map_nvlw1_nvldl_multi_base_r() (0x03bc8000U)
#define addr_map_nvlw1_nvldl_multi_limit_r() (0x03bccfffU)
#define addr_map_nvlw1_nvltlc_multi_base_r() (0x03bcd000U)
#define addr_map_nvlw1_nvltlc_multi_limit_r() (0x03bcefffU)
#define addr_map_rpg_pm_xalrc0_base_r() (0x13e00000U)
#define addr_map_rpg_pm_xalrc0_limit_r() (0x13e00fffU)
#define addr_map_rpg_pm_xalrc1_base_r() (0x13e01000U)
#define addr_map_rpg_pm_xalrc1_limit_r() (0x13e01fffU)
#define addr_map_rpg_pm_xalrc2_base_r() (0x13e02000U)
#define addr_map_rpg_pm_xalrc2_limit_r() (0x13e02fffU)
#define addr_map_rpg_pm_xalrc3_base_r() (0x13e03000U)
#define addr_map_rpg_pm_xalrc3_limit_r() (0x13e03fffU)
#define addr_map_rpg_pm_xalrc4_base_r() (0x13e04000U)
#define addr_map_rpg_pm_xalrc4_limit_r() (0x13e04fffU)
#define addr_map_rpg_pm_xalrc5_base_r() (0x13e05000U)
#define addr_map_rpg_pm_xalrc5_limit_r() (0x13e05fffU)
#define addr_map_rpg_pm_xalrc6_base_r() (0x13e06000U)
#define addr_map_rpg_pm_xalrc6_limit_r() (0x13e06fffU)
#define addr_map_rpg_pm_xalrc7_base_r() (0x13e07000U)
#define addr_map_rpg_pm_xalrc7_limit_r() (0x13e07fffU)
#define addr_map_rpg_pm_xalrc8_base_r() (0x13e08000U)
#define addr_map_rpg_pm_xalrc8_limit_r() (0x13e08fffU)
#define addr_map_rpg_pm_xalrc9_base_r() (0x13e09000U)
#define addr_map_rpg_pm_xalrc9_limit_r() (0x13e09fffU)
#define addr_map_rpg_pm_xtlrc0_base_r() (0x13e0a000U)
#define addr_map_rpg_pm_xtlrc0_limit_r() (0x13e0afffU)
#define addr_map_rpg_pm_xtlrc1_base_r() (0x13e0b000U)
#define addr_map_rpg_pm_xtlrc1_limit_r() (0x13e0bfffU)
#define addr_map_rpg_pm_xtlrc2_base_r() (0x13e0c000U)
#define addr_map_rpg_pm_xtlrc2_limit_r() (0x13e0cfffU)
#define addr_map_rpg_pm_xtlrc3_base_r() (0x13e0d000U)
#define addr_map_rpg_pm_xtlrc3_limit_r() (0x13e0dfffU)
#define addr_map_rpg_pm_xtlrc4_base_r() (0x13e0e000U)
#define addr_map_rpg_pm_xtlrc4_limit_r() (0x13e0efffU)
#define addr_map_rpg_pm_xtlrc5_base_r() (0x13e0f000U)
#define addr_map_rpg_pm_xtlrc5_limit_r() (0x13e0ffffU)
#define addr_map_rpg_pm_xtlrc6_base_r() (0x13e10000U)
#define addr_map_rpg_pm_xtlrc6_limit_r() (0x13e10fffU)
#define addr_map_rpg_pm_xtlrc7_base_r() (0x13e11000U)
#define addr_map_rpg_pm_xtlrc7_limit_r() (0x13e11fffU)
#define addr_map_rpg_pm_xtlrc8_base_r() (0x13e12000U)
#define addr_map_rpg_pm_xtlrc8_limit_r() (0x13e12fffU)
#define addr_map_rpg_pm_xtlrc9_base_r() (0x13e13000U)
#define addr_map_rpg_pm_xtlrc9_limit_r() (0x13e13fffU)
#define addr_map_rpg_pm_xtlq0_base_r() (0x13e14000U)
#define addr_map_rpg_pm_xtlq0_limit_r() (0x13e14fffU)
#define addr_map_rpg_pm_xtlq1_base_r() (0x13e15000U)
#define addr_map_rpg_pm_xtlq1_limit_r() (0x13e15fffU)
#define addr_map_rpg_pm_xtlq2_base_r() (0x13e16000U)
#define addr_map_rpg_pm_xtlq2_limit_r() (0x13e16fffU)
#define addr_map_rpg_pm_xtlq3_base_r() (0x13e17000U)
#define addr_map_rpg_pm_xtlq3_limit_r() (0x13e17fffU)
#define addr_map_rpg_pm_xtlq4_base_r() (0x13e18000U)
#define addr_map_rpg_pm_xtlq4_limit_r() (0x13e18fffU)
#define addr_map_rpg_pm_xtlq5_base_r() (0x13e19000U)
#define addr_map_rpg_pm_xtlq5_limit_r() (0x13e19fffU)
#define addr_map_rpg_pm_xtlq6_base_r() (0x13e1a000U)
#define addr_map_rpg_pm_xtlq6_limit_r() (0x13e1afffU)
#define addr_map_rpg_pm_xtlq7_base_r() (0x13e1b000U)
#define addr_map_rpg_pm_xtlq7_limit_r() (0x13e1bfffU)
#define addr_map_rpg_pm_xtlq8_base_r() (0x13e1c000U)
#define addr_map_rpg_pm_xtlq8_limit_r() (0x13e1cfffU)
#define addr_map_rpg_pm_xtlq9_base_r() (0x13e1d000U)
#define addr_map_rpg_pm_xtlq9_limit_r() (0x13e1dfffU)
#define addr_map_pcie_c0_ctl0_xalrc_base_r() (0x14080000U)
#define addr_map_pcie_c0_ctl0_xalrc_limit_r() (0x1408ffffU)
#define addr_map_pcie_c0_ctl1_xtlq_base_r() (0x14090000U)
#define addr_map_pcie_c0_ctl1_xtlq_limit_r() (0x1409ffffU)
#define addr_map_pcie_c1_ctl0_xalrc_base_r() (0x140a0000U)
#define addr_map_pcie_c1_ctl0_xalrc_limit_r() (0x140affffU)
#define addr_map_pcie_c1_ctl1_xtlq_base_r() (0x140b0000U)
#define addr_map_pcie_c1_ctl1_xtlq_limit_r() (0x140bffffU)
#define addr_map_pcie_c2_ctl0_xalrc_base_r() (0x140c0000U)
#define addr_map_pcie_c2_ctl0_xalrc_limit_r() (0x140cffffU)
#define addr_map_pcie_c2_ctl1_xtlq_base_r() (0x140d0000U)
#define addr_map_pcie_c2_ctl1_xtlq_limit_r() (0x140dffffU)
#define addr_map_pcie_c3_ctl0_xalrc_base_r() (0x140e0000U)
#define addr_map_pcie_c3_ctl0_xalrc_limit_r() (0x140effffU)
#define addr_map_pcie_c3_ctl1_xtlq_base_r() (0x140f0000U)
#define addr_map_pcie_c3_ctl1_xtlq_limit_r() (0x140fffffU)
#define addr_map_pcie_c4_ctl0_xalrc_base_r() (0x14100000U)
#define addr_map_pcie_c4_ctl0_xalrc_limit_r() (0x1410ffffU)
#define addr_map_pcie_c4_ctl1_xtlq_base_r() (0x14110000U)
#define addr_map_pcie_c4_ctl1_xtlq_limit_r() (0x1411ffffU)
#define addr_map_pcie_c5_ctl0_xalrc_base_r() (0x14120000U)
#define addr_map_pcie_c5_ctl0_xalrc_limit_r() (0x1412ffffU)
#define addr_map_pcie_c5_ctl1_xtlq_base_r() (0x14130000U)
#define addr_map_pcie_c5_ctl1_xtlq_limit_r() (0x1413ffffU)
#define addr_map_pcie_c6_ctl0_xalrc_base_r() (0x14140000U)
#define addr_map_pcie_c6_ctl0_xalrc_limit_r() (0x1414ffffU)
#define addr_map_pcie_c6_ctl1_xtlq_base_r() (0x14150000U)
#define addr_map_pcie_c6_ctl1_xtlq_limit_r() (0x1415ffffU)
#define addr_map_pcie_c7_ctl0_xalrc_base_r() (0x14160000U)
#define addr_map_pcie_c7_ctl0_xalrc_limit_r() (0x1416ffffU)
#define addr_map_pcie_c7_ctl1_xtlq_base_r() (0x14170000U)
#define addr_map_pcie_c7_ctl1_xtlq_limit_r() (0x1417ffffU)
#define addr_map_pcie_c8_ctl0_xalrc_base_r() (0x14180000U)
#define addr_map_pcie_c8_ctl0_xalrc_limit_r() (0x1418ffffU)
#define addr_map_pcie_c8_ctl1_xtlq_base_r() (0x14190000U)
#define addr_map_pcie_c8_ctl1_xtlq_limit_r() (0x1419ffffU)
#define addr_map_pcie_c9_ctl0_xalrc_base_r() (0x141a0000U)
#define addr_map_pcie_c9_ctl0_xalrc_limit_r() (0x141affffU)
#define addr_map_pcie_c9_ctl1_xtlq_base_r() (0x141b0000U)
#define addr_map_pcie_c9_ctl1_xtlq_limit_r() (0x141bffffU)
#define addr_map_pcie_c0_ctl0_xtlrc_base_r() (0x14083000U)
#define addr_map_pcie_c0_ctl0_xtlrc_limit_r() (0x14083fffU)
#define addr_map_pcie_c1_ctl0_xtlrc_base_r() (0x140a3000U)
#define addr_map_pcie_c1_ctl0_xtlrc_limit_r() (0x140a3fffU)
#define addr_map_pcie_c2_ctl0_xtlrc_base_r() (0x140c3000U)
#define addr_map_pcie_c2_ctl0_xtlrc_limit_r() (0x140c3fffU)
#define addr_map_pcie_c3_ctl0_xtlrc_base_r() (0x140e3000U)
#define addr_map_pcie_c3_ctl0_xtlrc_limit_r() (0x140e3fffU)
#define addr_map_pcie_c4_ctl0_xtlrc_base_r() (0x14103000U)
#define addr_map_pcie_c4_ctl0_xtlrc_limit_r() (0x14103fffU)
#define addr_map_pcie_c5_ctl0_xtlrc_base_r() (0x14123000U)
#define addr_map_pcie_c5_ctl0_xtlrc_limit_r() (0x14123fffU)
#define addr_map_pcie_c6_ctl0_xtlrc_base_r() (0x14143000U)
#define addr_map_pcie_c6_ctl0_xtlrc_limit_r() (0x14143fffU)
#define addr_map_pcie_c7_ctl0_xtlrc_base_r() (0x14163000U)
#define addr_map_pcie_c7_ctl0_xtlrc_limit_r() (0x14163fffU)
#define addr_map_pcie_c8_ctl0_xtlrc_base_r() (0x14183000U)
#define addr_map_pcie_c8_ctl0_xtlrc_limit_r() (0x14183fffU)
#define addr_map_pcie_c9_ctl0_xtlrc_base_r() (0x141a3000U)
#define addr_map_pcie_c9_ctl0_xtlrc_limit_r() (0x141a3fffU)
#define addr_map_rpg_pm_ctc0_base_r() (0x13e8d000U)
#define addr_map_rpg_pm_ctc0_limit_r() (0x13e8dfffU)
#define addr_map_rpg_pm_ctc1_base_r() (0x13e8e000U)
#define addr_map_rpg_pm_ctc1_limit_r() (0x13e8efffU)
#define addr_map_c2c0_base_r() (0x13fe2000U)
#define addr_map_c2c0_limit_r() (0x13fe2fffU)
#define addr_map_c2c1_base_r() (0x13fe3000U)
#define addr_map_c2c1_limit_r() (0x13fe3fffU)
#define addr_map_c2c2_base_r() (0x13fe4000U)
#define addr_map_c2c2_limit_r() (0x13fe4fffU)
#define addr_map_c2c3_base_r() (0x13fe5000U)
#define addr_map_c2c3_limit_r() (0x13fe5fffU)
#define addr_map_c2c4_base_r() (0x13fe6000U)
#define addr_map_c2c4_limit_r() (0x13fe6fffU)
#define addr_map_c2c5_base_r() (0x13fe7000U)
#define addr_map_c2c5_limit_r() (0x13fe7fffU)
#define addr_map_c2c6_base_r() (0x13fe8000U)
#define addr_map_c2c6_limit_r() (0x13fe8fffU)
#define addr_map_c2c7_base_r() (0x13fe9000U)
#define addr_map_c2c7_limit_r() (0x13fe9fffU)
#define addr_map_c2c8_base_r() (0x13fea000U)
#define addr_map_c2c8_limit_r() (0x13feafffU)
#define addr_map_c2c9_base_r() (0x13feb000U)
#define addr_map_c2c9_limit_r() (0x13febfffU)
#define addr_map_c2cs0_base_r() (0x13fe0000U)
#define addr_map_c2cs0_limit_r() (0x13fe0fffU)
#define addr_map_c2cs1_base_r() (0x13fe1000U)
#define addr_map_c2cs1_limit_r() (0x13fe1fffU)
#define addr_map_pmc_misc_base_r() (0x0c3a0000U)
#endif

@@ -1,147 +0,0 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef TH500_PMASYS_SOC_HWPM_H
#define TH500_PMASYS_SOC_HWPM_H
#define pmasys_cg2_r() (0x13ef1f44U)
#define pmasys_cg2_slcg_f(v) (((v) & 0x1U) << 0U)
#define pmasys_cg2_slcg_m() (0x1U << 0U)
#define pmasys_cg2_slcg_enabled_v() (0x00000000U)
#define pmasys_cg2_slcg_enabled_f() (0x0U)
#define pmasys_cg2_slcg_disabled_v() (0x00000001U)
#define pmasys_cg2_slcg_disabled_f() (0x1U)
#define pmasys_cg2_slcg__prod_v() (0x00000000U)
#define pmasys_cg2_slcg__prod_f() (0x0U)
#define pmasys_channel_control_user_r(i)\
(0x13ef0a20U + ((i)*384U))
#define pmasys_channel_control_user_update_bytes_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_control_user_update_bytes_m() (0x1U << 16U)
#define pmasys_channel_control_user_update_bytes_doit_v() (0x00000001U)
#define pmasys_channel_control_user_update_bytes_doit_f() (0x10000U)
#define pmasys_channel_mem_blockupper_r(i)\
(0x13ef0a3cU + ((i)*384U))
#define pmasys_channel_mem_blockupper_valid_f(v) (((v) & 0x1U) << 31U)
#define pmasys_channel_mem_blockupper_valid_false_v() (0x00000000U)
#define pmasys_channel_mem_blockupper_valid_true_v() (0x00000001U)
#define pmasys_channel_mem_bump_r(i)\
(0x13ef0a24U + ((i)*384U))
#define pmasys_channel_mem_block_r(i)\
(0x13ef0a38U + ((i)*384U))
#define pmasys_channel_mem_block__size_1_v() (0x00000001U)
#define pmasys_channel_mem_block_base_f(v) (((v) & 0xffffffffU) << 0U)
#define pmasys_channel_mem_block_base_m() (0xffffffffU << 0U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles_f(v)\
(((v) & 0x7U) << 24U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles_m() (0x7U << 24U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles__prod_v() (0x00000004U)
#define pmasys_channel_mem_block_coalesce_timeout_cycles__prod_f() (0x4000000U)
#define pmasys_channel_outbase_r(i)\
(0x13ef0a48U + ((i)*384U))
#define pmasys_channel_outbase_ptr_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outbase_ptr_m() (0x7ffffffU << 5U)
#define pmasys_channel_outbase_ptr_v(r) (((r) >> 5U) & 0x7ffffffU)
#define pmasys_channel_outbaseupper_r(i)\
(0x13ef0a4cU + ((i)*384U))
#define pmasys_channel_outbaseupper_ptr_f(v) (((v) & 0x1ffffffU) << 0U)
#define pmasys_channel_outbaseupper_ptr_m() (0x1ffffffU << 0U)
#define pmasys_channel_outbaseupper_ptr_v(r) (((r) >> 0U) & 0x1ffffffU)
#define pmasys_channel_outsize_r(i)\
(0x13ef0a50U + ((i)*384U))
#define pmasys_channel_outsize_numbytes_f(v) (((v) & 0x7ffffffU) << 5U)
#define pmasys_channel_outsize_numbytes_m() (0x7ffffffU << 5U)
#define pmasys_channel_mem_head_r(i)\
(0x13ef0a54U + ((i)*384U))
#define pmasys_channel_mem_bytes_addr_r(i)\
(0x13ef0a5cU + ((i)*384U))
#define pmasys_channel_mem_bytes_addr_ptr_f(v) (((v) & 0x3fffffffU) << 2U)
#define pmasys_channel_mem_bytes_addr_ptr_m() (0x3fffffffU << 2U)
#define pmasys_channel_config_user_r(i)\
(0x13ef0a44U + ((i)*384U))
#define pmasys_channel_config_user_stream_f(v) (((v) & 0x1U) << 0U)
#define pmasys_channel_config_user_stream_m() (0x1U << 0U)
#define pmasys_channel_config_user_stream_disable_f() (0x0U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_f(v)\
(((v) & 0x7U) << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles_m() (0x7U << 24U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_v()\
(0x00000004U)
#define pmasys_channel_config_user_coalesce_timeout_cycles__prod_f()\
(0x4000000U)
#define pmasys_channel_status_r(i)\
(0x13ef0a00U + ((i)*384U))
#define pmasys_channel_status_engine_status_m() (0x7U << 0U)
#define pmasys_channel_status_engine_status_empty_v() (0x00000000U)
#define pmasys_channel_status_engine_status_empty_f() (0x0U)
#define pmasys_channel_status_engine_status_active_v() (0x00000001U)
#define pmasys_channel_status_engine_status_paused_v() (0x00000002U)
#define pmasys_channel_status_engine_status_quiescent_v() (0x00000003U)
#define pmasys_channel_status_engine_status_stalled_v() (0x00000005U)
#define pmasys_channel_status_engine_status_faulted_v() (0x00000006U)
#define pmasys_channel_status_engine_status_halted_v() (0x00000007U)
#define pmasys_channel_status_membuf_status_f(v) (((v) & 0x1U) << 16U)
#define pmasys_channel_status_membuf_status_m() (0x1U << 16U)
#define pmasys_channel_status_membuf_status_v(r) (((r) >> 16U) & 0x1U)
#define pmasys_channel_status_membuf_status_overflowed_v() (0x00000001U)
#define pmasys_command_slice_trigger_config_user_r(i)\
(0x13ef0afcU + ((i)*384U))
#define pmasys_command_slice_trigger_config_user_pma_pulse_f(v)\
(((v) & 0x1U) << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_m() (0x1U << 0U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_pma_pulse_disable_f() (0x0U)
#define pmasys_command_slice_trigger_config_user_record_stream_f(v)\
(((v) & 0x1U) << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_m() (0x1U << 8U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_v()\
(0x00000000U)
#define pmasys_command_slice_trigger_config_user_record_stream_disable_f()\
(0x0U)
#endif
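
All of the per-channel pmasys registers above share one addressing pattern: a channel-0 offset plus a 384-byte stride per channel index. A minimal sketch of that arithmetic, reusing the CONTROL_USER offset from this header (nothing beyond the printed addresses is assumed):

#include <stdio.h>

/* Taken from the header above: channel 0 address plus a 384-byte stride. */
static unsigned int pmasys_channel_control_user_r(unsigned int i)
{
	return 0x13ef0a20U + (i * 384U);
}

int main(void)
{
	unsigned int ch;

	for (ch = 0U; ch < 3U; ch++)
		printf("channel %u CONTROL_USER at 0x%08x\n",
			ch, pmasys_channel_control_user_r(ch));
	/* channel 0: 0x13ef0a20, channel 1: 0x13ef0ba0, channel 2: 0x13ef0d20 */
	return 0;
}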

@@ -1,113 +0,0 @@
/*
* Copyright (c) 2022-2023, NVIDIA CORPORATION. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
/*
* Function/Macro naming determines intended use:
*
* <x>_r(void) : Returns the offset for register <x>.
*
* <x>_o(void) : Returns the offset for element <x>.
*
* <x>_w(void) : Returns the word offset for word (4 byte) element <x>.
*
* <x>_<y>_s(void) : Returns size of field <y> of register <x> in bits.
*
* <x>_<y>_f(u32 v) : Returns a value based on 'v' which has been shifted
* and masked to place it at field <y> of register <x>. This value
* can be |'d with others to produce a full register value for
* register <x>.
*
* <x>_<y>_m(void) : Returns a mask for field <y> of register <x>. This
* value can be ~'d and then &'d to clear the value of field <y> for
* register <x>.
*
* <x>_<y>_<z>_f(void) : Returns the constant value <z> after being shifted
* to place it at field <y> of register <x>. This value can be |'d
* with others to produce a full register value for <x>.
*
* <x>_<y>_v(u32 r) : Returns the value of field <y> from a full register
* <x> value 'r' after being shifted to place its LSB at bit 0.
* This value is suitable for direct comparison with other unshifted
* values appropriate for use in field <y> of register <x>.
*
* <x>_<y>_<z>_v(void) : Returns the constant value for <z> defined for
* field <y> of register <x>. This value is suitable for direct
* comparison with unshifted values appropriate for use in field <y>
* of register <x>.
*/
#ifndef TH500_PMMSYS_SOC_HWPM_H
#define TH500_PMMSYS_SOC_HWPM_H
#define pmmsys_perdomain_offset_v() (0x00001000U)
#define pmmsys_control_r(i)\
(0x13e0009cU + ((i)*4096U))
#define pmmsys_control_mode_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_control_mode_m() (0x7U << 0U)
#define pmmsys_control_mode_disable_v() (0x00000000U)
#define pmmsys_control_mode_disable_f() (0x0U)
#define pmmsys_control_mode_a_v() (0x00000001U)
#define pmmsys_control_mode_b_v() (0x00000002U)
#define pmmsys_control_mode_c_v() (0x00000003U)
#define pmmsys_control_mode_e_v() (0x00000005U)
#define pmmsys_control_mode_null_v() (0x00000007U)
#define pmmsys_sys0_enginestatus_r(i)\
(0x13e000c8U + ((i)*4096U))
#define pmmsys_sys0router_enginestatus_r() (0x13ef2050U)
#define pmmsys_sys0router_enginestatus_status_f(v) (((v) & 0x7U) << 0U)
#define pmmsys_sys0router_enginestatus_status_m() (0x7U << 0U)
#define pmmsys_sys0router_enginestatus_status_v(r) (((r) >> 0U) & 0x7U)
#define pmmsys_sys0router_enginestatus_status_empty_v() (0x00000000U)
#define pmmsys_sys0router_enginestatus_status_active_v() (0x00000001U)
#define pmmsys_sys0router_enginestatus_status_paused_v() (0x00000002U)
#define pmmsys_sys0router_enginestatus_status_quiescent_v() (0x00000003U)
#define pmmsys_sys0router_enginestatus_status_stalled_v() (0x00000005U)
#define pmmsys_sys0router_enginestatus_status_faulted_v() (0x00000006U)
#define pmmsys_sys0router_enginestatus_status_halted_v() (0x00000007U)
#define pmmsys_sys0router_cg1_secure_r() (0x13ef2054U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_m() (0x1U << 31U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon__prod_v() (0x00000001U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon__prod_f() (0x80000000U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_disabled_v() (0x00000000U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_disabled_f() (0x0U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_enabled_v() (0x00000001U)
#define pmmsys_sys0router_cg1_secure_flcg_perfmon_enabled_f() (0x80000000U)
#define pmmsys_sys0router_cg2_r() (0x13ef2040U)
#define pmmsys_sys0router_cg2_slcg_m() (0x1U << 31U)
#define pmmsys_sys0router_cg2_slcg_disabled_v() (0x00000001U)
#define pmmsys_sys0router_cg2_slcg_disabled_f() (0x80000000U)
#define pmmsys_sys0router_cg2_slcg_enabled_f() (0x0U)
#define pmmsys_sys0router_perfmon_cg2_secure_r() (0x13ef2058U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_f(v) (((v) & 0x1U) << 31U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_m() (0x1U << 31U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg__prod_v() (0x00000000U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg__prod_f() (0x0U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_disabled_v() (0x00000001U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_disabled_f() (0x80000000U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_enabled_v() (0x00000000U)
#define pmmsys_sys0router_perfmon_cg2_secure_slcg_enabled_f() (0x0U)
#define pmmsys_sys0_enginestatus_r(i)\
(0x13e000c8U + ((i)*4096U))
#define pmmsys_sys0_enginestatus_enable_f(v) (((v) & 0x1U) << 8U)
#define pmmsys_sys0_enginestatus_enable_m() (0x1U << 8U)
#define pmmsys_sys0_enginestatus_enable_out_v() (0x00000001U)
#define pmmsys_sys0_enginestatus_enable_out_f() (0x100U)
#define pmmsys_sysrouter_enginestatus_r() (0x13ef2050U)
#define pmmsys_sysrouter_enginestatus_merged_perfmon_status_f(v)\
(((v) & 0x7U) << 8U)
#define pmmsys_sysrouter_enginestatus_merged_perfmon_status_m() (0x7U << 8U)
#define pmmsys_sysrouter_enginestatus_merged_perfmon_status_v(r)\
(((r) >> 8U) & 0x7U)
#endif
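
The _m()/_f() pairing described in the naming-convention comment is what drives the read-modify-write pattern used throughout the driver: clear the field with its mask, then OR in the shifted constant. Below is a minimal standalone sketch using the pmmsys_control_mode values from this header; set_field() is a local stand-in for the driver helper of the same name, and the _v() extractor is written here per the documented convention rather than copied from the header.

#include <stdio.h>

/* Values taken from the header above. */
#define pmmsys_control_mode_m()		(0x7U << 0U)
#define pmmsys_control_mode_disable_f()	(0x0U)
/* Written per the <x>_<y>_v(r) convention; not present verbatim above. */
#define pmmsys_control_mode_v(r)	(((r) >> 0U) & 0x7U)

/* Stand-in for the driver's set_field(): clear the masked bits, OR in val. */
static unsigned int set_field(unsigned int reg, unsigned int mask,
			      unsigned int val)
{
	return (reg & ~mask) | val;
}

int main(void)
{
	unsigned int reg = 0x00000105U;	/* pretend readback: mode currently 5 */

	reg = set_field(reg, pmmsys_control_mode_m(),
		pmmsys_control_mode_disable_f());

	printf("mode after disable: %u (reg = 0x%08x)\n",
		pmmsys_control_mode_v(reg), reg);	/* mode 0, reg 0x00000100 */
	return 0;
}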

Some files were not shown because too many files have changed in this diff.