Move regops (gk20a/regops_gk20a.c) to a separate unit,
common/regops/regops.c
Move the corresponding header (gk20a/regops_gk20a.h) to
include/nvgpu/regops.h
Move the rest of the platform HAL files to common/regops/ as well
Fix all the header includes to use the new public header (see the
example below)
Remove *_apply_smpc_war() declarations from headers. The
corresponding functions were already cleaned up, but the
declarations were inadvertently left behind.
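A representative include fix in a caller; the before/after pair is
illustrative, the exact set of touched files follows from the move:

    /* Before: private, chip-specific header */
    #include "gk20a/regops_gk20a.h"

    /* After: new public header */
    #include <nvgpu/regops.h>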
Jira NVGPU-620
Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore was there to signal that this function has
potentially unintended side effects (enabling a feature has
wide-ranging impact).
To preserve this documentation, a comment was added to convey that
this function must be used with care.
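A sketch of the resulting declaration; the comment wording and the
exact signature are illustrative, not the verbatim code:

    /*
     * Use with care: setting or clearing an enabled flag changes
     * driver behavior globally and can have wide-ranging side
     * effects.
     */
    void nvgpu_set_enabled(struct gk20a *g, int flag, bool state);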
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We had to force allocation of physically contiguous memory for
USERD in the nvlink case, as a channel's USERD address is computed
as an offset from the fifo->userd address, and nvlink bypasses the
SMMU. With 4096 channels, it can become difficult to allocate 2MB
of physically contiguous sysmem for USERD on a busy system.
PBDMA does not require any packing or contiguous USERD allocation,
as each channel has a direct pointer to its own 512B USERD region.
When BAR1 is supported, we only need the GPU VAs to be contiguous
in order to set up the BAR1 inst block.
- Add slab allocator for USERD.
- Slabs are allocated in SYSMEM, using PAGE_SIZE for slab size.
- Contiguous channels share the same page (16 channels per slab).
- ch->userd_mem points to the slab's nvgpu_mem descriptor
- ch->userd_offset is the channel's offset from the beginning of
  the slab (see the sketch after this list)
- Pre-allocate GPU VAs for the whole BAR1
- Add g->ops.mm.bar1_map() method
- gk20a_mm_bar1_map() uses fixed mapping in BAR1 region
- vgpu_mm_bar1_map() passes the offset in TEGRA_VGPU_CMD_MAP_BAR1
- TEGRA_VGPU_CMD_MAP_BAR1 is called for each slab.
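A minimal sketch of the resulting per-channel lookup; the helper
names are hypothetical, only ch->userd_mem and ch->userd_offset
come from this change:

    /* GPU VA of this channel's 512B USERD region in its slab. */
    static inline u64 channel_userd_gpu_va(struct channel_gk20a *ch)
    {
            return ch->userd_mem->gpu_va + ch->userd_offset;
    }

    /* CPU view of the same region (e.g. for gp_get/gp_put). */
    static inline void *channel_userd_cpu_va(struct channel_gk20a *ch)
    {
            return (u8 *)ch->userd_mem->cpu_va + ch->userd_offset;
    }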
Bug 2422486
Bug 200474793
Change-Id: I202699fe55a454c1fc6d969e7b6196a46256d704
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959032
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
1. Implement the following vgpu functions to support clk-arb (a
   sketch of the range lookup follows this list):
   - vgpu_clk_get_range() returns the min and max freqs from the
     supported frequencies
   - vgpu_clk_get_round_rate() sets the rounded rate to the input
     rate; the actual rounding is handled in the RM Server
   - vgpu_clk_get_freqs() is modified to retrieve the freq table
     from IVM memory instead of copying the values into an array
     as part of the cmd message
2. Add support for clk-arb related HALs for vgpu.
3. support_clk_freq_controller is set to true for vgpu, provided
   the guest VM has the privilege to set clock frequencies.
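An illustrative sketch of the range lookup, assuming the freq
table returned by vgpu_clk_get_freqs() is sorted ascending; the
prototypes here are assumptions, not the exact ones:

    static int vgpu_clk_get_range(struct gk20a *g,
                                  u16 *min_mhz, u16 *max_mhz)
    {
            unsigned long *freqs; /* Hz, from the RM Server */
            int num_freqs;
            int err;

            err = vgpu_clk_get_freqs(g, &freqs, &num_freqs);
            if (err != 0)
                    return err;

            *min_mhz = (u16)(freqs[0] / 1000000UL);
            *max_mhz = (u16)(freqs[num_freqs - 1] / 1000000UL);
            return 0;
    }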
Bug 200422845
Bug 2363882
Jira EVLR-3254
Change-Id: I91fc392db381c5db1d52b19d45ec0481fdc27554
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1812379
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Nvgpu used four different ways to check whether syncpoints are
enabled:
platform->has_syncpoints
g->has_syncpoints
nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS)
gk20a_platform_has_syncpoints()
This patch standardizes all usage on nvgpu_has_syncpoints(), which
is based on gk20a_platform_has_syncpoints() - just renamed to be
general to nvgpu.
All usages of the other forms have now been consolidated. However,
under the hood nvgpu_has_syncpoints() does check the is_enabled
flag. This flag is now set where g->has_syncpoints used to be set
based on the platform data.
The basic dependency chain is this:
nvgpu_has_syncpoints -> NVGPU_HAS_SYNCPOINTS ->
platform->has_syncpoints
Note, however, that there are several places where syncpoints can
be disabled if some other driver's initialization fails (e.g.
host1x). nvgpu_has_syncpoints() also considers a disable variable
set via debugfs.
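A plausible shape of the consolidated helper; the name of the
debugfs-controlled field is an assumption:

    bool nvgpu_has_syncpoints(struct gk20a *g)
    {
            /*
             * The feature flag is set from platform data at probe
             * time and may be cleared later, e.g. if host1x init
             * fails.
             */
            return nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS) &&
                   !g->syncpoints_disabled; /* assumed field name */
    }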
Bug 2327574
Change-Id: Ia2375a80f5f2e27285e6175568dd13e6bb25fd33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803975
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the OS-agnostic parts of the vgpu clk code out of the
os/linux specific path. This includes the implementation that
sends RPC commands to the RM Server. Move the Linux specific vgpu
clk code to the platform vgpu files, keeping it consistent with
the native implementation.
Bug 2363882
Jira EVLR-3254
Change-Id: I0aae014ef16415bb356c81e9bfd76bc65206d9fd
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1820674
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Using two separate locks (poweron_lock and poweroff_lock) allows
concurrent GPU power-on and power-off. This must not happen, as
the driver would not be able to maintain a correct GPU state.
Use a single power_lock to manage the GPU power state. This lock
will be used to serialize GPU power state transitions from
multiple triggers such as GPU idle, GPU gc-off, etc.
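A sketch of the intended serialization; the function names follow
the usual nvgpu power entry points, the bodies are elided, and
only the power_lock field comes from this change:

    int gk20a_pm_finalize_poweron(struct gk20a *g)
    {
            nvgpu_mutex_acquire(&g->power_lock);
            /* ... bring the GPU up ... */
            nvgpu_mutex_release(&g->power_lock);
            return 0;
    }

    int gk20a_pm_prepare_poweroff(struct gk20a *g)
    {
            /* Same mutex: power-off can never race power-on. */
            nvgpu_mutex_acquire(&g->power_lock);
            /* ... bring the GPU down ... */
            nvgpu_mutex_release(&g->power_lock);
            return 0;
    }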
JIRA NVGPU-1100
Change-Id: Ia9b4aeda024a5844ae9f182d453cd6341876680a
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1827812
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>