The Linux kernel applies a default 32-bit DMA segment boundary to
any device that doesn't explicitly configure one. When nvgpu
tries to allocate a buffer larger than 4 GB, the iommu_dma_map_sg()
function in the kernel takes this boundary into account
and adds internal padding to the allocated IOVA space:
|<---IOVA space 1--->|<---padding--->|<---IOVA space 2--->|
When DMA reads/writes the memory using this discontiguous
IOVA space, it may end up accessing the padding
instead of IOVA space 2.
So this patch adds a dma_set_seg_boundary() call to the nvgpu driver,
raising the segment boundary to the full DMA_BIT_MASK
to ensure a contiguous IOVA space.
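A minimal sketch of the call described above, assuming it is made where
nvgpu already has the GPU's struct device (the helper name is
illustrative, not the actual nvgpu call site):

  #include <linux/dma-mapping.h>

  /* Lift the default 4 GiB segment boundary so iommu_dma_map_sg()
   * does not insert padding between IOVA ranges for > 4 GB buffers. */
  static int nvgpu_example_raise_seg_boundary(struct device *dev)
  {
          return dma_set_seg_boundary(dev, DMA_BIT_MASK(64));
  }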
Bug 200558567
Change-Id: I979d56681dddca56f1b02fce83dc81147a6b0d82
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2304150
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Pritesh Raithatha <praithatha@nvidia.com>
Reviewed-by: Puneet Saxena <puneets@nvidia.com>
Reviewed-by: Chris Dragan <kdragan@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Previously, unit interrupt enabling/disabling and the corresponding MC level
interrupt enabling/disabling were not done at the same time.
With this change, stall and nonstall interrupts for units are programmed
at the MC level along with the individual unit interrupts. Access to MC
interrupt registers is kept under the mc.intr_lock spinlock.
To do this, the CE and GR interrupt mask functions were separated.
mc.intr_enable is now used only when there is global interrupt
control to be set. Removed mc_gp10b.c as mc_gp10b_intr_enable
is now removed. Removed the following functions: mc_gv100_intr_enable,
mc_gv11b_intr_enable and intr_tu104_enable. Removed intr_pmu_unit_config
as the generic unit interrupt control function can be used instead.
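A minimal sketch of the locking pattern described above; apart from the
nvgpu spinlock API, the names and fields below are assumptions for
illustration, not the actual nvgpu code:

  struct example_mc {
          struct nvgpu_spinlock intr_lock; /* the mc.intr_lock referenced above */
          u32 intr_stall_en;               /* assumed shadow of the HW enable mask */
  };

  static void example_intr_stall_unit_config(struct example_mc *mc,
                                             u32 unit_mask, bool enable)
  {
          /* Unit-level and MC-level enables are updated under one lock
           * so they always stay consistent. */
          nvgpu_spinlock_acquire(&mc->intr_lock);
          if (enable)
                  mc->intr_stall_en |= unit_mask;
          else
                  mc->intr_stall_en &= ~unit_mask;
          /* The updated mask would be written to the MC interrupt
           * enable register here (register accessor omitted). */
          nvgpu_spinlock_release(&mc->intr_lock);
  }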
JIRA NVGPU-4336
Change-Id: Ibd296d4a60fda6ba930f18f518ee56ab3f9dacad
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2196178
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
IRQs can get triggered during nvgpu power-on due to an MMU fault, an
invalid PRIV ring or bus access, etc. Handlers for those IRQs can't access
the full state related to the IRQ unless nvgpu is fully powered on.
In order to let the IRQ handlers know about the nvgpu power-on state,
the gk20a.power_on_state variable has to be protected by a spinlock;
the previously used power_lock mutex could lead to a deadlock here.
Further, IRQs need to be disabled on the local CPU while updating the
power state variable, hence spin_lock_irqsave() and
spin_unlock_irqrestore() are used to protect the access.
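A minimal sketch of the pattern described above, using a stand-in
structure instead of the real struct gk20a fields:

  #include <linux/spinlock.h>

  struct example_power {
          spinlock_t lock;        /* protects power_on_state */
          u32 power_on_state;
  };

  static void example_set_power_state(struct example_power *p, u32 state)
  {
          unsigned long flags;

          /* spin_lock_irqsave() also disables IRQs on the local CPU, so
           * an IRQ handler cannot preempt us and spin on the same lock. */
          spin_lock_irqsave(&p->lock, flags);
          p->power_on_state = state;
          spin_unlock_irqrestore(&p->lock, flags);
  }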
JIRA NVGPU-1592
Change-Id: If5d1b5e2617ad90a68faa56ff47f62bb3f0b232b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2203860
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The cyclestats_snapshot data and lock are currently stored in struct
nvgpu_gr. The use case itself is not specific to the GR engine; in general
it applies to other units outside of GR too.
Hence it makes sense to move both the data and the lock to struct gk20a
instead of keeping them in struct nvgpu_gr.
Update all cyclestats_snapshot code to refer to the data/lock from struct
gk20a. Remove the gr_priv.h header include from cyclestats_snapshot.c.
Some of the functions were mistakenly declared in gr_gk20a.h.
Move them to cyclestats_snapshot.h and rename them to the form nvgpu_css_*().
Jira NVGPU-1103
Change-Id: I3fb32fe96f0ca6613f4640c8bd227b9e0e02dca3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2104848
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Removed an unused struct from gr_gk20a.h.
Change the static allocation of struct gr_gk20a to a dynamic allocation
and update all the files affected by that change.
Call the gr allocation from the corresponding init_support functions, which
are part of the probe functions:
nvgpu_pci_init_support in pci.c
vgpu_init_support in vgpu_linux.c
gk20a_init_support in module.c
Call gr free before the gk20a free call in nvgpu_free_gk20a.
Rename struct gr_gk20a to struct nvgpu_gr
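A minimal sketch of the allocation and free described above; the helper
names are illustrative and error handling for the rest of init is
omitted:

  /* In the init_support path: allocate struct nvgpu_gr dynamically
   * instead of embedding it in struct gk20a. */
  static int example_gr_alloc(struct gk20a *g)
  {
          g->gr = nvgpu_kzalloc(g, sizeof(*g->gr));
          if (g->gr == NULL)
                  return -ENOMEM;
          return 0;
  }

  /* In nvgpu_free_gk20a(), before freeing g itself. */
  static void example_gr_free(struct gk20a *g)
  {
          nvgpu_kfree(g, g->gr);
          g->gr = NULL;
  }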
JIRA NVGPU-3132
Change-Id: Ief5e664521f141c7378c4044ed0df5f03ba06fca
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095798
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added a new function to do the required SW initializations before enabling
GR HW. Added nvgpu_netlist_init_ctx_vars and nvgpu_gr_falcon_init_support
as part of this function:
int nvgpu_gr_prepare_sw(struct gk20a *g)
Moved the following structure definitions from gr_gk20a.h to gr_falcon.h
and renamed them appropriately:
gk20a_ctxsw_ucode_segment -> nvgpu_ctxsw_ucode_segment
gk20a_ctxsw_ucode_segments -> nvgpu_ctxsw_ucode_segments
Moved the following struct to gr_falcon_priv.h:
gk20a_ctxsw_ucode_info -> nvgpu_ctxsw_ucode_info
Moved the following data from struct gk20a to a new structure,
struct nvgpu_gr_falcon, in gr_falcon_priv.h:
struct nvgpu_mutex ctxsw_disable_lock;
int ctxsw_disable_count;
struct gk20a_ctxsw_ucode_info ctxsw_ucode_info;
Also moved the following data from gr_gk20a.h to struct nvgpu_gr_falcon:
struct nvgpu_mutex fecs_mutex;
bool skip_ucode_init;
wait_ucode_status
GR_IS_UCODE related enums
eUcodeHandshakeInit enums
Now a pointer to this new data structure is added in struct gr_gk20a to
access gr_falcon related data, and the code is modified to reflect this
change:
struct nvgpu_gr_falcon *falcon;
Added the following functions to access gr_falcon data:
struct nvgpu_mutex *nvgpu_gr_falcon_get_fecs_mutex(
struct nvgpu_gr_falcon *falcon);
struct nvgpu_ctxsw_ucode_segments *nvgpu_gr_falcon_get_fecs_ucode_segments(
struct nvgpu_gr_falcon *falcon);
struct nvgpu_ctxsw_ucode_segments *nvgpu_gr_falcon_get_gpccs_ucode_segments(
struct nvgpu_gr_falcon *falcon);
void *nvgpu_gr_falcon_get_surface_desc_cpu_va(
struct nvgpu_gr_falcon *falcon);
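A minimal sketch of how one of these accessors could look; the private
struct layout is assumed from the fields listed above, not taken from the
actual code:

  struct nvgpu_mutex *nvgpu_gr_falcon_get_fecs_mutex(
                  struct nvgpu_gr_falcon *falcon)
  {
          return &falcon->fecs_mutex;
  }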
JIRA NVGPU-1881
Change-Id: I9100891989b0d6b57c49f2bf00ad839a72bc7c7e
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2091358
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
a) free_channel_ctx_header is used to free the channel's underlying subctx
and belongs to the hal.channel unit instead of fifo. It is moved
accordingly and the HAL op is renamed to free_ctx_header. The function
gv11b_free_subctx_header is moved to the channel_gv11b.* files and also
renamed to gv11b_channel_free_subctx_header.
b) ch_abort_clean_up is moved to the hal.channel unit.
c) channel_resume and channel_suspend are used to resume and suspend all
the serviceable channels. These belong to the hal.channel unit and are
moved from the hal.fifo unit.
The HAL ops channel_resume and channel_suspend are renamed to
resume_all_serviceable_ch and suspend_all_serviceable_ch respectively.
gk20a_channel_resume and gk20a_channel_suspend are also renamed to
nvgpu_channel_resume_all_serviceable_ch and
nvgpu_channel_suspend_all_serviceable_ch respectively.
d) The set_error_notifier HAL op belongs to hal.channel and is moved
accordingly.
Jira NVGPU-2978
Change-Id: Icb52245cacba3004e2fd32519029a1acff60c23c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- delete vgpu_is_reduced_bar1(). The current implementation maps only
  the portion of BAR1 that is reserved for the guest in case of
  reduced BAR1. However this code is obsolete and the reduced BAR1
  check is always false. Delete the related function vgpu_is_reduced_bar1()
  and the conditional mapping.
- move the vgpu_mm_bar1_map_userd() declaration from vgpu.h
  to mm_vgpu.h
- move vgpu_gp10b_init_hal() and vgpu_gv11b_init_hal()
declarations from vgpu.h to new header files
vgpu/gp10b/vgpu_hal_gp10b.h and vgpu/gv11b/vgpu_hal_gv11b.h
respectively.
Jira GVSCI-334
Change-Id: I11a297a0aba1afd8b0ad022169ba7f734bcd952c
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081152
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create vgpu unit init. Move init related functions from
vgpu.c to init_vgpu.c under common/vgpu/init path and
create corresponding header file.
Create vgpu child unit init hal. Move functions
vgpu_init_hal() and vgpu_detect_chip() to a new
file init_hal_vgpu.c under common/vgpu/init path and
create corresponding header file.
Also move the OS-specific vgpu HAL init function declaration
vgpu_init_hal_osi() to a new file,
include/nvgpu/vgpu/os_init_hal_vgpu.h, separating it from the
generic vgpu.h.
Jira GVSCI-334
Change-Id: I07290e3be5061a2349689228265c8b28ebadab88
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081153
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved CBC related code and data from gr to the cbc unit.
LTC and CBC related data is moved out of the gr header:
1. LTC related data moved from gr_gk20a -> gk20a; it
will eventually be moved to the ltc unit:
u32 slices_per_ltc;
u32 cacheline_size;
2. cbc data moved from gr_gk20a -> nvgpu_cbc
u32 compbit_backing_size;
u32 comptags_per_cacheline;
u32 gobs_per_comptagline_per_slice;
u32 max_comptag_lines;
struct gk20a_comptag_allocator comp_tags;
struct compbit_store_desc compbit_store;
3. The following config data moved from gr_gk20a -> gk20a:
u32 comptag_mem_deduct;
u32 max_comptag_mem;
These are part of the initial config which should be available
during nvgpu_probe, so they can't be moved to nvgpu_cbc.
Modified the code to use the updated data structures above.
Removed the CBC init sequence from gr and added it to the
common cbc unit. This sequence is now called
from the common nvgpu init code.
JIRA NVGPU-2896
JIRA NVGPU-2897
Change-Id: I1a1b1e73b75396d61de684f413ebc551a1202a57
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033286
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We have 3 header files for FECS tracing support:
include/nvgpu/gr/fecs_trace.h : common header
include/nvgpu/ctxsw_trace.h : header that includes both common and
os-specific functions
os/linux/ctxsw_trace.h : linux specific header
Remove the second header since it is not needed.
Move all structures that are needed in common code to
include/nvgpu/gr/fecs_trace.h
Move all function declarations that are needed in common code to
include/nvgpu/gr/fecs_trace.h
Move all Linux-specific declarations into os/linux/ctxsw_trace.h and
rename this file to os/linux/fecs_trace_linux.h.
Also rename os/linux/ctxsw_trace.c to os/linux/fecs_trace_linux.c
Jira NVGPU-1880
Change-Id: I05cc4489c4b6a64880b7d59c02b22cd2244d5e22
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070766
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new power/clock gating functions that can be called by
other units.
New clock_gating functions will reside in cg.c under
common/power_features/cg unit.
New power gating functions will reside in pg.c under
common/power_features/pg unit.
Use nvgpu_pg_elpg_disable and nvgpu_pg_elpg_enable to disable/enable
ELPG, including in the gr_gk20a_elpg_protected macro used to access GR
registers.
Add cg_pg_lock to make elpg_enabled, elcg_enabled, blcg_enabled
and slcg_enabled thread safe.
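A minimal sketch of the ELPG-protected register access described above;
the helper name is illustrative and the register accessor may differ
from the actual macro:

  static u32 example_gr_reg_read_elpg_protected(struct gk20a *g, u32 reg)
  {
          u32 val;

          nvgpu_pg_elpg_disable(g);  /* keep GR powered while touching it */
          val = gk20a_readl(g, reg);
          nvgpu_pg_elpg_enable(g);

          return val;
  }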
JIRA NVGPU-2014
Change-Id: I00d124c2ee16242c9a3ef82e7620fbb7f1297aff
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025493
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The vgpu_mm_gp10b files contained gp10b-specific code.
- The vgpu_gp10b_locked_gmmu_map function is common to all
  chips. Rename this function to vgpu_locked_gmmu_map
  and move its implementation to the mm_vgpu file.
- The disable_bigpage variable is set to false in the
  vgpu_gp10b_init_mm_setup_hw function. This is not related
  to MM HW initialization. Move this assignment to
  vgpu_init_variables along with the other MM-specific initialization,
  as done for native.
Change-Id: I4aba3096a3c945b8b3f4175382ebc78322e1d16e
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2028862
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move regops (gk20a/regops_gk20a.c) to a separate unit, common/regops/regops.c.
Move the corresponding header (gk20a/regops_gk20a.h) to include/nvgpu/regops.h.
Move the rest of the platform HAL files to common/regops/ as well.
Fix all the header includes to use the new public header.
Remove the *_apply_smpc_war() declarations from headers. The corresponding
functions were already cleaned up, but the declarations were left behind.
Jira NVGPU-620
Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore was present to indicate that this is a
function with potentially unintended side effects (enabling a feature
has wide-ranging impact).
To avoid losing this documentation, a comment was added to convey that
this function must be used with care.
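A sketch of the kind of warning comment described above; the wording and
the exact prototype are assumptions, not the patch contents:

  /*
   * nvgpu_set_enabled - set or clear a driver feature flag.
   *
   * Use with care: enabling a feature here can have wide-ranging
   * impact across the driver.
   */
  void nvgpu_set_enabled(struct gk20a *g, u32 flag, bool state);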
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We had to force allocation of physically contiguous memory for
USERD in nvlink case, as a channel's USERD address is computed as
an offset from fifo->userd address, and nvlink bypasses SMMU.
With 4096 channels, it can become difficult to allocate 2MB of
physically contiguous sysmem for USERD on a busy system.
PBDMA does not require any sort of packing or contiguous USERD
allocation, as each channel has a direct pointer to that channel's
512B USERD region. When BAR1 is supported, we only need the GPU VAs
to be contiguous in order to set up the BAR1 inst block.
- Add slab allocator for USERD.
- Slabs are allocated in SYSMEM, using PAGE_SIZE for slab size.
- Contiguous channels share the same page (16 channels per slab).
- ch->userd_mem points to the related nvgpu_mem descriptor
- ch->userd_offset is the offset from the beginning of the slab
- Pre-allocate GPU VAs for the whole BAR1
- Add g->ops.mm.bar1_map() method
- gk20a_mm_bar1_map() uses fixed mapping in BAR1 region
- vgpu_mm_bar1_map() passes the offset in TEGRA_VGPU_CMD_MAP_BAR1
- TEGRA_VGPU_CMD_MAP_BAR1 is called for each slab.
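A minimal sketch of the per-channel USERD addressing this layout gives;
the constant and helper names are illustrative, not the actual nvgpu
code:

  #define EXAMPLE_USERD_SIZE 512U  /* bytes per channel, per the text above */

  /* Offset of a channel's USERD region within its slab. */
  static inline u64 example_userd_offset_in_slab(u32 chid,
                                                 u32 channels_per_slab)
  {
          return (u64)(chid % channels_per_slab) * EXAMPLE_USERD_SIZE;
  }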
Bug 2422486
Bug 200474793
Change-Id: I202699fe55a454c1fc6d969e7b6196a46256d704
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959032
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
1. Implement the following vgpu functions to support clk-arb:
- vgpu_clk_get_range() to return the min and max freqs from the
  supported frequencies
- implement vgpu_clk_get_round_rate(), which sets the rounded
  rate to the input rate; rounding is handled in the RM Server
- modify vgpu_clk_get_freqs() to retrieve the freq table from IVM
  memory instead of copying the values into an array as part of the
  cmd message.
2. Add support for clk-arb related HALs for vgpu.
3. support_clk_freq_controller is set to true for vgpu,
provided the guest VM has the privilege to set the clock frequency.
Bug 200422845
Bug 2363882
Jira EVLR-3254
Change-Id: I91fc392db381c5db1d52b19d45ec0481fdc27554
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1812379
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Nvgpu uses many ways to check if sync points are enabled. The four
ways used to be:
platform->has_syncpoints
g->has_syncpoints
nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS)
gk20a_platform_has_syncpoints()
This patch standardizes all usage to nvgpu_has_syncpoints(),
which is based on gk20a_platform_has_syncpoints() - just renamed to
be general to nvgpu.
All usage of the other forms has now been consolidated. However,
under the hood nvgpu_has_syncpoints() does check the is_enabled
flag. This flag is now set where g->has_syncpoints used to be set
based on the platform data.
The basic dependency chain is this:
nvgpu_has_syncpoints -> NVGPU_HAS_SYNCPOINTS ->
platform->has_syncpoints
However, note that there are several places where syncpoints can be
disabled if some other driver's initialization fails (e.g. host1x).
Also note that nvgpu_has_syncpoints() considers a disable
variable that can be set via debugfs.
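A minimal sketch of the consolidated check described above; the debugfs
disable field name is an assumption:

  bool nvgpu_has_syncpoints(struct gk20a *g)
  {
          /* Flag set from platform data at probe time, plus a runtime
           * disable that debugfs can flip. */
          return nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS) &&
                  !g->disable_syncpoints;  /* assumed field name */
  }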
Bug 2327574
Change-Id: Ia2375a80f5f2e27285e6175568dd13e6bb25fd33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803975
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the OS-agnostic parts of the vgpu clk code out of the os/linux
specific path. This includes the implementation that sends RPC commands
to the RM Server. Move the Linux-specific vgpu clk code to the platform
vgpu files, keeping it consistent with the native implementation.
Bug 2363882
Jira EVLR-3254
Change-Id: I0aae014ef16415bb356c81e9bfd76bc65206d9fd
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1820674
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Using two separate locks (poweron_lock and poweroff_lock)
allows concurrent GPU power-on and power-off. This must
not happen, as the driver won't be able to maintain a correct
GPU state.
Use a single power_lock to manage the GPU power state. This
lock will be used to manage the GPU power state from multiple
triggers like GPU idle, GPU gc-off, etc.
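A minimal sketch of the single-lock scheme described above; the helper
names are assumptions:

  static int example_gk20a_pm_poweron(struct gk20a *g)
  {
          int err;

          /* One lock serializes power-on, power-off, idle, gc-off, ... */
          nvgpu_mutex_acquire(&g->power_lock);
          err = example_do_poweron(g);    /* assumed helper */
          nvgpu_mutex_release(&g->power_lock);

          return err;
  }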
JIRA NVGPU-1100
Change-Id: Ia9b4aeda024a5844ae9f182d453cd6341876680a
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1827812
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>