struct gk20a in gk20a.h needs the definition of struct gk20a_ce_app,
and ce2_gk20a.h needs the definition of struct gk20a. This creates
a circular dependency.
Fix this by making gk20a_ce_app a pointer, so the complete type is
not needed, and by using forward declarations for struct gk20a_ce_app
and struct gk20a.
The gk20a_ce_app pointer is allocated in gk20a_init_ce_support()
and freed in gk20a_ce_destroy().
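A minimal sketch of the resulting header layout (the member name
ce_app is illustrative):

  /* gk20a.h -- sketch */
  struct gk20a_ce_app;                  /* forward declaration; full type in ce2_gk20a.h */

  struct gk20a {
          struct gk20a_ce_app *ce_app;  /* pointer only; complete type not needed here */
          /* other members elided */
  };

  /* ce2_gk20a.h -- sketch */
  struct gk20a;                         /* forward declaration; full type in gk20a.h */
  int gk20a_init_ce_support(struct gk20a *g);   /* allocates the ce_app object */
  void gk20a_ce_destroy(struct gk20a *g);       /* frees the ce_app object */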
JIRA NVGPU-611
Change-Id: I4d62d5f2b2d1492db73bae69f90a1fe5586fba76
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917945
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We have a h/w bug on some chips, and the following additional HALs
are needed to implement a s/w WAR:
gops.fifo.init_pdb_cache_war()
gops.fifo.deinit_pdb_cache_war()
gops.fb.apply_pdb_cache_war()
Add a new API, nvgpu_init_mm_pdb_cache_war(), to initialize the WAR
sequence; call it from MM initialization, before setting up the rest
of the memory management units.
Deinitialize the WAR while cleaning up MM support.
Add a pdb_cache_war_mem member to struct gk20a to hold all the memory
needed for the WAR.
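A rough sketch of the intended sequencing (the HAL names are the ones
listed above; the wrapper bodies and cleanup placement are assumptions):

  /* sketch: run early in MM init, before the rest of the MM units are set up */
  int nvgpu_init_mm_pdb_cache_war(struct gk20a *g)
  {
          int err;

          if (g->ops.fifo.init_pdb_cache_war != NULL) {
                  err = g->ops.fifo.init_pdb_cache_war(g);
                  if (err != 0) {
                          return err;
                  }
          }

          if (g->ops.fb.apply_pdb_cache_war != NULL) {
                  return g->ops.fb.apply_pdb_cache_war(g);
          }

          return 0;
  }

  /* sketch: MM cleanup path undoes the WAR */
  if (g->ops.fifo.deinit_pdb_cache_war != NULL) {
          g->ops.fifo.deinit_pdb_cache_war(g);
  }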
Bug 200449545
Change-Id: Id2ac0d940c7881c7b0cf396413273c0f329a1a1f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1834901
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Created struct nvgpu_acr to hold the ACR module related members
within a single struct; these are currently spread across multiple
structs like nvgpu_pmu, pmu_ops & gk20a.
-Created struct hs_flcn_bl to hold ACR HS bootloader specific members.
-Created struct hs_acr to hold ACR ucode specific members: the
bootloader data (struct hs_flcn_bl), the ACR type & info on the falcon
the ACR ucode needs to run on.
-Created ACR ops under struct nvgpu_acr to perform ACR specific
operations; previously the ACR ops were part of the PMU, which forced
a dependence on the PMU even when ACR was not executing on the PMU.
-Added an acr_remove_support op which is called as part of the
gk20a_remove_support() method; earlier, ACR cleanup was part of the
PMU remove_support method.
-Created defines for the ACR types.
-The acr_sw_init() op sets ACR properties statically for the chip
currently in execution & assigns the ops to point to the functions
needed for that chip.
-acr_sw_init executes early, since nvgpu_init_mm_support calls an ACR
function to allocate blob space.
-Created ops to fill the bootloader descriptor & to patch the WPR info
into the ACR ucode, based on the interface used to bootstrap the ACR
ucode.
-Created gm20b_bootstrap_hs_acr(), which is now the common HAL for all
chips to bootstrap ACR; earlier there were 3 different functions for
gm20b/gp10b, gv11b & all dGPUs, based on the interface needed.
-Removed duplicate falcon engine code wherever common falcon code can
be used.
-Removed ACR code dependent on the PMU & changed it to use nvgpu_acr.
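A rough sketch of the resulting layout (member names are illustrative,
not the exact definitions):

  struct hs_flcn_bl {
          const char *bl_fw_name;          /* HS bootloader ucode name */
          struct nvgpu_mem hs_bl_ucode;    /* memory backing the HS bootloader */
  };

  struct hs_acr {
          u32 acr_type;                    /* which ACR variant to run */
          struct hs_flcn_bl acr_hs_bl;     /* bootloader data */
          struct nvgpu_falcon *acr_flcn;   /* falcon the ACR ucode runs on */
  };

  struct nvgpu_acr {
          struct hs_acr acr;

          /* ACR ops, no longer hanging off the PMU */
          int (*prepare_ucode_blob)(struct gk20a *g, struct nvgpu_acr *acr);
          int (*bootstrap_hs_acr)(struct gk20a *g, struct nvgpu_acr *acr,
                                  struct hs_acr *acr_desc);
          void (*remove_support)(struct nvgpu_acr *acr);
  };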
JIRA NVGPU-1148
Change-Id: I39951d2fc9a0bb7ee6057e0fa06da78045d47590
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813231
GVS: Gerrit_Virtual_Submit
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
Handle the 21.2 fixes for nvgpu_mem.c and mm.c; this deletes the
'__' prefixes and slightly renames the __nvgpu_aperture_mask()
function since there's a coherent version and a general version.
Change-Id: Iee871ad90db3f2622f9099bd9992eb994e0fbf34
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813623
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The debug page was allocated and programmed into the HUB MMU in GR
code. This introduces a dependency from GR to FB and is in any case
the wrong place. Move the allocation of the memory to generic MM code,
and the programming of the addresses to FB.
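A minimal sketch of the MM side of the split, assuming the debug
read/write pages hang off struct mm_gk20a (the member names are
assumptions; the FB init code then programs these addresses into the
HUB MMU debug registers):

  /* sketch: allocation now lives in common MM setup */
  static int nvgpu_alloc_mmu_debug_pages(struct gk20a *g)
  {
          struct mm_gk20a *mm = &g->mm;
          int err;

          err = nvgpu_dma_alloc_sys(g, SZ_4K, &mm->mmu_wr_mem);
          if (err != 0) {
                  return err;
          }

          return nvgpu_dma_alloc_sys(g, SZ_4K, &mm->mmu_rd_mem);
  }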
Change-Id: Ib6d3c96efde6794cf5e8cd4c908525c85b57c233
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1801423
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch fixes the calls to nvgpu_mutex_init and
improves the related error handling.
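For example, a previously ignored nvgpu_mutex_init() return value now
feeds the error path (a sketch; the particular lock is illustrative):

  err = nvgpu_mutex_init(&mm->tlb_lock);
  if (err != 0) {
          nvgpu_err(g, "Error in tlb_lock mutex initialization");
          return err;
  }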
JIRA NVGPU-677
Change-Id: I609fa138520cc7ccfdd5aa0e7fd28c8ca0b3a21c
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1805598
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changed the enum gmmu_pgsz_gk20a into macros and updated all of its
uses.
The enum gmmu_pgsz_gk20a was being used in for loops, where it was
compared with an integer. This violates MISRA rule 10.4, which only
allows arithmetic operations on operands of the same essential type
category. Changing this enum into macros fixes the violation.
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a WAR for gm20b that allows us to force the PMU VM to use
128K large pages. For some reason setting the small page size
to 64K breaks the PMU boot; it is unclear why. A bug needs to be
filed and fixed, and once it is, this patch can and should be reverted.
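The shape of the WAR, as a sketch (the per-chip condition and the hook
point in PMU VM setup are assumptions made for illustration):

  /* sketch: choosing the big page size for the PMU VM */
  u32 big_page_size = g->ops.mm.get_default_big_page_size();

  /*
   * WAR: on gm20b a 64K small page size breaks PMU boot, so force
   * this VM to 128K large pages. Revert once the underlying bug is
   * filed and fixed.
   */
  if (force_128k_pmu_vm) {        /* hypothetical per-chip condition */
          big_page_size = SZ_128K;
  }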
Bug 200105199
Change-Id: I2b4c9e214e2a6dff33bea18bd2359c33364ba03f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1782769
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This flag - has_physical_mode - doesn't seem to do much other than
force the PTE/PDE and inst block addresses to be physical instead
of potentially IOMMUed.
There is a reason to do this on Volta (nvlink not being IOMMU'able
being the primary one), but this flag seems too general.
The flag was being enabled on all native platforms. The problem is
that some page tables (the maxwell small page directories) could
be larger than 4KB, which meant the allocation used for them could
potentially be discontiguous. Discontiguous page directories are
obviously incorrect.
This patch deletes the has_physical_mode flag and instead replaces
the places where it was checked with a check for nvlink being
enabled. Since we _do_ want to program physical PDEs and PTEs for
NVLINK devices (regardless of IOMMU status, they always access
memory by physical address), we need a check for NVLINK state.
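The shape of the replacement check, as a sketch (the address helpers
and enable flag follow the existing nvgpu style; treat the exact
pairing as an assumption):

  /* before (sketch): physical addressing forced by the general flag */
  if (g->mm.has_physical_mode) {
          addr = nvgpu_mem_get_phys_addr(g, mem);
  } else {
          addr = nvgpu_mem_get_addr(g, mem);
  }

  /* after (sketch): only NVLINK devices force physical addressing */
  if (nvgpu_is_enabled(g, NVGPU_SUPPORT_NVLINK)) {
          addr = nvgpu_mem_get_phys_addr(g, mem);
  } else {
          addr = nvgpu_mem_get_addr(g, mem);
  }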
Bug 200414723
Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1792163
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The FB fault buffer is enabled at finalize poweron. Disable the buffer
in prepare poweroff. This also eliminates the need to disable the
buffer in fault info mem destroy, which otherwise accesses GPU
registers after they have been locked in prepare poweroff.
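A sketch of the intent, assuming a fault-buffer state HAL along these
lines (the op and constant names are illustrative):

  /* sketch: in prepare poweroff, before GPU registers are locked */
  if (g->ops.fb.fault_buf_set_state_hw != NULL) {
          g->ops.fb.fault_buf_set_state_hw(g, NONREPLAY_REG_INDEX,
                                           FAULT_BUF_DISABLED);
  }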
Bug 200427479
Change-Id: I1ca3e6ed4417847731c09b887134f215a2ba331c
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1776387
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
membar.sys synchronizes with the whole system (GPU and CPU), while
membar.gl synchronizes only within the GPU.
In gv11b, fb flush generates membar.gl instead of membar.sys, which
is an issue. To fix this, the following WAR is used:
1. Use the bar1 engine id and bind it to a particular pdb.
2. Then, instead of an fb_flush, issue a tlb invalidate of the bar1 pdb.
Allocation of the vm for the bar1 instance block and bar1 binding are
now done without checking for bar1 support. Only the bar1 register
mapping is done based on whether bar1 support is enabled.
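A sketch of the resulting flush path (HAL and parameter names follow
the nvgpu ops style; treat them as assumptions):

  /* sketch: gv11b WAR -- membar.sys semantics via a TLB invalidate of the bar1 pdb */
  static int gv11b_mm_fb_flush_war(struct gk20a *g, struct nvgpu_mem *bar1_pdb_mem)
  {
          /* bar1_pdb_mem: memory backing the pdb that bar1 was bound to */
          return g->ops.fb.tlb_invalidate(g, bar1_pdb_mem);
  }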
Bug 2112790
Change-Id: I76f43f1178a68f10823d48bc9da55d2bd686dd52
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1750257
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When choosing the PTE size for a given mapping, the code must
consider whether the GPU is behind an IOMMU. The presence and use
of an IOMMU implies the buffers will appear contiguous to the GPU.
Without an IOMMU we cannot assume that, and therefore must use
small pages regardless of the size of the buffer to be mapped.
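A sketch of the check (nvgpu_iommuable() is the existing helper; the
surrounding function and page-size names are illustrative):

  /* sketch: PTE size selection for a mapping */
  static u32 nvgpu_choose_pte_size(struct gk20a *g, u64 base, u64 size,
                                   u64 big_page_size)
  {
          /*
           * Without an IOMMU the buffer may be physically discontiguous,
           * so big pages cannot be assumed -- fall back to small pages.
           */
          if (!nvgpu_iommuable(g)) {
                  return GMMU_PAGE_SIZE_SMALL;
          }

          if (((base & (big_page_size - 1U)) == 0U) && (size >= big_page_size)) {
                  return GMMU_PAGE_SIZE_BIG;
          }

          return GMMU_PAGE_SIZE_SMALL;
  }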
Bug 2011640
Change-Id: I6c64cbcd8844a7ed855116754b795d949a3003af
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1697891
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The TSG/CHANNEL_SET_PRIORITY IOCTLs are deprecated; user space should
use a combination of timeslice and interleave levels to decide
priority. Hence, remove the IOCTLs and all corresponding APIs.
Jira NVGPU-393
Change-Id: I7cf0785689269536eca0c278c774b0e9e74f8c2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598581
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Simplify the copyengine code by deleting support for the
ce_event_callback feature that has never been used. Similarly, create a
channel without the finish callback to get rid of that Linux dependency,
and delete the finish callback function as it now serves no purpose.
Also delete the submitted_seq_number and completed_seq_number fields,
which are only ever written to.
Jira NVGPU-259
Change-Id: I02d15bdcb546f4dd8895a6bfb5130caf88a104e2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1589320
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move much of the remaining generic MM code to a new common location:
common/mm/mm.c. Also add a corresponding <nvgpu/mm.h> header. This
mostly consists of init and cleanup code to handle the common MM
data structures like the VIDMEM code, address spaces for various
engines, etc.
A few more in-depth changes were made as well.
1. alloc_inst_block() has been added to the MM HAL. This used to be
defined directly in the gk20a code, but it used a register. As a
result, if this register hypothetically changes in the future, it
would need to become a HAL anyway. This patch preempts that and
for now just defines all HALs to use the gk20a version (see the
sketch after this list).
2. Rename as much as possible: global functions are, for the most
part, prefixed with nvgpu (there are a few exceptions which I
have yet to decide what to do with). Functions that are static
are renamed to be as consistent with their functionality as
possible, since in some cases the function's effect and its name
have diverged.
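For item 1, the HAL hookup looks roughly like this (a sketch;
gk20a_alloc_inst_block is the existing gk20a helper):

  /* sketch: inst block allocation becomes a HAL, defaulting to the gk20a code */
  gops->mm.alloc_inst_block = gk20a_alloc_inst_block;

  /* callers then go through the HAL rather than the gk20a function directly */
  err = g->ops.mm.alloc_inst_block(g, &ch->inst_block);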
JIRA NVGPU-30
Change-Id: Ic948f1ecc2f7976eba4bb7169a44b7226bb7c0b5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1574499
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>