Nvgpu uses many ways to check whether syncpoints are enabled. The four
ways used to be:
platform->has_syncpoints
g->has_syncpoints
nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS)
gk20a_platform_has_syncpoints()
This patch standardizes all usage on nvgpu_has_syncpoints(), which is
based on gk20a_platform_has_syncpoints() - just renamed to be general
to nvgpu.
All usage of the other forms has now been consolidated. However,
under the hood nvgpu_has_syncpoints() still checks the is_enabled
flag. This flag is now set where g->has_syncpoints used to be set
based on the platform data.
The basic dependency chain is this:
nvgpu_has_syncpoints -> NVGPU_HAS_SYNCPOINTS ->
platform->has_syncpoints
However, note that there are several places where syncpoints can be
disabled if some other driver's initialization fails (for example,
host1x). Also note that nvgpu_has_syncpoints() considers a disable
variable set through debugfs.
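A minimal sketch of what the consolidated helper can look like,
assuming the debugfs disable knob lives in the Linux-specific per-GPU
struct (the disable_syncpoints field name and its location are
assumptions):

    /*
     * Sketch only: mirrors the dependency chain above. The enabled
     * flag is derived from platform->has_syncpoints at probe time,
     * and a debugfs knob can still force syncpoints off.
     */
    bool nvgpu_has_syncpoints(struct gk20a *g)
    {
        struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

        return nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS) &&
                !l->disable_syncpoints;
    }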
Bug 2327574
Change-Id: Ia2375a80f5f2e27285e6175568dd13e6bb25fd33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803975
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Disable ELCG as it is not POR for GV100.
Disable it in the platform data for SKU250.
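A minimal sketch of the kind of platform-data setting this implies,
assuming the GV100 SKU250 platform struct carries an enable_elcg
field (the struct and field names are assumptions):

    /* Sketch only: ELCG is kept off via platform data, not at runtime. */
    static struct gk20a_platform gv100_sku250_platform = {
        .enable_elcg = false,   /* ELCG is not POR for GV100 SKU250 */
    };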
Bug 200446261
Change-Id: I70bddf450c7e41e91498c613f315e0c82ac5e8e2
Signed-off-by: absalam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1828022
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
1) Update the header path of gk20a.h for files present in os/
to <nvgpu/gk20a.h>.
2) os_fence_android_sema.c was indirectly dependent on gk20a.h via
semaphore.h. So, add #include <nvgpu/gk20a.h> to
os_fence_android_sema.c and replace the header include in semaphore.h
with a forward declaration of struct gk20a (see the sketch after this
list).
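A minimal sketch of the forward-declaration pattern used here (the
exact surrounding code is illustrative of the description above):

    /* In semaphore.h: the full gk20a.h include is no longer needed. */
    struct gk20a;   /* forward declaration is enough for pointer use */

    /* In os_fence_android_sema.c: take the dependency explicitly. */
    #include <nvgpu/gk20a.h>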
Jira NVGPU-597
Change-Id: I96e23befeb80713f3a399071eb5498f6f580211d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1842868
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This flag - has_physical_mode - doesn't seem to do much other than
force the PTE/PDE and inst block addresses to be physical instead
of potentially IOMMU'ed.
There is a reason to do this on Volta (nvlink not being IOMMU'able
being the primary reason), but this flag seems too general.
The flag was being enabled on all native platforms. The problem is
that some page tables (the Maxwell small page directories) could
be larger than 4KB, which meant that the allocation used for them
could potentially be discontiguous. Discontiguous page directories
are obviously incorrect.
This patch deletes the has_physical_mode flag and instead replaces
the places where it's checked with a check for nvlink being
enabled. Since we _do_ want to program physical PDEs and PTEs for
NVLINK devices (regardless of IOMMU status, they always access
memory by physical address) we need a check for NVLINK state.
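A minimal sketch of the replacement check, assuming the
NVGPU_SUPPORT_NVLINK enable flag and the nvgpu_mem address helpers
(the pd_gpu_addr() wrapper itself is hypothetical):

    /*
     * Sketch only: NVLINK devices always access memory by physical
     * address, so bypass the IOMMU'ed DMA address in that case.
     */
    static u64 pd_gpu_addr(struct gk20a *g, struct nvgpu_mem *mem)
    {
        if (nvgpu_is_enabled(g, NVGPU_SUPPORT_NVLINK))
                return nvgpu_mem_get_phys_addr(g, mem);

        return nvgpu_mem_get_addr(g, mem);
    }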
Bug 200414723
Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1792163
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Create a common file, common/ecc.c, which includes common functions
  for adding and removing ECC counters.
- The common code creates a list of all counters, which makes it
  easier to iterate over all of them (see the sketch after this list).
- Add chip-specific files for adding ECC counters.
- Add a Linux-specific file, os/linux/ecc_sysfs.c, to export the
  counters to sysfs.
- Remove obsolete code.
- The MISRA violation for using snprintf is not solved; it is tracked
  with Jira NVGPU-859.
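A minimal sketch of a listed counter; the struct layout and field
names are assumptions, the point is only that each counter carries a
list node so the common code can walk every counter generically:

    /* Sketch only: an ECC counter that common code can keep on a list. */
    struct nvgpu_ecc_stat {
        char name[64];                  /* name exported via sysfs */
        u32 counter;                    /* current error count */
        struct nvgpu_list_node node;    /* links all counters together */
    };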
Jira NVGPUT-115
Change-Id: I1905c43c5c9b2b131199807533dee8e63ddc12f4
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1763536
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Runtime PM is enabled only for iGPU and not for dGPU. For dGPU,
the driver's .probe() calls pm_runtime_disable() if rail-gating is
not enabled. With nvgpu kernel module load/unload, .probe() is
called multiple times for the same struct device *. This overflows
disable_depth (a 3-bit refcount), which re-enables runtime PM on the
8th iteration and causes RTPM routines to be called even though it
is meant to be disabled.
To manage pm_runtime_disable() effectively, move it from the common
nvgpu_remove() to the iGPU/dGPU-specific routines.
Also, restore the pm_runtime state of the device on driver .remove().
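A minimal sketch of the restore step in the removal path
(pm_runtime_enabled()/pm_runtime_enable() are the standard Linux
runtime PM calls; the wrapper function is illustrative):

    /*
     * Sketch only: if probe left runtime PM disabled for this device,
     * rebalance it on remove so disable_depth does not keep growing
     * across repeated module load/unload cycles.
     */
    static void nvgpu_restore_rtpm_state(struct device *dev)
    {
        if (!pm_runtime_enabled(dev))
                pm_runtime_enable(dev);
    }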
Bug 1987855
Change-Id: I781278da546ef9c9ef7d7da7dbea0757df32716f
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1770804
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
While removing the nvgpu driver, devm-mapped reg mappings are
released on driver_unregister. For iGPU, these regs are also
explicitly unmapped with iounmap(). This double release results in
"Trying to vfree() nonexistent vm area" warnings on driver removal.
Address this by using the devm_* variants to map all IO regions
of both iGPU and dGPU and letting driver unregister release these
mappings.
Also, lock out GPU regs in the driver removal path.
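A minimal sketch of the managed mapping pattern this moves to
(devm_ioremap_resource() is the standard managed API; the wrapper and
its call sites are illustrative):

    /*
     * Sketch only: a devm-managed mapping is released automatically
     * when the driver is unbound, so no explicit iounmap() is needed
     * (or wanted) in the removal path.
     */
    static void __iomem *nvgpu_map_regs(struct platform_device *pdev,
                                        int index)
    {
        struct resource *r =
                platform_get_resource(pdev, IORESOURCE_MEM, index);

        return devm_ioremap_resource(&pdev->dev, r);
    }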
Bug 1987855
Change-Id: I0388daf90bea3eaf8752255059cfd3ceabf66e7d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730539
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>