This flag - has_physical_mode - doesn't seem to do much other than
force the PTE/PDE and inst block addresses to be physical instead
of potentially IOMMU'ed.
There is a reason to do this on Volta (NVLINK not being IOMMU'able
being the primary one), but the flag is too general.
The flag was being enabled on all native platforms. The problem is
that some page tables (the Maxwell small page directories) can be
larger than 4KB, which means the allocation used for them can be
discontiguous. Discontiguous page directories are obviously
incorrect.
This patch deletes the has_physical_mode flag and replaces the
places where it's checked with a check for NVLINK being enabled.
Since we _do_ want to program physical PDEs and PTEs for NVLINK
devices (regardless of IOMMU status they always access memory by
physical address) we need a check for NVLINK state.
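A hedged sketch of the replacement check, assuming nvgpu's
nvgpu_is_enabled() flag helper and the nvgpu_mem address getters
(the exact call sites vary per hunk):

    /* NVLINK devices always access memory by physical address, so
     * program physical PDE/PTE addresses only for them. */
    if (nvgpu_is_enabled(g, NVGPU_SUPPORT_NVLINK))
        addr = nvgpu_mem_get_phys_addr(g, mem);
    else
        addr = nvgpu_mem_get_addr(g, mem);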
Bug 200414723
Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1792163
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move implementation of priv_ring HAL to common/priv_ring. Implement
two new HAL APIs to remove illegal dependencies: enable_priv_ring and
enum_ltc.
As enum_ltc can be implemented only from gm20b onwards, bump the
gk20a implementation to be based on gm20b.
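A hypothetical sketch of the two new HAL entries (the surrounding
gpu_ops layout is assumed, not copied from the patch):

    struct gpu_ops {
        struct {
            void (*enable_priv_ring)(struct gk20a *g);
            u32 (*enum_ltc)(struct gk20a *g);
        } priv_ring;
    };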
JIRA NVGPU-964
Change-Id: I160c2216132aadbcd98bb4a688aeeb2c520a9bc0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1797025
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The CBC base needs to be aligned to 64KB. On Linux this is
achieved by making the compbit backing size a multiple of 64KB.
However, the QNX nvmap alloc function does not allocate memory
aligned to the requested size and needs to overallocate to satisfy
the alignment requirement. Make the cbc alloc function OS specific
so the QNX code can be modified independently.
Also align the cbc base address to 64KB before writing it to the
CBC BASE register.
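A minimal sketch of the alignment step, assuming Linux-style
ALIGN()/SZ_64K macros and nvgpu's LTC register accessors:

    /* Round the backing store address up to the 64KB boundary the
     * hardware requires before programming CBC BASE. */
    u64 cbc_base = ALIGN(compbit_backing_addr, SZ_64K);
    gk20a_writel(g, ltc_ltcs_ltss_cbc_base_r(),
                 (u32)(cbc_base >> ltc_ltcs_ltss_cbc_base_alignment_shift_v()));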
Bug 200426427
Change-Id: Ic867501403f2e2a4ba41ad5a8ed6f9c5c8ffa3f4
Signed-off-by: Aparna Das <aparnad@nvidia.com>
(cherry picked from commit 3f1e1133a46ebfc9763c649d7b839d069cae5a36)
Reviewed-on: https://git-master.nvidia.com/r/1786046
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
clk_arb.h and gk20a.h have a circular dependency on each other. This
is removed by forward declaring struct gk20a in clk_arb.h and
removing the header gk20a.h from clk_arb.h, and similarly by forward
declaring struct nvgpu_clk_arb in gk20a.h and removing the header
clk_arb.h from gk20a.h, along with putting the headers in every
execution unit which calls clk_arb.h related methods.
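A sketch of the forward declarations that replace the mutual
includes:

    /* clk_arb.h: no longer includes gk20a.h */
    struct gk20a;

    /* gk20a.h: no longer includes clk_arb.h */
    struct nvgpu_clk_arb;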
JIRA NVGPU-597
Change-Id: I7cedca17206c148b21d93e5d7f0d88c2f98b979a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1790915
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Renamed "struct pmu_queue" to "struct
nvgpu_falcon_queue" & moved to falcon.h
-Renamed pmu_queue_* functions to flcn_queue_* &
moved to new file falcon_queue.c
-Created ops for queue functions in struct
nvgpu_falcon_queue to support different queue
types like DMEM/FB-Q (see the sketch after this list).
-Created ops in nvgpu_falcon_engine_dependency_ops
to add engine specific queue functionality & assigned
correct HAL functions in hal*.c file.
-Made changes in dependent functions as needed to replace
struct pmu_queue & to call queue functions through the
nvgpu_falcon_queue data structure.
-Replaced input param "struct nvgpu_pmu *pmu" with
"struct gk20a *g" for pmu ops pmu_queue_head/pmu_queue_tail
& also for functions gk20a_pmu_queue_head()/
gk20a_pmu_queue_tail().
-Made changes in nvgpu_pmu_queue_init() to use nvgpu_falcon_queue
for PMU queue.
-Modified Makefile to include falcon_queue.o
-Modified Makefile.sources to include falcon_queue.c
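A hypothetical sketch of the per-queue ops (field names assumed, not
verbatim from the patch):

    struct nvgpu_falcon_queue {
        u32 id;            /* logical queue id */
        u32 index;         /* physical queue index */

        /* ops filled in per queue type (DMEM vs. FB-Q): */
        int (*push)(struct nvgpu_falcon *flcn,
                    struct nvgpu_falcon_queue *queue,
                    void *data, u32 size);
        int (*pop)(struct nvgpu_falcon *flcn,
                   struct nvgpu_falcon_queue *queue,
                   void *data, u32 size, u32 *bytes_read);
    };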
Change-Id: I956328f6631b7154267fd5a29eaa1826190d99d1
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1776070
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The use of the _THIS_IP_ macro in nvgpu introduces two separate
MISRA Rule 11.6 violations.
The first is when the label address (which gcc generates as a
void *) is cast to an unsigned long; the second is when that
unsigned long is cast back to a void * in the timer and kmem code
that track the value.
Skipping the intermediate unsigned long eliminates both
violations. To do this, references to _THIS_IP_ are replaced
with a new (compliant) _NVGPU_GET_IP_ macro.
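A sketch of a compliant macro that keeps the label address a void *
throughout (the exact definition in the patch is assumed):

    #define _NVGPU_GET_IP_ \
        ({ __label__ __here__; __here__: &&__here__; })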
JIRA NVGPU-895 : MISRA Rule 11.6 violations
Change-Id: I5ea999d8e2b467257fa190b485fa971adcbd0a2b
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1774531
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the current code, gk20a.h includes io.h, which gets directly
included in a lot of other files. io.h contains methods which use a
struct gk20a as a parameter, leading to a circular dependency
between io.h and gk20a.h. This is mitigated by removing io.h from
gk20a.h, as part of a larger effort to move gk20a.h to nvgpu/gk20a.h.
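A sketch of how io.h can stand alone after the change, assuming its
accessors keep signatures like these:

    struct gk20a;  /* forward declaration instead of gk20a.h */

    u32 nvgpu_readl(struct gk20a *g, u32 r);
    void nvgpu_writel(struct gk20a *g, u32 r, u32 v);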
JIRA NVGPU-597
Change-Id: I93e504fa9371b88152737b342a75580c65e8f712
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1787316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to avoid circular dependencies,
rearrange the static inline functions from the
gk20a.h file.
Moved the gk20a_gr_flush_channel_tlb function to
gr_gk20a.c and removed the #include gr_gk20a.h
from gk20a.h.
Added a helper header, utils.h, to collect all
generic static inline functions which have no
reference to GPU related structures.
ptimer related functions are moved to ptimer.h.
Implementations for as and pmu are moved to the
corresponding files.
JIRA NVGPU-624
Change-Id: I4e956326e773ba037bf3a1696cc4c462085dbbe5
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1781941
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The FB fault buffer is enabled at finalize poweron. Disable the
buffer in prepare poweroff. This also eliminates the need to disable
the buffer in fault info mem destroy, which otherwise accesses GPU
registers after they are locked in prepare poweroff.
Bug 200427479
Change-Id: I1ca3e6ed4417847731c09b887134f215a2ba331c
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1776387
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- create common file common/ecc.c which includes common functions
for adding and removing ecc counters.
- the common code keeps a list of all counters, which makes it
easier to iterate over all counters.
- add chip specific files for adding ecc counters.
- add linux specific file os/linux/ecc_sysfs.c to export counters to
sysfs.
- remove obsolete code.
- the MISRA violation for using snprintf is not solved; tracking with
jira NVGPU-859
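A hypothetical sketch of the common counter bookkeeping (field names
assumed, not verbatim from the patch):

    struct nvgpu_ecc_stat {
        char name[64];                 /* sysfs-visible counter name */
        u32 counter;                   /* accumulated error count */
        struct nvgpu_list_node node;   /* linkage in the per-GPU list */
    };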
Jira NVGPUT-115
Change-Id: I1905c43c5c9b2b131199807533dee8e63ddc12f4
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1763536
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit 0b02c8589d.
The change was originally reverted because it made the ap_compute
test fail on embedded-qnx-hv e3550-t194. Fixes related to replacing
tsg preempt with runlist preempt during teardown, setting the preempt
timeout to 100 ms (earlier this was 1000 ms for t194 and 3000 ms for
legacy chips), and not issuing preempt timeout recovery if preempt
fails helped resolve the issue.
Bug 200426402
Change-Id: If9a68d028a155075444cc1bdf411057e3388d48e
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1762563
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
An LTC register write is now followed by a register read, and if the
data doesn't match the code will report the error.
Renamed the existing nvgpu_writel_check function to
nvgpu_writel_loop, as it loops until the write succeeds. The new
nvgpu_writel_check function writes, reads back, and compares the
data.
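A sketch of the two helpers after the rename (bodies assumed, only
the names come from the message):

    /* Write once, read back, and report any mismatch. */
    void nvgpu_writel_check(struct gk20a *g, u32 r, u32 v)
    {
        nvgpu_writel(g, r, v);
        if (nvgpu_readl(g, r) != v)
            nvgpu_err(g, "reg 0x%08x readback mismatch", r);
    }

    /* Keep writing until the readback matches. */
    void nvgpu_writel_loop(struct gk20a *g, u32 r, u32 v)
    {
        do {
            nvgpu_writel(g, r, v);
        } while (nvgpu_readl(g, r) != v);
    }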
Bug 2039150
Change-Id: I0a49be36aad23936f2d58aa82872710827da1d32
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1762344
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Created common falcon function nvgpu_flcn_bl_bootstrap() to
bootstrap the falcon bootloader.
-Created HAL gk20a_falcon_bl_bootstrap() which does the actual
bootloader bootstrap by fetching parameters and loading
code/parameters as needed.
-Created HAL op bl_bootstrap under nvgpu_falcon_ops.
-Created struct nvgpu_falcon_bl_info to hold the info the caller
passes to the common function (see the sketch after this list).
-Removed the falcon bootstrap code duplicated in multiple files &
made changes to fill struct nvgpu_falcon_bl_info & call
nvgpu_flcn_bl_bootstrap().
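A hypothetical layout of the bootloader info struct (field names
assumed):

    struct nvgpu_falcon_bl_info {
        void *bl_src;       /* bootloader image */
        u32 bl_size;        /* image size in bytes */
        void *bl_desc;      /* bootloader descriptor/parameters */
        u32 bl_desc_size;   /* descriptor size in bytes */
        u32 bl_start_tag;   /* IMEM tag to start execution from */
    };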
Change-Id: Iee275233915ff11f9afb5207ac0c3338ca9dacc1
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1756104
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Commit c61e21c868 fixed a race condition in the PMU
state transition.
- The race condition is such that the PMU response (intr callback
for messages) can run faster than the kthread posting commands to
the PMU, and thus the PMU message callback may skip an important pmu
state change.
- Commit c61e21c868 introduced a fix where the PMU state change was
only updated from the callback, while other places can only update
the pmu_state variable.
- However, this commit introduced a regression as follows:
- When the PMU state is PMU_STATE_INIT_RECEIVED, we loop over every
engine supported by the GPU. If state == PMU_STATE_INIT_RECEIVED,
change the state to PMU_STATE_ELPG_BOOTING and init ELPG; if state
!= PMU_STATE_INIT_RECEIVED, throw an error saying "PMU INIT not
received".
- Now, if the GPU supports multiple engines, the first engine will
check that pmu_state is PMU_STATE_INIT_RECEIVED and change it to
PMU_STATE_ELPG_BOOTING. From the second engine onwards, since the
state has already changed to PMU_STATE_ELPG_BOOTING, all engines
except the first start throwing the error "PMU INIT not received".
- This patch fixes the issue by changing the pmu state from
PMU_STATE_INIT_RECEIVED to PMU_STATE_ELPG_BOOTING only once.
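A sketch of the fix, hoisting the one-time transition out of the
per-engine loop (loop shape and helper names assumed):

    /* Transition exactly once, before the per-engine ELPG init. */
    if (pmu->pmu_state != PMU_STATE_INIT_RECEIVED) {
        nvgpu_err(g, "PMU INIT not received");
        return -EINVAL;
    }
    nvgpu_pmu_state_change(g, PMU_STATE_ELPG_BOOTING, false);

    for (i = 0; i < num_engines; i++)
        pmu_init_powergating_engine(g, i); /* no state check here */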
Bug 200372838
JIRA EVLR-2164
Change-Id: Ic8c954d14acb1d6ec3adcbc4bcf4d4745542d9f0
Signed-off-by: Deepak Bhosale <dbhosale@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1769814
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The nvgpu_mem_rd*() functions were implemented per OS. They also used
nvgpu_pramin_access_batched() and implemented a big portion of the
logic for using PRAMIN in OS specific code.
Make the implementation of the functions generic. Move all PRAMIN
logic to the PRAMIN code and simplify the interface it provides.
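A sketch of a simplified PRAMIN read/write interface (signatures
assumed, not verbatim from the patch):

    void nvgpu_pramin_rd_n(struct gk20a *g, struct nvgpu_mem *mem,
                           u32 start, u32 size, void *dest);
    void nvgpu_pramin_wr_n(struct gk20a *g, struct nvgpu_mem *mem,
                           u32 start, u32 size, void *src);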
Change-Id: I1acb9e8d7d424325dc73314d5738cb2c9ebf7692
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1753708
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_mem_get_addr() returns a virtual or physical address depending
on the platform. But we need to explicitly use physical addresses to
configure PCI simulation support, since the simulator expects
physical addresses only.
Hence use nvgpu_mem_get_phys_addr() explicitly to configure the
msg/send/recv buffers needed for PCI simulation support.
Jira NVGPUT-41
Change-Id: I6870feef35fe81d43189fa048dc2f7052926bcc4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1756843
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To finish OS unification of the submit path, move the
gk20a_submit_channel_gpfifo* functions to a file that's accessible
also outside the Linux code.
Also change the prefix of the submit functions from gk20a_ to nvgpu_.
Jira NVGPU-705
Change-Id: I8ca355d1eb69771fb016c7a21fc7f102ca7967d7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1760421
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the GR engine reset during ELPG entry.
The engine reset causes the clock gating logic to be reset, so clock
gating gets disabled during the ELPG entry sequence. This leads to
the higher power numbers observed at light graphics.
Removing the GR reset during ELPG entry helped save power.
Bug 2180198
Change-Id: I957951eb93f9d044f4d9a908f2b56a4903dfbfad
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1757695
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
membar.sys synchronizes with the whole system (GPU and CPU), while
membar.gl synchronizes within the GPU.
In gv11b, fb flush generates a membar.gl instead of a membar.sys,
which is an issue. To fix this issue, the following WAR is used:
1. Use the bar1 engine id and bind it to a particular pdb.
2. Then, instead of an fb_flush, issue a tlb invalidate of the bar1
pdb.
Allocation of the vm for the bar1 instance block and bar1 binding is
now done without a check for bar1 support. Only the bar1 register
mapping is done based on bar1 support being enabled.
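A sketch of the WAR's flush replacement (the HAL op path is
assumed):

    /* Instead of g->ops.mm.fb_flush(g), invalidate the bar1 pdb: */
    g->ops.fb.tlb_invalidate(g, g->mm.bar1.vm->pdb.mem);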
Bug 2112790
Change-Id: I76f43f1178a68f10823d48bc9da55d2bd686dd52
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1750257
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When generating the PTE size for a given mapping, the code must
consider whether the GPU is being IOMMU'ed. The presence and
usage of an IOMMU implies the buffers will appear contiguous
to the GPU. Without an IOMMU we cannot assume that and therefore
must use small pages regardless of the size of the buffer to
be mapped.
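A sketch of the decision (identifier names assumed):

    /* Without an IOMMU a large buffer may be discontiguous from the
     * GPU's point of view, so fall back to small pages. */
    if (!nvgpu_iommuable(g))
        return GMMU_PAGE_SIZE_SMALL;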
Bug 2011640
Change-Id: I6c64cbcd8844a7ed855116754b795d949a3003af
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1697891
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an OS-abstracted API for printing the name of the current process
into a log message, and convert the single occurrence of
current->comm in the submit path power failure message to use it.
Jira NVGPU-705
Change-Id: I1a509dcc5aecc3c89ce4582733888081b3e38f1f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1749833
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The watchdog tracks wall clock time. If the GPU's runlist is heavily
congested, other work can last long enough to trigger the watchdog
even for trusted kernel channels.
We don't expect the CDE work to ever get stuck, so disable the wdt
there.
Bug 200311892
Change-Id: I58c7d23891bc73aaeea0ccfcead567b3c6c13a52
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1493814
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make the ecc sysfs hash table per GPU by adding it as part of
nvgpu_os_linux. Using a single hash table can give incorrect results,
as GPUs have the same filenames and a filename is used as the key for
a lookup.
Add device_attribute as part of struct gk20a_ecc_stat. Using a single
array of device attribute pointers for an ecc_stat results in memory
leaks and incorrect stats if multiple GPUs are present on the system:
such an array always holds the info for the GPU which created its
sysfs nodes last. Fix this by making the device attribute array per
ecc stat per GPU.
Fix ecc stat removal to consider zero sub-units for a given number of
hwunits. The multiplication by zero resulted in not removing any
sysfs node at all.
Bug 1987855
Change-Id: Ifcacc5623cede8decfe228c02d72786337cd0876
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1735989
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a remove_gr_sys() op to gpu_ops to reverse the steps
done in create_gr_sysfs().
Make gv11b_tegra_remove() specific to gv11b so the sysfs
nodes are properly removed. This also allows gv11b specific
remove steps.
Also, update the platform remove function of the dGPU, i.e.
nvgpu_pci_tegra_remove(), to remove sysfs nodes. This adds
parity with the iGPU platform remove.
Bug 1987855
Change-Id: Ibbaffac5c24346709347f86444a951461894354d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1735987
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>