Add abstraction of IO aperture accessors. Add new functions
gk20a_io_exists() and gk20a_io_valid_reg() to remove dependencies on
aperture fields from common code.
Implement Linux version of the abstraction by moving gk20a_readl()
and gk20a_writel() to a new Linux-specific io.c. Move the fields
defining the IO aperture to nvgpu_os_linux.
Add t19x-specific IO aperture initialization functions and add a
t19x-specific section to nvgpu_os_linux.
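A minimal sketch of the new accessors, assuming the aperture fields now
live in struct nvgpu_os_linux (field and helper names here are
illustrative, not the exact implementation):

  /* Sketch only: field and helper names are assumptions. */
  bool gk20a_io_exists(struct gk20a *g)
  {
      struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

      return l->regs != NULL;
  }

  bool gk20a_io_valid_reg(struct gk20a *g, u32 r)
  {
      struct nvgpu_os_linux *l = nvgpu_os_linux_from_gk20a(g);

      return r < resource_size(l->reg_mem);
  }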
JIRA NVGPU-259
Change-Id: I09e79cda60d11a20d1099a9aaa6d2375236e94ce
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1569698
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For pre-silicon platforms, clock gating
should be skipped as it is not supported.
Added new flags can_slcg, can_blcg and can_elcg
to check platform capability before programming
SLCG, BLCG and ELCG.
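For illustration, a sketch of how a capability flag might gate the
programming (the surrounding function and platform lookup are
assumptions):

  /* Sketch: skip gating entirely when the platform can't do it. */
  void gk20a_slcg_gr_load_gating_prod(struct gk20a *g, bool prod)
  {
      struct gk20a_platform *platform =
          dev_get_drvdata(dev_from_gk20a(g));

      if (!platform->can_slcg)
          return; /* pre-silicon: SLCG unsupported */

      /* ... program the SLCG gating registers ... */
  }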
Bug 200314250
Change-Id: Iec7564b00b988cdd50a02f3130662727839c5047
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566251
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename get_physical_addr_bits and related functions to something that
more clearly conveys what they are doing. The basic idea of these
functions is to translate from a physical GPU address to an IOMMU GPU
address. To do that a particular bit (that varies from chip to chip)
is added to the physical address.
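Conceptually the translation just ORs the chip-specific bit into the
physical address; a sketch under assumed helper and op names:

  /* Illustrative only: the bit position varies from chip to chip. */
  static inline u64 nvgpu_iommu_translate(struct gk20a *g, u64 phys)
  {
      u32 bit = g->ops.mm.get_iommu_bit(g);

      if (bit == 0U)
          return phys; /* chip is not behind an IOMMU */

      return phys | (1ULL << bit);
  }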
JIRA NVGPU-68
Change-Id: I536cc595c4397aad69a24f740bc74db03f52bc0a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1542966
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The basic nvgpu_mem_sgl implementation provides support
for OS-specific scatter-gather list implementations by
simply copying them node by node. This is inefficient,
taking extra time and memory.
This patch implements an nvgpu_mem_sgt struct to act as
a header which is inserted at the front of any
scatter-gather list implementation. This labels every
struct with a set of ops which can be used to interact
with the attached scatter-gather list.
Since nvgpu common code only has to interact with these
function pointers, any sgl implementation can be used.
Initialization only requires the allocation of a single
struct, removing the need to copy or iterate through the
sgl being converted.
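A hedged sketch of the header-plus-ops idea (field and op names
approximate the description above, not the exact patch):

  /* Sketch: op and field names are approximations. */
  struct nvgpu_mem_sgt;

  struct nvgpu_mem_sgt_ops {
      void *(*sgl_next)(void *sgl);
      u64 (*sgl_phys)(void *sgl);
      u64 (*sgl_dma)(void *sgl);
      u64 (*sgl_length)(void *sgl);
      void (*sgt_free)(struct gk20a *g, struct nvgpu_mem_sgt *sgt);
  };

  struct nvgpu_mem_sgt {
      const struct nvgpu_mem_sgt_ops *ops;
      void *sgl; /* opaque OS-specific scatter-gather list */
  };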
Jira NVGPU-186
Change-Id: I2994f804a4a4cc141b702e987e9081d8560ba2e8
Signed-off-by: Sunny He <suhe@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1541426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The last major item preventing the core MM code in the nvgpu
driver from being platform agnostic is the usage of Linux
scatter-gather tables and scatter-gather lists. These data
structures are used throughout the mapping code to handle
discontiguous DMA allocations and are also overloaded to represent
VIDMEM allocs.
The notion of a scatter-gather table is crucial for a HW device
that can handle discontiguous DMA. The GPU has an MMU which
allows the GPU to do page gathering and present a virtually
contiguous buffer to the GPU HW. As a result, it makes sense
for the GPU driver to use some sort of scatter-gather concept
to maximize memory usage efficiency.
To that end this patch keeps the notion of a scatter gather
list but implements it in the nvgpu common code. It is based
heavily on the Linux SGL concept. It is a singly linked list
of blocks - each representing a chunk of memory. To map or
use a DMA allocation SW must iterate over each block in the
SGL.
This patch implements the most basic level of support for this
data structure. There are certainly easy optimizations that
could be done to speed up the current implementation. However,
this patch's goal is to simply divest the core MM code from
any last Linux'isms. Speed and efficiency come next.
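For example, mapping code can walk the list like this (a sketch under
assumed field names, not the exact implementation):

  /* Sketch: one block per chunk of a discontiguous allocation. */
  struct nvgpu_mem_sgl {
      struct nvgpu_mem_sgl *next; /* singly linked */
      u64 phys;
      u64 dma;
      u64 length;
  };

  static void nvgpu_map_sgl(struct gk20a *g, struct nvgpu_mem_sgl *sgl)
  {
      while (sgl != NULL) {
          /* map [sgl->phys, sgl->phys + sgl->length) here */
          sgl = sgl->next;
      }
  }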
Change-Id: Icf44641db22d87fa1d003debbd9f71b605258e42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530867
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently remove a channel from the TSG list and disable all the
channels in the TSG while removing a channel from the TSG.
With this sequence, if any one channel in a TSG is closed, the rest of
the channels are marked as timed out and cannot be used anymore.
We need to fix this sequence as below to allow removing a channel from
an active TSG so that the rest of the channels can still be used:
- disable all channels of TSG
- preempt TSG
- check if CTX_RELOAD is set, if support is available;
  if CTX_RELOAD is set on the channel, it should be moved to some
  other channel
- check if FAULTED is set, if support is available
- if NEXT is set on the channel, then the channel is still active;
  print an error in this case for the time being, until this is
  properly handled
- remove the channel from runlist
- remove channel from TSG list
- re-enable rest of the channels in TSG
- clean up the channel (same as regular channels)
Add the below fifo operations to support checking channel status:
g->ops.fifo.tsg_verify_status_ctx_reload
g->ops.fifo.tsg_verify_status_faulted
Define the ops.fifo.tsg_verify_status_ctx_reload operation for
gm20b/gp10b/gp106 as gm20b_fifo_tsg_verify_status_ctx_reload().
This API will check if the channel to be released has CTX_RELOAD set;
if yes, CTX_RELOAD needs to be moved to some other channel in the TSG.
Remove the static qualifier from channel_gk20a_update_runlist() and
export it.
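A condensed sketch of the fixed sequence (error handling elided;
signatures and the list helper are approximations, not the exact
code):

  /* Sketch of the teardown order described above. */
  static void tsg_remove_channel(struct gk20a *g, struct tsg_gk20a *tsg,
                                 struct channel_gk20a *ch)
  {
      gk20a_disable_tsg(tsg);
      g->ops.fifo.preempt_tsg(g, tsg->tsgid);

      if (g->ops.fifo.tsg_verify_status_ctx_reload)
          g->ops.fifo.tsg_verify_status_ctx_reload(ch);
      if (g->ops.fifo.tsg_verify_status_faulted)
          g->ops.fifo.tsg_verify_status_faulted(ch);

      channel_gk20a_update_runlist(ch, false); /* off the runlist */
      nvgpu_list_del(&ch->ch_entry);           /* off the TSG list */
      gk20a_enable_tsg(tsg);                   /* re-enable the rest */
  }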
Bug 200327095
Change-Id: I0dd4be7c7e0b9b759389ec12c5a148a4b919d3e2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560637
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add platform-specific operations to enable/disable a TSG and use them
instead of directly calling the enable/disable APIs.
For gm20b/gp106/gp10b, we continue to use gk20a_enable_tsg() and
gk20a_disable_tsg() as the platform-specific operations.
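For illustration, the per-chip wiring might look like this (a sketch;
the exact HAL init code is not shown here):

  /* Sketch: per-chip HAL init points at the generic helpers. */
  gops->fifo.enable_tsg = gk20a_enable_tsg;
  gops->fifo.disable_tsg = gk20a_disable_tsg;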
Bug 1739362
Change-Id: I2dd0f38c8303757e8c7a47d8da0e30a790e514f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1560635
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST causes the host to expire
the current timeslice and reschedule from the front of the runlist.
This can be used with NVGPU_RUNLIST_INTERLEAVE_LEVEL_HIGH to make a
channel start sooner after submit, rather than waiting for natural
timeslice expiration or block/finish of the currently running channel.
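A hedged userspace sketch of a submit using the flag (the args struct
and ioctl follow the nvgpu UAPI naming; details are abbreviated):

  /* Sketch: channel_fd is the open channel device fd. */
  struct nvgpu_submit_gpfifo_args args = {0};

  args.flags |= NVGPU_SUBMIT_GPFIFO_FLAGS_RESCHEDULE_RUNLIST;
  /* ... fill in gpfifo, num_entries and fence as usual ... */
  ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_SUBMIT_GPFIFO, &args);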
Bug 1968813
Change-Id: I632e87c5f583a09ec8bf521dc73f595150abebb0
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: http://git-master/r/#/c/1537198
Reviewed-on: https://git-master.nvidia.com/r/1537198
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The vGPU mapping code in gp10b gives very little debugging info when
there's a failure. This change adds much more detail to the error
printing so that these failures can be debugged more easily without
needing to run the virtual guest locally.
Change-Id: Ibb4412fd4ab322b366f0e08eaa399b7b90ea22c7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1544506
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Remove the mm.get_iova_addr() HAL and replace it with a new HAL
called mm.gpu_phys_addr(). This new HAL provides the real phys
address that should be passed to the GPU from a physical address
obtained from a scatter list. It also provides a mechanism by
which the HAL code can add extra bits to a GPU physical address
based on the attributes passed in. This is necessary during GMMU
page table programming.
Also remove the flags argument from the various address functions.
This flag was used for adding an IO coherence bit to the GPU
physical address, which is not supported.
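A sketch of the new HAL's shape, approximated from the description
above:

  /* Sketch: attrs let chip code OR extra bits into the address
   * during GMMU page table programming. */
  u64 (*gpu_phys_addr)(struct gk20a *g,
                       struct nvgpu_gmmu_attrs *attrs, u64 phys);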
JIRA NVGPU-30
Change-Id: I69af5b1c6bd905c4077c26c098fac101c6b41a33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1530864
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When two or more apps run simultaneously, the first app context sets
the power_on flag and starts init, while other app contexts check the
power_on flag and try to use the GPU before init has completed, which
makes the other apps assert.
Added a mutex to synchronize power-on access.
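A minimal sketch of the fix, assuming a lock in struct gk20a (the lock
name and surrounding function are illustrative):

  /* Sketch: serialize power-on init against concurrent opens. */
  int gk20a_busy(struct gk20a *g)
  {
      int err = 0;

      nvgpu_mutex_acquire(&g->poweron_lock);
      if (!g->power_on)
          err = gk20a_pm_finalize_poweron(dev_from_gk20a(g));
      nvgpu_mutex_release(&g->poweron_lock);

      return err;
  }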
Bug 200297265
Change-Id: Ie138f7f43bb0dd3304ed91ae3649a6a4947bee91
Signed-off-by: skadamati <skadamati@nvidia.com>
Reviewed-on: https://git-master/r/1511436
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Stop defining a per-platform default big page size. It's defined via
the HAL and inherited from gp10b.
JIRA NVGPU-38
Change-Id: If5eedd5d351d5504bdf87489d1aa091d430c43ba
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master/r/1508069
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Convert a few calls from dev_*() logging to nvgpu_*(). This reduces
dependencies on the Linux-specific struct device pointer.
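For example (illustrative message text):

  /* Before: needs the Linux struct device pointer. */
  dev_err(dev_from_gk20a(g), "gr init failed\n");

  /* After: takes struct gk20a directly. */
  nvgpu_err(g, "gr init failed");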
JIRA NVGPU-38
Change-Id: Ib51a6b1287db25b7dd4d164aec3ac75fa2801ebf
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master/r/1507929
GVS: Gerrit_Virtual_Submit
Remove gk20a support. Leave only the gk20a code which is reused by
other GPUs.
JIRA NVGPU-38
Change-Id: I3d5f2bc9f71cd9f161e64436561a5eadd5786a3b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master/r/1507927
GVS: Gerrit_Virtual_Submit
Add vgpu support for ModeE perfbuffers
- VM allocation is handled by the kernel, with final mapping
handled by the RM server
- Enabling/disabling the perfbuffer is handled by the RM server
Bug 1880196
JIRA EVLR-1074
Change-Id: Ifbeb5ede6b07e2e112b930c602c22b66a58ac920
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master/r/1506747
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, callbacks from the PM_QOS framework (for
thermal events) result in an RPC call to set the GPU frequency.
Since the governor will now be responsible for setting the desired
rate, the max PM_QOS callback will now cap the possible
GPU frequency with a new RPC call to the server. The server
is responsible for setting the ultimate frequency
based on the cap and desired rates.
Jira VFND-3699
Change-Id: I806e309c40abc2f1381b6a23f2d898cfe26f9794
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1295543
(cherry picked from commit e81693c6e087f8f10a985be83715042fc590d6db)
Reviewed-on: http://git-master/r/1282467
(cherry picked from commit 7b4e0db647572e82a8d53e823c36b465781f4942)
Reviewed-on: http://git-master/r/1321836
(cherry picked from commit 57dafc08a5)
Reviewed-on: http://git-master/r/1313469
Tested-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add devfreq governor support in order to allow frequency scaling in
the virtualization config. GPU clock frequency operations are
redirected to the server over RPC.
Bug 200237433
Change-Id: I1c8e565a4fff36d3456dc72ebb20795b7822650e
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1295542
(cherry picked from commit d5c956fc06697eda3829c67cb22987e538213b29)
Reviewed-on: http://git-master/r/1280968
(cherry picked from commit 25e2b3cf7cb5559a6849c0024d42c157564a9be2)
Reviewed-on: http://git-master/r/1321835
(cherry picked from commit f871b52fd3)
Reviewed-on: http://git-master/r/1313468
Tested-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
If the platform probe fails as a result of DEFER_PROBE, it should be
reported via dev_info instead of dev_err.
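The resulting pattern looks roughly like this (a sketch, not the exact
diff; the call site is illustrative):

  int err = gk20a_platform_probe(dev); /* illustrative call site */

  if (err == -EPROBE_DEFER)
      dev_info(dev, "platform probe deferred\n");
  else if (err)
      dev_err(dev, "platform probe failed: %d\n", err);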
Bug 1926777
Change-Id: Iba4392abdd6089da9678695b8ee7f2c92bea1505
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: http://git-master/r/1492711
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
In order to perform timestamp correlation for FECS
traces, we need to collect CPU / GPU timestamp
samples. In the virtualization case, it is possible for
a guest to get GPU timestamps by using read_ptimer.
However, if the CPU timestamp is read on the guest side
and the GPU timestamp is read on the vm-server side,
the round trip introduces latency that creates an
artificial offset in the GPU timestamps (~2 us on
average). For better CPU / GPU timestamp correlation,
added a command to collect all timestamps on the
vm-server side.
Bug 1900475
Change-Id: Idfdc6ae4c16c501dc5e00053a5b75932c55148d6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1472447
(cherry picked from commit 56f56b5cd9)
Reviewed-on: http://git-master/r/1489183
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To make do_idle work when nvgpu is built as a module, reverse the
order of call dependencies for do_idle. Don't provide visible
gk20a_do_{idle,unidle}() functions to the kernel; instead, call the
kernel to register and unregister pointers to them when the driver
loads and unloads.
Refactor the internal __gk20a_do_{idle,unidle} functions to take a
struct gk20a * instead of a struct device *, and use the callback API
to provide that g instead of retrieving the platform device from the
device tree.
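A sketch of the reversed dependency, assuming a kernel-side
registration API; the registration function and the
nvgpu_do_{idle,unidle}_cb wrappers around __gk20a_do_{idle,unidle} are
hypothetical names:

  /* Sketch: kernel holds the pointers; the module supplies them. */
  void gk20a_register_idle_callbacks(int (*do_idle)(void *),
                                     int (*do_unidle)(void *),
                                     void *data);

  /* on driver load: */
  gk20a_register_idle_callbacks(nvgpu_do_idle_cb,
                                nvgpu_do_unidle_cb, g);
  /* on driver unload: */
  gk20a_register_idle_callbacks(NULL, NULL, NULL);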
Bug 200290850
Change-Id: Ibef8b069302e547b298069cbb97734f461a10cc3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1493774
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Also remove the deprecated TEGRA_VGPU_ATTRIB_*, but leave a
placeholder in case someone wants to use this command in the future.
Jira VFND-3796
Change-Id: Ic36a59db238d276b0e3dd68a9d8ec5834a04333d
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1457497
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>