This ioctl can be used on gp10b to set a flag in the context header
indicating that this context should run at an elevated clock
frequency. The FECS ctxsw ucode reads this flag as part of the context
switch and requests higher GPU clock frequencies from BPMP for the
duration of the context's execution.
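For illustration, a minimal user-space sketch of driving such an ioctl is
below; the request code, argument struct and device node are assumptions,
not the actual nvgpu UAPI.

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/ioctl.h>
  #include <unistd.h>

  /* Hypothetical names; the real definitions live in the nvgpu UAPI header. */
  struct boosted_ctx_args {
      unsigned int boost;     /* 1 = run this context at elevated clocks */
      unsigned int padding;
  };
  #define CHANNEL_SET_BOOSTED_CTX _IOW('H', 0x70, struct boosted_ctx_args)

  int main(void)
  {
      struct boosted_ctx_args args = { .boost = 1 };
      int fd = open("/dev/nvhost-gpu", O_RDWR);   /* assumed channel node */

      if (fd < 0 || ioctl(fd, CHANNEL_SET_BOOSTED_CTX, &args) < 0)
          perror("set boosted ctx");
      if (fd >= 0)
          close(fd);
      return 0;
  }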
Bug 1819874
Change-Id: I84bf580923d95585095716d49cea24e58c9440ed
Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-on: http://git-master/r/1292746
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The guest does not explicitly send a command to the RM server
to invalidate the TLB; that is done implicitly when mapping
or unmapping a buffer. Remove support for this call.
Bug 1665111
Change-Id: Icf2edae7feffa35b1dbf87c227b3e98b506e6519
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: http://git-master/r/1287728
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Change the prefix in the semaphore code to 'nvgpu_' since this code
is global to all chips.
Bug 1799159
Change-Id: Ic1f3e13428882019e5d1f547acfe95271cc10da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1284628
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Move semaphore_gk20a.c to drivers/gpu/nvgpu/common/ since the semaphore
code is common to all chips.
Move the semaphore_gk20a.h header file to drivers/gpu/nvgpu/include/nvgpu
and rename it to semaphore.h. Also update all places where the header
is included to use the new path.
This revealed an odd location for the enum gk20a_mem_rw_flag. This should
be in the mm headers. As a result many places that did not need anything
semaphore related had to include the semaphore header file. Fixing this
oddity allowed the semaphore include to be removed from many C files that
did not need it.
Bug 1799159
Change-Id: Ie017219acf34c4c481747323b9f3ac33e76e064c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1284627
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Move nvgpu_common.c to drivers/gpu/nvgpu/common since it is a
C file common to all drivers.
Similarly move nvgpu_common.h to drivers/gpu/nvgpu/include/nvgpu since
this follows the new include guidelines.
Bug 1799159
Change-Id: I00ebed289973b27704c2cff073526e36505bf699
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1284612
Reviewed-by: Varun Colbert <vcolbert@nvidia.com>
Tested-by: Varun Colbert <vcolbert@nvidia.com>
Add a HAL for enabling and disabling the shadow ROM. This removes the
XVE dependency from the BIOS code.
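Roughly, the indirection looks like the sketch below; the op and function
names are assumptions, not the actual gpu_ops layout.

  struct gk20a;

  /* Hypothetical per-chip ops; BIOS code calls these instead of XVE directly. */
  struct xve_rom_ops {
      void (*disable_shadow_rom)(struct gk20a *g);
      void (*enable_shadow_rom)(struct gk20a *g);
  };

  static void bios_read_rom(struct gk20a *g, const struct xve_rom_ops *ops)
  {
      ops->disable_shadow_rom(g);
      /* ... parse the ROM image here ... */
      ops->enable_shadow_rom(g);
  }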
Change-Id: Icafec72dae71669376bbfb97077661b7165badb8
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1302223
Instead of using a single static kmem_cache for each type of
data structure the allocators may want to allocate, each
allocator now has its own instance of the kmem_cache. This is
done so that each GPU driver instance can accurately track how
much memory it is using.
In order to support this on older kernels a new NVGPU API has
been added,
nvgpu_kmem_cache_create(struct gk20a *g, size_t size)
to handle the possibility that caches cannot be created with
the same name.
This patch also fixes numerous places where kfree() was wrongly
used to free kmem_cache allocations.
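A sketch of how such a wrapper could be built on top of
kmem_cache_create(); apart from the nvgpu_kmem_cache_create() signature
above, the type layout and naming scheme here are assumptions.

  #include <linux/atomic.h>
  #include <linux/kernel.h>
  #include <linux/slab.h>

  struct gk20a;

  struct nvgpu_kmem_cache {
      struct gk20a *g;
      struct kmem_cache *cache;
      char name[64];
  };

  static atomic_t nvgpu_cache_id = ATOMIC_INIT(0);

  struct nvgpu_kmem_cache *nvgpu_kmem_cache_create(struct gk20a *g, size_t size)
  {
      struct nvgpu_kmem_cache *c = kzalloc(sizeof(*c), GFP_KERNEL);

      if (!c)
          return NULL;

      c->g = g;
      /* Unique name per cache: older kernels may refuse to create two
       * caches with the same name, so tag each with a running id. */
      snprintf(c->name, sizeof(c->name), "nvgpu-%zu-%d",
               size, atomic_inc_return(&nvgpu_cache_id));
      c->cache = kmem_cache_create(c->name, size, 0, 0, NULL);
      if (!c->cache) {
          kfree(c);
          return NULL;
      }
      return c;
  }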
Bug 1799159
Bug 1823380
Change-Id: Id674f9a5445fde3f95db65ad6bf3ea990444603d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1283826
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Make sure that struct class is at least forward declared so that
nvgpu_common.h can be included from anywhere with no dependencies.
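The idea, sketched; the prototype is illustrative rather than the exact
one in nvgpu_common.h.

  /* nvgpu_common.h: a forward declaration is enough for a pointer
   * parameter, so the header need not pull in <linux/device.h>. */
  struct class;
  struct gk20a;

  int nvgpu_probe(struct gk20a *g, const char *debugfs_symlink,
                  const char *interface_name, struct class *class);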
Bug 1799159
Bug 1823380
Change-Id: Id8feaa5fd456f7a6e12ed85360d5df28f308faa4
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1283141
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Moved the pmuif/* headers to the drivers/gpu/nvgpu/include/nvgpu folder
to support cross-platform feature implementation.
Updated the files that include pmuif/* headers to reflect the move.
Deleted the includes of gk20a.h/pmu_gk20a.h from the pmuif/*.h files.
Jira NVGPU-19
Change-Id: Iace4e107c24bdaff08a407eae3b147959173e485
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1299823
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Use the upstream function tegra_get_chip_id() and chip id macros,
as the downstream function tegra_get_chipid() and its chip_id macros
are going to be deprecated.
This is done as part of removing duplicate code.
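A chip check then looks roughly like the sketch below; the comparison
site is illustrative, while tegra_get_chip_id() and the TEGRA186 macro
come from the upstream <soc/tegra/fuse.h>.

  #include <linux/types.h>
  #include <soc/tegra/fuse.h>

  static bool platform_is_t186(void)
  {
      /* The upstream helper returns the chip id; compare against the
       * upstream macro instead of downstream tegra_get_chipid(). */
      return tegra_get_chip_id() == TEGRA186;
  }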
Change-Id: I846384955e983a36af0b3501d2b23c47e1d0798c
Signed-off-by: Shardar Shariff Md <smohammed@nvidia.com>
Reviewed-on: http://git-master/r/1299873
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Deleted the PMU FECS override interface and its dependent code from
the pmu_api.h header file, as the feature is not used anymore.
Deleted the pmu_api.h file itself since it no longer has any
interfaces left.
Jira NVGPU-19
Change-Id: I490cf67ae60ce2f1de37da063199ee04835b940d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1297370
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The driver file gp10b/platform_gp10b_tegra.c is compiled for
T186 SOCs, so use the T186 power domain macros directly
instead of the legacy TEGRA_POWERGATE_* macros.
This helps kernel unification by not requiring the TEGRA_POWERGATE_*
macros to be defined.
bug 200257351
Change-Id: I955c5dd11e6deaaf537377beb6e67a58ab7787ab
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/1300524
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
The driver file includes <linux/tegra-powergate.h> but does
not use anything from this header.
Remove this unnecessary inclusion of the header file.
bug 200257351
Change-Id: Idc9c79bfdcad0081b1121ec746fcc7a70306adf5
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: http://git-master/r/1300555
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the common falcon-controller interface code
from pmu_common.h to the flcnif_cmn.h file.
These interfaces are common to all falcons, irrespective
of the firmware running on the falcon controllers.
Jira NVGPU-19
Change-Id: Iad11b2fade8cf6716888773b2b1c23919cbcc07b
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1296695
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the PMU/falcon interfaces present
in pmu_gk20a.h & pmu_common.h to new files,
split per feature:
nvgpu_gpmu_cmdif.h - top-level header file that defines
the command/message interfaces used to communicate with the PMU
gpmuif_pmu.h - PMU command/message init interfaces
gpmuif_cmn.h - common definitions used by the interfaces
Jira NVGPU-19
Change-Id: Id8ea6075e4dbba7697036951dcb85487eb861710
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1296415
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Similar to patch 67fc462989, fix
more circular dependencies arising from #include'ing gk20a.h
for no apparent reason.
Bug 200192125
Coverity ID 2011397
Coverity ID 2011398
Change-Id: I75bcb3e4e66b680498b0e20d645ab9543aae6697
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1296947
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Remove the broken ref counting from as_share. The ref-count is
incremented for every channel bind but never decremented. This
results in VMs never being freed.
Bug 1846718
Change-Id: I6253b3eab7c7471d3ed6feddb3705c49a8704bed
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1296900
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Simplify ref-counting on VMs: take a ref when a VM is bound to a
channel and drop a ref when a channel is freed.
Previously ref-counts were scattered across the driver. Also, the CE
and CDE code would bind channels with hand-rolled code. This was
because the gk20a_vm_bind_channel() function took an as_share as
the VM argument (the VM was then inferred from that as_share).
However, it is trivial to abstract that bit out and allow a central
bind channel function that just takes a VM and a channel.
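Conceptually the central helper reduces to something like the sketch
below; the helper name, struct fields and locking are assumptions.

  struct vm_gk20a;
  struct channel_gk20a {
      struct vm_gk20a *vm;
  };

  void gk20a_vm_get(struct vm_gk20a *vm);     /* assumed ref helper */

  /* One bind path shared by the ioctl, CE and CDE users. */
  int __gk20a_vm_bind_channel(struct vm_gk20a *vm, struct channel_gk20a *ch)
  {
      gk20a_vm_get(vm);   /* ref taken here ... */
      ch->vm = vm;        /* ... dropped when the channel is freed */
      return 0;
  }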
Bug 1846718
Change-Id: I156aab259f6c7a2fa338408c6c4a3a464cd44a0c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1261886
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
gk20a_init_sw_bundle() has a couple of places where it continues
even though an error is returned. It also does not check the
return value from gops->gr.init_sw_veid_bundle().
Add an error goto label which restores pipeline state. Add gotos
to that label for all error cases.
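The error handling pattern being introduced, sketched with placeholder
helpers (the restore call and bundle-load helpers below are stand-ins,
not the real functions):

  struct gk20a;

  int load_sw_bundle_init(struct gk20a *g);       /* placeholder */
  int init_sw_veid_bundle(struct gk20a *g);       /* placeholder */
  void restore_pipeline_state(struct gk20a *g);   /* placeholder */

  static int init_sw_bundle_sketch(struct gk20a *g)
  {
      int err;

      err = load_sw_bundle_init(g);
      if (err)
          goto error;

      /* This return value used to be ignored. */
      err = init_sw_veid_bundle(g);
      if (err)
          goto error;

      return 0;

  error:
      /* Single exit path that restores pipeline state on any failure. */
      restore_pipeline_state(g);
      return err;
  }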
Coverity ID 490376
Change-Id: I65338272d2817fa831370c8f070019debbfcd673
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1300098
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changes to the context header after the context has been loaded may
not be visible to the GPU when mapped as cacheable memory. Examples
include updating the preemption modes or boosted_ctx bits at runtime.
This patch changes the mapping to non-cacheable.
Bug 1819874
Bug 1852094
Bug 200265538
Change-Id: I3b9e87adeaf32e337ec48e01631ad9dea61cc7da
Signed-off-by: Peter Boonstoppel <pboonstoppel@nvidia.com>
Reviewed-on: http://git-master/r/1297601
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, when reading from the ctxsw device node, we are collecting
traces that occurred before tracing was enabled. This is not wanted
and makes testing unpredictable.
This change drops the existing data in the FECS ring buffer when
enabling traces, as is currently done on the vm-server side.
Jira EVLR-991
Change-Id: Idd2544d4667396f90778b7be82bdf73d1f8b8dc8
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1293303
Reviewed-by: Vishnu Reddy Mandalapu <vmandalapu@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GPC2CLK has been replaced with GPCCLK in the user API.
Remove the related definition from the kernel API.
GPCCLK and MCLK are currently assigned EQU values in the kernel API.
We want to move to a simple enumeration as used in nvrm_gpu.
During the transition, an alias value will be defined for each
clock, and the kernel will accept both.
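A hypothetical illustration of the aliasing (the macro names and values
below are invented): the kernel recognizes either the old EQU-style value
or the new enumerator for a given clock.

  #define CLK_DOMAIN_GPCCLK        0x01        /* new simple enumeration */
  #define CLK_DOMAIN_GPCCLK_LEGACY 0x10000     /* old EQU-style value */

  static int domain_is_gpcclk(unsigned int api_domain)
  {
      /* During the transition, accept either spelling from user space. */
      return api_domain == CLK_DOMAIN_GPCCLK ||
             api_domain == CLK_DOMAIN_GPCCLK_LEGACY;
  }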
Jira DNVGPU-210
Jira DNVGPU-211
Change-Id: I944fe78be9f810279f7a69964be7cda9b9c8d40d
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1292593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On PG418, we hard-code a SW threshold table for over-power
monitoring. On PG419, there is a dedicated INA for over-power
monitoring; it is programmed by VBIOS devinit.
Added a platform flag to indicate whether devinit has already
taken care of the programming.
Jira DNVGPU-206
Change-Id: I28e70ac5621b692864a24e0eadb6d24b9957c0af
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1291813
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Allow platforms to choose whether or not to have unified GPU
VA spaces. This is useful for the dGPU, where having a unified
address space causes no problems. On iGPUs, testing issues are
getting in the way of enabling this feature.
Bug 1396644
Bug 1729947
Change-Id: I65985f1f9a818f4b06219715cc09619911e4824b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265303
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Remove the special VMA that could be used for allocating fixed
addresses. This feature was never used and is not worth maintaining.
Bug 1396644
Bug 1729947
Change-Id: I06f92caa01623535516935acc03ce38dbdb0e318
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265302
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Cleanup and simplify the gk20a_init_vm() function to ease the
implementation of a platform dependent address space unification
decision.
Bug 1396644
Bug 1729947
Change-Id: Id8487d0e3d3c65e3357e3528063fb17c8a85f7da
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265301
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The basic structure of this patch is to make the small page allocator
and the large page allocator into pointers (where they used to be just
structs). Then assign each of those pointers to the same actual
allocator since the buddy allocator has supported mixed page sizes
since its inception.
For the rest of the driver some changes had to be made in order to
actually support mixed pages in a single address space.
1. Unifying the allocation page size determination
Since the allocation and map operations happen at distinct
times, both mapping and allocation of GVA space must agree
on page size. This is because the allocator has to place
allocations with different page sizes into separate PDEs to
avoid the necessity of supporting mixed PDEs.
To this end a function __get_pte_size() was introduced which
is used both by the balloc code and the core GPU MM code. It
determines page size based only on the length of the mapping/
allocation (see the sketch after this description).
2. Fixed address allocation + page size
Similar to regular mappings/GVA allocations, fixed address
mapping page size determination had to be modified. In the
past the address of the mapping determined the page size, since
the address space was split by address (low addresses were
small pages, high addresses large pages). Since that is no
longer the case, the page size field in the reserve memory
ioctl is now honored by the mapping code. When, for instance,
CUDA makes a memory reservation it specifies small or large
pages. When CUDA requests mappings to be made within that
address range, the page size is then looked up in the reserved
memory struct.
Fixed address reservations were also modified to now always
allocate at a PDE granularity (64M or 128M, depending on the
large page size). This prevents non-fixed allocations from
ending up in the same PDE and causing kernel panics or GMMU
faults.
3. The rest...
The rest of the changes are just by-products of the above.
Lots of places required minor updates to use a pointer to
the GVA allocator struct instead of the struct itself.
Lastly, this change is not truly complete. More work remains to be
done in order to fully remove the notion that there was such a thing
as separate address spaces for different page sizes. Basically after
this patch what remains is cleanup and proper documentation.
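In rough terms the shared helper boils down to the sketch below; only
the name __get_pte_size() and the size-driven rule come from this
change, the body is an assumption.

  #include <stdbool.h>
  #include <stdint.h>

  enum gmmu_pgsz { PGSZ_SMALL = 0, PGSZ_BIG = 1 };

  /* The decision depends only on whether big pages are enabled and on
   * the length of the mapping/allocation, not on where in the address
   * space the allocation lands. */
  static enum gmmu_pgsz get_pte_size_sketch(bool big_pages_enabled,
                                            uint64_t len,
                                            uint64_t big_page_size)
  {
      if (!big_pages_enabled)
          return PGSZ_SMALL;
      if (len >= big_page_size)
          return PGSZ_BIG;
      return PGSZ_SMALL;
  }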
Bug 1396644
Bug 1729947
Change-Id: If51ab396a37ba16c69e434adb47edeef083dce57
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1265300
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Make sure that map_offset is set to the fixed map address (or 0)
before determining the PTE size. Then use map_offset instead of
offset_align for computing the PTE size, since offset_align
could be either an alignment or a fixed mapping offset.
Also use the minimum of the buffer size and the buffer alignment
for computing the page size. This is necessary if the GMMU is doing
page gathering (i.e. the buffer does not appear as a contiguous
IOMMU range to the GPU). In such cases a large-page-sized buffer
may be made up of a bunch of discontiguous 4k pages.
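A sketch of the resulting check; the helper name is an assumption.

  #include <stdbool.h>
  #include <stdint.h>

  /* Use the smaller of the buffer size and its alignment: if the IOMMU
   * only guarantees 4k-contiguous chunks, the GPU must not use big PTEs
   * even for a big-page-sized buffer. */
  static bool can_use_big_pages(uint64_t size, uint64_t align,
                                uint64_t big_page_size)
  {
      uint64_t basis = size < align ? size : align;

      return basis >= big_page_size;
  }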
Bug 1396644
Bug 1729947
Change-Id: I6464ee6a4ccab2495ccb31cd1ddf1db467d2b215
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1271359
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Use hardware headers instead of hardcoded register numbers in priv
ring. This required updating the priv ring headers to add all the
registers and fields needed.
Incidentally this also gets rid of a lot of GPC priv ring registers
as they're not used in our code.
Also delete duplicate prints for the same information. We were
also dumping the GPC error in gk20a_pbus_isr(), and we dumped the
master information twice.
Dump status of each GPC separately instead of supporting only GPC0.
Change-Id: Ic50817ecc50892618fa27947fa83b05148b2cd6a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1295481
GVS: Gerrit_Virtual_Submit
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Prune from the clock gating list the entries that target units that
do not exist on gp106.
Change-Id: I192219a24d8e67de7c1fc25276dfcccbe041a05f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1294819
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
We did not follow the proper sequence to reset the priv ring on error.
Instead we just re-enabled the priv ring, which does not reset anything.
Rename gk20a_reset_priv_ring() to gk20a_enable_priv_ring() to
indicate its proper use. Add a new gk20a_reset_priv_ring() which
actually resets the priv ring properly.
Change-Id: Ied74465b1215daa447a565b7e9cafef7fbe67d1b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1294681
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit