Modify page table updates to take an aperture flag (up to
gk20a_locked_gmmu_map()) instead of hard-assuming sysmem, and propagate
the aperture to hardware.
Jira DNVGPU-76
Change-Id: I797fdaaf5f42a84fa0446577359147fb6908a720
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1169295
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
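A minimal sketch of the idea, assuming a simplified aperture enum and map
signature; the names and the PTE field values below are illustrative, not
the exact driver API:

#include <stdint.h>

/* Simplified aperture selector carried through the page table update
 * path instead of assuming sysmem everywhere. */
enum aperture {
	APERTURE_SYSMEM,	/* buffer lives in system memory */
	APERTURE_VIDMEM,	/* buffer lives in on-board video memory */
};

/* Illustrative map routine: the aperture travels with the request down
 * to the point where the PTE fields are programmed. */
static uint64_t locked_gmmu_map(uint64_t vaddr, uint64_t paddr,
				uint64_t size, enum aperture aperture)
{
	/* ... walk/allocate page tables for [vaddr, vaddr + size) ... */

	/* Derive the PTE aperture field from the flag rather than using
	 * a hard-coded sysmem encoding (0/2 are made-up values here). */
	uint32_t pte_aperture = (aperture == APERTURE_VIDMEM) ? 0u : 2u;

	(void)paddr;
	(void)size;
	(void)pte_aperture;
	return vaddr;
}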
Add gk20a_aperture_mask() for memory target selection now that buffers
can actually be allocated from vidmem, and use it in all cases where a
mem_desc is available.
Jira DNVGPU-76
Change-Id: Ifd1908808d928155a0cadeff8ca451a151bfc8c5
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1169294
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
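As a rough sketch of the selection logic, with simplified types (the
struct layout and helper name are approximations, not the exact driver
definitions):

#include <stdint.h>

enum aperture { APERTURE_SYSMEM, APERTURE_VIDMEM };

struct mem_desc {
	enum aperture aperture;	/* where the backing memory lives */
	/* ... sgt, cpu_va, size, ... */
};

/* Pick the hardware mask that matches the buffer's aperture.  Callers
 * pass the register-specific sysmem and vidmem encodings. */
static uint32_t aperture_mask(const struct mem_desc *mem,
			      uint32_t sysmem_mask, uint32_t vidmem_mask)
{
	return mem->aperture == APERTURE_VIDMEM ? vidmem_mask : sysmem_mask;
}

Every place that programs a memory-target field and already has a
mem_desc at hand can then call the helper instead of hard-coding the
sysmem encoding.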
To support vidmem, pass g and mem_desc to the buffer memory accessor
functions. This allows the functions to select the memory access method
based on the buffer's aperture instead of using the CPU pointer directly,
as has been done until now. The selection and aperture support will come
in another patch; this patch only refactors the accessors and keeps the
underlying functionality as-is.
JIRA DNVGPU-23
Change-Id: I21d4a54827b0e2741012dfde7952c0555a583435
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1121914
GVS: Gerrit_Virtual_Submit
Reviewed-by: Ken Adams <kadams@nvidia.com>
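A minimal sketch of the refactored accessor shape, with simplified types;
the helper names are illustrative, and the behavior stays CPU-pointer
based as described above:

#include <stdint.h>

struct gk20a;			/* device handle, opaque here */

struct mem_desc {
	uint32_t *cpu_va;	/* CPU mapping; the only path used so far */
	/* aperture, sgt, ... to be used once vidmem access lands */
};

/* Read/write one 32-bit word of a buffer.  Taking g and the mem_desc
 * (rather than a raw pointer) lets a later patch switch on the buffer's
 * aperture and use a non-CPU access path for vidmem; for now the access
 * still goes through cpu_va. */
static uint32_t mem_rd32(struct gk20a *g, struct mem_desc *mem, uint32_t w)
{
	(void)g;
	return mem->cpu_va[w];
}

static void mem_wr32(struct gk20a *g, struct mem_desc *mem, uint32_t w,
		     uint32_t data)
{
	(void)g;
	mem->cpu_va[w] = data;
}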
Function tracing in update_gmmu_ptes_locked() causes too much spew on
the UART.
Change-Id: I94c79be76394631cdee343b2f77e4bf0f830e0a8
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1144808
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Ken Adams <kadams@nvidia.com>
For the upcoming vidmem refactor, replace struct gk20a_mm_entry's
contents, which were identical to struct mem_desc, with a single struct
mem_desc member. This makes it possible to use the page table buffers
like any other buffers.
JIRA DNVGPU-23
JIRA DNVGPU-20
Change-Id: Ia82da07b5a3bb9fb14a86bcf96a46b3a3c80bf28
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1139696
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Ken Adams <kadams@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
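Roughly, the structure change looks like this; the fields besides the
embedded mem_desc are illustrative, not the exact driver layout:

#include <stdint.h>

struct mem_desc {
	void *cpu_va;
	uint64_t size;
	/* aperture, sgt, ... */
};

/* Before: gk20a_mm_entry duplicated the fields of mem_desc.
 * After: it embeds one, so page table buffers can be passed to the same
 * alloc/free/accessor helpers as every other buffer. */
struct gk20a_mm_entry {
	struct mem_desc mem;		/* backing storage of this PT level */
	int pgsz;			/* page size index of the entries */
	struct gk20a_mm_entry *entries;	/* children for the next level */
	int num_entries;
};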
On a Tegra GPU, SoC memory has to be accessed as vidmem. On a discrete
GPU, it has to be accessed as sysmem.
Change-Id: Id26588df17b4921533804f72bc8c0ac3892ae154
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1122591
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
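A sketch of that selection, assuming a simple is_igpu flag; the target
encodings are made up for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative encodings for a memory "target" register field. */
#define TARGET_VIDMEM      0x0u
#define TARGET_SYSMEM_COH  0x2u

/* On an integrated Tegra GPU the SoC memory is the GPU's local memory,
 * so buffers placed there use the vidmem target; on a discrete GPU the
 * same SoC memory is reached over the bus and uses the sysmem target. */
static uint32_t soc_memory_target(bool is_igpu)
{
	return is_igpu ? TARGET_VIDMEM : TARGET_SYSMEM_COH;
}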
Use physical addresses in PDEs. All page table levels fit in 4k, so no
need for SMMU mapping.
Change-Id: Id9e418f35a79343f4a332a230e04abda5e0dd5d2
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/783748
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
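Illustrative of how a PDE then points at a table; the helper name is
hypothetical and the shift simply reflects the 4k alignment of the
tables:

#include <stdint.h>

/* Page table pointers stored in a PDE are page-aligned, so only the bits
 * above the 4k page offset are programmed. */
#define PDE_ADDR_SHIFT 12

/* Since every page table level fits in one 4k page, the PDE can carry
 * the table's raw physical address; no SMMU mapping of the tables is
 * required. */
static uint64_t pde_addr_field(uint64_t table_phys_addr)
{
	return table_phys_addr >> PDE_ADDR_SHIFT;
}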
Implement support for privileged pages. Use them for kernel-allocated buffers.
Change-Id: I24778c2b6063b6bc8a4bfd9d97fa6de01d49569a
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/761920
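A hedged sketch of the attribute selection; the bit position and names
are made up for illustration:

#include <stdbool.h>
#include <stdint.h>

/* Illustrative PTE bit for the privilege attribute. */
#define PTE_PRIVILEGE_BIT	(1u << 0)

/* Kernel-allocated buffers are mapped with the privilege attribute set,
 * so only privileged GPU accesses can reach them. */
static uint32_t pte_attr_flags(bool priv)
{
	return priv ? PTE_PRIVILEGE_BIT : 0u;
}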
Always use physical addresses for page tables. In the new gp10b format
each level fits in one page, so we do not need SMMU translation.
Change-Id: Ie46b2bce0f7a4e8d2904d74b1df616e389874141
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/758181
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
We were dropping the part of the address that spans a word boundary.
The register generator does not know how to deal with multi-word fields,
so those had to be edited in manually.
Bug 1646531
Change-Id: I3ef06d6dfcb0a499ed45456d165fe60c91492250
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/747468
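The manual fix amounts to splitting the value across the two words,
along these lines (the field width is illustrative):

#include <stdint.h>

/* When a field spans a 32-bit word boundary, the low and high parts have
 * to be written to the two words separately; dropping the high part
 * truncates the address. */
#define FIELD_LO_BITS 20u	/* bits of the value held in word 0 */

static void write_wide_field(uint32_t *words, uint64_t value)
{
	words[0] |= (uint32_t)(value & ((1u << FIELD_LO_BITS) - 1u));
	words[1] |= (uint32_t)(value >> FIELD_LO_BITS);
}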
We used 128k comptag spacing when 64k is the correct spacing.
Bug 1525976
Change-Id: Ie2f926929fa89cf715b86a57ffbf4dd1e4920473
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/737947
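With 64k spacing the comptag line index advances once per 64 KB of buffer
offset; a minimal sketch, with a hypothetical helper name:

#include <stdint.h>

/* One comptag line covers a fixed span of the buffer; 128k spacing would
 * advance the index only every other 64 KB chunk. */
#define COMPTAG_SPACING	(64u * 1024u)

static uint64_t comptag_line(uint64_t buffer_offset)
{
	return buffer_offset / COMPTAG_SPACING;
}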
Enable new page table format for all platforms.
Bug 1525976
Change-Id: I9a3cfabdef7dc6ec33e18a8a4f32063c40f680fa
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/737364
Fix sparse warnings of below type by making necessary
symbols static:
warning: symbol '<symbol>' was not declared. Should it be static?
Bug 200088648
Change-Id: I222bebd958e29b3a95d161f05a3052389200fc10
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/736663
GVS: Gerrit_Virtual_Submit
Reviewed-by: Amit Sharma (SW-TEGRA) <amisharma@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
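The fix pattern is simply limiting the linkage of file-local symbols,
e.g. (symbol names here are hypothetical):

/* Before: defined with external linkage but used only in this file,
 * which triggers sparse's "symbol was not declared. Should it be
 * static?" warning. */
/* int gk20a_example_counter; */

/* After: static linkage silences the warning and keeps the symbol out
 * of the global namespace. */
static int gk20a_example_counter;

static int bump_counter(void)
{
	return ++gk20a_example_counter;
}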
Implement the 5-level Pascal page table format. It is enabled
only for simulation.
Change-Id: I6767fac8b52fe0f6a2e2f86312de5fc93af6518e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/682114
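An illustrative sketch of a 5-level walk over a 49-bit virtual address
with 4k pages; the per-level bit split shown is an assumption for
illustration, not taken from this change:

#include <stdint.h>

struct mmu_level {
	uint32_t hi_bit;	/* highest VA bit indexed by this level */
	uint32_t lo_bit;	/* lowest VA bit indexed by this level */
};

/* Assumed split: 2 + 9 + 9 + 8 + 9 index bits above the 12-bit page
 * offset. */
static const struct mmu_level pascal_levels[] = {
	{ 48, 47 },	/* PD3 */
	{ 46, 38 },	/* PD2 */
	{ 37, 29 },	/* PD1 */
	{ 28, 21 },	/* PD0 */
	{ 20, 12 },	/* PTE, 4k pages */
};

/* Index into a given level's table for a VA. */
static uint32_t level_index(uint64_t va, const struct mmu_level *l)
{
	return (uint32_t)((va >> l->lo_bit) &
			  ((1ull << (l->hi_bit - l->lo_bit + 1)) - 1));
}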
Fix sparse warnings of below type by making necessary
symbols static:
warning: symbol '<symbol>' was not declared. Should it be static?
Bug 200088648
Change-Id: Ic20ef3eb73dcbfe5f13506b5afa629c3e1db59d0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/728012
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Add a platform-specific gp10b_mm_iova_addr() to get the iova/physical
address for gp10b:
If SMMU is not enabled and the IO coherence flag is set, set the 34th
bit in the physical address and return the physical address.
If SMMU is enabled, return the iova address.
Bug 1605653
Change-Id: I5c91a8c8d85d8a8e422406e3c91fc1dda3cb0870
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/713106
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
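A sketch of the described address selection, with simplified inputs
(function and parameter names are illustrative):

#include <stdbool.h>
#include <stdint.h>

static uint64_t gp10b_iova_addr(uint64_t phys, uint64_t smmu_iova,
				bool smmu_enabled, bool io_coherent)
{
	if (smmu_enabled)
		return smmu_iova;
	if (io_coherent)
		/* "34th bit" from the description above, assumed to be
		 * zero-based bit 34. */
		return phys | (1ULL << 34);
	return phys;
}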
Use gp10b version of get_physical_addr_bits.
Change-Id: I56d1299e259e91a61fa82dc061e7ca3a5130b9d4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/714402
The GP10B physical address width is 37 bits. Use the old width for now,
and add a gp10b-specific definition. We can switch to the new definition
once it has been verified.
Bug 1567274
Change-Id: I33cc1b99f14f1a7ee5f6fe3bd3d8b3126c23ecbe
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/601703
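The gp10b-specific definition boils down to a constant like the
following (names are illustrative):

#include <stdint.h>

#define GP10B_PHYS_ADDR_BITS	37u	/* new gp10b width, per this change */

static uint64_t gp10b_physical_addr_mask(void)
{
	return (1ULL << GP10B_PHYS_ADDR_BITS) - 1ULL;
}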