If the DMA address is not defined, use the physical address instead.
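A minimal sketch of the intended fallback, assuming a scatterlist-backed buffer (the helper name is hypothetical):

    #include <linux/scatterlist.h>
    #include <linux/types.h>

    /* Hypothetical helper: prefer the DMA (IOMMU) address of the buffer,
     * and fall back to its physical address when no DMA mapping has been
     * set up. */
    static u64 buf_map_addr(struct scatterlist *sgl)
    {
            u64 addr = sg_dma_address(sgl); /* 0 if not DMA-mapped */

            if (!addr)
                    addr = sg_phys(sgl);    /* use the physical address */

            return addr;
    }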
Bug 1500983
Change-Id: Ic33b21f74c8c2760e43146b87eec7ea467fc87be
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
(cherry picked from commit 8ae9a6567349241ce1cfff383526b0d9d39c28a1)
Reviewed-on: http://git-master/r/415238
Reviewed-by: Riham Haidar <rhaidar@nvidia.com>
Tested-by: Riham Haidar <rhaidar@nvidia.com>
TLB invalidate can race if several contexts use the same address
space. One thread starting an invalidate allows another thread to
submit before the invalidate has completed.
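A minimal sketch of one way to close the race, using hypothetical names (vm_ctx, hw_issue_tlb_invalidate, hw_wait_tlb_invalidate): hold a per-address-space lock across the whole invalidate, and take the same lock on the submit path, so another context sharing the address space cannot submit until the invalidate has finished.

    #include <linux/mutex.h>
    #include <linux/types.h>

    struct vm_ctx {
            struct mutex tlb_lock;  /* serializes invalidate vs. submit */
            bool tlb_dirty;         /* set when mappings change */
    };

    void hw_issue_tlb_invalidate(struct vm_ctx *vm);  /* hypothetical HW op */
    void hw_wait_tlb_invalidate(struct vm_ctx *vm);   /* hypothetical HW wait */

    /* Hypothetical sketch: the submit path takes the same lock, so no
     * other thread sharing this address space can submit while the
     * invalidate is still in flight. */
    static void vm_tlb_invalidate(struct vm_ctx *vm)
    {
            mutex_lock(&vm->tlb_lock);
            if (vm->tlb_dirty) {
                    hw_issue_tlb_invalidate(vm);  /* trigger HW invalidate */
                    hw_wait_tlb_invalidate(vm);   /* poll for completion */
                    vm->tlb_dirty = false;
            }
            mutex_unlock(&vm->tlb_lock);
    }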
Bug 1502332
Change-Id: I074ec493eac3b153c5f23d796a1dee1d8db24855
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/407578
Reviewed-by: Riham Haidar <rhaidar@nvidia.com>
Tested-by: Riham Haidar <rhaidar@nvidia.com>
This patch modifies the code to fall back to 4k pages if the current
VA does not support 128k pages.
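A minimal sketch of the page-size selection, with hypothetical names; it assumes the mapping size is known and that the VA carries a flag for big-page support:

    #include <linux/sizes.h>
    #include <linux/types.h>

    enum gmmu_page_size {
            GMMU_PAGE_SIZE_4K,
            GMMU_PAGE_SIZE_128K,
    };

    /* Hypothetical helper: use 128k pages only when the VA supports them
     * and the mapping size is a multiple of 128k; otherwise fall back to
     * 4k pages. */
    static enum gmmu_page_size pick_page_size(bool va_big_pages,
                                              size_t map_size)
    {
            if (va_big_pages && !(map_size & (SZ_128K - 1)))
                    return GMMU_PAGE_SIZE_128K;

            return GMMU_PAGE_SIZE_4K;   /* safe fallback */
    }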
Bug 1409151
Change-Id: I94e9ca5953740388db689bc9306b0392191e29d2
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
On 64-bit architectures we have plenty of vmalloc space, so we can
keep all the PTEs mapped at all times. On 32-bit architectures,
however, vmalloc space is limited and we have to map/unmap PTEs as
required.
Hence, add new APIs for arm64 and call them when
IS_ENABLED(CONFIG_ARM64) is true.
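A minimal sketch of the split, with hypothetical helper names; the real change is in the GMMU page-table code, but the shape is roughly:

    #include <linux/highmem.h>
    #include <linux/kconfig.h>
    #include <linux/mm.h>

    /* Hypothetical sketch: on arm64 the PTE pages stay mapped for the
     * lifetime of the page table, so mapping is a plain address lookup;
     * on 32-bit they are kmapped only around each access to conserve
     * the limited mapping space. */
    static void *map_pte_page(struct page *p)
    {
            if (IS_ENABLED(CONFIG_ARM64))
                    return page_address(p); /* kept mapped permanently */

            return kmap(p);                 /* temporary per-access mapping */
    }

    static void unmap_pte_page(struct page *p)
    {
            if (!IS_ENABLED(CONFIG_ARM64))
                    kunmap(p);              /* drop the temporary mapping */
    }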
Bug 1443071
Change-Id: I091d1d6a3883a1b158c5c88baeeff1882ea1fc8b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/387642
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add two fb flushes after the BAR1 bind. In case of a BAR1 hang, this
should hang only the calling thread instead of the whole system.
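A minimal sketch of the intent, with hypothetical names (gpu, hw_bar1_bind, hw_fb_flush): the two fb flushes right after the bind act as a synchronization point, so a BAR1 hang stalls the thread waiting on the flush rather than wedging the whole system on a later access.

    #include <linux/types.h>

    struct gpu;                                     /* hypothetical device context */
    int hw_bar1_bind(struct gpu *g, u64 inst_pa);   /* hypothetical bind op */
    void hw_fb_flush(struct gpu *g);                /* hypothetical fb flush */

    static int bar1_bind(struct gpu *g, u64 inst_pa)
    {
            int err = hw_bar1_bind(g, inst_pa);

            if (err)
                    return err;

            hw_fb_flush(g); /* first flush: a BAR1 hang stalls here */
            hw_fb_flush(g); /* second flush, as per this change */

            return 0;
    }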
Change-Id: I2385a243711219297b889daa30c9fc81106e5825
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/390183