The simulator ring buffer DMA interface supports buffers of the following sizes:
4K, 8K, 12K, and 16K. At present, it is configured for 4K, which happens to match
the kernel PAGE_SIZE; PAGE_SIZE is used to wrap the GET/PUT pointers back around
once 4K is reached. However, the two are not always equal: consider 64K pages,
for instance. Hence, replace PAGE_SIZE with SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and replace
the latter with the former.
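A minimal sketch of the idea (the values and the wrap helper below are
illustrative assumptions; only the macro names NVGPU_CPU_PAGE_SIZE, PAGE_SIZE,
and SIM_BFR_SIZE come from this change):

  #include <stdint.h>

  #define PAGE_SIZE           4096U        /* illustrative; 64K on some kernels */
  #define NVGPU_CPU_PAGE_SIZE PAGE_SIZE    /* new alias for the CPU page size */
  #define SIM_BFR_SIZE        4096U        /* simulator ring buffer size (4K) */

  /* Hypothetical wrap helper: wrapping at SIM_BFR_SIZE rather than
   * PAGE_SIZE keeps the GET/PUT pointers correct even when the kernel
   * page size is 64K. */
  static inline uint32_t sim_ring_advance(uint32_t ptr, uint32_t bytes)
  {
          return (ptr + bytes) % SIM_BFR_SIZE;
  }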
Bug 200658101
Jira NVGPU-6018
Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Not sure if there's an actual bug or JIRA filed for this, but the
change here fixes a long-standing bug in the MM code for unit tests.
The GMMU programming code verifies that the CPU _physical_ address
programmed into the GMMU PDE0 is a valid Tegra SoC CPU physical
address, i.e., that it is not too large a value.
The POSIX implementation of the nvgpu_mem related code used the CPU
virtual address as the "phys" address. Obviously, in userspace,
there's no access to physical addresses, so in some sense it's a
meaningless function. But the GMMU code does care, as described
above, about the format of the address.
The fix is simple enough: since the nvgpu_mem_get_addr() and
nvgpu_mem_get_phys_addr() values shouldn't actually be accessed by
the driver anyway (they could be vidmem addresses or IOVA addresses
in real life), ANDing them with 0xffffffff (i.e., keeping only the
low 32 bits) truncates the potentially problematic CPU virtual
address bits returned by malloc() in the POSIX environment.
With this, a run of the unit test framework passes for me locally
on my Ubuntu 18 machine.
Also, clean up a few whitespace issues I noticed while debugging
this, and fix another long-standing bug where NVGPU_DEFAULT_DBG_MASK
was not being copied to g->log_mask during gk20a struct init.
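A hedged sketch of the masking fix (the cpu_va field and the exact signature
are assumptions; the 0xffffffff mask and the function name come from the
description above):

  #include <stdint.h>

  typedef uint64_t u64;             /* per nvgpu type conventions */

  struct gk20a;                     /* opaque here */
  struct nvgpu_mem {
          void *cpu_va;             /* malloc()ed backing in POSIX builds */
  };

  /* The "phys" address in POSIX builds is really a CPU virtual address
   * from malloc(), which can exceed the range of a valid Tegra SoC
   * physical address. Keeping only the low 32 bits satisfies the GMMU
   * PDE0 validity check; the value is never dereferenced as a PA. */
  u64 nvgpu_mem_get_phys_addr(struct gk20a *g, struct nvgpu_mem *mem)
  {
          (void)g;
          return ((u64)(uintptr_t)mem->cpu_va) & 0xffffffffULL;
  }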
Change-Id: Ie92d3bd26240d194183b4376973d4d32cb6f9b8f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2395953
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
MISRA rule 11.2 doesn't allow conversions of a pointer from or to an
incomplete type. These types of conversions may result in an incorrectly
aligned pointer and may further result in undefined behavior.
This patch addresses rule 11.2 violations related to pointers to and
from struct nvgpu_sgl. This patch replaces struct nvgpu_sgl pointers
with void pointers.
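An illustrative before/after of the kind of ops-table member this touches
(sgl_next is a hypothetical member name; only the switch from struct
nvgpu_sgl pointers to void pointers comes from this patch):

  struct nvgpu_sgl;    /* incomplete type: converting to/from it
                        * violates Rule 11.2 */

  struct sgt_ops_before {
          struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
  };

  struct sgt_ops_after {
          /* void pointers are exempt from Rule 11.2 and carry the
           * opaque, implementation-defined SGL just as well. */
          void *(*sgl_next)(void *sgl);
  };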
Jira NVGPU-3736
Change-Id: I8fd5766eacace596f2761b308bce79f22f2cb207
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2267876
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 11.2 doesn't allow conversion to or from an incomplete type
pointer, as it may result in incorrect pointer alignment and may further
lead to undefined behavior.
MISRA Rule 16.x requires all switch statements to be well-formed, with a
terminating break statement for every switch clause.
This patch fixes 11.2 and 16.x violations in common.mm.nvgpu_mem.
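A minimal sketch of a well-formed switch under Rule 16.x (the aperture names
echo nvgpu conventions, but the snippet itself is illustrative):

  enum aperture { APERTURE_SYSMEM, APERTURE_VIDMEM };

  static unsigned int aperture_word(enum aperture ap)
  {
          unsigned int word;

          switch (ap) {
          case APERTURE_SYSMEM:
                  word = 1U;
                  break;          /* every clause ends with a break */
          case APERTURE_VIDMEM:
                  word = 2U;
                  break;
          default:                /* default clause always present */
                  word = 0U;
                  break;
          }
          return word;
  }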
Jira NVGPU-3339
Change-Id: I002393cc64d44826e6954d1bf6af71bd569e862f
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2113096
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make a hal/mm/gmmu sub-unit for the GMMU HAL code. Also move the
gk20a-specific HAL code there; gp10b will follow in the next patch.
This change also updates all the GMMU related HAL usage, of which
there is quite a bit. Generally the only change is that a .gmmu needs
to be inserted into the HAL path. Each HAL init was also updated.
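The shape of a typical call-site update (get_mmu_levels is used here as a
plausible example op; the exact members moved are whatever lived in the GMMU
portion of the mm HAL):

  -       levels = g->ops.mm.get_mmu_levels(g, big_page_size);
  +       levels = g->ops.mm.gmmu.get_mmu_levels(g, big_page_size);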
JIRA NVGPU-2042
Change-Id: I6c46bdfddb8e021f56103d9457fb3e2a226f8947
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a stub POSIX version of __nvgpu_mem_create_from_phys. This allows
building the nvgpu code that is behind the NVHOST compilation option.
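One plausible shape for the stub (the signature is an assumption based on the
common nvgpu_mem API; only the function name comes from this change):

  #include <errno.h>
  #include <stdint.h>

  typedef uint64_t u64;

  struct gk20a;
  struct nvgpu_mem;

  /* POSIX stub: userspace has no physical memory to wrap, so report
   * the operation as unsupported. This keeps NVHOST-gated callers
   * compiling and linking in the unit-test build. */
  int __nvgpu_mem_create_from_phys(struct gk20a *g, struct nvgpu_mem *dest,
                                   u64 src_phys, int nr_pages)
  {
          (void)g; (void)dest; (void)src_phys; (void)nr_pages;
          return -ENOSYS;
  }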
JIRA NVGPU-1734
Change-Id: I12d80e69e78d975141d690bc3f37220ecb5bcc89
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993125
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This adds APIs to the POSIX sgt module to allow unit tests to create
sgts and sgls more easily. The new APIs allow the caller to pass in a
table of sgls and create the new sgt/sgl as needed.
Since there is now a new API to create an sgl, this also includes a new
nvgpu_sgl_free() API.
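A hypothetical unit-test usage (the field names, the creation helper's name
and signature, and the free call's parameters are all assumptions; only the
"table of sgls in, sgt out" idea and nvgpu_sgl_free() come from this
description):

  struct nvgpu_mem_sgl sgl_table[] = {
          { .phys = 0x10000ULL, .dma = 0x10000ULL, .length = 4096ULL },
          { .phys = 0x50000ULL, .dma = 0x50000ULL, .length = 8192ULL },
  };
  struct nvgpu_sgt *sgt = nvgpu_sgt_posix_create_from_list(g, sgl_table, 2U);

  /* ... exercise the code under test against sgt ... */

  nvgpu_sgt_free(g, sgt);  /* a standalone sgl would instead go through
                            * the new nvgpu_sgl_free() */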
JIRA NVGPU-1563
Change-Id: I27ab7654289cbd2212a75625b6dd24d2b742e3bf
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966151
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split the nvgpu_sgt code out from the nvgpu_mem code. Although the
two chunks of code are related, the SGT code is distinct and, as
such, should be its own unit. To do this a new source file has been
added - nvgpu_sgt.c - which contains all the nvgpu_sgt common APIs.
These are the facade APIs that abstract the actual details of how any
given nvgpu_sgt is implemented.
An abstract unit - nvgpu_sgt_os - was also defined. This unit
exists solely for the nvgpu_sgt unit to call, so that the OS-specific
nvgpu_sgt_os_create_from_mem() API could be moved out of the
common nvgpu_sgt unit. Note this also updates the name of the API the
OS-specific units are expected to provide. Common code may still use
the generic nvgpu_sgt_create_from_mem() API.
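The facade shape, using only the names given above (a minimal sketch; the
real implementation may add error handling):

  /* Common facade: common code calls the generic API, which forwards
   * to the OS-specific constructor in the nvgpu_sgt_os unit. */
  struct nvgpu_sgt *nvgpu_sgt_create_from_mem(struct gk20a *g,
                                              struct nvgpu_mem *mem)
  {
          return nvgpu_sgt_os_create_from_mem(g, mem);
  }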
JIRA NVGPU-1391
Change-Id: I37f5b2bbf9f84c0fb6bc296c3e04ea13518bd4d0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946012
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch contains fixes for all 17.7 violations in
standard C functions in the OS/POSIX interface.
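Typical shapes of a Rule 17.7 fix (illustrative snippets, not the actual
call sites from this patch):

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  static int make_name(char *name, size_t len, int id)
  {
          /* Explicitly discard a return value that is safe to ignore. */
          (void)memset(name, 0, len);

          /* Or consume the return value and act on failure. */
          if (snprintf(name, len, "nvgpu-%d", id) < 0) {
                  return -EINVAL;
          }
          return 0;
  }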
JIRA NVGPU-1036
Change-Id: I2da417edc992f16de24cdff536c0538f1fde8b61
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929901
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Background:
In Hypervisor mode, the dGPU device is configured in pass-through mode
for the Guest (QNX/Linux). GMMU programming is handled by the guest,
which converts a mapped buffer's GVA into SGLs in IPA (Intermediate/
Guest Physical Address) space; the IPA is then translated into PA
(Actual Physical Address), and the guest programs the GMMU PTEs with
the correct GVA to PA mapping.
In the case of vGPU, this work is delegated to the RM server, which
takes care of the GMMU programming and the IPA to PA conversion.
Problem:
The current GMMU mapping logic in the guest assumes that the PA range
is contiguous over a given IPA range; hence, it doesn't account for
holes in the PA range. But this is not always the case: a contiguous
IPA range can be mapped to discontiguous PA ranges. In this situation
the mapping logic sets up the GMMU PTEs ignoring the holes in physical
memory, creating GVA => PA mappings that intrude into reserved PA
ranges. This results in memory corruption.
This change accounts for holes in a given PA range: for a given IPA
range it identifies the discontiguous PA ranges and sets up the PTEs
appropriately.
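A hedged sketch of the hole-aware walk (ipa_to_pa() and program_ptes(), and
in particular ipa_to_pa() returning the length of the contiguous PA span,
are assumptions; only the IPA/PA/GVA roles come from the description above):

  /* Map one SGL entry chunk by chunk, never assuming the PA side
   * stays contiguous across the whole IPA range. */
  static void map_sgl_entry(struct vm_gk20a *vm, u64 gva, u64 sgl_ipa,
                            u64 sgl_length)
  {
          u64 ipa  = sgl_ipa;      /* guest-physical start of this entry */
          u64 left = sgl_length;   /* bytes remaining to map */

          while (left > 0ULL) {
                  u64 pa_len = 0ULL;
                  u64 pa = ipa_to_pa(vm, ipa, &pa_len); /* bytes contiguous at pa */
                  u64 chunk = (left < pa_len) ? left : pa_len;

                  program_ptes(vm, gva, pa, chunk);     /* GVA => PA, this chunk only */
                  gva  += chunk;
                  ipa  += chunk;
                  left -= chunk;
          }
  }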
Bug 200451447
Jira VQRM-5069
Change-Id: I354d984f6c44482e4576a173fce1e90ab52283ac
Signed-off-by: aalex <aalex@nvidia.com>
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850972
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
page_idx is an element of struct nvgpu_semaphore_pool, defined in the
include/nvgpu/semaphore.h file.
page_idx cannot be negative, so change it from int to u64, along with
the related changes in various files.
This also fixes MISRA 10.4 violations in these files.
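The shape of the change in include/nvgpu/semaphore.h (surrounding fields
omitted; u64 per nvgpu type conventions):

  struct nvgpu_semaphore_pool {
          /* ... other fields ... */
          u64 page_idx;   /* was: int; the index can never be negative,
                           * and u64 also resolves the MISRA 10.4
                           * mixed-signedness violations */
  };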
Jira NVGPU-992
Change-Id: Ie9696dab7da9e139bc31563783b422c84144f18b
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1801632
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>