Remove the NVGPU_GPU_IOCTL_ALLOC_AS_FLAGS_USERSPACE_MANAGED and
NVGPU_AS_ALLOC_USERSPACE_MANAGED flags, which were used to support a
userspace-managed address space. This functionality was never fully
implemented in the kernel, nor is it going to be implemented in the
near future.
Jira NVGPU-9832
Bug 4034184
Change-Id: I3787d92c44682b02d440e52c7a0c8c0553742dcc
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2882168
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The dmabuf internals that nvgpu relies upon for storing metadata for
compressible buffers changed in k6.1. For now, disable compression on
all k6.1+ kernels.
Additionally, fix numerous compilation issues due to the bit-rotted
compression config. All normal Tegra products support compression and
thus have this config enabled. Over the last several years,
compression-dependent code crept in that was not protected by the
compression config.
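One plausible shape for the gate (sketch only; the config symbol here
is illustrative, not the actual nvgpu one):

    #include <linux/version.h>

    /* Compile out comptag support on 6.1+ until the new dmabuf
     * internals are supported. */
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(6, 1, 0)
    #define NVGPU_COMPRESSION_SUPPORTED     0
    #else
    #define NVGPU_COMPRESSION_SUPPORTED     1
    #endif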
Bug 3844023
Change-Id: Ie5b9b5a2bcf1a763806c087af99203d62d0cb6e0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2820846
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
Tested-by: Sagar Kamble <skamble@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Add an API to query all memhandles used for pde and pte buffers.
- Some direct pde/pte allocations should also add an entry to the
pd-cache full list.
- Add an OS API for querying the MemServ handle from an nvgpu_mem.
- Traverse all pd-cache partial and full lists to get the memhandles
for all pde/pte buffers; a sketch of that walk follows.
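A minimal sketch of that traversal (the cache field names, the entry
type's members, and the helpers report_handle() and
nvgpu_mem_get_memserv_handle() are assumptions, not the actual nvgpu
definitions):

    /* Walk both pd-cache lists and report each buffer's memhandle. */
    static void pd_cache_report_memhandles(struct nvgpu_pd_cache *cache)
    {
            struct nvgpu_pd_mem_entry *entry;
            u32 i;

            for (i = 0U; i < NVGPU_PD_CACHE_COUNT; i++) {
                    nvgpu_list_for_each_entry(entry, &cache->partial[i],
                                    nvgpu_pd_mem_entry, list_entry) {
                            report_handle(nvgpu_mem_get_memserv_handle(
                                            &entry->mem));
                    }
                    nvgpu_list_for_each_entry(entry, &cache->full[i],
                                    nvgpu_pd_mem_entry, list_entry) {
                            report_handle(nvgpu_mem_get_memserv_handle(
                                            &entry->mem));
                    }
            }
    }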
Jira NVGPU-8284
Change-Id: I8e7adf1be1409264d24e17501eb7c32a81950728
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2735657
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Fixed the following Coverity defects ("Bad bit shift operation"):
ioctl_as.c
mc_tu104.c
vm.c
vm_remap.c
A new Linux header file for ilog2 is created; the files that used the
old ilog2 function have been changed to use the new nvgpu_ilog2
function.
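A plausible shape for the wrapper (sketch only; the real header's
contents may differ):

    #include <linux/bug.h>
    #include <linux/log2.h>
    #include <linux/types.h>

    /*
     * Sketch: funnel callers through one checked log2 helper so a zero
     * operand (undefined for ilog2()) cannot become a bad bit shift.
     */
    static inline u64 nvgpu_ilog2(u64 x)
    {
            BUG_ON(x == 0ULL);
            return (u64)ilog2(x);
    }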
CID 9847922
CID 9869507
CID 9859508
CID 10112314
CID 10127813
CID 10127899
CID 10128004
Signed-off-by: Jinesh Parakh <jparakh@nvidia.com>
Change-Id: Ia201eea7cc426c3d6581e1e5ae3b882dbab3b490
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2700994
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_timeout_init() returns an error code only when the flags
parameter is invalid. There are very few possible values for flags, so
extract the two most common cases - a CPU clock based timeout and a
retry based timeout - into functions that cannot fail and thus return
nothing. Adjust all callers to use those, simplifying error handling
quite a bit.
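The effect on callers is roughly the following (sketch; assuming the
split helpers keep the remaining arguments of the old initializer):

    /* Before: an error path that could only trip on a bad flag. */
    err = nvgpu_timeout_init(g, &timeout, duration_ms,
                             NVGPU_TIMER_CPU_TIMER);
    if (err != 0) {
            return err;
    }

    /* After: the cpu-timer variant cannot fail, so no error check. */
    nvgpu_timeout_init_cpu_timer(g, &timeout, duration_ms);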
Change-Id: I985fe7fa988ebbae25601d15cf57fd48eda0c677
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2613833
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Introduce nvgpu_gmmu_unmap_addr() to unmap an nvgpu_mem that was mapped
at some address other than mem.gpu_va, which can be the case for
buffers that are shared across different address spaces. Delete the
address parameter from nvgpu_gmmu_unmap(), as the common case is to
store the address in mem.gpu_va when mapping the buffer.
Modify some instances of consecutive unmap + free calls to call just
nvgpu_dma_unmap_free().
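The resulting call shapes, as described above (argument lists are a
sketch):

    nvgpu_gmmu_unmap(vm, mem);              /* unmaps at mem->gpu_va  */
    nvgpu_gmmu_unmap_addr(vm, mem, gpu_va); /* mapped at another VA   */
    nvgpu_dma_unmap_free(vm, mem);          /* unmap + free in one go */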
Change-Id: Iecd7c9aa41d04e9f48e055f6bc0c9227cd759c69
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2601787
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To enable userspace to query the comptags allocation status of a
buffer, comptags are to be allocated only during the buffer
registration done by nvrm_gpu. Earlier, they were allocated during map.
nvrm_gpu will be sending a metadata blob to be associated with the
buffer. This will have to be stored in the dmabuf privdata for all
buffers registered by nvrm_gpu.
This patch moves the privdata allocation to the buffer registration
ioctl.
Remove g->mm.priv_lock as it is no longer needed. This lock was added
to protect the dmabuf private data setup. That private data is now
handled through dmabuf->ops, and the setup of dmabuf->ops is done under
dmabuf->lock.
To support legacy userspace, this patch still allocates comptags on
demand on map calls for unregistered buffers.
Bug 200586313
Change-Id: I88b2ca04c733dd02a84bcbf05060bddc00147790
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2480761
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an API to translate a dmabuf's fmode_t to gk20a_mem_rw_flag for
read-only/read-write mapping selection. By default, the dmabuf fd
mapping permission should be the maximum access permission associated
with a particular dmabuf fd.
Remove the bit flag MAP_ACCESS_NO_WRITE and add 2-bit values for the
user access requests NVGPU_VM_MAP_ACCESS_DEFAULT|READ_ONLY|
READ_WRITE.
To unify map access type handling in Linux and QNX, move the
NVGPU_VM_MAP_ACCESS_* parameter check to the common function
nvgpu_vm_map().
Set the MAP_ACCESS_TYPE enabled flag in the common characteristics init
function, as it is supported on Linux and QNX.
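A minimal sketch of the translation (the function name is illustrative
and the enum values are assumed; FMODE_WRITE comes from <linux/fs.h>):

    #include <linux/fs.h>

    /*
     * Sketch: a dmabuf fd opened without write access can only ever
     * be mapped read-only.
     */
    static enum gk20a_mem_rw_flag nvgpu_dmabuf_rw_flag(fmode_t mode)
    {
            return ((mode & FMODE_WRITE) != 0) ?
                    gk20a_mem_flag_none :      /* read-write allowed */
                    gk20a_mem_flag_read_only;
    }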
Bug 200717195
Bug 3250920
Change-Id: I1a249f7c52bda099390dd4f371b005e1a7cef62f
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2507150
Reviewed-by: svc_kernel_abi <svc_kernel_abi@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new MAPPING_MODIFY ioctl to the Linux nvgpu driver.
This ioctl is used (for example) by the NvRmGpuMappingModify API to
change the kind of an existing mapping.
For compressed mappings the ioctl can be used to do the following:
* switch between two different compressed kinds
* switch between compressed and incompressed kinds
For incompressed mappings the ioctl can be used to do the following:
* switch between two different incompressed kinds
In order to properly update an existing mapping the nvgpu_mapped_buf
structure has been extended to cache the following state when the
mapping is first created:
* the compression tag offset (if applicable)
* the GMMU read/write flags
* the memory aperture
The unused ctag_lines field in the nvgpu_ctag_buffer_info structure
has been replaced with a new ctag_offset field.
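From userspace, a call might look like this (sketch; the ioctl, struct
and field names are assumptions based on the description above):

    /* Switch an existing compressed mapping to an incompressed kind. */
    struct nvgpu_as_mapping_modify_args args = {
            .map_address  = map_addr,        /* existing mapping's VA */
            .compr_kind   = NV_KIND_INVALID, /* drop compression      */
            .incompr_kind = new_kind,
    };
    err = ioctl(as_fd, NVGPU_AS_IOCTL_MAPPING_MODIFY, &args);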
Jira NVGPU-6374
Change-Id: I647ab9c2c272e3f9b52f1ccefc5e0de4577c14f1
Signed-off-by: scottl <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2468100
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any PDE can allocate memory with a specific page size. That means
memory allocations with page sizes of 4K and 64K will be realized by
different PDEs with a page size (or PTE size) of 4K and 64K
respectively. To accomplish this, the user vma is required to be PDE
aligned.
Currently, the user vma is aligned by (big_page_size << 10), carried
over from when the pde size was equivalent to (big_page_size << 10).
Modify the user vma alignment check to use the pde size.
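The alignment change amounts to something like this (sketch; the pde
coverage helper name is an assumption):

    /* Before: alignment hard-coded to the historical pde size. */
    align = (u64)big_page_size << 10;

    /* After: derive the alignment from the actual pde size. */
    align = 1ULL << nvgpu_vm_pde_coverage_bit_count(vm);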
JIRA NVGPU-5302
Change-Id: I2c6599fe50ce9fb081dd1f5a8cd6aa48b17b33b4
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428327
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The simulator ring buffer DMA interface supports buffers of the
following sizes: 4, 8, 12 and 16K. At present, it is configured to 4K,
which happens to match the kernel PAGE_SIZE that is used to wrap the
GET/PUT pointers back once 4K is reached. However, this match does not
always hold; for instance, take 64K pages. Hence, replace PAGE_SIZE
with SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and
replace the latter with the former.
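The resulting definitions could look like this (sketch):

    #include <linux/sizes.h>

    /* Fixed by the sim DMA interface (4, 8, 12 or 16K); independent
     * of the CPU page size. */
    #define SIM_BFR_SIZE            SZ_4K

    /* Explicit alias for places that really mean the CPU page size. */
    #define NVGPU_CPU_PAGE_SIZE     PAGE_SIZE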
Bug 200658101
Jira NVGPU-6018
Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_has_syncpoints is about more than channel synchronization, so
move it from channel_sync.c to nvhost.c, and move its declaration from
gk20a.h to nvhost.h.
As the debugfs knob is Linux specific, move it from struct gk20a to
struct nvgpu_os_linux.
Jira NVGPU-4548
Change-Id: I4236086744993c3daac042f164de30939c01ee77
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2318814
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
* Updated types and added error checks.
* Modified the GR condition for the ctxsw disable count.
A CERT-C error check was added to detect integer overflow, but the
condition (INT_MAX < gr->ctxsw_disable_count) only becomes true after
the overflow has already occurred. So the first overflow went
undetected and led to an assert on enable.
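A pre-increment guard of this shape catches the overflow before it can
happen (sketch):

    /* Reject the increment before it can overflow, instead of testing
     * the counter after the wrap has already occurred. */
    if (gr->ctxsw_disable_count >= INT_MAX) {
            nvgpu_err(g, "ctxsw disable count would overflow");
            return -ERANGE;
    }
    gr->ctxsw_disable_count++;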
JIRA NVGPU-3400
Change-Id: I6b0265a464f8f19efa7b0761612c6e9ffb3bd2bd
Signed-off-by: Sagar Kadamati <skadamati@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2206282
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Whitelist 2 MISRA Rule 17.2 violations in MM that were approved as
deviations in TID-278. These two violations are for recursive functions
that handle page table descriptors in the GMMU page table. In both
cases the recursion is tightly controlled by limiting its depth to the
number of possible page table levels in the hardware. For current
hardware that is a maximum recursion depth of 5, which is easily an
acceptable depth and should cause no stack issues.
JIRA NVGPU-3489
JIRA NVGPU-3492
JIRA TID-278
Change-Id: I5b801ff77f66bb8698f1d6adcd41ebbad3f86f92
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2230077
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The two MM functions nvgpu_set_pd_level() and nvgpu_vm_do_free_entries()
are simple recursive functions for processing page descriptors (PDs) by
traversing the levels in the PDs. MISRA Rule 17.2 prohibits functions
calling themselves because "unless recursion is tightly controlled, it
is not possible to determine before execution what the worst-case stack
usage could be." So, this change limits the recursion depth of each of
these functions to the maximum number of page table levels by checking
the level against the MM HAL max_page_table_levels().
This also required configuring this HAL in various unit tests, as they
were not previously using it.
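The guard has roughly this shape (self-contained sketch with an
illustrative fixed bound; the driver reads the bound from the HAL):

    #include <errno.h>

    #define MAX_PAGE_TABLE_LEVELS 5 /* illustrative; HAL-provided */

    /* Bounded-recursion pattern from the change described above. */
    static int set_pd_level(int lvl, int target_lvl)
    {
            if (lvl >= MAX_PAGE_TABLE_LEVELS) {
                    return -EINVAL; /* exceeds the hardware's levels */
            }
            /* ...program this level's entries here... */
            if (lvl == target_lvl) {
                    return 0; /* reached the PTE level */
            }
            return set_pd_level(lvl + 1, target_lvl);
    }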
JIRA NVGPU-3489
Change-Id: Iadee3fa5ba9f45cd643ac6c202e9296d75d51880
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2224450
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When reviewing some VM mapping code, I found the name of the function
that actually maps the passed buffer confusing. It was:
nvgpu_vm_map_compression_comptags()
This function does actually do the map and works out some
comptags-related state. But I think this would be easier to read in the
code as:
nvgpu_vm_do_map()
as this is much more literally what the function is doing.
Change-Id: I9a945b198887f5fb23b8be622c8411f97358dbed
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2220339
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are 2 instances of unreachable code in vm.c:
- The nvgpu_insert_mapped_buf() function always returned 0, so any
associated error handling was unreachable. This patch changes the
function to return void instead.
- A cleanup section to unmap a buffer in case of error was also
unreachable.
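The signature change, in sketch form (parameter list assumed):

    /* Before: the return value was always 0, so callers' error
     * handling was dead code. */
    static int nvgpu_insert_mapped_buf(struct vm_gk20a *vm,
                    struct nvgpu_mapped_buf *mapped_buffer);

    /* After: the function cannot fail. */
    static void nvgpu_insert_mapped_buf(struct vm_gk20a *vm,
                    struct nvgpu_mapped_buf *mapped_buffer);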
JIRA NVGPU-909
Change-Id: I6d8343b2994d314992a61dd640b10e68fbbc5e1e
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2217677
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add macros for whitelisting Coverity violations. These macros use
pragma directives. The pragma directives and whitelisting macros are
only enabled when a Coverity scan is being run.
The whitelisting macros have been added to a new header called
static_analysis.h. The contents of safe_ops.h (CERT C safe ops) have
been moved into static_analysis.h because this will be the new header
for static analysis related macros/defines/etc.
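The enablement gate has this general shape (sketch; the macro name is
illustrative, and Coverity defines __COVERITY__ while scanning):

    /* Deviation pragmas are emitted only when Coverity itself parses
     * the code; normal builds see empty macros. */
    #if defined(__COVERITY__)
    #define WHITELIST_PRAGMA(tokens)        _Pragma(#tokens)
    #else
    #define WHITELIST_PRAGMA(tokens)
    #endif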
JIRA NVGPU-3820
Change-Id: I9c63f20f670880b420415535738034619314b7c3
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2180600
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On QNX, getting the buffer size issues a devctl to nvmap, which can
fail as well. So, check the size returned by nvgpu_os_buf_get_size();
if a size of 0 is returned, return -EINVAL to the caller.
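The added check, in sketch form:

    size = nvgpu_os_buf_get_size(os_buf);
    if (size == 0UL) {
            /* The nvmap devctl failed or the buffer is bogus. */
            return -EINVAL;
    }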
Jira NVGPU-3891
Change-Id: Id13e7612b044e9228d78469ab4e43961a6877ce8
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2174458
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Advisory Rule 2.7 states that there should be no unused
parameters in functions.
This patch removes unused function parameters from the following:
* release_as_share_id() -> remove 'id' param
* nvgpu_pd_cache_look_up() -> remove 'g' param
* nvgpu_vm_get_pte_size_fixed_map() -> remove 'size' param
Jira NVGPU-3178
Change-Id: Id2c3b5378bba9bdc0312742fd8393fb3ec67c4df
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2178650
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rule 2.2 doesn't allow unused variable assignments; their presence may
indicate an error in the program's logic.
Rule 21.x doesn't allow reserved identifiers or macro names starting
with '_' to be reused or defined.
Jira NVGPU-3864
Change-Id: I8ee31c0ee522cd4de00b317b0b4463868ac958ef
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2163723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>