MISRA rule 2.2 defines dead code as "operations which are executed but
removal of these operations has no effect on program behavior".
Variable initializations violate this rule when the initialized
value is overwritten before it is ever used.
This patch fixes some of these reported violations.
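A minimal sketch of the pattern being removed (the variable and
function names are illustrative, not taken from the patch):

/* Before: the initializer is a dead operation (MISRA rule 2.2)
 * because the value is overwritten before it is ever read. */
u32 status = 0U;
status = read_status_reg(g);

/* After: the redundant initializer is dropped. */
u32 status = read_status_reg(g);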
Jira NVGPU-858
Change-Id: I694517ace8884c78c63f6346e455078d19b70b4d
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2110459
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are many miscellaneous HALs for various MM related functionality.
This patch aims to migrate all the remaining MM code from the <chip>/
mm_<chip>.[ch] files into HAL files under hal/.
Much of this is fairly straightforward copy/paste and updates to the
HAL init files.
The exception to that is the move of the leftover gv11b MMU fault
handling code in mm_gv11b.c. Having both a hal/mm/mm_gv11b.c and
a gv11b/mm_gv11b.c file causes tmake to choke, so the gv11b/mm_gv11b.c
file was moved to gv11b/mmu_fault_gv11b.c. This will be cleaned up in
a subsequent patch.
JIRA NVGPU-2042
Change-Id: I12896de865d890a61afbcb71159cff486119ffb8
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This "HAL" exists to handle the vGPU specific bind channel operation.
This patch moves the native function implementation to common/mm/vm.c
and renames the gk20a prefix to nvgpu to follow the convention for vGPU vs
native HAL functions.
JIRA NVGPU-2042
Change-Id: I02b9ebf0d53d58a6d2ede544e34f2b8ff1b1eb42
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2104540
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the buddy, page and bitmap allocators have individual init()
functions. This patch creates a common nvgpu_alloc_allocator_init()
function that dispatches to the individual init functions based on an
allocator type argument. This makes writing requirements for the
allocators easier.
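A sketch of the resulting dispatch (the exact signature, parameter
list and enum values here are assumptions, not the actual patch):

int nvgpu_alloc_allocator_init(struct gk20a *g, struct vm_gk20a *vm,
		struct nvgpu_allocator *na, const char *name,
		u64 base, u64 length, u64 blk_size, u64 max_order,
		u64 flags, enum nvgpu_allocator_type alloc_type)
{
	switch (alloc_type) {
	case BUDDY_ALLOCATOR:
		return nvgpu_buddy_allocator_init(g, vm, na, name, base,
				length, blk_size, max_order, flags);
	case PAGE_ALLOCATOR:
		return nvgpu_page_allocator_init(g, na, name, base,
				length, blk_size, flags);
	case BITMAP_ALLOCATOR:
		return nvgpu_bitmap_allocator_init(g, na, name, base,
				length, blk_size, flags);
	default:
		return -EINVAL;
	}
}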
Jira NVGPU-991
Change-Id: If94e3496f46f036460ef9f1831852e6fc19d3a0b
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2097962
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make a hal/mm/gmmu sub-unit for the GMMU HAL code. Also move the
gk20a specific HAL code there. gp10b will happen in the next patch.
This change also updates all the GMMU related HAL usage, of which
there is quite a bit. Generally the only change is that a .gmmu
sub-struct needs to be inserted into the HAL path. Each HAL init was
also updated.
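Illustratively, a call site changes along these lines (the op name
is one example; arguments elided):

/* Before: GMMU ops live directly in the mm HAL. */
err = g->ops.mm.gmmu_map(vm, ...);

/* After: a .gmmu sub-struct is inserted into the path. */
err = g->ops.mm.gmmu.map(vm, ...);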
JIRA NVGPU-2042
Change-Id: I6c46bdfddb8e021f56103d9457fb3e2a226f8947
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved cbc related code and data from gr to the cbc unit.
Ltc and cbc related data is moved out of the gr header:
1. Ltc related data moved from gr_gk20a -> gk20a; it
will eventually be moved to the ltc unit:
u32 slices_per_ltc;
u32 cacheline_size;
2. cbc data moved from gr_gk20a -> nvgpu_cbc:
u32 compbit_backing_size;
u32 comptags_per_cacheline;
u32 gobs_per_comptagline_per_slice;
u32 max_comptag_lines;
struct gk20a_comptag_allocator comp_tags;
struct compbit_store_desc compbit_store;
3. The following config data moved from gr_gk20a -> gk20a:
u32 comptag_mem_deduct;
u32 max_comptag_mem;
These are part of the initial config, which must be available
during nvgpu_probe, so they can't be moved to nvgpu_cbc.
Modified code to use the updated data structures.
Removed the cbc init sequence from gr and added it to the
common cbc unit. This sequence is now called
from common nvgpu init code.
JIRA NVGPU-2896
JIRA NVGPU-2897
Change-Id: I1a1b1e73b75396d61de684f413ebc551a1202a57
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033286
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
vgpu_vm_init and vgpu_vm_remove are called directly from
common code if virtualization is supported. Introduce mm
HAL ops vm_as_alloc_share and vm_as_free_share and call
these functions through the HAL ops. Also rename
vgpu_vm_init to vgpu_vm_as_alloc_share and vgpu_vm_remove to
vgpu_vm_as_free_share, since the old names were too generic;
the new names reflect their actual functionality.
For now these HAL ops are initialized only for vgpu.
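A sketch of how a common-code call site looks after this change
(the exact signatures are assumptions):

/* Common code no longer calls vgpu functions directly; it goes
 * through the mm HAL op when one is installed. */
if (g->ops.mm.vm_as_alloc_share != NULL) {
	err = g->ops.mm.vm_as_alloc_share(g, vm);
	if (err != 0) {
		return err;
	}
}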
Jira GVSCI-517
Change-Id: I7c5af1ab5a64ce562092f75b1488524e93e8f53f
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032310
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a Compression Bit Cache (CBC) unit to keep comptag
cache related functionality in one place. This patch moves the
following gpu ops from ltc to cbc and renames them accordingly:
void (*init)(struct gk20a *g, struct gr_gk20a *gr);
u64 (*get_base_divisor)(struct gk20a *g);
int (*alloc_comptags)(struct gk20a *g, struct gr_gk20a *gr);
int (*ctrl)(struct gk20a *g, enum gk20a_cbc_op op,
u32 min, u32 max);
u32 (*fix_config)(struct gk20a *g, int base);
To avoid ambiguity, the init_comptags function pointer is
renamed to alloc_comptags.
The following function moved from ltc.h to cbc.h:
nvgpu_ltc_alloc_cbc -> nvgpu_cbc_alloc
The file implementing nvgpu_cbc_alloc is also renamed:
os/ltc.c -> os/linux-cbc.c
JIRA NVGPU-2897
Change-Id: Ide32a98567e9a3f0a784d62221a6f484f8343e53
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030194
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new directory mm under the common vgpu path and move
all vGPU common mm files under that directory. This follows the
native directory structure.
Move the vgpu vm functions from mm_vgpu.c to a new file, vm_vgpu.c,
and rename the corresponding header file from vm.h to vm_vgpu.h.
Jira GVSCI-334
Change-Id: Ib77efca0b919478284101894ab16919ba03f71d2
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013352
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch ensures that WARN and WARN_ON always return
void, and introduces a new nvgpu_do_assert construct to trigger the
equivalent of WARN_ON(true) so that the stack can be dumped
(depending on OS support).
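A sketch of the new construct (one possible Linux backend; the exact
expansion is OS dependent and assumed here):

/* WARN/WARN_ON now return void, so callers can no longer leave a
 * return value unused in violation of Rule 17.7. */
static inline void nvgpu_do_assert(void)
{
	/* Equivalent of WARN_ON(true); the stack dump depends on
	 * OS support. */
	WARN(true, "nvgpu assert");
}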
JIRA NVGPU-677
Change-Id: Ie2312c5588ceb5b1db825d15a096149b63b69af4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The file semaphore.c is now split into 4 units, namely
semaphore, semaphore_hw, semaphore_pool and semaphore_sea.
Each of these units now has a separate compilation unit under
common/semaphore/. The public APIs corresponding to each unit are
present in include/nvgpu/semaphore.h. The dependency graph of the
units is as follows, where '->' indicates that the left side
depends on the right side:
semaphore -> semaphore_hw -> semaphore_pool -> semaphore_sea
Some of the other major changes made in this patch are as follows:
i) Renamed some of the functions.
ii) Some functions are changed from private to public.
iii) The public header for semaphore contains only declarations of
the corresponding structs as opaque types.
iv) Constructed a private header to contain internal functions common
to all the units and struct definitions corresponding to each unit.
v) Added new functions to provide access to internal members of the
units.
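For example, the opaque-struct split in (iii) and (iv) has this
shape (the private header name and member names are assumptions):

/* include/nvgpu/semaphore.h: public, opaque declaration only. */
struct nvgpu_semaphore_pool;

u64 nvgpu_semaphore_pool_gpu_va(struct nvgpu_semaphore_pool *p,
		bool global);

/* common/semaphore/semaphore_priv.h: full definition, visible
 * only to the four semaphore compilation units. */
struct nvgpu_semaphore_pool {
	struct nvgpu_semaphore_sea *sema_sea;
	u64 gpu_va;
	struct nvgpu_list_node pool_list_entry;
};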
Jira NVGPU-2076
Change-Id: I6f111647ba9a9a9f8ef9c658f316cd5d6276c703
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022782
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When nvgpu_vm_unmap_sync fails, the caller currently bails
out without decreasing the buffer refcount. This prevents the buffer
from being released if a deferred job completes after the
timeout (which was observed twice during overnight
stress tests). It also means that the fixed address is not
reusable.
Emit a warning when nvgpu_vm_unmap_sync fails, but proceed
with decreasing the refcount.
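A sketch of the adjusted flow (function names and the warning text
are assumptions based on the description above):

if (nvgpu_vm_unmap_sync(vm, mapped_buffer) != 0) {
	/* Warn, but fall through: the refcount must still drop so
	 * that the buffer can be released later. */
	nvgpu_warn(g, "sync unmap failed, releasing anyway");
}
nvgpu_ref_put(&mapped_buffer->ref, nvgpu_vm_unmap_ref_internal);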
Bug 200492802
Change-Id: I4b7c90ffac6fd479b91d96a3b82c36d17b85ecdc
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2023097
(cherry picked from commit c8ccc87998afc599303857a85cd4553796034164)
Reviewed-on: https://git-master.nvidia.com/r/2024304
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A function parameter of nvgpu_vm_map is fixed for MISRA, which does
not allow implicit assignment of objects to a narrower or different
essential type. This fixes a few enum violations.
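The shape of the fix (the enum is from nvgpu; the before/after
parameter declaration is illustrative):

enum gk20a_mem_rw_flag {
	gk20a_mem_flag_none,
	gk20a_mem_flag_read_only,
	gk20a_mem_flag_write_only,
};

/* Before (violation): the enum argument is implicitly assigned
 * to a parameter of a different essential type. */
static void map_one(u32 rw_flag);

/* After: the parameter carries the enum type itself. */
static void map_one(enum gk20a_mem_rw_flag rw_flag);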
JIRA NVGPU-1584
Change-Id: I2353f7501c3326f792f5942b2e247badf03349cf
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1986509
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that replace references to container_of()
in vm.c:
* nvgpu_mapped_buf_from_ref
* vm_gk20a_from_ref
The implementation of the following routine has also been updated
using the same approach to eliminate the direct reference to
container_of():
* gk20a_from_as
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA compliant code) and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
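For reference, the replacement functions follow the same shape as
the list/rbtree helpers (a sketch; see vm.c for the actual code):

static struct nvgpu_mapped_buf *nvgpu_mapped_buf_from_ref(
	struct nvgpu_ref *ref)
{
	return (struct nvgpu_mapped_buf *)
		((uintptr_t)ref - offsetof(struct nvgpu_mapped_buf, ref));
}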
JIRA NVGPU-782
Change-Id: I530f8d0cbe37445535b82578953eb5193ccf682c
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989570
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals or casting operands
to have same type of operands when an arithmetic operation is
performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
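A minimal example of both fix styles (the variables and the helper
are illustrative):

u32 offset;
u32 idx = 3U;
s32 adj = compute_adjustment();

/* Literal fix: "U" suffix keeps both operands unsigned. */
offset = idx * 4U;

/* Cast fix: convert the signed operand before the operation. */
offset = idx + (u32)adj;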
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a flag that lets userspace enable the unified VM functionality
on a selective basis. This feature is working for all cases except
a single MODS trace. This will allow test coverage to be selectively
added in certain userspace tests as well to help prevent this feature
from bit rotting (as it has historically done).
Also update the unit test for the page table management in the GMMU
to reflect this new flag. It's been set to false since the target
platform for safety is currently not using unified address spaces.
Bug 200438879
Change-Id: Ibe005472910d1668e8372754be8dd792773f9d8c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951864
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split the nvgpu_sgt code out from the nvgpu_mem code. Although the
two chunks of code are related, the SGT code is distinct and as
such should be its own unit. To do this a new source file has been
added - nvgpu_sgt.c - which contains all the nvgpu_sgt common APIs.
These are the facade APIs to abstract the actual details of how any
given nvgpu_sgt is actually implemented.
An abstract unit - nvgpu_sgt_os - was also defined. This unit
exists solely for the nvgpu_sgt unit to call so that the OS
specific nvgpu_sgt_os_create_from_mem() API can be moved from the
common nvgpu_sgt unit. Note that this also updates the name of the
function the OS specific units are expected to provide. Common code
may still use the generic nvgpu_sgt_create_from_mem() API.
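A sketch of the facade relationship (signatures assumed): common
code keeps calling the generic API, which forwards to the per-OS
implementation now housed in the nvgpu_sgt_os unit.

struct nvgpu_sgt *nvgpu_sgt_create_from_mem(struct gk20a *g,
		struct nvgpu_mem *mem)
{
	return nvgpu_sgt_os_create_from_mem(g, mem);
}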
JIRA NVGPU-1391
Change-Id: I37f5b2bbf9f84c0fb6bc296c3e04ea13518bd4d0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946012
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the result of a bitwise operation is used as a
boolean in the controlling expression of if and loop statements.
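The typical fix shape (the flag is from the nvgpu uapi; the
surrounding variables are illustrative):

/* Before (violation): bitwise result used as a boolean. */
if (flags & NVGPU_VM_MAP_FIXED_OFFSET)
	map_addr = offset;

/* After: explicit comparison against 0U. */
if ((flags & NVGPU_VM_MAP_FIXED_OFFSET) != 0U)
	map_addr = offset;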
JIRA NVGPU-1020
Change-Id: I6a756ee1bbb45d43f424d2251eebbc26278db417
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936334
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable as
boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
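Typical fix shapes (the variables and helper are illustrative):

/* Before (violations): integer and pointer used as booleans. */
if (num_pages)
	return;
while (mapped_buffer)
	mapped_buffer = next_buf(mapped_buffer);

/* After: explicit comparisons. */
if (num_pages != 0U)
	return;
while (mapped_buffer != NULL)
	mapped_buffer = next_buf(mapped_buffer);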
JIRA NVGPU-1022
Change-Id: Ia96f3bc6ca645ba8538faf7a9fa3a9ccf9df40d3
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fixed the erroneous timeout message in sync-unmap and corrected the
condition for returning ETIMEDOUT.
In the original code flow, after waiting for the mapped_buffer
release, nvgpu_timeout_expired is called to check whether to return
ETIMEDOUT. However, if there is a delay between the end of the wait
and the nvgpu_timeout_expired check, ETIMEDOUT is returned and the
timeout message printed even if the mapped_buffer was released. This
is incorrect behavior. This patch fixes it by letting the refcount of
the mapped_buffer be the only source used to determine the return
value.
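A sketch of the fixed flow (helper names from nvgpu; the refcount
access and the release threshold are assumptions):

do {
	if (nvgpu_atomic_read(&mapped_buffer->ref.refcount) == 1) {
		return 0;	/* released: not a timeout */
	}
	nvgpu_msleep(10);
} while (nvgpu_timeout_expired(&timeout) == 0);

/* The final refcount observation, not the timer, decides. */
if (nvgpu_atomic_read(&mapped_buffer->ref.refcount) == 1) {
	return 0;
}
nvgpu_err(g, "sync-unmap timed out");
return -ETIMEDOUT;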
Bug 200434475
Change-Id: I8ca170c811da415c24045ab643da26476bc7463c
Signed-off-by: Kyle Guo <kyleg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945388
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch contains fixes for all 17.7 violations in
standard C functions in common code.
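The typical fix shape for a standard C function whose result is
deliberately unused (the buffers are illustrative):

/* Rule 17.7: explicitly discard the return value. */
(void) memcpy(dst, src, size);
(void) strncpy(name, "nvgpu", sizeof(name) - 1U);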
JIRA NVGPU-1036
Change-Id: Id6dea92df371e71b22b54cd7a521fc22812f9b69
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929899
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update the PD cache code to handle 64KB pages. To do this the
number of partial/full lists is expanded for when we have 64K
pages. Currently we only explicitly support 4K and 64K page
sizes. Other page sizes (16K for example) will fail compilation
during preprocessing.
This change also cleans up the definitions for some of the
internal structs. They have been moved into pd_cache.c since
they are not used outside of pd_cache.c.
This allows the following functions to be removed from the global
context:
__nvgpu_pd_cache_alloc_direct()
__nvgpu_pd_cache_free_direct()
They have been replaced by calls to nvgpu_pd_{alloc,free}().
The nvgpu_pd_mem_entry alloc_map also had to be expanded to a
real bitmap: 32 or 64 bits is not sufficient for packing 256
byte PDs into a 64K page (there are 256 PDs per nvgpu_pd_mem_entry
in that case). To avoid doing too many find_first_zero
operations on the bitmap, an 'allocs' field was also added which
tracks how many allocations have been made. We can use this instead
of comparing a mask against the bitmap to determine if an
nvgpu_pd_mem_entry is full.
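A sketch of the expanded tracking (field sizes and names are
assumptions; see pd_cache.c for the real structs):

struct nvgpu_pd_mem_entry {
	struct nvgpu_mem pd_mem;
	/* 64K page / 256B PDs -> up to 256 PDs per entry; a plain
	 * u64 alloc_map can no longer hold the state. */
	DECLARE_BITMAP(alloc_map, 256);
	u32 allocs;	/* PDs currently allocated from this entry */
	u32 pd_size;
};

static bool nvgpu_pd_cache_entry_is_full(struct nvgpu_pd_mem_entry *e,
		u32 pds_per_entry)
{
	/* Counter check replaces a mask-vs-bitmap comparison. */
	return e->allocs == pds_per_entry;
}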
Note: there is still a limitation with the TLB invalidate code:
it simply assumes an nvgpu_mem maps 1:1 to a PDB. This means
we can't invalidate a PDB allocated at an offset greater than 0
in an nvgpu_pd_mem_entry, which in turn means we must always use
a full page size for a context's PDB.
Bug 1977822
Change-Id: I6a7a3a95b7c902bc6487cba05fde58fbc4a25da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1718755
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of function pointers & integer
types as booleans in the controlling expression of an if statement or
an iteration statement.
Fix violations where a function pointer, or a function whose return
value is an integer, is used as a boolean in the controlling
expression of if and loop statements.
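Typical fix shapes (the op shown is one example):

/* Before (violations): function pointer and int return used
 * directly as booleans. */
if (g->ops.mm.init_mm_setup_hw)
	err = g->ops.mm.init_mm_setup_hw(g);
if (err)
	return err;

/* After: explicit comparisons. */
if (g->ops.mm.init_mm_setup_hw != NULL)
	err = g->ops.mm.init_mm_setup_hw(g);
if (err != 0)
	return err;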
JIRA NVGPU-1021
Change-Id: Ic5336268394ba4396ce80744c25930d2fb44dc42
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932147
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fixed the timeout checking flow in sync-unmap, where the timeout
message could be printed even though no timeout occurred.
The original code flow doesn't re-check the refcount of a
mapped_buffer after the final 10ms wait. So if the buffer is released
during the last round of waiting, the timeout message will still be
printed. The new code flow combines the refcount and timeout checks
so that there can't be an inconsistency between the two.
Bug 200434475
Change-Id: Id0a0aebcb24906a1ec7113e669227788e729564b
Signed-off-by: Kyle Guo <kyleg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1930236
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch moves the increment and decrement of the user mapped
buffer count to the insert/remove mapped buffer functions since
this value should only ever change when these functions are called.
Bug 200105199
Change-Id: I5b0a86d00e9e948c48e313153a668eb2e10fca49
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917791
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Nvgpu uses many ways to check if sync points are enabled. The four
ways used to be:
platform->has_syncpoints
g->has_syncpoints
nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS)
gk20a_platform_has_syncpoints()
This patch standardizes all usage on nvgpu_has_syncpoints(),
which is based on gk20a_platform_has_syncpoints() - just renamed to
be general to nvgpu.
All usage of the other forms has now been consolidated. However,
under the hood nvgpu_has_syncpoints() does check the is_enabled
flag. This flag is now set where g->has_syncpoints used to be set
based on the platform data.
The basic dependency chain is this:
nvgpu_has_syncpoints -> NVGPU_HAS_SYNCPOINTS ->
platform->has_syncpoints
However, note: there are several places where syncpoints can be
disabled if some other driver initialization fails (for ex. host1x).
Also note that nvgpu_has_syncpoints() also considers a disable
variable set by debugfs.
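A sketch of the consolidated helper (the debugfs disable variable
name is an assumption):

bool nvgpu_has_syncpoints(struct gk20a *g)
{
	return nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS) &&
	       !g->disable_syncpoints;
}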
Bug 2327574
Change-Id: Ia2375a80f5f2e27285e6175568dd13e6bb25fd33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803975
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
Fixes for all the pd_cache functions that use '__' prefixes. This
was trivial: the '__' prefix was simply deleted.
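One representative pair (the signature is from the pd_cache
interface):

/* Before: */
int __nvgpu_pd_alloc(struct vm_gk20a *vm,
		struct nvgpu_gmmu_pd *pd, u32 bytes);
/* After: */
int nvgpu_pd_alloc(struct vm_gk20a *vm,
		struct nvgpu_gmmu_pd *pd, u32 bytes);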
JIRA NVGPU-1029
Change-Id: Ia91dabe3ef97fb17a2a85105935fb3a72d7c2c5e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813643
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
Handle the 21.2 fixes for nvgpu_mem.c and mm.c; this deletes the
'__' prefixes and slightly renames the __nvgpu_aperture_mask()
function since there's a coherent version and a general version.
Change-Id: Iee871ad90db3f2622f9099bd9992eb994e0fbf34
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813623
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
This fixes places in the public allocator APIs. This consists of
the various init routines which are used to create an allocator
and the debug macro used within the allocator code.
The buddy allocator was handled by collapsing the internal
'__' prepended version with the non-prefixed version. The only
required change was in the page_allocator code which now had to
pass in a NULL vm pointer (since the VM is not needed for managing
VIDMEM).
JIRA NVGPU-1029
Change-Id: I484a144e61789bf594c525c1ca307b96d120830f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813578
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The issues are:
1. Non-fixed allocs must take into account explicit PTE size
requests. Previously the PTE size was determined from the
allocation size, which was incorrect. To do this, the PTE size
is now plumbed through all GPU VA allocations. This is what
the new alloc_pte() op does.
2. Fix buddy PTE size assignment. This changes a '<=' into a
'<' in the buddy allocation logic (see the sketch after this
list). Effectively this now leaves the PTE size for buddy blocks
equal to the PDE block size as 'ANY'.
This prevents a buddy block of PDE size which has yet to be
allocated from having a specific PTE size. Without this it's
possible to do a fixed alloc that fails unexpectedly due to
mismatching PDE sizes.
Consider two PDE block sized fixed allocs that are contained
in one buddy twice the size of a PDE block. Let's call these
fixed allocs S and B (small and big). Let's assume that two
fixed allocs are done, each targeting S and B, in that order.
With the current logic the first alloc, when we create the
two buddies S and B, causes both S and B to have a PTE size of
SMALL. Now when the second alloc happens we attempt to find
a buddy B with a PTE size of either BIG or ANY. But we cannot
because B already has size SMALL. This causes us to appear
to have a conflicting fixed alloc despite this not being
the case.
3. Misc cleanups & bug fixes:
- Clean up some MISRA issues
- Delete an extraneous unlock that could have caused a
deadlock.
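A sketch of the comparison change in (2) (the variable names are
assumptions about the buddy allocator internals):

/* Was '<=': a buddy exactly PDE-block sized inherited a concrete
 * PTE size. With '<' it stays ANY until actually allocated. */
if (bud->order < pte_blk_order) {
	bud->pte_size = pte_size;
} else {
	bud->pte_size = BALLOC_PTE_SIZE_ANY;
}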
Bug 200105199
Change-Id: Ib5447ec6705a5a289ac0cf3d5e90c79b5d67582d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768582
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function
to return void. This patch contains fixes for calls to
nvgpu_mutex_init and
improves related error handling.
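The typical fix shape (the mutex shown is illustrative):

err = nvgpu_mutex_init(&vm->update_gmmu_lock);
if (err != 0) {
	nvgpu_err(g, "Error in update_gmmu_lock mutex initialization");
	return err;
}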
JIRA NVGPU-677
Change-Id: I609fa138520cc7ccfdd5aa0e7fd28c8ca0b3a21c
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1805598
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>