MISRA Rule 17.7 requires the return value of all non-void functions to
be used. The fix is either to use the return value or to change the
function to return void. This patch fixes all 17.7 violations involving
standard C functions in GPU-specific files.
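Both fix patterns, as a minimal sketch (the call sites and the helper
name below are hypothetical):

    /* pattern 1: actually consume the return value */
    err = nvgpu_mutex_init(&g->dbg_sessions_lock);
    if (err != 0) {
        return err;
    }

    /* pattern 2: no caller needs the result, so return void */
    static void gk20a_handle_runlist_event(struct gk20a *g);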
JIRA NVGPU-1036
Change-Id: Iefadc38bdbea4f02de3c24b6ad1c71d6eb0af4bd
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929903
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
All the netlist parsing code is currently under the GR unit, but netlist
ucode parsing does not really have any logical dependency on GR.
Hence separate out a new unit common/netlist/ that parses the netlist
image and stores/exposes its contents through the netlist_vars structure.
Structure nvgpu_netlist_vars is added to structure gk20a.
Move the netlist parsing code to common/netlist/netlist.c and the chip
specific files to common/netlist/netlist_<chip>.c.
Move the simulation netlist parsing to common/netlist/netlist_sim.c.
Rename the g.ops.gr_ctx HAL to g.ops.netlist.
Rename all the exported structures to the form nvgpu_*.
Rename all exported functions to the form nvgpu_netlist_*().
Add netlist initialization to the GPU boot path, and add
deinitialization to the GPU remove path.
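A hedged sketch of the resulting boot/remove hooks (the function names
follow the nvgpu_netlist_*() convention above; exact signatures are
assumed):

    /* boot path */
    err = nvgpu_netlist_init_ctx_vars(g);
    if (err != 0) {
        nvgpu_err(g, "failed to parse netlist image");
        return err;
    }

    /* remove path */
    nvgpu_netlist_deinit_ctx_vars(g);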
Jira NVGPU-1317
Change-Id: I9af86e3b3230a89db5260cc8ed96ff5f72938c9a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936454
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a/gr_ctx_gk20a.c we currently read the GR register
gr_fecs_ctx_state_store_major_rev_id_r() directly, which adds a
dependency on the GR h/w header.
Add a new HAL g.ops.gr.get_fecs_ctx_state_store_major_rev_id() to
read this register, and use it instead.
Also remove the h/w header from gr_ctx_gk20a.c.
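A minimal sketch of the new HAL, assuming the usual gk20a register
accessor:

    u32 gk20a_gr_get_fecs_ctx_state_store_major_rev_id(struct gk20a *g)
    {
        return gk20a_readl(g, gr_fecs_ctx_state_store_major_rev_id_r());
    }

    /* wired up during HAL init */
    g->ops.gr.get_fecs_ctx_state_store_major_rev_id =
        gk20a_gr_get_fecs_ctx_state_store_major_rev_id;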
Jira NVGPU-1317
Change-Id: Iab64fbfacff4d7ce4f3b61ca90b00ddc77e29551
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936453
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix some mistakes from commit 0fbc1a2652 (gpu: nvgpu: avoid recursion in
runlist construction) and commit 998bf379df (gpu: nvgpu: add
runlist_append_tsg) for MISRA rules 10.3 and 10.4.
- cast a sizeof to u32 in a calculation so the operand sizes match,
- make the NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_* constants unsigned so
  comparisons match in signedness.
Both fixes are sketched below.
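A hedged sketch (the surrounding code and the constant values are
assumed):

    /* Rule 10.3: operand sizes now match */
    runlist_size = (u32)sizeof(u32) * num_entries;

    /* Rule 10.4: unsigned constants for unsigned comparisons */
    #define NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_LOW    0U
    #define NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_MEDIUM 1U
    #define NVGPU_FIFO_RUNLIST_INTERLEAVE_LEVEL_HIGH   2U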
Jira NVGPU-1174
Change-Id: I00aa9758ca4352d8eb53a0e8ded42a1ba3b14561
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1938069
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reduce the size of memory allocations in the channel debug dump by
capturing only the necessary values from the instance block. This also
simplifies the allocation path slightly; the downside is that a new
capture_channel_ram_dump HAL has to be added to read the interesting
parts explicitly beforehand into the now smaller staging buffer.
Also rename struct ch_state to struct nvgpu_channel_dump_info.
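A hedged sketch of the new flow (the HAL's argument order and the
allocation helper usage are assumed):

    struct nvgpu_channel_dump_info *info;

    /* small fixed-size staging buffer instead of a raw inst block copy */
    info = nvgpu_kzalloc(g, sizeof(*info));

    /* the new HAL captures only the interesting instance-block words */
    g->ops.fifo.capture_channel_ram_dump(g, ch, info);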
Jira NVGPU-886
Change-Id: I5d7518d9d474b0b728b183383bc83d89ecf91b98
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1928207
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 18.7 doesn't allow flexible array members. To work around
that, modify the instance block member in struct ch_state to be an
explicit pointer and allocate it separately for simplicity.
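The change in sketch form (the other ch_state fields and the buffer
size helper are assumed):

    struct ch_state {
        int pid;
        int refs;
        bool deterministic;
        u32 *inst_block;  /* was the flexible member: u32 inst_block[]; */
    };

    /* allocated separately now */
    ch_state->inst_block = nvgpu_kzalloc(g, ram_in_alloc_size_v());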
Jira NVGPU-886
Change-Id: I34299bec79bf7706f9cdfa42dee7fba765c9f312
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1928205
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 10.4 doesn't allow arithmetic conversions on operands of
different essential type category.
Fix violations where an arithmetic conversion is performed on boolean
and non-boolean types.
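A typical fix pattern (the variable names are hypothetical):

    /* before (violates 10.4): boolean mixed into unsigned arithmetic */
    reg_val = base_val | disable;   /* 'disable' is a bool */

    /* after: convert the boolean explicitly */
    reg_val = base_val | (disable ? 1U : 0U);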
JIRA NVGPU-994
Change-Id: I2af9937678462b632bb6ec6178e10d02104fc3bc
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1832337
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update the PD cache code to handle 64KB pages. To do this the
number of partial/full lists is expanded for when we have 64K
pages. Currently we only explicitly support 4K and 64K page
sizes. Other page sizes (16K for example) will fail compilation
during preprocessing.
This change also cleans up the definitions for some of the
internal structs. They have been moved into pd_cache.c since
they are not used outside of pd_cache.c.
This allows the following functions to be removed from the global
context:
__nvgpu_pd_cache_alloc_direct()
__nvgpu_pd_cache_free_direct()
They have been replaced by calls to nvgpu_pd_{alloc,free}().
The nvgpu_pd_mem_entry alloc_map also had to be expanded to a
real bitmap: 32 or 64 bits is not sufficient for packing 256
byte PDs into a 64K page (there are 256 PDs per nvgpu_pd_mem_entry
in that case). To avoid doing too many find_first_zero
operations on the bitmap, an 'allocs' field was also added that
tracks how many allocs have been done. We can use this instead of
comparing a mask against the bitmap to determine whether an
nvgpu_pd_mem_entry is full.
Note: there's still a limitation with the TLB invalidate code:
it simply assumes an nvgpu_mem maps 1:1 to a PDB. This means
we can't invalidate a PDB allocated at an offset greater than 0
in an nvgpu_pd_mem_entry, which in turn means we must always use
a full page size for a context's PDB.
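A hedged sketch of the expanded tracking state (64K / 256B gives the
256 slots per entry noted above; the exact field layout is assumed):

    #define NVGPU_PD_CACHE_SLOTS 256U   /* 64K page / 256B PD */

    struct nvgpu_pd_mem_entry {
        struct nvgpu_mem pd_mem;
        DECLARE_BITMAP(alloc_map, NVGPU_PD_CACHE_SLOTS);
        u32 allocs;   /* entry is full when allocs == slots in use;
                         no bitmap scan or mask compare needed */
        struct nvgpu_list_node list_entry;
    };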
Bug 1977822
Change-Id: I6a7a3a95b7c902bc6487cba05fde58fbc4a25da5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1718755
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The context switch timeout works by triggering a hardware timeout at 10
Hz. When handling these, we check whether a channel has actually timed
out. Currently the timeout limit can be shorter than the 10 Hz interval,
which always causes us to recover a channel, but the handler could also
detect progress if any was made during the interval.
Handling both situations at the same time would reuse the function-local
channel pointer after a loop has finished and would cause memory
corruption. Fix this by making the two branches mutually exclusive, and
move the recovery case to happen first because that's how our tests
assume things to work.
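The mutually exclusive handling in sketch form (the helper names are
made up):

    if (ctxsw_timed_out) {
        /* recovery first: the tests assume this ordering */
        recover_tsg(g, tsg);
    } else {
        /* progress was made within the 10 Hz interval: rearm only */
        rearm_ctxsw_timeout(g, tsg);
    }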
Jira NVGPU-967
Change-Id: I26aa0fa7fd80ab42a9a1a93a6cca2cd29c9d3f3f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932449
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 8.2 makes it mandatory for all function prototypes
to have named parameters. There were a few instances where parameter
name(s) for function prototypes were omitted. This patch fixes
them.
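A representative before/after (the prototype itself is hypothetical):

    /* before: unnamed parameters */
    int gk20a_fifo_preempt_tsg(struct gk20a *, struct tsg_gk20a *);

    /* after */
    int gk20a_fifo_preempt_tsg(struct gk20a *g, struct tsg_gk20a *tsg);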
JIRA NVGPU-861
Change-Id: I6cb28482becc2938c574b7d8c6f22463d346d27a
Signed-off-by: smadhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917939
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
There are two places where C standard identifiers are being
redefined, which is forbidden by MISRA rule 21.2. This patch
fixes the two violations, 'div' and 'exp', by renaming them
'divisor' and 'exponent' respectively. 'man' is also renamed
'mantissa' for uniformity.
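The renames in sketch form ('div' and 'exp' collide with stdlib.h and
math.h identifiers; the surrounding clk code is assumed):

    u32 divisor;    /* was: div */
    u32 exponent;   /* was: exp */
    u32 mantissa;   /* was: man, renamed for uniformity */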
JIRA NVGPU-1035
Change-Id: I0bbd38e5021c0047f9ce646dd6f90a30a3e4f3a5
Signed-off-by: smadhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1852429
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 17.2 forbids recursion as a hazard to stack space. To
comply, and additionally to make the code somewhat more straightforward
to read, rewrite the runlist construction with three explicit functions
that work as the three levels of the earlier recursion. These levels map
to the three priority levels of TSGs, and having more than three is
unlikely.
When "runlist interleaving" is enabled, TSGs with higher priorities get
interleaved between the switch of each pair of lower-level priority
TSGs, so that the latency for a job at priority level X is no more than
all jobs' timeslices of priority X and higher, plus at most one job at a
lower level.
This can be illustrated as follows (low, medium, high TSGs 1 and 2):
L1 L2 (only low-priority TSGs)
H1 H2 (only high-priority TSGs)
H1 H2 M1 H1 H2 M2 (no low-priority TSGs)
M1 M2 L1 M1 M2 L2 (no high-priority TSGs)
H1 H2 L1 H1 H2 L2 (no medium-priority TSGs)
H1 H2 M1 H1 H2 M2 H1 H2 L1 H1 H2 M1 H1 H2 M2 H1 H2 L2 (no empty levels)
Without interleaving, the items are simply grouped by priority.
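A hedged sketch of the three explicit levels (the function names are
made up; each level emits its own entries and re-emits all higher
levels between consecutive items, matching the illustration above):

    /* high: plain sequence of high-priority TSG entries */
    u32 *append_high(struct fifo_gk20a *f, u32 *rl_entry);

    /* medium: after each medium TSG, re-append all high TSGs */
    u32 *append_medium(struct fifo_gk20a *f, u32 *rl_entry);

    /* low: after each low TSG, re-append medium (and thus high) */
    u32 *append_low(struct fifo_gk20a *f, u32 *rl_entry);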
Jira NVGPU-1174
Change-Id: Ic3b5106945df7105633730ecd1d150af770a5e83
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1918226
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add nvgpu_memcpy/nvgpu_memcmp which are MISRA-compliant versions
(Rule 21.15) of memcpy/memcmp.
Also convert some clk/gr calls over to use the new routines;
all of the remaining calls will be converted in subsequent patches.
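A sketch of the wrappers (the u8-pointer signatures sidestep the Rule
21.15 pointer-type mismatch; the exact prototypes and the call site are
assumed):

    void nvgpu_memcpy(u8 *destb, const u8 *srcb, size_t n);
    int  nvgpu_memcmp(const u8 *b1, const u8 *b2, size_t n);

    /* hypothetical converted call site */
    nvgpu_memcpy((u8 *)&dst_entry, (const u8 *)&src_entry,
                 sizeof(dst_entry));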
JIRA NVGPU-849
Change-Id: Ib3a602cd08886764ba9a50285462a8b07bfb18ba
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1919470
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the use of function pointers and integer
types as booleans in the controlling expression of an if statement or
an iteration statement.
Fix violations where a function pointer, or a function whose return
value is an integer, is used as a boolean in the controlling expression
of if and loop statements.
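The usual fix patterns (the HAL and helper names are hypothetical):

    /* function pointer: compare explicitly against NULL */
    if (g->ops.gr.handle_notify_pending != NULL) {
        g->ops.gr.handle_notify_pending(g, &isr_data);
    }

    /* integer return value: compare explicitly against 0 */
    if (gk20a_engine_is_busy(g) != 0) {
        return -EBUSY;
    }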
JIRA NVGPU-1021
Change-Id: Ic5336268394ba4396ce80744c25930d2fb44dc42
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932147
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gk20a/dbg_gpu_gk20a.c to common/debugger.c and make it a
separate common unit.
Also rename gk20a/dbg_gpu_gk20a.h to include/nvgpu/debugger.h.
We had two different HALs for the debugger, gops.debugger and
gops.dbg_session_ops; combine them into the single HAL gops.debugger
and remove gops.dbg_session_ops.
Rename all exported APIs from debugger.h to the form nvgpu_*().
Jira NVGPU-1013
Change-Id: I136dc7786e3b2065921eb03b99f16049212f3cd2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1920075
Reviewed-by: Sachin Jadhav <sachinj@nvidia.com>
Tested-by: Sachin Jadhav <sachinj@nvidia.com>
Address MISRA Rule 20.7 violation: a macro parameter expands into an
expression without being wrapped by parentheses.
Some of the exceptions that Coverity is not able to catch are:
1. Macro parameters passed as parameters to another macro,
i.e. NVGPU_ACCESS_ONCE. Fix these with additional parentheses.
2. A macro parameter used as a type, i.e. the type parameter in
container_of.
While at it, update the copyright date range for list.h and types.h.
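A minimal illustration of the rule and the type-parameter exception
(the first macro is hypothetical; container_of is shown in its usual
kernel form):

    /* Rule 20.7: every parameter expansion parenthesized */
    #define NVGPU_ALIGN_UP(v, a)  (((v) + ((a) - 1U)) & ~((a) - 1U))

    /* 'type' is used as a type name here, so it cannot be
       parenthesized */
    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))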
JIRA NVGPU-841
Change-Id: I4b793981d671069289720e8c041bad8125961c0c
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929823
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gk20a_gr_isr() we currently post BPT events irrespective of
whether recovery was triggered.
But posting these events is not necessary when we've triggered
recovery; they are needed for debugger use cases where we don't
recover after hitting SM exceptions.
Fix this by skipping the gk20a_gr_post_bpt_events() call when recovery
was triggered.
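The fix in sketch form (the recovery flag's name is assumed):

    /* post BPT events only when no recovery was triggered */
    if (!need_reset) {
        gk20a_gr_post_bpt_events(g, tsg, global_esr);
    }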
Bug 200456343
Change-Id: I726d46228baccd6b298eefd5a27618d42bbbd494
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1922777
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For a long time now, the ALLOC_GPFIFO_EX channel IOCTL has done much
more than just gpfifo allocation, and its signature cannot express the
support that's needed soon. Add a new one called SETUP_BIND to
hopefully cover our future needs, and deprecate ALLOC_GPFIFO_EX.
Change nvgpu internals to match this new naming as well.
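A hedged sketch of a userspace call site for the new IOCTL (the ioctl
and struct names are assumed to follow the existing
NVGPU_IOCTL_CHANNEL_* conventions):

    struct nvgpu_channel_setup_bind_args args = {0};

    args.num_gpfifo_entries = 1024;
    err = ioctl(ch_fd, NVGPU_IOCTL_CHANNEL_SETUP_BIND, &args);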
Bug 200145225
Change-Id: I766f9283a064e140656f6004b2b766db70bd6cad
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1835186
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The chid member of the channel_gk20a struct was being used as an
unsigned value. By being declared as an int, it was causing MISRA 10.3
violations for implicit assignment between different types.
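The change in sketch form:

    /* in struct channel_gk20a */
    u32 chid;   /* was: int chid; the value is always non-negative */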
JIRA NVGPU-647
Change-Id: I7477fad6f0c837cf7ede1dba803158b1dda717af
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1918470
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 10.3 rule disallows implicit assignments between different
essential types. This adds casts to address some of these violations
in fifo_gk20a.
This also removes an unnecessary bar1 test in
gk20a_fifo_handle_sched_error() (rather than adding messy casting).
JIRA NVGPU-647
Change-Id: Ic8700459e47a59dc03e0149f6efb060efa4d4e42
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917635
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA 10.3 forbids assigning to an object a value of a narrower
essential type or of a different essential type category. This
addresses the file fifo_gk20a.c, where constants were in violation.
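Representative constant fixes (the surrounding code is hypothetical):

    /* before: u32 mask = 1 << i;  -- signed literal in unsigned math */
    u32 mask = 1U << i;

    /* before: timeout = -1;  -- signed constant assigned to a u32 */
    u32 timeout = U32_MAX;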
JIRA NVGPU-647
Change-Id: I0ecf9b0ce40de76464efbde9e9c9b6aa69d80ec0
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917630
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
struct gk20a from gk20a.h needs the definition of struct gk20a_ce_app,
and ce2_gk20a.h needs the definition of struct gk20a. This creates
a circular dependency.
Fix this by making gk20a_ce_app a pointer, to avoid needing the
complete type details, and by using forward declarations for struct
gk20a_ce_app and struct gk20a.
The gk20a_ce_app pointer is allocated in gk20a_init_ce_support()
and freed in gk20a_ce_destroy().
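A minimal sketch of the decoupling (other struct members omitted):

    /* in gk20a.h: a forward declaration replaces the header include */
    struct gk20a_ce_app;

    struct gk20a {
        struct gk20a_ce_app *ce_app;  /* was an embedded struct */
    };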
JIRA NVGPU-611
Change-Id: I4d62d5f2b2d1492db73bae69f90a1fe5586fba76
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917945
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new separate unit common/perf/cyclestats_snapshot.c, with a
corresponding header file include/nvgpu/cyclestats_snapshot.h.
This unit is h/w independent and simply calls the gops.perf.* HALs
exposed by the perf unit to do the h/w configuration.
Also remove the gv11b/css_gr_gv11b.* files, as the h/w specific
sequence implemented in them has already been moved to the perf unit.
Rename all cyclestats_snapshot HALs to the form nvgpu_css_*().
Jira NVGPU-1103
Change-Id: I303f6becb313ac918e06c495a5fe299947a1f0b1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1916652
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We right now acquire runlist_submit_mutex to submit a runlist and also
to wait for submit completion.
But locking is only needed to atomically configure the runlist submit
registers, hence move the locking inside
gk20a_fifo_runlist_hw_submit() where we program the registers.
Also convert the mutex to a spinlock at the same time.
Note that similar locking is not required for
tu104_fifo_runlist_hw_submit(), since the runlist submit registers
are per-runlist beginning with Turing.
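A hedged sketch of the narrowed critical section (the lock name and the
precomputed register values are assumed; the register offsets follow
the gk20a h/w header convention):

    static void gk20a_fifo_runlist_hw_submit(struct gk20a *g,
                                             u32 runlist_base,
                                             u32 runlist_submit)
    {
        struct fifo_gk20a *f = &g->fifo;

        /* only the two register writes must be atomic */
        nvgpu_spinlock_acquire(&f->runlist_submit_lock);
        gk20a_writel(g, fifo_runlist_base_r(), runlist_base);
        gk20a_writel(g, fifo_runlist_r(), runlist_submit);
        nvgpu_spinlock_release(&f->runlist_submit_lock);
    }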
Bug 200452543
Change-Id: I53d6179b80cb066466b64c6efa9393e55e381bfc
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1919058
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>