MISRA rule 10.1 mandates that the correct data types are used as
operands of operators. For example, only unsigned integers can be used
as operands of bitwise operators.
This patch fixes rule 10.1 violations for gv11b.
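As an illustration (not the actual gv11b diff; the value names are
made up), a typical rule 10.1 fix simply makes both operands of the
bitwise operator unsigned:

    /* Before: 0xff is a signed int operand of '&', violating 10.1. */
    static inline u32 field_before(u32 status)
    {
        return status & 0xff;
    }

    /* After: the U suffix makes both operands unsigned. */
    static inline u32 field_after(u32 status)
    {
        return status & 0xffU;
    }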
JIRA NVGPU-777
JIRA NVGPU-1006
Change-Id: Idda45b86c95b535154b4d4632fdc23a18950e380
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1971168
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Extract non-chip-specific code that manages the runlists (init, update,
reschedule etc.) to a new file in the common directory. Move the
declarations to a new matching runlist.h header.
Jira NVGPU-1309
Change-Id: I3c7e0032899516487037f47ddc9a7e7aa4b0b33a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any TSG-specific functions that do high-level software-centric
operations belong to the TSG unit and not the FIFO unit.
Move the below public functions, as well as their dependent
static functions, to common/fifo/tsg.c and also rename them to use the
prefix nvgpu_tsg_*:
gk20a_fifo_set_ctx_mmu_error_tsg
gk20a_fifo_abort_tsg
gk20a_fifo_error_tsg
gk20a_fifo_check_tsg_ctxsw_timeout
Jira NVGPU-1237
Change-Id: I4e3da821a878d4b4a0a0b53fbb7f4c10f135f58d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1934299
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We had to force allocation of physically contiguous memory for
USERD in the nvlink case, as a channel's USERD address is computed as
an offset from the fifo->userd address, and nvlink bypasses the SMMU.
With 4096 channels, it can become difficult to allocate 2MB of
physically contiguous sysmem for USERD on a busy system.
PBDMA does not require any sort of packing or contiguous USERD
allocation, as each channel has a direct pointer to that channel's
512B USERD region. When BAR1 is supported, we only need the GPU VAs
to be contiguous to set up the BAR1 inst block (a rough addressing
sketch follows the list below).
- Add slab allocator for USERD.
- Slabs are allocated in SYSMEM, using PAGE_SIZE for slab size.
- Contiguous channels share the same page (16 channels per slab).
- ch->userd_mem points to related nvgpu_mem descriptor
- ch->userd_offset is the offset from the beginning of the slab
- Pre-allocate GPU VAs for the whole BAR1
- Add g->ops.mm.bar1_map() method
- gk20a_mm_bar1_map() uses fixed mapping in BAR1 region
- vgpu_mm_bar1_map() passes the offset in TEGRA_VGPU_CMD_MAP_BAR1
- TEGRA_VGPU_CMD_MAP_BAR1 is called for each slab.
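A rough sketch of the per-channel addressing described above, assuming
nvgpu's usual struct names; f->userd_slabs and the function name are
illustrative rather than the exact symbols:

    static void userd_slab_addressing_sketch(struct fifo_gk20a *f,
                                             struct channel_gk20a *ch)
    {
        u32 userd_size = 512U;                    /* per-channel USERD */
        u32 chans_per_slab = (u32)PAGE_SIZE / userd_size;
        u32 slab = ch->chid / chans_per_slab;

        /* Each channel points at its slab's nvgpu_mem descriptor plus
         * its own offset inside the slab; no physical contiguity is
         * needed across slabs. */
        ch->userd_mem = &f->userd_slabs[slab];
        ch->userd_offset = (ch->chid % chans_per_slab) * userd_size;
    }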
Bug 2422486
Bug 200474793
Change-Id: I202699fe55a454c1fc6d969e7b6196a46256d704
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959032
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
preempt_channel needs to use the channel to pass it to other
public functions, get access to a tsg, etc. This qualifies it to take a
pointer to a channel as an input parameter instead of a chid.
Increment the channel ref counter using the function
gk20a_channel_from_id in functions where we get the chid from the h/w
registers directly. Once the preempt_channel function call is done,
use gk20a_channel_put on the referenced channel.
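At call sites where the chid comes straight from a h/w register, the
resulting pattern looks roughly like this sketch (the exact preempt
HAL signature may differ slightly):

    static void preempt_from_hw_chid_sketch(struct gk20a *g, u32 chid)
    {
        /* Take a reference on the channel for the duration of the
         * preempt, and drop it once preempt_channel is done. */
        struct channel_gk20a *ch = gk20a_channel_from_id(g, chid);

        if (ch != NULL) {
            g->ops.fifo.preempt_channel(g, ch);
            gk20a_channel_put(ch);
        }
    }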
Jira NVGPU-1461
Change-Id: I6c87c8104cfcb418d468c8c590087fd4aeabf4bd
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1963200
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fifo scheduling APIs require the HW reg mask accessor
fifo_sched_disable_runlist_m() to be used even from high-level logic.
Restructure the APIs to take in an explicit bitmap of runlist IDs and
translate the bitmap to units of fifo_sched_disable_runlist_m() (which
happens to be an identical bitmap) only just before accessing hardware.
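A minimal sketch of the new layering, assuming the accessor takes the
runlist index; the translation to register units happens only in a
low-level helper (name illustrative), right before the register access:

    static u32 runlists_mask_to_hw_sketch(u32 runlists_mask)
    {
        u32 reg = 0U;
        u32 i;

        for (i = 0U; i < 32U; i++) {
            if ((runlists_mask & BIT32(i)) != 0U) {
                reg |= fifo_sched_disable_runlist_m(i);
            }
        }
        /* For this register the result happens to be the same bitmap,
         * but high-level code no longer needs to know that. */
        return reg;
    }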
Jira NVGPU-1309
Change-Id: I5d6ce5b719ef467172c07c8d7589d83942365025
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1960225
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function gk20a_fifo_recover_tsg has to pass a valid struct tsg to
the other functions it calls. This qualifies it to take a pointer to
struct tsg_gk20a as an input parameter.
The TSG-specific parts of gk20a_fifo_preempt_timeout_rc are now moved
into a new function, gk20a_fifo_preempt_timeout_rc_tsg, which takes a
tsg as an input and passes it to gk20a_fifo_recover_tsg.
The pointer to a tsg is also used to enumerate channels from within.
The function gk20a_fifo_preempt_timeout_rc now contains only channel
specific code.
Jira NVGPU-1461
Change-Id: Ice0a9921567841fb5586a7e4e010c442ca6cf172
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1961675
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gv11b_fifo_preempt_tsg needs to access the runlist_id of the tsg as
well as pass the tsg pointer to other public functions such as
gk20a_fifo_disable_tsg_sched. This qualifies the preempt_tsg to use a
pointer to a struct tsg_gk20a instead of just using the tsgid.
Jira NVGPU-1461
Change-Id: I01fbd2370b5746c2a597a0351e0301b0f7d25175
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959068
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Replace tsgid with a pointer to a struct tsg_gk20a in the function
gk20a_fifo_tsg_abort(). gk20a_fifo_tsg_abort needs to enumerate
all the channels within the tsg as well as pass the tsg pointer to
other functions, which qualifies it to take a pointer as an input
parameter instead.
Jira NVGPU-1461
Change-Id: I59cec05d5d778f733d0c3e9ffadf46e74e249080
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1956567
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals to have same type of
operands when an arithmetic operation is performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
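For example (values made up), a typical fix is just the U suffix on
the literal:

    static inline u32 next_offset_before(u32 base)
    {
        return base + 4;    /* 10.4 violation: u32 + signed int */
    }

    static inline u32 next_offset_after(u32 base)
    {
        return base + 4U;   /* compliant: both operands unsigned */
    }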
JIRA NVGPU-992
Change-Id: I4f2d2b960b705690d5d23d2945816fd8f3f8fb75
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1831885
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 12.2 states that the right hand operand of a shift
operator shall lie in the range zero to one less than the width
in bits of the essential type of the left hand operand. This
patch will fix these violations by casting them to an appropriate
type or using the relevant BITxx() macros.
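An illustrative fix of the second kind mentioned above, using the
BITxx() helpers (the function name here is made up):

    /* Before: for bit >= 32 the shift exceeds the width of the u32
     * left operand, violating 12.2:
     *     return 1U << bit;
     */
    static inline u64 engine_bit_after(u32 bit)
    {
        return BIT64(bit);  /* U64(1) << bit, safe for bit < 64 */
    }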
JIRA NVGPU-666
Change-Id: I57b6081e9bd98c45ca9f7aa5f35e1d2d66ed0134
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945655
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the result of a bitwise operation is used as a
boolean in the controlling expression of if and loop statements.
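Typical shape of such a fix (register and mask names made up):

    static bool intr_is_pending_sketch(u32 intr, u32 mask)
    {
        /* Before: the result of '&' was used directly as a boolean,
         * e.g. if (intr & mask). */
        return (intr & mask) != 0U;
    }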
JIRA NVGPU-1020
Change-Id: I6a756ee1bbb45d43f424d2251eebbc26278db417
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1936334
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of a non-boolean variable as a
boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
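Typical shape of such a fix on a plain u32 variable (made-up example):

    static u32 drain_count_sketch(u32 count)
    {
        /* Before: while (count) { count--; } */
        while (count != 0U) {
            count--;
        }
        return count;
    }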
JIRA NVGPU-1022
Change-Id: I957f8ca1fa0eb00928c476960da1e6e420781c09
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1941002
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a fifo HAL for querying the doorbell token of a specific channel and
call it instead of doing the calculation directly. For Volta the token
is just the channel id plus the possible base number.
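For reference, a Volta-level implementation of that HAL reduces to
roughly the following; the base field name is illustrative, not the
exact nvgpu symbol:

    static u32 usermode_doorbell_token_sketch(struct channel_gk20a *ch)
    {
        struct gk20a *g = ch->g;

        /* Volta: token = channel id, offset by the chip's doorbell
         * base (zero when there is none). */
        return g->fifo.doorbell_token_base + ch->chid;
    }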
Bug 200145225
Change-Id: Ifbb150191575fdc72e413a14c799cab7e52d8c14
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1849639
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reduce the size of memory allocations in the channel debug dump by
capturing only the necessary values from the instance block. This also
simplifies the allocation path slightly, with the downside of having to
add a capture_channel_ram_dump HAL that explicitly reads the interesting
parts into the now smaller staging buffer beforehand.
Also rename struct ch_state to struct nvgpu_channel_dump_info.
Jira NVGPU-886
Change-Id: I5d7518d9d474b0b728b183383bc83d89ecf91b98
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1928207
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 18.7 doesn't allow flexible array members. To work around
that, modify the instance block member in struct ch_state to be an
explicit pointer and allocate it separately for simplicity.
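In effect (other fields elided, element type shown as u32 for
illustration):

    struct ch_state {
        /* ... other fields unchanged ... */

        /* Before: a flexible array member at the end of the struct,
         *     u32 inst_block[];
         * which violates MISRA 18.7. */
        u32 *inst_block;    /* explicit pointer, allocated separately */
    };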
Jira NVGPU-886
Change-Id: I34299bec79bf7706f9cdfa42dee7fba765c9f312
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1928205
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of function pointers & integer
types as booleans in the controlling expression of an if statement or
an iteration statement.
Fix violations where a function pointer or a function whose return
value is an integer, is used as a boolean in the controlling expression
of if and loop statements.
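Typical shape of the function-pointer case (the HAL picked here is only
for illustration):

    static void call_optional_hal_sketch(struct gk20a *g)
    {
        /* Before: if (g->ops.fifo.init_fifo_setup_hw) { ... } */
        if (g->ops.fifo.init_fifo_setup_hw != NULL) {
            g->ops.fifo.init_fifo_setup_hw(g);
        }
    }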
JIRA NVGPU-1021
Change-Id: Ic5336268394ba4396ce80744c25930d2fb44dc42
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932147
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 14.4 doesn't allow the usage of integer types as booleans
in the controlling expression of an if statement or an iteration
statement.
Fix violations where the integer variables err, ret, status are used
as booleans in the controlling expression of if and loop statements.
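Typical shape of the err/ret case (made-up example):

    static int check_err_sketch(int err)
    {
        /* Before: if (err) { return err; } */
        if (err != 0) {
            return err;
        }
        return 0;
    }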
JIRA NVGPU-1019
Change-Id: I8c9ad786a741b78293d0ebc4e1c33d4d0fc8f9b4
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921260
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For a long time now, the ALLOC_GPFIFO_EX channel IOCTL has done much
more than just gpfifo allocation, and its signature cannot accommodate
support that's needed soon. Add a new one called SETUP_BIND to hopefully
cover our future needs and deprecate ALLOC_GPFIFO_EX.
Change nvgpu internals to match this new naming as well.
Bug 200145225
Change-Id: I766f9283a064e140656f6004b2b766db70bd6cad
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1835186
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move implementation of MC HAL to common/mc. Also bump gk20a
implementation to gm20b.
gk20a_mc_boot_0 was used via a HAL, but we have only one possible
implementation. It also has to be called directly anyway to detect
which HALs to assign, so make it a true common function.
mc_gk20a_handle_intr_nonstall was also used only in os/linux/intr.c
so move it there.
JIRA NVGPU-954
Change-Id: I79aedc9158f90d578db0edc17b714617b52690ac
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1813519
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Since RC is triggered due to a fatal interrupt, always skip
waiting for runlist preemption to complete in the RC path.
Also poll PBDMA status to verify the TSG is no longer on the PBDMA
before doing engine reset.
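Roughly, the added check before the engine reset looks like the sketch
below; nvgpu_pbdma_tsg_is_resident() is a hypothetical predicate
standing in for the actual PBDMA status register reads, and the
timeout value is illustrative:

    static void wait_tsg_off_pbdma_sketch(struct gk20a *g,
                                          u32 pbdma_id, u32 tsgid)
    {
        struct nvgpu_timeout timeout;

        nvgpu_timeout_init(g, &timeout, 1000U, NVGPU_TIMER_CPU_TIMER);

        do {
            /* hypothetical helper: true while the TSG is still
             * resident on this PBDMA */
            if (!nvgpu_pbdma_tsg_is_resident(g, pbdma_id, tsgid)) {
                return;
            }
            nvgpu_usleep_range(20U, 50U);
        } while (nvgpu_timeout_expired(&timeout) == 0);
    }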
Bug 2322245
Change-Id: I3cce3a2c50b527a407ad967b6b9ac964c42f6df9
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1806317
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 15.6 requires that all loop bodies be enclosed in braces,
including single-statement loop bodies. This patch fixes the MISRA
violations due to single-statement loop bodies without braces by adding
them.
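For example (the log line is made up):

    static void log_engines_sketch(struct gk20a *g, u32 count)
    {
        u32 i;

        /* Before:
         *     for (i = 0U; i < count; i++)
         *         nvgpu_log_info(g, "engine %u", i);
         */
        for (i = 0U; i < count; i++) {
            nvgpu_log_info(g, "engine %u", i);
        }
    }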
JIRA NVGPU-989
Change-Id: If79f56f92b94d0114477b66a6f654ac16ee8ea27
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1791194
Reviewed-by: Adeel Raza <araza@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the current code, gk20a.h includes io.h, which gets directly included
in a lot of other files. io.h contains functions which use a struct
gk20a as a parameter, leading to a circular dependency between io.h
and gk20a.h. This can be mitigated by removing io.h from gk20a.h as
part of a larger effort to move gk20a.h to nvgpu/gk20a.h.
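Removing the include works because io.h only needs struct gk20a as an
opaque pointer type; a forward declaration is enough. A representative
sketch of what io.h can look like without pulling in gk20a.h:

    /* nvgpu/io.h: no #include of gk20a.h required. */
    struct gk20a;

    void nvgpu_writel(struct gk20a *g, u32 r, u32 v);
    u32 nvgpu_readl(struct gk20a *g, u32 r);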
JIRA NVGPU-597
Change-Id: I93e504fa9371b88152737b342a75580c65e8f712
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1787316
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to avoid circular dependencies, rearrange the static inline
functions from the gk20a.h file.
Moved the gk20a_gr_flush_channel_tlb function to gr_gk20a.c and removed
the #include of gr_gk20a.h from gk20a.h.
Added a helper header utils.h to hold all generic static inline
functions which have no reference to GPU-related structures.
Moved ptimer-related functions to ptimer.h.
Implementations for as and pmu are moved to the corresponding files.
JIRA NVGPU-624
Change-Id: I4e956326e773ba037bf3a1696cc4c462085dbbe5
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1781941
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- During teardown, issue runlist preempt.
- The preempt_ch_tsg HAL is removed as it is no longer required.
  This HAL was added to be called from teardown so that if there is a
  preempt timeout, preempt timeout recovery is not triggered.
Bug 200426402
Change-Id: I679e3306aa890ff0cfa211cfcc7d5405b7cb1211
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1775443
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Since the preempt timeout per pbdma/eng/runlist is set to
fifo_eng_timeout_us converted to ms, there could be a
scenario where preempt might time out. In case of a preempt
timeout, do not issue recovery for the si platform;
ctxsw timeout will trigger recovery if needed. For non-si
platforms, issue preempt timeout rc if preempt times out.
Bug 200426402
Change-Id: Ifd921280c0443ee9eda31157aaa03b481a529239
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1775441
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit 0b02c8589d.
The change was originally reverted as it was making the ap_compute
test on embedded-qnx-hv e3550-t194 fail. Fixes related to replacing tsg
preempt with runlist preempt during teardown, setting the preempt
timeout to 100 ms (earlier this was 1000 ms for t194 and 3000 ms for
legacy chips), and not issuing preempt timeout recovery if preempt
fails helped resolve the issue.
Bug 200426402
Change-Id: If9a68d028a155075444cc1bdf411057e3388d48e
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1762563
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Enable replayable faults only for contexts where they are requested.
This required moving the code to initialize subcontexts to happen
later.
Fix signedness issues in definition of flags.
JIRA NVGPU-714
Change-Id: I472004e13b1ea46c1bd202f9b12d2ce221b756f9
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1773262
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In some use cases, the client will disable and preempt a TSG and then
re-enable it using the IOCTLs provided.
In case there is only one context getting re-enabled and there is no
other job submission in parallel, the runlist fetcher will just sleep
until a doorbell is received next time.
This causes the above mentioned test cases to stall after re-enabling
the TSG until someone submits a new job and triggers a doorbell.
Fix this by explicitly triggering a doorbell from
gv11b_fifo_enable_tsg() after we enable all channels in the TSG.
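Conceptually, the fix looks like the sketch below; the doorbell helper
name is illustrative of the gv11b usermode doorbell write rather than
the exact symbol:

    static void gv11b_fifo_enable_tsg_sketch(struct tsg_gk20a *tsg)
    {
        struct gk20a *g = tsg->g;
        struct channel_gk20a *ch;
        struct channel_gk20a *last_ch = NULL;

        nvgpu_rwsem_down_read(&tsg->ch_list_lock);
        nvgpu_list_for_each_entry(ch, &tsg->ch_list,
                                  channel_gk20a, ch_entry) {
            g->ops.fifo.enable_channel(ch);
            last_ch = ch;
        }
        nvgpu_rwsem_up_read(&tsg->ch_list_lock);

        /* Kick the runlist fetcher so the re-enabled TSG is picked up
         * even when no new work is submitted (illustrative helper). */
        if (last_ch != NULL) {
            gv11b_ring_channel_doorbell(last_ch);
        }
    }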
Bug 2205192
Change-Id: I08e70e3d0f7e4dc6471e63809e246430cc4200c1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1772378
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>