The following functions are moved from gr_gk20a.c to the common gr_falcon.c:
gr_gk20a_disable_ctxsw -> nvgpu_gr_falcon_disable_ctxsw
gr_gk20a_enable_ctxsw -> nvgpu_gr_falcon_enable_ctxsw
gr_gk20a_halt_pipe -> nvgpu_gr_falcon_halt_pipe
Added a new gr falcon HAL to control ctxsw:
int gm20b_gr_falcon_ctrl_ctxsw(struct gk20a *g, u32 fecs_method,
u32 data, u32 *ret_val)
Parameters:
fecs_method: specified by a generic define provided in the gr_falcon.h
header.
data: input data (if any); set to zero if the method requires no data
input.
ret_val: pointer to expected output.
Added the following ops for gr falcon (a usage sketch follows the list):
int (*halt_pipe)(struct gk20a *g); -> moved from the gr ops
int (*disable_ctxsw)(struct gk20a *g);
int (*enable_ctxsw)(struct gk20a *g);
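For illustration only, a sketch of how an op can sit on top of the new
ctrl_ctxsw HAL. The method define, lock and refcount names below are
assumptions, not necessarily what this change introduces:

int nvgpu_gr_falcon_disable_ctxsw(struct gk20a *g)
{
    int err = 0;

    nvgpu_mutex_acquire(&g->ctxsw_disable_lock);
    g->ctxsw_disable_count++;
    if (g->ctxsw_disable_count == 1) {
        /* method needs no input data, so pass zero; no output either */
        err = g->ops.gr.falcon.ctrl_ctxsw(g,
                NVGPU_GR_FALCON_METHOD_CTXSW_STOP, 0U, NULL);
    }
    nvgpu_mutex_release(&g->ctxsw_disable_lock);

    return err;
}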
JIRA NVGPU-1881
Change-Id: Idb3b7355b5a0bd3b9bb01f9f424c5d607616f540
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081308
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit, common.gr.setup, that provides runtime setup interfaces
to units outside the GR unit and to OS-specific code.
Move the zcull setup call to this unit.
The new unit exposes nvgpu_gr_setup_bind_ctxsw_zcull() to set up zcull.
This API internally calls the common.gr.zcull API
nvgpu_gr_zcull_ctx_setup().
Add a new HAL g->ops.gr.setup.bind_ctxsw_zcull() and remove
g->ops.gr.zcull.bind_ctxsw_zcull().
Remove nvgpu_channel_gr_zcull_setup() from the channel unit.
Also remove the ctx/subctx header includes, since channel code no
longer needs to configure zcull.
Remove gm20b_gr_bind_ctxsw_zcull(), since binding is now done from
common code.
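A minimal sketch, with an assumed signature and simplified context
lookup, of how the new setup API forwards to the zcull unit:

int nvgpu_gr_setup_bind_ctxsw_zcull(struct gk20a *g, struct gr_gk20a *gr,
        struct channel_gk20a *c, u64 zcull_va, u32 mode)
{
    struct tsg_gk20a *tsg = tsg_gk20a_from_ch(c);

    if (tsg == NULL) {
        return -EINVAL;
    }
    /* binding is done in common code; no chip-specific zcull HAL */
    return nvgpu_gr_zcull_ctx_setup(g, c->subctx, &tsg->gr_ctx,
            zcull_va, mode);
}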
Jira NVGPU-1886
Change-Id: I6f04d19a8b8c003734702c5f6780a03ffc89b717
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2086602
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved enable/disable HALs from fifo to tsg:
- tsg.enable
- tsg.disable
gk20a_tsg_enable and gv11b_tsg_enable are moved to the HAL,
since they are chip-specific, even though they do not
directly access chip registers.
Removed vgpu_gv11b_tsg_enable as it was identical to
gv11b_tsg_enable.
Changed gv11b_fifo_locked_abort_runlist_active_tsgs and
gv11b_fifo_teardown_ch_tsg to use the tsg.enable HAL instead
of directly calling the gk20a_disable_tsg HAL implementation.
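The call-site change in those two paths, sketched (signatures assumed):

/* before: tied to one chip's implementation */
gk20a_disable_tsg(tsg);

/* after: routed through the per-chip op */
g->ops.tsg.disable(tsg);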
Jira NVGPU-2979
Change-Id: I721650c64dcf8cd158652e362292af45df43819f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083156
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On gp10b, ramfc contains information related to syncpoint
protection, which restricts the syncpoint increment operation
to a safe set of syncpoints. This information must be
updated when a syncpoint is assigned to a channel.
Added the following ramfc HALs:
- ramfc.get_syncpt
- ramfc.set_syncpt
And replaced
- fifo.resetup_ramfc
With
- channel.set_syncpt
Using the new ramfc HALs, move the resetup_ramfc implementation
from fifo to common channel code (a sketch follows the list):
- nvgpu_channel_set_syncpt
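A sketch of the common implementation, with an assumed sync-object
accessor and simplified error handling:

int nvgpu_channel_set_syncpt(struct channel_gk20a *ch)
{
    struct gk20a *g = ch->g;
    u32 old_syncpt = g->ops.ramfc.get_syncpt(ch);
    u32 new_syncpt = ch->sync->syncpt_id(ch->sync); /* accessor assumed */

    if ((new_syncpt != 0U) && (new_syncpt != old_syncpt)) {
        /* rewrite the protection info while the channel is idle */
        return g->ops.ramfc.set_syncpt(ch, new_syncpt);
    }

    return 0;
}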
NVGPU-1750
Change-Id: I036a0b7b2d9fd6ccd9f30094ae33e6c38a96e0cc
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075938
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
timeout_ms_max is renamed to ctxsw_timeout_max_ms.
timeout_debug_dump is renamed to ctxsw_timeout_debug_dump.
timeout_accumulated_ms is renamed to ctxsw_timeout_accumulated_ms.
timeout_gpfifo_get is renamed to ctxsw_timeout_gpfifo_get.
gk20a_channel_update_and_check_timeout is renamed to
nvgpu_channel_update_and_check_ctxsw_timeout.
JIRA NVGPU-1312
Change-Id: Ib5c8829c76df95817e9809e451e8c9671faba726
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2076847
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the nvgpu_tsg_set_error_notifier function to set error_notifier
for all channels of a tsg.
Add the nvgpu_tsg_timeout_debug_dump_state function to determine
whether timeout_debug_dump is set for any of the channels of a tsg.
Add nvgpu_tsg_set_timeout_accumulated_ms to set
timeout_accumulated_ms for all the channels of a tsg.
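All three helpers share the same per-channel iteration; a sketch, with
the list and lock names being assumptions:

void nvgpu_tsg_set_error_notifier(struct gk20a *g, struct tsg_gk20a *tsg,
        u32 error_notifier)
{
    struct channel_gk20a *ch;

    nvgpu_rwsem_down_read(&tsg->ch_list_lock);
    nvgpu_list_for_each_entry(ch, &tsg->ch_list, channel_gk20a, ch_entry) {
        if (gk20a_channel_get(ch) != NULL) {
            g->ops.fifo.set_error_notifier(ch, error_notifier);
            gk20a_channel_put(ch);
        }
    }
    nvgpu_rwsem_up_read(&tsg->ch_list_lock);
}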
JIRA NVGPU-1312
Change-Id: Ib2daf2d462c2cf767f5a6e6fd3436abf6860091d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077626
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move g->ops.fecs_trace.*() HAL operations under gr operations as
g->ops.gr.fecs_trace.*()
Also rename gk20a_ctxsw_*() functions used in common code to the
format nvgpu_gr_fecs_trace_*()
Jira NVGPU-1880
Change-Id: Idf2f8fb3d7ba2832bf1837fd97b70b3cee412123
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070767
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We have 3 header files for FECS tracing support:
include/nvgpu/gr/fecs_trace.h : common header
include/nvgpu/ctxsw_trace.h : header that includes both common and
os-specific functions
os/linux/ctxsw_trace.h : linux specific header
Remove the second header since it is not needed.
Move all structures that are needed in common code to
include/nvgpu/gr/fecs_trace.h
Move all function declarations that are needed in common code to
include/nvgpu/gr/fecs_trace.h
Move all linux specific declarations in os/linux/ctxsw_trace.h and
rename this file as os/linux/fecs_trace_linux.h
Also rename os/linux/ctxsw_trace.c to os/linux/fecs_trace_linux.c
Jira NVGPU-1880
Change-Id: I05cc4489c4b6a64880b7d59c02b22cd2244d5e22
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070766
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a fifo sub-unit to common.fifo to handle init/deinit code
and global support functions.
Split init into:
- nvgpu_channel_setup_sw
- nvgpu_tsg_setup_sw
- nvgpu_fifo_setup_sw
- nvgpu_runlist_setup_sw
- nvgpu_engine_setup_sw
- nvgpu_userd_setup_sw
- nvgpu_pbdma_setup_sw
Split de-init into:
- nvgpu_channel_cleanup_sw
- nvgpu_tsg_cleanup_sw
- nvgpu_fifo_cleanup_sw
- nvgpu_runlist_cleanup_sw
- nvgpu_engine_cleanup_sw
- nvgpu_userd_cleanup_sw
- nvgpu_pbdma_cleanup_sw
Added the following HALs:
- runlist.length_max
- fifo.init_pbdma_info
- fifo.userd_entry_size
The last two HALs should be moved to the pbdma and userd sub-units,
respectively, when those become available.
Added vgpu implementations of the above HALs:
- vgpu_runlist_length_max
- vgpu_userd_entry_size
- vgpu_channel_count
Use these HALs in vgpu_fifo_setup_sw.
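A sketch of how the split functions might compose; the ordering and
error unwinding here are illustrative, not taken from the change:

int nvgpu_fifo_setup_sw(struct gk20a *g)
{
    int err;

    err = nvgpu_channel_setup_sw(g);
    if (err != 0) {
        return err;
    }
    err = nvgpu_tsg_setup_sw(g);
    if (err != 0) {
        goto clean_up_channel;
    }
    /* runlist, engine, userd and pbdma setup follow the same pattern */
    return 0;

clean_up_channel:
    nvgpu_channel_cleanup_sw(g);
    return err;
}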
Jira NVGPU-1306
Change-Id: I954f56be724eee280d7b5f171b1790d33c810470
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029620
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gr_reset_mutex to engines_reset_mutex and acquire it
before initiating recovery. Recovery running in parallel with
engine reset is not recommended.
On hitting engine reset, h/w drops the ctxsw_status to INVALID in the
fifo_engine_status register. Also, while the engine is held in reset,
h/w passes busy/idle straight through. The fifo_engine_status registers
are correct in that there is no context switch outstanding,
as the CTXSW is aborted when reset is asserted.
Use deferred_reset_mutex to protect the deferred_reset_pending variable.
If deferred_reset_pending is true, then acquire engines_reset_mutex
and call gk20a_fifo_deferred_reset.
gk20a_fifo_deferred_reset also checks the value of
deferred_reset_pending before initiating the reset process.
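The locking order described above, sketched (field placement assumed):

nvgpu_mutex_acquire(&g->fifo.deferred_reset_mutex);
deferred = g->fifo.deferred_reset_pending;
nvgpu_mutex_release(&g->fifo.deferred_reset_mutex);

if (deferred) {
    nvgpu_mutex_acquire(&g->fifo.engines_reset_mutex);
    /* re-checks deferred_reset_pending under the mutex */
    gk20a_fifo_deferred_reset(g, ch);
    nvgpu_mutex_release(&g->fifo.engines_reset_mutex);
}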
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I47de669a6203e0b2e9a8237ec4e4747339b9837c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022373
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the below calls to the gr/fecs_trace unit:
gk20a_fecs_trace_bind_channel()
gk20a_fecs_trace_unbind_channel()
And rename them to:
nvgpu_gr_fecs_trace_bind_channel()
nvgpu_gr_fecs_trace_unbind_channel()
We do not access any fifo/ch/tsg construct in the gr/fecs_trace unit;
hence, update the parameter list of the above APIs to receive
inst_block, gr_ctx, and subctx pointers directly instead of a
channel_gk20a.
Delete the gk20a/fecs_trace_gk20a.* files since they are no longer
required. All their contents are now moved to the gr/fecs_trace unit.
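A sketch of the resulting prototype; the exact parameter types are
assumed from the description above:

int nvgpu_gr_fecs_trace_bind_channel(struct gk20a *g,
        struct nvgpu_mem *inst_block,
        struct nvgpu_gr_subctx *subctx,
        struct nvgpu_gr_ctx *gr_ctx);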
Jira NVGPU-1880
Change-Id: I7ef9f0b66781b45155035237172ae400f02740e4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any recovery that goes through the gk20a_fifo_recover path (e.g. gr
error or mmu fault), or any recovery that involves engine recovery as
well, will still produce the full debug dump. This change only avoids
the debug dump for force-reset channels and pbdma interrupts when they
do not involve engine recovery. For FIFO_ERROR_IDLE_TIMEOUT error
notifiers that involve tsg recovery only, the debug dump happens only
if timeout_debug_dump is set. timeout_debug_dump is set to true by
default but can be changed using NVGPU_IOCTL_CHANNEL_SET_TIMEOUT_EX.
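The resulting policy as pseudo-code; the condition names here are
illustrative only:

/* full dump only when engine recovery is involved, or when a
 * ctxsw-timeout-only tsg recovery asks for it */
bool dump = engine_recovery_involved ||
        (idle_timeout_notifier && timeout_debug_dump);

if (dump) {
    gk20a_debug_dump(g);
}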
Bug 2092051
Change-Id: Ibbf3cd2c44c586d9deb9e61ffbf37945b8d9e428
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033068
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch ensures that WARN and WARN_ON always return
void, and introduces a new nvgpu_do_assert construct to trigger the
equivalent of WARN_ON(true) so that the stack can be dumped (depending
on OS support).
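A sketch of the call-site pattern; the macro internals are OS-specific
and assumed here:

/* before: the return value of WARN_ON was silently discarded */
WARN_ON(count == 0U);

/* after: WARN and WARN_ON return void; an explicit assert is used
 * where a stack dump is wanted */
if (count == 0U) {
    nvgpu_do_assert();
}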
JIRA NVGPU-677
Change-Id: Ie2312c5588ceb5b1db825d15a096149b63b69af4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The file semaphore.c is now split into four units, namely
semaphore, semaphore_hw, semaphore_pool and semaphore_sea.
Each of the above units now has a separate compilation unit under
common/semaphore/. The public APIs corresponding to each unit are
present in include/nvgpu/semaphore.h. The dependency graph of the
units is as follows, where '->' indicates that the left side depends
on the right side:
semaphore -> semaphore_hw -> semaphore_pool -> semaphore_sea
Some of the other major changes made in this patch are as follows:
i) Renamed some of the functions.
ii) Some functions are changed from private to public.
iii) The public header for semaphore contains only declarations of the
corresponding structs, which are kept opaque.
iv) Constructed a private header to contain internal functions common
to all the units and struct definitions corresponding to each unit.
v) Added new functions to provide access to internal members of the
units (the opaque-struct pattern is sketched below).
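The pattern from iii)-v), sketched; the accessor and private header
names are assumptions:

/* include/nvgpu/semaphore.h: declaration only, struct stays opaque */
struct nvgpu_semaphore_pool;
u64 nvgpu_semaphore_pool_gpu_va(struct nvgpu_semaphore_pool *p);

/* common/semaphore/semaphore_priv.h: full definition, visible only
 * to the semaphore units */
struct nvgpu_semaphore_pool {
    /* ... internal members ... */
};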
Jira NVGPU-2076
Change-Id: I6f111647ba9a9a9f8ef9c658f316cd5d6276c703
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022782
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The type of the timeout parameter to the NVGPU_COND_WAIT and
NVGPU_COND_WAIT_INTERRUPTIBLE macros was too weakly specified. This
updates these macros to require a u32 for the timeout.
Users of the macros are updated to be compliant as necessary.
This addresses MISRA 10.3 violations for implicit conversions of types
of different size or essential type.
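Example of a compliant call site after the change; the names here are
illustrative:

u32 timeout_ms = 100U; /* must be u32, no implicit conversion */

err = NVGPU_COND_WAIT(&ch->notifier_wq, condition, timeout_ms);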
JIRA NVGPU-1008
Change-Id: I12368dfa81b137c35bd056668c1867f03a73b7aa
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017503
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split out ops that belong to channel unit to a new section called
channel. Channel is a broad concept; this includes just the code that
accesses channel registers (ccsr_*). This is effectively just renaming;
the implementation still stays put.
The word "channel" is also dropped from certain HAL entries to avoid
redundancy (e.g., channel.disable_channel -> channel.disable).
fifo.get_num_fifos gets an entirely new name: channel.count.
Jira NVGPU-1307
Change-Id: I9a08103e461bf3ddb743aa37ababee3e0c73c861
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A comment for gk20a_fifo_update_runlist() says:
/* add/remove a channel from runlist
special cases below: runlist->active_channels will NOT be changed.
(ch == NULL && !add) means remove all active channels from runlist.
(ch == NULL && add) means restore all active channels on runlist. */
Those special cases call for a new function, so add that. Delete the
update_runlist HAL op and add update_for_channel (like update_runlist
without the special cases) and reload (no channel to add or remove, just
the special cases).
While at it, rename gk20a_fifo_update_runlist_ids to
nvgpu_runlist_reload_ids. It's common across chips and does what the
reload HAL does but for a list of several IDs.
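A sketch of the resulting ops split (signatures assumed):

/* add or remove one channel; the NULL special cases are gone */
int (*update_for_channel)(struct gk20a *g, u32 runlist_id,
        struct channel_gk20a *ch, bool add, bool wait_for_finish);

/* remove-all or restore-all active channels, previously ch == NULL */
int (*reload)(struct gk20a *g, u32 runlist_id, bool add,
        bool wait_for_finish);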
Jira NVGPU-1922
Change-Id: I9a99ab03a636a1214c021faad359d2b304a9472f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit, common/gr/subctx.c, to manage GR subcontexts.
This unit provides interfaces to allocate/free/load a GR subcontext.
Add a new header file, include/nvgpu/gr/subctx.h, declaring all the
interfaces.
Right now, the channel_gk20a structure directly embeds an nvgpu_mem
for the context header.
Declare a new structure, nvgpu_gr_subctx, for the subcontext, and
include it from channel_gk20a.
Make all necessary changes to reference ctx_header from the subctx
instead of directly from the channel.
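A sketch of the new unit's surface; the signatures are assumed,
simplified from the description above:

struct nvgpu_gr_subctx {
    struct nvgpu_mem ctx_header; /* moved here from channel_gk20a */
};

struct nvgpu_gr_subctx *nvgpu_gr_subctx_alloc(struct gk20a *g,
        struct vm_gk20a *vm);
void nvgpu_gr_subctx_free(struct gk20a *g,
        struct nvgpu_gr_subctx *subctx, struct vm_gk20a *vm);
void nvgpu_gr_subctx_load_ctx_header(struct gk20a *g,
        struct nvgpu_gr_subctx *subctx,
        struct nvgpu_gr_ctx *gr_ctx, u64 gpu_va);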
Jira NVGPU-1613
Change-Id: I9eb1ee8f26fa88d2881f9b294935b65e9cbcc9b4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990129
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Remove handling for channels that are no longer bound to a tsg,
as a channel can be referenceable yet no longer part of a tsg
- Use tsg_gk20a_from_ch to get a pointer to the tsg for a given
channel (sketched below)
- Clear unhandled gr interrupts
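The lookup pattern in the handler, sketched:

struct tsg_gk20a *tsg = tsg_gk20a_from_ch(ch);

if (tsg == NULL) {
    /* referenceable channel that is no longer part of a tsg */
    return;
}
/* ... recover via the tsg ... */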
Bug 2429295
JIRA NVGPU-1580
Change-Id: I9da43a2bc9a0282c793b9f301eaf8e8604f91d70
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972492
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With a PREEMPT_RT kernel, regular spinlocks are mapped onto sleeping
spinlocks (rt_mutex locks), while raw spinlocks retain their behaviour.
A "scheduling while atomic" error can occur in
gk20a_channel_timeout_start, as it acquires the ch->timeout.lock raw
spinlock and then calls functions that acquire the ch->ch_timedout_lock
regular spinlock.
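The problematic nesting, sketched:

/* in gk20a_channel_timeout_start, on PREEMPT_RT: */
nvgpu_raw_spinlock_acquire(&ch->timeout.lock); /* truly atomic on RT */
/* ... which then calls a function doing: */
nvgpu_spinlock_acquire(&ch->ch_timedout_lock); /* rt_mutex: may sleep */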
Bug 200484795
Change-Id: Iacc63195d8ee6a2d571c998da1b4b5d396f49439
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2004100
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A naked channel ID does not carry good information about channel
validity and is a very low-level construct for an API at this level.
Refactor the runlist-updating fifo APIs to take a channel pointer.
While at it, delete the channel and wait_for_finish parameters from
gk20a_fifo_update_runlist_ids() - the only caller is suspend and
resume, and the parameters were always NULL for the channel and true
for wait.
Jira NVGPU-1309
Jira NVGPU-1737
Change-Id: Ied350bc8e482d8e311cc708ab0c7afdf315c61cc
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The nvgpu-managed syncpoint is not needed for anything if a channel
uses usermode submits; in that case the channel would allocate a
user-managed syncpoint and use that. Create the channel sync in
nvgpu_channel_setup_bind() only if usermode submit is not enabled.
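The resulting logic in nvgpu_channel_setup_bind(), sketched; the
constructor name and the flag are assumptions:

if (!c->usermode_submit_enabled) {
    /* kernel-mode submits still need the nvgpu-managed sync */
    c->sync = nvgpu_channel_sync_create(c, false);
    if (c->sync == NULL) {
        err = -ENOMEM;
        goto clean_up;
    }
}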
Bug 200466905
Change-Id: I976f4b4fd0c3131cb310c72b286329fb16f1f29a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990270
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The channel timeout briefly ends up in a strange state during timeout
handling; it can become stopped and started again, and the timeout
lock is released in the middle. Add a more explicit rewind function
that resets the timeout to its start if it is active. The active check
allows this to be used from gk20a_channel_timeout_restart_all_channels(),
so that is also modified.
Also replace the return statements with more readable control flow in
gk20a_channel_timeout_handler().
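A sketch of the rewind helper; the field names are assumed:

static void gk20a_channel_timeout_rewind(struct channel_gk20a *ch)
{
    nvgpu_raw_spinlock_acquire(&ch->timeout.lock);
    if (ch->timeout.running) {
        /* reset the running timeout to its start, with no
         * stop/start window and no lock release in between */
        ch->timeout.start_time = nvgpu_current_time_ms();
    }
    nvgpu_raw_spinlock_release(&ch->timeout.lock);
}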
Change-Id: Ia7d67242dfc149ace1f4f841a837e90b6c985308
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989327
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 10.1 prohibits using signed values with bitwise operators.
Make fifo invalid ID macros compliant with this MISRA rule.
Also use these macros in source code instead of hardcoded numbers to
make the code more readable.
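For illustration, a before/after of the macro shape; the exact
definitions are assumed:

/* before: signed literal with a bitwise operator (MISRA 10.1) */
#define FIFO_INVAL_CHANNEL_ID   ~0

/* after: unsigned operand with an explicit width */
#define FIFO_INVAL_CHANNEL_ID   ((u32)~0U)

/* and at call sites, the macro replaces a hardcoded number */
if (chid == FIFO_INVAL_CHANNEL_ID) {
    return NULL;
}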
JIRA NVGPU-1006
Change-Id: I2f336d1decbc53b08f93587f2e00ea2cce47f72b
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1983700
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Extract non-chip-specific code that manages the runlists (init, update,
reschedule etc.) to a new file in the common directory. Move the
declarations to a new matching runlist.h header.
Jira NVGPU-1309
Change-Id: I3c7e0032899516487037f47ddc9a7e7aa4b0b33a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a_fifo_recover_ch makes high-level calls and invokes
gk20a_fifo_recover. This function belongs to the channel unit, so it
is moved to the file channel.c and renamed
nvgpu_channel_recover.
Jira NVGPU-1237
Change-Id: I31890f85fdb2c42648cc063dd9c4e7e35930dcef
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970033
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any channel-specific function having high-level, software-centric
operations belongs to the channel unit and not the FIFO unit.
Move the below public functions, as well as their dependent
static functions, to common/fifo/channel.c. Also, rename the functions
to use the prefix nvgpu_channel_*.
gk20a_fifo_set_ctx_mmu_error_ch
gk20a_fifo_error_ch
gk20a_fifo_check_ch_ctxsw_timeout
Jira NVGPU-1237
Change-Id: Id6b6d69bbed193befbfc4c30ecda1b600d846199
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932358
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We had to force allocation of physically contiguous memory for
USERD in the nvlink case, as a channel's USERD address is computed as
an offset from the fifo->userd address, and nvlink bypasses the SMMU.
With 4096 channels, it can become difficult to allocate 2MB of
physically contiguous sysmem for USERD on a busy system.
PBDMA does not require any sort of packing or contiguous USERD
allocation, as each channel has a direct pointer to that channel's
512B USERD region. When BAR1 is supported, we only need the GPU VAs
to be contiguous, to set up the BAR1 inst block.
- Add slab allocator for USERD.
- Slabs are allocated in SYSMEM, using PAGE_SIZE for slab size.
- Contiguous channels share the same page (16 channels per slab).
- ch->userd_mem points to the related nvgpu_mem descriptor
- ch->userd_offset is the offset from the beginning of the slab
- Pre-allocate GPU VAs for the whole BAR1
- Add g->ops.mm.bar1_map() method
- gk20a_mm_bar1_map() uses fixed mapping in BAR1 region
- vgpu_mm_bar1_map() passes the offset in TEGRA_VGPU_CMD_MAP_BAR1
- TEGRA_VGPU_CMD_MAP_BAR1 is called for each slab.
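How a channel's USERD address falls out of the slab scheme, sketched
with assumed helper and field names:

/* at channel setup: pick the slab page and offset for this chid */
u32 slab_idx = ch->chid / f->num_channels_per_slab;
ch->userd_mem = &f->userd_slabs[slab_idx];
ch->userd_offset = (ch->chid % f->num_channels_per_slab) *
        g->ops.fifo.userd_entry_size(g);

/* the channel's USERD address; no global contiguity required */
u64 userd_addr = nvgpu_mem_get_addr(g, ch->userd_mem) +
        ch->userd_offset;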
Bug 2422486
Bug 200474793
Change-Id: I202699fe55a454c1fc6d969e7b6196a46256d704
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959032
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Add "U" at the end of integer literals, or cast operands
to the same type, when an arithmetic operation is
performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
preempt_channel needs the channel to pass it to other public
functions, to get access to a tsg, etc. This justifies taking a
pointer to the channel as an input parameter instead of a chid.
Increment the channel ref counter using the function
gk20a_channel_from_id in functions where we get the chid from the h/w
registers directly. Once the preempt_channel call is done,
use gk20a_channel_put on the referenced channel.
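The reference pattern at such call sites, sketched:

/* chid read straight from h/w; take a reference before use */
struct channel_gk20a *ch = gk20a_channel_from_id(g, chid);

if (ch != NULL) {
    g->ops.fifo.preempt_channel(g, ch); /* now takes the pointer */
    gk20a_channel_put(ch);
}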
Jira NVGPU-1461
Change-Id: I6c87c8104cfcb418d468c8c590087fd4aeabf4bd
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1963200
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Replace tsgid with a pointer to a struct tsg_gk20a in the function
gk20a_fifo_tsg_abort(). gk20a_fifo_tsg_abort needs to enumerate
all the channels within the tsg as well as pass the tsg pointer to
other functions, which justifies using a pointer as the
input parameter instead.
Jira NVGPU-1461
Change-Id: I59cec05d5d778f733d0c3e9ffadf46e74e249080
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1956567
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add gk20a_channel_from_id() to retrieve a channel, given a raw channel
ID, with a reference taken (or NULL if the channel was dead). This makes
it harder to mistakenly use a channel that's dead and thus uncovers bugs
sooner. Convert code to use the new lookup when applicable; work remains
to convert complex uses where a ref should have been taken but hasn't.
The channel ID is also validated against FIFO_INVAL_CHANNEL_ID; NULL is
returned for such IDs. This is often useful and does not hurt when
unnecessary.
However, this does not prevent the case where a channel is closed
and reopened again while someone holds a stale channel number. In
all such conditions the caller should already hold a reference.
The only conditions where a channel can be safely looked up by an id and
used without taking a ref are when initializing or deinitializing the
list of channels.
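The lookup itself is small; a simplified sketch:

struct channel_gk20a *gk20a_channel_from_id(struct gk20a *g, u32 chid)
{
    if (chid == FIFO_INVAL_CHANNEL_ID) {
        return NULL;
    }
    /* takes a reference; NULL if the channel is dead */
    return gk20a_channel_get(&g->fifo.channel[chid]);
}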
Jira NVGPU-1460
Change-Id: I0a30968d17c1e0784d315a676bbe69c03a73481c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
poll_timeouts and timeout_restart_all_channels should
only handle channels that have not been recovered/aborted.
Check the channel's ch_timedout status to make sure the
channel is still alive before use. A channel reference
can still be obtainable even if the channel has been
recovered but not yet closed.
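The added liveness check in the polling paths, sketched:

/* skip channels that have already been recovered/aborted */
nvgpu_spinlock_acquire(&ch->ch_timedout_lock);
ch_timedout = ch->ch_timedout;
nvgpu_spinlock_release(&ch->ch_timedout_lock);

if (!ch_timedout) {
    /* channel is still alive: check or restart its timeout */
}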
Bug 2404865
Change-Id: I016c8b9952ef1d4c349c2a2a2ca55cb81326d380
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929339
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>