Refactor user managed syncpoints out of the channel sync infrastructure
that deals with jobs submitted via the kernel api. The user syncpt only
needs to expose the id and gpu address of the reserved syncpoint. None
of the rest (fences, priv cmdbufs) is needed for that, so coupling it
with the user-allocated syncpts has not been ideal.
With user syncpts now provided by channel_user_syncpt, remove the
user_managed flag from the kernel sync api.
This allows all the kernel submit sync code to be conditionally
compiled in only when needed, and separates the user sync
functionality more clearly from the rest behind a minimal API.
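As a rough sketch (illustrative names only, not the actual nvgpu API),
the reduced user syncpt interface boils down to exposing just these two
values:

  #include <stdint.h>

  /* A user-managed syncpoint needs nothing beyond its hw id and the
   * GPU VA at which the syncpoint is mapped. */
  struct demo_channel_user_syncpt {
      uint32_t syncpt_id;     /* reserved hw syncpoint id           */
      uint64_t syncpt_gpu_va; /* GPU virtual address of the syncpt  */
  };

  static inline uint32_t demo_user_syncpt_get_id(
          const struct demo_channel_user_syncpt *s)
  {
      return s->syncpt_id;
  }

  static inline uint64_t demo_user_syncpt_get_address(
          const struct demo_channel_user_syncpt *s)
  {
      return s->syncpt_gpu_va;
  }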
[this is squashed with commit 5111caea601a (gpu: nvgpu: guard user
syncpt with nvhost config) from
https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325009]
Jira NVGPU-4548
Change-Id: I99259fc9cbd30bbd478ed86acffcce12768502d3
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321768
(cherry picked from commit 1095ad353f5f1cf7ca180d0701bc02a607404f5e)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2319629
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
If the os fence is the only kind that is supported, fail a submit if
the user wants fences but does not explicitly request sync fences
(i.e. expects syncpoints). Syncpoint support is advertised to
userspace in the gpu characteristics, so userspace already has the
knowledge to request the correct sync type.
Do this check at the ioctl level. The in-kernel stuff that needs submits
(cde, copyengine) can work without syncpoints and sync fences are used
only in userspace.
Also fail a submit if CONFIG_SYNC is not set and sync fences are
requested. Lack of kernel support does not guarantee that userspace
won't still wrongly request them.
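A rough sketch of the ioctl-level check (flag and parameter names are
illustrative, not the real nvgpu ones):

  #include <errno.h>
  #include <stdbool.h>

  #define DEMO_SUBMIT_FLAG_FENCE_GET  (1u << 0)
  #define DEMO_SUBMIT_FLAG_SYNC_FENCE (1u << 1)

  /* Return 0 if the requested fence type can be honoured. */
  static int demo_validate_fence_flags(unsigned int flags,
          bool syncpts_supported, bool sync_framework_built)
  {
      if ((flags & DEMO_SUBMIT_FLAG_FENCE_GET) == 0u) {
          return 0; /* no post fence requested */
      }
      /* Only os fences exist: the user must ask for a sync fence. */
      if (!syncpts_supported &&
          (flags & DEMO_SUBMIT_FLAG_SYNC_FENCE) == 0u) {
          return -EINVAL;
      }
      /* Sync fence requested but no CONFIG_SYNC-like support built in. */
      if ((flags & DEMO_SUBMIT_FLAG_SYNC_FENCE) != 0u &&
          !sync_framework_built) {
          return -EINVAL;
      }
      return 0;
  }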
Clarify the deferred cleanup requirements. The sync framework is needed
only for post sync fences, but deferred cleanup is still always needed
with semaphores because the internal tracking is done with dynamically
allocated (although small) objects.
Jira NVGPU-4548
Change-Id: I2e5a6554930cb413b2bb46ddfe388e41390bc7e4
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321715
(cherry picked from commit d870956170906eae1088846ec05266c859669771)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2318157
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reduce the number of branches and make the code flow more
straightforward by having two complete paths for the gpfifo entry
writes: one when job tracking is done and another when not. Although
this adds some very minor duplication (of the user gpfifo append call),
this way it is easier to read what happens to the job metadata, and
when we even have one.
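In outline, the two paths could look like this (heavily simplified
sketch with stub helpers, not the actual submit code):

  #include <stdbool.h>

  static int demo_job_alloc_and_sync(void) { return 0; } /* stub */
  static int demo_append_gpfifo_user(void) { return 0; } /* stub */
  static void demo_job_enqueue(void) { }                 /* stub */

  static int demo_submit(bool need_job_tracking)
  {
      int err;

      if (need_job_tracking) {
          err = demo_job_alloc_and_sync(); /* job metadata + sync cmds */
          if (err != 0) {
              return err;
          }
          err = demo_append_gpfifo_user(); /* user gpfifo entries */
          if (err != 0) {
              return err;
          }
          demo_job_enqueue();              /* track for deferred cleanup */
      } else {
          /* no job metadata exists at all on this path */
          err = demo_append_gpfifo_user(); /* user gpfifo entries */
          if (err != 0) {
              return err;
          }
      }
      return 0;
  }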
Jira NVGPU-4548
Change-Id: I6be8bc5afaf139e7c49d5e44837e04f642dd5721
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321761
(cherry picked from commit 9a3d3c8d556d563b9d67b370636791d6a1dd57ee)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2324253
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reduce complexity of the big gpfifo submit function by adding another
function to perform channel-global and driver-global sanity checks that
don't depend on submit parameters.
The nvgpu_channel_check_unserviceable() check was in the middle of the
submit function because there used to be a blocking wait just before it
when the hw gpfifo would be full. The blocking wait could exit with the
channel recovered from a timeout. Now it's ok to check this only once in
the beginning because the submit is non-blocking.
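A sketch of what the extracted check might look like (names and error
codes are illustrative):

  #include <errno.h>
  #include <stdbool.h>

  struct demo_channel {
      bool unserviceable; /* set when the channel has been recovered */
  };

  /* Channel-global and driver-global checks that do not depend on the
   * submit parameters; run once at the top of the submit path. */
  static int demo_submit_preconditions(const struct demo_channel *ch,
          bool driver_global_ok)
  {
      if (!driver_global_ok) { /* e.g. driver shutting down (illustrative) */
          return -ENODEV;
      }
      if (ch->unserviceable) { /* nvgpu_channel_check_unserviceable() */
          return -ETIMEDOUT;
      }
      return 0;
  }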
Jira NVGPU-4548
Change-Id: Idf19a560ca58a4f7da776c420dc9c6299cd7f7e7
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321760
(cherry picked from commit 5359a2180f13505f57c62b9f639344913716370a)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2324252
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Each submitted job has held a reference to the channel where the job
runs. This is not necessary: all that the refs do is prevent the channel
from getting freed before the jobs are done in case the channel file is
closed early. However, that is already taken care of, so remove the
per-job get/put pair.
The channel closure path needs to unbind the channel from its tsg if
that hasn't been done by the channel's user. Unbind gets the channel
off the runlist and forces all fences to expire, then enqueues the
channel for final job cleanup. No jobs can outlive this.
Also delete the extra get/put pair in job cleanup. The caller (either
the channel worker thread or the submit path in case of deterministic
channels) will always hold a reference.
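As a sketch, the close-path ordering that makes the per-job references
redundant (illustrative names):

  #include <stdbool.h>

  struct demo_channel { bool bound_to_tsg; };

  static void demo_tsg_unbind_channel(struct demo_channel *ch)
  {
      /* take the channel off the runlist, force its fences to expire,
       * and queue any remaining jobs for final cleanup */
      ch->bound_to_tsg = false;
  }

  static void demo_channel_close(struct demo_channel *ch)
  {
      if (ch->bound_to_tsg) {
          demo_tsg_unbind_channel(ch); /* if the user did not unbind */
      }
      /* freeing the channel is safe here: no job outlives the unbind */
  }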
Jira NVGPU-4548
Change-Id: I3a01759e1b2caf66c46cff19f6557645489ca8f4
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2322541
(cherry picked from commit 8af6260b8fcfd7bf393f50addb681b5353cbae38)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2324255
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The channel debug_dump hal function does not involve any
register-related code.
Move the gv11b_channel_debug_dump hal function to the common code
function nvgpu_channel_info_debug_dump.
Check the gpu hw version to limit the dump of instance variables
that differ between socs.
Add a new hal pointer syncpt_debug_dump for pbdma.
Jira NVGPU-5109
Signed-off-by: Vinod G <vinodg@nvidia.com>
Change-Id: Icfca837ce8e4117387cffa6fadf6c094c7da5946
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321016
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Enable build flags for dGPU in safety, when
NVGPU_FORCE_DGPU_SAFETY_PROFILE is set.
Use libnvgpu-dgpu_safe.exports for dGPU safety build.
Add build flags for tu104 HAL initialization (to solve
undefined symbols in safety build).
Temporarily add non-fusa files needed to build dGPU in safety; the
related functions will have to move to fusa files.
Jira NVGPU-4611
Change-Id: I41db0c039c7f15d9191cdb811b4906e779d5cc88
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2310276
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- "include/trace/events/gk20a.h" file was having GPL2 license
(which should not used for QNX code). This file was used for
compiling linux userspace driver("libnvgpu-drv.so") and was used for
unit testing on QNX.
- This patch removes stubs in "include/trace/events/gk20a.h" file.
(which were used for linux userspace driver.)
- For QNX driver, "nvgpu_rmos/trace/events/gk20a.h" was used.
This patch moves that file to "include/nvgpu/posix/trace_gk20a.h" and
does relevant license change. This same file will be used for linux
userspace driver.
- This patch also creates a new file "include/nvgpu/trace.h" which
selects proper trace file depending on the config.
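The selector header could look roughly like this (a sketch; the actual
condition is some build config, shown here with an illustrative macro):

  /* include/nvgpu/trace.h (sketch) */
  #ifndef NVGPU_TRACE_H
  #define NVGPU_TRACE_H

  #ifdef CONFIG_NVGPU_LINUX_KERNEL_BUILD   /* illustrative config switch */
  #include <trace/events/gk20a.h>          /* Linux kernel tracepoints   */
  #else
  #include <nvgpu/posix/trace_gk20a.h>     /* QNX / userspace variant    */
  #endif

  #endif /* NVGPU_TRACE_H */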
Bug 2802414
Change-Id: Icdfb251e5698073f986753a969e804161af3ecc5
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2286388
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update SW quiesce as follows:
- After waking up sw_quiesce_thread, nvgpu_sw_quiesce
masks interrupts, then disables and preempts runlists
without holding a lock. There could still be a concurrent
thread that re-enables the runlist by accident. This is
very unlikely and would mean we are not in mission mode
anyway.
- In sw_quiesce_thread, wait for NVGPU_SW_QUIESCE_TIMEOUT_MS
to leave some time for the interrupt handler to set the error
notifier (in case of an HW error interrupt). Then disable and
preempt runlists, and set the error notifier for the remaining
channels before exiting the process.
Also modified nvgpu_can_busy to return false in case
SW quiesce is pending. This will make subsequent
devctls fail.
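A minimal sketch of the nvgpu_can_busy change (state names are
illustrative):

  #include <stdbool.h>

  enum demo_quiesce_state {
      DEMO_QUIESCE_NONE,
      DEMO_QUIESCE_PENDING,
      DEMO_QUIESCE_DONE,
  };

  /* Once SW quiesce is pending, refuse new work so that subsequent
   * devctls fail instead of touching the hardware. */
  static bool demo_can_busy(enum demo_quiesce_state state)
  {
      return state == DEMO_QUIESCE_NONE;
  }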
Jira NVGPU-4512
Change-Id: I36dd554485f3b9b08f740f352f737ac4baa28746
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2266389
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch adds boundary value checks for common.fifo parameters as
listed below.
1. nvgpu_channel_setup_bind() includes a condition to check that the
value of num_gpfifo_entries does not exceed 2^31. Otherwise it prints
a message and returns an error.
2. nvgpu_tsg_bind_channel() includes a condition to check whether the
channel subctx has the ASYNC id. If so, the runqueue selector is set
to 1, and to 0 otherwise. This check is moved from devctl to
common.fifo.
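Sketches of the two checks (simplified, with illustrative names):

  #include <errno.h>
  #include <stdint.h>

  /* Check 1: reject gpfifo sizes above 2^31 entries. */
  static int demo_check_num_gpfifo_entries(uint32_t num_gpfifo_entries)
  {
      if (num_gpfifo_entries > (1u << 31)) {
          return -EINVAL;
      }
      return 0;
  }

  /* Check 2: channels bound to the ASYNC subcontext use runqueue 1. */
  static uint32_t demo_runqueue_selector(uint32_t subctx_id,
          uint32_t async_subctx_id)
  {
      return (subctx_id == async_subctx_id) ? 1u : 0u;
  }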
Jira NVGPU-4817
Change-Id: Id1c9253945859c245e584b5c42b3285a6b620055
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2278613
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_engine_is_valid_runlist_id already iterates the list of
active engines, therefore the engine_id is already known to
be valid.
Remove the call to nvgpu_engine_get_active_eng_info (which iterates
all engines) and fetch f->engine_info[engine_id] instead.
Also remove the non-NULL test for engine_info, since engine_info
cannot be NULL here.
Also make sure to reset num_engines in nvgpu_cleanup_sw, to avoid
accessing an uninitialized active_engines_list in unit test corner
cases (targeting init/remove support).
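In sketch form (illustrative types; only the lookup changes):

  /* Before: info = nvgpu_engine_get_active_eng_info(g, engine_id);
   *         if (info == NULL) { ... }   -- redundant re-iteration
   * After: index engine_info directly, since engine_id is valid. */

  struct demo_engine_info { unsigned int runlist_id; };
  struct demo_fifo { struct demo_engine_info engine_info[8]; };

  static unsigned int demo_engine_runlist_id(const struct demo_fifo *f,
          unsigned int engine_id)
  {
      /* engine_id was already validated against the active engine list */
      return f->engine_info[engine_id].runlist_id;
  }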
Jira NVGPU-4511
Change-Id: Ia6b904a7f3ca46e5097f06770b4caad317ec967b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2263618
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To achieve permanent fault coverage, the CTAs launched by
each kernel in the mission and redundant contexts must execute on
different hardware resources. This feature modifies, in software, the
virtual SM id to TPC mapping across the mission and redundant
contexts. The virtual SM identifier to TPC mapping is done by nvgpu
when setting up the patch context.
The recommendation for the redundant setting is to offset the
assignment by one TPC, and not by one GPC. This ensures both GPC and
TPC diversity. The SM and quadrant diversity will happen naturally.
For kernels with few CTAs, the diversity is guaranteed to be 100%. In
the case of completely random CTA allocation, e.g. a large number of
CTAs in the waiting queue, the diversity is 1 - 1/#SM, or 87.5% for
GV11B and 97.9% for TU104.
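A small worked sketch of the offset-by-one-TPC idea and the diversity
figures quoted above (the real mapping also accounts for the GPC/TPC
topology; 8 and 48 are the SM counts implied by the percentages):

  #include <stdio.h>

  /* Redundant context: shift the virtual SM id -> TPC assignment by one. */
  static unsigned int demo_redundant_tpc(unsigned int mission_tpc,
          unsigned int num_tpc)
  {
      return (mission_tpc + 1u) % num_tpc;
  }

  int main(void)
  {
      /* diversity for random CTA allocation: 1 - 1/#SM */
      printf("GV11B: %.1f%%\n", (1.0 - 1.0 / 8.0) * 100.0);  /* 87.5% */
      printf("TU104: %.1f%%\n", (1.0 - 1.0 / 48.0) * 100.0); /* 97.9% */
      return (demo_redundant_tpc(7u, 8u) == 0u) ? 0 : 1;     /* wraps */
  }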
Added the NvGpu CFLAG "CONFIG_NVGPU_SM_DIVERSITY" to enable/disable
the SM diversity support.
This support is only enabled in the gv11b and tu104 QNX non-safety
builds.
JIRA NVGPU-4685
Change-Id: I8e3eaa72d8cf7aff97f61e4c2abd10b2afe0fe8b
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_engine_get_gr_runlist_id gets the first instance of
active GR engine using nvgpu_engine_get_ids. Therefore the
engine_id is already known to be valid.
Remove the call to nvgpu_engine_get_active_eng_info (which iterates
all engines) and fetch f->engine_info[engine_id] instead.
Also remove the non-NULL test for engine_info, since engine_info
cannot be NULL here.
Jira NVGPU-4511
Change-Id: Ifcc0851e3d14d862e2ed7b21ea57f17a66eca9dd
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2263617
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
event_id_list and event_id_list_locks fields are only
needed in nvgpu_tsg when CONFIG_NVGPU_CHANNEL_TSG_CONTROL
is defined.
Conditionally compile those fields and the related code, so that
they are removed from the safety build.
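In sketch form (illustrative placeholder types in place of nvgpu's):

  struct demo_list_node { struct demo_list_node *next, *prev; };
  struct demo_mutex { int locked; };

  struct demo_tsg {
      unsigned int tsgid;
  #ifdef CONFIG_NVGPU_CHANNEL_TSG_CONTROL
      struct demo_list_node event_id_list;
      struct demo_mutex event_id_list_lock;
  #endif
  };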
Jira NVGPU-4376
Change-Id: I8678aa1b8cd4166aa37bcb42cda1eb9c703fd32f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2273261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the GPU fifo sw_ready flag is not reset after fifo
clean_up execution. This patch resets the g->fifo.sw_ready flag in
nvgpu_fifo_cleanup_sw_common() to indicate that the fifo attributes
are reset.
Also, the pbdma setup and cleanup functions are optional and may not
be populated. This patch modifies nvgpu_fifo_cleanup_sw_common() to
execute nvgpu_pbdma_cleanup_sw() only if pbdma.cleanup_sw is
populated.
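Roughly (illustrative structures):

  #include <stdbool.h>
  #include <stddef.h>

  struct demo_pbdma_ops { void (*cleanup_sw)(void); };
  struct demo_fifo { bool sw_ready; };

  static void demo_fifo_cleanup_sw_common(struct demo_fifo *f,
          const struct demo_pbdma_ops *pbdma)
  {
      /* ... release channel/tsg/engine/runlist sw state ... */
      if (pbdma->cleanup_sw != NULL) {
          pbdma->cleanup_sw(); /* optional hook, may be absent */
      }
      f->sw_ready = false;     /* fifo needs setup_sw again    */
  }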
Jira NVGPU-4339
Change-Id: I6fd53577afdd0a15c75f15b54a916e70e850d1b0
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2237809
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix bug where the CE mask includes other engine types besides just CEs
in nvgpu_ce_engine_interrupt_mask().
The intent of this API is to return a mask of CE interrupts. However,
the if clause in the for loop only excludes engine interrupts when
the CE stall or non-stall ISR is NULL, so it does not distinguish
between CE and GR engine interrupts when the CE ISRs are non-NULL.
Since the expectation is to not return CE interrupts if the ISRs are
NULL, just return a 0 mask if either ISR is NULL without having to
bother with the loop.
If the ISRs are set in the CE HAL, within the loop, only add interrupts
to the mask returned if the engine type is actually a CE engine (i.e. do
not include GR engine interrupts).
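A sketch of the fixed mask computation (simplified types and fields):

  #include <stdbool.h>
  #include <stdint.h>

  enum demo_engine_type { DEMO_ENGINE_GR, DEMO_ENGINE_CE };
  struct demo_engine { enum demo_engine_type type; uint32_t intr_mask; };

  static uint32_t demo_ce_interrupt_mask(const struct demo_engine *engines,
          unsigned int num_engines, bool stall_isr_set,
          bool nonstall_isr_set)
  {
      uint32_t mask = 0u;
      unsigned int i;

      /* no CE ISRs -> no CE interrupts to report, skip the loop */
      if (!stall_isr_set || !nonstall_isr_set) {
          return 0u;
      }
      for (i = 0u; i < num_engines; i++) {
          if (engines[i].type == DEMO_ENGINE_CE) {
              mask |= engines[i].intr_mask; /* CE engines only */
          }
      }
      return mask;
  }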
JIRA NVGPU-2224
Change-Id: Ic0048b00f16590fec50bb0858bd3f4498a00650d
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2256269
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Previously, unit interrupt enabling/disabling and the corresponding
MC level interrupt enabling/disabling were not done at the same time.
With this change, stall and nonstall interrupts for units are
programmed at the MC level along with the individual unit interrupts.
Access to the MC interrupt registers is kept under the mc.intr_lock
spinlock.
To do this, the CE and GR interrupt mask functions were separated.
mc.intr_enable is only used when there is a global interrupt control
to be set. Removed mc_gp10b.c as mc_gp10b_intr_enable is now removed.
Removed the following functions: mc_gv100_intr_enable,
mc_gv11b_intr_enable and intr_tu104_enable. Removed
intr_pmu_unit_config as we can use the generic unit interrupt control
function.
JIRA NVGPU-4336
Change-Id: Ibd296d4a60fda6ba930f18f518ee56ab3f9dacad
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2196178
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a.h will include gops_mc.h to contain the mc ops definitions. Add
doxygen comments for the HAL functions that are called directly.
Also move mc_gp10b_intr_pmu_unit_config to non-fusa HAL file.
JIRA NVGPU-2524
Change-Id: I4f326332d7842211b004b372d79fac9fe6ed40e7
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2226017
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
IRQs can get triggered during nvgpu power-on due to MMU faults,
invalid PRIV ring or bus accesses, etc. Handlers for those IRQs can't
access the full state related to the IRQ unless nvgpu is fully
powered on.
In order to let the IRQ handlers know about the nvgpu power-on state,
the gk20a.power_on_state variable has to be protected by a spinlock,
to avoid the deadlock that the earlier power_lock mutex could cause.
Further, the IRQs need to be disabled on the local CPU while updating
the power state variable, hence use the spin_lock_irqsave and
spin_unlock_irqrestore APIs to protect the access.
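A Linux-kernel-style sketch of the protected update (names are
illustrative; this only builds in a kernel context):

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(demo_power_state_lock);
  static unsigned int demo_power_on_state;

  /* Writers disable local IRQs so an IRQ handler on the same CPU cannot
   * race with (or deadlock against) the update. */
  static void demo_set_power_state(unsigned int state)
  {
      unsigned long flags;

      spin_lock_irqsave(&demo_power_state_lock, flags);
      demo_power_on_state = state;
      spin_unlock_irqrestore(&demo_power_state_lock, flags);
  }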
JIRA NVGPU-1592
Change-Id: If5d1b5e2617ad90a68faa56ff47f62bb3f0b232b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2203860
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This change eliminates MISRA Advisory Rule 18.4 violations in the
following cases:
* nvgpu_submit_append_gpfifo_user_direct()
* nvgpu_submit_append_gpfifo_common()
- use array-indexing to access gpfifo entry lists
* gv11b_gr_intr_record_sm_error_state()
- use array-indexing to access sm_error_states table
Advisory Rule 18.4 states that the +, -, +=, and -= operators should
not be applied to an expression of pointer type.
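A generic illustration of the rewrite pattern (not the actual nvgpu
code):

  #include <stdint.h>

  struct demo_gpfifo_entry { uint32_t entry0; uint32_t entry1; };

  /* Rule 18.4 violation:    *(gpfifo_base + offset) = value;
   * Compliant alternative:  use array indexing instead. */
  static void demo_write_entry(struct demo_gpfifo_entry *gpfifo_base,
          uint32_t offset, struct demo_gpfifo_entry value)
  {
      gpfifo_base[offset] = value;
  }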
JIRA NVGPU-3798
Change-Id: I736930e4ba09a88888b0ef48f62496c4082ea5a1
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2210173
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>