Replaced ch->mmu_debug_mode_enabled with ch->mmu_debug_mode_refcnt.
If debug mode is enabled multiple times by userspace for the channel,
the ref count is updated accordingly. Enable/disable calls are expected
to be balanced when setting the channel's mmu debug mode.
When unbinding the channel, decrease its refcnt until it reaches 0.
Also, removed tsg parameter from nvgpu_tsg_set_mmu_debug_mode as it
can be retrieved from ch.
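A rough standalone sketch of the balanced refcount pattern described
above (struct layout and helper names are simplified placeholders, not
the actual nvgpu code):

    #include <stdbool.h>

    struct tsg { int mmu_debug_mode_refcnt; };
    struct channel {
        struct tsg *tsg;
        int mmu_debug_mode_refcnt; /* replaces bool mmu_debug_mode_enabled */
    };

    static void set_mmu_debug_mode(struct channel *ch, bool enable)
    {
        struct tsg *tsg = ch->tsg; /* tsg taken from ch; no tsg parameter */

        if (enable) {
            ch->mmu_debug_mode_refcnt++;
            tsg->mmu_debug_mode_refcnt++;
        } else if (ch->mmu_debug_mode_refcnt > 0) {
            ch->mmu_debug_mode_refcnt--;
            tsg->mmu_debug_mode_refcnt--;
        }
    }

    /* On unbind, drop whatever the channel still holds until it hits 0. */
    static void channel_unbind(struct channel *ch)
    {
        while (ch->mmu_debug_mode_refcnt > 0)
            set_mmu_debug_mode(ch, false);
    }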
Bug 2515097
Bug 2713590
Change-Id: If334e374a55bd14ae219edbfd3b1fce5ff25c226
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2184702
(cherry picked from commit f422aee393)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2208772
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Kajetan Dutka <kdutka@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: Kajetan Dutka <kdutka@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Typically, the PMU init thread will finish up long
before the golden context image has been initialized,
which means that ELPG hasn't truly been enabled at that
point.
Create a new function, nvgpu_pmu_reenable_pg(), which
checks if elpg had been enabled (non-zero refcnt), and
if so, disables then re-enables it.
Call this function from gk20a_alloc_obj_ctx() after
the golden context image has been initialized to ensure
that elpg is truly enabled.
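A minimal sketch of what nvgpu_pmu_reenable_pg() does per this
description; the refcount field and the ELPG helpers below are
simplified stand-ins, not the real PMU interfaces:

    struct pmu { int elpg_refcnt; };

    static int elpg_disable(struct pmu *p) { (void)p; /* ... */ return 0; }
    static int elpg_enable(struct pmu *p)  { (void)p; /* ... */ return 0; }

    /* If ELPG had been enabled (non-zero refcount), cycle it off and back
     * on so it truly takes effect after golden ctx image init. */
    static int pmu_reenable_pg(struct pmu *pmu)
    {
        int err = 0;

        if (pmu->elpg_refcnt != 0) {
            err = elpg_disable(pmu);
            if (err == 0)
                err = elpg_enable(pmu);
        }
        return err;
    }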
Manually ported from dev-main
Bug 200543218
Change-Id: I0e7c4f64434c5e356829581950edce61cc88882a
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2245768
(cherry picked from commit 077b6712b5a40340ece818416002ac8431dc4138)
Reviewed-on: https://git-master.nvidia.com/r/2250091
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a_fecs_trace_poll() currently calls gk20a_fecs_trace_ring_read()
to read the trace ring buffer written by FECS.
gk20a_fecs_trace_ring_read() returns the number of trace entries written
to the local buffer if successful, otherwise it returns an error.
If there really is an invalid entry, gk20a_fecs_trace_poll() just stops
reading further entries, writes the current read pointer to h/w and
returns.
When gk20a_fecs_trace_poll() is called the next time, we again read that
invalid entry, again skip it, and again return.
This keeps happening, and we never move on to read new entries.
Fix this by always continuing to read the next entry, irrespective of
whether the current entry is valid or not.
gk20a_fecs_trace_poll() now just prints a debug message instead of
breaking out of the loop.
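A standalone sketch of the changed loop: skip an invalid entry with a
debug print and keep going, so the read pointer always makes progress
(types, sizes and the ring layout are placeholders):

    #include <stdio.h>

    #define RING_SIZE 64

    struct trace_entry { int valid; /* ... payload ... */ };

    static int trace_poll(struct trace_entry *ring, int read, int write)
    {
        while (read != write) {
            if (!ring[read].valid)
                printf("debug: invalid entry at %d, skipping\n", read);
            /* else: copy the entry out to the local buffer */
            read = (read + 1) % RING_SIZE;
        }
        return read; /* written back to h/w as the new read pointer */
    }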
Bug 200491708
Bug 200542611
Reviewed-on: https://git-master.nvidia.com/r/2020167
(cherry picked from commit decbbf3504)
Change-Id: If8b3c8af63ce662a41ada93a6986fa149e34f664
Signed-off-by: seshendra <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2190151
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- This patch fixes the enable/disable FECS trace logic.
- Added enable_lock and enable_count to handle multiple enables/disables
  of FECS trace (sketched below).
- If the user disables tracing twice, enable_count would become
  negative, and when the user tries to re-enable it, FECS trace would
  not be enabled.
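One plausible shape for the guarded counter, with pthread primitives
standing in for nvgpu's mutex; how the real fix keeps the count from
going negative may differ:

    #include <pthread.h>

    static pthread_mutex_t enable_lock = PTHREAD_MUTEX_INITIALIZER;
    static int enable_count;

    static void trace_enable(void)
    {
        pthread_mutex_lock(&enable_lock);
        if (enable_count++ == 0) {
            /* program h/w to start tracing */
        }
        pthread_mutex_unlock(&enable_lock);
    }

    static void trace_disable(void)
    {
        pthread_mutex_lock(&enable_lock);
        /* ignore unbalanced disables so the count never goes negative */
        if (enable_count > 0 && --enable_count == 0) {
            /* program h/w to stop tracing */
        }
        pthread_mutex_unlock(&enable_lock);
    }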
Bug 2672760
Bug 200542611
Change-Id: Ic7d4883b899f01dcf43058d0e7c9d1223a716c9b
Signed-off-by: seshendra <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2189371
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Winnie Hsu <whsu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- To enable FECS trace support, nvgpu should set the MSB
of the read pointer (MAILBOX1).
- The ucode will check if the feature is enabled/disabled
before writing a record into the circular buffer. If the
feature is disabled, it will not write the record.
- If the feature is enabled and the buffer is not allocated,
HW will throw a page fault error.
Bug 2459186
Bug 200542611
Change-Id: I6f181643737d1cf1bda02077eaa714a3f4ef3d8c
Signed-off-by: seshendra <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2189250
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The current code reads the pbdma_status info after clearing the
interrupt. Other interrupts or sleeps can delay the status read long
enough after the interrupt is cleared for the pbdma to switch the
channel, leading to an invalid channel/tsg ID. Correct that by reading
the pbdma_status info register before clearing the pbdma interrupt, so
the context information is read before the pbdma can switch out the
context.
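The ordering, as a toy sketch with a fake register array standing in
for the real pbdma register accessors:

    #include <stdint.h>

    static uint32_t regs[2]; /* fake register file for the sketch */
    #define PBDMA_STATUS 0
    #define PBDMA_INTR   1

    static uint32_t handle_pbdma_intr(uint32_t pending)
    {
        /* Read the context info first, while it still reflects the
         * channel/tsg that raised the interrupt... */
        uint32_t status = regs[PBDMA_STATUS];

        /* ...then clear the interrupt; the pbdma may switch out the
         * context any time after this write. */
        regs[PBDMA_INTR] = pending;

        return status;
    }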
Bug 200533450
Change-Id: Ic2f0682526e00d14ad58f0411472f34388183f2b
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2165047
(cherry-picked from 0ef96e4b1a
in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2188861
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For a long time now, the ALLOC_GPFIFO_EX channel IOCTL has done much
more than just gpfifo allocation, and its signature does not match the
support that's needed soon. Add a new one called SETUP_BIND to hopefully
cover our future needs, and deprecate ALLOC_GPFIFO_EX.
Change nvgpu internals to match this new naming as well.
Bug 200145225
Bug 200541476
Change-Id: I766f9283a064e140656f6004b2b766db70bd6cad
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1835186
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from e0c8a16c8d
in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2169882
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- In GV11B, read the fuse_status_opt_tpc_gpc register to determine
  which TPCs are floorswept.
- The driver will also read the sysfs node tpc_pg_mask.
- Based on these two values, "can_tpc_powergate" will be set to true or
  false, and the mask will be written to the fuse_ctrl_opt_tpc_gpc
  register to power-gate the TPC (see the sketch after this list).
- can_tpc_powergate = true indicates that the mask value sent from
  userspace is valid and can be used to power-gate the desired TPC.
- can_tpc_powergate = false indicates that the mask value sent from
  userspace is not valid and cannot be used to power-gate the desired
  TPC.
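One plausible validity check consistent with the description above; the
actual gv11b condition and register programming may differ:

    #include <stdbool.h>
    #include <stdint.h>

    static bool can_tpc_powergate(uint32_t fuse_status_opt_tpc_gpc,
                                  uint32_t tpc_pg_mask)
    {
        /* Reject a userspace mask that targets a TPC already floorswept
         * in the fuses; only a valid mask is written to
         * fuse_ctrl_opt_tpc_gpc. */
        return (fuse_status_opt_tpc_gpc & tpc_pg_mask) == 0;
    }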
Bug 200532639
Change-Id: Ib0806e4c96305a13b3574e8063ad8e16770aa7cd
Signed-off-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2159219
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In some cases we would hit a deadlock because two locks are involved:
the common clk driver's lock and the nvgpu driver's locks. In this bug,
an inconsistent lock ordering arises when one thread takes
"nvgpu lock -> clk lock" while the other thread takes
"clk lock -> nvgpu lock".
Solve the latter path by initializing the clk_parent entry once and
using the cached data afterward.
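A simplified sketch of the one-time initialization plus caching; the
function and field names are placeholders:

    #include <stddef.h>

    struct clk { struct clk *parent; };

    /* Stand-in for the common clk framework call that takes the clk lock. */
    static struct clk *clk_framework_get_parent(struct clk *c)
    {
        return c->parent;
    }

    static struct clk *cached_parent;

    static struct clk *nvgpu_clk_get_parent_cached(struct clk *c)
    {
        /* One-time initialization; afterwards the cached value is used,
         * so one of the two lock-ordering paths no longer occurs in the
         * steady state. */
        if (cached_parent == NULL)
            cached_parent = clk_framework_get_parent(c);
        return cached_parent;
    }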
Bug 2555115
Change-Id: I31c5c2728f406307e7cfd4e555f4db0c163234d8
Signed-off-by: Jeremy Ho <jeremyh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2146727
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aleksandr Frid <afrid@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gr_reset_mutex to engines_reset_mutex and acquire it
before initiating recovery. Recovery running in parallel with
engine reset is not recommended.
On hitting engine reset, h/w drops the ctxsw_status to INVALID in
fifo_engine_status register. Also while the engine is held in reset
h/w passes busy/idle straight through. fifo_engine_status registers
are correct in that there is no context switch outstanding
as the CTXSW is aborted when reset is asserted.
Use deferred_reset_mutex to protect the deferred_reset_pending variable.
If deferred_reset_pending is true, then acquire engines_reset_mutex and
call gk20a_fifo_deferred_reset. gk20a_fifo_deferred_reset also checks
the value of deferred_reset_pending before initiating the reset process.
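A standalone sketch of the locking described above, with pthread
mutexes standing in for nvgpu's mutex type:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t engines_reset_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t deferred_reset_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool deferred_reset_pending;

    static void fifo_deferred_reset(void)
    {
        bool pending;

        pthread_mutex_lock(&deferred_reset_mutex);
        pending = deferred_reset_pending;
        pthread_mutex_unlock(&deferred_reset_mutex);
        if (!pending)
            return;

        /* Hold engines_reset_mutex across the reset itself so recovery
         * cannot run in parallel with the engine reset. */
        pthread_mutex_lock(&engines_reset_mutex);
        /* ... reset engines, then clear deferred_reset_pending ... */
        pthread_mutex_unlock(&engines_reset_mutex);
    }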
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I47de669a6203e0b2e9a8237ec4e4747339b9837c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022373
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from cb91bf1e13
in dev-main)
Reviewed-on: https://git-master.nvidia.com/r/2024901
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Set gr.initialized to false at the beginning of gk20a_gr_reset() and
set it to true at the end of a successful execution of gk20a_gr_reset().
Use gk20a_gr_wait_initialized() in the cg/pg enable/disable functions to
make sure the engine is out of reset and initialized.
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: Ic7b0b71382c6d852a625c603dad8609c43b7f20f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from 7e2f124fd1 in
dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2111038
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
If FECS is sent the stop_ctxsw method, ELPG entry/exit cannot happen
and may time out. This can manifest as different error signatures
depending on when the stop_ctxsw FECS method gets sent with respect to
the PMU ELPG sequence; it could show up as a PMU halt, an abort, or
maybe an ext error too.
If ctxsw fails to disable, do not read the engine info and just abort
the TSG.
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I5f3ba07663bcafd3f0083d44c603420b0ccf6945
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2014914
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018156
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new power/clock gating functions that can be called by
other units.
New clock_gating functions will reside in cg.c under
common/power_features/cg unit.
New power gating functions will reside in pg.c under
common/power_features/pg unit.
Use nvgpu_pg_elpg_disable and nvgpu_pg_elpg_enable to disable/enable
elpg, and also use them in the gr_gk20a_elpg_protected macro to access
gr registers.
Add cg_pg_lock to make elpg_enabled, elcg_enabled, blcg_enabled
and slcg_enabled thread safe.
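The elpg-protected access pattern, as a minimal sketch; the helpers are
stand-ins for nvgpu_pg_elpg_disable()/nvgpu_pg_elpg_enable():

    static int pg_elpg_disable(void) { /* ... */ return 0; }
    static int pg_elpg_enable(void)  { /* ... */ return 0; }

    /* Disable ELPG, touch the gr register, then re-enable ELPG. */
    static unsigned int elpg_protected_read(unsigned int (*read_gr_reg)(void))
    {
        unsigned int val;

        pg_elpg_disable();
        val = read_gr_reg();
        pg_elpg_enable();

        return val;
    }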
JIRA NVGPU-2014
Change-Id: I00d124c2ee16242c9a3ef82e7620fbb7f1297aff
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025493
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from c905858565 in
dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2108406
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For handle_sched_error, change the err print to an info print when the
failing engine id is returned as -1, i.e. FIFO_INVAL_ENGINE_ID, since no
engine is found busy doing ctxsw. Maybe ctxsw already finished for the
context for which the ctxsw timeout intr was triggered.
Possible Causes:
a)
On hitting engine reset, h/w drops the ctxsw_status to INVALID in
fifo_engine_status register. Also while the engine is held in reset
h/w passes busy/idle straight through. fifo_engine_status registers
are correct in that there is no context switch outstanding
as the CTXSW is aborted when reset is asserted.
This is just a side effect of how gv100 and earlier versions of
ctxsw_timeout behave.
With gv10b and later, h/w snaps the context at the point of error
so that s/w can see the tsg_id which caused the HW timeout.
b)
If engines are not busy and the ctxsw state is valid, then the intr
occurred in the past; if the ctxsw state has moved on to VALID from LOAD
or SAVE, it means that whatever timed out eventually finished anyway.
The problem with this is that s/w cannot conclude which context caused
the problem, as more switches may have occurred before the intr was
handled.
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: Ia79bee6e860fb179ee39024c963671d4f8245227
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030866
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry-picked from d27f875d2c
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2076126
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any recovery that goes through the gk20a_fifo_recover path, e.g. a gr
error, an mmu fault, or any recovery that also involves engine recovery,
will still produce the full debug dump. This change just avoids the
debug dump for force-reset channels and pbdma intrs if they do not
involve engine recovery. For FIFO_ERROR_IDLE_TIMEOUT error notifiers
that involve tsg recovery only, the debug dump happens only if
timeout_debug_dump is set. timeout_debug_dump is set to true by default
but can be changed using NVGPU_IOCTL_CHANNEL_SET_TIMEOUT_EX.
Bug 2092051
Change-Id: Ibbf3cd2c44c586d9deb9e61ffbf37945b8d9e428
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033068
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 5222d0ff4f
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2076117
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Multiple threads could be unbinding different channels from
the same tsg at the same time. At the point where we
remove the channel from the tsg's channel list, call
disable_channel again, in case another thread had
re-enabled the channel after we had disabled it.
Bug 200404549
Change-Id: I9abbc08dc11fe1f7a0abada88376c0ef96b56610
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083337
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Satish Arora <satisha@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the has_timedout variable is protected by a wmb at the places
where it is set, but there is no corresponding rmb wherever it is read.
This is prone to errors under concurrent execution; this change fixes
that issue.
Rename the has_timedout variable of the channel struct to ch_timedout.
Also, to avoid an rmb every time ch_timedout is read, a
ch_timedout_spinlock is added to protect the ch_timedout variable and
take care of concurrent execution.
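A standalone sketch of the accessors implied above, with a pthread
spinlock standing in for nvgpu's spinlock type:

    #include <pthread.h>
    #include <stdbool.h>

    struct channel_state {
        pthread_spinlock_t ch_timedout_lock;
        bool ch_timedout; /* renamed from has_timedout */
    };

    static void channel_state_init(struct channel_state *ch)
    {
        pthread_spin_init(&ch->ch_timedout_lock, PTHREAD_PROCESS_PRIVATE);
        ch->ch_timedout = false;
    }

    static void channel_set_timedout(struct channel_state *ch, bool timedout)
    {
        pthread_spin_lock(&ch->ch_timedout_lock);
        ch->ch_timedout = timedout;
        pthread_spin_unlock(&ch->ch_timedout_lock);
    }

    static bool channel_check_timedout(struct channel_state *ch)
    {
        bool timedout;

        pthread_spin_lock(&ch->ch_timedout_lock);
        timedout = ch->ch_timedout;
        pthread_spin_unlock(&ch->ch_timedout_lock);
        return timedout;
    }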
Bug 2404865
Bug 2092051
Change-Id: I0bee9f50af0a48720aa8b54cbc3af97ef9f6df00
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1930935
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 1f54ea09e3
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2016975
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
preempt_channel needs to use the channel to pass it to other public
functions, get access to a tsg, etc. This qualifies it to take a pointer
to a channel as an input parameter instead of a chid.
Increment the channel ref counter using gk20a_channel_from_id in
functions where we get the chid directly from the h/w registers. Once
the preempt_channel call is done, use gk20a_channel_put on the
referenced channel.
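The get/preempt/put pattern, as a simplified standalone sketch; the
helper names and types are placeholders for gk20a_channel_from_id() and
gk20a_channel_put():

    #include <stddef.h>

    #define NUM_CHANNELS 512u

    struct channel { int refcnt; };

    static struct channel *channel_get(struct channel *channels,
                                       unsigned int chid)
    {
        if (chid >= NUM_CHANNELS)
            return NULL;
        channels[chid].refcnt++; /* reference taken for the caller */
        return &channels[chid];
    }

    static void channel_put(struct channel *ch)
    {
        ch->refcnt--;
    }

    static int do_preempt(struct channel *ch) { (void)ch; /* ... */ return 0; }

    /* chid read directly from h/w registers: take a ref, preempt, drop it. */
    static int preempt_chid_from_hw(struct channel *channels, unsigned int chid)
    {
        struct channel *ch = channel_get(channels, chid);
        int err = 0;

        if (ch != NULL) {
            err = do_preempt(ch);
            channel_put(ch);
        }
        return err;
    }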
Jira NVGPU-1461
Change-Id: I6c87c8104cfcb418d468c8c590087fd4aeabf4bd
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1963200
(cherry picked from commit 9abe9fe062
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2013728
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function gk20a_fifo_recover_tsg has to pass a valid struct tsg to
other functions from within. This qualifies it to take a pointer to
struct tsg_gk20a as an input parameter.
TSG-specific parts of gk20a_fifo_preempt_timeout_rc are now moved into
another function, gk20a_fifo_preempt_timeout_rc_tsg, that takes a tsg as
an input and passes it to gk20a_fifo_recover_tsg. The pointer to a tsg
is also used to enumerate channels from within. The function
gk20a_fifo_preempt_timeout_rc now contains only channel-specific code.
Jira NVGPU-1461
Change-Id: Ice0a9921567841fb5586a7e4e010c442ca6cf172
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1961675
(cherry picked from commit e19cea7ab3
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2013726
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add gk20a_channel_from_id() to retrieve a channel, given a raw channel
ID, with a reference taken (or NULL if the channel was dead). This makes
it harder to mistakenly use a channel that's dead and thus uncovers bugs
sooner. Convert code to use the new lookup when applicable; work remains
to convert complex uses where a ref should have been taken but hasn't.
The channel ID is also validated against FIFO_INVAL_CHANNEL_ID; NULL is
returned for such IDs. This is often useful and does not hurt when
unnecessary.
However, this does not prevent the case where a channel would be closed
and reopened again when someone would hold a stale channel number. In
all such conditions the caller should hold a reference already.
The only conditions where a channel can be safely looked up by an id and
used without taking a ref are when initializing or deinitializing the
list of channels.
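A rough sketch of the lookup semantics described above; the constants
and the "dead channel" test are simplified placeholders:

    #include <stdbool.h>
    #include <stddef.h>

    #define INVAL_CHANNEL_ID 0xffffffffu
    #define NUM_CHANNELS     512u

    struct channel { bool referenceable; int refcnt; };

    static struct channel *channel_from_id(struct channel *channels,
                                           unsigned int chid)
    {
        struct channel *ch;

        if (chid == INVAL_CHANNEL_ID || chid >= NUM_CHANNELS)
            return NULL;

        ch = &channels[chid];
        if (!ch->referenceable) /* channel is dead */
            return NULL;

        ch->refcnt++; /* reference taken for the caller */
        return ch;
    }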
Jira NVGPU-1460
Change-Id: I0a30968d17c1e0784d315a676bbe69c03a73481c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955400
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 7df3d58750
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/2008515
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The context switch timeout works by triggering a hardware timeout at
10 Hz. When handling these, we check whether a channel has actually
timed out. Currently the timeout limit can be shorter than the 10 Hz
interval, which always causes us to recover a channel but would also
cause progress to be detected if there was any in the interval.
Handling both situations at the same time would reuse the channel
pointer local to the function after a loop has finished and would cause
memory corruption. Fix this by making the two branches mutually
exclusive, and move the recovery case first because that's how our tests
assume things to work.
Jira NVGPU-967
Bug 2502074
Change-Id: I26aa0fa7fd80ab42a9a1a93a6cca2cd29c9d3f3f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932449
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
(cherry picked from commit 8ac9a53d816a3d012a6948a9a96ac6db699c662d
in dev-kernel)
Reviewed-on: https://git-master.nvidia.com/r/1997597
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Tested-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a reboot handler to make sure that nvgpu does not try to busy
the GPU if the system is going down. If the system is going down
then any number of subsystems nvgpu depends on may already have
been deinitialized.
Bug 200333709
Bug 200454316
Change-Id: I2ceaf7ca4fb88643310874b5b26937ef44c6e3dd
Signed-off-by: Kary Jin <karyj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1927018
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinayak Pane <vpane@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gops.fb.dump_vpr_wpr_info() accesses both VPR and WPR registers. Split
this into two different HALs, gops.fb.dump_vpr_info() and
gops.fb.dump_wpr_info().
Also unset the HALs accessing VPR registers on dGPUs; we don't support
VPR on dGPUs.
Remove the fb_mmu_vpr_info_r() register and all its accessors from the
dGPU headers.
Bug 2173122
Change-Id: I5b2712f8c5389e422a84c375a7e836add48bfd1c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850947
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We don't support big page size beginning with Pascal, so set the HAL
gops.fb.set_mmu_page_size() to NULL on all those platforms.
Also remove these accessors from the corresponding platforms:
fb_mmu_ctrl_use_pdb_big_page_size_v()
fb_mmu_ctrl_use_pdb_big_page_size_true_f()
fb_mmu_ctrl_use_pdb_big_page_size_false_f()
Bug 2173122
Change-Id: I7353412860a7a6f8a993ca9184a0dc3ca9d749af
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850946
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created struct nvgpu_sec2 to hold members related to SEC2-RTOS ucode
  support in the header file sec2.h.
- Created an nvgpu_sec2 variable under struct gk20a.
- Created the NVGPU_SUPPORT_SEC2_RTOS enable flag to enable SEC2 RTOS
  support.
- Defined the method nvgpu_init_sec2_support() to init SEC2 RTOS
  support by performing s/w setup like mutex init, sequence init &
  adding support for remove_support.
- Defined the method nvgpu_sec2_destroy() to deinit SEC2 RTOS support.
- Added nvgpu_init_sec2_support()/nvgpu_sec2_destroy() to the
  gk20a_finalize_poweron()/gk20a_prepare_poweroff() sequence based on
  the NVGPU_SUPPORT_SEC2_RTOS enable flag.
- Assigned g->sec2->flcn to point to g->sec2_flcn to access the falcon.
- Made Makefile changes to include sec2.c in the build.
JIRA NVGPUT-80
Change-Id: Icdc8c25994e305427ad465a5a20e9ce533759a9e
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1791955
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Nvgpu uses many ways to check if sync points are enabled. The four
ways used to be:
platform->has_syncpoints
g->has_syncpoints
nvgpu_is_enabled(g, NVGPU_HAS_SYNCPOINTS)
gk20a_platform_has_syncpoints()
This patch standardizes all usage to now be nvgpu_has_syncpoints()
which is based on gk20a_platform_has_syncpoints() - just renamed to
be general to nvgpu.
All usage of the other forms has now been consolidated. However, under
the hood nvgpu_has_syncpoints() does check the is_enabled flag. This
flag is now set where g->has_syncpoints used to be set, based on the
platform data.
The basic dependency chain is this:
nvgpu_has_syncpoints -> NVGPU_HAS_SYNCPOINTS ->
platform->has_syncpoints
However, note: there are several places where syncpoints can be
disabled if some other driver initialization fails (for ex. host1x).
Also note that nvgpu_has_syncpoints() also considers a disable
variable set by debugfs.
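A minimal sketch of the consolidated check; the flag storage below is a
simplified stand-in for nvgpu's enable-flags and debugfs state:

    #include <stdbool.h>

    struct gpu {
        bool has_syncpoints_flag;     /* set from platform->has_syncpoints */
        bool syncpt_disabled_debugfs; /* debugfs-controlled disable */
    };

    static bool nvgpu_has_syncpoints(struct gpu *g)
    {
        return g->has_syncpoints_flag && !g->syncpt_disabled_debugfs;
    }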
Bug 2327574
Change-Id: Ia2375a80f5f2e27285e6175568dd13e6bb25fd33
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803975
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- TPC powergating should be done before calling gk20a_enable_gr_hw.
  gk20a_enable_gr_hw() issues a GR engine reset. Without this fix,
  enabling 1 TPC from each PES causes a ctxsw timeout error while
  running the GFX benchmark.
- Adds valid tpc-pg masks for the 1/2/3/4 active TPC configs.
  TPC Config - TPC-MASK
  4 TPC configuration - 0x0
  3 TPC configuration - 0x1/0x2/0x4/0x8
  2 TPC configuration - 0x5/0x9/0x6/0xa
- We should not write to gr_fe_tpc_pesmask_r() as part of the TPC-PG
  sequence. This register is for debug purposes only.
Bug 200442360
Change-Id: I6fbe1ad8fbc836ace8cbaf00ec3d21a12c73e0bd
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1809772
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Defined FALCON_ID_GSPLITE for the GSP falcon.
- Created the variable gsp_flcn of struct nvgpu_falcon for the GSP
  falcon & registered it to the falcon module to access falcon
  functions.
- Created HAL files gsp_gv100.c/h for GSP.
- Modified the Makefile & Makefile.sources files to include the
  gsp_gv100 HAL file.
- Enabled GSP falcon support for GV100 by registering to the common
  falcon module.
- Defined the function gv100_gsp_reset() & assigned it to falcon reset
  as the GSP engine reset.
- Updated the falcon HAL init code not to return an error if the
  requested falcon is not supported; instead, log the info and return
  non-error.
JIRA NVGPU-1160
Change-Id: Ice032cf443ae87254375265628b3c022f41544cd
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1804551
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>