If the dmabuf attachment is not stored in the dmabuf priv data, then on
re-mapping the buffer a new attachment is prepared, and it is that new
attachment which gets detached once an existing mapping is found; the
prior attachment is never detached in nvgpu_vm_find_mapping. However,
mapped_buffer stores this most recent attachment, which has already been
detached, so the last unmap results in a NULL access. Hence, detach the
attachment stored in mapped_buffer in nvgpu_vm_find_mapping instead.
Bug 2834141
Change-Id: I0c3c0b1c684b4984dba4f39edb8610b94961291e
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2290313
Tested-by: Debarshi Dutta <ddutta@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- "include/trace/events/gk20a.h" file was having GPL2 license
(which should not used for QNX code). This file was used for
compiling linux userspace driver("libnvgpu-drv.so") and was used for
unit testing on QNX.
- This patch removes stubs in "include/trace/events/gk20a.h" file.
(which were used for linux userspace driver.)
- For QNX driver, "nvgpu_rmos/trace/events/gk20a.h" was used.
This patch moves that file to "include/nvgpu/posix/trace_gk20a.h" and
does relevant license change. This same file will be used for linux
userspace driver.
- This patch also creates a new file "include/nvgpu/trace.h" which
selects proper trace file depending on the config.
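A minimal sketch of what such a selector header could look like; the
include guard and the __KERNEL__ check are assumptions for illustration,
not the actual contents of include/nvgpu/trace.h:

  /* Hypothetical sketch of include/nvgpu/trace.h. */
  #ifndef NVGPU_TRACE_H
  #define NVGPU_TRACE_H

  #ifdef __KERNEL__
  /* Linux kernel build: use the kernel tracepoint definitions. */
  #include <trace/events/gk20a.h>
  #else
  /* QNX and Linux userspace builds: use the POSIX variant. */
  #include <nvgpu/posix/trace_gk20a.h>
  #endif

  #endif /* NVGPU_TRACE_H */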
Bug 2802414
Change-Id: Icdfb251e5698073f986753a969e804161af3ecc5
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2286388
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The Xavier chip product POR was updated to 20G only, and no further qual
work is happening for 16G, so we do not plan to support 16G. Now that a
single speed is left, remove the code added to support the nvlink speed
from the VBIOS, as it is redundant.
JIRA NVGPU-2964
Change-Id: Icd71ebb8271240818e36d40bf73c60f0c5beb6bf
Signed-off-by: tkudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2284175
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created a perf.h file and moved all private functions and structures
into it
- Created a single sw_setup/pmu_setup for the whole perf unit
- Changed public function and structure names to the standard format
- Deleted the lpwr-unit-specific file from the makefile as it is no
longer used
- Removed the support_vfe and support_changeseq flags as they are no
longer used
- Removed the clk_set_boot_fll_clks_per_clk_domain function as it is no
longer used for tu10a
- Removed the perf unit headers from the pmuif folder
NVGPU-4448
Change-Id: Ia29e5b5a1a960b5474a929d8797542bf6c0eccf1
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2283587
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The atomic counter in the interrupt handler can overflow and trigger
BUG(), which crashes the process. The equivalent functionality can be
implemented by simply setting an atomic variable at the start of the
handler and clearing it at the end. The wait can take longer if
interrupts keep arriving, but it will ultimately end. The wait path is
generally not time critical, so this should not be an issue. Also, fix
the unit tests for mc.
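A self-contained sketch of the described scheme, written with C11
atomics and made-up names rather than the actual nvgpu mc code:

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Set while an interrupt handler runs; cannot overflow like a counter. */
  static atomic_bool mc_intr_in_progress = false;

  static void mc_isr_sketch(void)
  {
      atomic_store(&mc_intr_in_progress, true);
      /* ... service the interrupt ... */
      atomic_store(&mc_intr_in_progress, false);
  }

  /* Hypothetical wait on the (non time-critical) teardown path. */
  static void mc_wait_for_deferred_interrupts_sketch(void)
  {
      /* May spin longer if interrupts keep arriving, but eventually ends. */
      while (atomic_load(&mc_intr_in_progress))
          ;
  }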
Change-Id: I9b8a236f72e057e89a969d2e98d4d3f9be81b379
Signed-off-by: shashank singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2247819
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
To achieve permanent fault coverage, the CTAs launched by each kernel in
the mission and redundant contexts must execute on different hardware
resources. This feature proposes software modifications to change the
virtual SM id to TPC mapping across the mission and redundant contexts.
The virtual SM identifier to TPC mapping is done by nvgpu when setting
up the patch context.
The recommendation for the redundant setting is to offset the assignment
by one TPC, not by one GPC. This ensures both GPC and TPC diversity; SM
and quadrant diversity will happen naturally. For kernels with few CTAs,
the diversity is guaranteed to be 100%. In the case of completely random
CTA allocation, e.g. a large number of CTAs in the waiting queue, the
diversity is 1 - 1/#SM, or 87.5% for GV11B and 97.9% for TU104.
Added the NvGpu CFLAG "CONFIG_NVGPU_SM_DIVERSITY" to enable/disable SM
diversity support.
This support is only enabled on gv11b and tu104 QNX non-safety builds.
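A small illustration of the offset-by-one-TPC idea; the mapping-table
layout and names below are assumptions for the sketch, not the actual
patch-context code:

  /* Rotate the mission context's virtual SM id -> TPC assignment by one TPC. */
  static unsigned int redundant_tpc_for_vsm(const unsigned int *mission_vsm_to_tpc,
                                            unsigned int vsm_id,
                                            unsigned int num_tpc)
  {
      /* Offsetting by one TPC (not one GPC) gives both GPC and TPC diversity. */
      return (mission_vsm_to_tpc[vsm_id] + 1U) % num_tpc;
  }

For reference, the quoted figures follow from the formula above:
1 - 1/8 = 87.5% and 1 - 1/48 = 97.9%, i.e. 8 SMs on GV11B and 48 SMs on
TU104.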
JIRA NVGPU-4685
Change-Id: I8e3eaa72d8cf7aff97f61e4c2abd10b2afe0fe8b
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The event_id_list and event_id_list_locks fields are only needed in
nvgpu_tsg when CONFIG_NVGPU_CHANNEL_TSG_CONTROL is defined.
Conditionally compile those fields and the related code so that they are
removed from the safety build.
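A sketch of the conditional compilation; the field types (and the exact
lock field name) are assumed for illustration:

  struct nvgpu_tsg {
      /* ... unconditional fields ... */
  #ifdef CONFIG_NVGPU_CHANNEL_TSG_CONTROL
      struct nvgpu_list_node event_id_list;   /* type assumed */
      struct nvgpu_mutex event_id_list_lock;  /* type/name assumed */
  #endif
  };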
Jira NVGPU-4376
Change-Id: I8678aa1b8cd4166aa37bcb42cda1eb9c703fd32f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2273261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Advisory Directive 4.5 states that identifiers in the same
name space with overlapping visibility should be typographically
unambiguous.
The presence of both the roundup(x,y) and round_up(x,y) macros in
the posix utils.h header incurs a violation of this rule.
These macros were added to keep in sync with the linux kernel variants.
However, there is a key distinction between how these two macros
work in the linux kernel; roundup(x,y) can handle any y alignment while
round_up(x,y) is intended to work only when y is a power-of-two.
Passing a non-power-of-two alignment to round_up(x,y) results in an
incorrect value being returned (silently).
Because all current uses of roundup(x,y) and round_up(x,y) in
nvgpu specify a y value that is a power-of-two and the underlying
posix macro implementations assume as much, it is best to remove
roundup(x,y) from nvgpu altogether to avoid any confusion.
So this change converts all uses of roundup(x,y) to round_up(x,y).
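For reference, simplified forms of the two kernel-style macros and the
silent failure mode with a non-power-of-two alignment (definitions
condensed here for illustration):

  /* Handles any alignment y. */
  #define roundup(x, y)   ((((x) + ((y) - 1)) / (y)) * (y))
  /* Correct only when y is a power of two. */
  #define round_up(x, y)  ((((x) - 1) | ((y) - 1)) + 1)

  /*
   * roundup(10, 6)  -> ((10 + 5) / 6) * 6 = 12        (correct)
   * round_up(10, 6) -> ((10 - 1) | (6 - 1)) + 1 = 14  (silently wrong)
   * round_up(10, 8) -> ((10 - 1) | (8 - 1)) + 1 = 16  (correct, power-of-two y)
   */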
Jira NVGPU-3178
Change-Id: I0ee974d3e088fa704e251a38f6b7ada5a7600aec
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2271385
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created ucode_perf_vfe_inf.h and moved all VFE interface structs and
macros into this header
- Created nvgpu_clk_fll_get_fmargin_idx to get the freq margin index
- Created nvgpu_vfe_var_get_s_param to read s_param
- Removed macros and header includes that are no longer needed
NVGPU-4448
Change-Id: I89f946d555bcbc7823665d2a5a761049f7a5e963
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2260150
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 11.2 doesn't allow conversions of a pointer from or to an
incomplete type. These types of conversions may result in an incorrectly
aligned pointer and may further result in undefined behavior.
This patch addresses rule 11.2 violations related to pointers to and
from struct nvgpu_sgl by replacing struct nvgpu_sgl pointers with void
pointers.
Jira NVGPU-3736
Change-Id: I8fd5766eacace596f2761b308bce79f22f2cb207
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2267876
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The disable_syncpoints debugfs knob allows the user to disable syncpt
support at runtime. This knob was incorrectly defined as a u32. Convert
it into a boolean variable.
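A sketch of what the boolean knob creation might look like; the variable
and directory names are assumptions:

  #include <linux/debugfs.h>

  /* Hypothetical: expose the flag as a boolean debugfs file instead of a u32. */
  static bool disable_syncpoints;

  static void gk20a_debugfs_init_sketch(struct dentry *gpu_root)
  {
      debugfs_create_bool("disable_syncpoints", 0644, gpu_root,
                          &disable_syncpoints);
  }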
JIRA NVGPU-3873
Change-Id: If1cfe07fa7b795c0d1b507395bd6e4fa547e3615
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2262193
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To achieve permanent fault coverage, the CTAs launched by each kernel in
the mission and redundant contexts must execute on different hardware
resources.
This feature requires a software change to make it possible to modify
the virtual SM id to TPC mapping across the mission and redundant
contexts.
This CL only adds the SM diversity flags, which are exposed to clients
through the ioctl/devctl interfaces. The actual virtual SM id to TPC
mapping implementation will be part of upcoming patch sets.
Added the NvGpu CFLAG "CONFIG_NVGPU_BUILD_CONFIGURATION_IS_SAFETY" to
identify the safety build.
JIRA NVGPU-4133
Change-Id: I5a18256780e6726e399e39c1c8d155d2ef07d7bd
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2250461
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Previously, unit interrupt enabling/disabling and the corresponding MC
level interrupt enabling/disabling were not done at the same time.
With this change, the stall and nonstall interrupts for units are
programmed at the MC level along with the individual unit interrupts.
Access to the MC interrupt registers is kept under the mc.intr_lock
spinlock. To do this, the CE and GR interrupt mask functions were
separated.
mc.intr_enable is now only used when there is a global interrupt control
to be set. Removed mc_gp10b.c since mc_gp10b_intr_enable is now gone.
Also removed the following functions: mc_gv100_intr_enable,
mc_gv11b_intr_enable and intr_tu104_enable. Removed intr_pmu_unit_config
as the generic unit interrupt control function can be used instead.
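A sketch of the locking pattern described above; only mc.intr_lock comes
from this commit, the helper shape and register programming are
placeholders:

  #include <nvgpu/gk20a.h>
  #include <nvgpu/lock.h>

  /* Hypothetical shape of a unit interrupt enable path. */
  static void mc_intr_unit_enable_sketch(struct gk20a *g)
  {
      nvgpu_spinlock_acquire(&g->mc.intr_lock);
      /*
       * Program the unit's stall and nonstall bits in the MC interrupt
       * enable registers together with the unit-level interrupt enable.
       */
      nvgpu_spinlock_release(&g->mc.intr_lock);
  }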
JIRA NVGPU-4336
Change-Id: Ibd296d4a60fda6ba930f18f518ee56ab3f9dacad
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2196178
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The pmu init thread typically returns immediately
without calling nvgpu_thread_should_stop().
pmu_pg_kill_task() checks if the thread is running, and
if it is, calls nvgpu_thread_stop().
However, there's a race condition where the init thread could
have exited between the time that kill_task() checked the
running flag and the time we actually stop the thread, leading
to a kernel crash.
Fix this by making the running flag in the nvgpu_thread struct
atomic. Both the thread proxy function and the thread_stop()
function will set the flag to false.
In the case of nvgpu_thread_proxy(), if the flag is already false,
then nvgpu_thread_stop() has already reset it, at which point we
just wait for nvgpu_thread_should_stop() to return true.
In the case of nvgpu_thread_stop(), if the flag is already false,
then the thread proxy function has already exited, and there is
nothing more to do.
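A self-contained sketch of the handshake described above, using C11
atomics and simplified names rather than the actual nvgpu_thread
implementation:

  #include <stdatomic.h>
  #include <stdbool.h>

  struct thread_flags {
      atomic_bool running;      /* now atomic, per this change */
      atomic_bool should_stop;
  };

  /* Called by the thread proxy when the worker function returns. */
  static void proxy_on_exit(struct thread_flags *t)
  {
      if (!atomic_exchange(&t->running, false)) {
          /* stop() already cleared the flag: wait until it asks us to stop. */
          while (!atomic_load(&t->should_stop))
              ;
      }
  }

  /* Called by the stopper (e.g. pmu_pg_kill_task). */
  static void thread_stop(struct thread_flags *t)
  {
      if (!atomic_exchange(&t->running, false))
          return; /* thread already exited on its own; nothing more to do */
      atomic_store(&t->should_stop, true);
      /* ... join the thread ... */
  }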
Bug 2591298
Change-Id: I9ba6b63c30a5c3e1df11e790094836b44373122b
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2230358
GVS: Gerrit_Virtual_Submit
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
tegra_get_chip_id is going to be deprecated soon, so this patch removes
calls to it. The Chip-ID is already read via DT, and the call to
tegra_get_chip_id() can be avoided by adding metadata to store the
Chip-ID in struct gk20a_platform.
Bug 200524194
Bug 200551105
Change-Id: I5f9f5abf679cf9afe98840e20144d76eb0238426
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2236311
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add support in nvgpu to parse and get the freq cap from DT. The patch
does the following (a sketch of the DT read follows the list):
- Parses the DT and gets the freq cap value during probe.
- During clk_arb init, compares this with P0.Max and takes the lower
value.
- Sends a change_seq with the new value and sets the dGPU freq.
- Uses the lower value for "get points", "get default" and "set VF".
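A sketch of the probe-time DT read; the property name and the output
field are assumptions for illustration:

  #include <linux/of.h>
  #include <linux/types.h>

  /* Hypothetical: read an optional frequency cap from the device tree node. */
  static void parse_freq_cap_sketch(struct device_node *np, u32 *freq_cap)
  {
      u32 val = 0;

      /* Property name is illustrative only. */
      if (of_property_read_u32(np, "nvidia,gpu-freq-cap", &val) == 0)
          *freq_cap = val;
  }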
Bug 200556366
Change-Id: Ie10243f9bf83cb5ae07ebcc4cdc8efaffa56c309
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2204644
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a_pm_runtime_suspend can fail and then invoke
gk20a_pm_finalize_poweron, which can cause the usermode mmap region to
be mapped twice via io_remap_pfn_range(). Avoid this by using a boolean
variable to track whether the region is already mapped.
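A sketch of the guard; the tracking flag and the surrounding mmap code
are assumed for illustration:

  #include <linux/mm.h>

  /* Hypothetical: map the usermode region only once across resume paths. */
  static int map_usermode_region_once(struct vm_area_struct *vma,
                                      unsigned long pfn, unsigned long size,
                                      bool *usermode_mapped)
  {
      int err = 0;

      if (!*usermode_mapped) {
          err = io_remap_pfn_range(vma, vma->vm_start, pfn, size,
                                   vma->vm_page_prot);
          if (err == 0)
              *usermode_mapped = true;
      }
      return err;
  }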
Bug 2707416
Change-Id: I4d8cbe427400a5b986348a19af145367cc08ffc6
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2229312
GVS: Gerrit_Virtual_Submit
Reviewed-by: Dinesh T <dt@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The devinit executes in parallel with PCIE link training
to reduce exit latency. Therefore, all PCIE settings that
normally occur during devinit after the PCIE link is up are
deferred until nvgpu has resumed control.
Bug 2661545
Change-Id: Ifdd4f645b2e1791d93567cc34d6ab0691a25d101
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2210625
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move graphics-related defs and functions under the CONFIG_NVGPU_GRAPHICS
switch.
Move classes not supported on GV11B under the CONFIG_NVGPU_NON_FUSA
switch.
Add missing valid class numbers to the gpu_class.is_valid HAL.
Also remove unused class defs from the class.h header.
A lot of QNX safety tests still use the 3D graphics class. Until those
tests are fixed, keep the 3D graphics class valid for the safety build.
JIRA NVGPU-4301
Change-Id: Ifd2a13bee3210821799c2bca10e7245eb3c79121
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2224658
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a.h will include gops_mc.h, which contains the mc ops definitions.
Add doxygen comments for the HAL functions that are called directly.
Also move mc_gp10b_intr_pmu_unit_config to the non-fusa HAL file.
JIRA NVGPU-2524
Change-Id: I4f326332d7842211b004b372d79fac9fe6ed40e7
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2226017
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Writing the same value to railgate_enable_store should be treated as a
nop and succeed. Doing so is not only an optimization for the operation
but also the convention that users expect for "settings". This change
primarily fixes a peculiar situation in the driver:
root@localhost:/sys/devices/17000000.gp10b# cat railgate_enable
0
root@localhost:/sys/devices/17000000.gp10b# echo 0 > railgate_enable
bash: echo: write error: Invalid argument
An attempt to disable railgating on a platform where railgating isn't
supported shouldn't be treated as 'invalid'; it is disabled after all.
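A sketch of the store-path early return; the function shape and the
current-state check are assumptions:

  #include <linux/types.h>

  /* Hypothetical: writing the already-current value is a successful nop. */
  static ssize_t railgate_enable_store_sketch(bool current_enable,
                                              bool requested_enable,
                                              size_t count)
  {
      if (requested_enable == current_enable)
          return (ssize_t)count; /* nothing to change; report success */

      /* ... otherwise apply (or reject) the new setting ... */
      return (ssize_t)count;
  }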
Bug 200562094
Change-Id: I3c04934bdbaf337c33d7de9cac6d53c96b4dacae
Signed-off-by: Leon Yu <leoyu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2225476
(cherry picked from commit 10b3b5b1d5)
Reviewed-on: https://git-master.nvidia.com/r/2226185
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bibek Basu <bbasu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently nvgpu reads the temperature from the NV_THERM_I2CS_SENSOR_00
register. The issues with this approach are:
1) NV_THERM_I2CS_SENSOR_00 doesn't support fractional precision, which
is POR.
2) It doesn't support negative temperatures, which are required for
Auto.
3) It doesn't take into account the right POR sensor in the VFE VBIOS
tables.
Instead, the current temperature can be read from the PMU via the therm
channel get-status interface.
NVBUG - 200549047
Change-Id: I2fb21926208876f3d3bebe3f2dee08edafedbc7d
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2196224
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu.common.unit was just an enum passed to the nvgpu.common.mc APIs.
So, move the enum into mc.h, replace the include of unit.h with mc.h
where appropriate, and update the yaml arch.
JIRA NVGPU-4144
Change-Id: I210ea4d3b49cd494e43add1b52f3fbcdb020a1e3
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2216106
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When virtualized, the guest OS has no direct access to
PMU functionality:
- Don't create debugfs entries that rely on PMU access
- Clean up PMU vgpu HAL entries that imply that PMU access
is supported
Bug 200543218
Change-Id: I12730b600802448a240f3de042760041d3ae7d29
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2213650
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 17.1 forbids the use of the stdarg.h features defined for
variable arguments.
This patch modifies the logging macros to use the slogf function for QNX
builds. This avoids the use of variable-argument functions for
formatting the log message.
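A sketch of a QNX logging macro built directly on slogf; the macro name
and the slog code below are assumptions, not the actual nvgpu macros:

  #include <sys/slog.h>
  #include <sys/slogcodes.h>

  /*
   * Hypothetical QNX-build macro: formatting is delegated entirely to
   * slogf(), so nvgpu itself no longer walks a va_list via stdarg.h.
   */
  #define nvgpu_log_info_sketch(fmt, ...) \
      ((void)slogf(_SLOG_SETCODE(_SLOGC_TEST, 0), _SLOG_INFO, \
                   fmt, ##__VA_ARGS__))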
Jira NVGPU-4075
Change-Id: I5b6bb1107a7e431afaa960003858193a477b2ee6
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2192016
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
IRQs were not enabled before nvgpu_finalize_poweron, so debugging early
init issues such as an MMU fault, invalid PRIV ring or bus access, etc.
triggered during nvgpu power-on was cumbersome. Hence, enable the IRQs
before nvgpu_finalize_poweron is called.
In the HUB (MMU fault) ISR, MMU fault handling is limited to the info
snapped in the priv reg in case of a fault during nvgpu power-on.
In the HUB (MMU fault) ISR, access to the fault buffers is synchronized,
as the nvgpu driver reads the fault buffer registers before proceeding
with fault handling. However, the additional MMU fault handling needs to
be synchronized with the GR/FIFO/quiesce/recovery setup through the
nvgpu power-on state.
JIRA NVGPU-1592
Change-Id: I8a5f2fcd79cb7ad8e215359e7a9fad50bfd46d67
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2203861
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
IRQs can get triggered during nvgpu power-on due to an MMU fault,
invalid PRIV ring or bus access, etc. The handlers for those IRQs can't
access the full state related to the IRQ unless nvgpu is fully powered
on.
In order to let the IRQ handlers know about the nvgpu power-on state,
the gk20a.power_on_state variable has to be protected by a spinlock,
since using the earlier power_lock mutex would lead to a deadlock.
Further, the IRQs need to be disabled on the local CPU while updating
the power state variable, hence use the spin_lock_irqsave and
spin_unlock_irqrestore APIs to protect the access.
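A sketch of the access pattern; power_on_state is named in the commit,
while the lock name, state type and value are assumptions:

  #include <linux/spinlock.h>
  #include <linux/types.h>

  struct power_state_tracking {
      spinlock_t power_state_lock;  /* name assumed */
      u32 power_on_state;
  };

  static void set_power_on_state(struct power_state_tracking *p, u32 state)
  {
      unsigned long flags;

      /* Disable IRQs on the local CPU while updating the power state. */
      spin_lock_irqsave(&p->power_state_lock, flags);
      p->power_on_state = state;
      spin_unlock_irqrestore(&p->power_state_lock, flags);
  }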
JIRA NVGPU-1592
Change-Id: If5d1b5e2617ad90a68faa56ff47f62bb3f0b232b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2203860
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, after sending the change seq RPC, nvgpu waits for a fixed
time of 20ms.
This CL replaces that with pmu_wait_message_cond, which returns
immediately after the change seq completion event is received.
Also added a debugfs node to report the change seq execution time.
Bug 200545366
Change-Id: Iba283f65d4949858be9cbff88de4d21a8c92ff81
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2202423
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The golden image size is set when the memory is allocated; see
nvgpu_gr_obj_ctx_init. If the golden image size is 0, gr_golden_image is
most likely a NULL pointer, so add a NULL pointer check in
tpc_pg_mask_store to avoid a NULL pointer dereference.
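A sketch of the guard in the sysfs store; the helper shape and return
value are assumptions:

  #include <linux/errno.h>

  /* Hypothetical check used at the top of tpc_pg_mask_store(). */
  static int golden_image_check_sketch(const void *gr_golden_image)
  {
      if (gr_golden_image == NULL)
          return -EINVAL; /* golden image not allocated yet (size still 0) */
      return 0;
  }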
Bug 2403210
Change-Id: I14df5cd94d7a4418c3089c5f84b6eab93c485ba6
Signed-off-by: Sunny Li <sunnyl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2161280
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>