Update the clk_vin data structures for the new calibration type
(V20). The GP106 header does not have a vin calibration type, so
assume V10 when the calibration type is not V20.
Add fuse calibration for the V20 type.
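A minimal sketch of the V10 fallback, with illustrative names (the
identifiers below are assumptions, not the actual nvgpu symbols):

    /* Names are assumptions for illustration only. */
    switch (vin_table_header.cal_type) {
    case VIN_CAL_TYPE_V20:
        err = vin_read_fuse_calibration_v20(g, &cal);
        break;
    default:
        /* GP106 headers lack the field: fall back to V10. */
        err = vin_read_calibration_v10(g, &cal);
        break;
    }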
Bug 200399373
Change-Id: I9449de1ecb0d0873f3bc16f46660f93fab5b9eac
Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1687591
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Made a change to pass the correct VOLT RPC param to the
  get-voltage request.
- Changed the VOLT_SET_VOLTAGE request to a blocking call to make
  sure the set-voltage request completes in the PMU and is ACKed.
- Created a rail count define for Pascal and Volta, then made
  changes to use the define as needed (see the sketch below).
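A sketch of the blocking behavior under these assumptions (the
defines, their values, and the helper names are illustrative, not
the actual driver identifiers):

    /* Assumed names and values for illustration only. */
    #define VOLT_RAIL_COUNT_PASCAL 1U
    #define VOLT_RAIL_COUNT_VOLTA  2U

    /* Block until the PMU ACKs the set-voltage request. */
    err = volt_send_set_voltage_rpc(g, &rpc);
    if (err == 0)
        err = volt_wait_for_pmu_ack(g, &rpc);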
Change-Id: I2662fadbe32b82585711f2568c4f800162899206
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1693402
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the rc_types below, to be passed to gk20a_fifo_recover:
MMU_FAULT
PBDMA_FAULT
GR_FAULT
PREEMPT_TIMEOUT
CTXSW_TIMEOUT
RUNLIST_UPDATE_TIMEOUT
FORCE_RESET
SCHED_ERR
This is useful for knowing what triggered recovery.
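A sketch of how an rc_type might be encoded and passed (the values
and the call shape are assumptions; only the names come from the
list above):

    /* Assumed encoding for illustration only. */
    #define RC_TYPE_MMU_FAULT      1U
    #define RC_TYPE_PBDMA_FAULT    2U
    #define RC_TYPE_GR_FAULT       3U
    #define RC_TYPE_CTXSW_TIMEOUT  4U

    gk20a_fifo_recover(g, engine_ids, hw_id, id_is_tsg,
                       id_is_known, verbose, RC_TYPE_MMU_FAULT);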
Bug 2065990
Change-Id: I202268c5f237be2180b438e8ba027fce684967b6
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662619
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently gv100 uses a very large timeout for polling engine
preempt done. This prevents the engine preempt polling loop from
timing out; also, before the engine preempt loop times out, a
channel timeout could kick in.
Avoid this by using the gr_idle_timeout defined by the driver.
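A sketch of the bounded poll, assuming the gk20a-style timeout
helpers (eng_preempt_done() is a hypothetical predicate):

    struct nvgpu_timeout timeout;

    /* Bound the poll by the driver-wide gr idle timeout rather
     * than a very large chip-specific value.
     */
    nvgpu_timeout_init(g, &timeout, gk20a_get_gr_idle_timeout(g),
                       NVGPU_TIMER_CPU_TIMER);
    do {
        if (eng_preempt_done(g, act_eng_id))
            return 0;
        nvgpu_usleep_range(delay, delay * 2);
    } while (!nvgpu_timeout_expired(&timeout));
    return -ETIMEDOUT;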
Change-Id: Icff5ae37f95f58f3195b9d630bdae42c08ced9a6
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1701059
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The return value from NVGPU_COND_WAIT_INTERRUPTIBLE in the channel
poll worker is wrongly compared with 0, and the boolean result of
that comparison is assigned to the ret value used subsequently.
Instead, the direct return value from
NVGPU_COND_WAIT_INTERRUPTIBLE should be used.
This bug seems to be a remnant of the following patch, which moved
the handling from 'wait_event_timeout' to 'NVGPU_COND_WAIT':
commit 301965fb77
gpu: nvgpu: Use nvgpu_cond in channel worker
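An illustration of the fix ('condition' and the argument list are
placeholders, not the actual call site):

    /* Before: ret got the boolean result of a comparison. */
    ret = NVGPU_COND_WAIT_INTERRUPTIBLE(&worker->wq,
                                        condition, 0) == 0;

    /* After: use the macro's return value directly. */
    ret = NVGPU_COND_WAIT_INTERRUPTIBLE(&worker->wq,
                                        condition, 0);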
Change-Id: Id48e197756a6855b35a9ee0dc26d014b62ed3860
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1705976
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Total DMA memory allocation is currently tracked by adding the
  page-aligned size of nvgpu_mem.
- The sequence is roughly as follows:
  - total dma memory used += mem->aligned_size
  - mem->aligned_size = PAGE_ALIGN(size)
- In the above sequence, the nvgpu_mem structure is still zeroed
  when its size is added to the total DMA memory used; only
  afterwards is it assigned the page-aligned value.
- This patch fixes total DMA memory usage tracking.
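A sketch of the corrected ordering (the accounting variable name
is illustrative):

    /* Compute the aligned size first, then account for it. */
    mem->aligned_size = PAGE_ALIGN(size);
    g->dma_memory_used += mem->aligned_size;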
Change-Id: Ibb879c8d38ae9077c3d198d9bb008a72e9208b4d
Signed-off-by: Deepak Bhosale <dbhosale@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1685312
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Clk arbiter code contains two significant portions: one that
interacts with userspace and is OS specific, and one that does the
heavy lifting and can be moved to common OS-agnostic code.
Split the code into two files in preparation for refactoring the
clk arbiter.
Jira VQRM-3741
Change-Id: I47e2c5b18d86949d02d6963c69c2e2ad161626f7
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1699240
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the two new HALs below:
gops.fifo.runlist_hw_submit() to submit a new runlist to hardware
gops.fifo.runlist_wait_pending() to wait until the runlist write
completes
Set the existing API gk20a_fifo_runlist_wait_pending() as the
gops.fifo.runlist_wait_pending HAL.
Add a new API, gk20a_fifo_runlist_hw_submit(), which submits the
runlist to h/w, and set it as the gops.fifo.runlist_hw_submit HAL.
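A sketch of the resulting HAL wiring (the function names are from
this change; the exact assignment site is an assumption):

    /* Wire the gk20a implementations into the fifo HAL. */
    g->ops.fifo.runlist_hw_submit = gk20a_fifo_runlist_hw_submit;
    g->ops.fifo.runlist_wait_pending =
        gk20a_fifo_runlist_wait_pending;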
Jira NVGPUT-20
Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700548
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Allow a potential IOMMU'ed GMMU mapping for all SYSMEM buffers,
including coherent sysmem. Typically this won't actually happen,
since IO-coherent mappings will also often be accessed over
NVLINK, which is physically addressed.
Also update the comments surrounding this code to take the new
NVLINK nuances into account. Since NVLINK buffers are directly
mapped even when the IOMMU is enabled, this is very deserving of
a comment explaining what's going on.
Lastly, add some simple functions for checking whether an
nvgpu_mem (or a particular aperture field) is a sysmem aperture.
Currently this includes SYSMEM and SYSMEM_COH.
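A sketch of the checks, assuming gk20a-style aperture enum values
(the exact spellings are assumptions):

    static inline bool nvgpu_aperture_is_sysmem(enum nvgpu_aperture ap)
    {
        return ap == APERTURE_SYSMEM || ap == APERTURE_SYSMEM_COH;
    }

    static inline bool nvgpu_mem_is_sysmem(struct nvgpu_mem *mem)
    {
        return nvgpu_aperture_is_sysmem(mem->aperture);
    }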
JIRA EVLR-2333
Change-Id: I992d3c25d433778eaad9eef338aa5aa42afe597e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665185
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Cache the rate used in clk_set_rate().
Return that cached rate from clk_get_rate() instead of reading the
hardware. The cached rate is also used to avoid duplicate requests
to clk_set_rate().
The motivation is to support multiple governors for the gpu clk.
Reading the clock from hardware is unreliable in a multi-governor
situation, and relying on the hardware clock value could mislead
the kernel gpu governor in its scaling calculations.
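A sketch of the caching scheme (function, field, and helper names
are illustrative, not the actual driver identifiers):

    static int gpu_clk_set_rate(struct gk20a *g, unsigned long rate)
    {
        int err = 0;

        /* Skip duplicate requests for the already-set rate. */
        if (rate != g->cached_rate) {
            err = program_gpcclk(g, rate);
            if (err == 0)
                g->cached_rate = rate;
        }
        return err;
    }

    static unsigned long gpu_clk_get_rate(struct gk20a *g)
    {
        /* Never read hardware: with multiple governors the
         * hardware value is unreliable.
         */
        return g->cached_rate;
    }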
Bug 2051688
Change-Id: I43fc056eea6f69fe0889c45640fcb892b658071c
Signed-off-by: Arun Kannan <akannan@nvidia.com>
(cherry picked from commit 7f819a9ba7)
Reviewed-on: https://git-master.nvidia.com/r/1662759
Reviewed-on: https://git-master.nvidia.com/r/1668919
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch deals with cleanups meant to make things simpler for
the upcoming OS abstraction patches for the sync framework. It
makes some substantial changes, listed as follows.
1) sync_timeline is moved out of gk20a_fence into struct
nvgpu_channel_linux. New function pointers, named
os_fence_framework, are created to provide OS-independent methods
for enabling/disabling the timeline. These function pointers are
located in struct os_channel under struct gk20a.
2) Construction of the channel_sync requires
nvgpu_finalize_poweron_linux() to be invoked before
nvgpu_init_mm_ce_context(). Hence, these methods are moved out of
gk20a_finalize_poweron() and invoked after
nvgpu_finalize_poweron_linux().
3) sync_fence creation is now delinked from fence construction and
moved to channel_sync_gk20a's channel_incr methods. These
sync_fences are mainly associated with post_fences.
4) In case userspace requires the sync_fences to be constructed,
we try to obtain an fd before gk20a_channel_submit_gpfifo()
instead of trying to do so later. This avoids potential after
effects of duplicate work submission due to failure to obtain an
unused fd.
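A sketch of the os_fence_framework hooks described in (1); the
exact member list and signatures are assumptions:

    /* OS-specific hooks hanging off struct gk20a's os_channel. */
    struct os_channel {
        int (*init_os_fence_framework)(struct channel_gk20a *ch,
                                       const char *fmt, ...);
        void (*destroy_os_fence_framework)(struct channel_gk20a *ch);
    };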
JIRA NVGPU-66
Change-Id: I42a3e4e2e692a113b1b36d2b48ab107ae4444dfa
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1678400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
As part of debug session unification, the following changes are
required.
- Include the bug.h header file to fix a compilation issue on QNX.
- The mechanism of posting debug events is OS specific. On Linux
this works through a poll fd, wherein we can use nvgpu_cond
variables to poll and trigger the corresponding wait_queue via an
nvgpu_cond_broadcast_interruptible() call.
The post-event functionality on QNX doesn't work through poll,
though. It uses iofunc_notify_trigger to post debug events to the
calling process; as such, QNX can't work with nvgpu_cond's.
To overcome this, create an OS-specific interface for posting
debugger events. Linux can call
nvgpu_cond_broadcast_interruptible() in its implementation, which
makes sense since these are already initialized and poll'ed in
Linux-specific code only.
QNX can implement this interface to call iofunc_notify_*
functions, as per its need. See the sketch below.
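A hypothetical shape for the interface and its Linux
implementation (the function name and the dbg_events field are
assumptions):

    /* OS-specific hook for posting a debugger event. */
    void nvgpu_dbg_session_post_event(struct dbg_session_gk20a *dbg_s);

    /* Linux implementation: wake the poll'ed wait queue. */
    void nvgpu_dbg_session_post_event(struct dbg_session_gk20a *dbg_s)
    {
        nvgpu_cond_broadcast_interruptible(
            &dbg_s->dbg_events.wait_queue);
    }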
Jira VQRM-2363
Change-Id: I0abdc0787f771040b8aff5384290d7e6549f81fb
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Signed-off-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1696368
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, hyp_read_ipa_pa_info() only translates IPA for RAM
mappings; it fails for MMIO mappings. In particular, it will fail
when attempting to translate addresses in the syncpoint shim
aperture. As a workaround, assume a 1:1 IPA-to-PA mapping when
hyp_read_ipa_pa_info() fails and the address is in the syncpoint
shim aperture.
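A sketch of the workaround (the signature and the shim-aperture
check are assumptions):

    err = hyp_read_ipa_pa_info(&pa, ipa);
    if (err != 0 && ipa_is_in_syncpt_shim(g, ipa)) {
        /* MMIO translation failed: assume identity mapping. */
        pa = ipa;
        err = 0;
    }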
Bug 2096877
Change-Id: I5267f0a8febf065157910ad3408374cacd398731
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1687796
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The Linux driver runs in the user's process, but the QNX driver
has a dedicated driver process, so the two have different ways to
get the user pid. nvgpu common code expects calls from OS-specific
code to pass the pid/tid.
ce/cde open channels for internal use, so we use the driver pid
for them.
Jira VQRM-3534
Change-Id: I892372ac5f1dc4d25f9928d16992bcc659d12a56
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694145
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gv11b/gk20a_create_priv_addr_table() we do not consider
floorswept FBPAs and just calculate the unicast list assuming all
FBPAs are present. This generates an incorrect list of unicast
addresses.
Fix this by introducing a new HAL, ops.gr.split_fbpa_broadcast_addr:
Set gr_gv100_get_active_fpba_mask() for GV100.
Set gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips.
gr_gv100_get_active_fpba_mask() will first get the active FBPA
mask and generate the unicast list only for active FBPAs.
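A sketch of generating unicast entries only for active FBPAs
(helpers other than the mask function named above are
assumptions):

    u32 fbpa_mask = gr_gv100_get_active_fpba_mask(g);
    u32 fbpa;

    for (fbpa = 0U; fbpa < num_fbpas; fbpa++) {
        /* Skip floorswept FBPAs. */
        if ((fbpa_mask & BIT32(fbpa)) != 0U)
            priv_addr_table[(*count)++] =
                pri_fbpa_addr(g, addr, fbpa);
    }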
Bug 200398811
Jira NVGPU-556
Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694444
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>