The earlier implementation of the railgate disable config disabled
runtime PM during pm_init. This caused multiple issues:
1. The GPU rail is turned on as soon as the nvgpu driver probe is
called, even though the actual GPU HW init may happen much later.
2. It breaks the railgate_enable sysfs node: railgate_enable does
not work if runtime PM is disabled.
To avoid these issues when railgating is disabled, enable runtime PM
during pm_init and set the auto-suspend delay to a negative value
(-1), which disables runtime PM suspend calls.
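A minimal sketch of the intended pm_init behavior, assuming the
standard Linux runtime PM API (everything apart from the
pm_runtime_* calls is illustrative):

    #include <linux/pm_runtime.h>
    #include <linux/types.h>

    /* Runtime PM is always enabled; railgating is controlled purely
     * through the auto-suspend delay. */
    static void gk20a_pm_init_sketch(struct device *dev,
                                     bool can_railgate,
                                     int railgate_delay)
    {
            /* -1 disables runtime PM suspend; a real delay enables it */
            pm_runtime_set_autosuspend_delay(dev,
                            can_railgate ? railgate_delay : -1);
            pm_runtime_use_autosuspend(dev);
            pm_runtime_enable(dev);
    }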
The following issues are also fixed along with this:
1. Updated the railgate_enable debugfs implementation to use the
auto-suspend delay.
To disable railgating:
Set the auto-suspend delay to a negative value (-1), which disables
runtime PM suspend.
To enable railgating:
Set the auto-suspend delay to the railgate_delay value.
Also removed the redundant user_railgate_disabled gk20a device data
and replaced it with can_railgate wherever applicable.
2. Initialized the default railgate_delay to 500 ms to avoid rapid
railgate on/off transitions when railgating is enabled from the
disabled state.
3. Created the railgate_residency debugfs node irrespective of the
initial can_railgate state. This helps the case where railgating is
initially off and is later enabled through the sysfs node.
Bug 2073029
Change-Id: I531da6d93ba8907e806f65a1de2a447c1ec2665c
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694944
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
RXDET is supported only on NVLINK 2.2 and later devices.
Add a HAL to run RXDET selectively based on the chip. RXDET needs to
be run after the links are out of reset but before any other
link-level initialization.
minion_send_cmd is also made non-static to support RXDET
functionality.
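A rough sketch of how such a chip-selective HAL hook might be wired
up (the struct layout and names below are illustrative, not the
actual nvgpu interface):

    /* Minimal illustrative types; the real nvgpu structs differ. */
    struct gpu;
    struct nvlink_ops {
            /* NULL on devices older than NVLINK 2.2 */
            int (*rxdet)(struct gpu *g, unsigned long link_mask);
    };
    struct gpu {
            struct { struct nvlink_ops nvlink; } ops;
    };

    /* Run RXDET only where the HAL is populated: after the links are
     * out of reset, before any other link-level initialization. */
    static int nvlink_rxdet_if_supported(struct gpu *g,
                                         unsigned long link_mask)
    {
            if (g->ops.nvlink.rxdet != NULL)
                    return g->ops.nvlink.rxdet(g, link_mask);
            return 0;
    }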
JIRA NVLINK-160
Change-Id: Ic65b8dbc7281743f62072089ff3c805521ac9b38
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1729525
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Integrity already typedefs these types and complains if you
redefine them, even with the same underlying type.
Since we only use these in the regops_gk20a.h header file (outside
of the Linux-specific code, that is) this patch just changes the
__uXX types to uXX. With that we can delete the now unnecessary
__uXX defs.
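An illustration of the kind of change involved (the struct and
field names are placeholders, not the actual regops definitions):

    #include <stdint.h>

    /* Placeholder nvgpu-style typedefs; the real ones come from
     * nvgpu's own types header rather than the kernel's __uXX. */
    typedef uint8_t  u8;
    typedef uint32_t u32;
    typedef uint64_t u64;

    /* After the change: regops structs use uXX instead of __uXX, so
     * the duplicate __uXX definitions that Integrity rejects can be
     * deleted. */
    struct reg_op_sketch {
            u8  op;
            u32 offset;
            u64 value;
    };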
JIRA NVGPU-525
Change-Id: I01dd2723b68db2170449342f73c711ee5a589adb
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1721186
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The forced PRAMIN reads and writes for sysmem buffers haven't worked in
a while since the PRAMIN access code was refactored to work with
vidmem-only sgt allocs. This feature was only ever meant for testing and
debugging PRAMIN access and early dGPU support, but that is stable
enough now so just delete the broken feature instead of fixing it.
Change-Id: Ib31dae4550f3b6fea3c426a2e4ad126864bf85d2
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1723725
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The mechanism of posting events to userspace is OS specific.
In Linux this works through a poll fd, where we can make use of
nvgpu_cond variables to poll and trigger the corresponding
wait_queue.
The post-event functionality on QNX, however, is not based on poll.
It uses iofunc_notify_trigger to post the events to the calling
process, so QNX cannot work with nvgpu_conds.
To overcome this, create an OS-specific interface function for
posting clk arb events. Linux can use the nvgpu_cond based
implementation, which makes sense since these are already
initialized and polled only in Linux-specific code. QNX can
implement this interface to call the iofunc_notify_* functions as
it needs.
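A sketch of the proposed split, with hypothetical names for the
OS-specific hook:

    #include <stdint.h>

    struct nvgpu_clk_dev;	/* illustrative forward declaration */

    /* Implemented per OS:
     *  - Linux: signal the nvgpu_cond backing the poll fd.
     *  - QNX:   call iofunc_notify_trigger() on the registered
     *           notification. */
    void nvgpu_clk_arb_event_post(struct nvgpu_clk_dev *dev,
                                  uint32_t alarm_mask);

    /* The OS-agnostic arbiter code then only calls the hook: */
    static void clk_arb_notify_sketch(struct nvgpu_clk_dev *dev,
                                      uint32_t alarm_mask)
    {
            nvgpu_clk_arb_event_post(dev, alarm_mask);
    }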
Jira VQRM-3741
Change-Id: I7d9f71dae2ae7f6a09cd56662003fd1b7e50324c
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1709656
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the NVGPU_IOCTL_CHANNEL_RESCHEDULE_RUNLIST ioctl to reschedule
the runlist, and optionally check host and FECS status to preempt a
pending load of a context not belonging to the calling channel on
the GR engine during a context switch.
This should be called immediately after a submit to decrease the
worst-case submit-to-start latency for a high-interleave channel.
There is less than a 0.002% chance that the ioctl blocks for up to a
couple of milliseconds due to a race condition where the FECS status
changes while being read.
For GV11B it will always preempt the pending load of an unwanted
context, since there is no chance that the ioctl blocks due to the
race condition.
Also fix a bug with host reschedule for multiple runlists, which
needs to write both runlist registers.
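A hedged example of how userspace might use the new ioctl right
after a submit (the argument struct and flag below are assumptions
based on this description, not verified uapi):

    #include <stdint.h>
    #include <sys/ioctl.h>

    /* Hypothetical arg layout; consult the real nvgpu uapi header. */
    struct reschedule_runlist_args {
            uint32_t flags;		/* e.g. a "preempt next" bit */
            uint32_t padding;
    };

    static void reschedule_after_submit(int channel_fd,
                                        unsigned long ioctl_nr)
    {
            struct reschedule_runlist_args args = {
                    .flags = 1,	/* ask for the optional host/FECS check */
            };

            /* Call immediately after a submit to cut worst-case
             * submit-to-start latency for a high-interleave channel. */
            (void)ioctl(channel_fd, ioctl_nr, &args);
    }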
Bug 1987640
Bug 1924808
Change-Id: I0b7e2f91bd18b0b20928e5a3311b9426b1bf1848
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This header doesn't have anything in it yet, but it is now present.
A change recently went in that only checked for __KERNEL__ before
falling back to including the QNX header.
This caused the POSIX build in GVS to attempt to include the QNX
header. The QNX source is not synced in userspace dev-kernel test
builds, resulting in a missing header.
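A sketch of the include-selection pattern this implies (the header
paths and the POSIX macro name are assumptions):

    /* Pick the OS-specific header explicitly instead of letting
     * everything that isn't __KERNEL__ fall through to QNX. */
    #ifdef __KERNEL__
    #include <nvgpu/linux/some_header.h>	/* Linux kernel build */
    #elif defined(__NVGPU_POSIX__)
    #include <nvgpu/posix/some_header.h>	/* POSIX userspace build */
    #else
    #include <nvgpu/qnx/some_header.h>		/* QNX build */
    #endif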
JIRA NVGPU-525
Change-Id: I60f29ad69cbed38b6ea47f95ca504dab51fa01e7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1714083
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For most of the builds we have in GVS, userspace is 64-bit. But it
seems like at least some L4T userspace builds are either not 64-bit
or have an unsigned long that only covers 32 bits. This is seen in
GVS:
/dvs/git/dirty/git-master_modular/kernel/nvgpu/drivers/gpu/nvgpu/gv11b/mm_gv11b.c: In function 'gv11b_gpu_phys_addr':
/dvs/git/dirty/git-master_modular/kernel/nvgpu/drivers/gpu/nvgpu/gv11b/mm_gv11b.c:273:3: error: left shift count >= width of type
make[2]: *** [/dvs/git/dirty/git-master_modular/tmake/artifacts/CommonRules.tmk:318: mm_gv11b.o] Error 1
This patch simply bumps the UL to ULL in BIT() to make sure that
we always have at least 64 bits available for the BIT() macro.
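The change itself is trivial; a before/after sketch of the macro
(assuming BIT() lives in a common nvgpu header):

    /* Before: overflows when unsigned long is only 32 bits wide. */
    #define BIT_OLD(i)	(1UL << (i))

    /* After: at least 64 bits are always available. */
    #define BIT_NEW(i)	(1ULL << (i))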
JIRA NVGPU-525
Change-Id: I67de4338afc5bee4f1fa16faee6116e0e7dbf108
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1718564
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added VF change inject support for gv10x
- Updated clk_pmu_vf_inject() to fill the required data
for Pascal or Volta VF change inject support
- Added a new ctrl clk interface for the gv10x clk domain list
- Added a PMU interface for the gv10x clk domain list and
VF change inject request
- Modified the clk cmd, msg and RPC IDs to match
the chips_a_23609936 branch
Bug 200399373
Change-Id: Ib9dc10073386f63bdfd92110c7ec3e09b1c484ce
Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700746
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The NVGPU_DBG_GPU_IOCTL_PC_SAMPLING ioctl is not handled properly
in the HV case for both Linux and QNX. Currently the guest VM tries
to perform GPU memory read and write operations that are supposed to
be done by the RM server, causing a crash. This patch fixes the
ioctl failure.
Bug 2052040
Change-Id: Ia0773959b84739a1bced858331764751520a3561
Signed-off-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1708102
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Replaced all instances of sync_fence in the gk20a_fence* code with
nvgpu_os_fence. Added the install_fence API to the nvgpu_os_fence
abstraction. The sync_fence mechanism and its dependencies are
completely removed from the fence_gk20a methods. Due to the recent
os_fence changes and the changes to fence_gk20a, we can finally get
rid of all the CONFIG_SYNC checks present in the submit path.
JIRA NVGPU-66
Change-Id: I3551dab04b93b1e94db83fc102a41872be89e9ed
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1701245
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch constructs an abstraction to hide the sync_fence
functionality from the common code. struct nvgpu_os_fence acts as an
abstraction for struct sync_fence.
struct nvgpu_os_fence consists of an ops structure named nvgpu_os_fence_ops
which contains an API to do pushbuffer programming to generate wait
commands for the fence.
The current implementation of nvgpu only allows the wait method on
a sync_fence that was generated using a matching backend (i.e.
either nvhost syncpoints or semaphores). In this patch, a generic
API is introduced which decides the type of the underlying
implementation of the struct nvgpu_os_fence at runtime and runs the
corresponding wait implementation on it.
This patch changes channel_sync_gk20a's semaphore-specific
implementation to use the abstract API. A subsequent patch will make
the changes for the nvhost-syncpoint based implementation as well.
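A rough sketch of the abstraction described above (field and op
names are illustrative, not the exact nvgpu definitions):

    struct gk20a;
    struct channel_gk20a;
    struct priv_cmd_entry;
    struct nvgpu_os_fence;

    struct nvgpu_os_fence_ops {
            /* Emit pushbuffer wait commands for this fence. */
            int (*program_waits)(struct nvgpu_os_fence *s,
                                 struct priv_cmd_entry *wait_cmd,
                                 struct channel_gk20a *c,
                                 int max_wait_cmds);
            void (*drop_ref)(struct nvgpu_os_fence *s);
    };

    /* Wraps the OS fence object; the ops are chosen at runtime based
     * on the backend (nvhost syncpoint vs. semaphore). */
    struct nvgpu_os_fence {
            void *priv;	/* e.g. struct sync_fence * on Linux */
            struct gk20a *g;
            const struct nvgpu_os_fence_ops *ops;
    };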
JIRA NVGPU-66
Change-Id: If6675bfde5885c3d15d2ca380bb6c7c0e240e734
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667218
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split sim initialization into two parts:
the first part gets invoked as part of probe and
the second part gets invoked in finalize_poweron
after the HAL has been initialized.
This is done because some of the sim init
code uses mm APIs which are assigned as
part of HAL init.
Also replaced the sim buffer allocation APIs
with nvgpu_dma_sys_alloc.
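A sketch of the resulting two-phase flow (function names are
approximations):

    struct gk20a;

    /* Phase 1: probe time, before the HAL exists; no mm APIs yet. */
    int nvgpu_init_sim_support_early(struct gk20a *g);

    /* Phase 2: finalize_poweron, after HAL init; the sim buffers are
     * allocated here via the nvgpu DMA sysmem alloc API. */
    int nvgpu_init_sim_support(struct gk20a *g);

    static int finalize_poweron_sketch(struct gk20a *g)
    {
            /* ...HAL init has already run at this point... */
            return nvgpu_init_sim_support(g);
    }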
Change-Id: Ib019fbb747bdf6dcc74e7deba732ab41f0869e96
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1705424
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Segregated the OS-agnostic functions from linux/sim.c and
linux/sim_pci.c into sim.c and sim_pci.c, while retaining the
OS-specific functions.
Renamed all gk20a_* APIs to nvgpu_*.
Renamed hw_sim_gk20a.h to nvgpu/hw_sim.h and
moved hw_sim_pci.h to nvgpu/hw_sim_pci.h.
JIRA VQRM-2368
Change-Id: I040a6b12b19111a0b99280245808ea2b0f344cdd
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1702425
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU ucode is updated to include the LDIV slowdown factor in the
gr_init_param command.
- Defined a new version, gr_init_param_v2.
- Updated the PMU FW version code.
- Set the LDIV slowdown factor to 0x1e by default.
- Added a sysfs entry to program the LDIV slowdown factor at
runtime, as sketched below.
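For example, the new knob could be exercised from a small user
program along these lines (the sysfs path and node name are
assumptions, not the actual location):

    #include <stdio.h>

    /* Hypothetical sysfs path; the real node may live elsewhere. */
    #define LDIV_NODE "/sys/devices/gpu.0/ldiv_slowdown_factor"

    static int set_ldiv_slowdown(unsigned int factor)
    {
            FILE *f = fopen(LDIV_NODE, "w");

            if (f == NULL)
                    return -1;
            /* The default after this change is 0x1e. */
            fprintf(f, "%u\n", factor);
            return fclose(f);
    }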
Bug 200391931
Change-Id: Ic66049588c3b20e934faff3f29283f66c30303e4
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1674208
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to enable the movement of the clk arbiter to common code,
we need to remove the NVGPU_GPU_EVENT_* defines (which are present
in uapi) and instead use the common-code defines.
Add a conversion function for this.
With this the uapi header is no longer required to be included
inside clk_arb.c.
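A sketch of such a conversion helper (the enum values and names are
hypothetical, for illustration only):

    #include <stdint.h>

    /* Hypothetical common-code and uapi event ids. */
    enum { NVGPU_EVENT_VF_UPDATE = 0, NVGPU_EVENT_ALARM_LOCAL = 1 };
    enum { UAPI_EVENT_VF_UPDATE = 0, UAPI_EVENT_ALARM_LOCAL = 1 };

    /* Map a common-code event id to the uapi value handed back to
     * userspace, so clk_arb.c never needs the uapi header itself. */
    static uint32_t nvgpu_event_to_uapi(uint32_t event)
    {
            switch (event) {
            case NVGPU_EVENT_VF_UPDATE:
                    return UAPI_EVENT_VF_UPDATE;
            case NVGPU_EVENT_ALARM_LOCAL:
                    return UAPI_EVENT_ALARM_LOCAL;
            default:
                    return event;
            }
    }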
Jira VQRM-3741
Change-Id: If01614b01733876046f98b97e70285c52bc33e45
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1699241
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add support for compiling nvgpu in a POSIX-compliant userspace.
This code adds all of the necessary abstraction interfaces (mostly
stubbed) to enable extremely limited and basic functionality in
nvgpu.
The goal of this code is to facilitate unit testing of the nvgpu
common core. By doing this in userspace it is much easier to write
tests that rely on very particular states within nvgpu since a user
can very precisely control the state of nvgpu.
JIRA NVGPU-525
Change-Id: I30e95016df14997d951075777e0585f912dc5960
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1683914
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The clk_vin data structures are updated as a new calibration type
(V20) is added.
The GP106 header does not have a vin calibration type, so V10 is
assumed if the calibration type is not V20.
Add fuse calibration for the V20 type.
Bug 200399373
Change-Id: I9449de1ecb0d0873f3bc16f46660f93fab5b9eac
Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1687591
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The clk arbiter code contains two significant portions:
one which interacts with userspace and is OS-specific,
and another which does the heavy lifting and can
be moved to the common, OS-agnostic code.
Split the code into two files in preparation for refactoring
the clk arbiter.
Jira VQRM-3741
Change-Id: I47e2c5b18d86949d02d6963c69c2e2ad161626f7
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1699240
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Allow a potential IOMMU'ed GMMU mapping for all SYSMEM buffers,
including coherent sysmem. Typically this won't actually happen,
since IO-coherent mappings will also often be accessed over
NVLINK, which is physically addressed.
Also update the comments surrounding this code to take into
account the new NVLINK nuances. Since NVLINK buffers are
directly mapped even when the IOMMU is enabled, this is very
deserving of a comment explaining what's going on.
Lastly add some simple functions for checking if an nvgpu_mem
(or a particular aperture field) is a sysmem aperture. Currently
this includes SYSMEM and SYSMEM_COH.
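A sketch of the new helpers, grounded in the description above (the
exact enum and function names are approximations):

    #include <stdbool.h>

    /* Illustrative aperture enum; the real one lives in nvgpu's
     * nvgpu_mem header. */
    enum aperture_sketch {
            APERTURE_INVALID = 0,
            APERTURE_SYSMEM,
            APERTURE_SYSMEM_COH,	/* IO-coherent sysmem */
            APERTURE_VIDMEM,
    };

    static inline bool aperture_is_sysmem(enum aperture_sketch ap)
    {
            return ap == APERTURE_SYSMEM || ap == APERTURE_SYSMEM_COH;
    }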
JIRA EVLR-2333
Change-Id: I992d3c25d433778eaad9eef338aa5aa42afe597e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665185
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>