The submit gpfifo flags are splattered everywhere inside the nvgpu
code. Although their usage is confined to nvgpu Linux code, they
still need to be removed and replaced with the defines present in
common code.
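As a hedged sketch (the common define names and bit values here are
assumptions), the Linux ioctl layer would translate the UAPI flags
onto the common defines along these lines:

    /* hypothetical common defines; actual names/values may differ */
    #define NVGPU_SUBMIT_FLAGS_FENCE_WAIT  BIT(0)
    #define NVGPU_SUBMIT_FLAGS_FENCE_GET   BIT(1)
    #define NVGPU_SUBMIT_FLAGS_SYNC_FENCE  BIT(3)

    /* in the ioctl handler, map UAPI flags onto common ones */
    u32 flags = 0;
    if (args->flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_WAIT)
        flags |= NVGPU_SUBMIT_FLAGS_FENCE_WAIT;
    if (args->flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET)
        flags |= NVGPU_SUBMIT_FLAGS_FENCE_GET;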
VQRM-3465
Change-Id: I901b33565b01fa3e1f9ba6698a323c16547a8d3e
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1691979
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the usage of nvgpu_gpfifo splattered across nvgpu,
and replace it with a struct defined in common code.
The usage is still inside Linux, but this helps the
subsequent unification efforts, e.g. to unify the submit
path.
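A gpfifo entry is two 32-bit words describing a pushbuffer segment;
a hedged sketch of the common replacement struct (field names are
assumptions):

    struct nvgpu_gpfifo_entry {
        u32 entry0;  /* low bits of the pushbuffer GPU VA */
        u32 entry1;  /* high VA bits, length and control bits */
    };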
VQRM-3465
Change-Id: I9e5ac697a0c7f85239ddba319085c09481d20d6b
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1691978
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the usage of nvgpu_fence splattered across nvgpu,
and replace it with a struct defined in common code.
The usage is still inside Linux, but this helps the
subsequent unification efforts, e.g. to unify the submit
path.
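A hedged sketch of the common replacement struct, mirroring the
fence fields of the submit ioctl (field names are assumptions):

    struct nvgpu_channel_fence {
        u32 id;     /* syncpoint id or sync fd, depending on flags */
        u32 value;  /* syncpoint threshold */
    };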
VQRM-3465
Change-Id: Ic3737450123dfc5e1c40ca5b6b8d8f6b3070aa0d
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1691977
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Enabled internal temperature sensor reads for the gv100
dgpu.
- Added a check for temperature read support before
proceeding to read the temperature from H/W.
- Assigned the GP106 temperature HALs to GV100, as there are no
changes between the GP106 and GV100 H/W registers (see the sketch
below).
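A hedged sketch of the HAL reuse in the GV100 HAL init (the op and
function names are assumptions):

    /* GP106 and GV100 therm registers match, so reuse the GP106 HAL */
    gops->therm.get_internal_sensor_curr_temp =
        gp106_get_internal_sensor_curr_temp;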
Bug 200352328
Change-Id: I86b5a1859b87ace49a07d0ff3749bb5b085bba91
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673347
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The pmgr code is in theory common code; however, there were uses
of Linux-specific constructs within it.
This patch cleans that up by deleting the unnecessary os_linux.h
includes and the usage of kfree(), and by adding several platform
fields to the gk20a struct. The platform data is copied into the
gk20a struct in the platform initialization code so that this
common code can access it without requiring any knowledge of the
OS platform data.
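A hedged sketch of the copy in platform init (the field names here
are illustrative assumptions, not the actual fields added):

    static void nvgpu_init_pmgr_data(struct gk20a *g,
                                     struct gk20a_platform *p)
    {
        /* mirror OS platform data into the common gk20a struct so
         * the pmgr code never needs os_linux.h */
        g->power_sensor_mask = p->power_sensor_mask;  /* assumed */
        g->power_rail_count  = p->power_rail_count;   /* assumed */
    }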
JIRA NVGPU-525
Change-Id: Ic4bb6021f60b0a0778779ab5f3e15b7e5ca98306
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673825
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The patch defines 'struct nvgpu_gpfifo_args' to be filled
by the alloc_gpfifo(_ex) ioctls and passed to the
gk20a_channel_alloc_gpfifo function. This is required as
preparation for having usermode submission support in the
core channel code.
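A hedged sketch of the struct, with the field list assumed from the
alloc_gpfifo(_ex) ioctl parameters:

    struct nvgpu_gpfifo_args {
        u32 num_entries;
        u32 num_inflight_jobs;
        u32 flags;
    };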
Change-Id: I72acc00cc5558dd3623604da7d716bf849f0152c
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1683391
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
BIT() is defined as returning a 64-bit value. We use it to create
the log mask values, but the functions that accept a log mask take
only a u32 parameter.
Use u64 as the log mask parameter type in the logging functions so
that the sizes match.
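A hedged sketch of the widened signature (the exact prototype may
differ):

    /* log_mask is now u64 to match masks built with BIT() */
    void nvgpu_log(struct gk20a *g, u64 log_mask,
                   const char *fmt, ...);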
Change-Id: I6f0803a7d04ee6a2ee725b5defc4cc14b5b7acf5
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1683818
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When runtime PM is disabled, the GPU rail is powered on as soon as
the nvgpu module is loaded. If PM suspend/resume is called before
GPU HW initialization (g->poweron = false), PM suspend skips GPU
railgating, which causes issues with SC7 entry/exit.
To fix this issue (see the sketch below):
1. During PM suspend, if g->poweron is false, check whether runtime
PM is disabled and, if so, railgate the GPU rail.
2. On PM resume, check whether runtime PM is disabled and, if so,
power the GPU rail back up even though the GPU driver is not
initialized.
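A hedged sketch of the suspend-side check; pm_runtime_enabled() is
the stock Linux API, while get_gk20a() and gk20a_pm_railgate() are
assumed nvgpu helpers:

    static int gk20a_pm_suspend(struct device *dev)
    {
        struct gk20a *g = get_gk20a(dev);

        if (!g->poweron) {
            /* HW never initialized: railgate manually, since
             * runtime PM is disabled and cannot do it for us */
            if (!pm_runtime_enabled(dev))
                gk20a_pm_railgate(dev);
            return 0;
        }

        /* ... normal suspend path ... */
        return 0;
    }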
Bug 2073029
Change-Id: I7631109d79cda5882d2864557f1b7b3d2d89c9f6
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679010
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Accept submits on deterministic channels even when the prefence is a
syncfd, but only if it has just one fence inside.
However, because NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE is shared
between pre- and postfences, a postfence
(SUBMIT_GPFIFO_FLAGS_FENCE_GET) is not allowed at the same time.
The sync framework is problematic for deterministic channels due to
certain allocations that are not controlled by nvgpu. However, that
only applies to postfences, yet we had disallowed FLAGS_SYNC_FENCE
for deterministic channels even when a postfence is not needed.
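A hedged sketch of the submit-path check (the fence-counting helper
is a hypothetical name):

    if (c->deterministic &&
        (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_SYNC_FENCE)) {
        /* a postfence output would allocate from the sync
         * framework, which is not under nvgpu's control */
        if (flags & NVGPU_SUBMIT_GPFIFO_FLAGS_FENCE_GET)
            return -EINVAL;
        /* a syncfd prefence is fine only if it wraps exactly
         * one fence */
        if (nvgpu_syncfd_num_fences(fence->id) != 1) /* hypothetical */
            return -EINVAL;
    }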
Bug 200390539
Change-Id: I099bbadc11cc2f093fb2c585f3bd909143238d57
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1680271
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The MAX/threshold value of a user managed syncpoint is not tracked
by nvgpu, so if a channel is reset by nvgpu there could be waiters
still waiting on some user syncpoint fence.
Fix this by setting a large, safe value on the user managed
syncpoint when aborting the channel and when closing the channel.
We currently increment the current value by 0x10000, which should be
sufficient to release any pending waiters.
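A hedged sketch of the safe-value helper (the nvhost accessor names
are assumptions):

    void nvgpu_nvhost_syncpt_set_safe_state(
            struct nvgpu_nvhost_dev *d, u32 syncpt_id)
    {
        /* MAX is not tracked for user managed syncpoints, so jump
         * the current value far past any plausible threshold */
        u32 cur = nvgpu_nvhost_syncpt_read(d, syncpt_id); /* assumed */
        nvgpu_nvhost_syncpt_set_val(d, syncpt_id,         /* assumed */
                                    cur + 0x10000);
    }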
Bug 200326065
Jira NVGPU-179
Change-Id: Ie6432369bb4c21bd922c14b8d5a74c1477116f0b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1678768
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Delete the proxy waiter for non-semaphore-backed syncfds in the sema
wait path to simplify the code, to remove dependencies on the sync
framework (and thus Linux), and to support upcoming refactorings.
This feature has never actually been used for foreign fences.
Jira NVGPU-43
Jira NVGPU-66
Change-Id: I2b539aefd2d096a7bf5f40e61d48de7a9b3dccae
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665119
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
get_cycles() is a Linux-specific API used in common code, apparently
as a way to generate time stamps. So add an API to generate
'high resolution' time stamps. The new API returns an opaque time
stamp, i.e., not something one may use directly as a time, since the
Linux implementation just uses this cycle counter.
Other implementations will, of course, be free to return a real
time stamp.
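A hedged sketch of the Linux backend (the API name is an
assumption):

    #include <asm/timex.h>

    u64 nvgpu_hr_timestamp(void)
    {
        /* opaque value: do not interpret as wall-clock time */
        return get_cycles();
    }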
JIRA NVGPU-525
Change-Id: I237aac9bd6c795d000459025bdb4fce92e8aaa3d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673811
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
VGPU sets aggressive_sync_destroy_thresh even for GV11B, and that
breaks allocation of a user managed syncpoint on VGPU.
Remove this check for now, until a solution is finalized.
Bug 200397265
Bug 200326065
Change-Id: Idd765cfdd40b9055d9e083d59c85c84d8b213ee9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1675678
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mikko Perttunen <mperttunen@nvidia.com>
Add IPA to PA translation for GV100 nvlink / pass-through mode:
- define a platform->phys_addr(g, ipa) method
- call nvgpu_init_soc_vars from nvgpu_tegra_pci_probe
- in nvgpu_init_soc_vars, set platform->phys_addr to
nvgpu_tegra_hv_ipa_pa if a hypervisor is present
- in __nvgpu_sgl_phys, use sg_phys, then apply platform->phys_addr
if defined (see the sketch after this list)
- implement the IPA to PA translation in nvgpu_tegra_hv_ipa_pa
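A hedged sketch of the translation hook (helper names assumed from
the steps above):

    u64 __nvgpu_sgl_phys(struct gk20a *g, struct nvgpu_sgl *sgl)
    {
        struct gk20a_platform *platform =
            gk20a_get_platform(dev_from_gk20a(g));    /* assumed */
        u64 ipa = sg_phys((struct scatterlist *)sgl);

        /* under a hypervisor the scatterlist address is an IPA
         * and must be translated to a real PA */
        if (platform->phys_addr)
            return platform->phys_addr(g, ipa);

        return ipa;
    }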
Bug 200392719
Change-Id: I622049ddc62c2a57a665dd259c1bb4ed3843a537
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673582
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a __nvgpu_sgl_phys function that can be used to implement IPA
to PA translation in a subsequent change.
Adapt the existing function prototypes to add a pointer to the GPU
context, as we will need it to check whether IPA to PA translation
is needed.
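A hedged sketch of the prototype change (the exact signatures may
differ):

    /* before */
    u64 nvgpu_sgt_get_phys(struct nvgpu_sgt *sgt,
                           struct nvgpu_sgl *sgl);

    /* after: the gk20a context rides along for the IPA check */
    u64 nvgpu_sgt_get_phys(struct gk20a *g, struct nvgpu_sgt *sgt,
                           struct nvgpu_sgl *sgl);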
JIRA EVLR-2442
Bug 200392719
Change-Id: I5a734c958c8277d1bf673c020dafb31263f142d6
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1673142
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Replace the padding in nvgpu_channel_wdt_args with a timeout value in
milliseconds, and add NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT to
signify the existence of this new field. When the new flag is included
in the value of wdt_status, the field is used to set a per-channel
timeout to override the per-GPU default.
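A hedged sketch of the resulting ioctl argument struct (layout and
flag values assumed from the description):

    struct nvgpu_channel_wdt_args {
        __u32 wdt_status;   /* enable/disable plus the flags below */
        __u32 timeout_ms;   /* replaces the old padding */
    };
    #define NVGPU_IOCTL_CHANNEL_WDT_FLAG_SET_TIMEOUT   (1 << 2)
    #define NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP  (1 << 3)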
Add NVGPU_IOCTL_CHANNEL_WDT_FLAG_DISABLE_DUMP to disable the long
debug dump when a timed-out channel gets recovered by the watchdog.
Printing the dump to a serial console easily takes several seconds.
(Note that there is a separate NVGPU_TIMEOUT_FLAG_DISABLE_DUMP for
the ctxsw timeout in NVGPU_IOCTL_CHANNEL_SET_TIMEOUT_EX as well.)
The behaviour of NVGPU_IOCTL_CHANNEL_WDT is changed so that either
NVGPU_IOCTL_CHANNEL_ENABLE_WDT or NVGPU_IOCTL_CHANNEL_DISABLE_WDT has to
be set. The old behaviour was that other values were silently ignored.
The usage of the global default debugfs-controlled ch_wdt_timeout_ms
is changed so that its value takes effect only for newly opened
channels instead of in real time. Also, a zero value no longer means
that the watchdog is disabled; there is a separate flag for that,
after all.
gk20a_fifo_recover_tsg used to ignore the value of "verbose" when no
engines were found. Correct this.
Bug 1982826
Bug 1985845
Jira NVGPU-73
Change-Id: Iea6213a646a66cb7c631ed7d7c91d8c2ba8a92a4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1510898
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Expose the gfxp wfi timeout unit via sysfs:
/sys/devices/gpu.0/gfxp_wfi_timeout_unit
  usec   - microseconds
  sysclk - gpu clock count
Treat gr_fe_gfxp_wfi_timeout_r as a context-switched register on
gv11b.
Set the default gfxp_wfi_timeout to 100 usec to match gp10b at
1 GHz.
bug 1888344
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Change-Id: I7fa64ce6912ae861244856807543b17bd7a26bed
Reviewed-on: https://git-master.nvidia.com/r/1651517
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU ucode records the supported feature list for a particular
chip as a support mask sent via PMU_PG_PARAM_CMD_GR_INIT_PARAM. It
then enables a selective feature list through an enable mask sent
via the PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE cmd.
Until now, only the ELPG state machine mask was enabled, so only the
ELPG state machine was executed while other crucial steps in the
ELPG entry/exit sequence were skipped.
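A hedged, illustrative sketch of the fix (the feature mask and
helper names are assumptions; only the ELPG state machine bit was
set before):

    u32 enable_mask = PMU_GR_FEATURE_MASK_ELPG_SM |    /* as before */
                      PMU_GR_FEATURE_MASK_ELPG_ENTRY | /* assumed */
                      PMU_GR_FEATURE_MASK_ELPG_EXIT;   /* assumed */

    /* sent via PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE */
    send_sub_feature_mask_update(pmu, enable_mask);    /* assumed */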
Bug 200392620
Bug 200296076
Change-Id: I5e1800980990c146c731537290cb7d4c07e937c3
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665767
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently allocate an nvgpu managed syncpoint in c->sync and
share it with user space. To avoid conflicts between user space and
kernel space increments, allocate a separate "client managed"
syncpoint for user space in c->user_sync.
Add a new API, nvgpu_nvhost_get_syncpt_client_managed(), to request
a client managed syncpoint from nvhost. Note that nvhost/nvgpu do
not keep track of the MAX/threshold value of this syncpoint.
Update gk20a_channel_syncpt_create() to receive a flag indicating
whether a user space syncpoint is required (see the sketch below).
Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b, since we don't want to
allocate two syncpoints per channel on that platform. For gv11b,
once we move to user space submits, support for c->sync will be
dropped, so we keep using only one syncpoint per channel.
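A hedged sketch of the flagged create path (helper names and
signatures are assumptions):

    struct gk20a_channel_sync *gk20a_channel_syncpt_create(
            struct channel_gk20a *c, bool user_managed)
    {
        u32 id;

        if (user_managed)
            /* MAX/threshold of this syncpoint is not tracked */
            id = nvgpu_nvhost_get_syncpt_client_managed(
                    c->g->nvhost_dev, c->name);          /* assumed */
        else
            id = nvgpu_nvhost_get_syncpt_host_managed(
                    c->g->nvhost_dev, c->chid, c->name); /* assumed */

        /* ... wrap id in a gk20a_channel_sync and return it ... */
    }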
Bug 200326065
Jira NVGPU-179
Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>