Compare GPU semaphores in the kernel the same way the hardware does:
a semaphore is released if its value has passed the threshold, but by
at most half of the u32 range. This makes it possible to skip zeroing
the sema values when semas are allocated, so that they are just
monotonically increasing numbers, like syncpoints.
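For illustration, a minimal standalone sketch of this comparison
(names are illustrative, not the actual nvgpu symbols):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * A value releases a waiter at a given threshold when it has moved
   * past the threshold by less than half of the u32 range. Unsigned
   * subtraction wraps modulo 2^32, so freshly allocated semaphores
   * need no zeroing: values simply increase monotonically, like
   * syncpoints.
   */
  static bool sema_released(uint32_t value, uint32_t threshold)
  {
      return (uint32_t)(value - threshold) < 0x80000000u;
  }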
Jira NVGPU-514
Change-Id: I3bae352fbacfe9690666765b9ecdeae6f0813ea1
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1652086
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created ops for the boardobjgrp methods below to support the gp10x
  & gv10x branch boardobj changes, and defined the gv10x methods with
  a _v1 postfix:
boardobjgrp_pmucmd_construct_impl
boardobjgrp_pmuset_impl
boardobjgrp_pmugetstatus_impl
is_boardobjgrp_pmucmd_id_valid
- These ops are assigned to the respective chip based on the PMU
  version.
- Modified BOARDOBJGRP_PMU_CMD_GRP_SET_CONSTRUCT &
BOARDOBJGRP_PMU_CMD_GRP_GET_STATUS_CONSTRUCT to support
gp10x & gv10x branch changes
- Updated struct boardobjgrp_pmu_cmd to include members
needed for gv10x boardobj changes
- Created "struct nv_pmu_rpc_struct_board_obj_grp_cmd"
to execute BOARD_OBJ_GRP_CMD using RPC.
- Defined method boardobjgrp_pmucmdsend_rpc() to
send BOARD_OBJ_GRP_CMD to PMU.
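For illustration, a sketch of the version-based op selection; all
names here are hypothetical stand-ins for the real nvgpu symbols:

  #include <stdbool.h>

  typedef unsigned int u32;

  struct boardobjgrp_pmu_ops_sketch {
      int (*pmucmd_construct)(void);
      int (*pmuset)(void);
      int (*pmugetstatus)(void);
  };

  static int construct_v0(void) { return 0; } /* gp10x branch path */
  static int construct_v1(void) { return 0; } /* gv10x "_v1" path  */
  static int set_v0(void)       { return 0; }
  static int set_v1(void)       { return 0; }
  static int getstatus_v0(void) { return 0; }
  static int getstatus_v1(void) { return 0; }

  /* Pick the op set matching the detected PMU ucode version. */
  static void assign_ops(struct boardobjgrp_pmu_ops_sketch *ops,
                         u32 pmu_ver, u32 gv10x_min_ver)
  {
      bool v1 = pmu_ver >= gv10x_min_ver;

      ops->pmucmd_construct = v1 ? construct_v1 : construct_v0;
      ops->pmuset           = v1 ? set_v1       : set_v0;
      ops->pmugetstatus     = v1 ? getstatus_v1 : getstatus_v0;
  }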
Change-Id: If2551bdda80e897e7b21d2966881586f3bbc7a9b
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656511
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added ops "pmu.alloc_super_surface" to create
memory space for pmu super surface
- Defined method nvgpu_pmu_sysmem_surface_alloc() to allocate pmu
  super surface memory & assigned it to "pmu.alloc_super_surface"
  for gv100
- "pmu.alloc_super_surface" set to NULL for gp106
- Memory space of size "struct nv_pmu_super_surface" is allocated
  during pmu sw init setup if "pmu.alloc_super_surface" is not NULL,
  and freed if an error occurs.
- Added ops "pmu_ver.config_pmu_cmdline_args_super_surface"
to describe PMU super surface details to PMU ucode
as part of pmu command line args command if
"pmu.alloc_super_surface" is not NULL.
- Updated pmu_cmdline_args_v6 to include member
"struct flcn_mem_desc_v0 super_surface"
- Free allocated memory for PMU super surface in
nvgpu_remove_pmu_support() method
- Added "struct nvgpu_mem super_surface_buf" to "nvgpu_pmu" struct
- Created header file "gpmu_super_surf_if.h" for the pmu super
  surface interface; added "struct nv_pmu_super_surface" to hold the
  super surface members, along with rsvd[x] dummy space to keep
  member offsets in sync with the PMU's super surface members.
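For illustration, a simplified sketch of the conditional allocation
flow (types and names approximate the real ones):

  #include <stddef.h>

  struct super_surface_ctx_sketch {
      /* chip op: set for gv100, NULL for gp106 */
      int (*alloc_super_surface)(void *g, size_t size);
      void *g;
  };

  static int pmu_sw_init_super_surface(struct super_surface_ctx_sketch *c,
                                       size_t super_surface_size)
  {
      int err = 0;

      if (c->alloc_super_surface != NULL) {
          /* size is sizeof(struct nv_pmu_super_surface) */
          err = c->alloc_super_surface(c->g, super_surface_size);
          /* on failure the caller frees what was allocated */
      }
      return err;
  }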
Change-Id: I2b28912bf4d86a8cc72884e3b023f21c73fb3503
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656571
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When adding a sema wait to a pushbuf, verify that the sema threshold
has been incremented from the original value by reading the
'incremented' field (which nvgpu_semaphore_incr() sets to nonzero)
instead of 'value'. The value could be 0 even after an increment,
because new semas are no longer reset to 0.
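For illustration, a sketch of why the flag is the right thing to
check (field names approximate the real ones):

  #include <stdbool.h>
  #include <stdint.h>

  struct sema_sketch {
      uint32_t value;    /* monotonic; not zeroed at allocation */
      bool incremented;  /* set by the increment path */
  };

  /* Checking 'value != 0' would be wrong: with never-reset semas,
   * a legitimately incremented sema can still read 0. */
  static bool sema_was_incremented(const struct sema_sketch *s)
  {
      return s->incremented;
  }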
Jira NVGPU-514
Change-Id: I295451fbc7eb9e597aea12d73074e99f74a6a899
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658100
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
/sys/devices/gpu.0/gfxp_wfi_timeout_unit:
  usec   - microseconds
  sysclk - gpu clock count
Treat gr_fe_gfxp_wfi_timeout_r as a context-switched register on
gv11b.
Set the default gfxp_wfi_timeout to 100 usec to match gp10b at
1 GHz.
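For illustration, the implied unit conversion (a sketch, not the
driver's actual helper):

  typedef unsigned int u32;
  typedef unsigned long long u64;

  /* The timeout register counts sysclk cycles, so a microsecond
   * setting scales with the GPU clock: 100 usec at 1 GHz is
   * 100000 cycles. */
  static u32 gfxp_wfi_usec_to_cycles(u32 usec, u64 sysclk_hz)
  {
      return (u32)(usec * (sysclk_hz / 1000000ULL));
  }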
bug 1888344
Signed-off-by: Kirill Artamonov <kartamonov@nvidia.com>
Change-Id: I7fa64ce6912ae861244856807543b17bd7a26bed
Reviewed-on: https://git-master.nvidia.com/r/1651517
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Check these conditions for fecs arbiter idle:
1. Wait for gr_fecs_arb_ctx_cmd to idle
2. Wait for gr_fecs_ctxsw_status_1:arb_busy to idle
Waiting only for condition 1 guarantees that the command is
dispatched, but not that the work completes.
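For illustration, a standalone sketch of the two-stage wait; the
register file and busy masks are stand-ins for the generated hw
headers (gr_fecs_arb_ctx_cmd_r(), gr_fecs_ctxsw_status_1_r()):

  #include <stdbool.h>
  #include <stdint.h>

  enum { ARB_CTX_CMD = 0, CTXSW_STATUS_1 = 1 };
  static volatile uint32_t regs[2];   /* fake register file */
  #define CMD_BUSY 0x1u               /* hypothetical busy bits */
  #define ARB_BUSY 0x1u

  static bool poll_clear(int reg, uint32_t mask, int retries)
  {
      while (retries-- > 0) {
          if ((regs[reg] & mask) == 0)
              return true;
      }
      return false;
  }

  static bool wait_fecs_arb_idle(void)
  {
      /* 1. command register idle: the command was dispatched */
      if (!poll_clear(ARB_CTX_CMD, CMD_BUSY, 1000))
          return false;
      /* 2. arb_busy clear: the dispatched work has completed */
      return poll_clear(CTXSW_STATUS_1, ARB_BUSY, 1000);
  }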
Bug 2056266
Change-Id: I3fcbdb9cda91fee0ff345a24bf23ba7dabf268d0
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667550
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU instance block layout was not getting populated with subctx
PDB info, because max_subctx_count was 0x0: it was only initialized
after PMU instance block initialization had completed.
Therefore, initialize max_subctx_count before using it.
For Volta, FECS can bind itself to the PMU instance only if the
subctx PDB info is populated.
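For illustration, the ordering being fixed (all names are stand-ins;
only the sequence matters):

  struct pmu_sketch {
      unsigned int max_subctx_count;
  };

  static unsigned int query_max_subctx_count(void) { return 64; }
  static void write_subctx_pdb(unsigned int veid) { (void)veid; }

  static void init_pmu_inst_block(struct pmu_sketch *p)
  {
      unsigned int i;

      /* with max_subctx_count still 0 this loop writes nothing,
       * which is exactly the bug described above */
      for (i = 0; i < p->max_subctx_count; i++)
          write_subctx_pdb(i);
  }

  static void pmu_setup_sketch(struct pmu_sketch *p)
  {
      /* must happen first: the inst block init consumes it */
      p->max_subctx_count = query_max_subctx_count();
      init_pmu_inst_block(p);
  }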
Bug 2051863
Bug 200392620
Change-Id: Id4fc26502e189c15cb57cb36cc09387dad773dc5
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1666585
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
PMU ucode records the list of features supported on a particular
chip as a support mask sent via PMU_PG_PARAM_CMD_GR_INIT_PARAM.
It then enables a selected subset of those features through an
enable mask sent via the
PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE cmd.
Until now only the ELPG state machine mask was enabled, so only the
ELPG state machine was executed while other crucial steps in the
ELPG entry/exit sequence were skipped.
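For illustration, the support/enable mask relationship (bit
assignments are hypothetical; only the masking logic matters):

  #include <stdint.h>

  #define PG_FEATURE_ELPG_SM   (1u << 0)  /* ELPG state machine */
  #define PG_FEATURE_ENTRY_SEQ (1u << 1)  /* hypothetical */
  #define PG_FEATURE_EXIT_SEQ  (1u << 2)  /* hypothetical */

  /* The PMU runs a feature only if it is in BOTH the support mask
   * (GR_INIT_PARAM) and the enable mask (SUB_FEATURE_MASK_UPDATE).
   * Enabling only the state-machine bit therefore skipped the
   * other entry/exit steps. */
  static uint32_t effective_pg_features(uint32_t support_mask,
                                        uint32_t enable_mask)
  {
      return support_mask & enable_mask;
  }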
Bug 200392620.
Bug 200296076.
Change-Id: I5e1800980990c146c731537290cb7d4c07e937c3
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665767
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
QNX is going to reuse the 'struct gk20a_event_id_data' data
structure for its event handling. Unlike Linux, which relies on fd
polling, QNX needs the pid to find out which process to deliver the
event to.
This patch adds the requisite field.
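For illustration, a reduced sketch of the extended structure; the
surrounding fields are illustrative, only 'pid' is the addition:

  #include <sys/types.h>  /* pid_t */

  struct gk20a_event_id_data_sketch {
      int event_id;
      int event_posted;
      pid_t pid;  /* new: lets QNX route the event to a process */
  };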
Change-Id: I466e9cfaacb921c6e1c41bd92c66194b81fc6774
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1657994
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In case deferred_reset_pending is set in gk20a_fifo_handle_mmu_fault()
and in gv11b_fifo_teardown_ch_tsg(), we skip resetting the engines
and skip setting the error notifier.
We then call gk20a_channel_abort()/gk20a_fifo_abort_tsg(), which
aborts the channels and resets the syncpoint values to release all
the waiters.
But since we never set the error notifier, this could lead the User
to assume a successful submission without any error.
To fix this, disable the channel/TSG in case deferred_reset_pending
is set and skip the calls to
gk20a_channel_abort()/gk20a_fifo_abort_tsg().
Note that we finally abort the channel when the channel is closed.
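For illustration, the resulting control flow (names approximate;
only the branch structure matters):

  #include <stdbool.h>

  static void disable_ch_tsg(void)     {}
  static void set_error_notifier(void) {}
  static void reset_engines(void)      {}
  static void abort_ch_tsg(void)       {}

  static void teardown_sketch(bool deferred_reset_pending)
  {
      if (deferred_reset_pending) {
          disable_ch_tsg();  /* no engine reset, no abort here */
          return;            /* the abort happens at close time */
      }
      set_error_notifier();
      reset_engines();
      abort_ch_tsg();        /* releases syncpoint waiters */
  }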
Bug 200363077
Change-Id: Ia48ca369701c14d1913d8f7b66ed466b7b840224
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1664319
(cherry picked from commit ac40d082e8)
Reviewed-on: https://git-master.nvidia.com/r/1666445
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently allocate an nvgpu-managed syncpoint in c->sync and share
that with user space.
But to avoid conflicts between user space and kernel space
increments, allocate a separate "client managed" syncpoint for user
space in c->user_sync.
Add a new API nvgpu_nvhost_get_syncpt_client_managed() to request a
client-managed syncpoint from nvhost.
Note that nvhost/nvgpu do not keep track of the MAX/threshold value
of this syncpoint.
Update gk20a_channel_syncpt_create() to receive a flag indicating
whether a user space syncpoint is required or not.
Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to
allocate two syncpoints per channel on that platform.
For gv11b, support for c->sync will be dropped once we move to user
space submits, so we keep using only one syncpoint per channel.
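For illustration, a sketch of the resulting per-channel setup; the
'user_syncpt' flag mirrors the one passed to
gk20a_channel_syncpt_create(), other names are stand-ins:

  #include <stdbool.h>
  #include <stddef.h>

  struct syncpt_sketch { int id; };

  struct channel_sketch {
      struct syncpt_sketch *sync;       /* kernel-managed */
      struct syncpt_sketch *user_sync;  /* client-managed; no MAX
                                         * tracking in nvhost/nvgpu */
  };

  static struct syncpt_sketch *syncpt_create(bool client_managed)
  {
      static struct syncpt_sketch dummy[2];  /* stub allocation */
      return &dummy[client_managed ? 1 : 0];
  }

  static int channel_sync_setup(struct channel_sketch *c,
                                bool user_syncpt)
  {
      c->sync = syncpt_create(false);
      if (c->sync == NULL)
          return -1;
      if (user_syncpt) {  /* NVGPU_SUPPORT_USER_SYNCPOINT set */
          c->user_sync = syncpt_create(true);
          if (c->user_sync == NULL)
              return -1;
      }
      return 0;
  }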
Bug 200326065
Jira NVGPU-179
Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The operations in struct nvgpu_sgt_ops have a scatter-gather list (sgl)
argument which is a void pointer. Change the type signatures to take
struct nvgpu_sgl * which is an opaque marker type that makes it more
difficult to pass around wrong arguments, as anything goes for void *.
Explicit types also make the code self-documenting.
For some added safety, some explicit type casts are now required in
implementors of the nvgpu_sgt_ops interface when converting between the
general nvgpu_sgl type and implementation-specific types. This is not
purely a bad thing, because the casts show clearly where type
conversions are happening.
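For illustration, a reduced sketch of the opaque-marker pattern (the
ops shown are a subset, with simplified signatures):

  /* Declared but never defined: a struct nvgpu_sgl * cannot be
   * mixed with unrelated pointers the way a void * can. */
  struct nvgpu_sgl;

  struct nvgpu_sgt_ops_sketch {
      struct nvgpu_sgl *(*sgl_next)(struct nvgpu_sgl *sgl);
      unsigned long long (*sgl_length)(struct nvgpu_sgl *sgl);
  };

  /* An implementation casts explicitly at its boundary: */
  struct impl_sgl {
      struct impl_sgl *next;
      unsigned long long length;
  };

  static struct nvgpu_sgl *impl_sgl_next(struct nvgpu_sgl *sgl)
  {
      struct impl_sgl *s = (struct impl_sgl *)sgl;

      return (struct nvgpu_sgl *)s->next;
  }

  static unsigned long long impl_sgl_length(struct nvgpu_sgl *sgl)
  {
      return ((struct impl_sgl *)sgl)->length;
  }

  static const struct nvgpu_sgt_ops_sketch impl_ops = {
      .sgl_next   = impl_sgl_next,
      .sgl_length = impl_sgl_length,
  };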
Jira NVGPU-30
Jira NVGPU-52
Jira NVGPU-305
Change-Id: Ic64eed6d2d39ca5786e62b172ddb7133af16817a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643555
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.
Bug 2070609
Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
For some reason the GPU does not like the mappings created by the
DMA API for coherent sysmem buffers, but a plain vmap() does seem
to work. To work around this, when we are using coherent sysmem,
force the NO_KERNEL_MAPPING flag on and then create the CPU mapping
with vmap() in the nvgpu DMA API wrapper. The rest of the driver
will be none the wiser but will work as expected.
This problem is not understood yet, but it is being tracked in bug
2040115. Once the bug is understood, this WAR should either be
confirmed as necessary or reverted in favor of an appropriate fix.
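For illustration, a reduced sketch of the WAR in the wrapper (the
nvgpu_mem details are simplified; vmap() is the stock kernel API):

  #include <linux/vmalloc.h>  /* vmap() */
  #include <linux/mm.h>       /* PAGE_SHIFT, PAGE_KERNEL */
  #include <linux/errno.h>

  struct mem_sketch {
      struct page **pages;  /* from the DMA API allocation */
      void *cpu_va;
  };

  /* With coherent sysmem, NO_KERNEL_MAPPING is forced on, so the
   * DMA API hands back pages without a kernel mapping and the
   * wrapper builds the CPU mapping itself with vmap(). */
  static int map_coherent_sysmem_war(struct mem_sketch *mem,
                                     size_t size)
  {
      mem->cpu_va = vmap(mem->pages, size >> PAGE_SHIFT,
                         0, PAGE_KERNEL);
      if (mem->cpu_va == NULL)
          return -ENOMEM;
      return 0;
  }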
Bug 2040115
JIRA EVLR-2333
Change-Id: Idae7a0c92441f0309df572ac18697af49bb6ff2b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1657568
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When using a coherent DMA API we must make sure to program any
aperture fields with the coherent aperture setting. To do this,
the nvgpu_aperture_mask() function was modified to take a third
aperture mask argument, the coherent setting, so that code can use
this function to generate coherent aperture settings.
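For illustration, the shape of the three-mask helper (a sketch that
mirrors, but is not exactly, the nvgpu function):

  typedef unsigned int u32;

  enum aperture_sketch {
      AP_INVALID = 0,
      AP_SYSMEM,
      AP_SYSMEM_COH,  /* the coherent sysmem case */
      AP_VIDMEM,
  };

  static u32 aperture_mask_sketch(enum aperture_sketch ap,
                                  u32 sysmem_mask,
                                  u32 sysmem_coh_mask, /* new arg */
                                  u32 vidmem_mask)
  {
      switch (ap) {
      case AP_SYSMEM:     return sysmem_mask;
      case AP_SYSMEM_COH: return sysmem_coh_mask;
      case AP_VIDMEM:     return vidmem_mask;
      default:            return 0;
      }
  }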
The aperture choice is somewhat tricky: the default version of this
function uses the state of the DMA API to determine which aperture
to use for SYSMEM internally, either coherent or non-coherent. Thus
a kernel user need only specify the normal nvgpu_mem struct and the
correct mask is chosen. Since many nvgpu_mem structs are not created
directly from the DMA API wrapper, it's easier to translate SYSMEM
to SYSMEM_COH after creation.
However, the GMMU mapping code will encounter buffers from userspace
with different coherency attributes than the DMA API's. Thus
__nvgpu_aperture_mask() respects the aperture setting passed in
regardless of the DMA API state. This aperture setting is pulled
from NVGPU_VM_MAP_IO_COHERENT, since that flag is either passed in
from userspace or set by the kernel when using coherent DMA. The
aperture field in attrs is upgraded to coherent if this flag is set.
This change also adds a coherent sysmem mask everywhere it can.
There are a couple of places that do not have a coherent register
field defined yet; these eventually need to be defined and added.
Lastly, the aperture mask code has been moved from the Linux vm.c
code to the general vm.c code, since this function has no Linux
dependencies.
Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
JIRA EVLR-2333
Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Miscellaneous fixes for the sched code:
1. Make sure get_addr() on an SGL respects the use phys flag since
the runlist needs physical addresses when NVLINK is in use.
2. Ensure the runlist is contiguous. Since the runlist memory is
not virtually addressed the buffer must be physically contiguous.
3. Use all 64 bits of the runlist address in the runlist base addr
register (and related fields).
JIRA EVLR-2333
Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654667
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch does a couple of things. First, it renames
NVGPU_DMA_COHERENT to NVGPU_USE_COHERENT_SYSMEM, since the former
is somewhat ambiguous in meaning. The latter clearly states what
must happen: nvgpu needs to treat sysmem as coherent. The flag does
simply follow the state of the DMA API, but there's no reason to
expect a casual reader of the code to know that when the DMA API is
coherent, nvgpu must treat sysmem as coherent.
One thing to note, though: when the dGPU is using PCIe and the PCIe
controller is coherent, it doesn't actually matter what we do.
However, we use this flag to determine how to make CPU mappings in
nvgpu_mem_begin(), so the flag is still relevant for the CPU side
of things.
Next this patch adds a check in the core kernel GMMU mapping
routine to make sure that when the NVGPU_USE_COHERENT_SYSMEM flag
is set that the IO coherent flag is passed into the mapping code.
This is the primary fix that made NVLINK start working.
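For illustration, the essence of that check (flag names are
approximations of the real nvgpu ones):

  typedef unsigned int u32;

  #define MAP_IO_COHERENT_SKETCH (1u << 0)  /* hypothetical bit */

  /* If the device treats sysmem as coherent, make sure the
   * IO-coherent attribute travels with every GMMU map request. */
  static u32 gmmu_map_flags_fixup(int use_coherent_sysmem, u32 flags)
  {
      if (use_coherent_sysmem)
          flags |= MAP_IO_COHERENT_SKETCH;
      return flags;
  }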
Finally, the USE_COHERENT_SYSMEM and NVGPU_SUPPORT_IO_COHERENCE
flags were set both for PCIe and for iGPUs. The iGPU must also
correctly match its CPU mappings and GPU mappings for proper
operation.
JIRA EVLR-2333
Change-Id: Icd5f07167c9f48a0a2e8493e34c9cc6238e56907
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654519
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>