The safety build has a temporal requirement that, on FECS power up, the
FECS must go through all of its initialization methods.
The init_golden_image callback is called from the devctl/ioctl path and
triggers FECS methods 10 and 11. Since these methods are part of APP
init, they are not called during resume, which causes a quiesce on the
safety build.
To fix this issue, call the callback from the poweron API instead.
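A minimal sketch of the fix, assuming the callback is reachable through
the usual nvgpu HAL as g->ops.gr.init_golden_image; the surrounding
function name is illustrative, not the actual poweron entry point:

    /* Illustrative poweron path: invoke the golden-image callback here
     * so FECS methods 10 and 11 run on every power up (including
     * resume), not only when the devctl/ioctl path is exercised. */
    static int nvgpu_poweron_golden_image(struct gk20a *g)
    {
        int err;

        /* Assumed HAL hook; previously driven from devctl/ioctl. */
        err = g->ops.gr.init_golden_image(g);
        if (err != 0) {
            nvgpu_err(g, "golden image init failed: %d", err);
        }
        return err;
    }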
Bug 4082813
Bug 4037712
Change-Id: I2d27203d3cb4326ae7d8bd6025693fd61d5237df
Signed-off-by: prsethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2893218
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- During nvgpu_poweron, the PERFMON_INIT RPC and the
ACR_INIT_WPR_REGION command are sent to the PMU from two different threads.
- Perfmon uses the RPC method; ACR uses the CMD-MSG queue.
- Since the pmu thread and the poweron thread run in parallel, the
pmu sequences acquired by the two can end up with the same seq_id.
- For the perfmon RPC, nvgpu_pmu_seq_free_release() is called,
followed by nvgpu_pmu_seq_release().
- This clears the sequence already acquired for the next command.
- To resolve this, instead of calling nvgpu_pmu_seq_free_release(),
just free the rpc-payload after getting the ack for perfmon and
then release the sequence (see the sketch below).
- This ensures that the ACR cmd sent just after the perfmon RPC does
not get the same seq_id and that its sequence is not cleared.
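A sketch of the corrected ack handling; the function shape and the
rpc_payload parameter are assumptions, not the exact nvgpu signatures:

    /* On the perfmon FW ack: free only the RPC payload, then release
     * the sequence. nvgpu_pmu_seq_free_release() is avoided because it
     * could clear a sequence the parallel ACR thread has already
     * re-acquired under the same seq_id. */
    static void perfmon_rpc_ack_handler(struct gk20a *g,
                                        struct nvgpu_pmu *pmu,
                                        struct pmu_sequence *seq,
                                        void *rpc_payload)
    {
        nvgpu_kfree(g, rpc_payload);                   /* payload only */
        nvgpu_pmu_seq_release(g, pmu->sequences, seq); /* then the seq */
    }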
Bug 4074021
Change-Id: Id9972cb719458062d8c7d9e226a25599026c052b
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2889840
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
- Add definitions of the gfx/compute classes and methods that are
generated from the hw/sw class header files. Use these definitions
instead of the hard-coded ones so that mismatches may be caught by
the HAL checker.
- Abstract the sw method handling out of gr.intr.handle_sw_method into
gr.intr.handle_gfx_sw_method and gr.intr.handle_compute_sw_method, and
have gr.intr.handle_sw_method call these two new HALs (see the sketch
below).
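A sketch of the new dispatch, using an assumed generated class define
and assumed HAL signatures:

    /* Route a trapped SW method by class: compute classes go to the
     * compute handler, everything else to the gfx handler. The class
     * define comes from the generated headers rather than a hard-coded
     * constant, so the HAL checker can catch mismatches. */
    static int gr_intr_handle_sw_method(struct gk20a *g, u32 addr,
                                        u32 class_num, u32 data)
    {
        if (class_num == AMPERE_COMPUTE_B) { /* assumed generated define */
            return g->ops.gr.intr.handle_compute_sw_method(g, addr,
                                                           class_num,
                                                           data);
        }
        return g->ops.gr.intr.handle_gfx_sw_method(g, addr, class_num,
                                                   data);
    }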
Jira NVGPU-9217
Change-Id: Ia30fcba6174878d9b5b7b5910c564c879a702ddc
Signed-off-by: Austin Tajiri <atajiri@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2885547
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- On older chips, the PMU uses the CMD-MSG queue method to
communicate with NvGPU.
- From Turing onwards, the PMU uses the RPC method for this.
- During poweroff, we release the pmu_sequence and reset the
members of the structure.
- For chips that use RPC, we also need to free the payload
before resetting the members.
- Add a pmu_seq_cleanup HAL for this (see the sketch below).
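A sketch of the new HAL for RPC-based chips; the field and function
names are assumptions:

    /* pmu_seq_cleanup HAL, RPC flavor (Turing+): free the RPC payload
     * before the pmu_sequence members are reset. Chips using the
     * CMD-MSG queue have no payload to free here. */
    static void tu104_pmu_seq_cleanup(struct gk20a *g,
                                      struct pmu_sequence *seq)
    {
        nvgpu_kfree(g, seq->rpc_payload); /* assumed field */
        seq->rpc_payload = NULL;
        /* reset of the remaining pmu_sequence members follows */
    }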
Bug 4019694
Bug 4059157
Change-Id: Ieb474fe4ed81f54d78480214cde53b51d45652c6
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2882267
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- During driver unload, shutdown, or the RG path, PMU sequences have
to be cleaned up as part of pmu destroy to free the payload memory
and the allocation info stored in pmu_sequence.
- While doing so, there can be a race condition with the pmu_isr or
nvgpu_pmu_rpc_execute paths, which wait for the fw ack.
- This race condition can free the payload memory before
nvgpu_pmu_sequences_cleanup() does.
- That leads to memory corruption or a double free when the cleanup
code tries to free the payload memory again.
- To resolve this, add a new function nvgpu_pmu_seq_free_release()
that checks for seq->id in the pmu seq tbl before freeing the
memory and the other info in pmu_sequence (see the sketch below).
- Use nvgpu_pmu_seq_free_release() in non-blocking RPC calls,
and also when the fw ack fails or the driver is dying.
- For blocking calls, synchronize the freeing of the rpc payload
memory using a new boolean, seq_free_status.
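A sketch of the check, assuming the sequence table is a bitmap indexed
by seq->id and that the caller holds the sequences lock:

    /* nvgpu_pmu_seq_free_release() in sketch form: free the payload
     * only while seq->id is still marked busy in the PMU sequence
     * table, so the ISR/RPC-ack path and nvgpu_pmu_sequences_cleanup()
     * cannot both free the same payload (double free / corruption). */
    static void pmu_seq_free_release(struct gk20a *g,
                                     struct pmu_sequences *seqs,
                                     struct pmu_sequence *seq)
    {
        if (!nvgpu_test_bit(seq->id, seqs->pmu_seq_tbl)) {
            return; /* already released by the other path */
        }
        nvgpu_kfree(g, seq->rpc_payload); /* assumed field */
        seq->rpc_payload = NULL;
        nvgpu_pmu_seq_release(g, seqs, seq);
    }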
Bug 4019694
Bug 4059157
Change-Id: Id45a6914a2d383a654539a87861c471a77fb6850
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2882210
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
nvgpu must be built as a module separate from the OOT modules because
its source will continue to live in a different repository.
The nvgpu module depends on the headers and symvers from the
core kernel and the OOT modules.
Add the OOT module header paths when compiling nvgpu as an
OOT module.
Bug 4038415
Change-Id: I0f42c8e75ca63784c9d9ba3624e5ed0141e1df77
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2880466
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
On deleting a subcontext, the tsg->subctx_vms[] entry is set to NULL
according to the subcontext id. For async subcontexts, the index was
derived from the tsg->async_veids bitmask. However, subctx_vms[] is an
array shared by all subcontexts, so the index should be the subcontext
id, i.e. the VEID.
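The fix in sketch form; the function and member names here are
assumptions:

    /* Clear the VM entry by VEID (the subcontext id). The old code
     * used the bit position within tsg->async_veids for async
     * subcontexts, which indexes a different space than subctx_vms[]. */
    static void tsg_clear_subctx_vm(struct nvgpu_tsg *tsg, u32 veid)
    {
        tsg->subctx_vms[veid] = NULL; /* index by VEID, not async index */
    }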
Also update the description of nvgpu_tsg_validate_ch_subctx_vm(), as
some of its functionality has moved to
nvgpu_tsg_create_sync_subcontext_internal().
Bug 3979886
Change-Id: Ic290fb175b34988c6ffabe9c9dc4ec124d2c70af
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2879025
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
All drivers that use dma-bufs have been moved to the updated locking
specification, wherein the dma-buf reservation must be locked while
accessing the dma-buf internal data, and the previous lock has been
removed. So, from now on, lock the resv object while updating the
dma-buf private data used for compression and buffer metadata (see the
sketch below).
With this, we can enable compression for all kernel versions; it had
earlier been disabled for v6.2+ kernels.
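A minimal sketch of the locking, using the upstream dma_resv_lock() /
dma_resv_unlock() API; the private-data type and field are
illustrative, not the actual nvgpu structures:

    /* Update dma-buf private data (compression/metadata) while holding
     * the reservation lock, per the v6.2+ dma-buf locking rules. */
    static int update_dmabuf_priv(struct dma_buf *dmabuf,
                                  struct nvgpu_dmabuf_priv *priv)
    {
        int err = dma_resv_lock(dmabuf->resv, NULL);

        if (err != 0) {
            return err;
        }
        priv->metadata_valid = true; /* illustrative field update */
        dma_resv_unlock(dmabuf->resv);
        return 0;
    }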
Bug 3974855
Bug 3995618
Change-Id: Iece3ab57912d0420d4bc5c07d2c0d2e03ff19292
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2877633
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
If a process crashes or its channels are closed forcibly, userspace
may never release the VEID. In that case, creating further
subcontexts may not be possible.
Hence, when a channel is closed forcibly (Linux), release the VEID on
closure of the last channel in the subcontext (see the sketch below).
With this, a normal channel close on Linux will not release the VEID;
on QNX, however, it will. The delete-subcontext devctl call on QNX
therefore becomes a nop in the normal case, so the error print and
error return are changed to success.
Also add a check in the subcontext delete ioctl fn that all channels
are unbound before deleting the subcontext. This ensures that
channels don't refer to a dangling subcontext pointer.
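A sketch of the forced-close path; the helper and member names are
assumptions:

    /* On forced closure of the last channel bound to a subcontext,
     * release the VEID so a crashed process cannot leak it and block
     * creation of further subcontexts. */
    static void tsg_channel_force_close(struct nvgpu_tsg *tsg,
                                        struct nvgpu_tsg_subctx *subctx)
    {
        if (nvgpu_list_empty(&subctx->ch_list)) { /* assumed member */
            tsg_release_veid(tsg, subctx);        /* hypothetical helper */
        }
    }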
Bug 3979886
Change-Id: I434944b01740720011abce3664394ae8cb0d4e2e
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2858060
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
On ga10b+ platforms, more VM space is needed to map various buffers
into the bar2 vm. An engine method buffer is mapped for each pbdma,
and for the maximum number of supported TSGs this requires more than
32MB of space. The fault buffer and vab buffer space requirements
must also be accounted for (see the sizing sketch below).
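A sizing sketch; the counts and buffer size below are illustrative
placeholders, not the actual ga10b values:

    /* Hypothetical sizing: each TSG maps one engine method buffer per
     * pbdma, so the method buffers alone can exceed a 32MB aperture. */
    #define NUM_PBDMA        3U               /* placeholder */
    #define MAX_TSGS         512U             /* placeholder */
    #define METHOD_BUF_SIZE  (24U * 1024U)    /* placeholder, bytes */

    static unsigned long bar2_method_buf_space(void)
    {
        /* 3 * 512 * 24 KiB = 36 MiB, already past 32MB, before the
         * fault buffer and vab buffer space is added on top. */
        return (unsigned long)NUM_PBDMA * MAX_TSGS * METHOD_BUF_SIZE;
    }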
Bug 3958581
Change-Id: I9ee87119f762352ee12859b71c08a5f75b3554e0
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2872811
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch adds nvenc support for TU104
- Fetch engine/dev info for nvenc
- Falcon NS boot (fw loading) support
- Engine context creation for nvenc
- Skip golden image for multimedia engines
- Avoid subctx for nvenc as it is a non-VEID engine (see the sketch
below)
- Job submission/flow changes for nvenc
- Code refactoring to scale up the support for other multimedia
engines in the future.
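A sketch of the engine-type gating mentioned above; the helper and
enum value are illustrative:

    /* Multimedia engines such as nvenc are non-VEID engines: skip
     * subcontext setup and golden-image initialization for them. */
    static bool engine_is_multimedia(u32 engine_enum)
    {
        return engine_enum == NVGPU_ENGINE_NVENC; /* assumed enum value */
    }

    /* Usage: if (engine_is_multimedia(e)) skip subctx + golden image. */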
Bug 3763551
Change-Id: I03d4e731ebcef456bcc5ce157f3aa39883270dc0
Signed-off-by: Santosh BS <santoshb@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2859416
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
L4T does not support virtualization currently and so we should be able
to build NVGPU without virtualization support. This avoids having to
build many virtualization drivers for Tegra.
The build flag CONFIG_TEGRA_VIRTUALIZATION was added for building
out-of-tree drivers so they can select whether virtualization is
enabled. It is enabled by default; however, if it is not set, the
driver should still build. Currently, NVGPU does not build when
CONFIG_TEGRA_VIRTUALIZATION is not set, because
CONFIG_TEGRA_GR_VIRTUALIZATION is now always enabled for NVGPU. Fix
this by wrapping CONFIG_TEGRA_GR_VIRTUALIZATION with
CONFIG_TEGRA_VIRTUALIZATION (see the sketch below).
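The shape of the fix, expressed as a preprocessor guard; the actual
change lives in the nvgpu build files rather than a C header:

    /* Only enable GR virtualization when platform virtualization
     * support is itself enabled, so the driver still builds when
     * CONFIG_TEGRA_VIRTUALIZATION is not set. */
    #if defined(CONFIG_TEGRA_VIRTUALIZATION)
    #define CONFIG_TEGRA_GR_VIRTUALIZATION 1
    #endif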
Jira GVSCI-16046
Change-Id: I5448ad73d4d4e3e151ef216a7fcf0469890fd5ec
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2868502
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>