- Add definitions of the gfx/compute classes and methods that are
generated from the hw/sw class header files. Use these definitions
instead of the hard-coded ones so that mismatches may be caught by
the HAL checker.
- Abstract out the sw method handling functionality of
gr.intr.handle_sw_method into gr.intr.handle_gfx_sw_method and
gr.intr.handle_compute_sw_method and have gr.intr.handle_sw_method
call these two new HALs.
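A minimal sketch of the intended dispatch, assuming a HAL-style
function table; the signatures, class checks and names below are
illustrative, not the actual nvgpu code:

    /* Illustrative sketch only: assumed types and signatures. */
    #include <errno.h>
    #include <stdint.h>

    typedef uint32_t u32;
    struct gk20a;                       /* opaque for the sketch */

    /* Hypothetical per-chip HAL table with the two new entry points. */
    struct gr_intr_hal {
            int (*handle_gfx_sw_method)(struct gk20a *g, u32 class_num,
                                        u32 offset, u32 data);
            int (*handle_compute_sw_method)(struct gk20a *g, u32 class_num,
                                            u32 offset, u32 data);
            int (*is_valid_gfx_class)(struct gk20a *g, u32 class_num);
            int (*is_valid_compute_class)(struct gk20a *g, u32 class_num);
    };

    /* gr.intr.handle_sw_method becomes a thin dispatcher over the two
     * new HALs; the class checks come from the generated definitions. */
    static int handle_sw_method(struct gk20a *g,
                                const struct gr_intr_hal *hal,
                                u32 class_num, u32 offset, u32 data)
    {
            if (hal->is_valid_gfx_class(g, class_num) != 0)
                    return hal->handle_gfx_sw_method(g, class_num,
                                                     offset, data);
            if (hal->is_valid_compute_class(g, class_num) != 0)
                    return hal->handle_compute_sw_method(g, class_num,
                                                         offset, data);
            return -EINVAL;
    }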
Jira NVGPU-9217
Change-Id: Ia30fcba6174878d9b5b7b5910c564c879a702ddc
Signed-off-by: Austin Tajiri <atajiri@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2885547
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- On older chips, PMU uses CMD-MSG queue method to
communicate with NvGPU.
- From Turing onwards, PMU uses RPC method for this.
- During poweroff, we release pmu_sequence and reset the
members of the structure.
- For chips that use RPC, we need to free the payload as well
and then reset the members.
- Add a pmu_seq_cleanup HAL for this.
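A rough sketch of what the per-chip cleanup hook could look like; the
structure fields and the use of free() in place of the driver
allocator are assumptions for illustration only:

    /* Sketch only: hypothetical types; free() stands in for the
     * driver's allocator. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    struct pmu_sequence_sketch {
            void *rpc_payload;          /* used only on RPC-based chips */
            size_t payload_size;
            bool in_use;
    };

    /* Older chips (CMD-MSG queues): only reset the members. */
    static void pmu_seq_cleanup_cmd_msg(struct pmu_sequence_sketch *seq)
    {
            seq->in_use = false;
            seq->payload_size = 0;
    }

    /* Turing+ (RPC): free the payload first, then reset the members.
     * The new pmu_seq_cleanup HAL selects the right variant per chip. */
    static void pmu_seq_cleanup_rpc(struct pmu_sequence_sketch *seq)
    {
            free(seq->rpc_payload);
            seq->rpc_payload = NULL;
            pmu_seq_cleanup_cmd_msg(seq);
    }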
Bug 4019694
Bug 4059157
Change-Id: Ieb474fe4ed81f54d78480214cde53b51d45652c6
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2882267
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- During driver unload, shutdown or the RG path, as part of
pmu destroy, pmu sequences have to be cleaned up to
free the payload memory and allocation info which is stored
as part of pmu_sequence.
- While doing so there can be a race condition with the pmu_isr
or nvgpu_pmu_rpc_execute path, which waits for the fw ack.
- This race can result in the payload memory being freed
before nvgpu_pmu_sequences_cleanup() frees it.
- This can lead to memory corruption or a double free issue
when the cleanup code tries to free the payload memory again.
- To resolve this, add a new function nvgpu_pmu_seq_free_release()
which checks for seq->id in the PMU sequence table before freeing
the memory and other info from pmu_sequence.
- Use nvgpu_pmu_seq_free_release() in non-blocking RPC calls
and also when the fw ack fails or the driver is dying.
- For blocking calls, synchronize freeing of the RPC payload memory
by using a new boolean, seq_free_status.
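A simplified stand-in for nvgpu_pmu_seq_free_release() to illustrate
the check-before-free idea; the lock, table layout and field names are
placeholders, not the actual implementation:

    /* Sketch only: a pthread mutex stands in for the driver lock and a
     * plain byte array for the pmu sequence table. */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdlib.h>

    #define MAX_SEQUENCES 256           /* placeholder */

    struct pmu_sequences_sketch {
            pthread_mutex_t lock;
            unsigned char tbl[MAX_SEQUENCES];   /* 1 = still allocated */
    };

    struct pmu_sequence_sketch {
            unsigned int id;
            void *rpc_payload;
            bool seq_free_status;   /* tells blocking callers who freed */
    };

    /* Free the payload only if the sequence is still marked allocated
     * in the table, so the ISR/RPC path and the cleanup path can never
     * both free it (no double free, no use after free). */
    static void seq_free_release(struct pmu_sequences_sketch *seqs,
                                 struct pmu_sequence_sketch *seq)
    {
            pthread_mutex_lock(&seqs->lock);
            if (seq->id < MAX_SEQUENCES && seqs->tbl[seq->id] != 0) {
                    free(seq->rpc_payload);
                    seq->rpc_payload = NULL;
                    seqs->tbl[seq->id] = 0;     /* release the seq id */
                    seq->seq_free_status = true;
            }
            pthread_mutex_unlock(&seqs->lock);
    }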
Bug 4019694
Bug 4059157
Change-Id: Id45a6914a2d383a654539a87861c471a77fb6850
Signed-off-by: Divya <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2882210
Reviewed-by: svcacv <svcacv@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
It is required to build nvgpu as a separate module from the OOT
modules because its source will continue to be in a different
repository. The nvgpu module will depend on the headers and symvers
from the core kernel and the OOT modules.
Add the OOT module header paths when compiling nvgpu as an
OOT module.
Bug 4038415
Change-Id: I0f42c8e75ca63784c9d9ba3624e5ed0141e1df77
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2880466
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
When a subcontext is deleted, the tsg->subctx_vms[] entry is set to
NULL based on the subcontext id. For async subcontexts the index was
derived from the tsg->async_veids bitmask. However, subctx_vms[] is an
array shared by all subcontexts, hence the index should be the
subcontext id, i.e. the VEID.
Also update the description of nvgpu_tsg_validate_ch_subctx_vm()
since some of its functionality has moved to
nvgpu_tsg_create_sync_subcontext_internal().
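A minimal before/after illustration of the index change; the array
shape and names are assumed for the sketch:

    /* Sketch only: hypothetical structures. */
    #include <stddef.h>

    #define MAX_SUBCTX_COUNT 64         /* placeholder */

    struct vm_gk20a;                    /* opaque for the sketch */

    struct tsg_sketch {
            /* shared by all subcontexts, indexed by VEID */
            struct vm_gk20a *subctx_vms[MAX_SUBCTX_COUNT];
    };

    static void clear_subctx_vm(struct tsg_sketch *tsg, unsigned int veid)
    {
            /* Before: async subcontexts used an index derived from the
             * async_veids bitmask, which pointed at the wrong slot.
             * After: always index by the subcontext id (VEID). */
            if (veid < MAX_SUBCTX_COUNT)
                    tsg->subctx_vms[veid] = NULL;
    }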
Bug 3979886
Change-Id: Ic290fb175b34988c6ffabe9c9dc4ec124d2c70af
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2879025
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
All upstream drivers that use dma-bufs have been moved to the updated
locking specification wherein the dma-buf reservation lock must be
held while accessing the dma-buf internal data, and the old dma-buf
internal lock has been removed. So, from now on, lock the resv object
while updating the dma-buf private data used for compression and
buffer metadata.
With this, we can enable compression for all kernel versions; it was
disabled earlier for v6.2+ kernels.
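For reference, the locking pattern is the standard reservation lock
around the private-data update; dma_resv_lock()/dma_resv_unlock() are
the upstream kernel API, while the private structure and fields below
are hypothetical:

    /* Sketch only: nvgpu's actual private data layout is not shown. */
    #include <linux/dma-buf.h>
    #include <linux/dma-resv.h>
    #include <linux/types.h>

    struct dmabuf_priv_sketch {
            u32 comptags;
            u64 metadata;
    };

    static void update_dmabuf_priv(struct dma_buf *dmabuf,
                                   struct dmabuf_priv_sketch *priv,
                                   u32 comptags, u64 metadata)
    {
            /* Per the updated locking rules, hold the reservation lock
             * while touching dma-buf private data. */
            dma_resv_lock(dmabuf->resv, NULL);
            priv->comptags = comptags;
            priv->metadata = metadata;
            dma_resv_unlock(dmabuf->resv);
    }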
Bug 3974855
Bug 3995618
Change-Id: Iece3ab57912d0420d4bc5c07d2c0d2e03ff19292
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2877633
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
In case of a process crash or forceful closure of the channels,
userspace may not release the VEID. In that case, creating further
subcontexts may not be possible.
Hence, when the channel is closed forcibly (Linux), release the VEID
on closure of the last channel in the subcontext.
With this, on Linux a normal channel close will not release the VEID;
however, on QNX it will release the VEID. So the delete subcontext
devctl call on QNX will be a nop in the normal case; hence the error
print and error return are changed to success.
Also add a check in the subcontext delete ioctl function that all
channels are unbound before deleting the subcontext. This ensures
that channels don't refer to a dangling subcontext pointer.
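A rough sketch of the intended lifetime rule; the reference counting
and names are assumptions for illustration:

    /* Sketch only: hypothetical bookkeeping for "release the VEID when
     * the last bound channel goes away on a forced close". */
    #include <stdbool.h>

    struct subctx_sketch {
            unsigned int veid;
            unsigned int ch_count;      /* channels bound to this subctx */
    };

    static void release_veid(unsigned int veid)
    {
            (void)veid;                 /* stub for the sketch */
    }

    static void on_channel_close(struct subctx_sketch *subctx, bool forced)
    {
            if (subctx->ch_count > 0)
                    subctx->ch_count--;

            /* Forced close (e.g. process crash on Linux): userspace may
             * never issue the delete-subcontext call, so drop the VEID
             * here once the last bound channel is gone. */
            if (forced && subctx->ch_count == 0)
                    release_veid(subctx->veid);
    }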
Bug 3979886
Change-Id: I434944b01740720011abce3664394ae8cb0d4e2e
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2858060
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
On ga10b+ platforms, more VM space is needed to map various buffers
into the BAR2 VM. The engine method buffer is mapped for each PBDMA,
and for the maximum supported number of TSGs this requires more than
32MB of space. We also need to consider the fault buffer and VAB
buffer space requirements.
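As a rough sizing illustration only, with placeholder counts and sizes
(not ga10b values), showing how the per-PBDMA engine method buffers
alone can exceed 32MB:

    /* Placeholder numbers purely to show the arithmetic. */
    #include <stdio.h>

    int main(void)
    {
            unsigned long max_tsgs = 512;           /* placeholder */
            unsigned long pbdma_count = 3;          /* placeholder */
            unsigned long method_buf_bytes = 32768; /* placeholder */

            unsigned long total = max_tsgs * pbdma_count * method_buf_bytes;

            /* Fault buffer and VAB buffer mappings come on top of this. */
            printf("engine method buffers: %lu MB\n", total >> 20);
            return 0;
    }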
Bug 3958581
Change-Id: I9ee87119f762352ee12859b71c08a5f75b3554e0
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2872811
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch adds nvenc support for TU104:
- Fetch engine/dev info for nvenc
- Falcon NS boot (fw loading) support
- Engine context creation for nvenc
- Skip golden image for multimedia engines
- Avoid subctx for nvenc as it is a non-VEID engine
- Job submission/flow changes for nvenc
- Code refactoring to scale up the support for other multimedia
engines in future.
Bug 3763551
Change-Id: I03d4e731ebcef456bcc5ce157f3aa39883270dc0
Signed-off-by: Santosh BS <santoshb@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2859416
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
L4T does not support virtualization currently and so we should be able
to build NVGPU without virtualization support. This avoids having to
build many virtualization drivers for Tegra.
The build flag CONFIG_TEGRA_VIRTUALIZATION was added for building
out-of-tree drivers to select whether virtualization is enabled. This
is enabled by default. However, if it is not set, the driver should
still build. Currently, NVGPU does not build when
CONFIG_TEGRA_VIRTUALIZATION is not set because
CONFIG_TEGRA_GR_VIRTUALIZATION is now always enabled for NVGPU. Fix this
by wrapping CONFIG_TEGRA_GR_VIRTUALIZATION with
CONFIG_TEGRA_VIRTUALIZATION.
Jira GVSCI-16046
Change-Id: I5448ad73d4d4e3e151ef216a7fcf0469890fd5ec
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2868502
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
When building VGPU with compression disabled, the build fails as
follows:
  error: ‘struct gk20a’ has no member named ‘max_comptag_mem’
    505 |         gk20a->max_comptag_mem = totalram_size_in_mb;
  error: implicit declaration of function ‘gk20a_dma_buf_priv_list_clear’
  [-Werror=implicit-function-declaration]
    523 |         gk20a_dma_buf_priv_list_clear(l);
Fix these build failures by guarding the applicable code with
CONFIG_NVGPU_COMPRESSION.
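The fix follows the usual pattern of compiling the compression-only
references out; a schematic, self-contained illustration (not the
exact hunk):

    /* Schematic guard pattern; the fields are stand-ins for the ones
     * named in the errors above. */
    struct gk20a_sketch {
    #ifdef CONFIG_NVGPU_COMPRESSION
            unsigned long max_comptag_mem;  /* exists only with compression */
    #endif
            int other_state;
    };

    static void probe_sketch(struct gk20a_sketch *g,
                             unsigned long totalram_size_in_mb)
    {
    #ifdef CONFIG_NVGPU_COMPRESSION
            g->max_comptag_mem = totalram_size_in_mb;
    #else
            (void)g;
            (void)totalram_size_in_mb;
    #endif
    }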
Bug 4014315
Change-Id: I8c7287ab57ef513ba11a3d85c2edfce243f07418
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2869169
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Upstream Linux commit bc292ab00f6c ("mm: introduce vma->vm_flags
wrapper functions") breaks the NVGPU driver build because the vm_flags
variable is made const and can no longer be set directly. Fix the
build for Linux v6.3 by using the helper functions for setting the
flags.
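For reference, the new wrappers (vm_flags_set()/vm_flags_clear())
replace direct assignment; the flags below are only an example, not
necessarily the ones nvgpu sets:

    #include <linux/mm.h>

    static void sketch_set_vma_flags(struct vm_area_struct *vma)
    {
            /* Pre-v6.3 style, now rejected because vm_flags is const:
             *     vma->vm_flags |= VM_IO | VM_DONTEXPAND;
             */
            vm_flags_set(vma, VM_IO | VM_DONTEXPAND);
    }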
Bug 4014315
Change-Id: Ie58d1f43b59167869742ff01ffe4e1841dbb1d6e
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2867167
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The memory bandwidth reported by the nvgpu driver is a function of the
FBP and EMC floorsweeping status. The FBP floorsweeping status was
already being reported in the GPU characteristics, so this change
fetches and reports the EMC status as well.
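One way to picture the dependency, assuming the reported bandwidth
scales with the active FBP and EMC counts; the formula and names are
an illustration, not the driver's actual computation:

    /* Illustration only: simple linear scaling with floorsweeping. */
    #include <stdio.h>

    static unsigned long reported_bw_mbps(unsigned long max_bw_mbps,
                                          unsigned int active_fbps,
                                          unsigned int total_fbps,
                                          unsigned int active_emc,
                                          unsigned int total_emc)
    {
            if (total_fbps == 0 || total_emc == 0)
                    return 0;
            return max_bw_mbps * active_fbps / total_fbps
                               * active_emc / total_emc;
    }

    int main(void)
    {
            /* Placeholder values. */
            printf("%lu MB/s\n", reported_bw_mbps(100000, 3, 4, 2, 4));
            return 0;
    }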
Jira NVGPU-9609
Bug 3661074
Change-Id: Ia2fe6cb029d086765da15d9e964ea77256e06604
Signed-off-by: atanand <atanand@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2859237
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>