When the GPU needs to be programmed with a PA (physical address),
the given IPA must be converted to a PA by querying the hypervisor.
Because this query is an IPC between OSes, the call hurts performance
badly. This change adds an IPA-to-PA cache to improve performance,
which is especially helpful in a passthrough configuration.
Bug 3277194
Change-Id: I6a3230d858977313a0ed0f33068055a3b516330a
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2571814
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The WAR assumed a 1:1 IPA-to-PA mapping when hyp_read_ipa_pa_info
failed for a syncpt address falling into the syncpt shim aperture.
The API nvgpu_mem_phys_sgl_ipa_to_pa() now takes care of the IPA-to-PA
mapping for syncpts, which makes this WAR obsolete.
This patch removes the WAR.
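For illustration, the shape of the change is roughly as below. The
declarations are simplified stand-ins (the real nvgpu and hypervisor
APIs take different arguments), so this is only a sketch of the
before/after logic, not the deleted code itself:

    #include <stdint.h>

    /* Simplified stand-ins; the real APIs differ. */
    int hyp_read_ipa_pa_info_stub(uint64_t *pa, uint64_t ipa);
    uint64_t nvgpu_mem_phys_sgl_ipa_to_pa_stub(uint64_t ipa);

    /* Before: the WAR fell back to a 1:1 mapping on query failure. */
    static uint64_t syncpt_ipa_to_pa_old(uint64_t ipa)
    {
        uint64_t pa;

        if (hyp_read_ipa_pa_info_stub(&pa, ipa) != 0)
            pa = ipa; /* WAR: assume IPA == PA in syncpt shim aperture */
        return pa;
    }

    /* After: rely on the common syncpt translation path instead. */
    static uint64_t syncpt_ipa_to_pa_new(uint64_t ipa)
    {
        return nvgpu_mem_phys_sgl_ipa_to_pa_stub(ipa);
    }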
Bug 200673604
Change-Id: I966711e11c2ff1b5b5dd3f5e09674bea66c5d04b
Signed-off-by: prsethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2478068
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
tegra_get_chip_id() is going to be deprecated soon, and this patch
removes calls to it. The Chip-ID is already read via DT, so the call
to tegra_get_chip_id() can be avoided by storing the Chip-ID in
struct gk20a_platform.
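The pattern looks roughly like the sketch below; the field name and
accessor are illustrative assumptions rather than the exact nvgpu code:

    #include <stdint.h>

    /* Illustrative subset of struct gk20a_platform; the real structure
     * has many more fields and the field name here is assumed. */
    struct gk20a_platform {
        uint32_t chip_id; /* Chip-ID read from DT once at probe time */
    };

    /*
     * Callers previously invoked tegra_get_chip_id() directly; with
     * the value cached in platform data, they read it from there.
     */
    static uint32_t platform_chip_id(const struct gk20a_platform *p)
    {
        return p->chip_id;
    }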
Bug 200524194
Bug 200551105
Change-Id: I5f9f5abf679cf9afe98840e20144d76eb0238426
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2236311
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- This patch replaces "nvgpu_current_time_ms" with "nvgpu_hr_timestamp_us".
- "nvgpu_hr_timestamp_us" returns a timestamp in microseconds and has better
  accuracy than "nvgpu_current_time_ms", which returns a timestamp in
  milliseconds.
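Illustrative before/after; the declarations below are assumed for the
sketch (see the nvgpu timer headers for the real ones):

    #include <stdint.h>

    /* Assumed declarations, for illustration only. */
    int64_t nvgpu_current_time_ms(void);  /* millisecond resolution */
    uint64_t nvgpu_hr_timestamp_us(void); /* microsecond resolution */

    static uint64_t profile_interval_us(void (*work)(void))
    {
        /* Before: sub-millisecond intervals measured with
         * nvgpu_current_time_ms() round down to 0 ms.
         * After: the same interval is resolved in microseconds. */
        uint64_t t0 = nvgpu_hr_timestamp_us();

        work();
        return nvgpu_hr_timestamp_us() - t0;
    }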
Bug 200503143
Change-Id: I6a10e8e1b3e8ff842aa23f58bf2ba9344af232a6
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2125959
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Background:
In hypervisor mode, the dGPU device is configured in passthrough mode
for the guest (QNX/Linux). GMMU programming is handled by the guest,
which converts a mapped buffer's GVA into SGLs holding IPAs
(intermediate/guest physical addresses). Each IPA is then translated
into a PA (actual physical address), and the GMMU PTEs are programmed
with the correct GVA to PA mapping. In the case of vGPU, this work is
delegated to the RM server, which takes care of the GMMU programming
and the IPA to PA conversion.
Problem:
The current GMMU mapping logic in the guest assumes that the PA range
is contiguous over a given IPA range, so it does not account for holes
in the PA range. That assumption does not hold: a contiguous IPA range
can be mapped to discontiguous PA ranges. In this situation the mapping
logic sets up GMMU PTEs ignoring the holes in physical memory and
creates a GVA => PA mapping that intrudes into reserved PA ranges,
corrupting memory.
This change accounts for holes in a given PA range: for a given IPA
range it identifies the discontiguous PA ranges and sets up the PTEs
appropriately.
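The core of the fix can be sketched as follows. The helpers
translate_ipa_to_pa() and program_gmmu_ptes() are hypothetical
stand-ins for the real translation and PTE-programming paths; the
point is that a run of PTEs is flushed and restarted whenever the
backing PA stops being contiguous:

    #include <stdint.h>

    /* Hypothetical stand-ins for the real helpers. translate_ipa_to_pa()
     * is assumed to return the PA for 'ipa' and the length of the
     * contiguous chunk starting there. */
    uint64_t translate_ipa_to_pa(uint64_t ipa, uint64_t *chunk_len);
    void program_gmmu_ptes(uint64_t gva, uint64_t pa, uint64_t len);

    /*
     * Map [ipa, ipa + size) at gva, splitting the mapping whenever the
     * underlying PA stops being contiguous. A single contiguous IPA
     * range may be backed by several disjoint PA ranges, so each
     * discontinuity starts a new run of PTEs.
     */
    static void map_ipa_range(uint64_t gva, uint64_t ipa, uint64_t size)
    {
        uint64_t run_gva = gva, run_pa = 0, run_len = 0;

        while (size != 0) {
            uint64_t chunk_len = 0;
            uint64_t pa = translate_ipa_to_pa(ipa, &chunk_len);

            if (chunk_len > size)
                chunk_len = size;

            if (run_len != 0 && pa == run_pa + run_len) {
                /* PA continues the current run: extend it. */
                run_len += chunk_len;
            } else {
                /* Discontiguity (or first chunk): flush and restart. */
                if (run_len != 0)
                    program_gmmu_ptes(run_gva, run_pa, run_len);
                run_gva = gva;
                run_pa  = pa;
                run_len = chunk_len;
            }

            gva  += chunk_len;
            ipa  += chunk_len;
            size -= chunk_len;
        }

        if (run_len != 0)
            program_gmmu_ptes(run_gva, run_pa, run_len);
    }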
Bug 200451447
Jira VQRM-5069
Change-Id: I354d984f6c44482e4576a173fce1e90ab52283ac
Signed-off-by: aalex <aalex@nvidia.com>
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1850972
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>