video: tegra: nvmap: Use pfn_is_map_memory instead of pfn_valid

CONFIG_HAVE_ARCH_PFN_VALID was removed by the following upstream patch:
https://lkml.kernel.org/linux-mm/20210527174913.GJ8661@arm.com/T/
As a result, kernels from k5.15 onwards use the generic pfn_valid()
definition from mmzone.h, whereas k5.10 used the arm64 definition from
init.c. The k5.10 definition ended with a call to memblock_is_map_memory(),
which the generic definition lacks, so a bad PTE fault is seen for
carveout buffers. Use pfn_is_map_memory() instead of pfn_valid(), as it
ultimately calls memblock_is_map_memory().

Bug 4343935

Change-Id: I27d1057ed566220e2d8b9a4482022f5318df65ff
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nv-oot/+/3027601
Reviewed-by: Krishna Reddy <vdumpa@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Ketan Patil
2023-12-04 13:53:59 +00:00
committed by mobile promotions
parent a433b16870
commit a1b11bd735


@@ -202,7 +202,7 @@ static int nvmap_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
BUG_ON(priv->handle->carveout->base & ~PAGE_MASK);
pfn = ((priv->handle->carveout->base + offs) >> PAGE_SHIFT);
-	if (!pfn_valid(pfn)) {
+	if (!pfn_is_map_memory(pfn)) {
vm_insert_pfn(vma,
(unsigned long)vmf_address, pfn);
return VM_FAULT_NOPAGE;
@@ -217,7 +217,7 @@ static int nvmap_vma_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
offs >>= PAGE_SHIFT;
page = priv->handle->pgalloc.pages[offs];
pfn = page_to_pfn(page);
-	if (!pfn_valid(pfn)) {
+	if (!pfn_is_map_memory(pfn)) {
vm_insert_pfn(vma,
(unsigned long)vmf_address, pfn);
return VM_FAULT_NOPAGE;