nvmap: Keep cache flush at allocation time only

On TOT, in the carveout case, nvmap performs a cache flush during
carveout creation, buffer allocation, and buffer release. Because the
entire carveout is flushed at creation time, the nvmap probe takes
~430 ms, which hurts boot KPIs.
Fix this by flushing the cache only at buffer allocation time, and
dropping the flushes at carveout creation and buffer release. This
reduces the nvmap probe time to ~0.69 ms.

Bug 3821631

Change-Id: I54da7dd179f8d30b8b038daf3eceafb355b2e789
Signed-off-by: Ketan Patil <ketanp@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvidia/+/2802353
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Reviewed-by: Ashish Mhetre <amhetre@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Author: Ketan Patil
Date: 2022-11-03 08:48:38 +00:00
Committed by: Laxman Dewangan
parent d1c06c1dce
commit 92f34279ba
5 changed files with 42 additions and 17 deletions


@@ -710,19 +710,20 @@ static void alloc_handle(struct nvmap_client *client,
 			mb();
 			h->alloc = true;
-			/* Clear the allocated buffer */
-			if (nvmap_cpu_map_is_allowed(h)) {
-				void *cpu_addr;
-				cpu_addr = memremap(b->base, h->size,
-						MEMREMAP_WB);
-				if (cpu_addr != NULL) {
-					memset(cpu_addr, 0, h->size);
-					__dma_flush_area(cpu_addr, h->size);
-					memunmap(cpu_addr);
+			if (nvmap_dev->co_cache_flush_at_alloc) {
+				/* Clear the allocated buffer */
+				if (nvmap_cpu_map_is_allowed(h)) {
+					void *cpu_addr;
+					cpu_addr = memremap(b->base, h->size,
+							MEMREMAP_WB);
+					if (cpu_addr != NULL) {
+						memset(cpu_addr, 0, h->size);
+						__dma_flush_area(cpu_addr, h->size);
+						memunmap(cpu_addr);
+					}
 				}
 			}
 			return;
 		}
 		ret = nvmap_heap_pgalloc(client, h, type);