gpu: nvgpu: pd_cache enablement for >4K allocations in QNX

Mapping large buffers into the GMMU ends up requiring many
pages for the PTE tables. Allocating these one at a time can
become a performance bottleneck, particularly in the
virtualized case.

This change adds the following:

 - Since TLB invalidation does not have access to mem_off, allow
   top-level allocations to go through alloc_cache_direct().
 - Define NVGPU_PD_CACHE_SIZE, the allocation size of a new slab for
   the PD cache, currently set to 64K bytes.
 - Use the PD cache for any allocation smaller than NVGPU_PD_CACHE_SIZE.
   When freeing cached entries, avoid prefetch errors by invalidating
   the entry (memset to 0).
 - On contiguous allocation failures, fall back to direct allocation of
   a smaller chunk (see the sketch after this list).
 - Unit test changes.
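
A minimal sketch of the resulting allocation policy, in plain C. Only
NVGPU_PD_CACHE_SIZE and the notion of a direct-allocation path come
from the commit itself; pd_mem, pd_alloc(), pd_alloc_direct(),
pd_cache_alloc() and pd_cache_free() are hypothetical stand-ins for the
driver's real types and signatures, and the slab bookkeeping is reduced
to a single bump-allocated slab:

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Slab size for the PD cache, per the commit description (64K). */
#define NVGPU_PD_CACHE_SIZE (64u * 1024u)

struct pd_mem {
	void   *cpu_va;  /* CPU view of the PD memory */
	size_t  bytes;   /* size of this PD allocation */
	bool    cached;  /* carved out of a shared slab? */
};

/* Direct allocation: used for top-level PDs (TLB invalidation has no
 * mem_off with which to locate an entry inside a shared slab) and for
 * requests of NVGPU_PD_CACHE_SIZE or more. */
static int pd_alloc_direct(struct pd_mem *pd, size_t bytes)
{
	pd->cpu_va = calloc(1, bytes);
	if (pd->cpu_va == NULL) {
		return -1;
	}
	pd->bytes = bytes;
	pd->cached = false;
	return 0;
}

/* One illustrative slab with bump allocation; the real cache keeps
 * per-size slab lists and free bitmaps, all elided here (a full slab
 * is deliberately leaked in this sketch). */
static unsigned char *slab;
static size_t slab_used;

static int pd_cache_alloc(struct pd_mem *pd, size_t bytes)
{
	if (slab == NULL || slab_used + bytes > NVGPU_PD_CACHE_SIZE) {
		slab = calloc(1, NVGPU_PD_CACHE_SIZE);
		if (slab == NULL) {
			/* Contiguous 64K slab unavailable: fall back to
			 * direct allocation of just the smaller chunk. */
			return pd_alloc_direct(pd, bytes);
		}
		slab_used = 0;
	}
	pd->cpu_va = slab + slab_used;
	pd->bytes = bytes;
	pd->cached = true;
	slab_used += bytes;
	return 0;
}

/* Allocation policy from the commit: top-level PDs and large requests
 * go direct, everything smaller is served from the 64K slab cache. */
int pd_alloc(struct pd_mem *pd, size_t bytes, bool top_level)
{
	if (top_level || bytes >= NVGPU_PD_CACHE_SIZE) {
		return pd_alloc_direct(pd, bytes);
	}
	return pd_cache_alloc(pd, bytes);
}

/* Freeing a cached entry: zero it first so the GMMU cannot prefetch
 * stale PTEs from the recycled slab memory (free-list bookkeeping
 * elided). */
void pd_cache_free(struct pd_mem *pd)
{
	if (pd->cached) {
		memset(pd->cpu_va, 0, pd->bytes);
	} else {
		free(pd->cpu_va);
	}
}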

Bug 200649243

Change-Id: I0a667af0ba01d9147c703e64fc970880e52a8fbc
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2404371
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Author: Peter Daifuku
Date: 2020-08-26 16:25:36 -07:00
Committed-by: Alex Waterman
Parent: 94bc3a8135
Commit: a331fd4b3a
16 changed files with 122 additions and 22 deletions

@@ -89,6 +89,11 @@ int test_rc_init(struct unit_module *m, struct gk20a *g, void *args)
 		unit_return_fail(m, "fifo reg_space failure");
 	}
 
+	ret = nvgpu_pd_cache_init(g);
+	if (ret != 0) {
+		unit_return_fail(m, "PD cache initialization failure");
+	}
+
 	nvgpu_device_init(g);
 	g->ops.gr.init.get_no_of_sm = stub_gv11b_gr_init_get_no_of_sm;
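
This hunk is representative of the unit-test updates: the test brings
up the PD cache before nvgpu_device_init(g) and fails fast if that
setup does not succeed. A teardown counterpart would presumably pair
with it; a minimal sketch, assuming the tree provides
nvgpu_pd_cache_fini() (an assumption, not shown in this hunk):

	/* Assumed teardown pairing for nvgpu_pd_cache_init(); not part
	 * of the hunk above. */
	nvgpu_pd_cache_fini(g);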