Mirror of git://nv-tegra.nvidia.com/linux-nvgpu.git
gpu: nvgpu: sim: make ring buffer independent of PAGE_SIZE
The simulator ring buffer DMA interface supports buffers of the following
sizes: 4, 8, 12 and 16K. At present it is configured to 4K, which happens
to match the kernel PAGE_SIZE used to wrap the GET/PUT pointers back to
zero once 4K is reached. This is not always true, however; consider a
kernel built with 64K pages. Hence, replace PAGE_SIZE with SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and
replace the latter with the former.

Bug 200658101
Jira NVGPU-6018

Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
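For illustration, a minimal standalone sketch of the wrap logic this commit
fixes. SIM_BFR_SIZE and NVGPU_CPU_PAGE_SIZE come from the commit itself; the
64K PAGE_SIZE value, the 8-byte entry size, and the helper names are
assumptions made for the sketch, not nvgpu's actual code.

#include <stdint.h>

#define SIM_BFR_SIZE        4096u      /* ring buffer size: 4K (DMA interface also allows 8/12/16K) */
#define PAGE_SIZE           65536u     /* e.g. a kernel built with 64K pages */
#define NVGPU_CPU_PAGE_SIZE PAGE_SIZE  /* explicit alias for the CPU page size */

/* Before: PUT wrapped at PAGE_SIZE -- correct only while PAGE_SIZE happens to be 4K. */
static uint32_t put_advance_old(uint32_t put)
{
	return (put + 8u) % PAGE_SIZE;     /* on 64K pages this runs past the 4K buffer */
}

/* After: wrap at the actual buffer size, independent of the CPU page size. */
static uint32_t put_advance_new(uint32_t put)
{
	return (put + 8u) % SIM_BFR_SIZE;
}

With 64K pages, put_advance_old() lets PUT grow to nearly 64K while the
buffer is only 4K deep; put_advance_new() wraps at 4K regardless of the
CPU page size.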
committed by Alex Waterman
parent 09857ecd91
commit c36752fe3d
@@ -195,14 +195,14 @@ int nvgpu_gmmu_init_page_table(struct vm_gk20a *vm)
 	 * aligned. Although lower PDE tables can be aligned at 256B boundaries
 	 * the PDB must be 4K aligned.
 	 *
-	 * Currently PAGE_SIZE is used, even when 64K, to work around an issue
+	 * Currently NVGPU_CPU_PAGE_SIZE is used, even when 64K, to work around an issue
 	 * with the PDB TLB invalidate code not being pd_cache aware yet.
 	 *
 	 * Similarly, we can't use nvgpu_pd_alloc() here, because the top-level
 	 * PD must have mem_offs be 0 for the invalidate code to work, so we
 	 * can't use the PD cache.
 	 */
-	pdb_size = ALIGN(pd_get_size(&vm->mmu_levels[0], &attrs), PAGE_SIZE);
+	pdb_size = ALIGN(pd_get_size(&vm->mmu_levels[0], &attrs), NVGPU_CPU_PAGE_SIZE);
 
 	err = nvgpu_pd_cache_alloc_direct(vm->mm->g, &vm->pdb, pdb_size);
 	if (err != 0) {
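The hunk above rounds the PDB allocation up to a whole CPU page. As a quick
standalone illustration of that round-up: ALIGN_UP below mimics the kernel's
ALIGN() for power-of-two alignments, and the sizes are made-up examples.

#include <assert.h>
#include <stdint.h>

/* Mimics the kernel's ALIGN() round-up for power-of-two alignments. */
#define ALIGN_UP(x, a)  (((uint64_t)(x) + ((a) - 1)) & ~((uint64_t)(a) - 1))

int main(void)
{
	/* A 2048-byte PDB rounds up to one full 4K page... */
	assert(ALIGN_UP(2048u, 4096u) == 4096u);
	/* ...and to a full 64K page on a 64K-page kernel, so the 4K
	 * alignment the PDB requires is satisfied either way. */
	assert(ALIGN_UP(2048u, 65536u) == 65536u);
	return 0;
}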