gpu: nvgpu: rename gk20a_locked_gmmu_map() and move to gmmu.h

Rename the two native GPU GMMU map/unmap functions and update the
HAL initializations to reflect this:

  gk20a_locked_gmmu_map   -> nvgpu_gmmu_map_locked
  gk20a_locked_gmmu_unmap -> nvgpu_gmmu_unmap_locked

This matches what other units do for handling vGPU "HAL" indirection.
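
Roughly, the indirection looks like this after the rename (the vGPU
function names below are written from memory and are only
illustrative):

  /* Native chip HAL init: */
  g->ops.mm.gmmu_map   = nvgpu_gmmu_map_locked;
  g->ops.mm.gmmu_unmap = nvgpu_gmmu_unmap_locked;

  /* vGPU HAL init overrides the same ops (illustrative names): */
  g->ops.mm.gmmu_map   = vgpu_locked_gmmu_map;
  g->ops.mm.gmmu_unmap = vgpu_locked_gmmu_unmap;

Callers always go through the g->ops.mm HAL pointers rather than the
implementations themselves.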

Also move the function declarations to <nvgpu/gmmu.h> since these are
shared among all non-vGPU chips. But since these are still technically
HAL operations, they should never be called directly. This is a bit of
an organizational issue that I have not yet thought through how to
solve.

Ideally they would go into a "hal/mm/gmmu/" include somewhere, but
that A) doesn't yet exist, and B) those are chip specific; these
functions are native specific. Ugh.
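
For reference, the declarations now live in <nvgpu/gmmu.h> and look
roughly like this (parameter lists reproduced from memory; the header
has the authoritative signatures):

  u64 nvgpu_gmmu_map_locked(struct vm_gk20a *vm,
                            u64 map_offset,
                            struct nvgpu_sgt *sgt,
                            u64 buffer_offset,
                            u64 size,
                            u32 pgsz_idx,
                            u8 kind_v,
                            u32 ctag_offset,
                            u32 flags,
                            enum gk20a_mem_rw_flag rw_flag,
                            bool clear_ctags,
                            bool sparse,
                            bool priv,
                            struct vm_gk20a_mapping_batch *batch,
                            enum nvgpu_aperture aperture);

  void nvgpu_gmmu_unmap_locked(struct vm_gk20a *vm,
                               u64 vaddr,
                               u64 size,
                               u32 pgsz_idx,
                               bool va_allocated,
                               enum gk20a_mem_rw_flag rw_flag,
                               bool sparse,
                               struct vm_gk20a_mapping_batch *batch);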

JIRA NVGPU-2042

Change-Id: Ibc614f2928630d12eafcec6ce73019628b44ad94
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099692
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>

@@ -136,8 +136,8 @@ static int init_test_env(struct unit_module *m, struct gk20a *g)
 	g->ops.mm.get_default_big_page_size =
 		gp10b_mm_get_default_big_page_size;
 	g->ops.mm.get_mmu_levels = gp10b_mm_get_mmu_levels;
-	g->ops.mm.gmmu_map = gk20a_locked_gmmu_map;
-	g->ops.mm.gmmu_unmap = gk20a_locked_gmmu_unmap;
+	g->ops.mm.gmmu_map = nvgpu_gmmu_map_locked;
+	g->ops.mm.gmmu_unmap = nvgpu_gmmu_unmap_locked;
 	g->ops.mm.gpu_phys_addr = gv11b_gpu_phys_addr;
 	g->ops.mm.cache.l2_flush = gv11b_mm_l2_flush;
 	g->ops.mm.cache.fb_flush = gk20a_mm_fb_flush;