By always mapping GMMU kernel pages using 4KB pages, behavior stays
consistent with the native nvgpu driver. This is a workaround for
enabling 64KB OS kernel page support.
In the long-term solution, GMMU_PAGE_SIZE_KERNEL will be the OS kernel
page size, and nvgpu_gmmu_update_page_table() will choose big or small
pages by comparing GMMU_PAGE_SIZE_KERNEL against the small and big page
sizes. Correspondingly, vgpu will choose the kernel page size by the
same comparison when sending map commands to the server.
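For illustration, the long-term selection could look like the minimal
sketch below, with hypothetical helper and parameter names; this is
not the actual nvgpu code path:

    /*
     * Sketch: pick the big page size when the kernel page size is at
     * least as large as the GPU big page size, otherwise fall back
     * to the small (4K) page size. Names are illustrative only.
     */
    #include <stdint.h>

    static uint64_t choose_gmmu_page_size(uint64_t kernel_page_size,
                                          uint64_t small_page_size,
                                          uint64_t big_page_size)
    {
        if (kernel_page_size >= big_page_size)
            return big_page_size;
        return small_page_size;
    }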
Bug 3015296
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Change-Id: I5d25280a9410da3ef628e5914ea962a76b102273
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2437193
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Update gk20a_ctrl_dev_ioctl() to fetch gpu_instance_id with
nvgpu_get_gpu_instance_id_from_cdev() and gr_instance_id with
nvgpu_grmgr_get_gr_instance_id().
Get the instance-specific GR engine configuration pointer with
nvgpu_gr_get_gpu_instance_config_ptr().
Update gk20a_ctrl_ioctl_gpu_characteristics() to return instance
specific characteristics with the changes below:
- The 0th GPU instance is a physical instance. Set a limited, relevant
set of characteristics flags for the 0th instance.
For the remaining instances, and in non-MIG mode, continue fetching
flags with nvgpu_ctrl_ioctl_gpu_characteristics_flags.
- nvgpu_set_preemption_mode_flags() should be set only in non-MIG mode
and for non-zero instances in MIG mode.
- In MIG mode, the 0th instance does not support any classes. The
remaining instances support only the compute, copy and gpfifo classes,
as sketched below.
Non-MIG mode supports all the classes, including the graphics ones.
- Fetch gpu_instance_id/gr_sys_pipe_id/gr_instance_id from gpu_instance
pointer.
- Fetch max_veid_count_per_tsg from gpu_instance pointer.
Also update nvgpu_gr_get_zcull_ptr() and nvgpu_gr_get_zbc_ptr() to
return instance-specific pointers. zcull/zbc are not supported in MIG
mode; this is just for consistency of the code.
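For illustration, the per-instance class reporting described above
might reduce to the minimal sketch below. All names here are
hypothetical; the actual flag names and checks in nvgpu differ:

    /*
     * Sketch: which class families an instance reports. In MIG mode,
     * instance 0 reports none; other instances report compute, copy
     * and gpfifo only. Non-MIG mode reports everything, including
     * graphics.
     */
    #include <stdbool.h>
    #include <stdint.h>

    struct class_support {
        bool graphics;
        bool compute;
        bool copy;
        bool gpfifo;
    };

    static struct class_support classes_for_instance(bool mig_mode,
                                                     uint32_t instance_id)
    {
        struct class_support cs = { false, false, false, false };

        if (!mig_mode) {
            cs.graphics = cs.compute = cs.copy = cs.gpfifo = true;
        } else if (instance_id != 0U) {
            cs.compute = cs.copy = cs.gpfifo = true;
        }
        return cs;
    }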
Jira NVGPU-5648
Change-Id: I764526061542c48ed87659844e16dd0e0253c588
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2436752
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Modify NVGPU_GPU_IOCTL_ALLOC_AS and struct nvgpu_alloc_as_args to
accept the start address and size of user memory. This allows
configurable address space allocation (see the sketch after this list).
- Modify gk20a_as_alloc_share() and gk20a_vm_alloc_share() to receive
the va_range_start and va_range_end values.
- gk20a_vm_alloc_share() initializes the vm with low_hole =
va_range_start and user vma size = (va_range_end - va_range_start).
- Modify nvgpu_as_alloc_space_args and nvgpu_as_free_space_args to
accept a 64-bit number of pages.
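For illustration only, the extended argument struct might look like
the sketch below. The struct name, field names and layout are
assumptions based on this description, not the actual uapi header:

    /*
     * Hypothetical sketch of the extended ALLOC_AS arguments. The
     * real struct nvgpu_alloc_as_args lives in the uapi headers;
     * this only illustrates the configurable user VA range. The
     * driver would derive va_range_end = start + size.
     */
    #include <stdint.h>

    struct alloc_as_args_sketch {
        uint64_t va_range_start;  /* start of the user VA range */
        uint64_t va_range_size;   /* size of the user VA range */
    };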
Bug 2043269
JIRA NVGPU-5302
Change-Id: I243995adf5b7e0e84d6b36abe3b35a5ccabd7a37
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2385496
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any PDE can allocate memory with a specific page size. That means
memory allocations with 4K and 64K page sizes will be realized by
different PDEs whose page (or PTE) sizes are 4K and 64K respectively.
To accomplish this, the user VMA must be PDE-aligned.
Currently, the user VMA is aligned to (big_page_size << 10), carried
over from when the PDE size was equal to (big_page_size << 10).
Modify the user VMA alignment check to use the PDE size.
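Conceptually, the new check is along these lines; a minimal sketch
with illustrative names, not the actual diff:

    /*
     * Sketch: user VMA bounds must be aligned to the PDE size rather
     * than to the legacy (big_page_size << 10) value.
     */
    #include <stdbool.h>
    #include <stdint.h>

    static bool user_vma_is_aligned(uint64_t va_start, uint64_t va_end,
                                    uint64_t pde_size)
    {
        return (va_start % pde_size == 0U) &&
               (va_end % pde_size == 0U);
    }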
JIRA NVGPU-5302
Change-Id: I2c6599fe50ce9fb081dd1f5a8cd6aa48b17b33b4
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428327
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use a fixed-address mapping for the PMA byte buffer so that the
address of this buffer always fits in 32 bits.
This also requires moving the unmap sequence into an OS-specific
function, since Linux and QNX now need different unmap APIs.
Also call nvgpu_prof_free_pma_stream_priv_data() before
nvgpu_profiler_free_pma_stream(), since the former uses mm->perfbuf,
which is released by the latter.
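The ordering constraint can be summarized with this minimal,
self-contained sketch; the stub type and functions stand in for the
real nvgpu profiler objects and are purely illustrative:

    /*
     * Sketch: free the priv data (which still dereferences the perf
     * buffer) before freeing the PMA stream, which releases it.
     */
    #include <stddef.h>

    struct prof_obj { void *perfbuf; };

    static void free_priv_data(struct prof_obj *p)
    {
        (void)p->perfbuf;  /* still uses the buffer, so must run first */
    }

    static void free_pma_stream(struct prof_obj *p)
    {
        p->perfbuf = NULL;  /* releases the buffer */
    }

    static void teardown(struct prof_obj *p)
    {
        free_priv_data(p);   /* before ... */
        free_pma_stream(p);  /* ... this, avoiding a use-after-free */
    }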
Bug 2510974
Jira NVGPU-5360
Change-Id: I398b0ca4f96527d6e09c9aacacb4b43c90f5bfc9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2424691
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Fuse registers should be queried with physical gpc-ids, not logical
ones. For tu104 and earlier chips, physical gpc-ids are the same as
logical ones in non-floorswept configs, but for newer chips they may
differ.
Also, the logical-to-physical mapping is not present for a floorswept
GPC, so query the gpc_tpc mask only up to the number of GPCs actually
present.
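The intended query pattern, roughly; a sketch with illustrative
function-pointer parameters standing in for the chip's fuse and
mapping HALs:

    /*
     * Sketch: translate each logical GPC id to its physical id, then
     * read the fuse mask, iterating only over GPCs present.
     */
    #include <stdint.h>

    static void query_gpc_tpc_masks(uint32_t num_present_gpcs,
                                    uint32_t (*get_physical_gpc_id)(uint32_t),
                                    uint32_t (*read_fuse_tpc_mask)(uint32_t),
                                    uint32_t *masks)
    {
        for (uint32_t logical = 0U; logical < num_present_gpcs;
             logical++) {
            uint32_t physical = get_physical_gpc_id(logical);

            masks[logical] = read_fuse_tpc_mask(physical);
        }
    }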
Jira NVGPU-6080
Change-Id: I84c4a3c1f256fdd1927f4365af26e9892fe91beb
Signed-off-by: shashank singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2417721
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The simulator ring buffer DMA interface supports buffers of the
following sizes: 4, 8, 12 and 16K. At present, it is configured to 4K,
which happens to match the kernel PAGE_SIZE used to wrap the GET/PUT
pointers once 4K is reached. However, this is not always true;
consider 64K pages, for instance. Hence, replace PAGE_SIZE with
SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and
replace the latter with the former.
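The alias itself is trivial; a sketch (the SIM_BFR_SIZE value here is
an assumption matching the 4K configuration described above):

    /*
     * Sketch: name the CPU page size explicitly so it is no longer
     * conflated with the 4K SIM ring buffer size on 64K-page kernels.
     */
    #define NVGPU_CPU_PAGE_SIZE PAGE_SIZE
    #define SIM_BFR_SIZE        4096U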
Bug 200658101
Jira NVGPU-6018
Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Mapping large buffers into the GMMU can require many pages for the
PTE tables. Allocating these one by one can become a performance
bottleneck, particularly in the virtualized case.
This change adds the following:
- As the TLB invalidation doesn't have access to mem_off,
allow top-level allocation by alloc_cache_direct().
- Define NVGPU_PD_CACHE_SIZE, the allocation size for a new slab
for the PD cache, effectively set to 64K bytes.
- Use the PD cache for any allocation < NVGPU_PD_CACHE_SIZE (see the
sketch after this list).
When freeing cached entries, avoid prefetch errors by
invalidating the entry (memset to 0).
- For contiguous allocation failures, try to fall back to direct
allocation of smaller chunks.
- Unit test changes.
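The size-based decision reduces to something like this minimal
sketch; the real pd_cache code in common/mm differs in detail:

    /*
     * Sketch: allocations smaller than the 64K slab size come from
     * the PD cache; larger ones go straight to a direct allocation.
     */
    #include <stdbool.h>
    #include <stddef.h>

    #define NVGPU_PD_CACHE_SIZE (64U * 1024U)

    static bool use_pd_cache(size_t bytes)
    {
        return bytes < NVGPU_PD_CACHE_SIZE;
    }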
Bug 200649243
Change-Id: I0a667af0ba01d9147c703e64fc970880e52a8fbc
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2404371
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update nvgpu_gr_zbc as:

    struct nvgpu_gr_zbc {
        struct nvgpu_mutex zbc_lock;          /* Lock to access zbc table */
        struct zbc_color_table *zbc_col_tbl;  /* SW zbc color table pointer */
        struct zbc_depth_table *zbc_dep_tbl;  /* SW zbc depth table pointer */
        struct zbc_stencil_table *zbc_s_tbl;  /* SW zbc stencil table pointer */
        u32 min_color_index;        /* Minimum valid color table index */
        u32 min_depth_index;        /* Minimum valid depth table index */
        u32 min_stencil_index;      /* Minimum valid stencil table index */
        u32 max_color_index;        /* Maximum valid color table index */
        u32 max_depth_index;        /* Maximum valid depth table index */
        u32 max_stencil_index;      /* Maximum valid stencil table index */
        u32 max_used_color_index;   /* Max used color table index */
        u32 max_used_depth_index;   /* Max used depth table index */
        u32 max_used_stencil_index; /* Max used stencil table index */
    };
Add a global struct nvgpu_gr_zbc_table_indices:

    struct nvgpu_gr_zbc_table_indices {
        u32 min_color_index;
        u32 min_depth_index;
        u32 min_stencil_index;
        u32 max_color_index;
        u32 max_depth_index;
        u32 max_stencil_index;
    };
Currently, HW ZBC table registers are written during both
gr_init_setup_sw() and gr_init_setup_hw().
- Rename nvgpu_gr_zbc_load_default_table() to
nvgpu_gr_zbc_load_default_sw_table() and have it only update the SW
copy of the ZBC table during gr_init_setup_sw().
- Modify nvgpu_gr_zbc_load_table() to write the ZBC values stored in
the SW ZBC table to the HW registers.
Restructure the ZBC functions per ZBC type, i.e. color, depth and
stencil.
Add a gr.zbc.init_table_indices() HAL to initialize the ZBC indices.
Valid ZBC table indices start at 1, while HW indices start at 0 for
the color, depth and stencil tables. Note that the corresponding
format registers follow the ZBC index range starting at 1 (see the
sketch after the list below).
- void (*init_table_indices)(struct gk20a *g,
      struct nvgpu_gr_zbc_table_indices *zbc_indices);
- Add corresponding functions for legacy chips.
- Add ZBC color, depth and stencil table size HW defines.
- Remove the ltc.zbc_table_size() HAL.
- Update ltc.set_zbc_s_entry(), ltc.set_zbc_color_entry() and
ltc.set_zbc_depth_entry() accordingly.
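The SW-to-HW index relationship, as read from the description above,
would reduce to an off-by-one translation. This is a hedged sketch of
that assumption, not code from this change:

    /*
     * Sketch: SW table indices are 1-based, HW color/depth/stencil
     * table indices are 0-based, so HW index = SW index - 1. The
     * format registers keep following the 1-based SW range.
     */
    #include <stdint.h>

    static uint32_t zbc_hw_index(uint32_t sw_index)
    {
        /* caller guarantees sw_index >= 1 */
        return sw_index - 1U;
    }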
Bug 3122410
Bug 3122649
Change-Id: Ib799991ad35c6613534c0a6eb07f3bf24e600dc5
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2417620
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Right now, the PMA byte buffer address is allocated in the range of
0x1ffc010000. The register that stores this address is only 32 bits
wide and there is no corresponding _hi() register, so the address must
fit in 32 bits.
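A minimal sketch of the constraint driving this change; names are
illustrative:

    /*
     * Sketch: the PMA byte buffer GPU VA is programmed into a 32-bit
     * register with no _hi companion, so the whole buffer must stay
     * below 4 GB.
     */
    #include <stdbool.h>
    #include <stdint.h>

    static bool pma_bytebuf_va_ok(uint64_t gpu_va, uint64_t size)
    {
        return (gpu_va + size) <= (1ULL << 32);
    }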
Update the nvgpu_vm_init() parameters in nvgpu_perfbuf_init_vm() so
that a low_hole of only 4K is used. This allows the address to be
allocated in the range of 0x4000000.
Also map the byte buffer before the PMA stream buffer so that the byte
buffer always gets the lower address.
Only one PMA stream buffer is allowed to be mapped right now, so this
works for now. But in the future multiple buffers may be mapped, and
this solution will need to be reworked.
Bug 2510974
Jira NVGPU-5360
Change-Id: Ief1a9ee54d554e3bc13c7a9567934dcbeaefbcc6
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2418520
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The APIs that set preemption modes currently contain config-based
code to set default preemption modes or to check whether a given
preemption mode is valid. This makes the code unreadable and complex.
Rework nvgpu_gr_obj_ctx_init_ctxsw_preemption_mode() so that it checks
the initial preemption modes at the beginning. If no preemption mode
is passed while allocating the context, get the default preemption
modes with gops.gr.init.get_default_preemption_modes() and use them.
Rework nvgpu_gr_ctx_check_valid_preemption_mode() so that it is more
readable. Use gops.gr.init.get_supported_preemption_modes() to
validate incoming preemption modes against the supported preemption
modes (see the sketch below).
Log the preemption modes being set in
nvgpu_gr_obj_ctx_set_ctxsw_preemption_mode().
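The validation pattern reduces to something like this minimal sketch;
it assumes modes are single-bit flags, which is an assumption of the
sketch rather than a statement about the real API:

    /*
     * Sketch: a requested mode is valid only if it is contained in
     * the supported-modes mask reported by
     * gops.gr.init.get_supported_preemption_modes().
     */
    #include <stdbool.h>
    #include <stdint.h>

    static bool preemption_mode_valid(uint32_t requested_mode,
                                      uint32_t supported_modes_mask)
    {
        return (requested_mode & supported_modes_mask) ==
               requested_mode;
    }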
Disable the failing unit test; it will need rework to match the new
code.
Jira NVGPU-5648
Change-Id: Ie1a3e1aeae7826a123e104d9d016f181bea3b271
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2419034
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
common.gr defined a temporary macro, NVGPU_GR_NUM_INSTANCES, to
enable or disable multiple GR instances from the common.gr unit.
Multiple GR instance boot is now verified, so we can remove this
temporary solution.
Note that nvgpu_grmgr_get_num_gr_instances() will return more than one
instance only if NVGPU_SUPPORT_MIG is enabled.
Update the unit tests to set the number of syspipes to 1 to allow
enumeration of GR instances by grmgr.
Jira NVGPU-5648
Change-Id: I795901ae516843ae7b6c1794dae0f023a213ab1d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2418377
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The HALs below are implemented in the common.gr unit, but they really
belong to the common.perf unit since they access perf unit registers.
gops.gr.init_hwpm_pmm_register()
gops.gr.get_num_hwpm_perfmon()
gops.gr.set_pmm_register()
gops.gr.reset_hwpm_pmm_registers()
Move them to the common.perf unit, and update all the code accordingly:
gops.perf.init_hwpm_pmm_register()
gops.perf.get_num_hwpm_perfmon()
gops.perf.set_pmm_register()
gops.perf.reset_hwpm_pmm_registers()
Add a new HAL, gops.gr.get_pm_ctx_buffer_offsets(), and set it to
gr_gk20a_get_pm_ctx_buffer_offsets() for all chips.
Bug 2510974
Jira NVGPU-5360
Change-Id: Ib5e84ed5c8b6e72cc6923161e55fc2c3a6a4070e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2418306
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL, g->ops.gr.reset_hwpm_pmm_registers(), to reset all
HWPM registers while binding HWPM in global mode in
nvgpu_profiler_bind_hwpm().
Add the new HALs below to get the sys/gpc/fbp register lists and
counts:
g->ops.perf.get_hwpm_sys_perfmon_regs()
g->ops.perf.get_hwpm_gpc_perfmon_regs()
g->ops.perf.get_hwpm_fbp_perfmon_regs()
Auto-generate all the HWPM registers in the arrays below for
gv11b/tu104 (a sketch of the reset loop follows the list):
static const u32 hwpm_sys_perfmon_regs[]
static const u32 hwpm_gpc_perfmon_regs[]
static const u32 hwpm_fbp_perfmon_regs[]
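A minimal sketch of how a reset might walk such arrays; the register
I/O callback and the zero reset value are assumptions of the sketch:

    /*
     * Sketch: write a reset value to every perfmon register offset
     * in a generated array.
     */
    #include <stdint.h>

    static void reset_perfmon_regs(const uint32_t *regs, uint32_t count,
                                   void (*write_reg)(uint32_t offset,
                                                     uint32_t value))
    {
        for (uint32_t i = 0U; i < count; i++)
            write_reg(regs[i], 0U);  /* reset value assumed 0 */
    }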
Bug 2510974
Jira NVGPU-5360
Change-Id: I2ca5c04ed75c7b30ae942807bf018a24551d7ba0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2414934
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>