- Modify NVGPU_GPU_IOCTL_ALLOC_AS and struct nvgpu_alloc_as_args to
accept the start address and size of user memory. This makes address
space allocation configurable (see the sketch below).
- Modify gk20a_as_alloc_share() and gk20a_vm_alloc_share() to receive
va_range_start and va_range_end values.
- gk20a_vm_alloc_share() initializes vm with low_hole = va_range_start,
and user vma size = (va_range_end - va_range_start).
- Modify nvgpu_as_alloc_space_args and nvgpu_as_free_space_args to
accept a 64-bit number of pages.
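A minimal sketch of how the extended args and the derived VM layout
could look; the field and variable names here are illustrative
assumptions, not the actual UAPI layout:

  #include <linux/types.h>

  /* hypothetical sketch of the extended ALLOC_AS args */
  struct nvgpu_alloc_as_args {
      __u32 big_page_size;
      __s32 as_fd;           /* out: fd for the new VA space */
      __u64 va_range_start;  /* in: start of the user VA range */
      __u64 va_range_end;    /* in: end of the user VA range */
  };

  /* inside gk20a_vm_alloc_share(): derive the VM layout */
  low_hole = va_range_start;
  user_vma_size = va_range_end - va_range_start;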
Bug 2043269
JIRA NVGPU-5302
Change-Id: I243995adf5b7e0e84d6b36abe3b35a5ccabd7a37
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2385496
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In __gk20a_channel_open(), if runlist_id is provided as -1, pick up
the correct GPU instance specific default runlist id using
nvgpu_grmgr_get_gpu_instance_runlist_id().
Also, get the GPU instance id using nvgpu_get_gpu_instance_id_from_cdev().
If runlist_id is received as input, check that it is valid for the
given GPU instance with nvgpu_grmgr_is_valid_runlist_id().
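A sketch of the resulting selection logic; the exact signatures of the
grmgr helpers are assumptions based on their names:

  u32 gpu_instance_id = nvgpu_get_gpu_instance_id_from_cdev(g, cdev);

  if (runlist_id == -1) {
      /* fall back to the instance-specific default */
      runlist_id = nvgpu_grmgr_get_gpu_instance_runlist_id(g,
              gpu_instance_id);
  } else if (!nvgpu_grmgr_is_valid_runlist_id(g, gpu_instance_id,
              runlist_id)) {
      return -EINVAL; /* runlist not valid for this instance */
  }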
Jira NVGPU-5648
Change-Id: I69303a3dd81f28f474b40564da51254bcaa1ed15
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2435467
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add new API nvgpu_get_gk20a_from_cdev() that extracts the gk20a pointer
from a cdev pointer. This helps keep cdev related implementation
details in ioctl.c and away from other device ioctl files.
Also move struct nvgpu_cdev, nvgpu_class, and nvgpu_cdev_class_priv_data
from os_linux.h to ioctl.h, since these structures are IOCTL related
and are better kept in an IOCTL-specific header.
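Illustrative usage from a device ioctl file; the surrounding open()
plumbing is assumed:

  /* resolve the GPU from the character device without
   * peeking into cdev internals */
  struct gk20a *g = nvgpu_get_gk20a_from_cdev(cdev);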
Jira NVGPU-5648
Change-Id: Ifad8454fd727ae2389ccf3d1ba492551ef1613ac
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2435466
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add new API nvgpu_get_gpu_instance_id_from_cdev() that returns the GPU
instance id from an nvgpu_cdev pointer.
Store the cdev pointer in the channel private data channel_priv and in
the ctrl node private data gk20a_ctrl_priv.
Update the below functions to pass the cdev pointer:
__gk20a_channel_open()
gk20a_channel_open_ioctl()
In gk20a_channel_ioctl(), extract the gpu instance id using the cdev
pointer stored in channel_priv and the new API
nvgpu_get_gpu_instance_id_from_cdev(). Extract the GR instance id using
nvgpu_grmgr_get_gr_instance_id().
Invoke the context creation API inside
nvgpu_gr_exec_with_err_for_instance() so that the context is created
with the correct GR instance id.
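A condensed sketch of the ioctl path described above; the helper
signatures and the context-creation call are illustrative assumptions:

  u32 gpu_instance_id = nvgpu_get_gpu_instance_id_from_cdev(g,
          priv->cdev);
  u32 gr_instance_id = nvgpu_grmgr_get_gr_instance_id(g,
          gpu_instance_id);

  /* run context creation against the right GR instance */
  err = nvgpu_gr_exec_with_err_for_instance(g, gr_instance_id,
          channel_alloc_obj_ctx(ch, args));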
Jira NVGPU-5648
Change-Id: I5a4e79165e021b56181d08105b2185306a19703b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2435465
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Any PDE can allocate memory with a specific page size. That means
memory allocations with page sizes 4K and 64K will be realized by
different PDEs with page size (or PTE size) 4K and 64K respectively.
To accomplish this, the user vma is required to be PDE-aligned.
Currently, the user vma is aligned to (big_page_size << 10), a value
carried over from when the pde size was equivalent to
(big_page_size << 10).
Modify the user vma alignment check to use the pde size.
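A sketch of the updated check, assuming a helper that reports the VA
span covered by one PDE; the names are illustrative:

  u64 pde_size = 1ULL << pde_coverage_bit_count(vm);

  /* both ends of the user vma must sit on a PDE boundary */
  if ((user_vma_start & (pde_size - 1ULL)) != 0ULL ||
      (user_vma_end & (pde_size - 1ULL)) != 0ULL) {
      return -EINVAL;
  }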
JIRA NVGPU-5302
Change-Id: I2c6599fe50ce9fb081dd1f5a8cd6aa48b17b33b4
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428327
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In MIG mode, each of the dev nodes should be enumerated for each fGPU,
and for the physical instance only the "ctrl" node should be
enumerated. Support this with the below set of changes:
- Add struct nvgpu_mig_static_info that describes the static GPU
instance configuration. GPCs are enumerated only during poweron, and
the grmgr unit populates instance information based on the number of
GPCs.
For Linux, GPU poweron happens only with the first gk20a_busy() call,
so instance information is not available at probe() time. Hence this
static table is a temporary solution until a proper one is identified.
- Add nvgpu_default_mig_static_info for iGPU and
nvgpu_default_pci_mig_static_info for dGPU, which describe the GPU
instance partitioning.
- Add new function nvgpu_prepare_mig_dev_node_class_list() that parses
the static table and creates one class per instance in MIG mode.
Non-MIG mode classes are now enumerated in
nvgpu_prepare_default_dev_node_class_list() (see the sketch after this
list).
- Add new structure nvgpu_cdev_class_priv_data to store private data
for each cdev. This holds instance specific information; a pointer to
the private data is maintained in struct class and is also passed as
private data while creating the device node with device_create().
- Add nvgpu_mig_fgpu_devnode() to set dev node paths/names for fGPUs,
and nvgpu_mig_phys_devnode() to set dev node paths/names for the
physical instance in MIG mode.
- Add new field mig_physical_node to struct nvgpu_dev_node. This field
is set if the corresponding dev node should be created for the physical
instance in MIG mode. For now, set it only for the "ctrl" node.
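A sketch of the enumeration dispatch this enables; the MIG flag check
and the function signatures are assumptions:

  /* pick the class list builder based on MIG mode */
  if (nvgpu_is_enabled(g, NVGPU_SUPPORT_MIG))
      err = nvgpu_prepare_mig_dev_node_class_list(g);
  else
      err = nvgpu_prepare_default_dev_node_class_list(g);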
Jira NVGPU-5648
Change-Id: Ic97874eece1fbe0083b3ac4c48e36e06004f1bc2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2434586
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Lakshmanan M <lm@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the devnode_class pointer from struct nvgpu_os_linux and replace
it with a list head.
Add new structure nvgpu_class to store class related meta-data, and
create it dynamically in nvgpu_create_class().
Add new function nvgpu_prepare_dev_node_class_list() to prepare the
list of all classes required for each GPU.
For now there is only one class per GPU, but in MIG mode multiple
classes will be created, one class per instance.
Update gk20a_user_init() to loop through the list of classes and create
dev nodes for each class.
gk20a_user_deinit() frees up the linked list.
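A minimal sketch of the per-class metadata and its list linkage; the
field names are assumptions:

  struct nvgpu_class {
      struct class *class;               /* the Linux device class */
      struct nvgpu_list_node list_entry; /* links into nvgpu_os_linux */
  };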
Jira NVGPU-5648
Change-Id: I891a55c0ce1c2ff9db094564529b3f569df9735c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428501
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Remove the static dev node meta data from struct nvgpu_os_linux and
replace it with a dynamic list. Struct nvgpu_os_linux only keeps track
of the list head and the number of entries.
Add new structure nvgpu_cdev to store the meta data of each dev node,
and create and set it up dynamically in gk20a_user_init(). Once done,
add the new node to the list head maintained in nvgpu_os_linux.
Add a static table dev_node_list[] that contains the dev node names
and file operations. This static table is used to create the nvgpu_cdev
data structures and to register the new device nodes.
Update all dev node open file operations (e.g. gk20a_as_dev_open()) to
extract the struct gk20a pointer from the device pointer of the dev
node. The gk20a device is the parent of the dev node device.
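A one-line sketch of the parent-based lookup in an open() callback,
assuming nvgpu's get_gk20a() device accessor and with the lookup of the
node's struct device (node_dev) elided:

  /* the dev node's parent is the GPU device itself */
  struct gk20a *g = get_gk20a(node_dev->parent);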
Jira NVGPU-5648
Change-Id: If070c3428afd6215e45b4919335d9f43e04c36f9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428500
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Remove the static class definition and registration for iGPU and dGPU.
Create the class dynamically in gk20a_user_init() and set up the
callback function that creates the devnode name based on GPU type.
For now, add the nvgpu_pci_devnode() callback for dGPU that sets the
correct dev node path for dGPUs. For iGPU, Android apparently does not
honor the dev node path set in the callback, so override the device
name for iGPU with the function nvgpu_devnode().
Destroy the class in gk20a_user_deinit().
This will overall be helpful in adding multiple classes and dev nodes
for each GPU instance in MIG mode.
Set the GPU device pointer as the parent of the new devices created
with device_create(). This helps in getting the GPU device name in the
callback function nvgpu_pci_devnode().
Update the below functions to not pass the class structure and
interface names:
nvgpu_probe()
gk20a_user_init()
gk20a_user_deinit()
nvgpu_remove()
Remove the static interface name format INTERFACE_NAME since it is no
longer needed.
Update GK20A_NUM_CDEVS to 10 since there are 10 dev nodes per GPU right
now.
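A hedged sketch of the dGPU callback built on that parent relationship;
the exact path format is an assumption:

  static char *nvgpu_pci_devnode(struct device *dev, umode_t *mode)
  {
      /* parent is the GPU device, so its name can key the path,
       * e.g. nvgpu/pci-<bdf>/<node> */
      return kasprintf(GFP_KERNEL, "nvgpu/%s/%s",
              dev_name(dev->parent), dev_name(dev));
  }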
Jira NVGPU-5648
Change-Id: I5d41db5a0f87fa4a558297fb4135a9fbfcd51080
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2423492
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use a fixed address mapping for the pma byte buffer so that the address
of this buffer always fits in 32 bits.
This also requires moving the unmap sequence to an OS specific function
since a different unmap API is now needed for Linux and QNX.
Also call nvgpu_prof_free_pma_stream_priv_data() before
nvgpu_profiler_free_pma_stream(), since the former uses mm->perfbuf,
which is released by the latter.
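The required teardown order, with the argument lists assumed:

  /* priv data teardown still dereferences mm->perfbuf ... */
  nvgpu_prof_free_pma_stream_priv_data(prof);
  /* ... which this call releases, so it must come second */
  nvgpu_profiler_free_pma_stream(prof);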
Bug 2510974
Jira NVGPU-5360
Change-Id: I398b0ca4f96527d6e09c9aacacb4b43c90f5bfc9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2424691
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Background: there is a race that occurs when the l2_fb_ops ioctl is
invoked. The race occurs as part of the flush() call while a
gk20a_idle() is in progress.
This patch handles the race by making changes in the l2_fb_ops
ioctl itself. For cases where pm_runtime is disabled or railgate is
disabled, we allow this ioctl call to always go ahead as power is
assumed to be always on.
For the other case, we first check the status of g->power_on. In the
driver, g->power_on is set to true once unrailgate is completed, and
set to false just before calling railgate.
For Linux, the driver invokes gk20a_idle(), but there is a delay after
which the call to rpm_suspend()'s callback gets triggered. This leads
to a scenario where we cannot efficiently rely on the runtime_pm APIs
to block an imminent suspend or to exit if the suspend is currently in
progress. Previous attempts at solving this have led to ineffective
solutions and made the code much more complicated to maintain.
With regards to the above, this patch attempts to simplify the way this
can be solved. The patch calls gk20a_busy() when g->power_on = true.
This prevents the race with gk20a_idle(). Based on the rpm_resume()
and rpm_suspend() upstream code, a resume is prioritized over a suspend
unless a suspend is already in progress, i.e. the delay period has been
served and the suspend has invoked the callback. There is a very small
window for this to happen, and the ioctl can then power up the device,
as evident from the gk20a_busy() calls.
The nvgpu power state is queried using nvgpu_is_powered_off() to
determine whether to skip the resume. The power state is protected
under a spinlock.
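A condensed sketch of the resulting ioctl guard; the railgate check and
the flush helper are stand-ins for the actual code:

  if (!pm_runtime_enabled(dev) || !g->can_railgate) {
      /* power is assumed always on; proceed directly */
      err = do_l2_flush(g);
  } else if (!nvgpu_is_powered_off(g)) {
      /* take a usage ref; this beats a not-yet-started suspend */
      err = gk20a_busy(g);
      if (err == 0) {
          err = do_l2_flush(g);
          gk20a_idle(g);
      }
  }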
Bug 200507468
Change-Id: I5c02dfa8ea855732e59b759d167152cf45a1131f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2299545
(cherry picked from commit 06942bd268)
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2425682
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
In Linux kernel v4.14 and below, the gpu sysfs node is created under
/sys/devices. In Linux kernel v5.x it is created under
/sys/devices/platform.
Create a symbolic link gpu.0 under /sys/devices/ since various tests
and scripts expect it to be there.
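A hedged sketch; reaching the /sys/devices kobject from the platform
device is shown via its parents and may differ from the actual change:

  /* dev sits under /sys/devices/platform; two hops up is /sys/devices */
  err = sysfs_create_link(dev->kobj.parent->parent, &dev->kobj,
          "gpu.0");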
Bug 200665782
Change-Id: I807ce72fad94438f927df25e829082e771b72543
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2426544
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fuse registers should be queried with physical gpc-ids and not the
logical ones. For tu104 and earlier chips, physical gpc-ids are the
same as logical ones in a non-floorswept config, but for newer chips
they may differ.
Also, the logical to physical mapping is not present for a floorswept
gpc, so query the gpc_tpc mask only up to the actual gpcs that are
present.
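A sketch of the intended loop; the mapping helper and the fuse accessor
names are placeholders for whatever the grmgr/fuse code provides:

  for (gpc = 0U; gpc < num_present_gpcs; gpc++) {
      u32 phys_gpc = logical_to_physical_gpc_id(g, gpc);

      gpc_tpc_mask[gpc] = g->ops.fuse.fuse_status_opt_tpc_gpc(g,
              phys_gpc);
  }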
Jira NVGPU-6080
Change-Id: I84c4a3c1f256fdd1927f4365af26e9892fe91beb
Signed-off-by: shashank singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2417721
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The simulator ring buffer DMA interface supports buffers of the
following sizes: 4, 8, 12 and 16K. At present, it is configured to 4K,
which happens to match the kernel PAGE_SIZE that is used to wrap the
GET/PUT pointers once 4K is reached. However, the two are not always
equal; for instance, take 64K pages. Hence, replace PAGE_SIZE with
SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and
replace the latter with the former.
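The distinction in a sketch; SZ_4K comes from linux/sizes.h and the
wrap expression (with an illustrative entry_size) is an assumption:

  #define SIM_BFR_SIZE        SZ_4K      /* ring buffer size */
  #define NVGPU_CPU_PAGE_SIZE PAGE_SIZE  /* CPU page size alias */

  /* GET/PUT wrap against the buffer size, not the CPU page size */
  put = (put + entry_size) % SIM_BFR_SIZE;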
Bug 200658101
Jira NVGPU-6018
Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Mapping large buffers through the GMMU ends up needing many pages for
the PTE tables. Allocating these one by one can be a performance
bottleneck, particularly in the virtualized case.
This change adds the following:
- As the TLB invalidation doesn't have access to mem_off,
allow top-level allocation by alloc_cache_direct().
- Define NVGPU_PD_CACHE_SIZE, the allocation size of a new slab for
the PD cache, effectively set to 64KB.
- Use the PD cache for any allocation < NVGPU_PD_CACHE_SIZE (see the
sketch after this list). When freeing cached entries, avoid prefetch
errors by invalidating the entry (memset to 0).
- Try to fall back to direct allocation of smaller chunks when a
contiguous allocation fails.
- Unit test changes.
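The size-based dispatch in a sketch; the allocator entry points are
assumed names:

  #define NVGPU_PD_CACHE_SIZE SZ_64K

  if (pd_bytes < NVGPU_PD_CACHE_SIZE)
      err = pd_cache_alloc(g, pd, pd_bytes);  /* slab-backed */
  else
      err = pd_alloc_direct(g, pd, pd_bytes); /* direct DMA alloc */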
Bug 200649243
Change-Id: I0a667af0ba01d9147c703e64fc970880e52a8fbc
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2404371
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Update nvgpu_gr_zbc as:

  struct nvgpu_gr_zbc {
      struct nvgpu_mutex zbc_lock;         /* Lock to access zbc table */
      struct zbc_color_table *zbc_col_tbl; /* SW zbc color table pointer */
      struct zbc_depth_table *zbc_dep_tbl; /* SW zbc depth table pointer */
      struct zbc_stencil_table *zbc_s_tbl; /* SW zbc stencil table pointer */
      u32 min_color_index;      /* Minimum valid color table index */
      u32 min_depth_index;      /* Minimum valid depth table index */
      u32 min_stencil_index;    /* Minimum valid stencil table index */
      u32 max_color_index;      /* Maximum valid color table index */
      u32 max_depth_index;      /* Maximum valid depth table index */
      u32 max_stencil_index;    /* Maximum valid stencil table index */
      u32 max_used_color_index;   /* Max used color table index */
      u32 max_used_depth_index;   /* Max used depth table index */
      u32 max_used_stencil_index; /* Max used stencil table index */
  };
Add global struct nvgpu_gr_zbc_table_indices:

  struct nvgpu_gr_zbc_table_indices {
      u32 min_color_index;
      u32 min_depth_index;
      u32 min_stencil_index;
      u32 max_color_index;
      u32 max_depth_index;
      u32 max_stencil_index;
  };
Currently, hw zbc table registers are written during both
gr_init_setup_sw() and gr_init_setup_hw().
- Rename nvgpu_gr_zbc_load_default_table() to
nvgpu_gr_zbc_load_default_sw_table() and only update the sw copy of the
zbc table during gr_init_setup_sw().
- Modify nvgpu_gr_zbc_load_table() to write the zbc values stored in
the sw zbc table to the hw registers.
Restructure the zbc functions per zbc type, i.e. color, depth and
stencil.
Add a gr.zbc.init_table_indices() hal to initialize the zbc indices.
Valid ZBC table indices start from 1, while HW indices start from 0 for
the color, depth and stencil tables. Note that the corresponding format
registers follow the ZBC index range starting at 1.
- void (*init_table_indices)(struct gk20a *g,
      struct nvgpu_gr_zbc_table_indices *zbc_indices);
- Add corresponding functions for legacy chips (see the sketch below).
- Add zbc color, depth and stencil table size hw defines.
- Remove the ltc.zbc_table_size() hal.
- Update ltc.set_zbc_s_entry(), ltc.set_zbc_color_entry() and
ltc.set_zbc_depth_entry() accordingly.
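A hedged sketch of a legacy-chip implementation of the new hal; the
table-size defines are assumed names:

  static void gm20b_gr_zbc_init_table_indices(struct gk20a *g,
          struct nvgpu_gr_zbc_table_indices *zbc_indices)
  {
      /* valid SW indices start at 1, not at the HW index 0 */
      zbc_indices->min_color_index = 1U;
      zbc_indices->min_depth_index = 1U;
      zbc_indices->min_stencil_index = 1U;
      zbc_indices->max_color_index = ZBC_COLOR_TABLE_SIZE;
      zbc_indices->max_depth_index = ZBC_DEPTH_TABLE_SIZE;
      zbc_indices->max_stencil_index = ZBC_STENCIL_TABLE_SIZE;
  }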
Bug 3122410
Bug 3122649
Change-Id: Ib799991ad35c6613534c0a6eb07f3bf24e600dc5
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2417620
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>