Update the nvgpu_runlist_update_for_channel() function:
- Rename it to nvgpu_runlist_update()
- Have it take a pointer to the runlist to update instead of a
  runlist ID. For the most part this makes the code better, but
  there are a few places where it's worse (for now).
This starts the slow and painful process of moving non-runlist code
away from using runlist IDs in the many places it should not use them.
Most of this patch just fixes compilation problems arising from the
minor header updates.
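A rough sketch of the signature change; the exact parameter list is
illustrative, not copied from the patch:

  /* Before: callers passed a runlist ID. */
  int nvgpu_runlist_update_for_channel(struct gk20a *g, u32 runlist_id,
                                       struct nvgpu_channel *ch,
                                       bool add, bool wait_for_finish);

  /* After: callers pass the runlist object itself. */
  int nvgpu_runlist_update(struct gk20a *g, struct nvgpu_runlist *rl,
                           struct nvgpu_channel *ch,
                           bool add, bool wait_for_finish);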
JIRA NVGPU-6425
Change-Id: Id9885fe655d1d750625a1c8aceda9e67a2cbdb7a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2470304
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add a new MAPPING_MODIFY ioctl to the Linux nvgpu driver.
This ioctl is used (for example) by the NvRmGpuMappingModify API to
change the kind of an existing mapping.
For compressed mappings the ioctl can be used to do the following:
* switch between two different compressed kinds
* switch between compressed and incompressible kinds
For incompressible mappings the ioctl can be used to do the following:
* switch between two different incompressible kinds
In order to properly update an existing mapping the nvgpu_mapped_buf
structure has been extended to cache the following state when the
mapping is first created:
* the compression tag offset (if applicable)
* the GMMU read/write flags
* the memory aperture
The unused ctag_lines field in the nvgpu_ctag_buffer_info structure
has been replaced with a new ctag_offset field.
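A hypothetical sketch of the ioctl argument structure; the field names
and layout here are illustrative, not the actual UAPI:

  struct nvgpu_as_mapping_modify_args {
          __s16 compr_kind;   /* in: new compressible kind, or invalid */
          __s16 incompr_kind; /* in: new incompressible kind, or invalid */
          __u64 map_address;  /* in: GPU VA of the existing mapping */
  };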
Jira NVGPU-6374
Change-Id: I647ab9c2c272e3f9b52f1ccefc5e0de4577c14f1
Signed-off-by: scottl <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2468100
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename struct nvgpu_runlist_info to struct nvgpu_runlist; the 'info'
suffix is not necessary. struct nvgpu_runlist is soon to be a
first-class object in the nvgpu object model.
Also rename the fields runlist_info and active_runlist_info to simply
runlists and active_runlists, respectively. Again, the 'info' text is
unnecessary and somewhat misleading: these structs _are_ the runlist
representations in SW; they are not merely informational.
Also add an rl_dbg() macro to print debug info specific to runlist
management, plus some debug prints describing the runlist topology of
the running chip.
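A minimal sketch of what the macro could look like, assuming a
runlist-specific log mask (the mask name gpu_dbg_runlists is a guess):

  #define rl_dbg(g, fmt, args...) \
          nvgpu_log((g), gpu_dbg_runlists, fmt, ##args)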
Change-Id: Id9fcbdd1a7227cb5f8c75cca4abbff94fe048e49
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2470303
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The NEXT bit can remain set for the channel if the timeslice expires
before the scheduler clears it. Due to this, nvgpu fails the TSG
unbind, and in turn nvrm_gpu fails the channel close. In this case,
re-checking the channel HW state after a short delay can show the NEXT
bit cleared by the scheduler. Re-enable the TSG and return -EAGAIN to
nvrm_gpu so that it can retry.
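A sketch of the retry path with illustrative names for the HW state
read-back and delay:

  /* NEXT may still be set if the timeslice expired before the
   * scheduler cleared it; re-check after a short delay. */
  g->ops.channel.read_state(g, ch, &hw_state);
  if (hw_state.next) {
          nvgpu_usleep_range(1000, 2000); /* illustrative delay */
          g->ops.channel.read_state(g, ch, &hw_state);
          if (hw_state.next) {
                  nvgpu_tsg_enable(tsg); /* re-enable the TSG */
                  return -EAGAIN;        /* let nvrm_gpu retry */
          }
  }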
Bug 3144960
Change-Id: I35f417f02270e371a4e632986b73a00f8a4f921a
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2468391
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
When building NVGPU with the GCC -Werror=sizeof-pointer-memaccess
warning enabled, the following error is seen ...
drivers/gpu/nvgpu/common/mm/as.c: In function ‘gk20a_vm_alloc_share’:
drivers/gpu/nvgpu/common/mm/as.c:131:33: error: argument to ‘sizeof’
in ‘strncpy’ call is the same expression as the source; did you
mean to use the size of the destination?
[-Werror=sizeof-pointer-memaccess]
131 | p = strncpy(name, "as_", sizeof("as_"));
| ^
This occurs because the source buffer is passed to sizeof instead of
the destination. This could cause a buffer overflow if the source is
larger than the destination buffer.
Looking at the code further, there is another problem: after copying
the string 'as_' to the 'name' buffer, the pointer 'p' returned by
strncpy is used as the address at which to append an unsigned integer
to the string 'as_'. However, the pointer returned by strncpy is the
same address as 'name', and so the prefix 'as_' is actually
overwritten by the call to nvgpu_strnadd_u32.
Fix these issues by statically initialising the 'name' buffer to 'as_'
and setting the pointer 'p' to the offset in the 'name' buffer that
follows the prefix 'as_'. This removes the need to use strncpy at all
and simplifies the code.
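A sketch of the fixed code; the buffer length is illustrative and the
nvgpu_strnadd_u32() signature is assumed from the description:

  char name[32] = "as_";          /* static prefix, no strncpy */
  char *p = name + strlen("as_"); /* append point after the prefix */

  nvgpu_strnadd_u32(p, id, sizeof(name) - strlen("as_"), 10U);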
Bug 200689205
Change-Id: Ia9f0c634dc5a6dada088756cdae8c3dd688dcc48
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2465814
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add CAU initialization data in const array hwpm_cau_init_data[].
Add HAL API gops.gr.get_hwpm_cau_init_data() to retrieve this data
and implement it for TU104.
Add a new HAL API gops.gr.init_cau() that uses the above data and
initializes all CAU units. Implement this HAL only for TU104.
Invoke the above sequence from nvgpu_profiler_bind_hwpm() in the case
of global HWPM mode.
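A hypothetical shape for the two new HAL entry points; the prototypes
are a guess based on the description above:

  /* Return the const CAU init data array and its entry count. */
  const u32 *(*get_hwpm_cau_init_data)(u32 *count);

  /* Program all CAU units from the init data; TU104 only. */
  void (*init_cau)(struct gk20a *g);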
Jira NVGPU-5360
Change-Id: I1c7a380e9d04d6cd45fb7f746c0a79fc56675244
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2463854
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The new profiler APIs set the regop type in
nvgpu_prof_get_regops_staging_data() based on whether a context is
bound. But it is possible that ctxsw is not enabled for a particular
HWPM resource even if a context is bound to the profiler object.
Fix this by deriving the regop type from the per-resource ctxsw flag
instead of from the bound context.
Add a reg_op_type[] array to the profiler object to track the regop
type for each HWPM resource. Initialize the array from the resource
ctxsw flag in nvgpu_profiler_pm_resource_reserve().
Update profiler_obj_validate_reg_op_offset() to get the regop type
from nvgpu_profiler_validate_regops_allowlist() and combine it with
prof->reg_op_type[] to get the actual type that should be used for
that regop.
Update validate_reg_ops() to validate the offset first, since the
regop type is now determined during offset validation. Set
ops[i].status to 0 for each validation iteration, and if the op is
valid, set it to REGOP(STATUS_SUCCESS) at the end of the iteration.
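A sketch of the reworked loop; the helper names are simplified:

  for (i = 0U; i < num_ops; i++) {
          ops[i].status = 0U; /* reset per iteration */

          /* Offset validation now also determines the regop type. */
          if (!validate_reg_op_offset(prof, &ops[i]))
                  continue;
          if (!validate_reg_op_info(&ops[i]))
                  continue;

          ops[i].status = REGOP(STATUS_SUCCESS);
  }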
Bug 2510974
Jira NVGPU-5360
Change-Id: Ib1f75d840d04d288789473adabda02cdc807eea0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2460003
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add ptimer register offsets to the regops allowlist for testing. The
new allowlist restricts regops to reserved resources only, which makes
it difficult to test the interface, since only HWPM registers can be
accessed and that could have side effects on the system.
Using ptimer registers as test offsets has the advantage that the
offsets do not change across chips, the registers are read-only, and
the values are always incrementing, so a test can verify read regops
and exercise the various flags of the interface.
Add a gops.ptimer.get_timer_reg_offsets() HAL to return the timer
offsets. Add a static function add_test_range_to_map() that always
adds the timer offsets to the allowlist.
In nvgpu_profiler_validate_regops_allowlist(), return success if the
timer offsets are hit in the range search.
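A hypothetical prototype for the new HAL; the out-parameters are a
guess:

  /* Return the ptimer register offsets usable as test offsets
   * in the regops allowlist. */
  void (*get_timer_reg_offsets)(u32 **offsets, u32 *num_offsets);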
Bug 2510974
Jira NVGPU-5360
Change-Id: I8b51bb92e43e8b1bbe903c874a429341659ef603
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2460002
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add gv11b and tu104 HALs to get the allowed HWPM resource register
ranges, offsets, and stride metadata.
Add a new enum nvgpu_pm_resource_hwpm_register_type for the HWPM
register type. Add a new struct nvgpu_pm_resource_register_range_map
to store all the register ranges for HWPM resources. Add a pointer to
the map, along with the map entry count, to struct
nvgpu_profiler_object.
Add a new API nvgpu_profiler_build_regops_allowlist() to build the
regops allowlist dynamically while binding the resources. The map
entry count is retrieved with
get_pm_resource_register_range_map_entry_count(), and only those
ranges whose resource is reserved by the profiler object are added.
Add nvgpu_profiler_destroy_regops_allowlist() to destroy the allowlist
while unbinding the resources.
Add a static function allowlist_range_search() to search for a
register offset in the HWPM resource ranges, and another static
function allowlist_offset_search() to search for the offset in the
per-resource offset list.
Add nvgpu_profiler_validate_regops_allowlist(), which accepts an
offset value, checks whether it is in the allowed ranges using
allowlist_range_search(), and then checks whether the offset is in the
allowlist using allowlist_offset_search().
Update gops.regops.exec_regops() to receive a profiler object pointer
as a parameter.
Invoke nvgpu_profiler_validate_regops_allowlist() from
validate_reg_ops() if the prof pointer is non-NULL. This will be true
only for the new profiler stack, not for legacy profilers.
In gr_exec_ctx_ops(), skip regop execution if the offset is invalid.
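A hypothetical layout for one map entry; the field names are
illustrative, only the range and type are implied by the text above:

  struct nvgpu_pm_resource_register_range_map {
          u32 start; /* first offset of the allowed range */
          u32 end;   /* last offset of the allowed range */
          enum nvgpu_pm_resource_hwpm_register_type type;
  };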
Bug 2510974
Jira NVGPU-5360
Change-Id: I40acb91cc37508629c83106ea15b062250bba473
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2460001
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Implement the functions below:
- nvgpu_profiler_quiesce_hwpm_streamout_resident
Teardown sequence used when the context is resident, or when the
profiling session is a device-level session.
- nvgpu_profiler_quiesce_hwpm_streamout_non_resident
Teardown sequence used when the context is non-resident.
- nvgpu_profiler_quiesce_hwpm_streamout
Generic sequence that calls either of the above APIs based on whether
the context is resident (see the sketch after the HAL list below).
Trigger the HWPM streamout teardown sequence while unbinding resources
in nvgpu_profiler_unbind_hwpm_streamout().
Add a new HAL gops.gr.is_tsg_ctx_resident to call
gk20a_is_tsg_ctx_resident() from common code.
Implement the below supporting HALs for the resident teardown
sequence:
- gops.perf.pma_stream_enable()
- gops.perf.disable_all_perfmons()
- gops.perf.wait_for_idle_pmm_routers()
- gops.perf.wait_for_idle_pma()
- gops.gr.disable_cau()
- gops.gr.disable_smpc()
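A sketch of the generic dispatch; the argument lists are simplified
and the device-level check is illustrative:

  /* Pick the teardown path based on context residency. */
  if (is_device_level_session || g->ops.gr.is_tsg_ctx_resident(tsg))
          err = nvgpu_profiler_quiesce_hwpm_streamout_resident(prof);
  else
          err = nvgpu_profiler_quiesce_hwpm_streamout_non_resident(prof);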
Jira NVGPU-5360
Change-Id: I304ea25d296fae0146937b15228ea21edc091e16
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2461333
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Starting with nvgpu-next, the ctxsw ucode computes a checksum for each
ctxsw'ed register list and saves it at the end of the same list. This
entry is given the special placeholder address 0x00ffffff, which can
be used to distinguish it from the other entries in the register list.
There is only one checksum per list, even if the list has multiple
subunits. Hence, update "add_ctxsw_buffer_map_entries_subunits" to
avoid adding checksum entries for each subunit within a list.
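A sketch of the subunit guard; the macro name is hypothetical, the
placeholder value comes from the text above:

  #define CTXSW_CHECKSUM_PLACEHOLDER_ADDR 0x00ffffffU

  /* Only one checksum entry exists per list: add it for the first
   * subunit and skip it for the rest. */
  if (addr == CTXSW_CHECKSUM_PLACEHOLDER_ADDR && subunit_idx != 0U)
          continue;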
Bug 2916121
Change-Id: Ia7abedc7467ae8158ce3e791a67765fb52889915
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2457579
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The function "nvgpu_engine_mmu_fault_id_to_eng_id_and_veid" updates only
the veid field and leaves the engine_id as invalid. This can cause the
recovery to be skipped in certain instances of MMUFAULT; For example,
the MMUFAULT when a unbind is done on a channel which is currently active
on the engine. In this case, the ch_id associated with the fault is -1 and
the function "gv11b_mm_mmu_fault_handle_non_replayable" will not set the
rc_type correctly causing recovery to be skipped and leaving the engine in
a bad state.
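A sketch of the fix with illustrative names; the point is that the
engine id must be populated alongside the veid:

  /* Report the engine id as well as the veid, so that
   * gv11b_mm_mmu_fault_handle_non_replayable() can pick the
   * correct rc_type even when ch_id is -1. */
  if (mmu_fault_id >= gr_fault_id_base &&
      mmu_fault_id < gr_fault_id_base + num_subctx) {
          *eng_id = gr_eng_id;
          *veid = mmu_fault_id - gr_fault_id_base;
  }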
Bug 3163660
Change-Id: Ic99c47771a4002c153ac77ab0473b11d01cfd54a
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2457259
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Update the PMU ucode version for next PMU to 29323513.
This version is taken from P4 CL#29323216.
Changes:
- Enabled ACR task support
- Disabled a few features/code paths so that commands work
- Fixed the ELPG FIFO preemption HALs
- Commented out the halt functions in the ELPG save and restore
functions, as the bloaded flag is not getting set. This is not
significant, as the change has no impact on ELPG functionality.
P4 ToT CL on which the above change was made: P4 CL#29322732
P4 CL link: https://p4sw-swarm.nvidia.com/changes/29323216
Bug 200666202
Signed-off-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Change-Id: I34581cc15889463fa363cffb369485171c603247
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2447234
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Always mapping GMMU kernel pages using 4kB pages keeps the driver
consistent with the native nvgpu driver. This is a workaround for
enabling 64KB OS kernel page support.
In the long-term solution, GMMU_PAGE_SIZE_KERNEL will be the OS kernel
page size, and nvgpu_gmmu_update_page_table() will choose big or small
pages by comparing GMMU_PAGE_SIZE_KERNEL with the small and big page
sizes. Correspondingly, vgpu will choose the kernel page size by the
same comparison when sending map commands to the server.
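A sketch of that long-term selection; illustrative only:

  /* Pick the PTE size for kernel mappings by comparing the kernel
   * page size against the VM's big page size. */
  u32 pgsz_idx = (vm->gmmu_page_sizes[GMMU_PAGE_SIZE_KERNEL] >=
                  vm->gmmu_page_sizes[GMMU_PAGE_SIZE_BIG]) ?
                  GMMU_PAGE_SIZE_BIG : GMMU_PAGE_SIZE_SMALL;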
Bug 3015296
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Change-Id: I5d25280a9410da3ef628e5914ea962a76b102273
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2437193
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Update gk20a_ctrl_dev_ioctl() to fetch gpu_instance_id with
nvgpu_get_gpu_instance_id_from_cdev() and gr_instance_id with
nvgpu_grmgr_get_gr_instance_id().
Get the instance-specific GR engine configuration pointer with
nvgpu_gr_get_gpu_instance_config_ptr().
Update gk20a_ctrl_ioctl_gpu_characteristics() to return
instance-specific characteristics with the below changes:
- The 0th GPU instance is a physical instance. Set a limited set of
relevant characteristics flags for the 0th instance.
For the rest of the instances, and for non-MIG mode, continue fetching
flags with nvgpu_ctrl_ioctl_gpu_characteristics_flags.
- nvgpu_set_preemption_mode_flags() should be called only for non-MIG
mode and for non-zero instances in MIG mode.
- In MIG mode, the 0th instance does not support any classes; the rest
of the instances support only the compute, copy, and gpfifo classes.
Non-MIG mode supports all the classes, including the graphics ones.
- Fetch gpu_instance_id/gr_sys_pipe_id/gr_instance_id from the
gpu_instance pointer.
- Fetch max_veid_count_per_tsg from the gpu_instance pointer.
Also update nvgpu_gr_get_zcull_ptr() and nvgpu_gr_get_zbc_ptr() to
return instance-specific pointers. zcull/zbc are not supported in MIG
mode; this is just for consistency of the code.
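A sketch of the class-support policy; variable names are illustrative:

  if (mig_mode_enabled && gpu_instance_id == 0U) {
          num_classes = 0U;              /* 0th instance: no classes */
  } else if (mig_mode_enabled) {
          classes = mig_classes;         /* compute/copy/gpfifo only */
          num_classes = num_mig_classes;
  } else {
          classes = all_classes;         /* includes graphics classes */
          num_classes = num_all_classes;
  }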
Jira NVGPU-5648
Change-Id: I764526061542c48ed87659844e16dd0e0253c588
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2436752
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Modify NVGPU_GPU_IOCTL_ALLOC_AS and struct nvgpu_alloc_as_args to
accept the start address and size of user memory. This allows
configurable address space allocation.
- Modify gk20a_as_alloc_share() and gk20a_vm_alloc_share() to receive
the va_range_start and va_range_end values.
- gk20a_vm_alloc_share() initializes the vm with
low_hole = va_range_start and
user vma size = (va_range_end - va_range_start).
- Modify nvgpu_as_alloc_space_args and nvgpu_as_free_space_args to
accept a 64-bit number of pages.
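A hypothetical sketch of the extended args struct; only
va_range_start/va_range_end follow from the text, the other fields and
the exact layout are a guess:

  struct nvgpu_alloc_as_args {
          __u32 big_page_size;
          __s32 as_fd;
          __u64 va_range_start; /* in: start of the user VA range */
          __u64 va_range_end;   /* in: end of the user VA range */
  };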
Bug 2043269
JIRA NVGPU-5302
Change-Id: I243995adf5b7e0e84d6b36abe3b35a5ccabd7a37
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2385496
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Any PDE can allocate memory with a specific page size. That means
memory allocations with 4K and 64K page sizes will be realized by
different PDEs with page sizes (or PTE sizes) of 4K and 64K
respectively. To accomplish this, the user vma is required to be PDE
aligned.
Currently, the user vma is aligned to (big_page_size << 10), carried
over from when the PDE size was equivalent to (big_page_size << 10).
Modify the user vma alignment check to use the PDE size.
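A sketch of the reworked check; the PDE-size helper is illustrative:

  /* The user vma must be aligned to the actual PDE size, not to
   * the historical (big_page_size << 10). */
  u64 pde_size = BIT64(nvgpu_vm_pde_coverage_bit_count(vm));

  if ((user_vma_start & (pde_size - 1ULL)) != 0ULL)
          return -EINVAL;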
JIRA NVGPU-5302
Change-Id: I2c6599fe50ce9fb081dd1f5a8cd6aa48b17b33b4
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2428327
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Sami Kiminki <skiminki@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use a fixed-address mapping for the PMA byte buffer so that the
address of this buffer always fits in 32 bits.
This also requires moving the unmap sequence to an OS-specific
function, since different unmap APIs are now needed for Linux and QNX.
Also call nvgpu_prof_free_pma_stream_priv_data() before
nvgpu_profiler_free_pma_stream(), since the former uses mm->perfbuf,
which is released by the latter.
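The resulting teardown order, as described above (the argument is
illustrative):

  /* The priv data free uses mm->perfbuf, so it must run before
   * the PMA stream free, which releases mm->perfbuf. */
  nvgpu_prof_free_pma_stream_priv_data(prof);
  nvgpu_profiler_free_pma_stream(prof);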
Bug 2510974
Jira NVGPU-5360
Change-Id: I398b0ca4f96527d6e09c9aacacb4b43c90f5bfc9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2424691
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Fuse registers should be queried with physical gpc-ids and not the
logical ones. For tu104 and earlier chips, physical gpc-ids are the
same as logical ones in non-floorswept configurations, but for newer
chips they may differ.
Also, the logical-to-physical mapping is not present for a floorswept
GPC, so query the gpc_tpc mask only up to the actual GPCs that are
present.
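A sketch of the query loop; the logical-to-physical lookup helper is
illustrative:

  for (gpc = 0U; gpc < gpc_count; gpc++) {
          /* Translate the logical GPC id to the physical one
           * before touching the fuse registers. */
          u32 phys_gpc = logical_to_physical_gpc_id(g, gpc);

          gpc_tpc_mask[gpc] =
                  g->ops.fuse.fuse_status_opt_tpc_gpc(g, phys_gpc);
  }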
Jira NVGPU-6080
Change-Id: I84c4a3c1f256fdd1927f4365af26e9892fe91beb
Signed-off-by: shashank singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2417721
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The simulator ring buffer DMA interface supports buffers of the
following sizes: 4, 8, 12 and 16K. At present, it is configured to 4K,
which happens to match the kernel PAGE_SIZE used to wrap the GET/PUT
pointers back once 4K is reached. However, this is not always true;
take 64K pages, for instance. Hence, replace PAGE_SIZE with
SIM_BFR_SIZE.
Introduce the macro NVGPU_CPU_PAGE_SIZE, which aliases PAGE_SIZE, and
replace the latter with the former.
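A sketch of the two macros; SIM_BFR_SIZE's value follows the 4K
configuration noted above:

  /* Ring buffer size is a property of the simulator DMA
   * interface, not of the CPU page size. */
  #define SIM_BFR_SIZE            SZ_4K

  /* Explicit alias for uses that really mean the CPU page size. */
  #define NVGPU_CPU_PAGE_SIZE     PAGE_SIZE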
Bug 200658101
Jira NVGPU-6018
Change-Id: I83cc62b87291734015c51f3e5a98173549e065de
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2420728
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>