Setting the timeslice in the virtualized case was not effective,
because both ioctls NVGPU_TSG_IOCTL_SET_TIMESLICE and
NVGPU_SCHED_IOCTL_TSG_SET_TIMESLICE called the
native function to set the TSG timeslice.
- Fixed the wrapper function to call the HAL
- Defined a HAL function for the "native" set TSG timeslice
- Also, properly update timeout_us in the TSG context in the
virtualized case
This change also moves the min/max bounds checking for the
TSG timeslice into the native function implementation.
There is no sysfs node for these parameters for vgpu,
as the RM server is ultimately responsible for this check.
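For illustration, the fixed flow is roughly the sketch below; the HAL op
and function names are assumptions, not verbatim from the driver:

  /* Wrapper: dispatch through the HAL so vgpu can override it
   * (names assumed for illustration). */
  int gk20a_tsg_set_timeslice(struct tsg_gk20a *tsg, u32 timeslice_us)
  {
      struct gk20a *g = tsg->g;

      return g->ops.fifo.tsg_set_timeslice(tsg, timeslice_us);
  }

  /* Native implementation, now also the single place where the
   * min/max bounds are enforced. */
  int gk20a_fifo_tsg_set_timeslice(struct tsg_gk20a *tsg, u32 timeslice_us)
  {
      struct gk20a *g = tsg->g;

      if (timeslice_us < g->min_timeslice_us ||
          timeslice_us > g->max_timeslice_us)
          return -EINVAL;

      tsg->timeslice_us = timeslice_us;
      /* reprogram the runlist entry with the new timeslice */
      return 0;
  }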
Bug 200263575
Change-Id: Ibceab9427561ad58ec28abfff0c96ca8f592bdb9
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1283180
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Move the gp10b HW headers to a new directory dedicated to them:
include/nvgpu/hw/gp10b
And change the code to include them like so:
#include <nvgpu/hw/gp10b/hw_fb_gp10b.h>
This is part of the process to restructure the nvgpu driver.
Bug 1799159
Change-Id: Ic80ea5b7f5c280839e502e2178a345181f7a7ef9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1280326
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reorganize the HW headers of gk20a. The headers are moved to a
new directory:
include/nvgpu/hw/gk20a
And are included from the code like so:
#include <nvgpu/hw/gk20a/hw_pwr_gk20a.h>
This is the first step in reorganizing all of the HW headers for
gm20b, gm206, etc. This is part of a larger effort to restructure
the driver and make it more readable and scalable.
Bug 1799159
Change-Id: Ic151155cbc2e6f75009f2d9d597b364a1bed2c4c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1244790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the GPU allocators to common/mm/ since the allocators are common
code across all GPUs. Also rename the allocator code to move away from
gk20a_ prefixed structs and functions.
This caused one naming issue with the nvgpu_alloc() and nvgpu_free()
functions: there was already a pair of functions for allocating with
either kmalloc() or vmalloc() depending on the size of the allocation.
Those have now been renamed to nvgpu_kalloc() and nvgpu_kfree().
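The size-based switch is essentially the following; a minimal sketch,
assuming PAGE_SIZE as the threshold:

  #include <linux/mm.h>
  #include <linux/slab.h>
  #include <linux/vmalloc.h>

  /* Sketch: small requests use kmalloc(); large ones fall back to
   * vmalloc(), which does not need physically contiguous pages. */
  static inline void *nvgpu_kalloc(size_t size)
  {
      if (size > PAGE_SIZE)
          return vmalloc(size);
      return kmalloc(size, GFP_KERNEL);
  }

  static inline void nvgpu_kfree(void *addr)
  {
      if (is_vmalloc_addr(addr))
          vfree(addr);
      else
          kfree(addr);
  }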
Bug 1799159
Change-Id: Iddda92c013612bcb209847084ec85b8953002fa5
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1274400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Treat interrupts as one kind of event message, and make it
easier to add new kinds of events.
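The message layout is essentially a type tag plus a payload union; a
minimal sketch, with all struct, field and enum names assumed:

  /* Sketch: a generic event message; interrupts become one payload
   * kind, and new event kinds extend the enum and the union. */
  enum tegra_vgpu_event_type {
      TEGRA_VGPU_EVENT_INTR = 0,
      /* future event kinds are added here */
  };

  struct tegra_vgpu_intr_info {
      u32 unit;       /* fields assumed for illustration */
      u32 intr_data;
  };

  struct tegra_vgpu_event_msg {
      u32 event;      /* enum tegra_vgpu_event_type */
      union {
          struct tegra_vgpu_intr_info intr;
      } info;
  };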
JIRA VFND-3291
Bug 200257899
Change-Id: I83482293230c0aa10b05caf61e249a042bf6653c
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1278396
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Currently sparse mapping is not supported for gp10b
in the virtualized environment. Modify the GPU characteristics
to reflect that this functionality is not implemented.
Also fix the return value of vgpu_gp10b_locked_gmmu_map()
on the error path.
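On the characteristics side this amounts to not advertising the sparse
flag; a one-line sketch, with the flag name an assumption:

  /* Sketch: stop advertising sparse mapping support on vgpu gp10b. */
  gpu->flags &= ~NVGPU_GPU_FLAGS_SUPPORT_SPARSE_ALLOCS;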
Bug 200243373
Change-Id: Ia367b923b87738a5cad0617cdb074f5a24fb1c81
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: http://git-master/r/1269710
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Propagate the buffer aperture flag up through gk20a_locked_gmmu_map so
that buffers represented as a mem_desc and resident in vidmem can be
mapped to the GPU.
JIRA DNVGPU-18
JIRA DNVGPU-76
Change-Id: Icd675e83e3c28836f0ed8880425748697713bb0a
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: http://git-master/r/1169296
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
gr_ctx will be managed as an independent resource in the RM server,
and vgpu can get a gr_ctx handle.
Bug 1702773
Change-Id: Ifceb44b7d9a1ba03fc2a4df847f4a78ac4c4a0d4
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1144934
(cherry picked from commit 0da3101d9b59fe1f9a47ce7b70b30cb8919f35ac)
Reviewed-on: http://git-master/r/1150707
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Separate out a new API, gr_gp10b_set_ctxsw_preemption_mode(),
which checks the requested preemption modes and takes the
appropriate action for each preemption mode.
This API also does some sanity checking for valid
preemption modes and combinations.
Define the API set_preemption_mode() for gp10b, which sets the
preemption modes passed as arguments and then uses
gr_gp10b_set_ctxsw_preemption_mode() and
update_ctxsw_preemption_mode() to update the preemption mode.
The legacy path from gr_gp10b_alloc_gr_ctx() converts the
NVGPU_ALLOC_OBJ_FLAGS_* flags into the appropriate preemption modes
and then calls gr_gp10b_set_ctxsw_preemption_mode().
The new API set_preemption_mode() uses the new
NVGPU_GRAPHICS/COMPUTE_PREEMPTION_MODE_* flags to set and update
the ctxsw preemption mode.
In gr_gp10b_update_ctxsw_preemption_mode(), update the graphics
context to set CTA preemption mode if
NVGPU_COMPUTE_PREEMPTION_MODE_CTA is set.
Also, define the preemption modes in nvgpu-t18x.h
and use them everywhere.
Remove the old definitions of the modes from gr_gp10b.h.
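A condensed sketch of the new API's shape; the mode names follow this
commit, while the specific invalid combination shown is an assumption:

  /* Sketch: validate the requested modes, then record them. */
  static int gr_gp10b_set_ctxsw_preemption_mode(struct gk20a *g,
          struct gr_ctx_desc *gr_ctx, struct vm_gk20a *vm,
          u32 graphics_preempt_mode, u32 compute_preempt_mode)
  {
      /* sanity check: assume GfxP and CILP cannot be combined */
      if (graphics_preempt_mode == NVGPU_GRAPHICS_PREEMPTION_MODE_GFXP &&
          compute_preempt_mode == NVGPU_COMPUTE_PREEMPTION_MODE_CILP)
          return -EINVAL;

      /* per-mode actions, e.g. allocate GfxP preemption buffers */
      gr_ctx->graphics_preempt_mode = graphics_preempt_mode;
      gr_ctx->compute_preempt_mode = compute_preempt_mode;
      return 0;
  }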
Bug 1646259
Change-Id: Ib4dc1fb9933b15d32f0122a9e52665b69402df18
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1131806
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Refactor makefiles so that there is one makefile, and that file
can be included in the main nvgpu build.
Bug 1476801
Change-Id: I23ac451d695fc64064de2300e83b9d9487c52743
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1028353
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
The server only releases ownership of the ctxsw buffer mappings
after the GR ctx has been released. Update the sequence to
account for this.
JIRA VFND-1117
Bug 1708163
Change-Id: I3aed015805b4ca51433e7d37ad32de2f8353999f
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/922817
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
Some of the allocated buffers are used during normal graphics
processing. Mark them as GPU cacheable to improve performance.
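At map time this is just a flags change; a sketch using assumed flag and
helper names:

  /* Sketch: request a GPU-cacheable mapping for buffers read during
   * normal graphics processing (flag and helper names assumed). */
  gpu_va = gk20a_gmmu_map(vm, &sgt, size,
                          NVGPU_MAP_BUFFER_FLAGS_CACHEABLE_TRUE,
                          gk20a_mem_flag_none);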
Bug 1695718
Change-Id: I71d5d1538516e966526abe5e38a557776321597f
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/827087
(cherry picked from commit 60b40ac144c94e24a2c449c8be937edf8865e1ed)
Reviewed-on: http://git-master/r/828493
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The VM server only knows big and small pages, so convert
gmmu_page_size_kernel to the corresponding page size index.
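The conversion is a small translation before the map request is sent to
the server; a sketch, assuming the page size index names from the gmmu
code:

  /* Sketch: the VM server has no "kernel" page size, so fold it into
   * small or big based on the VM's configured page sizes. */
  if (pgsz_idx == gmmu_page_size_kernel) {
      if (page_size == vm->gmmu_page_sizes[gmmu_page_size_big])
          pgsz_idx = gmmu_page_size_big;
      else
          pgsz_idx = gmmu_page_size_small;
  }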
JIRA VFND-890
Change-Id: Id1f932752b8ca33d14635ac9d71019364aa89dc4
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/816359
(cherry picked from commit 5bfc4a2a55889f5457bd34aa06861c042ee67421)
Reviewed-on: http://git-master/r/827131
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>
gp10b introduces support for preemption (GfxP and CILP).
Add a new interface to allow allocating buffers needed
to support this functionality.
Bug 1677153
Change-Id: I8578a7b0a4327f3496d852eeb8be5fc778e2c225
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/806963
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: http://git-master/r/817039
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
We use &ch->error_notifier_mutex to protect
writes to and frees of the error notifier,
but we currently do not protect reads of the
notifier in gk20a_fifo_set_ctx_mmu_error()
and vgpu_fifo_set_ctx_mmu_error().
Add a new API, gk20a_set_error_notifier_locked(),
which is the same as gk20a_set_error_notifier()
but without the locking.
In the *_fifo_set_ctx_mmu_error() APIs, acquire
the mutex explicitly, and then use this new
API.
gk20a_set_error_notifier() now just calls
gk20a_set_error_notifier_locked() while holding
the mutex.
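This is the usual locked/unlocked API split; a minimal sketch (the mutex
field name is from this commit, the bodies are elided):

  /* Sketch: caller must already hold ch->error_notifier_mutex. */
  void gk20a_set_error_notifier_locked(struct channel_gk20a *ch, u32 error)
  {
      /* write and signal the notifier; no locking here */
  }

  void gk20a_set_error_notifier(struct channel_gk20a *ch, u32 error)
  {
      mutex_lock(&ch->error_notifier_mutex);
      gk20a_set_error_notifier_locked(ch, error);
      mutex_unlock(&ch->error_notifier_mutex);
  }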
Bug 1824788
Bug 1844312
Change-Id: I1f3831dc63fe1daa761b2e17e4de3c155f505d6f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1273471
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Process long regops lists in 4-kB fragments, overcoming the overly
low limit of 128 reg ops per IOCTL call. Bump the list limit to 1024
and report the limit in GPU characteristics.
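A sketch of the fragment loop; the copy-in/execute/copy-out structure
and the helper names are assumptions:

  /* Sketch: process a long reg-op list one page worth at a time. */
  static int process_reg_ops(struct dbg_session_gk20a *dbg_s,
          struct nvgpu_dbg_gpu_reg_op __user *user_ops,
          u32 num_ops, struct nvgpu_dbg_gpu_reg_op *frag_buf)
  {
      const u32 ops_per_frag = PAGE_SIZE / sizeof(*frag_buf);
      u32 done = 0;

      while (done < num_ops) {
          u32 n = min_t(u32, num_ops - done, ops_per_frag);
          int err;

          if (copy_from_user(frag_buf, user_ops + done,
                             n * sizeof(*frag_buf)))
              return -EFAULT;
          err = exec_reg_ops(dbg_s, frag_buf, n); /* helper assumed */
          if (err)
              return err;
          if (copy_to_user(user_ops + done, frag_buf,
                           n * sizeof(*frag_buf)))
              return -EFAULT;
          done += n;
      }
      return 0;
  }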
Bug 200248726
Change-Id: I3ad49139409f32aea8b1226d6562e88edccc8053
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: http://git-master/r/1253716
(cherry picked from commit 22314619b28f52610cb8769cd4c3f9eb01904eab)
Reviewed-on: http://git-master/r/1266652
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a HAL interface for TSG open, which is intended to
be called from the existing gk20a_tsg_open function.
The tsg_open entry point is only implemented for vgpu,
as the server needs to clear metadata when a TSG is opened.
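In the common code the hook is simply optional; a sketch of the call
site in gk20a_tsg_open, with the surrounding names assumed:

  /* Sketch: invoke the HAL hook only where it is implemented (vgpu). */
  if (g->ops.fifo.tsg_open) {
      err = g->ops.fifo.tsg_open(tsg);
      if (err) {
          gk20a_err(dev_from_gk20a(g),
                    "tsg %d open failed %d", tsg->tsgid, err);
          goto clean_up;  /* label assumed */
      }
  }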
Bug 200215060
Change-Id: Icc8fd602f31e52d9fa9b2e7786b665b9e7b9294e
Signed-off-by: Sachit Kadle <skadle@nvidia.com>
Reviewed-on: http://git-master/r/1249218
(cherry picked from commit 35c86f7c796c6574d3dc336e20012ea5c16d7cb4)
Reviewed-on: http://git-master/r/1256468
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix small problems related to signed versus unsigned comparisons
throughout the driver. Bump up the warning level to prevent such
problems from occurring in the future.
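A typical instance of the bug class (illustrative only, not a specific
site in the driver):

  /* With unsigned operands the subtraction wraps instead of going
   * negative, so this guard can never fire; a higher warning level
   * flags the always-false comparison. */
  u32 size = 16, consumed = 20;

  if (size - consumed < 0)        /* always false for unsigned types */
      return -EINVAL;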
Change-Id: I8ff5efb419f664e8a2aedadd6515ae4d18502ae0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1252068
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit 5f1c2bc27f.
Added back now that the matching RM server has been updated:
In hypervisor mode, all GPU VA allocations must be done by the client;
fix this for the allocation of the hwpm context buffer.
Bug 200231611
Change-Id: Ie5ce2c2562401b1f00821231d37608e3fc30d4a4
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1252138
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
vgpu_vm_alloc_share() wants to return -EINVAL if the requested VMA
areas do not fulfill the criteria. However, the error code gets
overwritten by a call to vgpu_comm_sendrecv(), which makes
vgpu_vm_alloc_share() always return 0.
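The fix follows the usual fail-fast pattern: return the validation error
before issuing the server call; a sketch with the check name assumed:

  /* Sketch: reject bad VMA parameters before the round trip, so the
   * -EINVAL is not clobbered by the sendrecv result. */
  if (!vm_area_params_valid(p))           /* check name assumed */
      return -EINVAL;

  err = vgpu_comm_sendrecv(&msg, sizeof(msg), sizeof(msg));
  if (err || msg.ret)
      return -ENOMEM;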
Change-Id: I93f56025f963d1d4ad2f9b06139fce742d3be41b
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/1249961
GVS: Gerrit_Virtual_Submit
Reviewed-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
In hypervisor mode, all GPU VA allocations must be done by the client;
fix this for the allocation of the hwpm context buffer.
Bug 200231611
Change-Id: I0270b1298308383a969a47d0a859ed53c20594ef
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1240913
(cherry picked from commit 49314d42b13e27dc2f8c1e569a8c3e750173148d)
Reviewed-on: http://git-master/r/1245867
(cherry picked from commit d0b10e84d90d0fd61eca8be0f9e879d9cec71d3e)
Reviewed-on: http://git-master/r/1246700
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Remove a global debugfs variable and instead save the allocator
debugfs root node in the gk20a struct.
Bug 1799159
Change-Id: If4eed34fa24775e962001e34840b334658f2321c
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1225611
(cherry picked from commit 1908fde10bb1fb60ce898ea329f5a441a3e4297a)
Reviewed-on: http://git-master/r/1242390
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Instead of blocking for gpfifo space in the nvgpu driver,
return -EAGAIN and allow userspace to decide the blocking
policy.
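In the submit path this becomes a nonblocking space check; a minimal
sketch, with the helper name assumed:

  /* Sketch: report -EAGAIN instead of sleeping until the gpfifo
   * drains; userspace can retry or poll as it sees fit. */
  if (gp_free_count(ch) < num_entries)
      return -EAGAIN;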
Bug 1795076
Change-Id: Ie091caa92aad3f68bc01a3456ad948e76883bc50
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/1202591
(cherry picked from commit 8056f422c6a34a4239fc4993c40c2e517c932714)
Reviewed-on: http://git-master/r/1203800
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Fix the cyclestats snapshot HAL entries in the vgpu case; the
entries that do not apply need to be nulled out.
Bug 1700143
JIRA EVLR-278
Change-Id: I1b5f4652d1bf3283d96fdb3c2f66c4f69a9f6acc
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1217507
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Add support for cyclestats snapshots in the virtual case
Bug 1700143
JIRA EVLR-278
Change-Id: I376a8804d57324f43eb16452d857a3b7bb0ecc90
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: http://git-master/r/1211547
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
We have completely different versions of probe for
the nvgpu and pci devices.
Extract the common steps into a new nvgpu_probe() function
and separate it out into a new file, nvgpu_common.c.
Divide the work of nvgpu_probe() into further smaller
functions.
Do platform specific things (like irq handling,
memresource management, power management) only in the
individual probes, and then call nvgpu_probe() to
complete the common initialization.
Move all debugfs initialization to the common gk20a_debug_init().
This also helps to bring up all debug nodes for the pci device.
Pass the debugfs_symlink name as a parameter to gk20a_debug_init().
This allows us to set separate debugfs symlinks for the nvgpu
and pci devices.
For the railgating, cde and ce debugfs nodes, check whether the
platform supports them or not.
Copy vidmem_is_vidmem from the platform to the mm structure,
and set it to true for the pci device.
Return from gk20a_scale_init() if we have neither a
governor nor a qos_notifier.
Fix gk20a_alloc_debugfs_init() and gk20a_secure_page_alloc()
to receive a device pointer instead of a platform_device.
Export gk20a_railgating_debugfs_init() so that we can call
it from gk20a_debug_init().
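The resulting split looks roughly as follows; the helper and signature
here are assumed from the description above:

  /* Sketch: a bus probe does only bus-specific setup, then hands
   * off to the shared path (names and signatures assumed). */
  static int gk20a_platform_probe(struct platform_device *pdev)
  {
      struct gk20a *g;

      /* bus-specific setup: irqs, memresources, power management */
      g = gk20a_from_platform_device(pdev);   /* helper assumed */
      return nvgpu_probe(g, "gk20a");          /* debugfs symlink */
  }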
Jira DNVGPU-56
Jira DNVGPU-58
Change-Id: I3cc048082b0a1e57415a9fb8bfb9eec0f0a280cd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/1204207
(cherry picked from commit add6bb0a3d5bd98131bbe6f62d4358d4d722b0fe)
Reviewed-on: http://git-master/r/1204462
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add a new debug message type: gpu_dbg_map_v. This is used for mapping
messages that are not specifically memory map operations.
Also clean up the memory mapping debugging a bit, since there was one
duplicate print and the memory map print was difficult to parse
visually. As a result, the message has been modified to put the most
important information first in an easily readable format.
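Usage of the new type then looks like this (a sketch; the call site and
message text are illustrative):

  /* Sketch: a verbose map message that is not itself a map operation. */
  gk20a_dbg(gpu_dbg_map_v, "gpu_va 0x%llx remains mapped", gpu_va);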
Bug 1732449
JIRA DNVGPU-12
Change-Id: Ib19c9371ee958009ab5a2d89b9610e699d070ee2
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/1198593
(cherry picked from commit 51dba53b06ca171cdb13d1707f2d026b0ce29f07)
Reviewed-on: http://git-master/r/1147670
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Flushing the timestamp record method can fail in case FECS is not
processing the main method queue. In particular, this occurs
in case of a ctxsw timeout, where we process fifo sched interrupts
from the host, but FECS is still waiting for idle (grWFI).
In such a scenario, this adds a huge delay to the fifo recovery
procedure (a timeout on the FECS method). Since flushing the last
(incomplete) record from FECS would only be useful in that case
(context switch ongoing), remove the flush operation on engine
reset. Note that an explicit ENGINE_RESET event (with pid)
is inserted in the user-facing ctxsw buffer on engine reset.
Bug 200228310
Change-Id: I885525f8f197f81266b50db161bb511867fc74f4
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: http://git-master/r/1207305
(cherry picked from commit 44391b6204fd648949295f90481b0c424d9a5ddf)
Reviewed-on: http://git-master/r/1208414
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Move fetching of the constant attributes into one cmd, which will be
issued only once.
This patch adds the basic infrastructure along with gpu arch info,
max_freq and num_channels support.
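A sketch of the one-shot query on the vgpu side; the struct and field
names are assumptions:

  /* Sketch: all constant attributes fetched with a single command
   * at init time, instead of one round trip per attribute. */
  struct tegra_vgpu_constants_params {
      u32 arch;
      u32 impl;
      u32 rev;
      u32 max_freq;
      u32 num_channels;
      /* further constants slot in here */
  };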
JIRA VFND-2103
Change-Id: I100599b49f29c99966f9e90ea381b1f3c09177a3
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: http://git-master/r/1189832
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vladislav Buzov <vbuzov@nvidia.com>