Currently, there are a few chip-specific errata handled in the nvgpu code.
For better traceability of the errata and their corresponding fixes,
introduce flags to indicate which errata exist on a chip. These flags
decide whether the corresponding workaround is applied to the chip(s).
This patch introduces the below functions to handle errata flags:
- nvgpu_init_errata_flags
- nvgpu_set_errata
- nvgpu_is_errata_present
- nvgpu_print_errata_flags
- nvgpu_free_errata_flags
nvgpu_print_errata_flags: print the below details of the errata present on a chip:
1. errata flag name
2. chip where the errata was first discovered
3. short description of the errata
Flags corresponding to the errata present on a chip are set during the chip
HAL init sequence, as sketched below.
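A minimal usage sketch, assuming illustrative function signatures and a
hypothetical flag name (not the real prototypes):

/* Hypothetical errata flag identifier, for illustration only. */
#define NVGPU_ERRATA_EXAMPLE 0U

static void chip_init_errata_flags(struct gk20a *g)
{
        nvgpu_init_errata_flags(g);
        /* Mark the errata as present during the chip HAL init sequence. */
        nvgpu_set_errata(g, NVGPU_ERRATA_EXAMPLE);
}

static void unit_apply_workaround(struct gk20a *g)
{
        /* Apply the fix only on chips where the errata flag is set. */
        if (nvgpu_is_errata_present(g, NVGPU_ERRATA_EXAMPLE)) {
                /* ... chip-specific workaround ... */
        }
}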
JIRA NVGPU-6510
Change-Id: Id5a8fb627222ac0a585aba071af052950f4de965
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2498095
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
All graphics code is under CONFIG_NVGPU_GRAPHICS and all the HALs are
in non-fusa files. In order to support graphics in safety,
CONFIG_NVGPU_GRAPHICS needs to be enabled. But since most of the HALs
are in non-fusa files, this causes major compilation problems.
Fix this by moving all graphics-specific HALs used on gv11b to fusa
files. Graphics-specific HALs not used on gv11b remain in non-fusa files
and need not be protected with the GRAPHICS config.
Also protect the call to nvgpu_pmu_save_zbc() with CONFIG_NVGPU_POWER_PG,
since it is implemented under that config (see the sketch below).
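A minimal sketch of the guard (the call-site arguments are illustrative):

#ifdef CONFIG_NVGPU_POWER_PG
        /*
         * nvgpu_pmu_save_zbc() is implemented under CONFIG_NVGPU_POWER_PG,
         * so the call is only compiled in when that config is enabled.
         */
        nvgpu_pmu_save_zbc(g, entries);
#endif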
Delete hal/ltc/ltc_gv11b.c since the sole function in this file is moved to
a fusa file.
Enable nvgpu_writel_loop() in safety build since it is needed for now.
This will be revisited later once requirements are clearer.
Move below CTXSW methods under CONFIG_NVGPU_NON_FUSA for now. Safety
CTXSW ucode does not support these methods. These too will be revisited
later once requirements are clearer.
NVGPU_GR_FALCON_METHOD_PREEMPT_IMAGE_SIZE
NVGPU_GR_FALCON_METHOD_CTXSW_DISCOVER_ZCULL_IMAGE_SIZE
Jira NVGPU-6460
Change-Id: Ia095a04a9ba67126068aa7193f491ea27477f882
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2513675
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Defer the dev_nodes creation until after power_on so that the MIG
  config can be selected and the dev_nodes created as per the
  selected MIG config.
- The patch adds a device node to issue power on. The nodes are:
  for igpu: /dev/nvgpu/igpu0/power
  for dgpu: /dev/nvgpu/dgpu-0001:01:00.0/power
To issue power on:
echo "1" > /dev/nvgpu/igpu0/power
echo "1" > /dev/nvgpu/dgpu-0001:01:00.0/power
JIRA NVGPU-6633
Change-Id: Ic4f1f3e42724cc788dcfaf0e881d188fd3bd1ce1
Signed-off-by: dt <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2512647
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The SDL's error reporting code will be leveraged by the central interrupt
controller (CIC), i.e. the common.cic unit.
This is a base patch to move the SDL error reporting code from QNX
to common. Move the data structures used during error reporting to the
common header nvgpu_err_info.h.
JIRA NVGPU-6522
Change-Id: Ie6b209323a14b9bb38e3402c2427fbcdaae52206
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2504726
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This is one of the steps in restructuring of interrupt code.
- Move ISR logic to common code. This will allow us to add mixed ASIL
error handling levels.
- Modify the nonstall ISR to use threaded interrupts. The bottom half of
  the nonstall ISR will run nonstall operations instead of adding work to
  workqueues (see the sketch after this list).
- Remove nonstall workqueue implementation.
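A minimal sketch of the threaded-interrupt pattern referred to above; the
function and device names are illustrative stand-ins, not the actual nvgpu
handlers:

#include <linux/interrupt.h>

static irqreturn_t nonstall_isr(int irq, void *dev_id)
{
        /* Top half: acknowledge the interrupt and wake the handler thread. */
        return IRQ_WAKE_THREAD;
}

static irqreturn_t nonstall_isr_thread(int irq, void *dev_id)
{
        /* Bottom half (threaded): run the nonstall operations directly
         * here instead of queuing work to a workqueue. */
        return IRQ_HANDLED;
}

static int hook_nonstall_irq(unsigned int irq, void *data)
{
        return request_threaded_irq(irq, nonstall_isr, nonstall_isr_thread,
                                    IRQF_ONESHOT, "nvgpu_nonstall", data);
}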
JIRA NVGPU-6351
Change-Id: I5f891b0de4b0c34f6ac05522a5da08dc36221aa6
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2467713
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The NEXT bit can remain set for the channel if the timeslice expires before
the scheduler clears it. Due to this, nvgpu fails the TSG unbind and in turn
nvrm_gpu fails the channel close. In this case, checking the channel HW
state again after some time can help observe the NEXT bit cleared by the
scheduler. Re-enable the TSG and return -EAGAIN to nvrm_gpu so that it can
retry.
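A minimal sketch of the retry flow, using hypothetical helper names (these
are not the actual nvgpu functions):

#include <errno.h>
#include <stdbool.h>

struct tsg;
struct channel;

/* Hypothetical helpers standing in for the real nvgpu ones. */
bool channel_hw_next_set(struct channel *ch);
void tsg_enable(struct tsg *tsg);
void sleep_ms(unsigned int ms);

int tsg_unbind_check_next(struct tsg *tsg, struct channel *ch)
{
        if (channel_hw_next_set(ch)) {
                /* Give the scheduler a moment to clear NEXT, then re-check. */
                sleep_ms(10U);
                if (channel_hw_next_set(ch)) {
                        tsg_enable(tsg);        /* re-enable the TSG */
                        return -EAGAIN;         /* caller (nvrm_gpu) retries */
                }
        }
        return 0;
}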
Bug 3144960
Change-Id: I35f417f02270e371a4e632986b73a00f8a4f921a
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2468391
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Add ptimer register offsets to the regops allowlist for testing. The new
allowlist restricts regops to reserved resources only, which makes it
difficult to test the interface since only HWPM registers can be
accessed, and accessing those could have side effects on the system.
Using ptimer registers as test offsets has the advantage that the offsets
do not change across chips, the registers are read-only, and the values
are always incrementing, so a test can verify read regops and exercise
various flags of the interface.
Add the gops.ptimer.get_timer_reg_offsets() HAL to return the timer offsets
(a sketch of the HAL is shown below).
Add a static function add_test_range_to_map() that always adds the timer
offsets to the allowlist.
In nvgpu_profiler_validate_regops_allowlist(), return success if the timer
offsets are hit in the range search.
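A minimal sketch of the HAL, assuming a particular prototype and the usual
timer_time_0/timer_time_1 offsets (both are assumptions, not the exact
implementation):

/* Illustrative offsets; the real HAL returns the generated hw headers'
 * timer register offsets. */
static const u32 gv11b_timer_reg_offsets[] = {
        0x00009400U,    /* timer_time_0 (read-only, always incrementing) */
        0x00009410U,    /* timer_time_1 */
};

static void gv11b_ptimer_get_timer_reg_offsets(const u32 **offsets, u32 *count)
{
        *offsets = gv11b_timer_reg_offsets;
        *count = (u32)(sizeof(gv11b_timer_reg_offsets) /
                       sizeof(gv11b_timer_reg_offsets[0]));
}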
Bug 2510974
Jira NVGPU-5360
Change-Id: I8b51bb92e43e8b1bbe903c874a429341659ef603
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2460002
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
gr_gv100_reset_hwpm_pmm_registers() writes a number of registers in the
sys/gpc/fbp chiplets to reset the perfmons. To ensure all the writes have
completed, it is necessary to read back each chiplet's PRI fence register.
Add and use the new HAL g->ops.priv_ring.read_pri_fence() to achieve this.
Implement the HAL for gv11b in the new source file
hal/priv_ring/priv_ring_gv11b.c.
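A minimal sketch of the idea behind the HAL; the register define and the
chiplet iteration are placeholders, not the actual gv11b manual values:

/* Placeholder offset standing in for a per-chiplet PRI fence register. */
#define PRI_FENCE_PLACEHOLDER_OFFSET 0x0U

static void gv11b_priv_ring_read_pri_fence(struct gk20a *g)
{
        /* Reading back a chiplet's PRI fence register guarantees that all
         * previously issued PRI writes to that chiplet have completed. */
        (void) nvgpu_readl(g, PRI_FENCE_PLACEHOLDER_OFFSET);

        /* ... repeat for each GPC and FBP chiplet's fence register ... */
}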
Bug 2510974
Jira NVGPU-5360
Change-Id: If4dd61cb4265422e8c2d16884790eb0fe7f2c103
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2453631
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL g->ops.gr.reset_hwpm_pmm_registers() to reset all HWPM
registers while binding HWPM in global mode in nvgpu_profiler_bind_hwpm().
Add the below new HALs to get the sys/gpc/fbp register lists and counts:
g->ops.perf.get_hwpm_sys_perfmon_regs()
g->ops.perf.get_hwpm_gpc_perfmon_regs()
g->ops.perf.get_hwpm_fbp_perfmon_regs()
Auto-generate all the HWPM registers in the below arrays for gv11b/tu104
(a reset-loop sketch follows the arrays):
static const u32 hwpm_sys_perfmon_regs[]
static const u32 hwpm_gpc_perfmon_regs[]
static const u32 hwpm_fbp_perfmon_regs[]
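A minimal sketch of how the generated tables can be consumed during the
reset; the array contents and the reset value are placeholders:

static const u32 hwpm_sys_perfmon_regs[] = {
        0x00240000U,    /* placeholder; real entries are auto-generated */
};

static void reset_hwpm_sys_perfmon_regs(struct gk20a *g)
{
        u32 i;

        for (i = 0U; i < ARRAY_SIZE(hwpm_sys_perfmon_regs); i++) {
                /* Write the (placeholder) reset value to each perfmon reg. */
                nvgpu_writel(g, hwpm_sys_perfmon_regs[i], 0U);
        }
}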
Bug 2510974
Jira NVGPU-5360
Change-Id: I2ca5c04ed75c7b30ae942807bf018a24551d7ba0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2414934
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the current mc HALs:
- mc.reset()
- mc.enable()
- mc.disable()
- mc.reset_mask()
- mc.reset_engine()
- mc.reset_engine_enable()
Add new mc HALs:
- mc.enable_units(g, units, enable)
> enable/disable given unit(s)
- mc.enable_dev(g, dev, enable)
> enable/disable engine represented by given device pointer
- mc.enable_devtype(g, devtype)
> enable/disable all engines of given devtype
Move the common mc intr functions to common/mc/mc_intr.c.
Add the below common mc functions (a usage sketch follows this list):
- nvgpu_mc_reset_units(g, units)
> reset given logical OR of nvgpu unit bitmap
- nvgpu_mc_reset_dev(g, dev)
> reset given single engine via dev
> if engine is graphics, reset gpcs for nvgpu_next
- nvgpu_mc_reset_devtype(g, devtype)
> reset all engines of given devtype
> if devtype is graphics, reset gpcs for nvgpu_next
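A short usage sketch (the unit bit names are assumptions for illustration):

        /* Reset FIFO and PERFMON together by OR-ing their unit bits. */
        nvgpu_mc_reset_units(g, NVGPU_UNIT_FIFO | NVGPU_UNIT_PERFMON);

        /* Reset a single engine through its device pointer. */
        nvgpu_mc_reset_dev(g, dev);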
Bug 200648985
Bug 3109773
Change-Id: Idc67a14a0a7cde83de44fbfbec13007fead3ed5c
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2408523
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new header file <nvgpu/gr/gr_instances.h> that provides the below
macros to execute various functions for GR instances:
1) nvgpu_gr_exec_for_each_instance
Execute a function for each GR instance by configuring GR remap
window for that instance. Function being executed returns void.
2) nvgpu_gr_exec_with_ret_for_each_instance
Execute a function for each GR instance by configuring GR remap
window for that instance. Function being executed returns an error.
3) nvgpu_gr_exec_for_all_instances
Execute a function for all GR instances at once. For this GR remap
window needs to be disabled temporarily.
If CONFIG_NVGPU_MIG is disabled, or if the runtime flag NVGPU_SUPPORT_MIG
is disabled, all of the above macros turn into simple function calls that
configure a single GR instance.
Separate out GR engine reset code into new API gr_reset_engine() and
execute it with nvgpu_gr_exec_with_ret_for_each_instance().
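A usage sketch of the per-instance macro with the new gr_reset_engine()
API; the exact macro argument layout is an assumption:

        /*
         * Each iteration configures the GR remap window for one instance
         * and then runs the wrapped call for that instance.
         */
        err = nvgpu_gr_exec_with_ret_for_each_instance(g, gr_reset_engine(g));
        if (err != 0) {
                return err;
        }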
PROD values need to be loaded in legacy mode, hence call
nvgpu_cg_init_gr_load_gating_prod() inside
nvgpu_gr_exec_for_all_instances().
Rename gr_init_prepare_hw() to the more appropriate
gr_reset_hw_and_load_prod().
Move the gops.gr.init.fifo_access() call to gr_init_reset_enable_hw().
Add a new API nvgpu_grmgr_get_gr_syspipe_id() to query the GR instance
syspipe id from the common.grmgr unit. Add nvgpu_gr_get_syspipe_id() that
returns the same value stored in the nvgpu_gr struct.
Add cur_gr_instance field to struct nvgpu_gr to track current GR
instance being programmed under remap window.
Jira NVGPU-5648
Change-Id: I86920303427a6e6547ebf195daa37438365bb38e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2403550
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are made in this patch:
1) Change unnamed structs within gpu_ops to named structs with the prefix
gops_* (a sketch of the new layout follows this list).
2) Each named gops_* struct is moved into a separate gops-specific file
under include/nvgpu/gops/.
3) struct gpu_ops is moved into a separate file, include/nvgpu/gpu_ops.h,
and all the dependent gops_* structs are included in this header.
4) Direct references to include/nvgpu/gops are removed from files since it
is enough to include gk20a.h.
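A sketch of the new layout; the subunit and member names below are
illustrative:

/* include/nvgpu/gops/mc.h */
struct gops_mc {
        void (*intr_enable)(struct gk20a *g);
        /* ... */
};

/* include/nvgpu/gpu_ops.h */
#include <nvgpu/gops/mc.h>

struct gpu_ops {
        struct gops_mc mc;
        /* ... other named gops_* structs ... */
};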
Change-Id: Ieb22cb853be567e3bef14f5f8a04674eebd902ea
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2398776
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
* Removed the unnecessary irqs_enabled flag and replaced the
  enable/disable irq logic with nvgpu variant functions.
* Added an nvgpu_interrupts data structure to hold interrupt details
  (a sketch follows below).
* Parse all stall irqs first, followed by the nonstall irq, from DT.
* Used interrupt size checks for enabling/disabling irqs instead of
  comparing stall and nonstall interrupt lines.
Adding new stall interrupt lines is now as easy as updating a macro.
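A sketch of the data structure's intent; the field names and the macro are
assumptions:

#define NVGPU_MAX_STALL_INTERRUPTS 4U   /* hypothetical; grows by editing this */

struct nvgpu_interrupts {
        u32 stall_lines[NVGPU_MAX_STALL_INTERRUPTS]; /* parsed first from DT */
        u32 stall_size;                              /* number of stall irqs */
        u32 nonstall_line;                           /* parsed after stalls */
};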
Jira NVGPU-6019
Change-Id: I5a5eaa8d333c68ee87d25d2b45ec244ec8d7b297
Signed-off-by: Sagar Kadamati <skadamati@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2400777
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Continue making the incoherent channel functionality more structured by
moving the worker thread business to one file and the channel watchdog
logic to another. This is channel-internal restructuring; the interface
to other units does not change.
The watchdog logic is called from the worker thread and as such the two are
rather tightly coupled, but it is possible to have the thread without the
watchdog.
Jira NVGPU-5582
Change-Id: I70f334dd15c9aca0eed75393b99e2f080d133015
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2398921
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The lockless allocator that spins in alloc and free ops using cmpxchg to
mitigate race conditions has only ever been used for the post fences in
preallocated job resources. Now each post fence has a clear owner (the
job struct, which is already allocated properly) and lifetime, so this
allocator no longer has a purpose. Delete it to avoid bitrot. (The
design of the job queue has always been such that there's minimal
contention in any case.)
Jira NVGPU-5773
Change-Id: Ied98d977c2c75bacfd3d010ce60c80fe709231e0
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2392705
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create new dev nodes for device and context profilers. Example dev
nodes on an iGPU:
/dev/nvhost-prof-dev-gpu - device scope profiler
/dev/nvhost-prof-ctx-gpu - context scope profiler
Add the below APIs to open/close the above dev nodes:
nvgpu_prof_dev_fops_open()
nvgpu_prof_ctx_fops_open()
nvgpu_prof_fops_release()
Add common API nvgpu_prof_fops_ioctl() to handle IOCTL call on these
dev nodes. Add IOCTL NVGPU_PROFILER_IOCTL_BIND_CONTEXT to bind the TSG
to profiler objects.
Add nvgpu_tsg_get_from_file() to retrieve TSG struct pointer from
file descriptor. Also store profiler object pointer into TSG struct.
Enable NVGPU_SUPPORT_PROFILER_V2_DEVICE capability on gv11b and tu104.
Note that this is not yet enabled for vGPU.
Keep the NVGPU_SUPPORT_PROFILER_V2_CONTEXT capability disabled since this
will take longer to support.
Add new IOCTL NVGPU_PROFILER_IOCTL_UNBIND_CONTEXT so that userspace can
explicitly unbind the context and release the resources before closing
the profiler descriptor.
Add a context_init flag to the profiler object for bookkeeping.
Bug 2510974
Jira NVGPU-5360
Change-Id: Ie07e0cfd5a9da9d80008f79c955c7ef93b4bc60f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2384354
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the vGPU engine management reimplements a lot of the common,
device-agnostic engine management code.
With the new top HAL parsing one device at a time, it is now more
easily possible to tie the vGPU into the new common device framework
by implementing the top HAL but with the vGPU engine list backend.
This lets the vGPU inherit all the common engine and device
management code. By doing so, the vGPU need only implement a trivial
top HAL.
This also gets us a step closer to merging all of the CE init
code: logically it just iterates through all CE engines whatever
they may be. The only reason this differs between chips is because
of the swap from CE0-2 to LCEs in the Pascal generation. This could
be abstracted by the unit code easily enough.
Also, the pbdma_id for each engine has to be added to the device
struct. Eventually this was going to happen anyway, since the
device struct will soon replace the nvgpu_engine_info struct.
It's a little bit of an abuse but might be worth it long term. If
not, it should not be difficult to replace uses of dev->pbdma_id
with a proper lookup of PBDMA ID based on the device info.
JIRA NVGPU-5421
Change-Id: Ie8dcd3b0150184d58ca0f78940c2e7ca72994e64
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2351877
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a tu104-specific HAL tu104_gr_falcon_ctrl_ctxsw() that processes the
below CTXSW methods to start/stop SMPC global mode:
NVGPU_GR_FALCON_METHOD_START_SMPC_GLOBAL_MODE
NVGPU_GR_FALCON_METHOD_STOP_SMPC_GLOBAL_MODE
Add a new tu104-specific HAL tu104_gr_update_smpc_global_mode() to trigger
SMPC global mode start/stop using gops.gr.falcon.ctrl_ctxsw() (a minimal
sketch follows).
Update nvgpu_dbg_gpu_ioctl_smpc_ctxsw_mode() to enable/disable SMPC
global mode if the channel is not bound to a debug session.
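A minimal sketch of the HAL; the ctrl_ctxsw() argument layout is an
assumption:

static int tu104_gr_update_smpc_global_mode(struct gk20a *g, bool enable)
{
        u32 method = enable ?
                NVGPU_GR_FALCON_METHOD_START_SMPC_GLOBAL_MODE :
                NVGPU_GR_FALCON_METHOD_STOP_SMPC_GLOBAL_MODE;

        /* Submit the CTXSW method to the falcon. */
        return g->ops.gr.falcon.ctrl_ctxsw(g, method, 0U, NULL);
}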
Bug 2510974
Bug 2257799
Jira NVGPU-5360
Change-Id: I1f9d8f2a2d30a4738f291db3fc72c400d24f4048
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2368696
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The current PM resource reservation system is limited to HWPM resources
only, and reservation tracking is done using boolean variables.
The new upcoming profiler support requires reservations for all the PM
resources such as SMPC and the PMA stream. Using boolean variables is
neither scalable nor clear, and the variables have to be replicated
on the GPU server in case of virtualization.
Remove the flag tracking mechanism and use a list-based approach to track
all PM reservations. Also, the current HALs are defined on the debugger
object; implement new HALs on a new pm_reservation object since this is
really an independent piece of functionality.
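A sketch of the list-based tracking idea; the struct and field names below
are assumptions, not the actual pm_reservation implementation:

struct nvgpu_pm_resource_reservation_entry {
        struct nvgpu_list_node entry;   /* linked into a per-resource list */
        u32 reservation_id;             /* identifies the reservation holder */
        u32 vmid;                       /* guest id for the vGPU/server case */
        u32 scope;                      /* e.g. device vs. context scope */
};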
Add new source file common/profiler/pm_reservation.c which implements
functions to reserve/release resources and to check if any resource
is reserved or not.
Add common/vgpu/pm_reservation_vgpu.c for vGPU which simply forwards
the request to gpu server.
Define a new HAL object gops.pm_reservation and assign the above functions
to the below respective HALs:
g->ops.pm_reservation.acquire()
g->ops.pm_reservation.release()
g->ops.pm_reservation.release_all_per_vmid()
Last HAL above is only used for gpu server cleanup of guest OS.
Add the below new common profiler functions that act as APIs to reserve/
release resources for the rest of the units in nvgpu:
nvgpu_profiler_pm_resource_reserve()
nvgpu_profiler_pm_resource_release()
Initialize the metadata required for the reservation system in
nvgpu_pm_reservation_init() and call it during nvgpu_finalize_poweron.
Clean up the metadata before releasing struct gk20a.
Delete the below HALs:
g->ops.debugger.check_and_set_global_reservation()
g->ops.debugger.check_and_set_context_reservation()
g->ops.debugger.release_profiler_reservation()
Bug 2510974
Jira NVGPU-5360
Change-Id: I4d9f89c58c791b3b2e63099a8a603462e5319222
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2367224
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the profiler object allocation/free APIs to a separate
profiler-specific file, common/profiler.c.
Store struct gk20a pointer in struct dbg_profiler_object_data for
convenience of accessing global struct pointer.
Update the profiler object to store a TSG pointer instead of a channel
pointer, since the expectation is to have one profiler object per
context/TSG.
nvgpu_profiler_reserve_acquire() has a case to check if resource
reservation is acquired by some other channel in TSG.
But now since we keep track of TSG itself, this case becomes
redundant and can be removed.
All of this support is compiled out of the safety build with the compile
flag CONFIG_NVGPU_PROFILER.
Linux will always compile the support in.
Bug 2510974
Change-Id: I197bbd67a9cdd1fbea42f1effd1b74b15a6068e5
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2365674
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Quad type reg_ops were only needed on Kepler and not for any other chip
beginning with Maxwell.
The HAL g->ops.gr.access_smpc_reg() was incorrectly set for Volta and Turing
whereas it was only applicable to Kepler. Delete it.
There is no register in the quad type whitelist since the type itself is
not supported anymore. Remove the empty whitelists for all chips and
also delete below HALs:
g->ops.regops.get_qctl_whitelist()
g->ops.regops.get_qctl_whitelist_count()
hal/regops/regops_gv100.* files are not used anymore. Delete the files
instead of just deleting quad HALs in these files.
Bug 200628391
Change-Id: I4dcc04bef5c24eb4d63d913f492a8c00543163a2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2366035
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Decouple the fence information needed for providing submit postfences to
userspace by adding a separate type for that and using it to pass fence
data to ioctls.
The data in struct nvgpu_fence_type is used in various places:
- job tracking needs to know when a post fence is expired
- job submitters within the driver (vidmem clears) need to be able to
wait for these fences
- userspace needs the fence as an id, value pair or as a file descriptor
created from an os fence
To keep object lifetimes strict, start decoupling the os fence data out
of struct nvgpu_fence_type: delete nvgpu_fence_install_fd() and add
nvgpu_fence_extract_user() to return a struct nvgpu_user_fence that
contains only the necessary information. Storing the os fence in job
tracking metadata is legacy code and not useful. Passing the os fence
from where it's created through the whole submit path inside this
combined fence type has been convenient, though.
The internally stored cde job fence in dmabuf compression metadata is
still nvgpu_fence_type to keep this patch simple.
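A sketch of what the decoupled user-facing type carries, per the
description above; the field names are assumptions:

struct nvgpu_user_fence {
        /* id/value pair form of the post fence for userspace ... */
        u32 syncpt_id;
        u32 syncpt_value;
        /* ... or an os fence from which a file descriptor can be created. */
        struct nvgpu_os_fence os_fence;
};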
Jira NVGPU-5248
Change-Id: I75b7da676fb6aa083828f888c55571bbf7645ef3
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2359064
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
The current debugfs code is completely specific to FIFO's kickoff
profiler. But exposing these debugfs nodes is really a perfectly
generic operation for any given profiler.
Therefore add a generic debugfs interface for exposing profilers.
Any code that implements a profiler can now use a single function
call to export a profiler to the GPU debugfs area.
JIRA NVGPU-5606
Change-Id: I67a5bd9998fcfac94678e465442b9a38ab7e7612
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2358382
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a generic profiler based on the channel kickoff profiler. This
aims to provide a mechanism to allow engineers to (more) easily profile
arbitrary software paths within nvgpu.
Usage of this profiler is still primarily through debugfs. Next up is
a generic debugfs interface for this profiler in the Linux code.
The end goal for this is to profile the recovery code and generate
interesting statistics.
JIRA NVGPU-5606
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: I99783ec7e5143855845bde4e98760ff43350456d
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2355319
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
This adds a new device management unit in the common code responsible
for facilitating the parsing of the GPU top device list and providing
that info to other units in nvgpu.
The basic idea is to read this list once from HW and store it in a
set of lists corresponding to each device type (graphics, LCE, etc).
Many of the HALs in top can be deleted and instead implemented using
common code parsing the SW representation.
Every time the driver queries the device list it does so using a
device type and instance ID. This is common code. The HAL is responsible
for populating the device list in such a way that the driver can
query it in a chip agnostic manner.
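A usage sketch of such a query, assuming a lookup helper of roughly this
shape (name and signature are assumptions):

        /*
         * Query the first async copy engine by device type and instance ID;
         * the common code answers this from the parsed SW device lists.
         */
        const struct nvgpu_device *dev =
                nvgpu_device_get(g, NVGPU_DEVTYPE_LCE, 0U);
        if (dev == NULL) {
                return -ENODEV;
        }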
Also delete some of the unit tests for functions that no longer
exist. This code will require new unit tests in time; those should be
quite simple to write once unit testing is needed.
JIRA NVGPU-5421
Change-Id: Ie41cd255404b90ae0376098a2d6e9f9abdd3f5ea
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2319649
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The differences between sync_fence ("android sync") and dma_fence are
abstracted away by nvhost in the nvhost_fence interface. There is no
need to have separate android and dma os fences for syncpoints; unify
the general implementation so that it's always used when requested for
the build.
Jira NVGPU-5386
Change-Id: Ia829e93e18d03064ff46ab1271547de2d1fb1cae
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2356158
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Historically, nvgpu has supported a struct gk20a_dmabuf_priv and
associated it with a dmabuf instance. This was aided by Nvmap's
dma_buf_set_drv_data() and dma_buf_get_drvdata() APIs. gk20a_dmabuf_priv
is used to store comptag IDs (one per 64 KB) and can also store the
dmabuf attachments to avoid multiple attach/detach calls. dma_buf_set_drv_data()
allows Nvgpu to associate an instance of struct gk20a_dmabuf_priv with the instance
of the dmabuf and also provide a release callback to delete the
instance when the last reference to the dmabuf is put. Nvmap accomplishes
this by modifying the struct dma_buf_ops definition to include the
set_drv_data and get_drv_data callbacks in the kernel code.
The above approach won't work for upstream Kstable, and Nvmap plans to
remove these APIs for upcoming newer downstream kernels as well.
In order to implement the same functionality without depending on Nvmap,
Nvgpu will implement a release chaining mechanism. Dmabuf's 'ops' pointer
points to a constant struct and hence a whole copy of the ops is made
followed by altering the new copy's release pointer.
struct gk20a_dmabuf_priv stores the new copy and the dmabuf's 'ops' is
changed to point to this. This allows Nvgpu to retrieve
the corresponding gk20a_dmabuf_priv instance using container_of.
Nvgpu's custom release callback will invoke the original release
callback of the dmabuf's producer as a last step, thus completing the
full circle. In case the driver is removed, Nvgpu restores the
dmabuf's 'ops' back to the original state. In order to accomplish this,
every instance of struct nvgpu_os_linux maintains a linked list of the
gk20a_dma_buf instances. During driver removal, this linked list is
traversed and the corresponding dmabuf's 'ops' pointer is put back to
its original state, followed by freeing of the instance.
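A minimal sketch of the release-chaining mechanism described above; the
structure layout and helper names are illustrative:

#include <linux/dma-buf.h>

struct gk20a_dmabuf_priv {
        struct dma_buf_ops local_ops;            /* writable copy of the ops */
        const struct dma_buf_ops *previous_ops;  /* exporter's original ops */
        /* ... comptags, cached attachments, list node, etc. ... */
};

static void gk20a_dmabuf_release(struct dma_buf *dmabuf)
{
        /* Recover the priv instance from the ops pointer embedded in it. */
        struct gk20a_dmabuf_priv *priv = (struct gk20a_dmabuf_priv *)
                ((const char *)dmabuf->ops -
                 offsetof(struct gk20a_dmabuf_priv, local_ops));

        /* ... free nvgpu-side state here ... */

        /* Chain to the producer's original release as the last step. */
        priv->previous_ops->release(dmabuf);
}

static void gk20a_dmabuf_hook_release(struct dma_buf *dmabuf,
                                      struct gk20a_dmabuf_priv *priv)
{
        priv->previous_ops = dmabuf->ops;
        priv->local_ops = *dmabuf->ops;          /* copy the const ops table */
        priv->local_ops.release = gk20a_dmabuf_release;
        dmabuf->ops = &priv->local_ops;          /* retarget the ops pointer */
}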
Nvgpu is a producer of dmabufs for vidmem and needs
a way to check whether a given dmabuf belongs to itself.
It is no longer reliable to depend on a comparison of
the 'ops' pointer. Instead, dmabuf_export_info() allows a name to be set by
the exporter, and this can be compared with a memory location
that belongs to Nvgpu. Similarly, for sysmem dmabufs, Nvmap makes a
similar change in the way it identifies whether a dmabuf belongs to
itself.
Removed NVGPU_DMABUF_HAS_DRVDATA and moved to a unified mechanism for
both downstream and upstream kernels.
Some of the other changes in this patch include the following:
1) Deletion of dmabuf.c and moving its contents over to dmabuf_priv.c.
2) Replacing gk20a_mm_pin_has_drvdata with nvgpu_mm_pin_privdata, and
vice-versa for unpin.
Bug 2878569
Change-Id: Icf8e79b05a25ad5a85f478c3ee0fc1eb7747e22d
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2341001
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Puneet Saxena <puneets@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the ltc fs_state is initialized during ltc init support. However,
the ltc cbc_param and cbc_param2 registers do not seem to provide
correct data if ltc.init_fs_state is called before fb.init_fs_state.
- Create fb.init_fb_support hal to initialize fb.
- Trigger init_fb_support before init_ltc_support.
Bug 2969956
Bug 2957808
JIRA NVGPU-4666
Change-Id: I54d697d27b9d9c6318c4ef459d215b6f82cd5571
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2345673
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The ECC init and handling for the fb unit are refactored to improve
reusability for nvgpu-next.
The following changes have been made:
- fb.ecc:
This is a new subunit within fb and contains the following functions:
- init: Moved from fb.fb_ecc_init.
- free: Moved from fb.fb_ecc_free.
- l2tlb_error_mask: Fetch bit mask for corrected, uncorrected errors supported
by the unit.
- fb.intr:
This subunit has been updated to include the following ECC interrupt and
error handlers:
- handle_ecc: Top-level interrupt handler for fb ECC errors.
- handle_ecc_l2tlb: Handle errors within l2tlb memory.
- handle_ecc_hubtlb: Handle errors within hubtlb memory.
- handle_ecc_fillunit: Handle errors within fillunit memory.
Jira: NVGPU-5032
Change-Id: I1a26c1823eb992e0e0175250b969f1186dff6e62
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2333271
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Many tests used various incarnations of the mock register framework.
This was based on a dump of gv11b registers. Tests that greatly
benefitted from having generally sane register values all rely
heavily on this framework.
However, every test essentially did its own thing. This was not
efficient and has caused some issues in cleaning up the device and
host code.
Therefore introduce a much leaner and simplified register framework.
All unit tests now automatically get a good subset of the gv11b
registers auto-populated. As part of this also populate the HAL with
a nvgpu_detect_chip() call. Many tests can now _probably_ have all
their HAL init (except dummy HAL stuff) deleted. But this does
require a few fixups here and there to set HALs to NULL where tests
expect HALs to be NULL by default.
Where necessary HALs are cleared with a memset to prevent unwanted
code from executing.
Overall, this imposes a far smaller burden on tests to initialize
their environments.
Something to consider for the future, though, is how to handle
supporting multiple chips in the unit test world.
JIRA NVGPU-5422
Change-Id: Icf1a63f728e9c5671ee0fdb726c235ffbd2843e2
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2335334
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>