MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch changes a call to falcon_copy_from_dmem to
handle error codes.
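A minimal sketch of the pattern, assuming falcon_copy_from_dmem()
returns 0 on success and a negative errno on failure (the surrounding
names are illustrative, not the exact call site):

  static int read_dmem_checked(struct gk20a *g, struct nvgpu_falcon *flcn,
          u32 src, u8 *dst, u32 size, u8 port)
  {
      int err = falcon_copy_from_dmem(flcn, src, dst, size, port);

      if (err != 0) {
          nvgpu_err(g, "falcon dmem copy failed: %d", err);
      }
      return err;
  }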
JIRA NVGPU-677
Change-Id: Id0bb2b67aa0661e65880fe41d4cfd344dd362c32
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2008815
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch changes calls to nvgpu_falcon_bootstrap to
handle error codes.
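A hedged sketch of the same pattern for the bootstrap path (the exact
signature is assumed); the status is propagated to the caller rather
than discarded:

  static int start_falcon(struct gk20a *g, struct nvgpu_falcon *flcn,
          u32 boot_vector)
  {
      int err = nvgpu_falcon_bootstrap(flcn, boot_vector);

      if (err != 0) {
          nvgpu_err(g, "falcon bootstrap failed: %d", err);
      }
      return err;
  }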
JIRA NVGPU-677
Change-Id: I1d9df6053c727e7eb3d99682ff7bb06267608a54
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2008797
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The support sparse HAL serves only one purpose: return true or false
depending on whether the given chip supports sparse mappings. This
HAL is used, in turn, to program (or not) the
NVGPU_SUPPORT_SPARSE_ALLOCS enabled flag. So instead of going through
all this rigmarole to program the flag, just program it for all native
GPUs. Then, in the vGPU-specific characteristics function, disable it
explicitly. This seems to have precedent already.
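A sketch of the flag programming described above; the placement of the
two calls is illustrative, not the exact init paths:

  /* native GPUs: sparse is always supported */
  nvgpu_set_enabled(g, NVGPU_SUPPORT_SPARSE_ALLOCS, true);

  /* vGPU characteristics path: disable explicitly */
  nvgpu_set_enabled(g, NVGPU_SUPPORT_SPARSE_ALLOCS, false);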
JIRA NVGPU-1737
JIRA NVGPU-1934
Change-Id: I630928ad656aaffc09fdc6b7fec9fc423aa94c38
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2006796
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
common/pmu/pmu.c uses IS_ENABLED() to check if perfmon sampling
should be enabled. The flag it is checking was already removed, so
the condition always evaluates to false. Remove the reference to the
removed build flag. Perfmon sampling can now be enabled only via
debugfs.
Remove the definition of IS_ENABLED() in include/nvgpu/posix/types.h.
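For reference, the shape of the branch being removed (the flag name
here is hypothetical, since the real symbol was already deleted):

  if (IS_ENABLED(CONFIG_GK20A_PERFMON)) {  /* hypothetical flag */
      /* never reached: the config symbol no longer exists, so
       * IS_ENABLED() evaluates to 0 at compile time */
      start_perfmon_sampling(pmu);         /* illustrative call */
  }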
Change-Id: I26ddd9d24c39513db68c7ae2b1cd1ebdd980e96e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011631
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
pstate s/w setup needs to know the offsets of the boardobjs present
in the PMU super surface in order to construct the boardobj
set/get_status commands.
With SSMD support, the boardobj offsets are part of the SSMD
lookup table, which lives in the PMU super surface buffer and is
updated by the PMU RTOS during its init stage.
As the SSMD table is updated at init stage and is ready for nvgpu
to read offsets from only once the PMU init message is received,
pstate s/w must wait until the PMU is ready before constructing the
boardobj set/get_status commands.
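A sketch of the resulting ordering constraint; both helper names are
assumed, not real nvgpu functions:

  int pstate_sw_setup(struct gk20a *g)
  {
      /* block until the PMU RTOS init message has been received */
      pmu_wait_fw_ready(g, &g->pmu);          /* assumed helper */

      /* SSMD offsets in the super surface are now valid */
      return construct_boardobj_cmds(g);      /* assumed helper */
  }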
JIRA NVGPU-1874
Change-Id: I87d840965b92c538f26dbedf458f0c18d1808abc
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2000863
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently there are multiple BOARDOBJGRP SET/GET_STATUS construct
macros, added to support 3.5 pstate, to _pack boardobjs, and to fetch
the respective offset/size of boardobjs from the PMU super surface.
With super surface member descriptor (SSMD) support, the boardobj
offsets are part of the SSMD lookup table, which is part of the PMU
super surface buffer and is updated by the PMU RTOS during its init
stage. So, make changes to use a common BOARDOBJGRP SET/GET_STATUS
construct define which fetches offset/size from SSMD, and delete the
other BOARDOBJGRP construct defines.
JIRA NVGPU-1874
Change-Id: I4e3d9ad31da33a836c5e9219321c68d867c2a172
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2000860
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- SSMD - super surface member descriptor.
- Created new file pmu_super_surface.c for super surface
related functions.
- Modified macros BOARDOBJGRP_PMU_CMD_GRP_SET_CONSTRUCT and
BOARDOBJGRP_PMU_CMD_GRP_GET_STATUS_CONSTRUCT to fetch
offset/size using the super surface related functions.
- Moved functions nvgpu_pmu_super_surface_alloc() and
nvgpu_pmu_surface_free() from pmu.c to pmu_super_surface.c.
- Created ops create_ssmd_lookup_table under pmu ops to support
chip-specific implementations.
Currently, NVGPU must modify the RM/PMU-defined common super surface
data struct to match the offsets of the NVGPU super surface data
struct, because NVGPU cannot include the RM/PMU-defined struct
directly due to the number of boardobjs NVGPU supports. This adds
extra work whenever a boardobj changes or support for a new boardobj
is added. To fix this, the SSMD feature is introduced.
With SSMD support, the boardobj offsets NVGPU needs are part of the
SSMD lookup table, which is part of the PMU super surface buffer and
is always the first member of the PMU super surface data struct for
easy access. The SSMD lookup table is copied to the PMU super surface
SSMD offset by the PMU RTOS ucode at init stage, as per the predefined
SSMD lookup table.
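A hypothetical sketch of the lookup the modified macros perform; the
entry layout and helper name are illustrative:

  struct ssmd_entry {       /* hypothetical SSMD table entry */
      u32 offset;           /* byte offset within the super surface */
      u32 size;             /* size of the member in bytes */
  };

  /* the SSMD table sits at the start of the super surface, so a
   * member's offset/size is a simple indexed read */
  static struct ssmd_entry ssmd_lookup(struct ssmd_entry *table, u32 id)
  {
      return table[id];
  }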
JIRA NVGPU-1874
Change-Id: Ida1edae707ddded300f9a629710b53a6606ac0ee
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761338
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changes:
1. Added a new type: CTRL_PERF_VFE_EQU_OUTPUT_TYPE_THRESHOLD.
This parameter was added in the VFE table (under index 36) of the 4F
VBIOS to convert the VFE floating-point output into a threshold
percentage value for Fmon threshold programming.
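Illustrative placement of the new output type; the sibling
enumerators shown here are hypothetical:

  enum ctrl_perf_vfe_equ_output_type {
      CTRL_PERF_VFE_EQU_OUTPUT_TYPE_FREQ_MHZ,  /* hypothetical */
      CTRL_PERF_VFE_EQU_OUTPUT_TYPE_VOLT_UV,   /* hypothetical */
      CTRL_PERF_VFE_EQU_OUTPUT_TYPE_THRESHOLD, /* new: Fmon threshold % */
  };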
Bug 2500899
Change-Id: Ife72e9a7b644c289702b0bcc89a1c9dce9d60386
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011177
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
It is possible to have an invalid combination of ioctl calls that
could result in a null pointer access in the function
gk20a_event_id_release(). The null pointer access can be prevented by
adding a null check for a valid struct gk20a_event_id_data before
accessing its internal variables.
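A sketch of the guard; the struct contents and exact return code are
elided or assumed:

  static int gk20a_event_id_release(struct inode *inode, struct file *filp)
  {
      struct gk20a_event_id_data *event_id_data = filp->private_data;

      if (event_id_data == NULL) {
          /* an invalid ioctl sequence left no data behind */
          return -EINVAL;
      }

      /* ... safe to dereference event_id_data from here on ... */
      return 0;
  }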
Bug 200462170
Change-Id: I9233479081b7a7659deeaa3b84141381ed302e63
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2006314
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A function parameter of nvgpu_vm_map() is fixed for MISRA, which does
not allow implicit assignment of objects to a narrower or different
essential type. This fixes a few enum violations.
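An illustrative signature change; the enum name follows nvgpu
conventions but is assumed here, and the parameter list is reduced to
the relevant argument:

  /* before (implicit essential-type conversion at every call site):
   *   ... u32 rw_flag ...
   * after (the parameter carries the enum's essential type):
   */
  void map_example(struct vm_gk20a *vm, enum gk20a_mem_rw_flag rw_flag);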
JIRA NVGPU-1584
Change-Id: I2353f7501c3326f792f5942b2e247badf03349cf
Signed-off-by: Dinesh T <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1986509
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, the PMU f/w ucode read is part of the ACR prepare-ucode-blob
step, which makes the PMU depend on ACR to init the PMU f/w version
related ops and forces PMU-related members into the ACR data struct in
order to free the space allocated for the PMU ucodes.
Move the PMU f/w ucode read to the PMU early init function and
initialize the version ops once the PMU ucode descriptor is available.
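A sketch of the new early-init ordering; both helpers are assumed:

  int nvgpu_pmu_early_init(struct gk20a *g, struct nvgpu_pmu *pmu)
  {
      int err = pmu_fw_read(g, pmu);       /* assumed: reads ucode */

      if (err != 0) {
          return err;
      }
      /* ucode descriptor now available: pick version-specific ops */
      return pmu_init_fw_ver_ops(pmu);     /* assumed helper */
  }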
JIRA NVGPU-1146
Change-Id: I465814a4d7a997d06a77d8123a00f3423bf3da1e
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2006339
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 21.2 forbids the usage of identifier names which start with
an underscore. This is because identifier names which start with an
underscore are reserved for the C library. This patch fixes rule 21.2
issues in POSIX header guard names.
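An example of the rename pattern (the file and guard names are
illustrative):

  /*
   * before: reserved identifiers with leading underscores
   *
   *   #ifndef __NVGPU_POSIX_TYPES_H__
   *   #define __NVGPU_POSIX_TYPES_H__
   *
   * after:
   */
  #ifndef NVGPU_POSIX_TYPES_H
  #define NVGPU_POSIX_TYPES_H
  /* ... header body ... */
  #endif /* NVGPU_POSIX_TYPES_H */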
JIRA NVGPU-1028
Change-Id: I12d01c9d18b64c2a12fbd7840455d38fb024c2e8
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011873
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 21.2 forbids the usage of identifier names which start with
an underscore. This is because identifier names which start with an
underscore are reserved for the C library. This patch fixes rule 21.2
issues in vgpu header guard names.
JIRA NVGPU-1028
Change-Id: I89b450f0c1960ad93392971dcd6ef3c15d2460db
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011872
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the clk_arb wq waits only for tasks.
Sometimes the wq is not woken when thread_stop is called.
Add a check for thread_stop to the wq wait condition.
Remove the atomic_inc when no job is submitted.
Use current_time_ns() instead of the ptimer to get the time.
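A sketch of the fixed wait condition; the worker fields are assumed:

  /* wake for pending work OR a stop request, so thread_stop
   * cannot be missed while the worker sleeps */
  NVGPU_COND_WAIT_INTERRUPTIBLE(&worker->wq,
          nvgpu_atomic_read(&worker->pending) != 0 ||
          nvgpu_thread_should_stop(&worker->thread), 0);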
Bug 200488054
Change-Id: I770d566dd29371bd189d30dd5e60a6b23a9c98ff
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012244
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit, common/gr/subctx.c, to manage GR subcontexts.
This unit provides interfaces to allocate/free/load a GR subcontext.
Add a new header file, include/nvgpu/gr/subctx.h, to declare all the
interfaces.
Right now the channel_gk20a structure directly includes an nvgpu_mem
for the context header.
Declare a new structure, nvgpu_gr_subctx, for the subcontext, and
include it from channel_gk20a.
Make all necessary changes to refer to ctx_header from the subctx
instead of referencing it directly from the channel.
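A sketch of the new ownership; both structs are abbreviated to the
relevant fields:

  struct nvgpu_gr_subctx {
      struct nvgpu_mem ctx_header;     /* moved out of the channel */
  };

  struct channel_gk20a {
      /* struct nvgpu_mem ctx_header;    -- removed */
      struct nvgpu_gr_subctx *subctx;  /* ctx_header now lives here */
      /* ... */
  };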
Jira NVGPU-1613
Change-Id: I9eb1ee8f26fa88d2881f9b294935b65e9cbcc9b4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990129
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When an engine faults due to an unbound instance block, all
active TSGs are currently aborted. This includes the TSG
used by the vidmem-clear task to clear vidmem buffers. From
that point on, nvgpu_vidmem_clear cannot submit jobs anymore.
Define the TSG in the MM CE context as non-abortable, and skip it
when aborting active TSGs.
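A sketch with an assumed 'abortable' flag; the fields and iteration
details are illustrative:

  /* the CE TSG used by the vidmem-clear task opts out of mass aborts */
  mm->ce.tsg->abortable = false;

  /* in the unbound-instance-block fault path: */
  for (tsgid = 0; tsgid < f->num_channels; tsgid++) {
      struct tsg_gk20a *tsg = &f->tsg[tsgid];

      if (!tsg->in_use || !tsg->abortable) {
          continue;            /* skip non-abortable TSGs */
      }
      fifo_abort_tsg(g, tsg);  /* assumed abort helper */
  }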
Bug 2486146
Change-Id: I221259aec468e8ee3a24e80fab8d8fb7ee8607b0
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2008954
(cherry picked from commit 6f2444dc5e128aa2b870796bd1e9dee7853f90af)
Reviewed-on: https://git-master.nvidia.com/r/2008942
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
gk20a_fifo_update_runlist_locked() and the vgpu counterpart do three
things:
1. find out whether there's a channel to add or remove (if not, the
whole runlist is just reconstructed or cleared),
2. reconstruct the runlist format for hardware (or the vgpu message),
3. actually update the runlist to hw and maybe wait for finish.
Split out the first two operations into separate functions to make the
code easier to understand. Now it's also clearer that the "add"
parameter behaves completely differently depending on whether the
channel pointer is NULL or not.
Also ignore (with a warning) channels not bound to a tsg. We shouldn't
get runlist updates on such channels. This simplifies the control flow a
bit.
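An illustrative decomposition; the real parameter lists differ:

  static bool runlist_modify_active(struct fifo_gk20a *f, u32 runlist_id,
          struct channel_gk20a *ch, bool add)
  {
      /* step 1: with ch != NULL, track the channel entering or
       * leaving; with ch == NULL, mark the whole runlist for
       * reconstruction (add) or clearing (!add) */
      return true;
  }

  static void runlist_reconstruct(struct fifo_gk20a *f, u32 runlist_id)
  {
      /* step 2: regenerate the hw runlist buffer (or vgpu message) */
  }

  int gk20a_fifo_update_runlist_locked(struct gk20a *g, u32 runlist_id,
          struct channel_gk20a *ch, bool add, bool wait_for_finish)
  {
      if (!runlist_modify_active(&g->fifo, runlist_id, ch, add)) {
          return 0;
      }
      runlist_reconstruct(&g->fifo, runlist_id);
      /* step 3: submit the runlist to hw, optionally waiting */
      return 0;
  }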
Jira NVGPU-1309
Jira NVGPU-1922
Change-Id: I478c33eb2bd154c05a6c8c4148e4fea528a39a3e
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2007473
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add error prints for DMA alloc failures. This way, when there is a
DMA alloc failure, the failure is clear. Without this it's hard to
know what exactly is causing any given -ENOMEM issue or what the
specifics of that -ENOMEM case are.
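A sketch of the added print, in nvgpu style (call site illustrative):

  int err = nvgpu_dma_alloc(g, size, mem);

  if (err != 0) {
      /* make the source of the -ENOMEM immediately visible */
      nvgpu_err(g, "dma alloc of %zu bytes failed: %d", size, err);
      return err;
  }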
Change-Id: Ia535895ae07bc1704edaed564edbb6f6dfbf6518
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1976441
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Instead of using kmalloc() to get a buffer for storing the
computed flags string during DMA debugging, use a stack
buffer. This removes the need for a kmalloc() call. The
problem with kmalloc() is that if a dma_alloc() fails due
to being out of memory, the kmalloc() will likely fail, too!
Also simplify the logic now that there's no need to do
any error checking for a kmalloc() call.
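A sketch of the approach; the buffer size and helper name are
illustrative:

  void log_dma_flags(struct gk20a *g, unsigned long flags)
  {
      char buf[32];    /* stack buffer: cannot fail under OOM */

      /* build the flags string in place; no allocation and
       * therefore no error path needed */
      (void) snprintf(buf, sizeof(buf), "flags=0x%08lx", flags);
      nvgpu_err(g, "%s", buf);
  }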
Change-Id: I45c1fd16658212188a1206a2edf17b28f3c06c9e
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1976440
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Remove handling for channels that are no longer bound to a TSG,
as a channel can be referenceable but no longer part of a TSG.
- Use tsg_gk20a_from_ch() to get the pointer to the TSG for a given
channel (see the sketch below).
- Clear unhandled gr interrupts.
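A sketch of the lookup pattern from the second bullet (the handler
around it is implied):

  struct tsg_gk20a *tsg = tsg_gk20a_from_ch(ch);

  if (tsg == NULL) {
      /* referenceable but no longer bound to a TSG: nothing to do */
      return;
  }
  /* ... recover/abort using the tsg from here on ... */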
Bug 2429295
JIRA NVGPU-1580
Change-Id: I9da43a2bc9a0282c793b9f301eaf8e8604f91d70
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972492
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move code involved in nvlink interrupt and error handling and
initialization into a separate unit under subelement 'nvlink'.
Add g->ops.nvlink.intr_err ops to allow other units to access
the APIs exposed by this unit.
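An illustrative shape for the new ops group; the member names beyond
'intr_err' are assumed:

  struct gops_nvlink_intr_err {        /* hypothetical layout */
      void (*enable)(struct gk20a *g);
      void (*isr)(struct gk20a *g);
  };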
JIRA NVGPU-1813
Change-Id: I2d90cf1394faa0692630514b6a3cea15f5e105ae
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997732
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Earlier, with the DMEM queue, if a command needed an in/out payload,
space had to be allocated in DMEM/FB-surface and the payload copied
into the allocated space before sending the command, with the payload
info provided in the command being sent.
- With FBQ, the command in/out payload is part of the FB command
queue element, so there is no need to allocate separate space in
DMEM/FB-surface. Added changes to handle the FBQ payload request
while sending a command, and also in the response handler to extract
data from the out payload (see the layout sketch below).
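A hypothetical element layout conveying the idea that the payload now
travels inline; the sizes and field names are illustrative:

  #define FBQ_CMD_SIZE     0x80U    /* illustrative sizes */
  #define FBQ_PAYLOAD_SIZE 0x80U

  struct pmu_fbq_element {              /* hypothetical layout */
      u8 cmd[FBQ_CMD_SIZE];             /* header + command body */
      u8 payload[FBQ_PAYLOAD_SIZE];     /* in/out payload, inline */
  };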
JIRA NVGPU-1579
Bug 2487534
Change-Id: If3ca724c2776dc57bf6d394edf9bc10eaacd94f9
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2004021
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added the NVGPU_SUPPORT_PMU_RTOS_FBQ feature flag to enable
FBQ support.
- Added support to read the PMU RTOS init message from the FBQ
message queue, process it, and construct the FBQ for further
communication with the PMU RTOS ucode (see the sketch below).
- Added functions to init the FB command/message queues as per the
init message inputs from the PMU RTOS ucode.
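A sketch of the gated init flow; the two helpers are assumed:

  if (nvgpu_is_enabled(g, NVGPU_SUPPORT_PMU_RTOS_FBQ)) {
      err = pmu_read_init_msg_fb(pmu, &init);       /* assumed */
      if (err == 0) {
          err = pmu_queues_init_fb(pmu, &init);     /* assumed */
      }
  }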
JIRA NVGPU-1578
Bug 2487534
Change-Id: Ib6c2b26af3927339bd4fb25396350b3f4d222737
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2004020
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- FBQ (command/message queue) will be part of the super surface,
which resides in FB.
- FBQ access should happen through the falcon generic queue functions,
so FBQ-related functions were added under the falcon queue as needed
to support FBQ.
- Additional FBQ-related public functions are exposed because the
command buffer is constructed in a sysmem buffer which is then copied
into the actual FBQ element buffer, and because the payload is part of
the queue element, the client needs access to some queue parameters
(see the sketch below).
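A sketch of the client-side flow the last bullet describes; all names
here are assumed:

  u8 scratch[FBQ_ELEMENT_SIZE];    /* FBQ_ELEMENT_SIZE assumed */
  int err;

  /* assemble the full queue element (header, command, payload)
   * in sysmem first */
  fbq_build_element(scratch, cmd, payload);         /* assumed */

  /* then push it into the live FBQ element through the generic
   * falcon queue interface */
  err = nvgpu_falcon_queue_push(flcn, queue, scratch,
          sizeof(scratch));                         /* assumed */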
JIRA NVGPU-1577
Bug 2487534
Change-Id: I1b9d9d999e73af3dabe54240c16676ff8d0c21fa
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2004019
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>