MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch changes a call to falcon_copy_from_dmem() to
handle the error code it returns.
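A minimal sketch of the updated call site, assuming the copy helper
returns a negative errno on failure; the argument names and surrounding
variables are illustrative only:
    err = falcon_copy_from_dmem(flcn, src_offset, (u8 *)dst, size, port);
    if (err != 0) {
        nvgpu_err(g, "PMU DMEM copy failed: %d", err);
        return err;
    }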
JIRA NVGPU-677
Change-Id: Id0bb2b67aa0661e65880fe41d4cfd344dd362c32
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2008815
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
common/pmu/pmu.c uses IS_ENABLED() to check if perfmon sampling
should be enabled. The flag it is checking was already removed, so
the condition always evaluates to false. Remove the reference to the
removed build flag. Perfmon sampling can now be enabled only via
debugfs.
Remove the definition of IS_ENABLED() in include/nvgpu/posix/types.h.
Change-Id: I26ddd9d24c39513db68c7ae2b1cd1ebdd980e96e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011631
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-SSMD: super surface member descriptor.
-Created a new file pmu_super_surface.c for super surface
related functions.
-Modified macros BOARDOBJGRP_PMU_CMD_GRP_SET_CONSTRUCT and
BOARDOBJGRP_PMU_CMD_GRP_GET_STATUS_CONSTRUCT to fetch the
offset/size using super surface related functions.
-Moved functions nvgpu_pmu_super_surface_alloc() &
nvgpu_pmu_surface_free() from pmu.c to pmu_super_surface.c.
-Created the op create_ssmd_lookup_table under pmu
to allow chip-specific implementations.
Currently, NVGPU must modify the RM/PMU-defined common super surface
data struct so that its offsets match the NVGPU super surface data
struct, because NVGPU cannot include the RM/PMU-defined struct
directly due to the number of boardobjs supported by NVGPU. This adds
extra work whenever a boardobj changes or support for a new boardobj
is added.
To fix this issue, the SSMD feature is introduced.
With SSMD support, the offsets of the boardobjs required by NVGPU are
held in an SSMD lookup table that is part of the PMU super surface
buffer and is always the first member of the PMU super surface data
struct for easy access. The SSMD lookup table is copied to the PMU
super surface SSMD offset by the PMU RTOS ucode at init stage, as per
the predefined SSMD lookup table.
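A hedged sketch of the assumed layout; the member names, array bound
and comments below are illustrative only, not the actual interface:
    struct ssmd_entry {
        u32 offset; /* boardobjgrp offset within the super surface */
        u32 size;   /* boardobjgrp size in bytes */
    };
    struct nv_pmu_super_surface {
        /* SSMD lookup table is always the first member for easy access. */
        struct ssmd_entry ssmd[NV_PMU_SSMD_TABLE_ENTRIES];
        /* boardobjgrp SET / GET_STATUS payloads follow at SSMD offsets. */
    };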
JIRA NVGPU-1874
Change-Id: Ida1edae707ddded300f9a629710b53a6606ac0ee
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1761338
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Added the NVGPU_SUPPORT_PMU_RTOS_FBQ feature flag to enable
FBQ support (see the sketch below).
-Added support to read the PMU RTOS init message from
the FBQ message queue, process the init message &
construct the FBQs for further communication
with the PMU RTOS ucode.
-Added functions to init the FB command/message queues
as per the init message inputs from the PMU RTOS ucode.
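A small sketch of how the new flag might gate the FB queue path inside
the init-message handler; the two queue-init helpers are hypothetical:
    if (nvgpu_is_enabled(g, NVGPU_SUPPORT_PMU_RTOS_FBQ)) {
        /* Construct FB command/message queues from the INIT message. */
        err = pmu_fbq_queue_init(g, pmu, init_msg);  /* hypothetical */
    } else {
        err = pmu_dmem_queue_init(g, pmu, init_msg); /* hypothetical */
    }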
JIRA NVGPU-1578
Bug 2487534
Change-Id: Ib6c2b26af3927339bd4fb25396350b3f4d222737
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2004020
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Added the NVGPU_SUPPORT_PMU_RTOS_FBQ feature flag to enable
FBQ support.
-Added support to read the PMU RTOS init message from
the FBQ message queue, process the init message &
construct the FBQs for further communication
with the PMU RTOS ucode.
-Added functions to init the FB command/message queues
as per the init message inputs from the PMU RTOS ucode.
JIRA NVGPU-1578
Change-Id: If2678d20f7195e6e8cba354b7dca5117003e3c29
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964068
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore was there to indicate that this function has
potentially unintended side effects (enabling a feature has
wide-ranging impact).
To preserve this documentation, a comment was added to convey that
this function must be used with care.
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With the intention of making the falcon header free of private data,
all falcon struct members in gk20a (pmu.flcn, sec2.flcn, fecs_flcn,
gpccs_flcn, nvdec_flcn, minion_flcn, gsp_flcn) become pointers to
struct nvgpu_falcon. The falcon structures are allocated/deallocated
by falcon_sw_init & _free respectively.
While at it, remove the duplicate gk20a.pmu_flcn and gk20a.sec2_flcn,
refactor the flcn_id assignment and introduce falcon_hal_sw_free.
JIRA NVGPU-1594
Change-Id: I222086cf28215ea8ecf9a6166284d5cc506bb0c5
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1968242
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. Most of the time, callers of pmu_wait_message_cond ignore
the return value. This patch changes its signature to return void, and
adds a new pmu_wait_message_cond_status for callers that need the
return information.
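A sketch of the resulting split; the parameter list mirrors the
existing helper and is assumed here:
    /* Fire-and-forget wait: return value intentionally dropped. */
    void pmu_wait_message_cond(struct nvgpu_pmu *pmu, u32 timeout_ms,
            void *var, u8 val);
    /* Variant for callers that need to know whether the wait timed out. */
    int pmu_wait_message_cond_status(struct nvgpu_pmu *pmu, u32 timeout_ms,
            void *var, u8 val);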
JIRA NVGPU-677
Change-Id: Ibaa15b04c4d40a7de73f39a7d6eb68f9e3da71f3
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978211
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals or casting operands
to have same type of operands when an arithmetic operation is
performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
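An illustrative before/after, assuming the variables involved are u32:
    queue_size = size + 4;   /* before: signed literal mixed with u32 */
    queue_size = size + 4U;  /* after: both operands are unsigned */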
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 11.8 states that a cast shall not remove any const or
volatile qualification from the type pointed to by a pointer.
The linux kernel's container_of() macro contains such a violation as
it generates a pointer to a caller-specified (and so possibly non-const
qualified) type by casting an internally declared const pointer.
gk20a_from_pmu() uses the container_of() macro to convert
from a struct nvgpu_pmu pointer to a struct gk20a pointer.
However, struct nvgpu_pmu already has a back pointer to struct gk20a,
so this change modifies gk20a_from_pmu() to simply return that back
pointer rather than use container_of().
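A minimal sketch of the new accessor, assuming the back pointer field
is named g:
    static inline struct gk20a *gk20a_from_pmu(struct nvgpu_pmu *pmu)
    {
        return pmu->g;
    }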
JIRA NVGPU-862
Change-Id: If0e2481c1cf104c2fa6b89334e20e75705bf9c44
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955540
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This change fixes a number of MISRA Rule 10.3 violations caused by
implicit conversions of sizeof() results to u32. It adds an explicit
u32 cast at each of these violations. This should be safe because a
4GB type size would be very unlikely in this driver.
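An illustrative instance of the pattern (the assignment shown is
hypothetical):
    size = (u32)sizeof(struct pmu_cmd); /* was: size = sizeof(struct pmu_cmd); */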
JIRA NVGPU-1008
Change-Id: Icb6dd719b167fd48b86d89837897f1501fd24794
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959429
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 21.15 prohibits use of memcpy() with incompatible ptrs
to qualified/unqualified types.
To circumvent this issue we've introduced a new MISRA-compliant
nvgpu_memcpy() function.
This change switches the non-offending memcpy() usage in pmu/* code
over to nvgpu_memcpy(), with appropriate casts applied
to maintain consistency within nvgpu.
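A sketch of the substitution, assuming nvgpu_memcpy() takes u8
pointers; the variables are illustrative:
    memcpy(dst, src, len);                          /* before */
    nvgpu_memcpy((u8 *)dst, (const u8 *)src, len);  /* after, explicit casts */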
JIRA NVGPU-849
Change-Id: I095abe3a95071d619ed1cf8421150139a7d4ab93
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946263
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 12.2 states that the right hand operand of a shift
operator shall lie in the range zero to one less than the width
in bits of the essential type of the left hand operand. This
patch fixes these violations by casting the left-hand operand to an
appropriate type or by using the relevant BITxx() macros.
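An illustrative fix using the BIT32() macro, assuming a u32 result is
wanted:
    mask = 1 << bit;    /* before: shift count can exceed the int width */
    mask = BIT32(bit);  /* after: left operand is explicitly 32 bits wide */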
JIRA NVGPU-666
Change-Id: I57b6081e9bd98c45ca9f7aa5f35e1d2d66ed0134
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945655
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 14.4 does not allow the use of a non-boolean variable as a
boolean in the controlling expression of an if statement or an
iteration statement.
Fix violations where a non-boolean variable is used as a boolean in the
controlling expression of if and loop statements.
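A typical before/after for an integer used in a controlling expression:
    if (err) {         /* before: non-boolean used as a boolean */
        return err;
    }
    if (err != 0) {    /* after: explicit comparison yields a boolean */
        return err;
    }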
JIRA NVGPU-1022
Change-Id: I61a2d24830428ffc2655bd9c45bb5403c7f22c09
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1943058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. This patch fixes all Rule 17.7 violations involving
standard C functions in common code.
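One common pattern for standard C calls whose result is intentionally
unused (the call shown is illustrative):
    (void) memset(&cmd, 0, sizeof(cmd)); /* discard is now explicit */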
JIRA NVGPU-1036
Change-Id: Id6dea92df371e71b22b54cd7a521fc22812f9b69
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929899
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-On Turing, the LS PMU RTOS is bootstrapped by the SEC2 RTOS,
so the PMU needs some time to boot & get ready to process
requests, since its boot happens later in the flow.
-The issue is that code execution reaches
gk20a_init_pstate_pmu_support() & sends boardobj
commands before the PMU is ready to accept them.
-As a result, the command requests fail because the
PMU is not ready yet; the falcon clock is slower
than the CPU clock.
-To fix the issue, wait for the pmu_ready flag to become
true before starting PSTATE requests to the PMU
(see the sketch below).
-The pmu_ready flag is set to true as soon as the
INIT message is received from the PMU RTOS.
-On pre-Turing chips, the PMU RTOS is ready well before
this point is reached.
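A hedged sketch of the wait; the timeout helper and field names are
assumptions, not the exact call added by this change:
    /* Block until the INIT message handler has set pmu_ready. */
    pmu_wait_message_cond(&g->pmu, nvgpu_get_poll_timeout(g),
            &g->pmu.pmu_ready, (u8)true);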
JIRA NVGPU-1150
Change-Id: I4fb5d4ca6dabc22fac34ae32880d7a1f3164e1b7
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925240
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule-14.4 doesn't allow the usage of function pointers & integer
types as booleans in the controlling expression of an if statement or
an iteration statement.
Fix violations where a function pointer or a function whose return
value is an integer, is used as a boolean in the controlling expression
of if and loop statements.
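An illustrative fix for a function pointer used in a controlling
expression; the op name is hypothetical:
    if (g->ops.pmu.pmu_nsbootstrap) {          /* before */
        g->ops.pmu.pmu_nsbootstrap(pmu);
    }
    if (g->ops.pmu.pmu_nsbootstrap != NULL) {  /* after: explicit NULL check */
        g->ops.pmu.pmu_nsbootstrap(pmu);
    }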
JIRA NVGPU-1021
Change-Id: Ic5336268394ba4396ce80744c25930d2fb44dc42
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1932147
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Call secured_sec2_start() to start SEC2 RTOS ucode execution
on the SEC2 falcon in the nvgpu_init_sec2_support() function.
-Modified nvgpu_init_pmu_support() to bootstrap the PMU
from the SEC2 RTOS by sending a command.
-Added function nvgpu_sec2_bootstrap_ls_falcons() to
bootstrap an LS falcon, taking the falcon id as a parameter &
sending a request to the SEC2 RTOS with the command
NV_SEC2_ACR_CMD_ID_BOOTSTRAP_FALCON.
-Modified gr_gm20b_load_ctxsw_ucode() to
bootstrap the FECS & GPCCS falcons using the SEC2 RTOS
in the cold boot & recovery paths.
-Updated the ldr_cfg parameters for the SEC2 falcon.
-Skip adding PMU ucode details to the non-WPR blob
preparation, since LS PMU falcon bootstrap is not
supported through that path.
JIRA NVGPUT-85
Change-Id: I5f6828e2737e247767814014801671327bb34a4e
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1832363
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the nvgpu repository, we have multiple accesses to methods in
pmu_gk20a.h which perform register accesses. Instead of invoking
these methods directly, they are now called via HALs. Some common
methods such as pmu_wait_message_cond, which do not have any register
accesses, are moved to pmu_ipc.c and their declarations are moved
to pmu.h. Also, changed gm20b_pmu_dbg to
nvgpu_dbg_pmu all across the code base. This removes all
indirect dependencies via gk20a.h on pmu_gk20a.h. As a result,
pmu_gk20a.h is now removed from gk20a.h.
JIRA-597
Change-Id: Id54b2684ca39362fda7626238c3116cd49e92080
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1804283
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals to have same type of
operands when an arithmetic operation is performed.
This fix violations where an arithmetic operation is performed on
signed and unsigned int types.
Jira NVGPU-992
Change-Id: Iab512139a025e035ec82a9dd74245bcf1f3869fb
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1789425
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-15.6 requires that all if-else blocks be enclosed in braces,
including single statement blocks. Fix errors due to single statement
if blocks without braces, introducing the braces.
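An illustrative before/after:
    if (err != 0)       /* before: single statement without braces */
        return err;
    if (err != 0) {     /* after: braces added */
        return err;
    }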
JIRA NVGPU-671
Change-Id: I497fbdb07bb2ec5a404046f06db3c713b3859e8e
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1799525
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Renamed "struct pmu_queue" to "struct
nvgpu_falcon_queue" & moved to falcon.h
-Renamed pmu_queue_* functions to flcn_queue_* &
moved to new file falcon_queue.c
-Created ops for queue functions in struct
nvgpu_falcon_queue to support different queue
types like DMEM/FB-Q (see the sketch below).
-Created ops in nvgpu_falcon_engine_dependency_ops
to add engine specific queue functionality & assigned
correct HAL functions in hal*.c file.
-Made changes in dependent functions as needed to replace
struct pmu_queue & calling queue functions using
nvgpu_falcon_queue data structure.
-Replaced input param "struct nvgpu_pmu *pmu" with
"struct gk20a *g" for pmu ops pmu_queue_head/pmu_queue_tail
& also for functions gk20a_pmu_queue_head()/
gk20a_pmu_queue_tail().
-Made changes in nvgpu_pmu_queue_init() to use nvgpu_falcon_queue
for PMU queue.
-Modified Makefile to include falcon_queue.o
-Modified Makefile.sources to include falcon_queue.c
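A hedged sketch of the per-queue ops idea; the member names and
signatures below are illustrative, not the actual interface:
    struct nvgpu_falcon_queue_ops {
        int (*push)(struct nvgpu_falcon *flcn,
            struct nvgpu_falcon_queue *queue, void *data, u32 size);
        int (*pop)(struct nvgpu_falcon *flcn,
            struct nvgpu_falcon_queue *queue, void *data, u32 size,
            u32 *bytes_read);
        int (*rewind)(struct nvgpu_falcon *flcn,
            struct nvgpu_falcon_queue *queue);
    };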
Change-Id: I956328f6631b7154267fd5a29eaa1826190d99d1
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1776070
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In order to avoid circular dependencies,
rearrange the static inline functions in the
gk20a.h file.
Moved the gk20a_gr_flush_channel_tlb function to
gr_gk20a.c and removed the #include of gr_gk20a.h
from gk20a.h.
Added a helper header, utils.h, to hold
all generic static inline functions which
have no reference to GPU-related structures.
ptimer related functions are moved to
ptimer.h
Implementations for as and pmu are moved to
corresponding files.
JIRA NVGPU-624
Change-Id: I4e956326e773ba037bf3a1696cc4c462085dbbe5
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1781941
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Mostly just including necessary includes to make sure that
global function declarations actually match their implementations.
Also work around pointer munging warning:
/build/ddpx/linux/kernel/nvgpu/drivers/gpu/nvgpu/common/pmu/pmu.c: In function 'nvgpu_pmu_process_init_msg':
/build/ddpx/linux/kernel/nvgpu/drivers/gpu/nvgpu/common/pmu/pmu.c:348:4: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
(*(u32 *)gid_data.signature == PMU_SHA1_GID_SIGNATURE);
Work around this warning by simply moving the type punning.
This code is certainly dangerous - it assumes the endianness
of the header data is the same as the machine this code is
running on. Apparently it works, though, so this ignores
the warning.
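A hedged illustration of "moving the type punning": copy the signature
bytes into a local u32 first instead of dereferencing a punned pointer
in place; the local variables are illustrative:
    u32 gid_sig;
    bool gid_valid;
    (void) memcpy(&gid_sig, gid_data.signature, sizeof(gid_sig));
    gid_valid = (gid_sig == PMU_SHA1_GID_SIGNATURE);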
JIRA NVGPU-525
Change-Id: Id704bae7805440bebfad51c8c8365e6d2b7a39eb
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1692454
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added ops "pmu.alloc_super_surface" to create
memory space for pmu super surface
- Defined method nvgpu_pmu_sysmem_surface_alloc()
to allocate pmu super surface memory & assigned
to "pmu.alloc_super_surface" for gv100
- "pmu.alloc_super_surface" set to NULL for gp106
- Memory space of size "struct nv_pmu_super_surface"
is allocated during pmu sw init setup if
"pmu.alloc_super_surface" is not NULL &
free if error occur.
- Added ops "pmu_ver.config_pmu_cmdline_args_super_surface"
to describe PMU super surface details to PMU ucode
as part of pmu command line args command if
"pmu.alloc_super_surface" is not NULL.
- Updated pmu_cmdline_args_v6 to include member
"struct flcn_mem_desc_v0 super_surface"
- Free allocated memory for PMU super surface in
nvgpu_remove_pmu_support() method
- Added "struct nvgpu_mem super_surface_buf" to "nvgpu_pmu" struct
- Created header file "gpmu_super_surf_if.h" to include interface
about pmu super surface, added "struct nv_pmu_super_surface"
to hold super surface members along with rsvd[x] dummy space
to sync members offset with PMU super surface members.
Change-Id: I2b28912bf4d86a8cc72884e3b023f21c73fb3503
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656571
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Added nvgpu_flcn_mem_scrub_wait() to the
falcon interface layer to poll for IMEM/DMEM
scrub completion for up to 1 msec,
with a status check interval of 10 usec
(see the sketch below).
-Called nvgpu_flcn_mem_scrub_wait() in the
falcon reset interface to check the scrubbing
status upon falcon/engine reset.
-Replaced the mem scrubbing wait code in
pmu_enable_hw() with a call to
nvgpu_flcn_mem_scrub_wait().
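A hedged sketch of the poll loop; the scrub-status helper is
hypothetical:
    int nvgpu_flcn_mem_scrub_wait(struct nvgpu_falcon *flcn)
    {
        unsigned int waited_us = 0;
        do {
            if (falcon_mem_scrub_done(flcn)) { /* hypothetical status read */
                return 0;
            }
            nvgpu_udelay(10);                  /* 10 usec check interval */
            waited_us += 10;
        } while (waited_us < 1000);            /* give up after ~1 msec */
        return -ETIMEDOUT;
    }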
Bug 200346134
Change-Id: Iac68e24dea466f6dd5facc371947269db64d238d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1598644
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Created the nvgpu_kill_task_pg_init() method to set
the pmu state to PMU_STATE_EXIT, stop the thread,
and poll to confirm the thread has stopped.
- Check the PMU/SEC2 ACR secure boot completion
status & kill the pg init thread if the ACR boot
exits with an error, i.e. fails to validate &
boot LS-PMU.
- Set the pmu state to PMU_STATE_OFF after the thread
kill during ACR boot failure.
Issue: the pg init task blocks if PMU boot fails &
causes the kernel to show the message "task nvgpu_pg_init_g:2120
blocked for more than 120 seconds".
Bug 200346134
Change-Id: I5270426080dcd628ccca4df798005294c19767a0
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1582593
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU response (intr callback for messages) can run faster
than the kthread posting commands to the PMU.
This causes the PMU message callback to skip an important pmu
state change (which happens just after the PMU command is posted).
Solution:
The state change should be triggered only from inside the intr callback.
Other places can only update the pmu_state variable.
This change also adds an error check that prints a message in case the
command post fails.
JIRA GPUT19X-20
Change-Id: Ib0a4275440455342a898c93ea9d86c5822e039a7
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583577
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added a status check for the nvgpu_pmu_disable_elpg() return value
& print error information upon failure (see the sketch below).
- The CIDs below are due to the missing status check of the
nvgpu_pmu_disable_elpg() return value; this CL fixes them:
2624546
2624547
2624548
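A sketch of the added check (the surrounding err variable is assumed):
    err = nvgpu_pmu_disable_elpg(g);
    if (err != 0) {
        nvgpu_err(g, "failed to disable elpg: %d", err);
    }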
Bug 200291879
Change-Id: I263fc6bc9e2667af478bfd7160fe205167556f99
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1565998
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The pg init task hogs the kernel by waiting on a condition with no
timeout for a pg state change, but the pg state may not change for a
long time depending on runtime conditions, so we get warning spews in
the kernel about the blocked task.
We solve this by making the condition wait interruptible.
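A hedged sketch of the interruptible wait; the condition-variable macro
and field names are assumptions based on nvgpu's cond API:
    /* Was an uninterruptible wait with no timeout; signals can now wake it. */
    err = NVGPU_COND_WAIT_INTERRUPTIBLE(&pmu->pg_init.wq,
            pmu->pg_init.state_change, 0 /* no timeout */);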
bug 200346134
Change-Id: I8a3349031acc5065b767dc22eec6e5df113d3ad7
Signed-off-by: David Nieto <dmartineznie@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1566545
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>