The nvgpu repository has multiple call sites for methods declared in
pmu_gk20a.h that perform register accesses. Instead of invoking these
methods directly, they are now called via HALs. Common methods such as
pmu_wait_message_cond, which do not perform any register accesses, are
moved to pmu_ipc.c and their declarations are moved to pmu.h. Also,
gm20b_pmu_dbg is renamed to nvgpu_dbg_pmu across the code base. This
removes all indirect dependencies on pmu_gk20a.h via gk20a.h, and as a
result pmu_gk20a.h is no longer included from gk20a.h.
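A minimal sketch of the call-site change, using pmu_mutex_acquire as a
representative register-access method (the exact HAL field layout may
differ):

    /* before: direct call into the pmu_gk20a.h implementation */
    err = pmu_mutex_acquire(pmu, PMU_MUTEX_ID_FIFO, &token);

    /* after: the register-touching path goes through the HAL */
    err = g->ops.pmu.pmu_mutex_acquire(pmu, PMU_MUTEX_ID_FIFO, &token);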
JIRA-597
Change-Id: Id54b2684ca39362fda7626238c3116cd49e92080
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1804283
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows arithmetic operations on operands of the
same essential type category. Add a "U" suffix to integer literals so
that the operands share the same essential type when an arithmetic
operation is performed. This fixes violations where an arithmetic
operation mixes signed and unsigned int types.
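A generic illustration of the pattern (not a specific nvgpu call
site):

    u32 delay = 10U;

    /* before: mixes essentially unsigned 'delay' with signed literal 1 */
    delay = delay + 1;

    /* after: the "U" suffix keeps both operands essentially unsigned */
    delay = delay + 1U;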
Jira NVGPU-992
Change-Id: Iab512139a025e035ec82a9dd74245bcf1f3869fb
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1789425
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 15.6 requires that all if-else blocks be enclosed in
braces, including single-statement blocks. Fix violations caused by
single-statement if blocks without braces by introducing the braces.
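The shape of the fix, on a generic single-statement block:

    /* before: single-statement if without braces (non-compliant) */
    if (err != 0)
        return err;

    /* after: braces added per Rule 15.6 */
    if (err != 0) {
        return err;
    }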
JIRA NVGPU-671
Change-Id: I497fbdb07bb2ec5a404046f06db3c713b3859e8e
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1799525
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Commit c61e21c868 fixed a race condition in the PMU state
transition.
- The race condition is that the PMU response (intr callback for
messages) can run faster than the kthread posting commands to the PMU,
so the PMU message callback may skip an important PMU state change.
- Commit c61e21c868 introduced a fix where the PMU state was only
changed from the callback, while other places could only update the
pmu_state variable.
- However, that commit introduced a regression as follows:
- When the PMU state is PMU_STATE_INIT_RECEIVED, we loop over every
engine supported by the GPU: if state == PMU_STATE_INIT_RECEIVED,
change the state to PMU_STATE_ELPG_BOOTING and init ELPG; otherwise
throw an error saying "PMU INIT not received".
- Now, if the GPU supports multiple engines, the first engine sees
that pmu_state is PMU_STATE_INIT_RECEIVED and changes it to
PMU_STATE_ELPG_BOOTING. From the second engine onwards, since the
state has already changed to PMU_STATE_ELPG_BOOTING, every engine
except the first throws the error "PMU INIT not received".
- This patch fixes the issue by changing the PMU state from
PMU_STATE_INIT_RECEIVED to PMU_STATE_ELPG_BOOTING only once, before
looping over the engines (see the sketch after this list).
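A minimal sketch of the intended flow; the helper names and loop shape
are illustrative, not the exact nvgpu PG init code:

    /* check and change the state once, outside the per-engine loop */
    if (pmu->pmu_state != PMU_STATE_INIT_RECEIVED) {
        nvgpu_err(g, "PMU INIT not received");
        return -EINVAL;
    }
    nvgpu_pmu_state_change(g, PMU_STATE_ELPG_BOOTING, false);

    for (pg_engine_id = 0; pg_engine_id < num_pg_engines; pg_engine_id++) {
        /* post the per-engine ELPG init commands; no state check here */
    }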
Bug 200372838
JIRA EVLR-2164
Change-Id: Ic8c954d14acb1d6ec3adcbc4bcf4d4745542d9f0
Signed-off-by: Deepak Bhosale <dbhosale@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1769814
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-by: Aparna Das <aparnad@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU ucode records the feature list supported by a particular chip
as a support mask sent via PMU_PG_PARAM_CMD_GR_INIT_PARAM. It then
enables a selected set of features through the enable mask sent via
the PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE cmd.
Until now, only the ELPG state machine mask was enabled, so only the
ELPG state machine was executed while other crucial steps in the ELPG
entry/exit sequence were skipped.
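A rough sketch of the intent; the command fields and feature-mask
macros below are illustrative placeholders, not the exact PMU ucode
interface:

    /* previously only the ELPG state machine feature bit was enabled */
    cmd.cmd.pg.sf_mask_update.enabled_mask =
        PMU_PG_FEATURE_GR_ELPG_STATE_MACHINE;

    /* also enable the other bits advertised in the support mask so the
     * full ELPG entry/exit sequence is executed by the PMU ucode */
    cmd.cmd.pg.sf_mask_update.enabled_mask |=
        PMU_PG_FEATURE_GR_SAVE_RESTORE |
        PMU_PG_FEATURE_GR_POWER_GATING;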
Bug 200392620.
Bug 200296076.
Change-Id: I5e1800980990c146c731537290cb7d4c07e937c3
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665767
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU response (intr callback for messages) can run faster than the
kthread posting commands to the PMU.
This causes the PMU message callback to skip an important PMU state
change (which happens just after the PMU command is posted).
Solution:
The state change should be triggered only from inside the intr
callback. Other places can only update the pmu_state variable.
This change also adds an error check to print a message in case the
command post fails.
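A minimal sketch of the resulting split, assuming a helper of the form
nvgpu_pmu_state_change(g, new_state, post_change_event); names are
illustrative:

    /* intr/message callback: perform the state change and post the
     * state-change event so waiters are woken */
    nvgpu_pmu_state_change(g, PMU_STATE_ELPG_BOOTED, true);

    /* command-posting kthread: only record the new value of the
     * pmu_state variable, without posting the event */
    nvgpu_pmu_state_change(g, PMU_STATE_ELPG_BOOTING, false);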
JIRA GPUT19X-20
Change-Id: Ib0a4275440455342a898c93ea9d86c5822e039a7
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1583577
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In pmu_handle_pg_elpg_msg, when processing ELPG_MSG_DISALLOW_ACK, if
the current PMU state is ELPG_BOOTING and GR_POWER_GATING is not
supported, make sure nvgpu_pmu_state_change updates the state_change
flag, since the PMU is now fully initialized. In particular, this
fixes PMU boot for dgpu, which does not support GR_POWER_GATING.
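A sketch of the shape of the check; the power-gating query helper and
the exact state constant used here are illustrative:

    case PMU_PG_ELPG_MSG_DISALLOW_ACK:
        if (pmu->pmu_state == PMU_STATE_ELPG_BOOTING &&
            !gr_power_gating_supported(g)) {
            /* no GR power gating (e.g. dgpu): PMU init is complete
             * here, so post the state-change event */
            nvgpu_pmu_state_change(g, PMU_STATE_STARTED, true);
        }
        break;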
JIRA EVLR-1776
Change-Id: I2feb97b0fb8248e9cb7945ac3189877c21815a4a
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1539102
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
ACCESS_ONCE() is used to make sure that a given place in the code
accesses a variable exactly once. It prevents the compiler from
rearranging the read to happen earlier.
Remove its use from cases where rearranging the read does not create
problems.
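For context, the kind of read involved; the field and sleep helper
names are illustrative, not one of the specific call sites touched
here:

    /* ACCESS_ONCE() forces a single load that the compiler cannot move
     * earlier, e.g. when polling a flag set by another thread */
    while (!ACCESS_ONCE(pmu->pg_init.state_change)) {
        nvgpu_usleep_range(50, 100);
    }

    /* where surrounding locking already orders the read, a plain
     * access is equivalent and ACCESS_ONCE() can be dropped */
    state_change = pmu->pg_init.state_change;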
Change-Id: I340f375e8fecc31f3a3fab543256069cb4c682dc
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1531649
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
- Moved PG related code to pmu_pg.c under the common/pmu folder:
  PG state machine support methods
  PG ACK handlers
  AELPG methods
  PG enable/disable methods
- Prepended nvgpu_ to elpg/aelpg global methods by replacing the
  gk20a_ prefix (see the example below).
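For callers, the rename looks like this (one representative pair,
assuming the signature is unchanged):

    /* before */
    err = gk20a_pmu_enable_elpg(g);

    /* after */
    err = nvgpu_pmu_enable_elpg(g);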
JIRA NVGPU-97
Change-Id: I2148a69ff86b5c5d43c521ff6e241db84afafd82
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/1498363
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>