- Earlier, with the DMEM queue, a command that needed an in/out
payload required allocating space in DMEM or an FB surface and
copying the payload into that space before sending the command,
which then carried only the payload info.
- With FBQ, the command's in/out payload is part of the FB command
queue element itself, so no separate DMEM/FB-surface allocation is
needed. Added changes to handle the FBQ payload request while
sending a command, and in the response handler to extract data
from the out payload.
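A minimal sketch of the FBQ element layout this relies on; the type
and field names here are hypothetical, not the actual nvgpu
structures:

    /* Hypothetical FBQ element: the in/out payload is embedded in the
     * queue element itself, so no separate DMEM/FB-surface allocation
     * is needed before sending the command. */
    struct fbq_cmd_element {
        struct pmu_hdr hdr;                 /* command header */
        u8 cmd[FBQ_CMD_MAX_SIZE];           /* command body */
        u8 payload[FBQ_PAYLOAD_MAX_SIZE];   /* in/out payload */
    };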
JIRA NVGPU-1579
Change-Id: Ic256523db38badb1f9c14cbdb98dc9f70934606d
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966741
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added the NVGPU_SUPPORT_PMU_RTOS_FBQ feature flag to enable
FBQ support.
- Added support to read the PMU RTOS init message from the FBQ
message queue, process it, and construct the FBQs used for
further communication with the PMU RTOS ucode.
- Added functions to init the FB command/message queues as per
the init message inputs from the PMU RTOS ucode.
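A rough sketch of gating the new path on the feature flag; the flag
comes from this change and nvgpu_is_enabled() is the existing flag
query, but the init helpers named here are illustrative:

    /* Pick the FBQ init path when the platform enables it. */
    if (nvgpu_is_enabled(g, NVGPU_SUPPORT_PMU_RTOS_FBQ)) {
        /* read the PMU RTOS init message from the FBQ message queue
         * and construct the FB command/message queues from it */
        err = pmu_process_init_msg_fbq(g, pmu);
    } else {
        err = pmu_process_init_msg(g, pmu);
    }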
JIRA NVGPU-1578
Change-Id: If2678d20f7195e6e8cba354b7dca5117003e3c29
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964068
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
All slave clocks should be quantized as per the step size.
TU104 uses a 15 MHz step size (see the sketch below).
Enable clk_arb without enabling clk_freq_controller;
clk_freq_controller is not needed for the Auto use case.
Increase maxclk only when the master clock is less than the slave
clock. This is needed when gpcclk is less than the slave P0 min.
Use get_status to get Vim and use it for the change sequencer.
Add support for Device Events.
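A sketch of the quantization step, assuming simple round-down to the
step size (the helper name is hypothetical):

    /* Quantize a requested frequency (MHz) to the chip step size,
     * e.g. 15 MHz on TU104: a 997 MHz request becomes 990 MHz
     * (997 / 15 = 66, 66 * 15 = 990). */
    static u32 clk_arb_quantize_mhz(u32 freq_mhz, u32 step_mhz)
    {
        return (freq_mhz / step_mhz) * step_mhz;
    }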
Bug 200454682
Bug 2481917
Change-Id: Ie0c404f4b77e41f6a1719b52d6e29a5ac757b41b
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1994831
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vaikundanathan S <vaikuns@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With the current implementation there is 3 MB of free space due to
the gap between the WPR and NON-WPR offsets, and PMU buffers end up
allocated partly in this gap and partly after WPR.
So, WPR is now allocated at offset 0 of the VIDMEM bootstrap region
and NON-WPR at offset WPR + WPR_SIZE, making the free space
contiguous up to the end of the bootstrap region.
Increased the WPR/NON-WPR size from 1 MB to 2 MB since the LS falcon
managed count increased to 4 for Turing; it remains 2 MB for previous
chips too.
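The resulting bootstrap-region layout, as an illustrative diagram
(offsets relative to the start of the VIDMEM bootstrap region):

    0                +--------------------------+
                     |   WPR      (2 MB)        |
    WPR_SIZE         +--------------------------+
                     |   NON-WPR  (2 MB)        |
    2 * WPR_SIZE     +--------------------------+
                     |   contiguous free space  |
                     |   to end of bootstrap    |
                     |   region                 |
                     +--------------------------+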
Bug 200476497
Change-Id: I92ca5bc9a571330d75a66ce820a1c82442c1f200
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1994653
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename __nvgpu_set_enabled() to nvgpu_set_enabled(). The original
double underscore was present to indicate that this function is a
function with potentially unintended side effects (enabling a feature
has wide ranging impact).
To preserve this documentation, a comment was added to convey that
this function must be used with care.
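A usage sketch with the renamed function (the flag shown is just an
example; the signature is assumed from the existing enabled-flag
helpers):

    /* Enabling a feature has wide-ranging impact; use with care. */
    nvgpu_set_enabled(g, NVGPU_MM_UNIFIED_MEMORY, true);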
JIRA NVGPU-1029
Change-Id: I8bfc6fa4c17743f9f8056cb6a7a0f66229ca2583
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989434
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses.
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation with an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that replace container_of() references in the
clk arb code:
* nvgpu_clk_dev_from_refcount
* nvgpu_clk_session_from_refcount
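A sketch of the pattern, following the nvgpu_list_node style; the
refcount field name and type are assumptions about the clk arb
structs:

    /* Recover the enclosing nvgpu_clk_dev from its embedded refcount,
     * mirroring the nvgpu_list_node/nvgpu_rbtree_node implementation. */
    static struct nvgpu_clk_dev *nvgpu_clk_dev_from_refcount(
            struct nvgpu_ref *refcount)
    {
        return (struct nvgpu_clk_dev *)((uintptr_t)refcount -
                offsetof(struct nvgpu_clk_dev, refcount));
    }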
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
JIRA NVGPU-782
Change-Id: I612990e9c27f10d0ce3ac76729529aa1eb15d42a
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993796
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.
To satisfy the requirements of this rule, a "U" suffix is added to all
the integer literals used for initializing the _pginitseq_gv11b array.
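For example (the register/value pair is illustrative, not an actual
entry of the array):

    /* before */  { 0x0010e06c, 0x00000001 },
    /* after  */  { 0x0010e06cU, 0x00000001U },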
JIRA NVGPU-844
Change-Id: I6200936455117a6205bd282365d4bc90ee1ccccc
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990492
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.
To satisfy the requirements of this rule, a "U" suffix is added to all
the integer literals used for initializing the _pginitseq_gp10b array.
JIRA NVGPU-844
Change-Id: I573dd94fb0fc354d297fa94ab1d9a74d5a2b1809
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990482
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.
To satisfy the requirements of this rule, a "U" suffix is added to all
the integer literals used for initializing the _pginitseq_gm20b array.
JIRA NVGPU-844
Change-Id: I04f3b4db1601c1d5c21d7c48a8a513db4e12a542
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990479
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add clk arbiter support for tu104.
Set up clk_arb supporting functions in hal_tu104.
TU104 supports GPCCLK and not GPC2CLK, so remove the multiplication
and division by 2 used to convert between gpcclk and gpc2clk
(illustrated below).
Provide support for the following features:
* Domains: currently GPCCLK is supported
* Clk range: from P0 min to P0 max
* Freq points: gives the VF curve from the PMU
* Default: default value (P0 max)
* Current Pstate: P0 is supported
All frequency-change requests are validated against the P0 values.
Out-of-bound values are trimmed to match the Pstate limits.
Multiple requests are supported and the max of them will be set.
Requests are sent to the PMU via the change sequencer.
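An illustrative before/after of dropping the conversion (not the
exact nvgpu code):

    /* before: arbiter tracked gpc2clk, so requests were doubled on
     * the way in and halved on the way out */
    gpc2clk_target = gpcclk_request * 2U;
    gpcclk_actual  = gpc2clk_actual / 2U;

    /* after: TU104 exposes GPCCLK directly, so requests pass through
     * and are only clamped to the P0 min/max range */
    gpcclk_target = gpcclk_request;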
Bug 200454682
JIRA NVGPU-1653
Change-Id: I36735fa50c7963830ebc569a2ea2a2d7aafcf2ab
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1982078
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 10.1 mandates that the correct data types are used as
operands of operators. For example, only unsigned integers can be used
as operands of bitwise operators.
This patch fixes Rule 10.1 violations for drivers/gpu/nvgpu/common.
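A typical fix of this kind (illustrative):

    /* before: shifting a signed literal violates Rule 10.1 */
    u32 bit = 1 << i;

    /* after: use an unsigned literal as the shift operand */
    u32 bit = 1U << i;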
JIRA NVGPU-777
JIRA NVGPU-1006
Change-Id: I53fe750f1b41816a183c595e5beb7bd263c27725
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1971221
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a common/falcon/falcon_priv.h file that will contain declarations
private to the Falcon unit. Clean up the falcon header file inclusions
(see the sketch after the list of rules below).
Rules followed:
1. Remove unneeded header file includes.
2. Falcon unit source files will only include falcon_priv.h.
3. Base architecture Falcon source (falcon_gk20a.c) will only
include hw_falcon_*.h file.
4. Derived architecture source will include hw headers if needed.
5. Other units should not include hw headers for Falcon.
6. HAL source will include the Falcon unit header if needed.
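Applied to the falcon sources, the includes would look roughly like
this (paths are a sketch of the rules above, not a full listing):

    /* common/falcon/falcon.c -- falcon unit source (rule 2) */
    #include "falcon_priv.h"

    /* common/falcon/falcon_gk20a.c -- base architecture source,
     * only the hw_falcon_*.h header (rule 3) */
    #include <nvgpu/hw/gk20a/hw_falcon_gk20a.h>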
JIRA NVGPU-1459
Change-Id: Ia9f03f7b577fe10b8c0f417e6302fa7ebd4131cc
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1961634
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
With the intention of making the falcon header free of private data,
all falcon struct members in gk20a (pmu.flcn, sec2.flcn, fecs_flcn,
gpccs_flcn, nvdec_flcn, minion_flcn, gsp_flcn) are made pointers to
struct nvgpu_falcon. The falcon structures are allocated/deallocated
by falcon_sw_init and _free respectively.
While at it, remove the duplicate gk20a.pmu_flcn and gk20a.sec2_flcn,
refactor the flcn_id assignment and introduce falcon_hal_sw_free.
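A sketch of the allocation in falcon_sw_init (error handling trimmed;
the exact code and field names may differ):

    /* Allocate the per-falcon struct instead of embedding it in gk20a. */
    struct nvgpu_falcon *flcn = nvgpu_kzalloc(g, sizeof(*flcn));

    if (flcn == NULL) {
        return -ENOMEM;
    }
    flcn->flcn_id = flcn_id;
    g->pmu.flcn = flcn;    /* e.g. for FALCON_ID_PMU */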
JIRA NVGPU-1594
Change-Id: I222086cf28215ea8ecf9a6166284d5cc506bb0c5
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1968242
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
pmu.g and sec2.g were set in nvgpu_falcon_sw_init. They are now set
in nvgpu_early_init_pmu_sw and nvgpu_init_sec2_setup_sw. Pass the
gk20a and pmu structs to nvgpu_init_pmu_fw_support, as is done for
sec2.
pmu_fw_support and sec2_setup_sw are separated from their respective
init sequences and are now called earlier, since the ->g member is
needed earlier and most of the setup is SW-only.
nvgpu_init_pmu_fw_ver_ops is now exported.
JIRA NVGPU-1594
Change-Id: I6c71c6730ce06dad190159269e2cc60301f0237b
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1968241
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to
return void. Most of the time, callers of pmu_wait_message_cond ignore the
return value. This patch changes the signature to return void, and adds
a new pmu_wait_message_cond_status for callers that need the return
information.
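The resulting split, roughly (the parameter list is assumed from the
existing helper):

    /* Callers that ignore the result use the void variant ... */
    void pmu_wait_message_cond(struct nvgpu_pmu *pmu, u32 timeout_ms,
                void *var, u8 val);

    /* ... and callers that need the outcome use the _status variant,
     * whose return value must be checked (Rule 17.7). */
    int pmu_wait_message_cond_status(struct nvgpu_pmu *pmu, u32 timeout_ms,
                void *var, u8 val);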
JIRA NVGPU-677
Change-Id: Ibaa15b04c4d40a7de73f39a7d6eb68f9e3da71f3
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978211
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.1 states that operands shall not be of an
inappropriate essential type.
For example, the use of bitwise OR on signed values is not
permitted.
Both the pmu_read_message() and sec2_read_message() routines
do this in some cases when an error (or unexpected number of
bytes) is returned from the falcon queue pop/rewind routines.
This patch eliminates the MISRA violations by modifying these
cases to return the falcon queue operation error unmodified in the
corresponding status argument (or use -EINVAL in the event the
requested number of bytes isn't returned).
To reduce code duplication, new pmu_falcon_queue_read() and
sec2_falcon_queue_read() routines are added to wrap the
error-handling code for the respective units.
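A sketch of the wrapper shape; the queue-pop call and field names are
assumptions, only the error handling reflects this change:

    static int pmu_falcon_queue_read(struct nvgpu_pmu *pmu,
            struct nvgpu_falcon_queue *queue, void *data, u32 size,
            int *status)
    {
        u32 bytes_read;
        int err;

        err = nvgpu_falcon_queue_pop(pmu->flcn, queue, data, size,
                &bytes_read);
        if (err != 0) {
            /* pass the falcon queue error through unmodified */
            *status = err;
            return err;
        }
        if (bytes_read != size) {
            /* short read: report -EINVAL instead of OR-ing bits */
            *status = -EINVAL;
            return -EINVAL;
        }
        return 0;
    }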
Note that higher up in the call sequence (tu104_sec2_isr() in the
sec2_read_message() case and gk20a_pmu_isr() in the pmu_read_message()
case) the actual status value is only checked for non-zero or ignored
altogether. So it appears no existing code would depend on the
bitwise OR result anyway.
JIRA NVGPU-650
Change-Id: Id303523ac096f1989e612044082e0a62ae8179c2
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972624
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 16.4 requires a non-empty default label for every
switch statement.
MISRA Rule 16.6 requires that every switch statement have
at least two switch-clauses.
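A typical fix (the case labels are illustrative):

    switch (intr) {
    case PMU_INTR_HALT:
        handle_halt(g);
        break;
    case PMU_INTR_SWGEN0:
        handle_swgen0(g);
        break;
    default:
        /* non-empty default clause satisfies Rule 16.4 */
        nvgpu_err(g, "unhandled interrupt 0x%x", intr);
        break;
    }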
JIRA NVGPU-1545
JIRA NVGPU-1557
Change-Id: I2d124ac0d66d8c490c59d262ddc647045d455633
Signed-off-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970216
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU falcon base address was being set without invoking a HAL API.
Remove FALCON_PWR_BASE. This patch defines the
gpu_ops.pmu.falcon_base_addr HAL API to get this base address.
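The HAL shape, roughly (the function name and the use of the old
FALCON_PWR_BASE value as a literal are a sketch; the real code would
use a generated hw define and is wired up in the per-chip HAL init):

    static u32 gk20a_pmu_falcon_base_addr(void)
    {
        /* former FALCON_PWR_BASE value, returned via the HAL op */
        return 0x0010a000U;
    }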
JIRA NVGPU-1587
Change-Id: I5c3f27e89bdcc775025bc8d4fa9cf0af11ceb002
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1969428
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
PMU counters #0 and #4 are used to count total cycles and busy cycles.
These counts are used by podgov to estimate GPU load.
PMU idle intr status register is used to monitor overflow. Overflow
rarely occurs because frequency governor reads and resets the counters
at a high cadence. When overflow occurs, 100% work load is reported to
frequency governor.
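The load estimate, roughly (counter indices from this change; the
variable names and overflow check are illustrative):

    /* busy and total cycles come from PMU idle counters #4 and #0 */
    if (overflow) {
        load_pct = 100U;    /* report full load on overflow */
    } else if (total_cycles != 0U) {
        load_pct = busy_cycles * 100U / total_cycles;
    } else {
        load_pct = 0U;
    }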
Bug 1963732
Change-Id: I046480ebde162e6eda24577932b96cfd91b77c69
Signed-off-by: Peng Liu <pengliu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1939547
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.
Adding "U" at the end of the integer literals or casting operands
to have same type of operands when an arithmetic operation is
performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
JIRA NVGPU-992
Change-Id: I27e3e59c3559c377b4bd3cbcfced90fdf90350f2
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1921459
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 11.8 states that a cast shall not remove any const or
volatile qualification from the type pointed to by a pointer.
The linux kernel's container_of() macro contains such a violation as
it generates a pointer to a caller-specified (and so possibly non-const
qualified) type by casting an internally declared const pointer.
gk20a_from_pmu() uses the container_of() macro to convert
from a struct nvgpu_pmu pointer to a struct gk20a pointer.
However, struct nvgpu_pmu already has a back pointer to struct gk20a,
so this change modifies gk20a_from_pmu() to just return this back
pointer rather than use container_of().
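The resulting implementation is essentially just the back pointer
(the field name g follows the description above):

    struct gk20a *gk20a_from_pmu(struct nvgpu_pmu *pmu)
    {
        /* use the existing back pointer instead of container_of() */
        return pmu->g;
    }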
JIRA NVGPU-862
Change-Id: If0e2481c1cf104c2fa6b89334e20e75705bf9c44
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955540
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>