gm20b_pbdma_handle_intr only initializes pbdma_status
when either pbdma_intr_0 or pbdma_intr_1 is pending.
This could leave gv11b_fifo_preempt_poll_pbdma using an
uninitialized chsw_status, causing unexpected failures
in unit tests.
Read pbdma_status before checking for pbdma interrupts,
to make sure pbdma_status always contains valid data.
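As a rough illustration of the fix (register and helper names follow
nvgpu conventions but are assumptions, not the exact code), the
status read moves ahead of the interrupt check:

    /* Read PBDMA status unconditionally, before looking at the
     * interrupt registers, so callers never see stale data. */
    g->ops.pbdma_status.read_pbdma_status_info(g, pbdma_id,
                                               &pbdma_status);

    intr_0 = nvgpu_readl(g, pbdma_intr_0_r(pbdma_id));
    intr_1 = nvgpu_readl(g, pbdma_intr_1_r(pbdma_id));

    if ((intr_0 != 0U) || (intr_1 != 0U)) {
            /* handle pending interrupts using the fresh status */
    }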
Jira NVGPU-4887
Change-Id: If1bdb24ae04b58e85e4217c9c0854c01ca65525b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2279111
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Advisory Rule 12.1 states that the precedence of operators
within expressions should be made explicit.
This change removes the Advisory Rule 12.1 violations from the
implementation of the is_power_of_2() macro.
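For illustration, a compliant form fully parenthesizes every operand
and sub-expression so that no precedence is left implicit (a sketch;
the exact nvgpu macro body may differ):

    #define is_power_of_2(x) \
            (((x) != 0U) && (((x) & ((x) - 1U)) == 0U))

The ((x) & ((x) - 1U)) term clears the lowest set bit of x, so it is
zero exactly when x has a single bit set, i.e. is a power of two.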
Jira NVGPU-3178
Change-Id: I75117e9ab6e47beddebeebaa23c6408b6ceb88ad
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2278574
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In the current logic for nvgpu_timeout_expired(), the function always
returns 0 if fault injection is enabled. This only helps for testing
scenarios where the timeout has not expired; if
nvgpu_timeout_expired() is used in a while(true) loop, it is
impossible to break out of the loop. This patch modifies
nvgpu_timeout_expired() so that, with fault injection enabled, the
timeout does not expire while the fault injection counter is
non-zero. The function now returns -ETIMEDOUT when fault injection
is enabled and the counter is zero.
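Sketch of the new behavior (the enable flag and counter below are
illustrative stand-ins for nvgpu's per-thread fault-injection state):

    #include <errno.h>

    static unsigned int fi_enabled;  /* illustrative FI state */
    static unsigned int fi_counter;

    int nvgpu_timeout_expired_sketch(void)
    {
            if (fi_enabled != 0U) {
                    if (fi_counter > 0U) {
                            fi_counter--;
                            return 0;          /* not expired yet */
                    }
                    return -ETIMEDOUT;         /* counter hit zero */
            }
            /* ... normal timer-based expiry check ... */
            return 0;
    }

A test can set the counter to N to let a polling loop run N
iterations before the injected timeout fires, which makes while(true)
loops built on this helper breakable under fault injection.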
Jira NVGPU-4675
Change-Id: I494031698ade19cf1ec5b75e4dbe5a1157da2aa7
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2275290
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The functions below in the common.gr.obj_ctx subunit include
unnecessary asserts to ensure a value is not truncated when cast
to u32.
nvgpu_gr_obj_ctx_gr_ctx_alloc()
Use nvgpu_safe_cast_u64_to_u32() and remove the unnecessary
asserts.
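The pattern, roughly (the names around the call are illustrative):

    /* Before: explicit truncation assert, then a plain cast. */
    nvgpu_assert(size <= (u64)U32_MAX);
    ctx_size = (u32)size;

    /* After: the checked-cast helper performs the same validation
     * internally. */
    ctx_size = nvgpu_safe_cast_u64_to_u32(size);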
Jira NVGPU-4778
Change-Id: Ic06c2f4131b3bba35222f7de5441f82ecee6d83d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2277158
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch removes the dependency between pmu.clk and hal.clk.
The implementation is as follows (a sketch follows the list):
* Init the clk domains inside the pmu.clk unit.
* Store this in a member variable and pass it to the hal.clk unit.
* Use this to determine the valid clock domains and read the
  monitor registers for each valid domain.
* Return the domain with an error code.
* With this, all the clk domain data is removed from hal.clk and
  only pmu.clk uses it, as it owns it.
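A hedged sketch of the resulting hal.clk side; all names here are
illustrative rather than exact nvgpu symbols:

    #include <stdint.h>

    #define BIT32(i) (((uint32_t)1U) << (i))

    /* hal.clk no longer owns a domain list: pmu.clk passes in a
     * mask of valid domains and gets per-domain readings or an
     * error code back. */
    static int clk_read_monitors_sketch(uint32_t valid_domain_mask)
    {
            uint32_t d;

            for (d = 0U; d < 32U; d++) {
                    if ((valid_domain_mask & BIT32(d)) == 0U) {
                            continue;  /* not a valid domain, skip */
                    }
                    /* read frequency monitor registers for domain d */
            }
            return 0;
    }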
JIRA NVGPU-4491
Change-Id: Ie57b2472cfaacfd2ce43ac7f95702bd95fb8bbaa
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2272416
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
To achieve permanent fault coverage, the CTAs launched by
each kernel in the mission and redundant contexts must execute on
different hardware resources. This feature modifies the software so
that the virtual SM id to TPC mapping differs across the mission
and redundant contexts. The virtual SM identifier to TPC mapping is
done by nvgpu when setting up the patch context.
The recommendation for the redundant setting is to offset the
assignment by one TPC, and not by one GPC. This ensures both GPC
and TPC diversity; SM and quadrant diversity will happen naturally.
For kernels with few CTAs, the diversity is guaranteed to be 100%.
In the case of completely random CTA allocation, e.g. a large
number of CTAs in the waiting queue, the diversity is 1 - 1/#SM,
i.e. 87.5% for GV11B and 97.9% for TU104.
Added the NvGpu CFLAG "CONFIG_NVGPU_SM_DIVERSITY" to enable/disable
SM diversity support.
This support is only enabled in gv11b and tu104 QNX non-safety
builds.
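The quoted figures follow from the formula with #SM = 8 on GV11B
(1 - 1/8 = 87.5%) and #SM = 48 on TU104 (1 - 1/48 ~= 97.9%). A
minimal sketch of the offset-by-one-TPC mapping, with names and
structure illustrative rather than the actual patch-context code:

    #include <stdbool.h>
    #include <stdint.h>

    /* Rotate the virtual SM id -> TPC assignment by one TPC for
     * the redundant context, so mission and redundant CTAs land
     * on different TPCs. */
    static uint32_t vsmid_to_tpc_sketch(uint32_t virtual_sm_id,
                                        uint32_t num_tpc,
                                        bool redundant_context)
    {
            uint32_t offset = redundant_context ? 1U : 0U;

            return (virtual_sm_id + offset) % num_tpc;
    }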
JIRA NVGPU-4685
Change-Id: I8e3eaa72d8cf7aff97f61e4c2abd10b2afe0fe8b
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When generating Doxygen XML, if an inner structure is anonymous, the
generated XML will remove the inner structure and merge its members
with the parent structure. This is not ideal for tooling, so this
patch adds a name to a few anonymous structures.
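For example (hypothetical structure names):

    /* Before: Doxygen folds the anonymous struct's members into
     * the parent compound in the generated XML. */
    struct parent {
            struct {
                    int member;
            } inner;
    };

    /* After: the named inner struct remains a distinct compound. */
    struct parent {
            struct parent_inner {
                    int member;
            } inner;
    };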
JIRA NVGPU-3510
Change-Id: I42556a6b2a2f9a156d9a74fa894197c845656adc
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2274706
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_engine_get_gr_runlist_id gets the first instance of an
active GR engine using nvgpu_engine_get_ids, so the engine_id is
already known to be valid.
Remove the call to nvgpu_engine_get_active_eng_info (which iterates
over all engines) and fetch f->engine_info[engine_id] instead.
Also remove the NULL check on engine_info, which could never
trigger.
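Roughly (field and error names are illustrative):

    /* Before: re-searches all engines and checks for NULL. */
    info = nvgpu_engine_get_active_eng_info(g, engine_id);
    if (info == NULL) {
            return U32_MAX;  /* unreachable: engine_id is valid */
    }
    runlist_id = info->runlist_id;

    /* After: direct lookup, no NULL check needed. */
    runlist_id = f->engine_info[engine_id].runlist_id;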
Jira NVGPU-4511
Change-Id: Ifcc0851e3d14d862e2ed7b21ea57f17a66eca9dd
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2263617
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The event_id_list and event_id_list_locks fields in nvgpu_tsg are
only needed when CONFIG_NVGPU_CHANNEL_TSG_CONTROL is defined.
Conditionally compile these fields and the related code, so that
they are removed from the safety build.
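The pattern, sketched (member types are assumptions):

    struct nvgpu_tsg {
            /* ... other members ... */
    #ifdef CONFIG_NVGPU_CHANNEL_TSG_CONTROL
            struct nvgpu_list_node event_id_list;
            struct nvgpu_mutex event_id_list_locks;
    #endif
    };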
Jira NVGPU-4376
Change-Id: I8678aa1b8cd4166aa37bcb42cda1eb9c703fd32f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2273261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Advisory Directive 4.5 states that identifiers in the same
name space with overlapping visibility should be typographically
unambiguous.
The presence of both the roundup(x,y) and round_up(x,y) macros in
the posix utils.h header incurs a violation of this rule.
These macros were added to keep in sync with the Linux kernel
variants. However, there is a key distinction between how these two
macros work in the Linux kernel: roundup(x,y) can handle any y
alignment, while round_up(x,y) is intended to work only when y is a
power of two. Passing a non-power-of-two alignment to round_up(x,y)
silently results in an incorrect value being returned.
Because all current uses of roundup(x,y) and round_up(x,y) in
nvgpu specify a y value that is a power of two, and the underlying
posix macro implementations assume as much, it is best to remove
roundup(x,y) from nvgpu altogether to avoid any confusion.
So this change converts all uses of roundup(x,y) to round_up(x,y).
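For contrast, the typical kernel-style forms of the two macros
(simplified; nvgpu's posix versions may differ):

    /* Correct for any y, at the cost of a divide and a multiply. */
    #define roundup(x, y)   ((((x) + ((y) - 1U)) / (y)) * (y))

    /* Correct only when y is a power of two. */
    #define round_up(x, y)  ((((x) - 1U) | ((y) - 1U)) + 1U)

For example, round_up(10U, 6U) evaluates to ((9 | 5) + 1) = 14
rather than 12, with no diagnostic, while roundup(10U, 6U) correctly
yields 12.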
Jira NVGPU-3178
Change-Id: I0ee974d3e088fa704e251a38f6b7ada5a7600aec
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2271385
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For some code that is tested using public APIs, it is not safe for
the parent thread to continue while the child thread is running:
the per-thread fault injection pointer points to the same container
for both the parent thread and the created one, so there is a
chance of a race in the fault injection functionality. Serialize
the run so that the race is mitigated. Callers of the
nvgpu_thread_create() API should ensure that the created thread has
stopped, using fault injection or other means.
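A POSIX-flavored sketch of a serialized create (illustrative, not
the exact nvgpu code):

    #include <pthread.h>

    /* The parent joins the child before returning, so the shared
     * per-thread fault-injection container is never touched by
     * both threads at once. */
    static int thread_create_serialized(void *(*fn)(void *),
                                        void *arg)
    {
            pthread_t tid;
            int err = pthread_create(&tid, NULL, fn, arg);

            if (err != 0) {
                    return err;
            }
            return pthread_join(tid, NULL);
    }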
Bug 200580790
Change-Id: I334c07c4bac6e43d67de9bfc581dad021e421acd
Signed-off-by: shashank singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268133
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Tested-by: Rajesh Devaraj <rdevaraj@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Created ucode_perf_change_seq_inf.h and moved all
change_seq interface structs and macros into it
-Moved nvgpu_clk_set_req_fll_clk_ps35 from the clk unit
to the change_seq unit
-Removed macros and includes which are not needed
NVGPU-4448
Change-Id: I04ab32cbc9a1fc827f3360a8ea0f367019981823
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2266051
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Created ucode_perf_pstate_inf.h and moved all
pstate interface structs and macros into it.
-Created nvgpu_perf_pstate_get_lpwr_index for getting
the lpwr index
-Created nvgpu_clk_domain_get_from_index for getting
a clk_domain from an index
-Removed pstate_get_status code which is not needed
for the tu10a profile
-Removed macros and includes which are not needed
NVGPU-4448
Change-Id: I516816a1d92a60a91ea479cb9c334d332d3d7a89
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2264716
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
-Created ucode_perf_vfe_inf.h and moved all VFE
interface structs and macros into this header
-Created nvgpu_clk_fll_get_fmargin_idx to get the
freq margin index
-Created nvgpu_vfe_var_get_s_param to read s_param
-Removed macros and header includes which are
not needed
NVGPU-4448
Change-Id: I89f946d555bcbc7823665d2a5a761049f7a5e963
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2260150
GVS: Gerrit_Virtual_Submit
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.3 does not allow the value of an expression to be
assigned to an object of a narrower or different essential type.
Currently, in nvgpu_do_assert() the value "false" is recognized as
a signed 8-bit int, so passing false to nvgpu_assert converts a
signed 8-bit int to boolean. This is a Coverity bug acknowledged by
Synopsys.
This patch adds whitelisting to the nvgpu_do_assert() macro.
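The shape of the change, sketched (the deviation comment shown uses
Coverity's generic annotation form; the exact nvgpu whitelist
macro/tag may differ):

    #define nvgpu_do_assert()                                        \
            do {                                                     \
                    /* coverity[misra_c_2012_rule_10_3_violation] */ \
                    nvgpu_assert(false);                             \
            } while (false)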
Bug 2623654
Bug 200510004
Jira NVGPU-4780
Change-Id: Ibe35b30c12e2575f45e25ef21741627957b4ea75
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2271448
GVS: Gerrit_Virtual_Submit
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The passing branch of nvgpu_timeout_peek_expired was not covered
because nvgpu_falcon_mem_scrub_wait jumped over it. Remove that
jump to cover the branch.
Also add a unit test that uses fault injection to cover the error
handling when a read from the DMEM control register returns invalid
data.
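The resulting poll shape, roughly (helper names are illustrative):

    static int mem_scrub_wait_sketch(struct nvgpu_timeout *timeout)
    {
            do {
                    if (scrub_done()) {     /* illustrative helper */
                            return 0;
                    }
                    delay_us(10U);          /* illustrative delay */
            } while (!nvgpu_timeout_peek_expired(timeout));

            /* both branches of the peek check are now exercised */
            return -ETIMEDOUT;
    }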
JIRA NVGPU-4814
Change-Id: I9f99186bd2b1c5f39ead130d3161d3e7fa622ac4
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2272937
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>