gpu: nvgpu: add wrapping_add_u32

Add nvgpu_wrapping_add_u32() to perform static-analysis-safe arithmetic
where unsigned wraparound is expected. nvgpu_safe_add_u32() requires that
the result not wrap, so it cannot be used in such cases.

Jira NVGPU-5491

Change-Id: I68f550fbc62601a9045f8e405e925ad8dac90872
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2342585
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Author: Konsta Hölttä
Date: 2020-05-12 08:44:57 +03:00
Committed by: Alex Waterman
Parent: f73d035983
Commit: dbbe6b67be


@@ -131,6 +131,30 @@ static inline s64 nvgpu_safe_add_s64(s64 sl_a, s64 sl_b)
}
}
/**
 * @brief Add two u32 values with wraparound arithmetic.
 *
 * @param ui_a [in] First addend.
 * @param ui_b [in] Second addend.
 *
 * Adds the two u32 values together. If the result would overflow a u32, it
 * wraps with modulo arithmetic as defined in the C standard
 * (value mod (U32_MAX + 1)).
 *
 * @return The wrapping sum (\a ui_a + \a ui_b).
 */
static inline u32 nvgpu_wrapping_add_u32(u32 ui_a, u32 ui_b)
{
	/* CERT INT30-C: do the add in u64 so the wraparound is explicit */
u64 ul_a = (u64)ui_a;
u64 ul_b = (u64)ui_b;
u64 sum = (ul_a + ul_b) & 0xffffffffULL;
/* satisfy Coverity's CERT INT31-C checker */
nvgpu_assert(sum <= U32_MAX);
return (u32)sum;
}
/**
* @brief Subtract two u8 values and check for underflow.
*