gpu:nvgpu: fix for shadow domain submission

There are three issues with shadow domain submission:
1. The runlist mem is not swapped with mem_hw for the shadow domain when
a non-shadow domain is bound to the TSG, so the runlist does not end up
containing all the TSGs. To fix this, nvgpu_runlist_swap_mem() is now
called for the shadow domain as well.
2. The TSG's num_active_channels is set as part of the non-shadow domain
update, which runs after the shadow domain update. As a result, the TSG
length is not set during runlist reconstruction, leaving the length of
the last TSG at 0. To fix this, num_active_channels is now always set
during the shadow domain update, since the shadow domain is configured
first.
3. NV_BUILD_CONFIGURATION_VARIANT_IS_EMBEDDED does not distinguish l4t
builds from embedded_linux builds, so NV_BUILD_SYSTEM_TYPE is used
instead to determine the build type.

L4t uses round-robin scheduling, and this issue only occurs with manual
mode scheduling, so the fix is applied only where manual mode scheduling
is supported.

Bug 3884011

Change-Id: Ic55da8f75294eb32c8df6e35fb1fa47df78db8f8
Signed-off-by: prsethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2833880
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Author:    prsethi
Date:      2022-12-27 13:32:22 +00:00
Committer: mobile promotions
Parent:    6567a4e048
Commit:    144f548552
2 changed files with 25 additions and 5 deletions


@@ -1,5 +1,5 @@
 /*
- * Copyright (c) 2011-2022, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2011-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the "Software"),
@@ -521,6 +521,7 @@ int nvgpu_runlist_update_locked(struct gk20a *g, struct nvgpu_runlist *rl,
 		struct nvgpu_channel *ch, bool add,
 		bool wait_for_finish)
 {
+	bool can_update_tsg_state = false;
 	int ret = 0;
 	(void)wait_for_finish;
 	/*
@@ -532,14 +533,33 @@ int nvgpu_runlist_update_locked(struct gk20a *g, struct nvgpu_runlist *rl,
 	}
 	if (domain != rl->shadow_rl_domain) {
-		/* Avoid duplicate updates to the TSG state in nvgpu_runlist_modify_active_locked */
-		ret = nvgpu_runlist_update_mem_locked(g, rl, rl->shadow_rl_domain, ch, add, false);
+		/*
+		 * The changes guarded by CONFIG_NVS_ROUND_ROBIN_SCHEDULER_DISABLE
+		 * are enabled for manual mode schedulers, which are currently
+		 * supported only on embedded platforms. They have no impact on
+		 * l4t, as l4t uses RR scheduling. Once l4t migrates to manual
+		 * mode scheduling, these flags can be removed, which will
+		 * enable the change for l4t as well.
+		 */
+#ifdef CONFIG_NVS_ROUND_ROBIN_SCHEDULER_DISABLE
+		/*
+		 * Avoid duplicate updates to the TSG state in
+		 * nvgpu_runlist_modify_active_locked; the state is supposed
+		 * to be updated with the shadow domain, since the shadow
+		 * domain updates the TSG state first.
+		 */
+		can_update_tsg_state = true;
+#endif
+		ret = nvgpu_runlist_update_mem_locked(g, rl,
+			rl->shadow_rl_domain, ch, add, can_update_tsg_state);
 		if (ret != 0) {
 			return ret;
 		}
+#ifdef CONFIG_NVS_ROUND_ROBIN_SCHEDULER_DISABLE
+		nvgpu_runlist_swap_mem(g, rl->shadow_rl_domain);
+#endif
 	}
-	ret = nvgpu_runlist_update_mem_locked(g, rl, domain, ch, add, true);
+	ret = nvgpu_runlist_update_mem_locked(g, rl, domain, ch, add,
+		!can_update_tsg_state);
 	if (ret != 0) {
 		return ret;
 	}