gpu: nvgpu: specify devfreq timer through dt

Originally,
nvgpu used a deferrable timer for devfreq polling by default.
A deferrable timer does not fire while the CPU is idle, so the
polling interval is unstable, which leads to:
 - untimely frequency scaling
 - unstable GPU frequency scaling

This change lets users specify the devfreq timer through DT.
If the DT property 'devfreq-timer' is set to 'delayed', the GPU will
use a delayed timer for devfreq polling.
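A minimal sketch of the selection logic this commit describes. Only nvgpu_read_devfreq_timer() appears in the diff; the helper name parse_devfreq_timer below is an assumption for illustration, while the enum values mirror the upstream devfreq API (DEVFREQ_TIMER_DEFERRABLE / DEVFREQ_TIMER_DELAYED). In the real driver the property string would come from the GPU's DT node, e.g. devfreq-timer = "delayed";.

```c
#include <string.h>
#include <stddef.h>

/* Timer modes as defined by the upstream devfreq framework. */
enum devfreq_timer {
	DEVFREQ_TIMER_DEFERRABLE,
	DEVFREQ_TIMER_DELAYED,
};

/* Hypothetical helper: map the 'devfreq-timer' DT string to a timer
 * mode. An absent or unrecognized value keeps the deferrable timer,
 * matching nvgpu's original default behaviour. */
static enum devfreq_timer parse_devfreq_timer(const char *prop)
{
	if (prop && strcmp(prop, "delayed") == 0)
		return DEVFREQ_TIMER_DELAYED;
	return DEVFREQ_TIMER_DEFERRABLE;
}
```

In the kernel, the property string itself would be fetched with something like of_property_read_string() before being handed to a helper of this shape.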

Bug 3823798

Change-Id: Idc0849b4a6b8af52fda8e88f5c831f183b7a27de
Signed-off-by: shaochunk <shaochunk@nvidia.com>
(cherry picked from commit c655a5e058)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2908703
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Author: shaochunk
Date: 2023-05-02 15:16:57 +08:00
Committed by: mobile promotions
Parent: 878182c1d5
Commit: d6359b5adc
3 changed files with 36 additions and 1 deletion

@@ -40,6 +40,7 @@
 #include "platform_gk20a.h"
 #include "scale.h"
 #include "os_linux.h"
+#include "driver_common.h"
 /*
  * gk20a_scale_qos_notify()
@@ -520,6 +521,7 @@ void gk20a_scale_init(struct device *dev)
 	int error = 0;
 	register_gpu_opp(dev);
+	nvgpu_read_devfreq_timer(g);
 	profile->devfreq_profile.initial_freq =
 		profile->devfreq_profile.freq_table[0];