gpu: nvgpu: specify devfreq timer through dt

Originally, nvgpu used a deferrable timer for devfreq polling by default.
Because a deferrable timer does not fire while the CPU is idle, the
polling interval becomes unstable, which leads to the issues below:
 - untimely frequency scaling
 - unstable GPU frequency scaling

This change lets users specify the devfreq timer through the device
tree. If the DT property 'devfreq-timer' is set to 'delayed', the GPU
driver will use a delayed timer for devfreq polling.
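
A minimal sketch of how the property might look in a device tree; the
node name, unit address, and compatible string below are placeholders,
not taken from a shipping DT. Only the 'devfreq-timer' property is what
this change reads:

```dts
/* Illustrative fragment: node name and compatible are hypothetical. */
gpu@17000000 {
        compatible = "nvidia,gv11b";
        /* Use a delayed (non-deferrable) timer for devfreq polling.
         * Omitting the property keeps the default deferrable timer. */
        devfreq-timer = "delayed";
};
```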

Bug 3823798

Change-Id: Idc0849b4a6b8af52fda8e88f5c831f183b7a27de
Signed-off-by: shaochunk <shaochunk@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2897026
Reviewed-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
Author: shaochunk <shaochunk@nvidia.com>
Date: 2023-05-02 15:16:57 +08:00
Committed by: mobile promotions
Parent: c066401be7
Commit: c655a5e058
3 changed files with 33 additions and 2 deletions


@@ -1,7 +1,7 @@
 /*
  * gk20a clock scaling profile
  *
- * Copyright (c) 2013-2022, NVIDIA Corporation. All rights reserved.
+ * Copyright (c) 2013-2023, NVIDIA Corporation. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -40,6 +40,7 @@
 #include "platform_gk20a.h"
 #include "scale.h"
 #include "os_linux.h"
+#include "driver_common.h"
 
 /*
  * gk20a_scale_qos_notify()
@@ -520,6 +521,7 @@ void gk20a_scale_init(struct device *dev)
 	int error = 0;
 
 	register_gpu_opp(dev);
+	nvgpu_read_devfreq_timer(g);
 
 	profile->devfreq_profile.initial_freq =
 		profile->devfreq_profile.freq_table[0];
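
The body of nvgpu_read_devfreq_timer() is not shown in this diff. A
self-contained sketch of the selection logic it presumably implements
follows; the enum and function names here are stand-ins for illustration,
not real nvgpu symbols:

```c
#include <stddef.h>
#include <string.h>

/* Stand-ins for the real types; in the upstream kernel this maps to
 * enum devfreq_timer (DEVFREQ_TIMER_DEFERRABLE / DEVFREQ_TIMER_DELAYED)
 * in include/linux/devfreq.h. */
enum fake_devfreq_timer {
	FAKE_TIMER_DEFERRABLE,	/* default: does not wake an idle CPU */
	FAKE_TIMER_DELAYED,	/* fires on schedule even when the CPU is idle */
};

/* Parse the DT 'devfreq-timer' string value. Anything other than
 * "delayed" (including a missing property, val == NULL) keeps the
 * deferrable default, matching the behavior the commit message describes. */
static enum fake_devfreq_timer parse_devfreq_timer(const char *val)
{
	if (val != NULL && strcmp(val, "delayed") == 0)
		return FAKE_TIMER_DELAYED;
	return FAKE_TIMER_DEFERRABLE;
}
```

In the real driver, the parsed value would presumably be written into the
devfreq profile (e.g. the `timer` field of `struct devfreq_dev_profile`)
before the devfreq device is registered; that mapping is an assumption,
since the helper's body is outside this diff.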