The earlier implementation of the railgate-disable config disabled
runtime PM during pm_init. This caused multiple issues:
1. The GPU rail is powered on as soon as the nvgpu driver probe is
called, even though the actual GPU HW init may happen much later.
2. It breaks the railgate_enable sysfs node functionality:
railgate_enable does not work while runtime PM is disabled.
To avoid these issues when railgating is disabled, enable runtime PM
during pm_init and set the auto-suspend delay to a negative value (-1),
which disables runtime PM suspend calls.
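
A minimal sketch of the new pm_init behavior, assuming a struct
device *dev for the GPU platform device; the function name and
parameters here are illustrative, and only the pm_runtime_* calls are
the real kernel API:

  #include <linux/pm_runtime.h>

  static void gk20a_pm_init_sketch(struct device *dev, bool can_railgate,
                                   int railgate_delay)
  {
          /*
           * Keep runtime PM enabled even when railgating is disabled:
           * a negative auto-suspend delay prevents runtime suspend, so
           * the rail stays up without turning runtime PM off and the
           * device powers up only when it is actually used.
           */
          if (can_railgate)
                  pm_runtime_set_autosuspend_delay(dev, railgate_delay);
          else
                  pm_runtime_set_autosuspend_delay(dev, -1);

          pm_runtime_use_autosuspend(dev);
          pm_runtime_enable(dev);
  }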
Also fixed the following issues along with this:
1. Updated the railgate_enable debugfs implementation to use the
auto-suspend delay (see the sketch after this list).
To disable railgating: set the auto-suspend delay to a negative
value (-1), which disables runtime PM suspend.
To enable railgating: set the auto-suspend delay to the
railgate_delay value.
Also removed the redundant user_railgate_disabled gk20a device data
and replaced it with can_railgate wherever applicable.
2. Initialized the default railgate_delay to 500 ms to avoid rapid
railgate on/off transitions when railgating is enabled from the
disabled state.
3. Created the railgate_residency debugfs node irrespective of the
initial can_railgate state. This helps the case where railgating is
initially off and is later enabled through the sysfs node.
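
A sketch of the debugfs changes under the same assumptions
(illustrative names and constant; only pm_runtime_set_autosuspend_delay()
and debugfs_create_file() are real kernel APIs):

  #include <linux/debugfs.h>
  #include <linux/pm_runtime.h>

  /* Illustrative constant for the new 500 ms default (item 2). */
  #define RAILGATE_DELAY_DEFAULT_MS 500

  /*
   * Item 1: toggle railgating via the auto-suspend delay instead of
   * enabling/disabling runtime PM itself.
   */
  static void railgate_enable_sketch(struct device *dev, bool enable,
                                     int railgate_delay)
  {
          if (enable)
                  /* Enable railgating: suspend after railgate_delay ms. */
                  pm_runtime_set_autosuspend_delay(dev, railgate_delay);
          else
                  /* Disable railgating: negative delay blocks suspend. */
                  pm_runtime_set_autosuspend_delay(dev, -1);
  }

  /*
   * Item 3: create railgate_residency regardless of the initial
   * can_railgate state, so the node is usable after railgating is
   * enabled later.
   */
  static void railgate_debugfs_init_sketch(struct dentry *parent,
                                           void *data,
                                           const struct file_operations *fops)
  {
          debugfs_create_file("railgate_residency", 0444, parent, data,
                              fops);
  }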
Bug 2073029
Change-Id: I531da6d93ba8907e806f65a1de2a447c1ec2665c
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694944
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>