Commit Graph

2615 Commits

Terje Bergstrom
7f991657c1 gpu: nvgpu: Add boost once GPU is initialized
Work around a GPU hang that occurs if boost turns the GPU on before
it is initialized.

Bug 1435870

Change-Id: I07d0617049612344ca7c494da8cb8d75789984e5
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/453375
2015-03-18 12:10:47 -07:00
Deepak Nibade
2489960344 gpu: nvgpu: remove redundant lock
"isr_enable_lock" was used to protect pmu's isr_enabled flag
and pmu enable/disable calls

Instead of this extra lock, we can reuse "isr_mutex" for this
purpose

Bug 200014542
Bug 200014887

Change-Id: Ifbb7d6108effc132266a20517820e470d52a7110
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/453348
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:47 -07:00
Deepak Nibade
fff31d310c gpu: nvgpu: add channel enable/disable ioctls
Add the following ioctls for channels:

1. NVHOST_IOCTL_CHANNEL_ENABLE
   To enable the channel

2. NVHOST_IOCTL_CHANNEL_DISABLE
   To disable the channel

3. NVHOST_IOCTL_CHANNEL_PREEMPT
   To preempt the channel
   (Not supported for a channel in TSG)

Bug 1514064

Change-Id: Ie9315f9742bb27efb22f993799c51a1ecda91756
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449229
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:47 -07:00
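
A rough sketch of how these ioctls might be dispatched in the channel ioctl handler. The case labels come from the commit message above; the struct, helper names, and the TSG-membership check are purely illustrative, not the actual nvgpu code.

    long gk20a_channel_ioctl(struct file *filp, unsigned int cmd,
                             unsigned long arg)
    {
            struct channel_gk20a *ch = filp->private_data;

            switch (cmd) {
            case NVHOST_IOCTL_CHANNEL_ENABLE:
                    return channel_enable(ch);   /* set the channel enable bit */
            case NVHOST_IOCTL_CHANNEL_DISABLE:
                    return channel_disable(ch);  /* clear the channel enable bit */
            case NVHOST_IOCTL_CHANNEL_PREEMPT:
                    if (channel_is_in_tsg(ch))   /* not supported for TSG channels */
                            return -EINVAL;
                    return channel_preempt(ch);
            default:
                    return -ENOTTY;
            }
    }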
Arto Merilainen
19b8f854ce gpu: nvgpu: Fix semaphore refcounting
This patch fixes a refcounting issue in semaphore handling.

Change-Id: I03327c60ed6923a90663f0b845566e81af4b94d4
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/453056
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:47 -07:00
Deepak Nibade
ae3ba04955 gpu: nvgpu: verify runnable channel count in TSG
In the runlist we first write the channel count in the TSG entry and
then follow it with that many channel entries.
If the number of channel entries does not match the count, it is
treated as an error.

To detect this, keep a counter while adding channel entries and warn
if the channel count does not match this counter.

bug 1470692

Change-Id: I4bbfd9b696fbfafa25dffb27979373f057a7f35a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449228
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:46 -07:00
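
A condensed sketch of the check described above, not the actual code: the list, bitmap, and helper names (ch_list, active_channels, hw_chid, add_runlist_channel_entry) are assumptions; only the idea of counting emitted entries and warning on mismatch comes from the commit.

    struct channel_gk20a *ch;
    u32 count = 0;

    list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
            if (test_bit(ch->hw_chid, runlist->active_channels)) {
                    add_runlist_channel_entry(mem, ch);  /* hypothetical helper */
                    count++;
            }
    }
    /* the TSG header already carries the expected runnable-channel count */
    WARN_ON(count != tsg->num_runnable_channels);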
Deepak Nibade
478e659ae4 gpu: nvgpu: do not touch runlist during recovery
Currently we clear the runlist and re-create it in scheduled work
during the fifo recovery process.

But we can postpone this runlist re-generation until later, i.e.
when the channel is closed.

Hence, remove the runlist locks and re-generation from the
handle_mmu_fault() methods. Instead, disable gr fifo access at the
start of recovery and re-enable it at the end of the recovery
process.

Also, delete the scheduled work that re-creates the runlist.
Re-enable ELPG and fifo access in
finish_mmu_fault_handling() itself.

bug 1470692

Change-Id: I705a6a5236734c7207a01d9a9fa9eca22bdbe7eb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449225
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:46 -07:00
Deepak Nibade
76993ba18c gpu: nvgpu: rework TSG's channel list
Rework the TSG's channel list into "ch_list", which holds all
channels, instead of "ch_runnable_list", which held only runnable
channels. We can traverse this list and check each channel's
runnable status in active_channels to get the runnable channels.

Remove the APIs below as they are no longer required:
gk20a_bind_runnable_channel_to_tsg()
gk20a_unbind_channel_from_tsg()

While closing the channel, call gk20a_tsg_unbind_channel()
to unbind the channel from the TSG.

bug 1470692

Change-Id: I0178fa74b3e8bb4e5c0b3e3b2b2f031491761ba7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449227
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:45 -07:00
Arto Merilainen
b33020008b gpu: nvgpu: Add sw shadow for load value
Reading the load value may increase CPU power consumption
temporarily. In most cases we are ok with a value that
was read a moment earlier.

This patch introduces a software shadow for gpu load. The shadow
is updated before starting scaling and all scaling code paths use
the sw shadow.

Change-Id: I53d2ccb8e7f83147f411a14d3104d890dd9af9a3
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/453347
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:45 -07:00
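
A minimal sketch of the software-shadow idea under assumed function and variable names: sample the hardware load counter once when scaling starts, then have every scaling path consume the cached value instead of re-reading the hardware.

    static u32 gpu_load_shadow;                      /* last value read from HW */

    static void gk20a_scale_update_load_shadow(struct gk20a *g)
    {
            gpu_load_shadow = read_hw_load_counter(g);  /* hypothetical HW read */
    }

    static u32 gk20a_scale_get_load(void)
    {
            /* scaling paths read the shadow instead of touching the HW */
            return gpu_load_shadow;
    }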
Lauri Peltonen
574ee40e51 gpu: nvgpu: Add compression state IOCTLs
Bug 1409151

Change-Id: I29a325d7c2b481764fc82d945795d50bcb841961
Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
2015-03-18 12:10:44 -07:00
Terje Bergstrom
c8faa10d1d gpu: nvgpu: Add support for FECS errors
Add retrieval of the error code for FECS errors.

Change-Id: I7d9dfc4723376272edb2e5b2ef06f71de1a06889
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/450351
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Chris Dragan <kdragan@nvidia.com>
Tested-by: Chris Dragan <kdragan@nvidia.com>
2015-03-18 12:10:44 -07:00
Mahantesh Kumbar
0858498f7b nvgpu:Added PROD settings for ELPG sequencing
Added PROD settings for ELPG sequencing registers

Bug 200023161

Change-Id: Id313f9bc800d3a57f45aff0f0f609887565971be
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
2015-03-18 12:10:43 -07:00
Arun Kumar Swain
e5f82c848d arm: tegra: Register tegra-throttle cdev as driver
1. Register the tegra-throttle cooling device as a
platform driver.
2. Obtain all the platform data (throttle table
info) for all instances of the balanced-throttle cdev
from the device tree and register them.

Change-Id: Ie92685eea3eb5cb18068b195adc9ab5f83762399
Signed-off-by: Arun Kumar Swain <arswain@nvidia.com>
Reviewed-on: http://git-master/r/449104
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
Tested-by: Diwakar Tundlam <dtundlam@nvidia.com>
2015-03-18 12:10:43 -07:00
Edgardo Handal
8bd11ae3b0 gpu: nvgpu: fix compbit_store page allocation
Allocate enough pages in the case that compbit_backing_size is not a
power of two.

Change-Id: Iaa2da66a3d1bd86ac746ed619a7f37e9379904db
Signed-off-by: Edgardo Handal <ehandal@nvidia.com>
Reviewed-on: http://git-master/r/449460
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:43 -07:00
Mahantesh Kumbar
f5422f80f2 gpu:nvgpu:sysfs node to enable/disable aelpg
Added "aelpg_enable" sysfs node to enable/disable aelpg.

Bug 1464737

Change-Id: Ia0eadbea59e2f9373ab5f413fa6e28780aff3c3c
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
2015-03-18 12:10:42 -07:00
Deepak Nibade
b0759dc68d gpu: nvgpu: return error from mutex_acquire()
Return an error from pmu_mutex_acquire() and pmu_mutex_release() if
pmu->initialized is not set.

Bug 1533644

Change-Id: I341a5831bc5beeccb4587668f61c954ce7576226
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
2015-03-18 12:10:42 -07:00
Deepak Nibade
a20bb7dde2 gpu: nvgpu: fix error handling for mutex_acquire()
Currently, if pmu_mutex_acquire() fails, we disable ELPG and move
ahead. But it is not clear why it is required to disable ELPG when
we fail to acquire the mutex.

Hence, skip disabling ELPG if mutex_acquire() fails.

Bug 1533644

Change-Id: I7e8e99a701d0ba071eb31ac17582b04072ee55eb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/448131
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:41 -07:00
Arto Merilainen
4df9290536 gpu: nvgpu: Fix compbit base calculation
Compression bit base was calculated incorrectly in cases where the
number of LTCs was not 1. This patch fixes the code.

Change-Id: I25e3fa7446b238202d93ce8a72ed919d11fb6e30
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/449281
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Jussi Rasanen <jrasanen@nvidia.com>
GVS: Gerrit_Virtual_Submit
2015-03-18 12:10:41 -07:00
Tuomas Tynkkynen
e51f76f1c0 gpu: nvgpu: Use noinline_for_stack to avoid GCOV build break
If code coverage is enabled on GCC 4.7, the kernel build fails in
gk20a_init_kind_attr() since GCC decides to inline almost everything in
this file into it, leading to a massive stack frame with over a
kilobyte's worth of temporary variables generated by gcov, and thus to
this error:

kind_gk20a.c: In function 'gk20a_init_kind_attr':
kind_gk20a.c:424:1: error: the frame size of 1232 bytes is larger than
1024 bytes [-Werror=frame-larger-than=]

(Just removing the inline keyword doesn't work, as GCC still decides to
inline it, so noinline_for_stack is actually required.)

Change-Id: I819fd2a5b20581f0ac60e1ee490899c977379151
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Reviewed-on: http://git-master/r/448914
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
Tested-by: Juha Tukkinen <jtukkinen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:41 -07:00
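
For illustration only (the function below is hypothetical, not the one in kind_gk20a.c): this is the pattern the commit relies on, where marking a heavily gcov-instrumented helper noinline_for_stack keeps its temporaries out of the caller's stack frame.

    #include <linux/kernel.h>

    static noinline_for_stack void setup_one_kind_attr(int kind)
    {
            /* large amount of local state that gcov instruments; without
             * the annotation GCC would inline this into its caller and
             * blow past the 1024-byte frame-size limit */
    }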
Arto Merilainen
b3e023a805 gpu: nvgpu: CDE support
This patch adds support for executing a precompiled GPU program to
allow exporting GPU buffers to other graphics units that have color
decompression engine (CDE) support.

Bug 1409151

Change-Id: Id0c930923f2449b85a6555de71d7ec93eed238ae
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/360418
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:41 -07:00
Lauri Peltonen
c60a300c4a gpu: nvgpu: Attach compression state to dma-buf
Bug 1509620

Change-Id: I694fe43ef5d1f4f329d997a3d60e006785374cc3
Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
Reviewed-on: http://git-master/r/439849
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:40 -07:00
Lauri Peltonen
bcf60a22c3 gpu: nvgpu: Add gk20a_fence type
When moving compression state tracking and compbit management ops to
kernel, we need to attach a fence to dma-buf metadata, along with the
compbit state.

To make in-kernel fence management easier, introduce a new gk20a_fence
abstraction. A gk20a_fence may be backed by a semaphore or a syncpoint
(id, value) pair. If the kernel is configured with CONFIG_SYNC, it will
also contain a sync_fence. The gk20a_fence can easily be converted back
to a syncpoint (id, value) pair or a sync FD when we need to return it
to user space.

Change gk20a_submit_channel_gpfifo to return a gk20a_fence instead of
nvhost_fence. This is to facilitate work submission initiated from
kernel.

Bug 1509620

Change-Id: I6154764a279dba83f5e91ba9e0cb5e227ca08e1b
Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
Reviewed-on: http://git-master/r/439846
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:40 -07:00
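
A rough sketch of the abstraction described above, with guessed field names (the real layout may differ): the fence is backed by either a semaphore or a syncpoint (id, value) pair, and optionally carries a sync_fence when CONFIG_SYNC is enabled.

    struct gk20a_fence {
            struct kref ref;                   /* in-kernel refcounting */

            /* backing: either a semaphore ... */
            struct gk20a_semaphore *semaphore;

            /* ... or a syncpoint (id, value) pair */
            u32 syncpt_id;
            u32 syncpt_value;

    #ifdef CONFIG_SYNC
            struct sync_fence *sync_fence;     /* convertible to a sync FD */
    #endif
    };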
Terje Bergstrom
55295c6087 gpu: nvgpu: Remove unused code in allocator
Remove functions that are not used in gk20a allocator.

Bug 1523403

Change-Id: I36b2b236258d61602cb3283b59c43b40f237d514
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/432174
2015-03-18 12:10:40 -07:00
Kevin Huang
7812a11903 gpu: nvgpu: gk20a: add address check in allocator.
Check the address range before allocation to avoid allocating an
illegal address range.

Bug 1523403

Change-Id: Iff171399a980b69f9b1a18eea5bc37eff4c5d749
Signed-off-by: Kevin Huang <kevinh@nvidia.com>
Reviewed-on: http://git-master/r/437871
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:40 -07:00
Arto Merilainen
ccead861f2 gpu: nvgpu: gm20b: Store LTC configuration
Change-Id: Ia780e6a7cb3579f0d6ed2dca9949a349799535fd
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/448115
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:38 -07:00
Deepak Nibade
fc73ff7214 gpu: nvgpu: skip WFI for KEPLER_C channels
In channel_finish(), we submit a WFI for all channels, including
channels with the KEPLER_C class.

Since there is no need to submit a WFI for channels with the
KEPLER_C class, we can optimize by skipping the WFI submission and
waiting directly on the last submit fence.

Bug 1534272

Change-Id: I3838416cf22122728e7f1008e01d77b14a35deba
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
2015-03-18 12:10:38 -07:00
Deepak Nibade
d0ce4807d0 gpu: nvgpu: poweron host1x explicitly
Currently gk20a gets a reference to host1x via a phandle in the
Device Tree. But runtime PM does not seem to handle power
dependencies well in this case, and hence host1x is sometimes
off when we need it.

To fix this, explicitly power on host1x while powering the GPU up.
Do this via the "busy" and "idle" callbacks from gk20a_platform.

Bug 1534272
Bug 200022536

Change-Id: Ia562ee19722cfc8edc5626a5a058ab8edfe3d206
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
2015-03-18 12:10:37 -07:00
Supriya
e34b945834 nvgpu: new gpmu ucode compatibility
For the LS PMU, a new ucode needs to be used.
The ucode has interface header file changes too.
This patch also has fixes for a PMU DMEM copy failure.

Bug 1509680

Change-Id: I8c7018f889a82104dea590751e650e53e5524a54
Signed-off-by: Supriya <ssharatkumar@nvidia.com>
Reviewed-on: http://git-master/r/441734
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:36 -07:00
Alex Frid
44b9d5fdb0 gpu: nvgpu: Use GPU device name in clock get operation
Used the GPU device name in the clock get operation (instead of a
fixed name), to make the operation common for GK20A and GM20B.
Updated the clock ids in the tegra clock framework accordingly.

Bug 1450787

Change-Id: Ifd5b9c3a6fd8db5b06e6dcd989285e8410794803
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/441711
Reviewed-by: Bo Yan <byan@nvidia.com>
Tested-by: Bo Yan <byan@nvidia.com>
2015-03-18 12:10:35 -07:00
Alex Frid
ea530792c4 gpu: nvgpu: Make clock operations static
Made the GK20A and GM20B clock operations static, since they are
invoked only via HAL interfaces.

Bug 1450787

Change-Id: Ia30218ad4244bd8790b5ef96d1963678d0ba39e1
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/441710
Reviewed-by: Bo Yan <byan@nvidia.com>
Tested-by: Bo Yan <byan@nvidia.com>
2015-03-18 12:10:35 -07:00
Deepak Nibade
fb719a0075 Revert "gpu: nvgpu: return error from mutex_acquire() if pmu not initialized"
This reverts commit 50497d4031103df1067f14ce4c1e14b15713efb9.

Simply returning an error from mutex_acquire() causes the code
to call disable_elpg(), which decreases the elpg refcount.
But we already have a race condition between pmu initialization,
where we initialize elpg, and runlist update, where we call
this mutex_acquire() and decrease the refcount.

As a result of this race and the returned error, we might mess up
the elpg refcount and cause abnormal behaviour.

Hence revert this change for now, until we have a clean fix that
also takes this race into account.

Bug 200024116

Change-Id: Ie64ca36f70aba6b15c2acc235a5d36d13c9025aa
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/441793
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
2015-03-18 12:10:34 -07:00
Hoang Pham
f7642ca185 gpu: nvgpu: Fork GM20B clock from GK20A clock
Bug 1450787

Change-Id: Id7fb699d9129a272286d6bc93e0e95844440a628
Signed-off-by: Hoang Pham <hopham@nvidia.com>
Reviewed-on: http://git-master/r/440536
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2015-03-18 12:10:33 -07:00
Alex Frid
b972f8d15e gpu: nvgpu: Init clock debugfs after clock support
Initialized GK20A clock debugfs after clock support
hardware and software are ready.

Bug 1450787

Change-Id: I8ec2ef303a84b9151b7ce209a1864f1729382a44
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/440973
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2015-03-18 12:10:33 -07:00
Sang-Hun Lee
ebf991e990 gpu: nvgpu: flush write before unlocking
- gk20a_enable reads the clock after unlocking the spinlock
   to flush any previous write
 - This could lead to a race if any later write assumes
   the previous write has already completed
 - Read the clock before unlocking to ensure all previous writes
   have completed before letting any other thread use gk20a

Bug 200007520

Change-Id: I737fbbe825c68b25ca256c4a8ee2b99aa8baf0f5
Signed-off-by: Sang-Hun Lee <sanlee@nvidia.com>
Reviewed-on: http://git-master/r/418485
(cherry picked from commit 2aed542a719caa69620766bf2dceefe50626c189)
Reviewed-on: http://git-master/r/437842
Reviewed-by: Mitch Luban <mluban@nvidia.com>
Tested-by: Mitch Luban <mluban@nvidia.com>
2015-03-18 12:10:33 -07:00
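
The generic form of the fix, sketched with placeholder register and lock names: reading the register back before releasing the spinlock forces the earlier posted write to complete, so no other thread can use the device before the write has landed.

    unsigned long flags;

    spin_lock_irqsave(&g->clk_lock, flags);
    writel(val, g->regs + CLK_ENABLE_OFFSET);   /* hypothetical offset */
    (void)readl(g->regs + CLK_ENABLE_OFFSET);   /* flush the posted write */
    spin_unlock_irqrestore(&g->clk_lock, flags);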
Arto Merilainen
5312fd2a18 gpu: nvgpu: Double syncpoint increments
gm20b/gm20x requires incrementing syncpoints twice to ensure that
the data has reached memory in all cases. This patch modifies the
increment push buffer to account for this requirement.

Bug 1491360

Change-Id: I5c2899b26ce0e1cdf9408bb9aaa576fc3054480f
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/437675
Reviewed-by: Automatic_Commit_Validation_User
2015-03-18 12:10:32 -07:00
Arto Merilainen
4cb6f6b357 gpu: nvgpu: Add helpers for backing store access
This patch adds mm helpers to access compression backing store
from in-kernel shader.

Bug 1409151

Change-Id: Icb4f6dc0b5a35fdb97bc4221ab3657866f775fae
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/440263
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
GVS: Gerrit_Virtual_Submit
2015-03-18 12:10:32 -07:00
Arto Merilainen
9b00f35242 gpu: nvgpu: Allow reloading the golden context
In cases where a kernel channel dies, we can reload the context by
just reloading the golden context buffer. This patch makes necessary
infrastructural changes to support this behaviour.

Bug 1409151

Change-Id: Ibe6a88bf7acea2d3aced2b86a7a687279075c386
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/440262
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
GVS: Gerrit_Virtual_Submit
2015-03-18 12:10:32 -07:00
Arto Merilainen
61e9189103 gpu: nvgpu: gk20a: Allow in-kernel channel alloc
This patch modifies channel interfaces to allow allocating the
channel for kernel use. This is needed if we want to run a shader
from kernel space.

Bug 1409151

Change-Id: I3544186bb1541120f85e01a19de106ef011c1b11
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/440261
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
GVS: Gerrit_Virtual_Submit
2015-03-18 12:10:32 -07:00
Deepak Nibade
a84dc62b5e gpu: nvgpu: return error from mutex_acquire() if pmu not initialized
In pmu_mutex_acquire(), we return zero (success) if
pmu->initialized is not set.

Since mutex_acquire() was successful, we then call
pmu_mutex_release().

If pmu->initialized is now set by some other thread,
we proceed to validate the mutex owner and
end up triggering the warning below:

pmu_mutex_release: requester 0x00000000 NOT match owner 0x00000008

Hence, to fix this, return an error from mutex_acquire()
and mutex_release() if pmu->initialized is not yet set,
and in that case proceed to call elpg enable/disable directly.

Bug 1533644

Change-Id: Ifbb9e6a8e13f6478a13e3f9d98ced11792cc881f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/439333
GVS: Gerrit_Virtual_Submit
Reviewed-by: Naveen Kumar S <nkumars@nvidia.com>
Tested-by: Naveen Kumar S <nkumars@nvidia.com>
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
2015-03-18 12:10:31 -07:00
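
A simplified sketch of the behaviour this change introduces; the struct and field names are assumptions. Both acquire and release bail out with an error while the PMU is uninitialized, and the caller then drives ELPG enable/disable directly instead of touching the hardware mutex.

    int pmu_mutex_acquire(struct pmu_gk20a *pmu, u32 id, u32 *token)
    {
            if (!pmu->initialized)
                    return -EINVAL;   /* caller handles ELPG directly */

            /* ... program the HW mutex and record the owner/token ... */
            return 0;
    }

    int pmu_mutex_release(struct pmu_gk20a *pmu, u32 id, u32 *token)
    {
            if (!pmu->initialized)
                    return -EINVAL;   /* HW mutex was never taken, nothing to release */

            /* ... validate the owner and release the HW mutex ... */
            return 0;
    }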
Alex Frid
d98099c9b6 gpu: nvgpu: Remove unused GK20A cooling device
Removed unused, obsolete GK20A cooling device.

Bug 1450787

Change-Id: I5b02546d0405dd518ec841d903e650a8d38db8f2
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/437942
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2015-03-18 12:10:31 -07:00
Hoang Pham
ba387d3d7e gpu: Split clk_ops for GK20A and GM20B
Split clk_ops for GK20A and GM20B into different files

Bug 1450787

Change-Id: I34d16c54ac40c70854e80588475434c9e50b51a5
Signed-off-by: Hoang Pham <hopham@nvidia.com>
Reviewed-on: http://git-master/r/437771
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2015-03-18 12:10:29 -07:00
Alex Frid
3058fb2b96 gpu: nvgpu: Use 1kHz resolution for GPCPLL programming
Used 1 kHz resolution (instead of 1 MHz) for GPCPLL programming:
limit specifications, calculating GPCPLL settings, storing target
frequency values, and providing output from the debug monitor. Updated
comments in the clock header to properly reflect the frequency units.

Bug 1450787

Change-Id: Ica58f794b82522288f2883c40626d82dbd794902
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/437943
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
2015-03-18 12:10:29 -07:00
Deepak Nibade
fec60b6e6e gpu: nvgpu: force idle if railgate not supported
Add a way to force idle and reset the GPU in the case where GPU
rail gating is not supported
(i.e. platform->can_railgate = false)

In this case, we follow the sequence below:
- once the GPU is idle, take a runtime reference, which enables the clocks
- call prepare_poweroff() to save the state explicitly
- perform an explicit reset assert/deassert
- call finalize_poweron() to restore the state
- drop the runtime reference taken earlier

Bug 1525284

Change-Id: Id5f3ec152093acd585631dfbf785d8e0561f9048
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/435620
GVS: Gerrit_Virtual_Submit
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Tested-by: Arto Merilainen <amerilainen@nvidia.com>
2015-03-18 12:10:26 -07:00
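
A pseudocode-style sketch of that sequence; only the ordering comes from the commit message, while the hook names and the settle delay are stand-ins.

    pm_runtime_get_sync(dev);         /* GPU idle: take a reference, clocks on */
    gk20a_pm_prepare_poweroff(dev);   /* save the state explicitly */
    platform->reset_assert(dev);      /* explicit reset assert ... */
    udelay(10);                       /* arbitrary settle time for the sketch */
    platform->reset_deassert(dev);    /* ... and deassert */
    gk20a_pm_finalize_poweron(dev);   /* restore the state */
    pm_runtime_put_sync(dev);         /* drop the reference taken above */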
Deepak Nibade
597083eaba gpu: nvgpu: increase delays in do_idle()
Increase the wait delays in do_idle() to 2000 ms and use
msleep instead of mdelay.

Also, to check whether the GPU is rail gated, add a do-while()
loop which keeps checking the status and bails out as soon
as the GPU is rail gated.

This increase in delays is required to give the GPU sufficient
time to complete its work and get rail gated.

These delays are especially needed during stress testing, where
it is possible that a large amount of GPU work is blocked
during do_idle() and then it might take more time to complete
while the next do_idle() is waiting for it.

Also, remove waiting on gk20a_wait_channel_idle() for each
channel, since it is sufficient to wait for the refcount to be 1.

bug 1529160

Change-Id: Ie541485fbdda76d79ae4a75dda928da240fc5d8f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/434192
(cherry picked from commit 5a621bf2aaf3355e1330a662dc98e943d68ef86d)
Reviewed-on: http://git-master/r/435133
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
2015-03-18 12:10:25 -07:00
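
A minimal sketch of the polling loop described above: the 2000 ms budget is from the commit message, the is_railgated() hook mirrors the later generic-platform change, and the 10 ms poll interval is an assumption.

    unsigned long timeout = jiffies + msecs_to_jiffies(2000);

    do {
            if (platform->is_railgated(dev))
                    break;               /* bail out as soon as the GPU gates */
            msleep(10);                  /* sleep instead of mdelay busy-waiting */
    } while (time_before(jiffies, timeout));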
Deepak Nibade
7c5404fa42 gpu: nvgpu: remove redundant busy()/idle() calls
The gk20a_busy() call in channel_syncpt_incr() and the corresponding
gk20a_idle() call in channel_update() are redundant, since they
are already encapsulated inside another pair of busy/idle calls.

This busy/idle pair is called only from submit_gpfifo(), and
submit_gpfifo() already has its own busy/idle, which it
holds for the whole path, so this redundant pair can be
removed.

Also, this prevents a deadlock scenario while do_idle() is in
progress, as follows:
- in submit_gpfifo() we first call gk20a_busy(), which acquires
  the busy read semaphore
- in do_idle() we acquire the busy write semaphore and wait for
  the current jobs to finish
- now submit_gpfifo() encounters the second gk20a_busy() and requests
  the busy read semaphore again
- this results in a deadlock where do_idle() is waiting for
  submit_gpfifo() to complete and submit_gpfifo() is waiting for
  the busy lock held by do_idle(), and hence it cannot complete

bug 1529160

Change-Id: I96e4368352f693e93524f0f61689b4447e5331ea
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/434191
(cherry picked from commit c4315c6caa42bab72ba6017c7ded25f4e9363dec)
Reviewed-on: http://git-master/r/435132
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
Tested-by: Sachin Nikam <snikam@nvidia.com>
2015-03-18 12:10:24 -07:00
Deepak Nibade
0b1f9e4272 gpu: nvgpu: fix race between do_idle() and unrailgate()
While we are executing the do_idle() API, it is possible that
unrailgate() gets invoked in the midst of idling the GPU, and
this can result in a failure of do_idle().

To prevent simultaneous execution of these methods,
add a mutex railgate_lock and acquire it in the
do_idle() and unrailgate() APIs.

Also, keep this lock held if do_idle() is successful.
On success, the lock will be released in do_unidle();
otherwise release the lock before returning.

Note that this lock should not be held in the railgate() API,
since we do not want it to be blocked during do_idle().

bug 1529160

Change-Id: I87114b5367eaa217376455a2699c0d21c451c889
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/434190
(cherry picked from commit 561dc8e0933ff2d72573292968b893a52f5f783a)
Reviewed-on: http://git-master/r/435131
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Sachin Nikam <snikam@nvidia.com>
2015-03-18 12:10:24 -07:00
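
A sketch of the locking scheme with assumed function and helper names: do_idle() keeps railgate_lock held on success so unrailgate() cannot run until do_unidle() drops it; on failure the lock is released before returning.

    int gk20a_do_idle(struct gk20a *g)
    {
            mutex_lock(&g->railgate_lock);

            if (idle_and_railgate(g)) {               /* hypothetical helper */
                    mutex_unlock(&g->railgate_lock);  /* failure: drop the lock */
                    return -EBUSY;
            }
            return 0;                                 /* success: lock stays held */
    }

    int gk20a_do_unidle(struct gk20a *g)
    {
            /* ... bring the GPU back up ... */
            mutex_unlock(&g->railgate_lock);          /* released only here */
            return 0;
    }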
Arto Merilainen
d608aa53ee Revert "gpu: nvgpu: Dump offending push buffer fragment"
Channel and gpfifo allocations are entirely separate from each
other; however, the code here assumes that an active channel
also has a gpfifo.

This reverts commit a24602f094380539788696d1b1567a4f4d914b17 which
added gpfifo dump. Changing debug dumping to be safe requires
refactoring the channel release code to use proper locking.

Bug 1530226

Change-Id: I2fb02542a17dd56a0a9ce732b327e34b85ade8b9
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/434038
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Shridhar Rasal <srasal@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
2015-03-18 12:10:24 -07:00
Vijayakumar
2d666411ab gpu:nvgpu: Enable Falcon trace prints
Dump the Falcon trace on PMU crash and add a debugfs node, falc_trace.
This needs debug tracing to be enabled in the GPMU binary.

Change-Id: I093ef196202958e46d8d9636a848bd6733b5e4cc
Signed-off-by: Vijayakumar <vsubbu@nvidia.com>
Reviewed-on: http://git-master/r/432732
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
2015-03-18 12:10:23 -07:00
Arto Merilainen
c230159665 gpu: nvgpu: Update generic platform
This patch adds an .is_railgated() callback for the generic GPU platform.

Change-Id: Ief13a6fba82b376aafbe861e8f3823a19bb7f679
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/433059
Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
2015-03-18 12:10:23 -07:00
Arto Merilainen
75e334fceb gpu: nvgpu: Support probing host1x link from apps
This patch adds support to check if the host1x link exists and is
supported using the gpu characteristics ioctl.

Bug 1459653

Change-Id: I832eea217ed7f007e341dfde5769887e0882d6bb
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/433058
2015-03-18 12:10:23 -07:00
Mahantesh Kumbar
6dc277b783 gpu:nvgpu:sysfs node to update aelpg parameter
Added a sysfs node to update the AELPG parameters.
Pass the parameters in the following sequence:
SAMPLING_PERIOD_PG_DEFAULT_US, MINIMUM_IDLE_FILTER_DEFAULT_US,
MINIMUM_TARGET_SAVING_DEFAULT_US, POWER_BREAKEVEN_DEFAULT_US,
CYCLES_PER_SAMPLE_MAX_DEFAULT

Bug 1464737

Change-Id: I46873c463820f30f190c722d7ed038622cb2710f
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: http://git-master/r/422702
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
2015-03-18 12:10:22 -07:00
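
An illustrative sysfs store handler (the handler and helper names are assumptions, not the actual nvgpu code): parse the five AELPG parameters in the order listed above and hand them to the PMU.

    static ssize_t aelpg_param_store(struct device *dev,
                                     struct device_attribute *attr,
                                     const char *buf, size_t count)
    {
            int p[5];

            /* sampling period, min idle filter, min target saving,
             * power breakeven, cycles per sample (max) -- in that order */
            if (sscanf(buf, "%d %d %d %d %d",
                       &p[0], &p[1], &p[2], &p[3], &p[4]) != 5)
                    return -EINVAL;

            pmu_update_aelpg_params(dev, p);   /* hypothetical helper */
            return count;
    }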