Work around a GPU hang that occurs if boost turns the GPU on before
it has been initialized.
Bug 1435870
Change-Id: I07d0617049612344ca7c494da8cb8d75789984e5
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/453375
"isr_enable_lock" was used to protect pmu's isr_enabled flag
and pmu enable/disable calls
Instead of this extra lock, we can reuse "isr_mutex" for this
purpose
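For illustration, the resulting pattern looks roughly like this
(function and field names are illustrative, not the exact driver
code):

static void pmu_set_irq_enabled(struct pmu_gk20a *pmu, bool enable)
{
	mutex_lock(&pmu->isr_mutex);
	/* program the PMU interrupt enable registers here */
	pmu->isr_enabled = enable;
	mutex_unlock(&pmu->isr_mutex);
}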
Bug 200014542
Bug 200014887
Change-Id: Ifbb7d6108effc132266a20517820e470d52a7110
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/453348
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Add the following ioctls for channels:
1. NVHOST_IOCTL_CHANNEL_ENABLE
To enable the channel
2. NVHOST_IOCTL_CHANNEL_DISABLE
To disable the channel
3. NVHOST_IOCTL_CHANNEL_PREEMPT
To preempt the channel
(Not supported for a channel in TSG)
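A hedged user-space usage sketch (assuming the ioctls take no
argument; the actual encodings live in the nvhost UAPI header):

#include <sys/ioctl.h>
#include <linux/nvhost_ioctl.h>

/* fd is an open channel device node */
ioctl(fd, NVHOST_IOCTL_CHANNEL_DISABLE);  /* stop scheduling the channel */
ioctl(fd, NVHOST_IOCTL_CHANNEL_PREEMPT);  /* fails for a channel in a TSG */
ioctl(fd, NVHOST_IOCTL_CHANNEL_ENABLE);   /* resume scheduling */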
Bug 1514064
Change-Id: Ie9315f9742bb27efb22f993799c51a1ecda91756
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449229
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
This patch fixes a refcounting issue in semaphore handling.
Change-Id: I03327c60ed6923a90663f0b845566e81af4b94d4
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/453056
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
In the runlist, we first write the channel count in the TSG entry and
then follow it with that many channel entries.
If the number of channel entries does not match the count, it is
considered an error.
To detect this, keep a counter while adding channel entries and warn
if the channel count does not match this counter.
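An illustrative sketch of the check (structure and names are
assumptions, not the literal driver code):

u32 count = 0;

/* the TSG header announces how many channel entries follow */
add_tsg_entry(runlist, tsg);
list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
	if (test_bit(ch->hw_chid, runlist->active_channels)) {
		add_channel_entry(runlist, ch);
		count++;
	}
}
WARN_ON(count != tsg->num_active_channels);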
Bug 1470692
Change-Id: I4bbfd9b696fbfafa25dffb27979373f057a7f35a
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449228
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Currently we clear the runlist and re-create it in scheduled work
during the fifo recovery process.
But we can postpone this runlist re-generation until later, i.e. to
when the channel is closed.
Hence, remove the runlist locks and re-generation from the
handle_mmu_fault() methods. Instead, disable gr fifo access at the
start of recovery and re-enable it at the end of the recovery
process.
Also, delete the scheduled work that re-creates the runlist.
Re-enable ELPG and fifo access in
finish_mmu_fault_handling() itself.
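The recovery path then roughly has this shape (helper names are
illustrative, not the actual driver functions):

/* start of recovery: keep gr from fetching new work */
gr_fifo_access_set(g, false);
/* ... reset engines and abort the faulting channel ... */
/* end of recovery, in finish_mmu_fault_handling() */
gr_fifo_access_set(g, true);
gk20a_pmu_enable_elpg(g);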
Bug 1470692
Change-Id: I705a6a5236734c7207a01d9a9fa9eca22bdbe7eb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449225
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Rename the TSG's channel list to "ch_list", holding all channels,
instead of "ch_runnable_list", which held only the runnable ones.
We can traverse this list and check each channel's runnable status in
active_channels to find the runnable channels.
Remove the below APIs, as they are no longer required:
gk20a_bind_runnable_channel_to_tsg()
gk20a_unbind_channel_from_tsg()
While closing the channel, call gk20a_tsg_unbind_channel() to unbind
the channel from the TSG.
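A sketch of finding the runnable channels from the unified list
(names are illustrative):

list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
	/* a channel is runnable iff its bit is set in active_channels */
	if (test_bit(ch->hw_chid, f->active_channels))
		add_runlist_entry(runlist, ch);
}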
Bug 1470692
Change-Id: I0178fa74b3e8bb4e5c0b3e3b2b2f031491761ba7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449227
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reading the load value may increase CPU power consumption
temporarily. In most cases we are OK with a value that was read a
moment earlier.
This patch introduces a software shadow for the GPU load. The shadow
is updated before starting scaling, and all scaling code paths use
the sw shadow.
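A minimal sketch of the shadow, assuming a hypothetical hardware
reader gk20a_pmu_read_load() (the real shadow lives in per-device
state, not a file-scope variable):

/* updated once per scaling cycle; scaling paths only read the shadow */
static u32 gk20a_load_shadow;

static void gk20a_scale_update_load(struct gk20a *g)
{
	gk20a_load_shadow = gk20a_pmu_read_load(g);	/* hypothetical */
}

static u32 gk20a_scale_get_load(void)
{
	return gk20a_load_shadow;	/* no hardware access needed */
}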
Change-Id: I53d2ccb8e7f83147f411a14d3104d890dd9af9a3
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/453347
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
1. Register the tegra-throttle cooling device as a
platform driver.
2. Obtain all the platform data (throttle table
info) for all instances of the balanced-throttle cdev
from the device tree and register them.
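The registration boils down to a standard platform driver; the
compatible string and probe name below are assumptions:

static const struct of_device_id tegra_throttle_of_match[] = {
	{ .compatible = "nvidia,balanced-throttle" },	/* assumed */
	{ },
};

static struct platform_driver tegra_throttle_driver = {
	.probe = tegra_throttle_probe,	/* parses the throttle table from DT */
	.driver = {
		.name = "tegra-throttle",
		.of_match_table = tegra_throttle_of_match,
	},
};
module_platform_driver(tegra_throttle_driver);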
Change-Id: Ie92685eea3eb5cb18068b195adc9ab5f83762399
Signed-off-by: Arun Kumar Swain <arswain@nvidia.com>
Reviewed-on: http://git-master/r/449104
Reviewed-by: Diwakar Tundlam <dtundlam@nvidia.com>
Tested-by: Diwakar Tundlam <dtundlam@nvidia.com>
Return an error from pmu_mutex_acquire() and pmu_mutex_release() if
pmu->initialized is not set.
Bug 1533644
Change-Id: I341a5831bc5beeccb4587668f61c954ce7576226
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Currently, if pmu_mutex_acquire() fails, we disable ELPG and move
ahead. But it is not clear why it is required to disable ELPG in the
case where we fail to acquire the mutex.
Hence, skip disabling ELPG if pmu_mutex_acquire() fails.
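The runlist update path then reduces to roughly this (a sketch, not
the literal diff):

mutex_ret = pmu_mutex_acquire(&g->pmu, PMU_MUTEX_ID_FIFO, &token);
/* ... update the runlist regardless of mutex_ret ... */
if (!mutex_ret)
	pmu_mutex_release(&g->pmu, PMU_MUTEX_ID_FIFO, &token);
/* no ELPG disable/enable when the mutex could not be taken */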
Bug 1533644
Change-Id: I7e8e99a701d0ba071eb31ac17582b04072ee55eb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/448131
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
The compression bit base was calculated incorrectly in cases where
the number of LTCs was not 1. This patch fixes the code.
Change-Id: I25e3fa7446b238202d93ce8a72ed919d11fb6e30
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/449281
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Jussi Rasanen <jrasanen@nvidia.com>
GVS: Gerrit_Virtual_Submit
If code coverage is enabled on GCC 4.7, the kernel build fails in
gk20a_init_kind_attr(), since GCC decides to inline almost everything
in this file into it, producing a massive stack frame with over a
kilobyte's worth of gcov-generated temporary variables, which
triggers this error:
kind_gk20a.c: In function 'gk20a_init_kind_attr':
kind_gk20a.c:424:1: error: the frame size of 1232 bytes is larger than
1024 bytes [-Werror=frame-larger-than=]
(Just removing the inline keyword doesn't work, as GCC still decides to
inline it, so noinline_for_stack is actually required.)
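The fix is the standard kernel annotation for this situation; a
schematic example (the helper name is illustrative):

#include <linux/compiler.h>

/* keep gcov's temporaries in this helper's own stack frame instead
 * of letting GCC inline it (and its siblings) into the caller */
static noinline_for_stack void update_one_kind_attr(u8 kind)
{
	/* ... per-kind attribute setup ... */
}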
Change-Id: I819fd2a5b20581f0ac60e1ee490899c977379151
Signed-off-by: Tuomas Tynkkynen <ttynkkynen@nvidia.com>
Reviewed-on: http://git-master/r/448914
Reviewed-by: Juha Tukkinen <jtukkinen@nvidia.com>
Tested-by: Juha Tukkinen <jtukkinen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
This patch adds support for executing a precompiled GPU program to
allow exporting GPU buffers to other graphics units that have color
decompression engine (CDE) support.
Bug 1409151
Change-Id: Id0c930923f2449b85a6555de71d7ec93eed238ae
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/360418
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
When moving compression state tracking and compbit management ops to
kernel, we need to attach a fence to dma-buf metadata, along with the
compbit state.
To make in-kernel fence management easier, introduce a new gk20a_fence
abstraction. A gk20a_fence may be backed by a semaphore or a syncpoint
(id, value) pair. If the kernel is configured with CONFIG_SYNC, it will
also contain a sync_fence. The gk20a_fence can easily be converted back
to a syncpoint (id, value) pair or sync FD when we need to return it to
user space.
Change gk20a_submit_channel_gpfifo to return a gk20a_fence instead of
nvhost_fence. This is to facilitate work submission initiated from
kernel.
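A schematic of the abstraction (the field layout is illustrative; the
real struct lives in the gk20a driver):

struct gk20a_fence {
	struct kref ref;
	const struct gk20a_fence_ops *ops;	/* wait, is_expired, ... */
#ifdef CONFIG_SYNC
	struct sync_fence *sync_fence;		/* for conversion to a sync FD */
#endif
	/* semaphore-backed fence */
	struct gk20a_semaphore *semaphore;
	/* syncpoint-backed fence */
	u32 syncpt_id;
	u32 syncpt_value;
};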
Bug 1509620
Change-Id: I6154764a279dba83f5e91ba9e0cb5e227ca08e1b
Signed-off-by: Lauri Peltonen <lpeltonen@nvidia.com>
Reviewed-on: http://git-master/r/439846
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Remove functions that are not used in the gk20a allocator.
Bug 1523403
Change-Id: I36b2b236258d61602cb3283b59c43b40f237d514
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/432174
Updated the GM20B GPCPLL programming sequence to utilize the new
glitch-less post divider:
- No longer bypass the PLL for re-locking if it is already enabled
and the post divider and/or feedback divider are changing (an input
divider change still happens under bypass only).
- Use the post divider instead of the external linear divider to
introduce the (VCO min / 2) intermediate step when changing the PLL
frequency.
Bug 1450787
Signed-off-by: Alex Frid <afrid@nvidia.com>
Change-Id: I4fe60f8eb0d8e59002b641a6bfb29a53467dc8ce
Set up the GPCPLL dynamic ramp coefficients based on the update rate
(instead of hard-coding them), since on GM20B the high 38.4 MHz
reference clock allows several update rates within the supported
range.
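Schematically, the coefficients are now derived from the actual
update rate (all names below are illustrative):

/* update rate = reference clock / input divider M */
u32 update_rate_khz = ref_rate_khz / pll->M;

/* pick slowdown/step coefficients valid for this rate */
coeff = lookup_dyn_ramp_coeff(update_rate_khz);	/* hypothetical */
program_dyn_ramp(g, coeff);			/* hypothetical */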
Bug 1450787
Change-Id: I0e14bcb8e3f65cc164fbb66b4adc688fcee9e2d6
Signed-off-by: Alex Frid <afrid@nvidia.com>
Moved the GPCPLL locking-under-bypass procedure into a separate
function. Added SYNC_MODE control during locking.
Bug 1450787
Change-Id: I8dbf9427fbdaf55ea20b6876750b518eb738de1b
Signed-off-by: Alex Frid <afrid@nvidia.com>
In channel_finish(), we submit a WFI for all the channels, including
channels with the KEPLER_C class.
Since there is no need to submit a WFI for channels with the KEPLER_C
class, we can optimize by skipping the WFI submission and directly
waiting on the last submit fence.
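The fast path is roughly this (a sketch; the obj_class bookkeeping
and the wait helper are assumptions):

if (ch->obj_class == KEPLER_C) {
	/* no WFI needed: just wait for the last submit to complete */
	return gk20a_channel_wait_last_submit(ch);	/* hypothetical */
}
/* otherwise submit a WFI and wait on it as before */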
Bug 1534272
Change-Id: I3838416cf22122728e7f1008e01d77b14a35deba
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Currently gk20a gets a reference to host1x via a phandle in the
Device Tree. But runtime PM does not seem to handle power
dependencies too well in this case, and hence host1x is sometimes
off when we need it.
To fix this, explicitly power on host1x while powering the GPU up.
Do this via "busy" and "idle" callbacks from gk20a_platform.
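The callbacks amount to forwarding busy/idle to the host1x module (a
sketch; the nvhost helper names are assumptions):

static int gk20a_tegra_busy(struct platform_device *dev)
{
	struct gk20a_platform *platform = gk20a_get_platform(dev);

	/* take a power reference on host1x before the GPU touches it */
	if (platform->g->host1x_dev)
		return nvhost_module_busy_ext(platform->g->host1x_dev);
	return 0;
}

static void gk20a_tegra_idle(struct platform_device *dev)
{
	struct gk20a_platform *platform = gk20a_get_platform(dev);

	if (platform->g->host1x_dev)
		nvhost_module_idle_ext(platform->g->host1x_dev);
}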
Bug 1534272
Bug 200022536
Change-Id: Ia562ee19722cfc8edc5626a5a058ab8edfe3d206
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Updated the GPCPLL parameters according to the GM20B specification.
Modified the PLL programming, since on GM20B the PLL post divider
value is equal to the divider setting (which was not the case on
GK20A, from which this code was inherited).
Bug 1450787
Change-Id: Ia455ac49040047a3dbcd5d5211f2fbc71dc332ae
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/447751
GVS: Gerrit_Virtual_Submit
Reviewed-by: Hoang Pham <hopham@nvidia.com>
Tested-by: Hoang Pham <hopham@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Yu-Huan Hsu <yhsu@nvidia.com>
For the LS PMU, a new ucode needs to be used.
The ucode has interface header file changes too.
This patch also has fixes for a PMU DMEM copy failure.
Bug 1509680
Change-Id: I8c7018f889a82104dea590751e650e53e5524a54
Signed-off-by: Supriya <ssharatkumar@nvidia.com>
Reviewed-on: http://git-master/r/441734
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Used the GPU device name in the clock get operation (instead of a
fixed name) to make the operation common for GK20A and GM20B.
Updated the clock IDs in the Tegra clock framework accordingly.
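Schematically (passing NULL for the con_id is an assumption):

/* look the GPU clock up by the device's own name, not "gk20a" */
struct clk *clk = clk_get_sys(dev_name(&pdev->dev), NULL);
if (IS_ERR(clk))
	return PTR_ERR(clk);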
Bug 1450787
Change-Id: Ifd5b9c3a6fd8db5b06e6dcd989285e8410794803
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/441711
Reviewed-by: Bo Yan <byan@nvidia.com>
Tested-by: Bo Yan <byan@nvidia.com>
Made GK20A and GM20B clock operations static, since they are invoked
only via HAL interfaces.
Bug 1450787
Change-Id: Ia30218ad4244bd8790b5ef96d1963678d0ba39e1
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: http://git-master/r/441710
Reviewed-by: Bo Yan <byan@nvidia.com>
Tested-by: Bo Yan <byan@nvidia.com>
This reverts commit 50497d4031103df1067f14ce4c1e14b15713efb9.
Simply returning an error from mutex_acquire() causes the code to
call disable_elpg(), which decreases the ELPG refcount.
But we already have a race condition between PMU initialization,
where we initialize ELPG, and runlist update, where we call this
mutex_acquire() and decrease the refcount.
As a result of this race and the returned error, we might mess up
the ELPG refcount and cause abnormal behaviour.
Hence, revert this change for now, until we have a clean fix that
also takes this race into account.
Bug 200024116
Change-Id: Ie64ca36f70aba6b15c2acc235a5d36d13c9025aa
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/441793
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
- gk20a_enable reads the clock after unlocking the spinlock
in order to flush any previous write
- This could lead to a race if any later write assumes the
previous write has already completed
- Read the clock before unlocking to ensure all previous writes
have completed before letting any other thread use gk20a
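The fixed ordering, as a sketch in the driver's register-accessor
style (the lock name is an assumption):

spin_lock(&g->mc_enable_lock);
gk20a_writel(g, mc_enable_r(), pmc_enable);
gk20a_readl(g, mc_enable_r());	/* read back to flush the write... */
spin_unlock(&g->mc_enable_lock);	/* ...before others may write */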
Bug 200007520
Change-Id: I737fbbe825c68b25ca256c4a8ee2b99aa8baf0f5
Signed-off-by: Sang-Hun Lee <sanlee@nvidia.com>
Reviewed-on: http://git-master/r/418485
(cherry picked from commit 2aed542a719caa69620766bf2dceefe50626c189)
Reviewed-on: http://git-master/r/437842
Reviewed-by: Mitch Luban <mluban@nvidia.com>
Tested-by: Mitch Luban <mluban@nvidia.com>
gm20b/gm20x requires incrementing syncpoints twice to ensure that
the data has reached memory in all cases. This patch modifies the
increment push buffer to account for this requirement.
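The change amounts to emitting the increment twice (a sketch; the
command-emission helper is hypothetical):

/* the first increment signals completion; the second only retires
 * once the data is visible in memory, so waiters are safe on gm20b */
emit_syncpt_incr(pb, ch->syncpt_id);
emit_syncpt_incr(pb, ch->syncpt_id);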
Bug 1491360
Change-Id: I5c2899b26ce0e1cdf9408bb9aaa576fc3054480f
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/437675
Reviewed-by: Automatic_Commit_Validation_User
This patch adds mm helpers to access the compression backing store
from the in-kernel shader.
Bug 1409151
Change-Id: Icb4f6dc0b5a35fdb97bc4221ab3657866f775fae
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/440263
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
GVS: Gerrit_Virtual_Submit
In cases where a kernel channel dies, we can reload the context by
just reloading the golden context buffer. This patch makes the
necessary infrastructural changes to support this behaviour.
Bug 1409151
Change-Id: Ibe6a88bf7acea2d3aced2b86a7a687279075c386
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/440262
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
GVS: Gerrit_Virtual_Submit
This patch modifies channel interfaces to allow allocating the
channel for kernel use. This is needed if we want to run a shader
from kernel space.
Bug 1409151
Change-Id: I3544186bb1541120f85e01a19de106ef011c1b11
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-on: http://git-master/r/440261
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Lauri Peltonen <lpeltonen@nvidia.com>
GVS: Gerrit_Virtual_Submit
In pmu_mutex_acquire(), we return zero (success) if
pmu->initialized is not set.
Since mutex_acquire() was successful, we then call
pmu_mutex_release().
If pmu->initialized is set by some other thread in the meantime,
we proceed to validate the mutex owner and end up causing the
warning below:
pmu_mutex_release: requester 0x00000000 NOT match owner 0x00000008
Hence, to fix this, return an error from mutex_acquire() and
mutex_release() if pmu->initialized is not yet set, and in that
case proceed to call ELPG enable/disable directly.
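The fix, as a sketch (the hardware-mutex helpers are hypothetical):

int pmu_mutex_acquire(struct pmu_gk20a *pmu, u32 id, u32 *token)
{
	if (!pmu->initialized)
		return -EINVAL;	/* caller falls back to plain ELPG calls */
	return pmu_hw_mutex_acquire(pmu, id, token);	/* hypothetical */
}

int pmu_mutex_release(struct pmu_gk20a *pmu, u32 id, u32 *token)
{
	if (!pmu->initialized)
		return -EINVAL;	/* never validate an owner we never set */
	return pmu_hw_mutex_release(pmu, id, token);	/* hypothetical */
}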
Bug 1533644
Change-Id: Ifbb9e6a8e13f6478a13e3f9d98ced11792cc881f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/439333
GVS: Gerrit_Virtual_Submit
Reviewed-by: Naveen Kumar S <nkumars@nvidia.com>
Tested-by: Naveen Kumar S <nkumars@nvidia.com>
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>