Commit Graph

4 Commits

Author SHA1 Message Date
Deepak Nibade
b81e9a2431 gpu: nvgpu: add refcounting for TSG
Add refcounting for TSGs and manage the refcount as follows:
- initialize ref when TSG is opened
- get ref when channel is bound to TSG
- drop the ref when channel is unbound (i.e. during channel close)
- drop the ref when TSG is closed
- when refcount drops to zero, we free the TSG

This refcounting makes it possible to close channels or the TSG
in any order.
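
A minimal sketch of such a lifecycle using the kernel's kref API; the
struct layout and function names here are illustrative stand-ins, not
the actual nvgpu implementation:

  #include <linux/kernel.h>
  #include <linux/kref.h>
  #include <linux/slab.h>

  struct tsg_gk20a {                      /* simplified stand-in */
          struct kref refcount;
          /* ... channel list, engine context, ... */
  };

  static void tsg_release(struct kref *ref)
  {
          struct tsg_gk20a *tsg =
                  container_of(ref, struct tsg_gk20a, refcount);
          kfree(tsg);                     /* refcount hit zero: free the TSG */
  }

  /* TSG open: initialize the ref */
  static void tsg_open(struct tsg_gk20a *tsg)
  {
          kref_init(&tsg->refcount);
  }

  /* channel bind: take a ref */
  static void tsg_get(struct tsg_gk20a *tsg)
  {
          kref_get(&tsg->refcount);
  }

  /* channel unbind (channel close) and TSG close each drop a ref */
  static void tsg_put(struct tsg_gk20a *tsg)
  {
          kref_put(&tsg->refcount, tsg_release);
  }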

Bug 1470692

Change-Id: Ia4b39164a4582c8169da62a91b9131094c67f5f8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/495667
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:10 -07:00
Deepak Nibade
76993ba18c gpu: nvgpu: rework TSG's channel list
Rework the TSG's channel list into "ch_list", which holds all
channels bound to the TSG, instead of "ch_runnable_list", which held
only runnable channels. Runnable channels can now be found by
traversing this list and checking each channel's runnable status in
active_channels.
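
A sketch of how runnable channels might be picked out of the unified
list; the field names (ch_entry, active) are assumptions for
illustration, not the driver's exact members:

  #include <linux/list.h>
  #include <linux/types.h>

  struct channel_gk20a {
          struct list_head ch_entry;   /* link in the TSG's ch_list */
          bool active;                 /* set while the channel is runnable */
  };

  struct tsg_gk20a {
          struct list_head ch_list;    /* all bound channels, runnable or not */
  };

  /* walk every bound channel and act only on the runnable ones */
  static void tsg_scan_runnable(struct tsg_gk20a *tsg)
  {
          struct channel_gk20a *ch;

          list_for_each_entry(ch, &tsg->ch_list, ch_entry) {
                  if (!ch->active)
                          continue;
                  /* runnable channel: e.g. add it to the runlist */
          }
  }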

Remove the following APIs as they are no longer required:
gk20a_bind_runnable_channel_to_tsg()
gk20a_unbind_channel_from_tsg()

While closing a channel, call gk20a_tsg_unbind_channel()
to unbind it from the TSG.

Bug 1470692

Change-Id: I0178fa74b3e8bb4e5c0b3e3b2b2f031491761ba7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/449227
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:45 -07:00
Deepak Nibade
ee66559a0b gpu: nvgpu: add TSG support for engine context
All channels in a TSG need to share the same engine context,
i.e. the pointer in the RAMFC of every channel in the TSG must point
to the same NV_RAMIN_GR_WFI_TARGET.

To achieve this, add a gr_ctx pointer inside the TSG struct so
that the TSG maintains its own unique gr_ctx.
Also, change the channel's gr_ctx member to a pointer so that a
channel that is part of a TSG points to the TSG's gr_ctx, while a
standalone channel points to its own gr_ctx.

In gk20a_alloc_obj_ctx(), allocate gr_ctx as follows (see the
sketch below):

1) If the channel is not part of any TSG
- allocate its own gr_ctx buffer if it is not already allocated

2) If the channel is part of a TSG
- check whether the TSG has already allocated a gr_ctx
- if yes, the channel's gr_ctx will point to the TSG's
- if not, the channel is the first to be bound to this TSG;
  allocate a new gr_ctx on the TSG first and then make the
  channel's gr_ctx point to it

Also, gr_ctx is released as follows:

1) If the channel is not part of a TSG, it is released when the
   channel is closed
2) Otherwise, it is released when the TSG itself is closed
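
A condensed sketch of the allocation logic above, assuming simplified
struct layouts and kzalloc() in place of the real context-buffer
allocator; all names are illustrative only:

  #include <linux/errno.h>
  #include <linux/slab.h>

  struct gr_ctx_desc {
          void *mem;                       /* engine context buffer (simplified) */
  };

  struct tsg_gk20a {
          struct gr_ctx_desc *tsg_gr_ctx;  /* shared by all bound channels */
  };

  struct channel_gk20a {
          struct tsg_gk20a *tsg;           /* NULL when not bound to a TSG */
          struct gr_ctx_desc *gr_ctx;      /* pointer: own ctx or the TSG's */
  };

  static int alloc_obj_ctx_sketch(struct channel_gk20a *ch)
  {
          if (!ch->tsg) {
                  /* 1) standalone channel: allocate its own gr_ctx once */
                  if (!ch->gr_ctx)
                          ch->gr_ctx = kzalloc(sizeof(*ch->gr_ctx), GFP_KERNEL);
                  return ch->gr_ctx ? 0 : -ENOMEM;
          }

          /* 2) TSG channel: the first bound channel allocates the TSG's
           * gr_ctx, later channels simply point at it */
          if (!ch->tsg->tsg_gr_ctx) {
                  ch->tsg->tsg_gr_ctx =
                          kzalloc(sizeof(*ch->tsg->tsg_gr_ctx), GFP_KERNEL);
                  if (!ch->tsg->tsg_gr_ctx)
                          return -ENOMEM;
          }
          ch->gr_ctx = ch->tsg->tsg_gr_ctx;
          return 0;
  }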

Bug 1470692

Change-Id: Id347217d5b462e0e972cd3d79d17795b37034a50
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/417065
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:10:17 -07:00
Deepak Nibade
e6eb4b59f6 gpu: nvgpu: add kernel APIs for TSG support
Add support to create/destroy TSGs using the node "/dev/nvhost-tsg-gpu"

Provide the following IOCTLs to bind/unbind channels to/from TSGs
(a usage sketch follows the list):

NVGPU_TSG_IOCTL_BIND_CHANNEL
NVGPU_TSG_IOCTL_UNBIND_CHANNEL
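
A hypothetical userspace usage sketch; the uapi header name and the
ioctl argument type (a plain channel fd) are assumptions here, so
check the driver's exported header for the real definitions:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/nvgpu.h>   /* assumed uapi header with the TSG ioctls */

  /* open a TSG and bind an already-open channel fd to it;
   * returns the TSG fd (closing it destroys the TSG) or -1 on error */
  int bind_channel_to_tsg(int channel_fd)
  {
          int tsg_fd = open("/dev/nvhost-tsg-gpu", O_RDWR);
          if (tsg_fd < 0) {
                  perror("open tsg node");
                  return -1;
          }

          /* argument layout is an assumption for illustration */
          if (ioctl(tsg_fd, NVGPU_TSG_IOCTL_BIND_CHANNEL, &channel_fd) < 0) {
                  perror("NVGPU_TSG_IOCTL_BIND_CHANNEL");
                  close(tsg_fd);
                  return -1;
          }
          return tsg_fd;
  }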

Bug 1470692

Change-Id: Iaf9f16a522379eb943906624548f8d28fc6d4486
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/416610
2015-03-18 12:10:16 -07:00