We currently have three header files for FECS tracing support:
include/nvgpu/gr/fecs_trace.h : common header
include/nvgpu/ctxsw_trace.h : header that declares both common and
OS-specific functions
os/linux/ctxsw_trace.h : Linux-specific header
Remove the second header since it is not needed.
Move all structures and function declarations that are needed in
common code to include/nvgpu/gr/fecs_trace.h
Move all Linux-specific declarations to os/linux/ctxsw_trace.h and
rename this file to os/linux/fecs_trace_linux.h
Also rename os/linux/ctxsw_trace.c to os/linux/fecs_trace_linux.c
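For illustration, a minimal sketch of who includes what after this
change (the exact include paths are an assumption):

  /* common (OS-independent) code only needs the common header */
  #include <nvgpu/gr/fecs_trace.h>

  /* Linux-specific code, e.g. os/linux/fecs_trace_linux.c */
  #include <nvgpu/gr/fecs_trace.h>
  #include "fecs_trace_linux.h"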
Jira NVGPU-1880
Change-Id: I05cc4489c4b6a64880b7d59c02b22cd2244d5e22
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2070766
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the following calls to the gr/fecs_trace unit:
gk20a_fecs_trace_bind_channel()
gk20a_fecs_trace_unbind_channel()
and rename them to:
nvgpu_gr_fecs_trace_bind_channel()
nvgpu_gr_fecs_trace_unbind_channel()
The gr/fecs_trace unit does not access any fifo/channel/tsg construct,
so update the parameter list of the above APIs to receive the
inst_block, gr_ctx and subctx pointers directly instead of a
channel_gk20a pointer
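A hedged sketch of the updated prototypes; the exact parameter types
and order in the actual patch may differ:

  int nvgpu_gr_fecs_trace_bind_channel(struct gk20a *g,
      struct nvgpu_mem *inst_block,
      struct nvgpu_gr_subctx *subctx,
      struct nvgpu_gr_ctx *gr_ctx);

  int nvgpu_gr_fecs_trace_unbind_channel(struct gk20a *g,
      struct nvgpu_mem *inst_block);

Callers that previously passed a channel_gk20a pointer now look up the
instance block and context structures themselves and pass them in.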
Delete the gk20a/fecs_trace_gk20a.* files since they are no longer
required; all of their contents have been moved to the gr/fecs_trace
unit
Jira NVGPU-1880
Change-Id: I7ef9f0b66781b45155035237172ae400f02740e4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2032707
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move struct gk20a_fecs_trace_record to the gr/fecs_trace unit and
rename it to struct nvgpu_fecs_trace_record
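For reference, a sketch of the renamed record as FECS lays it out in
the trace buffer; the field list is recalled from the original gk20a
definition and may not match the moved code exactly:

  struct nvgpu_fecs_trace_record {
      u32 magic_lo;         /* "record valid" magic, low word */
      u32 magic_hi;         /* "record valid" magic, high word */
      u32 context_id;       /* id of the context switched out */
      u32 context_ptr;      /* instance block pointer of that context */
      u32 new_context_id;   /* id of the context switched in */
      u32 new_context_ptr;  /* instance block pointer of the new context */
      u64 ts[];             /* per-event timestamps, tagged with an event id */
  };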
Move all of the APIs in nvgpu/fecs_trace.h to nvgpu/gr/fecs_trace.h
and rename them to the nvgpu_gr_fecs_trace_*() format
Delete nvgpu/fecs_trace.h
Add a new HAL unit, common/gr/fecs_trace/fecs_trace_gm20b.c, for the
register accesses needed by the gr/fecs_trace unit
Add the following new HALs in this HAL unit:
g->ops.fecs_trace.get_read_index()
g->ops.fecs_trace.get_write_index()
g->ops.fecs_trace.set_read_index()
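Roughly, the new ops and their use from common code look like the
sketch below; the signatures are an assumption based on the names
above, and the gm20b register-level bodies are not shown:

  /* per-chip fecs_trace HAL ops (filled in by gm20b) */
  u32 (*get_read_index)(struct gk20a *g);
  u32 (*get_write_index)(struct gk20a *g);
  int (*set_read_index)(struct gk20a *g, u32 index);

  /* common gr/fecs_trace code touches the HW only through the HAL */
  u32 read_idx  = g->ops.fecs_trace.get_read_index(g);
  u32 write_idx = g->ops.fecs_trace.get_write_index(g);

  /* ... consume records between read_idx and write_idx ... */

  g->ops.fecs_trace.set_read_index(g, read_idx);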
Jira NVGPU-1880
Change-Id: Ib6ee32ba0d2f8a8a3e82491057e2f01a0275fcf4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024973
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move struct gk20a_fecs_trace to the new gr/fecs_trace unit and rename
it to struct nvgpu_gr_fecs_trace
Add an enable_lock mutex and an enable_count to this structure to
support QNX use cases
Remove the init field from struct gk20a_fecs_trace
Rename gk20a_fecs_trace_init() to nvgpu_gr_fecs_trace_init() and
move it to the new unit
Rename gk20a_fecs_trace_deinit() to nvgpu_gr_fecs_trace_deinit()
and move it to the new unit
Update gk20a_fecs_trace_enable() to start the polling thread only when
enable_count reaches 1; otherwise just increment enable_count
Update gk20a_fecs_trace_disable() to stop the thread only when
enable_count drops to 0; otherwise just decrement enable_count
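A minimal sketch of the intended reference counting, assuming the
enable_lock/enable_count fields added above; start_poll_thread() and
stop_poll_thread() are placeholder helpers, not the real ones:

  int gk20a_fecs_trace_enable(struct gk20a *g)
  {
      struct nvgpu_gr_fecs_trace *trace = g->fecs_trace;
      int err = 0;

      nvgpu_mutex_acquire(&trace->enable_lock);
      trace->enable_count++;
      if (trace->enable_count == 1U) {
          /* first user: start the periodic poll thread */
          err = start_poll_thread(g);
      }
      nvgpu_mutex_release(&trace->enable_lock);
      return err;
  }

  int gk20a_fecs_trace_disable(struct gk20a *g)
  {
      struct nvgpu_gr_fecs_trace *trace = g->fecs_trace;

      nvgpu_mutex_acquire(&trace->enable_lock);
      if (trace->enable_count > 0U) {
          trace->enable_count--;
          if (trace->enable_count == 0U) {
              /* last user gone: stop the poll thread */
              stop_poll_thread(g);
          }
      }
      nvgpu_mutex_release(&trace->enable_lock);
      return 0;
  }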
Before this patch, struct gk20a_fecs_trace was not visible in the new
unit, and hence all mutex_acquire() calls for list_lock were done in
the fecs_trace_gk20a.c file
Since the struct is now available in the new unit, move the mutex
acquire/release calls for list_lock to the gr/fecs_trace unit
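With the struct visible, the common unit can take the lock directly;
a minimal sketch of the pattern:

  nvgpu_mutex_acquire(&trace->list_lock);
  /* walk or modify the traced-context list */
  nvgpu_mutex_release(&trace->list_lock);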
Jira NVGPU-1880
Change-Id: I5abfa0165fa1c31716f3d6f2f669284f8959d7cf
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2024562
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>