Move gk20a_gr_alloc_global_ctx_buffers from gr_gk20a.c to gr.c as a
static function named gr_alloc_global_ctx_buffers. This function is
used locally by the gr_init_setup_sw function.
Remove the alloc_global_ctx_buffers HAL function.
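For illustration, a minimal sketch of the resulting shape (bodies and
error handling are placeholders, not the actual nvgpu code):

struct gk20a;

/* now file-local to gr.c; no longer reachable through a HAL */
static int gr_alloc_global_ctx_buffers(struct gk20a *g)
{
        /* ... allocate the global context buffers ... */
        return 0;
}

static int gr_init_setup_sw(struct gk20a *g)
{
        int err = gr_alloc_global_ctx_buffers(g);

        /* ... remaining s/w setup ... */
        return err;
}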
JIRA NVGPU-1885
Change-Id: I85f1ed85259cd564577b69af8cf01c1a2802004b
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2093834
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Renamed the following and moved them from fifo to channel:
gk20a_debug_dump_all_channel_status_ramfc -> nvgpu_channel_debug_dump_all
gk20a_dump_channel_status_ramfc -> gk20a_channel_debug_dump
gv11b_dump_channel_status_ramfc -> gv11b_channel_debug_dump
Moved nvgpu_channel_dump_info struct to channel.h
Moved nvgpu_channel_hw_state struct to channel.h
Moved the dump_channel_status_ramfc fifo op to the channel ops
as debug_dump
JIRA NVGPU-2978
Change-Id: I696e5029d9e6ca4dc3516651b4d4f5230fe8b0b0
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2092709
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be
used. The fix is either to use the return value or to change the
function to return void. In the case of the pmu_setup_elpg operation,
all implementations always returned 0, so this patch changes the
signature to return void instead.
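A hedged sketch of the signature change (the enclosing ops struct is
an illustrative stand-in, not the exact nvgpu definition):

struct gk20a;

struct gops_pmu_sketch {
        /* before: int (*pmu_setup_elpg)(struct gk20a *g); always returned 0 */
        void (*pmu_setup_elpg)(struct gk20a *g);
};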
JIRA NVGPU-3036
Change-Id: I6f0a79314535ba9e3c65d28399b117b058bb23ca
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2092680
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved eng method buffer init/deinit from fifo to the following tsg
HALs (the new entries are sketched after this list):
- tsg.init_eng_method_buffers
- tsg.deinit_eng_method_buffers
Moved gv11b_fifo_init_ramfc_eng_method_buffer to the
following tsg HAL:
- tsg.bind_channel_eng_method_buffers
This HAL is now called during bind_channel.
Added the following ramin HAL:
- ramin.set_ramfc_eng_method_buffer
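A hedged sketch of the new HAL entries listed above; the struct names
and parameter lists are assumptions, not the exact nvgpu prototypes:

#include <nvgpu/types.h>

struct gk20a;
struct nvgpu_tsg;
struct nvgpu_channel;
struct nvgpu_mem;

struct gops_tsg_sketch {
        int (*init_eng_method_buffers)(struct gk20a *g,
                                       struct nvgpu_tsg *tsg);
        void (*deinit_eng_method_buffers)(struct gk20a *g,
                                          struct nvgpu_tsg *tsg);
        /* called from bind_channel */
        void (*bind_channel_eng_method_buffers)(struct nvgpu_tsg *tsg,
                                                struct nvgpu_channel *ch);
};

struct gops_ramin_sketch {
        /* programs the eng method buffer address into the channel's RAMFC */
        void (*set_ramfc_eng_method_buffer)(struct gk20a *g, u64 buf_gpu_va,
                                            struct nvgpu_mem *inst_block);
};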
Jira NVGPU-2979
Change-Id: I96f6ff15d2176d4e3714fa8fe65a9126b3fff82c
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2087185
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the following HALs from fifo to tsg:
- tsg.bind_channel
- tsg.unbind_channel
- tsg.unbind_channel_check_hw_state
- tsg.unbind_channel_check_ctx_reload
- tsg.unbind_channel_check_eng_faulted
bind_channel and unbind_channel HALs are optional,
and only implemented for vgpu:
- vgpu_tsg_bind_channel
- vgpu_tsg_unbind_channel
Moved the following code from fifo to tsg:
- nvgpu_tsg_bind_channel
- nvgpu_tsg_unbind_channel
- nvgpu_tsg_unbind_channel_check_hw_state
- nvgpu_tsg_unbind_channel_check_ctx_reload
- gv11b_tsg_unbind_channel_check_eng_faulted
tsg is now explicitly passed to bind/unbind operations,
along with ch.
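A hedged sketch of the resulting call shape (struct names and return
types are assumptions):

struct nvgpu_tsg;
struct nvgpu_channel;

/* tsg is passed explicitly alongside ch instead of being looked up inside */
int nvgpu_tsg_bind_channel(struct nvgpu_tsg *tsg, struct nvgpu_channel *ch);
int nvgpu_tsg_unbind_channel(struct nvgpu_tsg *tsg, struct nvgpu_channel *ch);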
Jira NVGPU-2979
Change-Id: I337a3d73ceef5ff320b036b14739ef0e831a28ee
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084029
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the API nvgpu_gr_setup_set_preemption_mode() in common.gr.setup
to set various preemption modes.
Define a new HAL, g->ops.gr.setup.set_preemption_mode(), that calls
the above common API.
Move the corresponding code from gr_gp10b.c to the common.gr.setup
unit.
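A hedged sketch of the new common API (parameter names and types are
assumptions); per the description above, the per-chip HAL
g->ops.gr.setup.set_preemption_mode() calls this function:

#include <nvgpu/types.h>

struct nvgpu_channel;

/* common.gr.setup entry point; returns 0 on success or a negative errno */
int nvgpu_gr_setup_set_preemption_mode(struct nvgpu_channel *ch,
                                       u32 graphics_preempt_mode,
                                       u32 compute_preempt_mode);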
Jira NVGPU-1886
Change-Id: I7cb0187a4809156e5f90f39727a782b17219afa3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2092170
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the APIs below in common.gr.setup to allocate/free context:
nvgpu_gr_setup_alloc_obj_ctx()
nvgpu_gr_setup_free_gr_ctx()
Define two new HALs:
g->ops.gr.setup.alloc_obj_ctx()
g->ops.gr.setup.free_gr_ctx()
Move the corresponding code from gr_gk20a.c to the common.gr.setup
unit.
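A hedged sketch of the two new interfaces (parameter lists are
assumptions, not the exact nvgpu prototypes):

#include <nvgpu/types.h>

struct gk20a;
struct nvgpu_channel;
struct vm_gk20a;
struct nvgpu_gr_ctx;

/* allocate the graphics context objects for a channel of the given class */
int nvgpu_gr_setup_alloc_obj_ctx(struct nvgpu_channel *ch, u32 class_num,
                                 u32 flags);

/* free a previously allocated graphics context */
void nvgpu_gr_setup_free_gr_ctx(struct gk20a *g, struct vm_gk20a *vm,
                                struct nvgpu_gr_ctx *gr_ctx);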
Jira NVGPU-1886
Change-Id: Icf170a6ed8979afebcedaa98e3df1483437b427b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2092169
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
SEC2 ISR handling requires message processing in the common msg
unit. That unit needs interfaces from the HAL to check whether the
msg interrupt was received, to set the msg interrupt, and to handle
the other interrupts.
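A hedged sketch of the kind of HAL interfaces this implies; every
name below is hypothetical:

#include <nvgpu/types.h>

struct gk20a;

struct gops_sec2_intr_sketch {
        /* did the pending interrupt include the msg interrupt? */
        bool (*msg_intr_received)(struct gk20a *g);
        /* (re)arm the msg interrupt */
        void (*set_msg_intr)(struct gk20a *g);
        /* handle the remaining, non-msg SEC2 interrupts */
        void (*handle_other_intr)(struct gk20a *g, u32 intr);
};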
JIRA NVGPU-2025
Change-Id: I3b5ad8968ea9298cc769113417931c4678009cf1
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2085753
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The SEC2 message handling unit can't be part of the command handling
unit, as that creates circular dependencies with the SEC2 tasks (ACR
bootstrap).
The SEC2 allocator unit shall encompass the DMEM allocator and the
other allocators used by SEC2.
JIRA NVGPU-2075
Change-Id: Ic2b8204d8225f2056785f035cbecdb776a9ecfe9
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2085749
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
SEC2 command and message management is based on sharing data through
sequences. Functions that send commands or allocate payloads update
the sequence data and acquire the lock, while functions that process
received messages read/free the sequence data and release the lock.
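One possible shape of that flow, as a hedged sketch; the structure,
field names and locking granularity are assumptions, with
nvgpu_mutex_acquire/release standing in for the driver's mutex
helpers:

#include <nvgpu/types.h>
#include <nvgpu/lock.h>

struct sec2_seq_sketch {
        bool in_use;
        /* ... payload pointers, callback, completion data ... */
};

struct sec2_sequences_sketch {
        struct nvgpu_mutex lock;
        struct sec2_seq_sketch *seqs;
};

/* command-send side: pick a free sequence and attach the payload */
static struct sec2_seq_sketch *
sec2_seq_acquire_sketch(struct sec2_sequences_sketch *s)
{
        struct sec2_seq_sketch *seq;

        nvgpu_mutex_acquire(&s->lock);
        seq = &s->seqs[0];      /* placeholder for a free-entry search */
        seq->in_use = true;
        nvgpu_mutex_release(&s->lock);

        return seq;
}

/* message-receive side: read the completion data and free the sequence */
static void sec2_seq_release_sketch(struct sec2_sequences_sketch *s,
                                    struct sec2_seq_sketch *seq)
{
        nvgpu_mutex_acquire(&s->lock);
        /* ... read response data out of seq ... */
        seq->in_use = false;
        nvgpu_mutex_release(&s->lock);
}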
JIRA NVGPU-2075
Change-Id: I988662d6ce6f9a15d67bab2a58d3f2689ffd804a
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2085748
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
SEC2 command and message handling should not deal with the SEC2
queue implementation; only generic queue APIs should be invoked.
Prepare the SEC2 queues unit for this. If the underlying queue
implementation has to change in the future, it can be done in the
queues unit.
JIRA NVGPU-2075
Change-Id: Ia9786255bbc22f6f2a9686bd98eef8666d1ab370
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2085746
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Move the perfmon unit source code to the common/pmu/perfmon/ folder
- Separate the perfmon unit headers under include/nvgpu/pmu/pmu_perfmon.h
- Add a new structure, nvgpu_pmu_perfmon, for the perfmon unit
- This new struct combines all perfmon unit variables, such as
  perfmon_query, perfmon_ready etc., into one structure as part of
  the perfmon unit refactoring.
- Use the pmu_perfmon struct to access all perfmon variables,
  e.g. pmu->pmu_perfmon->perfmon_query, pmu->pmu_perfmon->perfmon_ready
  and so on.
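A hedged sketch of the consolidated structure; only the fields named
above are shown, and their types are assumptions:

#include <nvgpu/types.h>

/* reached via pmu->pmu_perfmon */
struct nvgpu_pmu_perfmon {
        bool perfmon_ready;
        bool perfmon_query;     /* type is an assumption */
        /* ... remaining perfmon unit variables ... */
};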
JIRA NVGPU-1961
Change-Id: I57516c646bfb256004dd7b719e40fafd3c2a09b2
Signed-off-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2080555
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved fb interrupt handling related code to the fb intr sub-unit.
Moved the following HALs from the fb HAL to the fb intr HAL and
renamed them to (see the sketch below):
void (*enable)(struct gk20a *g);
void (*disable)(struct gk20a *g);
void (*isr)(struct gk20a *g);
gk20a_readl/writel calls are replaced with nvgpu_readl/writel.
HALs are populated with the new function names and the code is
modified to call the new HALs.
Moved ECC interrupt handling to gv11b_fb_intr_handle_ecc in separate
files: fb_intr_ecc_gv11b.c/h
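Grouped under the fb intr sub-unit, the renamed HALs look roughly
like this (the sub-struct name is an assumption; the signatures are
the ones listed above):

struct gk20a;

struct gops_fb_intr_sketch {
        void (*enable)(struct gk20a *g);
        void (*disable)(struct gk20a *g);
        void (*isr)(struct gk20a *g);
};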
JIRA NVGPU-2034
Change-Id: I80c7110c902c4e082561cf7cbe65c20eb9acb661
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090070
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule-17.7 requires the return value of all functions to be
used. The fix is either to use the return value or to change the
function to return void.
The ACR operations acr_fill_bl_dmem_desc and patch_wpr_info_to_ucode
returned an int that was always 0. This patch changes their
signatures to return void instead.
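A hedged sketch of the signature change; parameter lists are trimmed
and the enclosing struct is an illustrative stand-in:

struct gk20a;

struct gops_acr_sketch {
        /* before: both returned an int that was always 0 */
        void (*acr_fill_bl_dmem_desc)(struct gk20a *g /* , ... */);
        void (*patch_wpr_info_to_ucode)(struct gk20a *g /* , ... */);
};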
JIRA NVGPU-3032
Change-Id: I4116db3b5aec92ee9b914f5549d6c7c994affd4a
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090042
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
force_reset_ch obtains a tsg from a channel before proceeding with
other work. Thus, force_reset_ch is moved into the tsg unit to avoid
a circular dependency between channel and tsg: TSGs can depend on
channels, but channels cannot depend on TSGs.
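A hedged sketch of the dependency direction (the signature, the body
and the TSG-lookup helper are assumptions):

#include <nvgpu/types.h>

struct gk20a;
struct nvgpu_channel;
struct nvgpu_tsg;

/* tsg-level entry point: resolve the channel's TSG first, then recover */
int nvgpu_tsg_force_reset_ch(struct nvgpu_channel *ch, u32 err_code,
                             bool verbose)
{
        /* hypothetical lookup helper; returns NULL for unbound channels */
        struct nvgpu_tsg *tsg = NULL; /* = tsg_from_ch(ch) */

        (void)err_code;
        (void)verbose;

        if (tsg == NULL) {
                return -1;      /* -EINVAL in the driver */
        }

        /* ... set error notifier and recover at TSG granularity ... */
        return 0;
}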
Jira NVGPU-2978
Change-Id: Ib1879681287971d2a4dbeb26ca852d6b59b50f6a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084927
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
a) free_channel_ctx_header is used to free the channel's underlying
subctx and belongs to the hal.channel unit instead of fifo. Moved it
and renamed the HAL op to free_ctx_header. The function
gv11b_free_subctx_header is moved to the channel_gv11b.* files and
renamed to gv11b_channel_free_subctx_header.
b) ch_abort_clean_up is moved to the hal.channel unit.
c) channel_resume and channel_suspend are used to resume and suspend
all the serviceable channels. These belong to the hal.channel unit
and are moved from the hal.fifo unit.
The HAL ops channel_resume and channel_suspend are renamed to
resume_all_serviceable_ch and suspend_all_serviceable_ch respectively.
gk20a_channel_resume and gk20a_channel_suspend are also renamed to
nvgpu_channel_resume_all_serviceable_ch and
nvgpu_channel_suspend_all_serviceable_ch respectively.
d) The set_error_notifier HAL op belongs to hal.channel and is moved
accordingly (the renamed ops are sketched below).
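A hedged sketch of the renamed hal.channel entries; the struct name
and parameter lists are assumptions:

#include <nvgpu/types.h>

struct gk20a;
struct nvgpu_channel;

struct gops_channel_sketch {
        void (*free_ctx_header)(struct nvgpu_channel *ch);
        void (*abort_clean_up)(struct nvgpu_channel *ch);
        int (*suspend_all_serviceable_ch)(struct gk20a *g);
        int (*resume_all_serviceable_ch)(struct gk20a *g);
        void (*set_error_notifier)(struct nvgpu_channel *ch, u32 error);
};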
Jira NVGPU-2978
Change-Id: Icb52245cacba3004e2fd32519029a1acff60c23c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Bugs in the current version are listed below:
1. alloc() and alloc_pte() allocate memory for len=0.
2. alloc() and alloc_pte() don't unlock the nvgpu_allocator if
pte_size is invalid.
3. alloc_pte() and alloc_fixed() set the alloc_made flag
unconditionally.
4. release_carveout() tries to acquire the nvgpu_allocator lock
twice, causing an unresponsive state.
5. If buddy allocation fails in balloc_do_alloc() or
balloc_do_alloc_fixed(), previously allocated buddies are not
merged. This causes a segfault in ops->fini().
6. With gva_space enabled and base=0, the buddy allocator's updated
base is not checked for PDE alignment.
7. In balloc_do_alloc_fixed(), align_order computed using __fls()
results in an order one higher than requested.
8. Initializing the buddy allocator with size=0 initializes a very
large memory range and will trigger a segfault with the changes in
this patch. Setting size=1G so that further execution is successful.
This patch fixes the bugs listed above (items 1 and 2 are sketched
below) and makes the following updates:
1. With gva_space enabled, BALLOC_PTE_SIZE_ANY is treated as
BALLOC_PTE_SIZE_SMALL, which allows alloc() to be used.
2. GPU_BALLOC_MAX_ORDER changed to 63U. A condition is added to check
that max_order is never greater than GPU_BALLOC_MAX_ORDER.
3. BUG() changed to nvgpu_do_assert().
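A hedged sketch of the fixes for items 1 and 2 of the bug list; the
lock helpers and the pte_size check are placeholders for the real
buddy allocator code:

#include <nvgpu/types.h>

struct nvgpu_allocator;

/* assumed to correspond to the driver's alloc_lock()/alloc_unlock() */
void alloc_lock(struct nvgpu_allocator *na);
void alloc_unlock(struct nvgpu_allocator *na);

static bool balloc_pte_size_invalid_sketch(u32 pte_size)
{
        /* placeholder for the real small/big/any pte_size validation */
        return pte_size > 2U;
}

static u64 balloc_alloc_pte_sketch(struct nvgpu_allocator *na, u64 len,
                                   u32 pte_size)
{
        u64 addr = 0ULL;

        if (len == 0ULL) {
                return 0ULL;        /* item 1: reject zero-length requests */
        }

        alloc_lock(na);
        if (balloc_pte_size_invalid_sketch(pte_size)) {
                alloc_unlock(na);   /* item 2: unlock before bailing out */
                return 0ULL;
        }
        /* ... normal buddy allocation under the lock ... */
        alloc_unlock(na);

        return addr;
}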
JIRA NVGPU-3005
Change-Id: I20c28e20aa3404976d67f7884b4f8cbd5c908ba7
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075646
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Created the PMU super surface unit and moved super surface related
structs/functions under the unit, separated the super surface structs
into private/public based on usage/access, and updated super surface
dependent files to reflect the super surface changes for the new
unit.
JIRA NVGPU-3045
Change-Id: I6ac426052eb60f00b432d9533460aa0afd939fe3
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088405
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
lsfm: LS falcon manager
Created the lsfm unit under common/pmu/lsfm and moved functions and
variables related to lsfm functionality under the lsfm unit. Within
the lsfm unit, created separate files based on init, which does
chip-specific s/w init, and separated private/public functionality.
JIRA NVGPU-3021
Change-Id: Iad4a4e5533122fb2387a4980581a0d7bcdb37d67
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2080546
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the following functions from gr_gk20a.c to common.gr.init:
gk20a_init_gr_support ---> nvgpu_gr_init_support
gk20a_gr_reset ---> nvgpu_gr_reset
gk20a_enable_gr_hw ---> nvgpu_gr_enable_hw
Move all static functions called from those functions to
common.gr.init, under the gr.c file.
JIRA NVGPU-1885
Change-Id: I695235f97738654e7c686a345d3f84d1daaacd72
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2082363
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch fixes the two issues below.
1. Currently clk arb exit is called after the GPU registers are
released. This causes a crash when the clk arb WQ accesses a GPU HW
register for status. The ideal fix is to exit clk_arb, which stops
the WQ from running, before locking out the registers.
2. Check whether the dGPU is dying during processing of PMU commands.
This prevents a race condition where the PMU is waiting for a
response while the device is shut down.
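A hedged sketch of the intended teardown ordering for issue 1 (both
helper names below are hypothetical placeholders, not the driver's
actual APIs):

struct gk20a;

void clk_arb_worker_exit_sketch(struct gk20a *g);   /* stops the clk arb WQ */
void gpu_lockout_registers_sketch(struct gk20a *g); /* releases HW registers */

static void gpu_remove_sketch(struct gk20a *g)
{
        /* stop the clk arb workqueue first so it can no longer read
         * GPU HW status registers ... */
        clk_arb_worker_exit_sketch(g);

        /* ... and only then lock out / release the GPU registers */
        gpu_lockout_registers_sketch(g);
}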
Bug 200488054
Change-Id: I812b07af7db4494d5ea2ed6197742ceb23d30a4b
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081916
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the gk20a_gr_handle_gpc_exception function from gr_gk20a.c
to gr_intr.c as nvgpu_gr_intr_handle_gpc_exception.
Move the static function gk20a_gr_handle_tpc_exception to
gr_intr.c as gr_intr_handle_tpc_exception.
JIRA NVGPU-3016
Change-Id: I42862b00d1946e029673d8f95e0262a44244a87a
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090405
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>