The following MISRA Rule 5.7 violations are reported in the common.gr.config unit:
nvgpu/drivers/gpu/nvgpu/common/gr/gr_config.c:628:
identifier_reuse: Identifier "sm_info" is already used to represent a type.
Fix them by renaming struct sm_info to struct nvgpu_sm_info
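For illustration, a minimal before/after sketch of the rename; the member list shown here is hypothetical:
/* Before: struct sm_info { ... };  -- tag reuses an existing identifier.
 * After: prefixed tag is unique. */
struct nvgpu_sm_info {
	u32 gpc_index;	/* hypothetical members, for illustration only */
	u32 tpc_index;
};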
Jira NVGPU-3225
Change-Id: I26f70a4ed2a5a845e0dc9daeb8fb5474e35d42fb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2110986
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new unit common.fbp which initializes FBP support and provides
APIs to retrieve FBP data.
Create a private header with the following data:
struct nvgpu_fbp {
	u32 num_fbps;
	u32 max_fbps_count;
	u32 fbp_en_mask;
	u32 *fbp_rop_l2_en_mask;
};
Expose the following public APIs to initialize/remove FBP support:
nvgpu_fbp_init_support()
nvgpu_fbp_remove_support()
vgpu_fbp_init_support() for vGPU
Expose the following APIs to retrieve FBP data:
nvgpu_fbp_get_num_fbps()
nvgpu_fbp_get_max_fbps_count()
nvgpu_fbp_get_fbp_en_mask()
nvgpu_fbp_get_rop_l2_en_mask()
Use these APIs to retrieve FBP data throughout the code.
Remove the corresponding fields from struct nvgpu_gr since they are no
longer referenced from that structure.
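For illustration, the retrieval APIs can be thin getters over the private struct above; a minimal sketch, assuming each getter takes the struct nvgpu_fbp pointer:
u32 nvgpu_fbp_get_num_fbps(struct nvgpu_fbp *fbp)
{
	return fbp->num_fbps;
}

u32 nvgpu_fbp_get_fbp_en_mask(struct nvgpu_fbp *fbp)
{
	return fbp->fbp_en_mask;
}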
Jira NVGPU-3124
Change-Id: I027caf4874b1f6154219f01902020dec4d7b0cb1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2108617
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
channel.c calls nvgpu_gr_flush_channel_tlb(), creating a circular
dependency between gr and fifo. Avoid this by moving the channel TLB
related data to struct nvgpu_gr_intr in gr_intr_priv.h and
initializing this data in gr_intr.c.
Create the following new gr intr HAL and call it from channel.c:
void (*flush_channel_tlb)(struct gk20a *g);
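A minimal sketch of the new call site in channel.c, assuming the HAL hangs off the gr.intr ops group:
/* Flush the channel TLB through the gr intr HAL instead of calling
 * nvgpu_gr_flush_channel_tlb() directly. */
if (g->ops.gr.intr.flush_channel_tlb != NULL) {
	g->ops.gr.intr.flush_channel_tlb(g);
}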
JIRA NVGPU-3214
Change-Id: I2d259bf52db967273030680f50065af94a17f417
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109274
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove gr_priv.h includes from files outside the gr unit.
Add a HAL function get_no_of_sm in gr.init. This helps
avoid dereferencing g->gr->config in a couple of files and
avoids including gr_priv.h in those files.
Replace nvgpu_gr_config_get_no_of_sm() calls with
g->ops.gr.init.get_no_of_sm() in files outside the gr unit.
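A minimal sketch of such a call site; the local variable is illustrative:
/* Outside the gr unit: query the SM count through the HAL instead of
 * dereferencing g->gr->config. */
u32 no_of_sm = g->ops.gr.init.get_no_of_sm(g);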
Jira NVGPU-3218
Change-Id: I435bb233f70986e31fbfcb900ada3b3bda0bc787
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109182
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently max_css_buffer_size is incorrectly stored in struct nvgpu_gr.
Add a new HAL g->ops.css.get_max_buffer_size() to get the size and
remove the variable from struct nvgpu_gr.
Jira NVGPU-3125
Change-Id: If78fd86559526b84031051e281a98327a46fc11d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2105652
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
cyclestats_snapshot data and its lock are currently stored in struct nvgpu_gr.
The use case itself is not specific to the GR engine; in general it applies
to other units outside of GR too.
Hence it makes sense to move both the data and the lock to struct gk20a
instead of keeping them in struct nvgpu_gr.
Update all cyclestats_snapshot code to refer to the data/lock from struct gk20a.
Remove the gr_priv.h header include from cyclestats_snapshot.c.
Some of the functions were mistakenly declared in gr_gk20a.h;
move them to cyclestats_snapshot.h and rename them to the nvgpu_css_*() form.
Jira NVGPU-1103
Change-Id: I3fb32fe96f0ca6613f4640c8bd227b9e0e02dca3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2104848
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The gr_gk20a.ctx_vars struct currently stores sizes for golden_image, zcull,
pm_ctxsw, and the gfxp preemption buffer,
but these sizes should really be owned by the respective units and should
be assigned to those units as soon as they are queried from FECS.
Add a new structure to nvgpu_gr_falcon to hold the sizes that will be queried
from FECS:
struct nvgpu_gr_falcon_query_sizes {
	u32 golden_image_size;
	u32 pm_ctxsw_image_size;
	u32 preempt_image_size;
	u32 zcull_image_size;
};
The gr.falcon unit now queries the sizes from FECS and fills this structure.
The gr.falcon unit also exposes the following APIs to query these sizes:
u32 nvgpu_gr_falcon_get_golden_image_size(struct nvgpu_gr_falcon *falcon);
u32 nvgpu_gr_falcon_get_pm_ctxsw_image_size(struct nvgpu_gr_falcon *falcon);
u32 nvgpu_gr_falcon_get_preempt_image_size(struct nvgpu_gr_falcon *falcon);
u32 nvgpu_gr_falcon_get_zcull_image_size(struct nvgpu_gr_falcon *falcon);
The gr.gr unit now calls into the gr.falcon unit to initialize the sizes, and
then uses the APIs above to set the sizes in the respective units.
vGPU will likewise fill struct nvgpu_gr_falcon_query_sizes with all the sizes,
and the same APIs are then used to set the sizes in the respective units.
This means the size variables in the gr_gk20a.ctx_vars struct are no longer
referenced. Delete them.
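For illustration, each getter can be a thin accessor over the embedded query-sizes struct; a minimal sketch, assuming struct nvgpu_gr_falcon embeds it as a member (called sizes here):
u32 nvgpu_gr_falcon_get_golden_image_size(struct nvgpu_gr_falcon *falcon)
{
	return falcon->sizes.golden_image_size;
}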
Jira NVGPU-3112
Change-Id: I8b8e64ee0840c3bdefabc8ee739e53a30791f2b3
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2103478
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
struct gr_gk20a defines a boolean flag golden_image_initialized to
indicate whether the golden image is initialized.
common.gr.obj_ctx also added a flag of its own to check whether the
golden image is ready.
Add a new API nvgpu_gr_obj_ctx_is_golden_image_ready() in the
common.gr.obj_ctx unit to get the status of the golden image.
Use this new API everywhere to check whether the golden image is ready.
Remove g->gr.ctx_vars.golden_image_initialized.
Also remove ctx_mutex from struct gr_gk20a.
Add a new flag golden_image_initialized to struct nvgpu_pmu_pg and set it
when the golden image is initialized. This is needed to avoid a circular
dependency between GR and PMU.
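A minimal sketch of the new query; the parameter type and the internal flag name are assumptions:
bool nvgpu_gr_obj_ctx_is_golden_image_ready(
		struct nvgpu_gr_obj_ctx_golden_image *golden_image)
{
	return golden_image->ready;
}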
Jira NVGPU-3112
Change-Id: Id391294cede6424e15a9a9de29c40d013b509534
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Debug boolean flags force_preemption_gfxp and force_preemption_cilp are
currently stored in the gr_gk20a.ctx_vars struct.
These flags are logically a property of the gr.ctx unit since they indicate
whether each context should be forced to GFXP/CILP preemption mode by
default.
Move these flags to struct nvgpu_gr_ctx_desc and remove them from
gr_gk20a.ctx_vars.
Expose the following APIs from the gr.ctx unit to check whether the flags
are set (sketched below):
nvgpu_gr_ctx_desc_force_preemption_gfxp()
nvgpu_gr_ctx_desc_force_preemption_cilp()
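A minimal sketch of the two checks, assuming the flags become same-named members of struct nvgpu_gr_ctx_desc:
bool nvgpu_gr_ctx_desc_force_preemption_gfxp(struct nvgpu_gr_ctx_desc *desc)
{
	return desc->force_preemption_gfxp;
}

bool nvgpu_gr_ctx_desc_force_preemption_cilp(struct nvgpu_gr_ctx_desc *desc)
{
	return desc->force_preemption_cilp;
}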
Move the debugfs creation code for the corresponding debugfs nodes to
gr_gk20a_debugfs_init() and change the debugfs type from "u32" to "file".
The gr.gr_ctx_desc struct is created only during the first poweron;
return an error if this struct is not available.
Remove unnecessary initialization of these variables from platform
specific probe functions.
Jira NVGPU-3112
Change-Id: I8b2de27f0c71dd2ea5abcf94221c2e15c80073ea
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2099398
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove an unused struct from gr_gk20a.h.
Change the allocation of struct gr_gk20a from static to dynamic,
and update all the files affected by that change.
Call the gr allocation from the corresponding init_support functions, which
are part of the probe functions:
nvgpu_pci_init_support in pci.c
vgpu_init_support in vgpu_linux.c
gk20a_init_support in module.c
Call the gr free before the gk20a free call in nvgpu_free_gk20a.
Rename struct gr_gk20a to struct nvgpu_gr.
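For illustration, a minimal sketch of the allocation/free using the usual nvgpu kmem helpers; the error handling shown is an assumption:
/* In the init_support path: allocate struct nvgpu_gr dynamically. */
g->gr = nvgpu_kzalloc(g, sizeof(*g->gr));
if (g->gr == NULL) {
	return -ENOMEM;
}

/* In nvgpu_free_gk20a(), before freeing gk20a itself. */
nvgpu_kfree(g, g->gr);
g->gr = NULL;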
JIRA NVGPU-3132
Change-Id: Ief5e664521f141c7378c4044ed0df5f03ba06fca
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095798
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move zcull size initialization to the hal.gr.zcull unit.
This removes the zcull dependency on the falcon unit.
Add a new variable zcull_image_size to the gr_gk20a.ctx_vars struct.
Pass the size to nvgpu_gr_zcull_init()/vgpu_gr_init_gr_zcull() as a
parameter to initialize the zcull info.
Jira NVGPU-3112
Change-Id: I54d966073dad658b4aad3a529f44c0478208b10c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2098507
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use the APIs nvgpu_gr_obj_ctx_get/set_golden_image_size() exposed by the
gr.obj_ctx unit to get/set the size of the golden image.
Call nvgpu_gr_obj_ctx_init() from vgpu_gr_init_gr_setup_sw() to
initialize the golden image size in the gr.obj_ctx unit on vGPU as well.
Move the g->ops.gr.falcon.init_ctx_state() call earlier in
vgpu_gr_init_gr_setup_sw() so that the gr.ctx_vars struct is prepared
before its fields are accessed during the rest of GR initialization.
Jira NVGPU-3112
Change-Id: Ie827ad6f30cc3d931519a1f9a709861d26f8da26
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2096162
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new API nvgpu_gr_hwpm_map_get_size() in the gr.hwpm_map unit to get
the size of the hwpm_map.
Use this API to get the size and allocate each pm_ctx.
Move the nvgpu_gr_hwpm_map_init() call to the gr.gr unit in gr_init_setup_sw()
instead of calling it from the gr.falcon unit.
Add nvgpu_gr_hwpm_map_init() to the vGPU initialization to initialize the
hwpm_map size on vGPU.
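A hedged sketch of consuming the new size API when allocating a pm_ctx buffer; the hwpm_map pointer location, the gr_ctx layout, and the DMA helper used here are assumptions:
/* Size the pm_ctx buffer from the hwpm_map unit rather than from a
 * size cached in gr state. */
u32 size = nvgpu_gr_hwpm_map_get_size(g->gr->hwpm_map);
int err = nvgpu_dma_alloc_sys(g, size, &gr_ctx->pm_ctx.mem);
if (err != 0) {
	return err;
}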
Jira NVGPU-3112
Change-Id: Ifc669dcc9ecae273cea6978f5639f312cd451019
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2096160
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a class unit under hal and move all valid-class-check related
functionality to this unit. Move all class defines from gr to a new header,
include/nvgpu/class.h.
Move the following HALs from gr to the newly created class unit:
bool (*is_valid_class)(struct gk20a *g, u32 class_num); -->
bool (*is_valid)(u32 class_num);
bool (*is_valid_gfx_class)(struct gk20a *g, u32 class_num); -->
bool (*is_valid_gfx)(u32 class_num);
bool (*is_valid_compute_class)(struct gk20a *g, u32 class_num); -->
bool (*is_valid_compute)(u32 class_num);
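A minimal call-site sketch; the ops-group name used here (gpu_class) is an assumption, since it is not spelled out in this message:
/* Reject object allocation for unknown classes via the class unit HAL. */
if (!g->ops.gpu_class.is_valid(class_num)) {
	nvgpu_err(g, "invalid class 0x%x", class_num);
	return -EINVAL;
}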
JIRA NVGPU-3109
Change-Id: I01123e9b984613d4bddb2d8cf23d63410e212408
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095542
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
a) free_channel_ctx_header is used to free the channel's underlying subctx
and belongs in the hal.channel unit instead of fifo. It is moved there
and the HAL op is renamed to free_ctx_header. The function
gv11b_free_subctx_header is moved to the channel_gv11b.* files and also
renamed to gv11b_channel_free_subctx_header.
b) ch_abort_clean_up is moved to the hal.channel unit.
c) channel_resume and channel_suspend are used to resume and suspend all
the serviceable channels. They belong in the hal.channel unit and are
moved there from the hal.fifo unit.
The HAL ops channel_resume and channel_suspend are renamed to
resume_all_serviceable_ch and suspend_all_serviceable_ch respectively.
gk20a_channel_resume and gk20a_channel_suspend are also renamed to
nvgpu_channel_resume_all_serviceable_ch and
nvgpu_channel_suspend_all_serviceable_ch respectively.
d) The set_error_notifier HAL op belongs in hal.channel and is moved
accordingly.
Jira NVGPU-2978
Change-Id: Icb52245cacba3004e2fd32519029a1acff60c23c
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The g->ops.gr.commit_inst() HAL is used to commit the gr context to the engine.
There is nothing h/w specific in the HAL implementations anymore, and the
sequence can be unified by checking for subcontext feature support.
Remove gr_gv11b_commit_inst() and gr_gk20a_commit_inst() and unify
the sequence in the nvgpu_gr_obj_ctx_commit_inst() API in the common.gr.obj_ctx
unit. Use this API instead of the HAL.
The channel subcontext is now directly allocated in gk20a_alloc_obj_ctx().
vGPU code will directly call the vGPU implementation vgpu_gr_commit_inst().
Delete the HAL APIs since they are no longer needed.
Jira NVGPU-1887
Change-Id: Iae1f6be4ab52e3e8628f979f477a300e65c92200
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090497
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the following functionality from gr to the gr falcon common unit:
gr_gk20a_init_ctxsw -> nvgpu_gr_falcon_init_ctxsw
gr_gk20a_init_ctx_state -> nvgpu_gr_falcon_init_ctx_state
gk20a_init_gr_bind_fecs_elpg -> nvgpu_gr_falcon_bind_fecs_elpg
Replaced the code in gr_gk20a.c with calls to the corresponding gr falcon
common functions and moved all relevant code to the gr falcon unit.
Moved the following gr op from gr to gr falcon:
int (*init_ctx_state)(struct gk20a *g);
Moved functionality from gr to the relevant gr falcon HALs:
gr_gk20a_init_ctx_state -> gm20b_gr_falcon_init_ctx_state
gr_gp10b_init_ctx_state -> gp10b_gr_falcon_init_ctx_state
JIRA NVGPU-1881
Change-Id: I027e1972a7747275311df99679235804dc0e16fe
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084391
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
These HALs are used to initialize and set preemption modes:
g->ops.gr.init_ctxsw_preemption_mode()
g->ops.gr.set_ctxsw_preemption_mode()
g->ops.gr.update_ctxsw_preemption_mode()
They are all h/w independent except for the functional support for
GFXP/CILP preemption, which is only present on gp10b+ chips.
Add a characteristics flag NVGPU_SUPPORT_PREEMPTION_GFXP for these
preemption modes and set this flag for gp10b+ chips.
Use this flag to unify all of the above HALs into the common functions
below (a sketch of the flag check appears at the end of this description):
nvgpu_gr_obj_ctx_init_ctxsw_preemption_mode()
nvgpu_gr_obj_ctx_set_ctxsw_preemption_mode()
nvgpu_gr_obj_ctx_update_ctxsw_preemption_mode()
vGPU specific code also directly calls the vGPU specific APIs below:
vgpu_gr_init_ctxsw_preemption_mode()
vgpu_gr_set_ctxsw_preemption_mode()
g->ops.gr.update_ctxsw_preemption_mode() is not needed for vGPU since
it is handled by vserver.
The above g->ops.gr.*_ctxsw_preemption_mode() HALs are no longer required,
hence delete them.
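A minimal sketch of how the unified common code can gate the chip-specific path on the new flag, assuming the standard nvgpu_is_enabled() helper; the helper called inside the branch is hypothetical:
/* Unified common function: the GFXP/CILP-specific programming is
 * reached only on chips that advertise support via the new flag. */
if (nvgpu_is_enabled(g, NVGPU_SUPPORT_PREEMPTION_GFXP)) {
	err = set_gfxp_cilp_buffers(g, gr_ctx);	/* hypothetical helper */
	if (err != 0) {
		return err;
	}
}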
Jira NVGPU-1887
Change-Id: I9b3164bcf01e5e3c27e52369c9364e0ee23a9662
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2088507
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following HALs are used to get preemption buffer sizes:
g->ops.gr.get_ctx_spill_size()
g->ops.gr.get_ctx_pagepool_size()
g->ops.gr.get_ctx_betacb_size()
g->ops.gr.get_ctx_attrib_cb_size()
Move them to the hal.gr.init unit.
Copy over the corresponding gp10b/gv11b definitions.
Remove the pagepool and attrib_cb size HALs from gv11b since gv11b can
re-use the gp10b HALs.
Add spill size and betacb size HALs for gv100 and tu104 too, since the
register values are different on those chips.
Remove the g->ops.gr.init_gfxp_rtv_cb() HAL and replace it with
g->ops.gr.init.get_gfxp_rtv_cb_size(), which returns the RTV CB size.
Jira NVGPU-2961
Change-Id: I3f2f973c120dbfd22067366f87d06b5c9162defb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084747
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove use of struct gk20a and struct gr_gk20a from common.gr.config
hal functions.
This requires a reference to struct gk20a *g for many nvgpu_* ops. Also,
nvgpu_gr_config is updated to include sm_count_per_tpc.
JIRA NVGPU-1884
Change-Id: I874c2b3970d97ef3940b74d8ef121a7261061670
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075681
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the g->ops.gr.get_global_ctx_cb_buffer_size() and
g->ops.gr.get_global_ctx_pagepool_buffer_size() HALs to the hal.gr.init
unit.
Move the corresponding HAL definitions to the hal.gr.init unit as well.
Jira NVGPU-2961
Change-Id: Ifff3e2073f6d9bca5b37244f7e107bad885e7ca7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077215
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new unit vgpu subctx under common/vgpu/gr to manage the
GR subcontext. This unit provides interfaces to allocate
and free the subctx header.
Rename the vgpu_subctx_gv11b* files to subctx_vgpu* files,
renaming the functions vgpu_gv11b_alloc_subctx_header to
vgpu_alloc_subctx_header and vgpu_gv11b_free_subctx_header
to vgpu_free_subctx_header. These are called only if
NVGPU_SUPPORT_TSG_SUBCONTEXTS is enabled or the free_channel_ctx_header
HAL op is set, which is set only for gv11b for virtualization.
Also assign the fifo HAL op free_channel_ctx_header to
vgpu_channel_free_ctx_header for vgpu gv11b, which in turn
calls vgpu_free_subctx_header.
Jira GVSCI-334
Change-Id: Ib46e7be911632eba01cd21881077683b795f8bad
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075872
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the following variables from struct gr_gk20a:
u32 bundle_cb_default_size;
u32 min_gpm_fifo_depth;
u32 bundle_cb_token_limit;
u32 attrib_cb_default_size;
u32 alpha_cb_default_size;
u32 attrib_cb_gfxp_default_size;
u32 attrib_cb_gfxp_size;
u32 attrib_cb_size;
u32 alpha_cb_size;
Instead, add the following HALs in the hal.gr.init unit to get all of
the above sizes:
u32 (*get_bundle_cb_default_size)(struct gk20a *g);
u32 (*get_min_gpm_fifo_depth)(struct gk20a *g);
u32 (*get_bundle_cb_token_limit)(struct gk20a *g);
u32 (*get_attrib_cb_default_size)(struct gk20a *g);
u32 (*get_alpha_cb_default_size)(struct gk20a *g);
u32 (*get_attrib_cb_gfxp_default_size)(struct gk20a *g);
u32 (*get_attrib_cb_gfxp_size)(struct gk20a *g);
u32 (*get_attrib_cb_size)(struct gk20a *g, u32 tpc_count);
u32 (*get_alpha_cb_size)(struct gk20a *g, u32 tpc_count);
u32 (*get_global_attr_cb_size)(struct gk20a *g, u32 max_tpc);
Define these HALs for all of the gm20b/gp10b/gv11b/gv100/tu104 chips.
Also add hal.gr.init support for the gv100 chip.
Remove all accesses to these variables in struct gr_gk20a and start using
the newly defined HALs (see the sketch below).
Remove the following HALs that initialized the sizes, since they are no
longer required:
g->ops.gr.bundle_cb_defaults(g);
g->ops.gr.cb_size_default(g);
g->ops.gr.calc_global_ctx_buffer_size(g);
Also remove the definitions of the above HALs from all the chip files.
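For illustration, a sketch of a call site that previously read cached fields from struct gr_gk20a and now queries the new HALs; it assumes the HALs are reachable as g->ops.gr.init.* and the local variable names are illustrative:
/* Before (sketch): sizes were read from fields cached in struct gr_gk20a,
 * e.g. gr->attrib_cb_default_size.
 * After: sizes come from the per-chip hal.gr.init HALs. */
u32 attrib_cb_default_size = g->ops.gr.init.get_attrib_cb_default_size(g);
u32 alpha_cb_default_size = g->ops.gr.init.get_alpha_cb_default_size(g);
u32 attrib_cb_size = g->ops.gr.init.get_attrib_cb_size(g, tpc_count);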
Jira NVGPU-2961
Change-Id: I130b578ababf22328d68fe19df581e46aebeccc9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077214
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved cbc related code and data from gr to the cbc unit.
LTC and cbc related data is moved out of the gr header:
1. LTC related data moved from gr_gk20a -> gk20a; it
will eventually be moved to the ltc unit:
u32 slices_per_ltc;
u32 cacheline_size;
2. cbc data moved from gr_gk20a -> nvgpu_cbc
u32 compbit_backing_size;
u32 comptags_per_cacheline;
u32 gobs_per_comptagline_per_slice;
u32 max_comptag_lines;
struct gk20a_comptag_allocator comp_tags;
struct compbit_store_desc compbit_store;
3. The following config data moved from gr_gk20a -> gk20a:
u32 comptag_mem_deduct;
u32 max_comptag_mem;
These are part of the initial config which should be available
during nvgpu_probe, so they cannot be moved to nvgpu_cbc.
Modified the code to use the above updated data structures.
Removed the cbc init sequence from gr and added it to the
common cbc unit. This sequence is called
from the common nvgpu init code.
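As an illustration, accesses change roughly as follows; the exact pointer names (e.g. a g->cbc member holding struct nvgpu_cbc) are assumptions:
/* Before (sketch): CBC and LTC derived data lived in struct gr_gk20a,
 * e.g. gr->max_comptag_lines, gr->slices_per_ltc.
 * After: CBC data lives in struct nvgpu_cbc, LTC data in struct gk20a. */
u32 lines = g->cbc->max_comptag_lines;
u32 slices = g->slices_per_ltc;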
JIRA NVGPU-2896
JIRA NVGPU-2897
Change-Id: I1a1b1e73b75396d61de684f413ebc551a1202a57
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2033286
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a Compression Bit Cache (CBC) unit to keep the comptags
cache related functionality in one place. In this patch,
the following gpu ops are moved from ltc to cbc and renamed accordingly:
void (*init)(struct gk20a *g, struct gr_gk20a *gr);
u64 (*get_base_divisor)(struct gk20a *g);
int (*alloc_comptags)(struct gk20a *g, struct gr_gk20a *gr);
int (*ctrl)(struct gk20a *g, enum gk20a_cbc_op op,
u32 min, u32 max);
u32 (*fix_config)(struct gk20a *g, int base);
To avoid ambiguity, the function pointer init_comptags is
renamed to alloc_comptags.
The following function is moved from ltc.h to cbc.h:
nvgpu_ltc_alloc_cbc -> nvgpu_cbc_alloc
The file that implements the nvgpu_cbc_alloc
functionality is also renamed from
os/ltc.c -> os/linux-cbc.c
JIRA NVGPU-2897
Change-Id: Ide32a98567e9a3f0a784d62221a6f484f8343e53
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030194
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 17.7 requires the return value of all functions to be used.
The fix is either to use the return value or to change the function to return
void. This patch ensures that WARN and WARN_ON always return void, and
introduces a new nvgpu_do_assert construct to trigger the equivalent
of WARN_ON(true) so that the stack can be dumped (depending on OS support).
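A hedged usage sketch; the condition and variables are illustrative, and only the behavior stated above (void-returning WARN/WARN_ON and an nvgpu_do_assert equivalent to WARN_ON(true)) is taken from this message:
/* No return value to consume: the check and the assert are explicit. */
if (fault_count > max_fault_count) {
	nvgpu_do_assert();
}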
JIRA NVGPU-677
Change-Id: Ie2312c5588ceb5b1db825d15a096149b63b69af4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2018706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- rename vgpu_gr_gm20b_init_cyclestats() to vgpu_gr_init_cyclestats(),
moving it to gr_vgpu.c, common to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctxsw_preemption_mode() to
vgpu_gr_init_ctxsw_preemption_mode(), moving it to ctx_vgpu.c, common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_ctxsw_preemption_mode() to
vgpu_gr_set_ctxsw_preemption_mode(), moving it to ctx_vgpu.c, common
to all vgpu chips.
- rename vgpu_gr_gp10b_set_preemption_mode() to
vgpu_gr_set_preemption_mode(), moving it to ctx_vgpu.c, common
to all vgpu chips.
- rename vgpu_gr_gp10b_init_ctx_state() to vgpu_gr_init_ctx_state(),
moving it to ctx_vgpu.c, common to all vgpu chips.
- merge vgpu_gr_gv11b_commit_inst() into vgpu_gr_commit_inst(),
executing the alloc/free subctx header code only if the chip supports
subctx.
- remove the inclusion of hw header files from vgpu gr code by
introducing hal ops for the following:
- alloc_global_ctx_buffers:
- hal op for getting the global ctx cb buffer size
- hal op for getting the global ctx pagepool buffer size
- set_ctxsw_preemption_mode:
- hal op for getting the ctx spill size
- hal op for getting the ctx pagepool size
- hal op for getting the ctx betacb size
- hal op for getting the ctx attrib cb size
These chip specific function definitions are currently implemented in
chip specific gr files and will need to be moved to hal units.
Also use these hal ops for the corresponding functions for native. This
makes gr_gv11b_set_ctxsw_preemption_mode() redundant; use
gr_gp10b_set_ctxsw_preemption_mode() for gv11b as well.
Jira GVSCI-334
Change-Id: I60be86f932e555176a972c125e3ea31270e6cba7
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new directory gr under the common vgpu path, moving all
common vgpu gr files into that directory.
Move the vgpu gr ctx implementations to a new file ctx_vgpu.c
and create a corresponding header file.
Also modify the parameters of some functions in ctx_vgpu.c so they do
not access channel/tsg/fifo constructs.
Jira GVSCI-334
Change-Id: I3498b10db62194df2871eb81fc5c5cb04b42abc3
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013350
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>