In order to maintain separate mappings of GR TSG and global context
buffers for different subcontexts, we need to separate the memory
struct and the mapping struct for the buffers. This patch moves the
mappings of all GR ctx buffers to a new structure,
nvgpu_gr_ctx_mappings, which will be instantiated per subcontext in
upcoming patches.
Summary of changes:
1. Various context buffers were previously allocated and mapped
separately. All TSG context buffers are now stored in the
gr_ctx->mem[] array, since allocation and mapping are unified
for them.
2. Mapping/unmapping and querying the GPU VA of the context
buffers are now handled in the ctx_mappings unit. The structure
nvgpu_gr_ctx_mappings in nvgpu_gr_ctx holds the maps (a sketch
follows below). This struct is instantiated on ALLOC_OBJ_CTX and
deleted on free_gr_ctx.
3. Introduce mapping flags for TSG and global context buffers so
that different buffers can be mapped with different caching
attributes (see the flags sketch after this list). Map all
buffers as cacheable except the PRIV_ACCESS_MAP,
RTV_CIRCULAR_BUFFER, FECS_TRACE, GR ctx and PATCH ctx buffers.
Map all buffers as privileged.
4. Wherever a VM or GPU VA was passed into the obj_ctx allocation
functions, it is now replaced by nvgpu_gr_ctx_mappings.
5. The free_gr_ctx API no longer needs to accept the VM, since the
mappings struct holds the VM and is kept in gr_ctx.
6. Move the preemption buffer allocation logic out of
nvgpu_gr_obj_ctx_set_graphics_preemption_mode.
7. Update the set_preemption_mode and
gr_gk20a_update_hwpm_ctxsw_mode functions to ensure the buffers
are allocated and mapped.
8. Keep the unit tests and documentation updated.
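A minimal sketch of the mapping flags from item 3; the identifiers
below are illustrative assumptions, not the actual nvgpu names:

    #include <nvgpu/types.h>
    #include <nvgpu/bitops.h>

    /* Assumed flag names -- not the actual nvgpu identifiers. */
    #define NVGPU_GR_CTX_MAP_CACHEABLE  BIT32(0)
    #define NVGPU_GR_CTX_MAP_PRIVILEGED BIT32(1)

    /*
     * Every buffer maps privileged; most also map cacheable, except
     * those called out in item 3 (PRIV_ACCESS_MAP, RTV_CIRCULAR_BUFFER,
     * FECS_TRACE, GR ctx and PATCH ctx).
     */
    static u32 gr_ctx_map_flags(bool cacheable)
    {
            u32 flags = NVGPU_GR_CTX_MAP_PRIVILEGED;

            if (cacheable) {
                    flags |= NVGPU_GR_CTX_MAP_CACHEABLE;
            }
            return flags;
    }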
With these changes there is a clear segregation of the allocation
and mapping of GR context buffers. This will simplify the upcoming
change to add multiple address space support: with multiple address
spaces in a TSG, subcontexts created after the first one only need
to map the buffers.
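For reference, a rough sketch of the resulting split; field names
and buffer counts are illustrative assumptions, not the actual nvgpu
definitions:

    #include <nvgpu/types.h>

    #define NVGPU_GR_CTX_BUF_COUNT        8U  /* assumed count */
    #define NVGPU_GR_GLOBAL_CTX_BUF_COUNT 8U  /* assumed count */

    /*
     * The nvgpu_mem objects stay in gr_ctx->mem[]; the per-address-space
     * GPU VAs live here, so a later subcontext can reuse the allocations
     * and only create new mappings in its own VM.
     */
    struct nvgpu_gr_ctx_mappings {
            struct gk20a *g;      /* the device */
            struct vm_gk20a *vm;  /* address space the buffers map into */
            u64 ctx_buffer_va[NVGPU_GR_CTX_BUF_COUNT];
            u64 global_ctx_buffer_va[NVGPU_GR_GLOBAL_CTX_BUF_COUNT];
    };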
Bug 3677982
Change-Id: I3cd5f1311dd85aad1cf547da8fa45293fb7a7cb3
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2712222
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
To achieve permanent fault coverage, the CTAs launched by each
kernel in the mission and redundant contexts must execute on
different hardware resources. This feature modifies the software so
that the virtual SM id to TPC mapping differs across the mission and
redundant contexts. The virtual SM identifier to TPC mapping is done
by nvgpu when setting up the patch context.
The recommendation for the redundant setting is to offset the
assignment by one TPC, not by one GPC. This ensures both GPC and TPC
diversity; SM and quadrant diversity happen naturally. For kernels
with few CTAs, the diversity is guaranteed to be 100%. In the case
of completely random CTA allocation, e.g. a large number of CTAs in
the waiting queue, the diversity is 1 - 1/#SM: 1 - 1/8 = 87.5% on
GV11B's 8 SMs and 1 - 1/48 = 97.9% on TU104's 48 SMs.
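A sketch of the offset-by-one-TPC idea (not the actual nvgpu code;
the names and the mission mapping below are assumptions):

    #include <nvgpu/types.h>

    /*
     * For the redundant context, shift each virtual SM id's TPC
     * assignment by one TPC relative to the mission context, so the
     * same CTA lands on different hardware (wrapping at the last TPC).
     */
    static u32 redundant_tpc_for_vsmid(u32 vsmid, u32 sm_per_tpc,
                                       u32 tpc_count)
    {
            u32 mission_tpc = vsmid / sm_per_tpc; /* assumed mission map */

            return (mission_tpc + 1U) % tpc_count; /* offset by one TPC */
    }

With 2 SMs per TPC, for example, virtual SM ids 0 and 1 would move
from TPC 0 to TPC 1 in the redundant context.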
Added an NvGpu CFLAG, CONFIG_NVGPU_SM_DIVERSITY, to enable/disable
the SM diversity support.
This support is only enabled on gv11b and tu104 QNX non-safety
builds.
JIRA NVGPU-4685
Change-Id: I8e3eaa72d8cf7aff97f61e4c2abd10b2afe0fe8b
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2268026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Removed an unused struct from gr_gk20a.h.
Changed the allocation of struct gr_gk20a from static to dynamic and
updated all files affected by that change.
Call the gr allocation from the corresponding init_support functions,
which are part of the probe functions:
nvgpu_pci_init_support in pci.c
vgpu_init_support in vgpu_linux.c
gk20a_init_support in module.c
Call the gr free before the gk20a free call in nvgpu_free_gk20a.
Rename struct gr_gk20a to struct nvgpu_gr.
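A simplified sketch of the resulting flow (the helper names follow
the description above; error handling details are assumptions):

    #include <nvgpu/kmem.h>

    /* Called from the init_support functions listed above. */
    int nvgpu_gr_alloc(struct gk20a *g)
    {
            /* struct nvgpu_gr used to be embedded statically in gk20a */
            g->gr = nvgpu_kzalloc(g, sizeof(*g->gr));
            if (g->gr == NULL) {
                    return -ENOMEM;
            }
            return 0;
    }

    /* Called just before the gk20a free in nvgpu_free_gk20a(). */
    void nvgpu_gr_free(struct gk20a *g)
    {
            nvgpu_kfree(g, g->gr);
            g->gr = NULL;
    }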
JIRA NVGPU-3132
Change-Id: Ief5e664521f141c7378c4044ed0df5f03ba06fca
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095798
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Use the nvgpu_gr_obj_ctx_get/set_golden_image_size() APIs exposed by
the gr.obj_ctx unit to get/set the size of the golden image.
Call nvgpu_gr_obj_ctx_init() from vgpu_gr_init_gr_setup_sw() to
initialize the golden image size in the gr.obj_ctx unit on vGPU as
well.
Move the g->ops.gr.falcon.init_ctx_state() call earlier in
vgpu_gr_init_gr_setup_sw() so that the gr.ctx_vars struct is prepared
before its fields are accessed during the rest of GR initialization.
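A condensed sketch of the reordered vGPU setup (signatures are
assumptions based on the description; error handling and the
remaining setup steps are omitted):

    static int vgpu_gr_init_gr_setup_sw(struct gk20a *g)
    {
            int err;

            /* Moved earlier: fills gr.ctx_vars before anything reads it. */
            err = g->ops.gr.falcon.init_ctx_state(g);
            if (err != 0) {
                    return err;
            }

            /*
             * Initializes the golden image size in the gr.obj_ctx unit so
             * that nvgpu_gr_obj_ctx_get/set_golden_image_size() also work
             * on vGPU.
             */
            return nvgpu_gr_obj_ctx_init(g);
    }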
Jira NVGPU-3112
Change-Id: Ie827ad6f30cc3d931519a1f9a709861d26f8da26
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2096162
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new API nvgpu_gr_hwpm_map_get_size() in the gr.hwpm_map unit
to get the size of the hwpm_map.
Use this API to get the size when allocating each pm_ctx.
Move the nvgpu_gr_hwpm_map_init() call to the gr.gr unit in
gr_init_setup_sw() instead of calling it from the gr.falcon unit.
Add nvgpu_gr_hwpm_map_init() to the vGPU initialization to initialize
the hwpm_map size on vGPU.
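A minimal sketch of the pm_ctx allocation using the new API (the
field names g->gr->hwpm_map and pm_ctx.mem are assumptions):

    #include <nvgpu/dma.h>

    static int gr_alloc_pm_ctx(struct gk20a *g, struct nvgpu_gr_ctx *gr_ctx)
    {
            /* The size now comes from the gr.hwpm_map unit, not gr.falcon. */
            u32 size = nvgpu_gr_hwpm_map_get_size(g->gr->hwpm_map);

            return nvgpu_dma_alloc_sys(g, size, &gr_ctx->pm_ctx.mem);
    }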
Jira NVGPU-3112
Change-Id: Ifc669dcc9ecae273cea6978f5639f312cd451019
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2096160
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Create a new directory gr under the common vgpu path and move all
vgpu common gr files into that directory.
Move the vgpu gr ctx implementations to a new file, ctx_vgpu.c, and
create a corresponding header file.
Also modify the parameters of some functions in ctx_vgpu.c so that
they do not access channel/tsg/fifo constructs (a hypothetical
example follows below).
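A hypothetical before/after illustrating the parameter change;
vgpu_gr_free_gr_ctx is used only as an example and may not match the
actual functions touched:

    /* before: the ctx code dug its data out of channel/tsg structs */
    void vgpu_gr_free_gr_ctx(struct gk20a *g, struct tsg_gk20a *tsg);

    /* after: the caller passes the VM and gr_ctx directly */
    void vgpu_gr_free_gr_ctx(struct gk20a *g, struct vm_gk20a *vm,
                             struct nvgpu_gr_ctx *gr_ctx);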
Jira GVSCI-334
Change-Id: I3498b10db62194df2871eb81fc5c5cb04b42abc3
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013350
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>