gpu: nvgpu: cpu access for ctxheader

Before updating the ctxheader in gr_gk20a_ctx_patch_smpc(),
map it for CPU access with nvgpu_mem_begin().
After updating the ctxheader, release the CPU
mapping with nvgpu_mem_end().

Reviewed the other uses of ctxheader; their
CPU access is already handled correctly.

Bug 200333285

Change-Id: I88ab0b040f95240673a4be55bcfe880a1440655b
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1564764
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: David Martinez Nieto <dmartineznie@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Author: seshendra Gadagottu
Date: 2017-09-20 12:20:56 -07:00
Committed-by: mobile promotions
Parent: 82ef5f7b3b
Commit: 1c0ea341cc


@@ -6586,12 +6586,22 @@ static int gr_gk20a_ctx_patch_smpc(struct gk20a *g,
 			ctxsw_prog_main_image_patch_count_o(),
 			ch_ctx->patch_ctx.data_count);
 		if (ctxheader->gpu_va) {
+			/*
+			 * Main context can be gr_ctx or pm_ctx.
+			 * CPU access for relevant ctx is taken
+			 * care of in the calling function
+			 * __gr_gk20a_exec_ctx_ops. Need to take
+			 * care of cpu access to ctxheader here.
+			 */
+			if (nvgpu_mem_begin(g, ctxheader))
+				return -ENOMEM;
 			nvgpu_mem_wr(g, ctxheader,
 				ctxsw_prog_main_image_patch_adr_lo_o(),
 				vaddr_lo);
 			nvgpu_mem_wr(g, ctxheader,
 				ctxsw_prog_main_image_patch_adr_hi_o(),
 				vaddr_hi);
+			nvgpu_mem_end(g, ctxheader);
 		} else {
 			nvgpu_mem_wr(g, mem,
 				ctxsw_prog_main_image_patch_adr_lo_o(),