From 3e5424bee390719fc5804eaf4baa8e1bc12f1b09 Mon Sep 17 00:00:00 2001
From: Divya
Date: Tue, 18 Apr 2023 11:16:07 +0000
Subject: [PATCH] gpu: nvgpu: gv11b: ap_compute fix

- During nvgpu_poweron, the PERFMON_INIT RPC and the ACR_INIT_WPR_REGION
  command are sent to the PMU from two different threads.
- Perfmon uses the RPC method, while ACR uses the CMD-MSG queue.
- Since the PMU thread and the poweron thread run in parallel, the PMU
  sequences acquired by both can end up with the same seq_id.
- For the perfmon RPC, nvgpu_pmu_seq_free_release() is called, followed
  by nvgpu_pmu_seq_release().
- This clears the sequence belonging to the next command.
- To resolve this, instead of calling nvgpu_pmu_seq_free_release(), just
  free the rpc-payload after receiving the perfmon ack and release the
  sequence afterwards.
- This ensures that the ACR command sent right after the perfmon RPC
  does not get the same seq_id and its sequence is not cleared.

Bug 4074021

Change-Id: Id9972cb719458062d8c7d9e226a25599026c052b
Signed-off-by: Divya
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2889840
Reviewed-by: svcacv
Reviewed-by: svc-mobile-coverity
Reviewed-by: svc-mobile-cert
Reviewed-by: Mahantesh Kumbar
GVS: Gerrit_Virtual_Submit
---
 drivers/gpu/nvgpu/common/pmu/ipc/pmu_msg.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/nvgpu/common/pmu/ipc/pmu_msg.c b/drivers/gpu/nvgpu/common/pmu/ipc/pmu_msg.c
index 39ec45d24..86e675599 100644
--- a/drivers/gpu/nvgpu/common/pmu/ipc/pmu_msg.c
+++ b/drivers/gpu/nvgpu/common/pmu/ipc/pmu_msg.c
@@ -641,11 +641,17 @@ void nvgpu_pmu_rpc_handler(struct gk20a *g, struct pmu_msg *msg,
 
 exit:
 	rpc_payload->complete = true;
 
-	/* free allocated memory and release the sequence */
+	/*
+	 * Free the allocated memory and set seq_free_status to
+	 * true to synchronize the memory free.
+	 */
 	if (rpc_payload->is_mem_free_set) {
 		seq = nvgpu_pmu_sequences_get_seq(pmu->sequences,
 			msg->hdr.seq_id);
-		nvgpu_pmu_seq_free_release(g, pmu->sequences, seq);
+		if (seq->seq_free_status == false) {
+			nvgpu_kfree(g, rpc_payload);
+			seq->seq_free_status = true;
+		}
 	}
 }