gpu: nvgpu: gk20a: check ctx valid bit

When determining the chid for the current context, first check
the ctx valid bit.

Bug 1485555

Change-Id: I6c3096d800a6cef38b656d525437a2c4f8b45774
Signed-off-by: Mayank Kaushik <mkaushik@nvidia.com>
Reviewed-on: http://git-master/r/496140
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Geoffrey Gerfin <ggerfin@nvidia.com>
Tested-by: Geoffrey Gerfin <ggerfin@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Author: Mayank Kaushik
Date: 2014-09-04 18:35:25 -07:00
Committed-by: Dan Willemsen
Parent: ed0c49a0b1
Commit: 545fadee0a


@@ -5224,6 +5224,7 @@ static int gk20a_gr_handle_notify_pending(struct gk20a *g,
 
 /* Used by sw interrupt thread to translate current ctx to chid.
  * For performance, we don't want to go through 128 channels every time.
+ * curr_ctx should be the value read from gr_fecs_current_ctx_r().
  * A small tlb is used here to cache translation */
 static int gk20a_gr_get_chid_from_ctx(struct gk20a *g, u32 curr_ctx)
 {
@@ -5232,6 +5233,13 @@ static int gk20a_gr_get_chid_from_ctx(struct gk20a *g, u32 curr_ctx)
 	u32 chid = -1;
 	u32 i;
 
+	/* when contexts are unloaded from GR, the valid bit is reset
+	 * but the instance pointer information remains intact. So the
+	 * valid bit must be checked to be absolutely certain that a
+	 * valid context is currently resident. */
+	if (!gr_fecs_current_ctx_valid_v(curr_ctx))
+		return -1;
+
 	spin_lock(&gr->ch_tlb_lock);
 
 	/* check cache first */