Commit Graph

594 Commits

Deepak Nibade
a3068cebc6 gpu: nvgpu: patch SMPC only for main context image
In __gr_gk20a_exec_ctx_ops(), we currently call gr_gk20a_ctx_patch_smpc()
even if the operations target the pm_ctx image. This is incorrect since the
call is only required for SMPC operations on the main context image.

Fix this by not calling gr_gk20a_ctx_patch_smpc() for the pm_ctx image, as
sketched below.
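
A minimal sketch of the guard (the surrounding variable and the argument
list of gr_gk20a_ctx_patch_smpc() are illustrative assumptions, not the
actual nvgpu code):

  /* Patch SMPC only when the operation targets the main context image. */
  if (!is_pm_ctx_image) {
          err = gr_gk20a_ctx_patch_smpc(/* ch, offset, value, ... */);
  }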

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: I5111fb0e6ea1f329750b42a37a98f5c006b47deb
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011095
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:42 -08:00
Deepak Nibade
5b2eb887d5 gpu: nvgpu: add gr/ctx and gr/subctx APIs to configure patch context
gr_gk20a_ctx_patch_smpc() updates the patch context count and mode by
directly calling g->ops.gr.ctxsw_prog HALs.

Move the configuration of the patch context to the gr/ctx and gr/subctx
units with the APIs below, and call them from gr_gk20a_ctx_patch_smpc()
(see the sketch after this list):
nvgpu_gr_ctx_reset_patch_count()
nvgpu_gr_ctx_set_patch_ctx()
nvgpu_gr_subctx_set_patch_ctx()
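
A hedged sketch of how gr_gk20a_ctx_patch_smpc() might now delegate to
these APIs (all argument lists are assumptions):

  /* Illustrative call flow; the real parameters differ in the driver. */
  nvgpu_gr_ctx_reset_patch_count(g, gr_ctx);
  nvgpu_gr_ctx_set_patch_ctx(g, gr_ctx);
  if (subctx != NULL) {
          nvgpu_gr_subctx_set_patch_ctx(g, subctx);
  }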

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: Ib1ccbc036aa0916e7bd0a002d16b74430a7e47c9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011094
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:38 -08:00
Deepak Nibade
fe27a7f934 gpu: nvgpu: add gr/ctx and gr/subctx APIs to set hwpm ctxsw mode
gr_gk20a_update_hwpm_ctxsw_mode() currently validates the incoming
hwpm mode, checks whether it is already set, and if not, sets the new
hwpm mode by calling g->ops.gr.ctxsw_prog HALs.

Instead of programming the hwpm mode in gr_gk20a.c, move the programming
to the gr/ctx and gr/subctx units by adding the APIs below (a sketch of
the resulting call order follows this description):
nvgpu_gr_ctx_prepare_hwpm_mode() - validate the incoming mode and
                                   check if it is already set
nvgpu_gr_ctx_set_hwpm_mode() - set the pm mode in the graphics context
nvgpu_gr_subctx_set_hwpm_mode() - set the pm mode in the subcontext

Add a gpu_va field to struct pm_ctx_desc to store the gpu_va to be
programmed into the context.

Rename NVGPU_DBG_HWPM_CTXSW_MODE_* to NVGPU_GR_CTX_HWPM_CTXSW_MODE_*
and move them to gr/ctx.h

Remove the HALs below since they are no longer used:
g->ops.gr.ctxsw_prog.set_pm_mode_no_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_ctxsw()
g->ops.gr.ctxsw_prog.set_pm_mode_stream_out_ctxsw()
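
A rough sketch of the resulting call order in
gr_gk20a_update_hwpm_ctxsw_mode() (the skip flag and all argument lists
are assumptions):

  err = nvgpu_gr_ctx_prepare_hwpm_mode(g, gr_ctx, mode, &skip_update);
  if (err == 0 && !skip_update) {
          /* Program the mode into the main context image. */
          nvgpu_gr_ctx_set_hwpm_mode(g, gr_ctx);
          if (subctx != NULL) {
                  /* Set the pm mode in the subcontext. */
                  nvgpu_gr_subctx_set_hwpm_mode(g, subctx);
          }
  }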

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: Id2a4d498182ec0e3586dc7265f73a25870ca2ef7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011093
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:34 -08:00
Deepak Nibade
dd12b9b320 gpu: nvgpu: add gr/ctx API to set smpc ctxsw mode
gr_gk20a_update_smpc_ctxsw_mode() currently sets the SMPC mode in the
context image directly by calling a g->ops.gr.ctxsw_prog HAL.

Add a new API, nvgpu_gr_ctx_set_smpc_mode(), in the gr/ctx unit to set the
SMPC mode and use it in gr_gk20a_update_smpc_ctxsw_mode(), as sketched below.
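
Illustrative use of the new API inside gr_gk20a_update_smpc_ctxsw_mode()
(the enable flag and the other parameters are assumptions):

  /* Replaces the direct g->ops.gr.ctxsw_prog call. */
  nvgpu_gr_ctx_set_smpc_mode(g, gr_ctx, enable);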

Jira NVGPU-1527

Change-Id: Ib9a74781d6bb988caffc2a79345be773fd4942e4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011092
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:25 -08:00
Deepak Nibade
2af1558d42 gpu: nvgpu: add gr/ctx API to init zcull in context
gr_gk20a_init_golden_ctx_image() currently initializes the zcull state
in the context image directly by calling a g->ops.gr.ctxsw_prog HAL.

Add a new API, nvgpu_gr_ctx_init_zcull(), in the gr/ctx unit to do this
initialization and use it in gr_gk20a_init_golden_ctx_image().

Jira NVGPU-1527

Change-Id: I8cf58168cbc9c01fdd663e1ade50b7804118ef01
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011091
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:21 -08:00
Deepak Nibade
bac95b36d8 gpu: nvgpu: move zcull context setup to gr/ctx and gr/subctx units
In gr_gk20a_ctx_zcull_setup(), we configure the context/subcontext with
zcull details.
This API currently does so directly by calling g->ops.gr.ctxsw_prog HALs.

Move all context/subcontext setup to the gr/ctx and gr/subctx units
respectively.
Define and use the new APIs below for this (see the sketch after this list):
gr/ctx : nvgpu_gr_ctx_zcull_setup()
gr/subctx : nvgpu_gr_subctx_zcull_setup()
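
A hedged sketch of the split inside gr_gk20a_ctx_zcull_setup()
(parameters are assumed):

  /* Program zcull state in the main context, then in the subcontext. */
  err = nvgpu_gr_ctx_zcull_setup(g, gr_ctx);
  if (err == 0 && subctx != NULL) {
          err = nvgpu_gr_subctx_zcull_setup(g, subctx, gr_ctx);
  }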

Jira NVGPU-1527
Jira NVGPU-1613

Change-Id: I1b7b16baea60ea45535c623b5b41351610ca433e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011090
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:16 -08:00
Deepak Nibade
319eca3498 gpu: nvgpu: move get_ctx_id API to gr/ctx unit
The API gr_gk20a_get_ctx_id() extracts the ID of the context and as such
belongs to the gr/ctx unit.
Move it to gr/ctx and rename it to nvgpu_gr_ctx_get_ctx_id().

All the bookkeeping for a valid ID is also done in the same API using the
ctx_id_valid flag in the gr/ctx unit.

Use the new API in gr_gp10b_set_cilp_preempt_pending() to get the
context ID, as sketched below.
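
Illustrative usage in gr_gp10b_set_cilp_preempt_pending() (the return
style of the renamed API is an assumption):

  /* The API does its own ctx_id_valid bookkeeping internally. */
  u32 ctx_id = nvgpu_gr_ctx_get_ctx_id(g, gr_ctx);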

Jira NVGPU-1527

Change-Id: I198262765e95133220f20cfbb1516d4a0758e30d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011089
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-11 10:25:13 -08:00
Deepak Nibade
a5eb150635 gpu: nvgpu: add new gr/config unit to initialize GR configuration
Add a new unit, gr/config, to initialize GR configuration such as GPC/TPC
counts, maximum counts, and masks.

Create a new structure, nvgpu_gr_config, that stores all of the configuration
and is owned by the new unit.

Move the fields below from struct gr_gk20a to nvgpu_gr_config in gr/config.h.
struct gr_gk20a now only holds a pointer to struct nvgpu_gr_config:

u32 max_gpc_count;
u32 max_tpc_per_gpc_count;
u32 max_zcull_per_gpc_count;
u32 max_tpc_count;

u32 gpc_count;
u32 tpc_count;
u32 ppc_count;
u32 zcb_count;

u32 pe_count_per_gpc;

u32 *gpc_tpc_count;
u32 *gpc_ppc_count;
u32 *gpc_zcb_count;
u32 *pes_tpc_count[GK20A_GR_MAX_PES_PER_GPC];

u32 *gpc_tpc_mask;
u32 *pes_tpc_mask[GK20A_GR_MAX_PES_PER_GPC];
u32 *gpc_skip_mask;

u8 *map_tiles;
u32 map_tile_count;
u32 map_row_offset;

Remove gr->sys_count since it was no longer used.

The common/gr/config/gr_config.c unit now exposes APIs to initialize
the configuration and to query the configuration values.

nvgpu_gr_config_init() is called from gr_gk20a_init_gr_config() to
initialize the GR configuration, and gr_gk20a_init_map_tiles() is simply
renamed to nvgpu_gr_config_init_map_tiles().

Expose a new API, nvgpu_gr_config_deinit(), to deinitialize the configuration.

Expose nvgpu_gr_config_get_*() APIs to query the above configuration
fields stored in the nvgpu_gr_config structure (see the sketch below).

Update vgpu_gr_init_gr_config() to initialize the configuration
from the gr->config structure.

Chip-specific HALs that access GR registers for initialization
are implemented in common/gr/config/gr_config_gm20b.c.
Set these HALs for all GPUs.
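
An illustrative replacement of direct field accesses with the new getters
(getter names follow the nvgpu_gr_config_get_*() pattern; exact signatures
are assumptions):

  struct nvgpu_gr_config *config = gr->config;

  u32 gpc_count = nvgpu_gr_config_get_gpc_count(config);
  u32 tpc_count = nvgpu_gr_config_get_tpc_count(config);
  /* Per-GPC query; the index parameter is assumed. */
  u32 gpc0_tpc_count = nvgpu_gr_config_get_gpc_tpc_count(config, 0U);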

Jira NVGPU-1879

Change-Id: Ided658b43124ea61b9f273b82b73fdde4ed3c8f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2012167
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-08 12:55:53 -08:00
Adeel Raza
f81c083766 gpu: nvgpu: gk20a: MISRA rule 15.6 fixes
MISRA rule 15.6 requires that all if/else/loop blocks be enclosed
in brackets. This patch adds brackets to single-line if/else/loop blocks
in the gk20a directory.

JIRA NVGPU-775

Change-Id: Ic8c4d84f961392bf9a0841bdc654a37858a604d0
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2011657
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-05 19:23:54 -08:00
Deepak Nibade
254253732c gpu: nvgpu: add new unit for GR subcontext
Add a new unit, common/gr/subctx.c, to manage the GR subcontext.
This unit provides interfaces to allocate/free/load the GR subcontext.

Add a new header file, include/nvgpu/gr/subctx.h, to declare all the
interfaces.

Right now the channel_gk20a structure directly includes an nvgpu_mem
for the context header.
Declare a new structure, nvgpu_gr_subctx, for the subcontext and include
it from channel_gk20a.

Make all necessary changes to reference ctx_header from the subctx
instead of directly from the channel.

Jira NVGPU-1613

Change-Id: I9eb1ee8f26fa88d2881f9b294935b65e9cbcc9b4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990129
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-02 03:03:43 -08:00
Seema Khowala
013ca60edd gpu: nvgpu: remove code for ch not bound to tsg
- Remove handling for channels that are no longer bound to a TSG,
  as a channel could be referenceable but no longer part of a TSG
- Use tsg_gk20a_from_ch to get a pointer to the TSG for a given channel
- Clear unhandled gr interrupts

Bug 2429295
JIRA NVGPU-1580

Change-Id: I9da43a2bc9a0282c793b9f301eaf8e8604f91d70
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972492
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-01 11:58:57 -08:00
Terje Bergstrom
a9f404cb99 gpu: nvgpu: Introduce NVGPU_DEBUGGER build flag
Introduce a build flag for NVGPU_DEBUGGER. Also introduce a Makefile flag,
NVGPU_REDUCED, and disable NVGPU_DEBUGGER when doing a reduced
build.

Make the user space build enable the reduced build.

Change-Id: I84d6142811f674f2a7652e093b63ea5e93d9143e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2002190
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-01 09:46:07 -08:00
Nicolas Benech
e9c00c0da9 gpu: nvgpu: add error codes to mm_l2_flush
gv11b_mm_l2_flush was not checking error codes from the various
functions it was calling. MISRA Rule-17.7 requires the return value
of all functions to be used. This patch now checks return values and
propagates the error upstream.

JIRA NVGPU-677

Change-Id: I9005c6d3a406f9665d318014d21a1da34f87ca30
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998809
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-30 16:44:35 -08:00
Vinod G
1b1ebb0a8d gpu: nvgpu: log mme esr register information
Add a new HAL to log the MME exception register information. Support is
added for Turing only. On an MME exception interrupt, read the
mme_hww_esr register and log the error based on the esr register bits.

JIRA NVGPU-1241

Change-Id: Ied3db0cc8fe6e2a82ecafc9964875e2686ca0d72
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2005807
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-30 04:42:58 -08:00
Terje Bergstrom
0f84c9024f gpu: nvgpu: Add nvgpu_bsearch() wrapper
Add a wrapper nvgpu_bsearch() for a standard binary search. It has two
implementations: Linux version calls Linux kernel bsearch() and
POSIX/QNX build uses stdlib bsearch().

Change-Id: Ic244df3cf3adb52b2192c175ec9b5dd06bce3ec8
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2003370
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-29 21:55:37 -08:00
Scott Long
80d8d9f8d1 gpu: nvgpu: MISRA 10.1 fixes to gr
MISRA Rule 10.1 states that operands shall not be of an inappropriate
essential type. For example, shift and bitwise operations should only
be performed on operands of essentially unsigned type.

This patch modifies gr exception handling to no longer use bitwise OR
when generating return status values.

Instead, the first non-zero status value is saved off and returned.

This has the added benefit of not potentially ORing together errno
values and generating an undefined status code.

JIRA NVGPU-650

Change-Id: If725a560c122d2cbf12e79b58161402da2023b5b
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1999098
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-22 13:14:29 -08:00
Deepak Nibade
b40c655e12 gpu: nvgpu: move regops to separate unit
Move regops (gk20a/regops_gk20a.c) to separate unit common/regops/regops.c
Move corresponding header (gk20a/regops_gk20a.h) to include/nvgpu/regops.h

Move rest of the platform HAL files to common/regops/ as well

Fix all the header includes to include new public header

Remove *_apply_smpc_war() declarations from headers. Corresponding
functions were cleaned up already, and declarations were left somehow

Jira NVGPU-620

Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-21 23:04:28 -08:00
Prateek sethi
126187f232 gpu: nvgpu: Fix uninitialized memory access
gk20a_remove_gr_support() frees local_golden_image and
local_golden_image->context. But there are instances where
local_golden_image is not allocated, and freeing an
unallocated golden context image accesses the contents of
local_golden_image and causes a fault.

Check the golden_image_initialized flag before freeing
local_golden_image->context.

Jira NVGPU-1648
Bug 2461665

Change-Id: I19235d2ec9d77ba4ef00257f43436448f5f70b25
Signed-off-by: Prateek sethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997665
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-20 01:34:19 -08:00
Deepak Nibade
0ff5a49f45 gpu: nvgpu: move patch context update calls to gr/ctx unit
We use the APIs below to update the patch context:
gr_gk20a_ctx_patch_write_begin()
gr_gk20a_ctx_patch_write_end()
gr_gk20a_ctx_patch_write()

Since the patch context is owned by the gr/ctx unit, move these APIs
to that unit and rename them to (usage sketched after this list):
nvgpu_gr_ctx_patch_write_begin()
nvgpu_gr_ctx_patch_write_end()
nvgpu_gr_ctx_patch_write()
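
The typical begin/write/end pattern with the renamed APIs (argument lists
are illustrative):

  err = nvgpu_gr_ctx_patch_write_begin(g, gr_ctx, update_patch_count);
  if (err == 0) {
          nvgpu_gr_ctx_patch_write(g, gr_ctx, addr, data, patch);
          nvgpu_gr_ctx_patch_write_end(g, gr_ctx, update_patch_count);
  }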

Jira NVGPU-1527

Change-Id: Iee19c7a71d074763d3dcb9b1997cb2a3159d5299
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989214
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-17 10:26:58 -08:00
Deepak Nibade
58bc18b794 gpu: nvgpu: load context image from gr/ctx unit
We currently load and create a new graphics context image in
gr_gk20a_load_golden_ctx_image().
This API first loads the local golden image into the new context
image and then initializes the context appropriately by calling
g->ops.gr.ctxsw_prog HALs.

Move this sequence to the gr/ctx unit and rename the API to
nvgpu_gr_ctx_load_golden_ctx_image().

Note that the call to g->ops.gr.update_ctxsw_preemption_mode()
is moved out of this API and called directly from
gk20a_alloc_obj_ctx().

Jira NVGPU-1527

Change-Id: Id5a5b2cd2c0704fbefe536d581a37a60ec185ea9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989157
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-17 10:26:50 -08:00
Philip Elcan
3d62c3256f gpu: nvgpu: gk20a: fix misc MISRA 10.3 issues
MISRA Rule 10.3 prohibits assigning to an object of different essential
or narrower type. This fixes some miscellaneous violations in
gr_gk20a.c.

JIRA NVGPU-1008

Change-Id: I46aa3bcdee23f53ab79615d37c1a797de1b74137
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990390
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:45:08 -08:00
Philip Elcan
a6f9d1c40e gpu: nvgpu: gk20a: add casts for MISRA 10.3
MISRA Rule 10.3 prohibits assignment to an object from an object of
different essential type or a narrower type. This adds casts in
gr_gk20a.c to address these violations. BUG_ON() is added for cases
where there is potential for losing data in the cast.

JIRA NVGPU-1008

Change-Id: Ic4e2449c536abe8127272d4ca46f76336fae46c8
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990389
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:45:05 -08:00
Philip Elcan
60ef4d2e3b gpu: nvgpu: gk20a: fix MISRA 10.3 violations
MISRA 10.3 prohibits assignment to an object of a different essential or
narrower type. This fixes a number of MISRA 10.3 violations involving
constant values in gr_gk20a.c.

JIRA NVGPU-1008

Change-Id: I93eeabe4ab0217a8043a2a025a42d5b95e177bc3
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990388
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:45:01 -08:00
Philip Elcan
edd5a73bbf gpu: nvgpu: gk20a: fix function returns
This fixes MISRA 10.3 violations for assignment of a narrower or different
type. The fixes address cases where functions were mixing u32s and ints
for function return values.

JIRA NVGPU-1008

Change-Id: I58c7e499c918ece0abb4012da9fe6b7a604b0419
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990386
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:44:57 -08:00
Philip Elcan
1b2dd4904a gpu: nvgpu: gk20a: cleanup function return
gk20a_init_gr_prepare does not return any meaningful status, so just
make it void.

JIRA NVGPU-1008

Change-Id: I25f528407e123e84e6bc5450dbd8ee38e75ad3fd
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990385
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:44:54 -08:00
Philip Elcan
f910525e14 gpu: nvgpu: cleanup idle_wait and wait_empty APIs
All cases where the wait_empty HAL API and the wait_idle, wait_fe_idle
APIs were being called used the same parameters, so move those
parameters inside the APIs.

JIRA NVGPU-1008

Change-Id: Ib864260f5a4c6458d81b7d2326076c0bd9c4b5af
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990384
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:44:50 -08:00
Sai Nikhil
7ffbbdae6e gpu: nvgpu: MISRA Rule 7.2 misc fixes
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.

This patch adds a "U" suffix to integer literals which are being
assigned to unsigned integer variables. In most cases the integer
literal is a hexadecimal value.

JIRA NVGPU-844

Change-Id: I8a68c4120681605261b11e5de00f7fc0773454e8
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959189
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-09 18:49:13 -08:00
Deepak Nibade
4883f14fbb gpu: nvgpu: map global_ctx buffers from gr/ctx unit
Currently all the global context buffers are mapped into each graphics
context. Move all the mapping/unmapping support to the gr/ctx unit since
all the mappings are owned by the context itself.

Add nvgpu_gr_ctx_map_global_ctx_buffers(), which maps all the global
context buffers into the given gr_ctx.
Add nvgpu_gr_ctx_get_global_ctx_va(), which returns the VA of the mapping
for the requested index (see the sketch below).

Remove g->ops.gr.map_global_ctx_buffers() since it is no longer
required. Also remove the APIs below:
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_unmap_global_ctx_buffers()
gr_tu104_map_global_ctx_buffers()

Remove global_ctx_buffer_size from nvgpu_gr_ctx since it is no
longer used.
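
An illustrative flow during context allocation (argument lists and the
buffer index are assumptions):

  err = nvgpu_gr_ctx_map_global_ctx_buffers(g, gr_ctx, /* buffers, vm, ... */);
  if (err == 0) {
          /* Query the mapped VA of one global buffer by its index. */
          u64 va = nvgpu_gr_ctx_get_global_ctx_va(gr_ctx, index);
  }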

Jira NVGPU-1527

Change-Id: Ic185c03757706171db0f5a925e13a118ebbdeb48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987739
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-09 10:46:48 -08:00
Deepak Nibade
1c17ae310c gpu: nvgpu: add new unit for GR context
Add a new unit, common/gr/ctx.c, to manage the GR context.

This unit provides interfaces to allocate/free/map/unmap the GR context,
patch context, pm context, and ctxsw {preempt/spill/betacb/pagepool/rtvcb}
buffers.
It also provides APIs to set the sizes of the above buffers.

Add a new header file, include/nvgpu/gr/ctx.h, to declare all the interfaces.

Move the nvgpu_gr_ctx, patch_desc, pm_ctx_desc, and zcull_ctx_desc structures
to this unit.

Add a new structure, nvgpu_gr_ctx_desc, to hold context description
parameters. For now we add the sizes of all the buffers here.
Add this structure to gr_gk20a for global reference.

Remove gr_gp10b_alloc_buffer() since it is no longer used.

Rename g->ops.gr.alloc_gfxp_rtv_cb() to g->ops.gr.init_gfxp_rtv_cb()
since this HAL now only sets the size of the rtvcb ctxsw buffer.

Remove gr->ctx_vars.buffer_size and gr->ctx_vars.buffer_total_size
since they were redundant. We already have gr->ctx_vars.golden_image_size
to denote the golden image size.

Jira NVGPU-1527

Change-Id: I8847b347f80235209dd5e28d979e79984ab85408
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-09 10:46:29 -08:00
Deepak Nibade
9241635805 gpu: nvgpu: move local golden image to global ctx unit
The local golden image is a copy of the global GR context buffer, so move
its ownership to the global context unit.

Add a new structure, nvgpu_gr_global_ctx_local_golden_image, to hold all the
metadata for the local golden image, and move it to struct gr_gk20a.

Expose and use new APIs to initialize/deinitialize and load the local golden
image.

Jira NVGPU-1625

Change-Id: Ieb68e52c205ca0ecd27f8bf4bb31922a01e7ae54
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1984952
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-08 14:16:39 -08:00
Sai Nikhil
a6dcfcfa07 gpu: nvgpu: gk20a: MISRA Rule 10.1 fixes
MISRA rule 10.1 mandates that the correct data types be used as
operands of operators. For example, only unsigned integers can be used
as operands of bitwise operators.

This patch fixes rule 10.1 violations for gk20a.

JIRA NVGPU-777
JIRA NVGPU-1006

Change-Id: I965eae017350156c6692fd585292b7a54e4190d8
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1971010
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-06 19:24:22 -08:00
Richard Zhao
98c034869a gpu: nvgpu: remove GOLDEN_CTX from global buffers
The current code creates the golden image using a dedicated gr_ctx called
GOLDEN_CTX. But on the RM server it is not easy to create a GOLDEN_CTX since
virtual addresses are managed by guest OSes. There is no special reason
why we have to use a separate gr_ctx for the golden image. This patch moves
it to use the current channel's gr_ctx, and the function will be reusable
by the RM server.

Jira GVSCI-191

Change-Id: I9920703e61f7e1d8b3ad6612811e47a3815d0c0f
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1983702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-04 13:13:50 -08:00
Aparna Das
d4f1a138dc gpu: nvgpu: add vmid param to fecs trace bind_channel
The OS-specific implementation of the fecs trace bind_channel function
needs to handle a special case for vserver to retrieve the vmid from
the channel id. Native code should be independent of server code.
Modify the struct fecs_trace member function bind_channel to take a
vmid parameter, so that the vmid can be retrieved and passed in from
server code.

Jira GVSCI-44

Change-Id: I96223376f2068e2cbf60a9c9b35ff564a65e5dc3
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-04 11:15:06 -08:00
Deepak Nibade
93a05937f0 gpu: nvgpu: remove g->ops.gr.dump_ctxsw_stats
g->ops.gr.dump_ctxsw_stats is redundant since we can directly call
g->ops.gr.ctxsw_prog.dump_ctxsw_stats

Also clean up gr_gp10b_dump_ctxsw_stats since it too becomes redundant

Jira NVGPU-1527

Change-Id: I0ac5bcf6cf3dca30954d302766431496971708f4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1986814
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-03 23:05:42 -08:00
Sagar Kamble
5efc446a06 gpu: nvgpu: make all falcons struct nvgpu_falcon*
With the intention of making the falcon header free of private data, we make
all falcon struct members (pmu.flcn, sec2.flcn, fecs_flcn, gpccs_flcn,
nvdec_flcn, minion_flcn, gsp_flcn) in struct gk20a pointers to struct
nvgpu_falcon. Falcon structures are allocated/deallocated by
falcon_sw_init & _free respectively.

While at it, remove the duplicate gk20a.pmu_flcn and gk20a.sec2_flcn,
refactor the flcn_id assignment, and introduce falcon_hal_sw_free.

JIRA NVGPU-1594

Change-Id: I222086cf28215ea8ecf9a6166284d5cc506bb0c5
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1968242
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-03 02:58:38 -08:00
Deepak Nibade
ef580aee38 gpu: nvgpu: add new unit for GR global context buffers
Add a new unit, common/gr/global_ctx.c, to manage the GR global context
buffers.

This unit provides interfaces to allocate/free/map/unmap all the global
context buffers. It also provides APIs to get/set the size of the buffers,
and to get the memory handle of the buffers.

Use the interfaces exposed by this unit instead of directly accessing the
global context buffers in common code.

Add a new header file, include/nvgpu/gr/global_ctx.h, to declare all the
interfaces.

Rename "struct gr_ctx_buffer_desc" to "struct nvgpu_gr_global_ctx_buffer_desc",
which holds all the data for each global context buffer.
Remove void *priv since it is no longer used.
Add a size field to the desc structure to store the requested size.

Remove global_ctx_buffer_size from struct nvgpu_gr_ctx since it is no longer
used for any real purpose.

Jira NVGPU-1625

Change-Id: I3feaf47bc2fdf192f36b136f2ef80a49d1782c5d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1977884
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-02 10:55:45 -08:00
Deepak Nibade
bb677160e5 gpu: nvgpu: check tu104 specific timestamp buffer full error code
In gk20a_gr_handle_fecs_error(), we currently check the error code in the
mailbox to identify whether we hit the timestamp-buffer-full error interrupt.
This error code is currently hard coded to 0x26.

But in the Turing ucode this error code is set to 0x32.

Add a new HAL, g->ops.fecs_trace.get_buffer_full_mailbox_val(), to get the
correct error code per platform, and use it in
gk20a_gr_handle_fecs_error(), as sketched below.
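
An illustrative check in gk20a_gr_handle_fecs_error() (the HAL's argument
list and how the mailbox value is read are assumptions; only the HAL name
comes from this change):

  /* mailbox_val is assumed to hold the FECS mailbox contents read earlier. */
  if (mailbox_val == g->ops.fecs_trace.get_buffer_full_mailbox_val()) {
          /* Handle the timestamp-buffer-full interrupt. */
  }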

Bug 200471541
Bug 2469604

Change-Id: I7325354b39d35b1c8b218e554814316d22950469
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978144
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-31 09:43:39 -08:00
Ranjanikar Nikhil Prabhakarrao
f0762ed483 gpu: nvgpu: add speculative barrier
Data can be speculatively stored and
code flow can be hijacked.

To mitigate this problem, insert a
speculation barrier (a usage sketch follows).
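
A typical bounds-check-then-barrier pattern (nvgpu_speculation_barrier()
is the wrapper used elsewhere in nvgpu; the surrounding names are
illustrative):

  if (index >= num_entries) {
          return -EINVAL;
  }
  /* Keep speculative execution from consuming an out-of-range index. */
  nvgpu_speculation_barrier();
  val = table[index];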

Bug 200447167

Change-Id: Ia865ff2add8b30de49aa970715625b13e8f71c08
Signed-off-by: Ranjanikar Nikhil Prabhakarrao <rprabhakarra@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972221
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-30 22:26:01 -08:00
Debarshi Dutta
ac4c2d4ae0 gpu: nvgpu: move fifo RC_TYPE_* definitions to common header
The RC_TYPE_* definitions in fifo_gk20a.h are generic and are moved to
a newly constructed common header <nvgpu/fifo.h>

Jira NVGPU-1237

Change-Id: Ia1bb80b9b0047675c7abfb6ce6ccd42a2e99f41f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970031
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-14 21:54:39 -08:00
Deepak Nibade
fdc15553bc gpu: nvgpu: add new HAL to initialize preemption mode
The g->ops.gr.alloc_gr_ctx HAL currently allocates the graphics context and
also initializes the preemption mode for various platforms.

Separate out a new HAL, g->ops.gr.init_ctxsw_preemption_mode, that
initializes the preemption mode, and call it from gk20a_alloc_obj_ctx()
after the context is created.

g->ops.gr.alloc_gr_ctx now only allocates the context, as the name
suggests.

Jira NVGPU-1527

Change-Id: I8a44672d5ab2ebfe315e6334115265e4ee4f24f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-14 00:35:39 -08:00
Deepak Nibade
6bbcdb51c6 gpu: nvgpu: remove redundant GR ops
g->ops.gr.enable_cde_in_fecs and g->ops.gr.update_boosted_ctx
are no longer required since we can directly call
g->ops.gr.ctxsw_prog.set_cde_enabled and
g->ops.gr.ctxsw_prog.set_pmu_options_boost_clock_frequencies
respectively

Remove those functions and the ops.

Jira NVGPU-1526

Change-Id: Idb0ad5f634e78aac44ec325ba2b7f59c612b29e8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972184
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-14 00:35:29 -08:00
Sagar Kamble
147d5d9402 gpu: nvgpu: update GPCCS falcon base addr init
The GPCCS falcon base address was being set without invoking a HAL API.
Remove FALCON_GPCCS_BASE. This patch defines the
gpu_ops.gr.gpccs_falcon_base_addr HAL API to get this base address.

JIRA NVGPU-1587

Change-Id: Icfa7a26d1bb2d67c81f05a43f6ce906f59706b3d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1969431
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-12 15:14:20 -08:00
Sagar Kamble
c6fc301a9b gpu: nvgpu: update FECS falcon base addr init
The FECS falcon base address was being set without invoking a HAL API.
Remove FALCON_FECS_BASE. This patch defines the
gpu_ops.gr.fecs_falcon_base_addr HAL API to get this base address.

JIRA NVGPU-1587

Change-Id: I9c8e60be4ee81a154020c982893725a12ebb72ef
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1969430
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-12 15:14:16 -08:00
Deepak Nibade
6777bd5ed2 gpu: nvgpu: add separate unit for gr/ctxsw_prog
Add a separate new unit, gr/ctxsw_prog, that provides an interface to access
the h/w header files hw_ctxsw_prog_*.h.

Add the chip-specific files below that access the above h/w unit and provide
an interface through the g->ops.gr.ctxsw_prog.*() HALs for the rest of the
units:

common/gr/ctxsw_prog/ctxsw_prog_gm20b.c
common/gr/ctxsw_prog/ctxsw_prog_gp10b.c
common/gr/ctxsw_prog/ctxsw_prog_gv11b.c

Remove all the h/w header includes from the rest of the units and code.
Remove direct calls to the h/w header functions ctxsw_prog_*() and use the
g->ops.gr.ctxsw_prog.*() HALs instead.

In gr_gk20a_find_priv_offset_in_ext_buffer(), the h/w header
ctxsw_prog_extended_num_smpc_quadrants_v() is only defined on gk20a,
and since we don't support gk20a, remove the corresponding code.

Add the missing h/w header ctxsw_prog_main_image_pm_mode_ctxsw_f() for
some chips.
Add the new h/w header ctxsw_prog_gpccs_header_stride_v().

Jira NVGPU-1526

Change-Id: I170f5c0da26ada833f94f5479ff299c0db56a732
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966111
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-11 14:41:04 -08:00
Amurthyreddy
2bded93b28 gpu: nvgpu: MISRA 10.4 enum fixes
MISRA rule 10.4 only allows arithmetic conversions on operands of the
same essential type category.

Fix violations where an arithmetic conversion is performed on enum and
non-enum types.

JIRA NVGPU-993

Change-Id: Idaf523d7d3aa85294711b77b34821e729d2e747c
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964125
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-11 09:05:16 -08:00
Seema Khowala
790ba09554 gpu: nvgpu: handle timestamp buffer full ctxsw_intr0
If enabled, fecs trace updating happens on the ucode
side even when there is no fecs trace dumper application
to consume it. Due to this, the trace buffer will
eventually get full and the ucode will trigger ctxsw_intr0.
Reset the fecs_trace buffer to handle the timestamp-buffer-full
ctxsw_intr0.

Bug 2361571
Bug 200472922

Change-Id: Ia26a17635fc6bd6e8663b8af983acc91839ecfcd
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965370
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 14:54:02 -08:00
Seema Khowala
2c379cad0f gpu: nvgpu: add handling for ctxsw_intr0
ctxsw_intr0 is triggered by the ucode even if it
is not enabled by the driver. Add handling
for processing ctxsw_intr0. fecs mailbox(6)
is used to report fecs/gpccs misc error codes.
Also dump falcon stats for unhandled fecs interrupts.

Bug 2361571
Bug 200472922

Change-Id: Iefb3c0d46ad1d08db07fd3c08cff91a77835908c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966984
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 14:53:53 -08:00
Scott Long
5bdffee1a8 gpu: nvgpu: MISRA 10.3 fixes to gr
MISRA Rule 10.3 states that the value of an expression shall not
be assigned to an object with a narrower essential type or of a
different essential type category.

For example, assigning an unsigned 32bit value (u32) to a signed
32bit value (int) is not permitted.

This patch modifies the gr_gk20a_init_golden_ctx_image() and
gk20a_init_sw_bundle() routines to use an int (instead of u32)
for return status handling making them consistent with the other
gr routines used in this part of the gr object allocation path.

JIRA NVGPU-647

Change-Id: I53c47d9a169bd0d4cdbce107bd4ad8e7978ae01d
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965735
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 11:05:02 -08:00
Konsta Holtta
4e6d9afab8 gpu: nvgpu: store ch ptr in gr isr data
Store a channel pointer that is either NULL or a referenced channel to
avoid confusion about channel ownership. A pure channel ID is dangerous.

Jira NVGPU-1460

Change-Id: I6f7b4f80cf39abc290ce9153ec6bf5b62918da97
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955401
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-27 12:24:47 -08:00
Konsta Holtta
7df3d58750 gpu: nvgpu: add safe channel id lookup
Add gk20a_channel_from_id() to retrieve a channel, given a raw channel
ID, with a reference taken (or NULL if the channel was dead). This makes
it harder to mistakenly use a channel that's dead and thus uncovers bugs
sooner. Convert code to use the new lookup when applicable; work remains
to convert complex uses where a ref should have been taken but hasn't.

The channel ID is also validated against FIFO_INVAL_CHANNEL_ID; NULL is
returned for such IDs. This is often useful and does not hurt when
unnecessary.

However, this does not prevent the case where a channel would be closed
and reopened again when someone would hold a stale channel number. In
all such conditions the caller should hold a reference already.

The only conditions where a channel can be safely looked up by an ID and
used without taking a ref are when initializing or deinitializing the
list of channels.
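
A hedged usage sketch of the new lookup (error handling is illustrative;
gk20a_channel_put() is the existing helper that drops the reference):

  struct channel_gk20a *ch = gk20a_channel_from_id(g, chid);

  if (ch == NULL) {
          /* Channel is dead or chid was FIFO_INVAL_CHANNEL_ID. */
          return;
  }
  /* ... use ch with the reference held ... */
  gk20a_channel_put(ch);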

Jira NVGPU-1460

Change-Id: I0a30968d17c1e0784d315a676bbe69c03a73481c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-27 12:24:38 -08:00