Commit Graph

683 Commits

Terje Bergstrom
a9f404cb99 gpu: nvgpu: Introduce NVGPU_DEBUGGER build flag
Introduce a build flag for NVGPU_DEBUGGER. Also introduce the Makefile flag
NVGPU_REDUCED and disable NVGPU_DEBUGGER when doing a reduced
build.

Make the user space build enable the reduced build.

Change-Id: I84d6142811f674f2a7652e093b63ea5e93d9143e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2002190
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-02-01 09:46:07 -08:00
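
A minimal C sketch of how such a build flag is typically consumed; the config symbol and the guarded function below are illustrative assumptions, not taken from the patch itself.

    /* Hypothetical use of the NVGPU_DEBUGGER flag: the Makefile defines
     * CONFIG_NVGPU_DEBUGGER for full builds and omits it (together with the
     * debugger sources) when NVGPU_REDUCED is set. */
    #ifdef CONFIG_NVGPU_DEBUGGER
    int nvgpu_dbg_session_open(void);          /* illustrative declaration */
    #else
    /* Reduced build: stub the entry point so callers still compile. */
    static inline int nvgpu_dbg_session_open(void)
    {
        return -1;                             /* e.g. -ENOSYS in the kernel */
    }
    #endif
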
Nicolas Benech
e9c00c0da9 gpu: nvgpu: add error codes to mm_l2_flush
gv11b_mm_l2_flush was not checking the error codes from the various
functions it calls. MISRA Rule 17.7 requires the return value
of all functions to be used. This patch now checks the return values and
propagates any error to the caller.

JIRA NVGPU-677

Change-Id: I9005c6d3a406f9665d318014d21a1da34f87ca30
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998809
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-30 16:44:35 -08:00
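
A hedged sketch of the check-and-propagate pattern described above (MISRA Rule 17.7); the helper names are placeholders, only the shape follows the commit text.

    int flush_l2_cache(void);        /* placeholder for the real flush call      */
    int invalidate_l2_cache(void);   /* placeholder for the real invalidate call */

    static int example_l2_flush(int invalidate)
    {
        int err;

        err = flush_l2_cache();      /* previously the result was discarded */
        if (err != 0) {
            return err;              /* propagate the error to the caller   */
        }

        if (invalidate != 0) {
            err = invalidate_l2_cache();
            if (err != 0) {
                return err;
            }
        }

        return 0;
    }
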
Vinod G
1b1ebb0a8d gpu: nvgpu: log mme esr register information
Add a new HAL to log the MME exception register information. Support is
added for Turing only. On an MME exception interrupt, read the
mme_hww_esr register and log the error based on the ESR register bits.

JIRA NVGPU-1241

Change-Id: Ied3db0cc8fe6e2a82ecafc9964875e2686ca0d72
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2005807
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-30 04:42:58 -08:00
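
An illustrative shape for such an exception handler: read the ESR once, then log per-bit errors. The bit names and masks below are invented placeholders; the real Turing HAL would use the generated hardware header accessors and the driver's error logging rather than printf.

    #include <stdint.h>
    #include <stdio.h>

    #define MME_ESR_MISSING_MACRO_DATA (1u << 0)   /* placeholder bit */
    #define MME_ESR_ILLEGAL_OPCODE     (1u << 1)   /* placeholder bit */

    void log_mme_exception(uint32_t esr)
    {
        if ((esr & MME_ESR_MISSING_MACRO_DATA) != 0u) {
            printf("mme exception: missing macro data (esr=0x%08x)\n", esr);
        }
        if ((esr & MME_ESR_ILLEGAL_OPCODE) != 0u) {
            printf("mme exception: illegal opcode (esr=0x%08x)\n", esr);
        }
    }
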
Terje Bergstrom
0f84c9024f gpu: nvgpu: Add nvgpu_bsearch() wrapper
Add a wrapper nvgpu_bsearch() for a standard binary search. It has two
implementations: the Linux version calls the Linux kernel bsearch(), and the
POSIX/QNX build uses the stdlib bsearch().

Change-Id: Ic244df3cf3adb52b2192c175ec9b5dd06bce3ec8
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2003370
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-29 21:55:37 -08:00
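
A possible shape for the POSIX/QNX side of the wrapper, which simply forwards to the C library bsearch(); the exact prototype used by nvgpu may differ.

    #include <stdlib.h>
    #include <stddef.h>

    void *nvgpu_bsearch(const void *key, const void *base, size_t num,
                        size_t size,
                        int (*compare)(const void *, const void *))
    {
        /* POSIX/QNX build: defer to the standard library implementation.
         * The Linux build would call the kernel's bsearch() instead. */
        return bsearch(key, base, num, size, compare);
    }
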
Scott Long
80d8d9f8d1 gpu: nvgpu: MISRA 10.1 fixes to gr
MISRA Rule 10.1 states that operands shall not be of an inappropriate
essential type. For example, shift and bitwise operations should only
be performed on operands of essentially unsigned type.

This patch modifies gr exception handling to no longer use bitwise OR
when generating return status values.

Instead, the first non-zero status value is saved off and returned.

This has the added benefit of not potentially ORing together errno
values and generating an undefined status code.

JIRA NVGPU-650

Change-Id: If725a560c122d2cbf12e79b58161402da2023b5b
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1999098
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-22 13:14:29 -08:00
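
A schematic C sketch of the pattern change described above; the handler names are invented placeholders.

    int handle_sked_exception(void);   /* placeholder sub-handler */
    int handle_ds_exception(void);     /* placeholder sub-handler */

    static int handle_gr_exceptions(void)
    {
        int ret = 0;
        int err;

        /* The old code OR'd the statuses together; the new code keeps the
         * first non-zero status so the result is always a valid error code. */
        err = handle_sked_exception();
        if ((err != 0) && (ret == 0)) {
            ret = err;
        }

        err = handle_ds_exception();
        if ((err != 0) && (ret == 0)) {
            ret = err;
        }

        return ret;
    }
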
Deepak Nibade
b40c655e12 gpu: nvgpu: move regops to separate unit
Move regops (gk20a/regops_gk20a.c) to a separate unit, common/regops/regops.c.
Move the corresponding header (gk20a/regops_gk20a.h) to include/nvgpu/regops.h.

Move the rest of the platform HAL files to common/regops/ as well.

Fix all the header includes to use the new public header.

Remove the *_apply_smpc_war() declarations from headers. The corresponding
functions were already cleaned up, but the declarations were left behind.

Jira NVGPU-620

Change-Id: I8b8065b9c91f69809bdeb1b4caecdc7582c8a992
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1998723
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-21 23:04:28 -08:00
Prateek sethi
126187f232 gpu: nvgpu: Fix uninitialized memory access
gk20a_remove_gr_support() frees local_golden_image and
local_golden_image->context. But there are instances where
local_golden_image is never allocated, and freeing an
unallocated golden context image accesses the contents of
local_golden_image and causes a fault.

Check golden_image_initialized flag before freeing
local_golden_image->context.

Jira NVGPU-1648
Bug 2461665

Change-Id: I19235d2ec9d77ba4ef00257f43436448f5f70b25
Signed-off-by: Prateek sethi <prsethi@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997665
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-20 01:34:19 -08:00
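
A simplified, self-contained sketch of the guarded teardown the commit above describes; the structures are pared down and the flag name follows the commit text.

    #include <stdbool.h>
    #include <stdlib.h>

    struct golden_image {
        unsigned int *context;              /* stand-in for the golden context data */
    };

    struct gr_state {
        bool golden_image_initialized;      /* set once the golden image exists */
        struct golden_image *local_golden_image;
    };

    static void remove_gr_support(struct gr_state *gr)
    {
        /* Free the golden image only if it was actually created; freeing an
         * unallocated image would dereference local_golden_image and fault. */
        if (gr->golden_image_initialized && gr->local_golden_image != NULL) {
            free(gr->local_golden_image->context);
            free(gr->local_golden_image);
            gr->local_golden_image = NULL;
            gr->golden_image_initialized = false;
        }
    }
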
Deepak Nibade
0ff5a49f45 gpu: nvgpu: move patch context update calls to gr/ctx unit
We use the below APIs to update the patch context:
gr_gk20a_ctx_patch_write_begin()
gr_gk20a_ctx_patch_write_end()
gr_gk20a_ctx_patch_write()

Since patch context is owned by gr/ctx unit, move these APIs
to this unit and rename them to
nvgpu_gr_ctx_patch_write_begin()
nvgpu_gr_ctx_patch_write_end()
nvgpu_gr_ctx_patch_write()

Jira NVGPU-1527

Change-Id: Iee19c7a71d074763d3dcb9b1997cb2a3159d5299
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989214
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-17 10:26:58 -08:00
Deepak Nibade
58bc18b794 gpu: nvgpu: load context image from gr/ctx unit
We currently load and create a new graphics context image in
gr_gk20a_load_golden_ctx_image().
This API first loads the local golden image into the new context
image and then initializes the context appropriately by calling the
g->ops.gr.ctxsw_prog() HALs.

Move this sequence to the gr/ctx unit and rename the API to
nvgpu_gr_ctx_load_golden_ctx_image().

Note that the call to g->ops.gr.update_ctxsw_preemption_mode()
is moved out of this API and is now made directly from
gk20a_alloc_obj_ctx().

Jira NVGPU-1527

Change-Id: Id5a5b2cd2c0704fbefe536d581a37a60ec185ea9
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1989157
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-17 10:26:50 -08:00
Philip Elcan
3d62c3256f gpu: nvgpu: gk20a: fix misc MISRA 10.3 issues
MISRA Rule 10.3 prohibits assigning to an object of different essential
or narrower type. This fixes some miscellaneous violations in
gr_gk20a.c.

JIRA NVGPU-1008

Change-Id: I46aa3bcdee23f53ab79615d37c1a797de1b74137
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990390
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:45:08 -08:00
Philip Elcan
a6f9d1c40e gpu: nvgpu: gk20a: add casts for MISRA 10.3
MISRA Rule 10.3 prohibits assignment to an object from an object of
different essential type or a narrower type. This adds casts in
gr_gk20a.c to address these violations. BUG_ON() is added for cases
where there is potential for losing data in the cast.

JIRA NVGPU-1008

Change-Id: Ic4e2449c536abe8127272d4ca46f76336fae46c8
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990389
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:45:05 -08:00
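
An illustration of the cast-plus-check pattern the commit above describes; the function and variable names are invented, and assert() stands in for the kernel's BUG_ON().

    #include <assert.h>
    #include <stdint.h>

    static uint32_t narrow_to_u32(uint64_t wide)
    {
        /* Guard against silent truncation before the explicit narrowing cast. */
        assert(wide <= UINT32_MAX);
        return (uint32_t)wide;
    }
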
Philip Elcan
60ef4d2e3b gpu: nvgpu: gk20a: fix MISRA 10.3 violations
MISRA 10.3 prohibits assignment from an object of a different essential or
narrower type. This fixes a number of MISRA 10.3 violations involving
constant values in gr_gk20a.c.

JIRA NVGPU-1008

Change-Id: I93eeabe4ab0217a8043a2a025a42d5b95e177bc3
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990388
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:45:01 -08:00
Philip Elcan
edd5a73bbf gpu: nvgpu: gk20a: fix function returns
This fixes MISRA 10.3 violation for assignment of narrower or different
type. The fixes are cases where functions were mixing u32s and ints
for function return values.

JIRA NVGPU-1008

Change-Id: I58c7e499c918ece0abb4012da9fe6b7a604b0419
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990386
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:44:57 -08:00
Philip Elcan
1b2dd4904a gpu: nvgpu: gk20a: cleanup function return
gk20a_init_gr_prepare does not return any meaningful status, so just
make it void.

JIRA NVGPU-1008

Change-Id: I25f528407e123e84e6bc5450dbd8ee38e75ad3fd
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990385
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:44:54 -08:00
Philip Elcan
f910525e14 gpu: nvgpu: cleanup idle_wait and wait_empty APIs
All callers of the wait_empty HAL API and of the wait_idle and wait_fe_idle
APIs used the same parameters, so move those
parameters inside the APIs.

JIRA NVGPU-1008

Change-Id: Ib864260f5a4c6458d81b7d2326076c0bd9c4b5af
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1990384
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-14 13:44:50 -08:00
Sai Nikhil
7ffbbdae6e gpu: nvgpu: MISRA Rule 7.2 misc fixes
MISRA Rule 7.2 Definition: A "u" or "U" suffix shall be applied to all
integer constants that are represented in an unsigned type.

This patch adds a "U" suffix to integer literals which are being
assigned to unsigned integer variables. In most cases the integer
literal is a hexadecimal value.

JIRA NVGPU-844

Change-Id: I8a68c4120681605261b11e5de00f7fc0773454e8
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1959189
Reviewed-by: Scott Long <scottl@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-09 18:49:13 -08:00
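
The kind of one-character change Rule 7.2 asks for, shown on an invented variable: a hexadecimal literal assigned to an unsigned variable gains a U suffix.

    unsigned int fifo_mask_old = 0x3f;    /* before: plain literal            */
    unsigned int fifo_mask_new = 0x3fU;   /* after: explicit unsigned literal */
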
Deepak Nibade
4883f14fbb gpu: nvgpu: map global_ctx buffers from gr/ctx unit
Currently all the global context buffers are mapped into each graphics
context. Move all the mapping/unmapping support to the gr/ctx unit since
all the mappings are owned by the context itself

Add nvgpu_gr_ctx_map_global_ctx_buffers() that maps all the global
context buffers into given gr_ctx
Add nvgpu_gr_ctx_get_global_ctx_va() that returns VA of the mapping
for requested index

Remove g->ops.gr.map_global_ctx_buffers() since it is no longer
required. Also remove below APIs
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_unmap_global_ctx_buffers()
gr_tu104_map_global_ctx_buffers()

Remove global_ctx_buffer_size from nvgpu_gr_ctx since it is no
longer used

Jira NVGPU-1527

Change-Id: Ic185c03757706171db0f5a925e13a118ebbdeb48
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987739
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-09 10:46:48 -08:00
Deepak Nibade
1c17ae310c gpu: nvgpu: add new unit for GR context
Add new unit common/gr/ctx.c to manage GR context

This unit provides interfaces to allocate/free/map/unmap GR context,
patch context, pm context, ctxsw {preempt/spill/betacb/pagepool/rtvcb}
buffers.
It also provides APIs to set size of above buffers

Add new header file include/nvgpu/gr/ctx.h to declare all the interfaces.

Move nvgpu_gr_ctx, patch_desc, pm_ctx_desc, zcull_ctx_desc structures
to this unit

Add new structure nvgpu_gr_ctx_desc to hold context description
parameters. For now we add sizes of all the buffers here.
Add this structure to gr_gk20a for global reference

Remove gr_gp10b_alloc_buffer() since it is no longer used

Rename g->ops.gr.alloc_gfxp_rtv_cb() to g->ops.gr.init_gfxp_rtv_cb()
since this HAL now only sets the size of rtvcb ctxsw buffer

Remove gr->ctx_vars.buffer_size and gr->ctx_vars.buffer_total_size
since they were redundant. We already have gr->ctx_vars.golden_image_size
to denote golden image size

Jira NVGPU-1527

Change-Id: I8847b347f80235209dd5e28d979e79984ab85408
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-09 10:46:29 -08:00
Deepak Nibade
9241635805 gpu: nvgpu: move local golden image to global ctx unit
The local golden image is a copy of the global GR context buffer, hence move its
ownership to the global context unit.

Add a new structure nvgpu_gr_global_ctx_local_golden_image to hold all the
metadata for the local golden image, and move it to struct gr_gk20a.

Expose and use new APIs to initialize/deinitialize and load local golden image

Jira NVGPU-1625

Change-Id: Ieb68e52c205ca0ecd27f8bf4bb31922a01e7ae54
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1984952
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-08 14:16:39 -08:00
Sai Nikhil
a6dcfcfa07 gpu: nvgpu: gk20a: MISRA Rule 10.1 fixes
MISRA rule 10.1 mandates that the correct data types are used as
operands of operators. For example, only unsigned integers can be used
as operands of bitwise operators.

This patch fixes rule 10.1 violations for gk20a.

JIRA NVGPU-777
JIRA NVGPU-1006

Change-Id: I965eae017350156c6692fd585292b7a54e4190d8
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1971010
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-06 19:24:22 -08:00
Richard Zhao
98c034869a gpu: nvgpu: remove GOLDEN_CTX from global buffers
Current code creates the golden image using a dedicated gr_ctx called
GOLDEN_CTX. But on the RM server it's not easy to create a GOLDEN_CTX since
virtual addresses are managed by the guest OSes. There's no special reason
why we have to use a separate gr_ctx for the golden image. This patch moves
it to use the current channel's gr_ctx, and the function will be re-usable
by the RM server.

Jira GVSCI-191

Change-Id: I9920703e61f7e1d8b3ad6612811e47a3815d0c0f
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1983702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-04 13:13:50 -08:00
Aparna Das
d4f1a138dc gpu: nvgpu: add vmid param to fecs trace bind_channel
The OS specific implementation of the fecs trace bind_channel function
needs to handle a special case for vserver to retrieve the vmid from the
channel id. Native code should be independent of server code.
Modify the struct fecs_trace member function bind_channel to take a
vmid parameter, so that the server code can retrieve and pass the vmid.

Jira GVSCI-44

Change-Id: I96223376f2068e2cbf60a9c9b35ff564a65e5dc3
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970693
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-04 11:15:06 -08:00
Deepak Nibade
93a05937f0 gpu: nvgpu: remove g->ops.gr.dump_ctxsw_stats
g->ops.gr.dump_ctxsw_stats is redundant since we can directly call
g->ops.gr.ctxsw_prog.dump_ctxsw_stats

Also clean up gr_gp10b_dump_ctxsw_stats since it too becomes redundant

Jira NVGPU-1527

Change-Id: I0ac5bcf6cf3dca30954d302766431496971708f4
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1986814
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-03 23:05:42 -08:00
Sagar Kamble
5efc446a06 gpu: nvgpu: make all falcons struct nvgpu_falcon*
With the intention of making the falcon header free of private data, we make
all falcon struct members (pmu.flcn, sec2.flcn, fecs_flcn, gpccs_flcn,
nvdec_flcn, minion_flcn, gsp_flcn) in gk20a pointers to struct
nvgpu_falcon. Falcon structures are allocated/deallocated by
falcon_sw_init & _free respectively.

While at it, remove duplicate gk20a.pmu_flcn and gk20a.sec2_flcn,
refactor flcn_id assignment and introduce falcon_hal_sw_free.

JIRA NVGPU-1594

Change-Id: I222086cf28215ea8ecf9a6166284d5cc506bb0c5
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1968242
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-03 02:58:38 -08:00
Deepak Nibade
ef580aee38 gpu: nvgpu: add new unit for GR global context buffers
Add new unit common/gr/global_ctx.c to manage GR global context buffers

This unit provides interfaces to allocate/free/map/unmap all the global
context buffers. It also provides APIs to get/set size of the buffers,
and to get memory handle of the buffers

Use interfaces exposed by this unit instead of directly accessing global
context buffers in common code

Add new header file include/nvgpu/gr/global_ctx.h to declare all the
interfaces.

Rename "struct gr_ctx_buffer_desc" to "struct nvgpu_gr_global_ctx_buffer_desc"
which holds all data for each global context
Remove void *priv since it is no longer used
Add size to the desc structure to store the requested size

Remove global_ctx_buffer_size from struct nvgpu_gr_ctx since it is no longer
used for any real purpose

Jira NVGPU-1625

Change-Id: I3feaf47bc2fdf192f36b136f2ef80a49d1782c5d
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1977884
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2019-01-02 10:55:45 -08:00
Deepak Nibade
bb677160e5 gpu: nvgpu: check tu104 specific timestamp buffer full error code
In gk20a_gr_handle_fecs_error(), we currently check the error code in the
mailbox to identify whether we hit the timestamp buffer full error interrupt.
This error code is currently hard coded to 0x26.

But in the Turing ucode this error code is set to 0x32.

Add a new HAL g->ops.fecs_trace.get_buffer_full_mailbox_val() to get the
correct error code per platform and use it in
gk20a_gr_handle_fecs_error().

Bug 200471541
Bug 2469604

Change-Id: I7325354b39d35b1c8b218e554814316d22950469
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1978144
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-31 09:43:39 -08:00
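
A sketch of the per-chip HAL described above: each implementation returns the mailbox value its ucode uses for "timestamp buffer full". The function names echo the commit text but are written here as plain functions for illustration.

    #include <stdint.h>

    static uint32_t gk20a_fecs_trace_get_buffer_full_mailbox_val(void)
    {
        return 0x26u;    /* value used by pre-Turing ucode, per the commit */
    }

    static uint32_t tu104_fecs_trace_get_buffer_full_mailbox_val(void)
    {
        return 0x32u;    /* value used by the Turing ucode, per the commit */
    }
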
Ranjanikar Nikhil Prabhakarrao
f0762ed483 gpu: nvgpu: add speculative barrier
Data can be speculatively stored and
the code flow can be hijacked.

To mitigate this problem, insert a
speculation barrier.

Bug 200447167

Change-Id: Ia865ff2add8b30de49aa970715625b13e8f71c08
Signed-off-by: Ranjanikar Nikhil Prabhakarrao <rprabhakarra@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972221
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-30 22:26:01 -08:00
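
A typical placement for such a barrier, shown as a standalone sketch: after the bounds check on an untrusted index and before the indexed access, so a mis-speculated out-of-bounds load cannot leak data. The barrier below is only a compiler barrier used as a stand-in; a real implementation maps to an architecture-specific speculation fence.

    #include <stddef.h>

    static inline void speculation_barrier(void)
    {
        __asm__ volatile("" ::: "memory");   /* stand-in, not a real fence */
    }

    static int read_entry(const int *table, size_t table_size,
                          size_t index, int *out)
    {
        if (index >= table_size) {
            return -1;                 /* reject out-of-range indices      */
        }
        speculation_barrier();         /* stop speculation past the check  */
        *out = table[index];
        return 0;
    }
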
Debarshi Dutta
ac4c2d4ae0 gpu: nvgpu: move fifo RC_TYPE_* definitions to common header
The RC_TYPE_* definitions in fifo_gk20a.h are generic and are moved to
a newly constructed common header <nvgpu/fifo.h>

Jira NVGPU-1237

Change-Id: Ia1bb80b9b0047675c7abfb6ce6ccd42a2e99f41f
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1970031
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-14 21:54:39 -08:00
Deepak Nibade
fdc15553bc gpu: nvgpu: add new HAL to initialize preemption mode
The g->ops.gr.alloc_gr_ctx HAL currently allocates the graphics context and
also initializes the preemption mode for various platforms.

Separate out a new HAL, g->ops.gr.init_ctxsw_preemption_mode, that
initializes the preemption mode, and call it from gk20a_alloc_obj_ctx()
after the context is created.

g->ops.gr.alloc_gr_ctx now only allocates the context, as the name
suggests.

Jira NVGPU-1527

Change-Id: I8a44672d5ab2ebfe315e6334115265e4ee4f24f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972254
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-14 00:35:39 -08:00
Deepak Nibade
6bbcdb51c6 gpu: nvgpu: remove redundant GR ops
g->ops.gr.enable_cde_in_fecs and g->ops.gr.update_boosted_ctx
are no longer required since we can directly call
g->ops.gr.ctxsw_prog.set_cde_enabled and
g->ops.gr.ctxsw_prog.set_pmu_options_boost_clock_frequencies
respectively

Remove those functions and the ops.

Jira NVGPU-1526

Change-Id: Idb0ad5f634e78aac44ec325ba2b7f59c612b29e8
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1972184
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-14 00:35:29 -08:00
Sagar Kamble
147d5d9402 gpu: nvgpu: update GPCCS falcon base addr init
The GPCCS falcon base address was being set without invoking a HAL API. Remove
FALCON_GPCCS_BASE. This patch defines the gpu_ops.gr.gpccs_falcon_base_addr
HAL API to get this base address.

JIRA NVGPU-1587

Change-Id: Icfa7a26d1bb2d67c81f05a43f6ce906f59706b3d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1969431
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-12 15:14:20 -08:00
Sagar Kamble
c6fc301a9b gpu: nvgpu: update FECS falcon base addr init
The FECS falcon base address was being set without invoking a HAL API. Remove
FALCON_FECS_BASE. This patch defines the gpu_ops.gr.fecs_falcon_base_addr HAL
API to get this base address.

JIRA NVGPU-1587

Change-Id: I9c8e60be4ee81a154020c982893725a12ebb72ef
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1969430
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-12 15:14:16 -08:00
Deepak Nibade
6777bd5ed2 gpu: nvgpu: add separate unit for gr/ctxsw_prog
Add a separate new unit gr/ctxsw_prog that provides an interface to access
the h/w header files hw_ctxsw_prog_*.h

Add the below chip specific files, which access the above h/w unit and provide
an interface through the g->ops.gr.ctxsw_prog.*() HALs for the rest of the units:

common/gr/ctxsw_prog/ctxsw_prog_gm20b.c
common/gr/ctxsw_prog/ctxsw_prog_gp10b.c
common/gr/ctxsw_prog/ctxsw_prog_gv11b.c

Remove all the h/w header includes from rest of the units and code.
Remove direct calls to h/w headers ctxsw_prog_*() and use HALs
g->ops.gr.ctxsw_prog.*() instead

In gr_gk20a_find_priv_offset_in_ext_buffer(), the h/w header
ctxsw_prog_extended_num_smpc_quadrants_v() is only defined for gk20a.
Since we don't support gk20a, remove the corresponding code.

Add missing h/w header ctxsw_prog_main_image_pm_mode_ctxsw_f() for
some chips
Add new h/w header ctxsw_prog_gpccs_header_stride_v()

Jira NVGPU-1526

Change-Id: I170f5c0da26ada833f94f5479ff299c0db56a732
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966111
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-11 14:41:04 -08:00
Amurthyreddy
2bded93b28 gpu: nvgpu: MISRA 10.4 enum fixes
MISRA rule 10.4 only allows arithmetic conversions on operands of the
same essential type category.

Fix violations where an arithmetic conversion is performed on enum and
non-enum types.

JIRA NVGPU-993

Change-Id: Idaf523d7d3aa85294711b77b34821e729d2e747c
Signed-off-by: Amurthyreddy <amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1964125
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-11 09:05:16 -08:00
Seema Khowala
790ba09554 gpu: nvgpu: handle timestamp buffer full ctxsw_intr0
If enabled, fecs trace updating happens on the ucode
side even when there is no fecs trace dumper application
to consume it. Due to this, the trace buffer will eventually
get full and the ucode will trigger ctxsw_intr0.
Reset the fecs_trace buffer to handle the timestamp buffer full
ctxsw_intr0.

Bug 2361571
Bug 200472922

Change-Id: Ia26a17635fc6bd6e8663b8af983acc91839ecfcd
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965370
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 14:54:02 -08:00
Seema Khowala
2c379cad0f gpu: nvgpu: add handling for ctxsw_intr0
ctxsw_intr0 is triggered by the ucode even if it
is not enabled by the driver. Add handling
for processing ctxsw_intr0. The fecs mailbox(6)
is used to report fecs/gpccs misc error codes.
Also dump the falcon stats for an unhandled fecs intr.

Bug 2361571
Bug 200472922

Change-Id: Iefb3c0d46ad1d08db07fd3c08cff91a77835908c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1966984
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 14:53:53 -08:00
Scott Long
5bdffee1a8 gpu: nvgpu: MISRA 10.3 fixes to gr
MISRA Rule 10.3 states that the value of an expression shall not
be assigned to an object with a narrower essential type or of a
different essential type category.

For example, assigning an unsigned 32-bit value (u32) to a signed
32-bit value (int) is not permitted.

This patch modifies the gr_gk20a_init_golden_ctx_image() and
gk20a_init_sw_bundle() routines to use an int (instead of u32)
for return status handling making them consistent with the other
gr routines used in this part of the gr object allocation path.

JIRA NVGPU-647

Change-Id: I53c47d9a169bd0d4cdbce107bd4ad8e7978ae01d
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1965735
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-12-07 11:05:02 -08:00
Konsta Holtta
4e6d9afab8 gpu: nvgpu: store ch ptr in gr isr data
Store a channel pointer that is either NULL or a referenced channel to
avoid confusion about channel ownership. A pure channel ID is dangerous.

Jira NVGPU-1460

Change-Id: I6f7b4f80cf39abc290ce9153ec6bf5b62918da97
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955401
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-27 12:24:47 -08:00
Konsta Holtta
7df3d58750 gpu: nvgpu: add safe channel id lookup
Add gk20a_channel_from_id() to retrieve a channel, given a raw channel
ID, with a reference taken (or NULL if the channel was dead). This makes
it harder to mistakenly use a channel that's dead and thus uncovers bugs
sooner. Convert code to use the new lookup when applicable; work remains
to convert complex uses where a ref should have been taken but hasn't.

The channel ID is also validated against FIFO_INVAL_CHANNEL_ID; NULL is
returned for such IDs. This is often useful and does not hurt when
unnecessary.

However, this does not prevent the case where a channel would be closed
and reopened again when someone would hold a stale channel number. In
all such conditions the caller should hold a reference already.

The only conditions where a channel can be safely looked up by an id and
used without taking a ref are when initializing or deinitializing the
list of channels.

Jira NVGPU-1460

Change-Id: I0a30968d17c1e0784d315a676bbe69c03a73481c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-27 12:24:38 -08:00
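
A usage sketch of the new lookup, following the commit text: take a reference, bail out on NULL, and drop the reference when done. This assumes the nvgpu channel headers; the worker body is a placeholder, and gk20a_channel_put() is the matching reference drop used elsewhere in the driver.

    static void do_work_on_channel(struct gk20a *g, unsigned int chid)
    {
        struct channel_gk20a *ch = gk20a_channel_from_id(g, chid);

        if (ch == NULL) {
            return;    /* channel is dead or chid == FIFO_INVAL_CHANNEL_ID */
        }

        /* ... operate on the referenced channel ... */

        gk20a_channel_put(ch);    /* drop the reference taken by the lookup */
    }
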
Scott Long
38dee046b0 gpu: nvgpu: more nvgpu_memcpy changes
MISRA Rule 21.15 prohibits the use of memcpy() with incompatible pointers to
qualified/unqualified types.

To circumvent this issue we've introduced a new MISRA-compliant
nvgpu_memcpy() function.

This change switches over non-offending memcpy() uses in gr/pmu/volt
code to nvgpu_memcpy() with appropriate casts applied to maintain
consistency within the nvgpu source base.

Also fixed a Rule 8.3 violation in vfe_var.c by sync'ing the param
names between declarations of the devinit_get_vfe_var_table()
routine.

JIRA NVGPU-849

Change-Id: I004b461988bd3a26212b6fbf660ee7fa742ea1ba
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1952984
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-26 16:35:17 -08:00
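
A plausible shape of the wrapper and the call-site cast pattern the commit above describes; the exact nvgpu_memcpy() prototype may differ, and the sample structure is invented.

    #include <stdint.h>
    #include <string.h>

    void nvgpu_memcpy(uint8_t *destb, const uint8_t *srcb, size_t n)
    {
        (void)memcpy(destb, srcb, n);   /* single, centralized memcpy() use */
    }

    struct sample { uint32_t a; uint32_t b; };

    static void copy_sample(struct sample *dst, const struct sample *src)
    {
        /* Both pointers are cast to u8 * so the arguments have compatible
         * types, which is what Rule 21.15 requires. */
        nvgpu_memcpy((uint8_t *)dst, (const uint8_t *)src, sizeof(*dst));
    }
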
Sai Nikhil
f215026a8f gpu: nvgpu: change size related gpu_ops pointers
The return type of the function pointer *calc_global_ctx_buffer_size()
is changed from int to u32, along with all its implementations.

The type of the size arg in *set_big_page_size() is changed from int to
u32, along with all its implementations. These changes are necessary because
a size should be an unsigned value.

JIRA NVGPU-992

Change-Id: I3e4cd1d83749777aa8588a44a48772e26f190c4d
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1950503
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-26 10:44:53 -08:00
Konsta Holtta
5991f6b856 gpu: nvgpu: pass gr_ctx to map_global_ctx_buffers
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.
Also pass the channel vm and vpr flag instead of the whole channel as
only those are needed.

Jira NVGPU-1149

Change-Id: Ic0921ccaf65f208105b25f08f8d7b581a56b40fe
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925431
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:28 -08:00
Konsta Holtta
ca632a2e66 gpu: nvgpu: pass gr_ctx to commit_global_ctx_buffers
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I710afc48c0ed11b727cc1b9b6f440110aa404693
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925430
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:19 -08:00
Konsta Holtta
b9d391d391 gpu: nvgpu: pass gr_ctx to commit_global_cb_manager
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: Ia99a8cde17b2534cb6dbb976ee9cc9b5a3becf6c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925429
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:10 -08:00
Konsta Holtta
8fba129317 gpu: nvgpu: pass gr_ctx to ctx_patch_smpc
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I5a6f9455503687d9a043f88080903d146260166c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:01 -08:00
Konsta Holtta
95f1d19b94 gpu: nvgpu: pass gr_ctx to alloc_channel_patch_ctx
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.
Also pass the channel vm instead of the whole channel.

Jira NVGPU-1149

Change-Id: Id9d65841f09459e7acfc8c4ce4c6de7db054dbd8
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:52 -08:00
Konsta Holtta
50438811c8 gpu: nvgpu: inline alloc_tsg_gr_ctx
gr_gk20a_alloc_tsg_gr_ctx() is just g->ops.gr.alloc_gr_ctx() and one
assignment. Move that to the call site.

Jira NVGPU-1149

Change-Id: I2c7f0168c55468d2125c19a7041bc5d962ba9e44
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:42 -08:00
Konsta Holtta
d8b80c4e2a gpu: nvgpu: pass gr_ctx to init_golden_ctx_image
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I22e333247229db06bb79c40be30b5d2b48b350d7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925425
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:33 -08:00
Konsta Holtta
1825a79a7c gpu: nvgpu: pass gr_ctx to load_golden_ctx_image
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: Ie77a1b5e5372ba30ec3a5926768cf945f21c3afa
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822030
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:04 -08:00
Konsta Holtta
7c648d0572 gpu: nvgpu: pass gr_ctx to update_ctxsw_preemption
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I2138673b4facd8f5d15698f5dd14a99d84e873c4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822029
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:30:55 -08:00