Commit Graph

2643 Commits

Vedashree Vidwans
a039261724 gpu: nvgpu: add gr.process_context_buffer_priv_segment gops
1. Add the below gr gops to process the context buffer's priv segment:
int (*process_context_buffer_priv_segment)(struct gk20a *g,
					 enum ctxsw_addr_type addr_type,
					 u32 pri_addr,
					 u32 gpc_num, u32 num_tpcs,
					 u32 num_ppcs, u32 ppc_mask,
					 u32 *priv_offset);
Update all chips to use gr_gk20a_process_context_buffer_priv_segment()
as the new gr HAL (see the wiring sketch below).
2. Add and use ppc, tpc and etpc count functions to retrieve the total counts.
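As a rough illustration of the HAL wiring (the surrounding per-chip hal
init code is not shown and is only indicative), each chip's gops
assignment points at the common helper:

/* illustrative gops hookup in a per-chip hal file */
g->ops.gr.process_context_buffer_priv_segment =
	gr_gk20a_process_context_buffer_priv_segment;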

Bug 2960720
JIRA NVGPU-5502

Change-Id: I6cec36c323ff49ded853cd5cbfd9e0a28602b8ed
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2340372
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Seema Khowala <seemaj@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
a629b48013 gpu: nvgpu: split channel sema wakeup function
Extract the functionality to post semaphore signals to one channel into
a separate function for readability.

Jira NVGPU-5491

Change-Id: Ib5e8d34f42a64c253b3b3b8cb9e2c5dd2656fd1f
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2340466
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
23d6b36101 gpu: nvgpu: add dma_fence semaphore support
Support exporting and importing semaphore-based synchronization with the
stable dma-fence API. The "Android" sync fence API used until now is
deprecated.

The Android sync framework is still kept as the default.

Jira NVGPU-5353

Change-Id: I9e57947adeb4d2ef5d59135ed7d008553c44f97c
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2336119
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seshendra Gadagottu
3bd0430aa8 gpu: nvgpu: for nvgpu-next do not reset grce engines twice
NVGPU_ENGINE_GRCE engines are reset twice, once in
nvgpu_init_prepare_hw() and again in nvgpu_ce_init_support().

To avoid this, remove the NVGPU_ENGINE_GRCE engine reset from
nvgpu_init_prepare_hw().

JIRA NVGPU-5288

Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Change-Id: Ic03dbff0a74e973ba423abfa004e49bdd8e451f7
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2336450
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
d44ed9d3a8 gpu: nvgpu: rollback gpfifo on error
Submitting new work may fail in the middle of writing the gpfifo
entries. Undo the increments on the gp_put shadow pointer in case of
error to avoid submitting wrong data during the next submit.
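A minimal sketch of the rollback (variable and function names here are
illustrative, not the verbatim patch):

u32 start_put = c->gpfifo.put; /* shadow put before writing entries */

err = nvgpu_submit_append_gpfifo(c, gpfifo, userdata, num_entries);
if (err != 0) {
	/* undo the partial gp_put increments so the next submit does
	 * not push stale entries to hardware */
	c->gpfifo.put = start_put;
	return err;
}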

Jira NVGPU-5491

Change-Id: I064eaac8773b24da0a56db79ac6bfd07c008da03
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2340464
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
f388b1f596 gpu: nvgpu: simplify cmdbuf construction in submit
Split out the wait cmd and incr cmd setup work in submit path to
separate functions to minimize cyclomatic complexity and to increase
readability.

Jira NVGPU-5491

Change-Id: I7dfabd2de287ae10aaae9fb8d4d85d752db8631c
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2340463
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Dinesh
b79bee9cea gpu: nvgpu: CCM reduction for vidmem clear
Add a common function, nvgpu_vidmem_clear_fence_wait(), that can be
used by multiple callers. This helps to reduce cyclomatic complexity
(CCM) and code duplication in the vidmem unit.

JIRA NVGPU-990

Change-Id: I3a7090588abda68900849443f6a8fa1bfa246bf4
Signed-off-by: Dinesh <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332691
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
3875c0825f gpu: nvgpu: avoid sema/channel dependencies
Move the per-channel hw semaphore object to be owned by the channel sync
(just as with syncpoints). Store just the channel ID in the hw
sema for debug prints to get rid of sema->channel dependencies. Make
nvgpu_semaphore_alloc() take a hw sema instead of a channel.

Fix up some channel-related documentation that has been incorrect.

Jira NVGPU-5353

Change-Id: I04d49da3aac50a4cea32e7393f48e6f85a80ca0d
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2339931
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Abdul Salam
97de1ba74d gpu: nvgpu: Use unified struct to store slave freq
Instead of using multiple structs, use a single nvgpu_clk_slave_freq
struct to store the slave freq of gpcclk.
With this patch, a single struct can be used by both clk_arb and clk_domain.
This removes the nvgpu_set_fll_clk struct, as nvgpu_clk_slave_freq serves
the same purpose.

NVGPU-4692

Change-Id: Ie45d63e4376b83e153a9aa75e2c4631c6dad857b
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2339213
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
rmylavarapu
700bd83b41 gpu: nvgpu: Rename/clean boardobj unit
- Removed unwanted boardobj includes
- Renamed functions and structs as per usage

NVGPU-4484

Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Change-Id: I792a4b64075d5e87f911c1073717dbe7107227a1
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2335991
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
rmylavarapu
8e545ef04b gpu: nvgpu: Fix boardobj allocation size
In the current implementation, the boardobj is allocated in
nvgpu_boardobj_construct_super for all units and that pointer is
assigned to the boardobj type. As the size differs between units,
assigning the boardobj pointer to a common type gives violations. Fix
this by allocating the memory ahead of time in each unit and later
calling construct_super to initialize the elements.
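A rough sketch of the new pattern, with a hypothetical unit object (the
real per-unit structs and the construct_super arguments differ):

/* hypothetical unit object embedding the base boardobj */
struct unit_obj {
	struct boardobj super;
	u32 unit_data;
};

/* the unit allocates its own full-sized object ahead of time... */
struct unit_obj *p = nvgpu_kzalloc(g, sizeof(*p));
if (p == NULL) {
	return -ENOMEM;
}

/* ...and construct_super only initializes the embedded base object */
err = nvgpu_boardobj_construct_super(g, &p->super, args);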

NVGPU-4484

Change-Id: I9b5ed1a6d8418fec48a29eee38d55fc7d83fcfab
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2335989
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
rmylavarapu
0115c26f1b gpu: nvgpu: Boardobj lite unit refactor
As the boardobj unit is used only in the PMU, the plan is to move
all boardobj-related functions, structures and macros to
boardobj-specific folders. This removes unnecessary usage of
boardobj outside the PMU.

NVGPU-4484

Change-Id: I9f0fda32e6affd1fce218eb0ac638a9dfc8b99c3
Signed-off-by: rmylavarapu <rmylavarapu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2335986
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
b8f398f6a7 gpu: nvgpu: clean up struct priv_cmd_entry
The valid flag is no longer useful as the lifetime of priv cmd entries
is clearer than before. Delete it. Also delete the stored gva, which can
be calculated from the nvgpu_mem plus offset.

Jira NVGPU-4548

Change-Id: Ibf322acbb2ab1a454e9b644af24c02d291b75633
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
(cherry picked partially from commit
b9f6512e803873aaa92218dcbc090ff31a4f9c50)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332509
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seema Khowala
cfc5bac059 gpu: nvgpu: add assert in nvgpu_writel_check
nvgpu_writel_check outputs a dbg message if the updated read value
does not match the requested write value. Change the dbg message to
an error message.
Use BUG_ON for a write mismatch, as failure to update a h/w register
is a bug and tells s/w to either add a fixed delay or use a timeout
to check for the updated register value.
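A rough sketch of the resulting behaviour (not the verbatim
implementation):

void nvgpu_writel_check(struct gk20a *g, u32 reg, u32 val)
{
	u32 read_val;

	nvgpu_writel(g, reg, val);
	read_val = nvgpu_readl(g, reg);
	if (val != read_val) {
		/* error message instead of the old dbg print */
		nvgpu_err(g, "reg 0x%x: wrote 0x%x, read back 0x%x",
			reg, val, read_val);
	}
	/* failure to update a h/w register is treated as a s/w bug */
	BUG_ON(val != read_val);
}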

JIRA NVGPU-5490

Change-Id: Ib11b7862d2990a56259d2f8c10d75c12c84bae5d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2338004
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
e5b23f33b9 gpu: nvgpu: add internal CONFIG_SYNC wrapper
The sync file support in Linux has been stabilized and the new config is
called CONFIG_SYNC_FILE. Even if perhaps not intended, both the
stabilized version and the legacy CONFIG_SYNC can coexist. To begin
supporting the stabilized version, add CONFIG_NVGPU_SYNCFD_ANDROID and
CONFIG_NVGPU_SYNCFD_NONE as choice configs, of which exactly one will be
set. A later patch will extend this with a choice for CONFIG_SYNC_FILE.

Jira NVGPU-5353

Change-Id: I67582b68d700b16c46e1cd090f1b938067a364e3
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2336118
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Vedashree Vidwans
cd7194cbc0 gpu: nvgpu: modify gmmu page table entry functions
Move the below chip-agnostic gmmu pte functions to common/mm/gmmu/pte.c:
- gmmu_aperture_mask()
- pte_dbg_print()

The default big page size for all chips is 64K, so move
gp10b_mm_get_default_big_page_size() to a common file and rename it
nvgpu_gmmu_default_big_page_size().

Modify gv11b_gpu_phys_addr() to use the get_iommu_bit() HAL.

JIRA NVGPU-4666

Change-Id: I512c42723faf2d03e5b367879c9c385dcf52cdc2
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2329560
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
05df07945a gpu: nvgpu: avoid channel dependency in priv cmdbuf
The priv cmdbuf queue needs only the vm_gk20a of the channel that owns
it. Pass the vm to the queue constructor and have the channel code store
the queue to itself instead of poking at the channel from the queue
code. Adjust the cmdbuf queue api to take the queue, not the channel.

Move the inflight job fallback calculation to the channel code. The size
of the channel gpfifo isn't needed in the queue; just the job count is.

[not part of the cherry-pick: a bunch of MISRA mitigations.]

Jira NVGPU-4548

Change-Id: I4277dc67bb50380cb157f3aa3c5d57b162a8f0ba
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2329659
(cherry picked from commit 83b2276f7bea563602eee20ce24b70ce70c8475a)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332508
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
991002c88b gpu: nvgpu: hide struct priv_cmd_entry
The type for entries allocated from the priv cmd queue no longer needs
to be visible to its users other than as an opaque handle, except for a
few minor debug prints. Make those prints output the entry pointer value
instead, and move the struct definition to priv_cmdbuf.c.

Jira NVGPU-4548

Change-Id: Ia75ff41d840ac928561525a46d5973640e4b5f7e
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2329658
(cherry picked from commit 3292cdadbc78ca129d1e0878c3947b0839487fc2)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332507
Reviewed-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Abdul Salam
076c85f813 gpu: nvgpu: Move clk_pmu struct from public include to unit include
As a part of refactoring, move nvgpu_clk_pmupstate from a public to a
private include.
Also remove all function pointers from the struct and use plain
functions, as only a single pstate is supported.
These function pointers were created to address multiple pstate support,
which is no longer needed.

NVGPU-4690

Change-Id: Iee556feed4a25902faba87a606418861185e4089
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2334826
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Vedashree Vidwans
6974f784e2 gpu: nvgpu: Update cbc_init_support() return error
Currently, nvgpu_cbc_init_support() doesn't return an error if
cbc.alloc_comptags() fails. Modify nvgpu_cbc_init_support() to check the
error returned by cbc.alloc_comptags(). If alloc_comptags() fails, free
the allocated cbc memory and set the cbc pointer to NULL.
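An error-path sketch (cleanup details are illustrative, not verbatim):

err = g->ops.cbc.alloc_comptags(g, cbc);
if (err != 0) {
	nvgpu_err(g, "comptags allocation failed");
	/* free the cbc memory and clear the pointer so a later init
	 * attempt starts from a clean state */
	nvgpu_kfree(g, cbc);
	g->cbc = NULL;
	return err;
}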

JIRA NVGPU-4666

Change-Id: Id7edaeebc81e7d7029d98bcdbffaf6506c8f0979
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2317822
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Abdul Salam
88d3640bc5 gpu: nvgpu: Refactor clk_domain unit
As a part of refactoring, this patch does the following:
* Move local structs to a unit-specific header file
* Move nvgpu_pmu_clk_domain_freq_to_volt from clk.c to
  clk_domain.c
* Move PMU-specific structs to ucode_clk_inf.h
* Merge content from nvgpu/clk.h into pmu/clk/clk.h
* Update the yaml file
This will help to have arch consistency across all units.

Change-Id: Ied5c6ee637e7fd5bbdee3f5bc3f6cf216454428a
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2333366
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Divya Singhatwaria
c060e754fc gpu: nvgpu: ELPG dump stats at shutdown
The ELPG_DISALLOW command fails during gk20a shutdown.
This was due to nvgpu_can_busy() previously returning
0 without acknowledging the ELPG_DISALLOW command.

Since the system is shutting down, fix this issue by setting the ACK
for the disallow command without waiting for the actual ACK from the
PMU. In doing so the state machine is also maintained properly and the
driver does not dump failure stats.

BUG 200588696

Change-Id: I943d8e6108fa0f9c418ccb1a7f061307823f1ec6
Signed-off-by: Divya Singhatwaria <dsinghatwari@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2308557
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
9bee2fe660 gpu: nvgpu: prealloc priv cmdbuf metadata
Move preallocation of priv cmdbuf metadata structs to the priv cmdbuf
level and do it always, not only on deterministic channels. This makes
job tracking simpler and loosens dependencies from jobs to cmdbuf
internals. The underlying dma memory for the cmdbuf data has always been
preallocated.

Rename the priv cmdbuf functions to have a consistent prefix.

Refactor the channel sync wait and incr ops to free any priv cmdbufs
they allocate. They have been depending on the caller to free their
resources even on error conditions, requiring the caller to know how
they work.

The error paths that could occur after a priv cmdbuf has been allocated
have likely been wrong for a long time. Usually the cmdbuf queue allows
allocating only from one end and freeing from only the other end, as
that's natural with the hardware job queue. However, in error conditions
the just recently allocated entries need to be put back. Improve the
interface for this.

[not part of the cherry-pick:] Delete the error prints about not enough
priv cmd buffer space. That is not an error. When obeying the
user-provided job sizes more strictly, momentarily running out of job
tracking resources is possible when the job cleanup thread does not
catch up quickly enough. In such a case the number of inflight jobs on
the hardware could be less than the maximum, but the inflight job count
that nvgpu sees via the consumed resources could reach the maximum.
Also remove the wrong translation of err to -EINVAL in one call to
nvgpu_priv_cmdbuf_alloc(); the -EAGAIN from the failed allocation is
important.

[not part of the cherry-pick: a bunch of MISRA mitigations.]

Jira NVGPU-4548

Change-Id: I09d02bd44d50a5451500d09605f906d74009a8a4
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2329657
(cherry picked from commit 25412412f31436688c6b45684886f7552075da83)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332506
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
6fc1e41150 gpu: nvgpu: split submit on deterministic
Avoid repetitive branching on the c->deterministic flag and on build
time flags by splitting the submit function on the runtime flag into two
functions, of which one gets called.

In deterministic mode the job tracking conditions are simpler, there are
a few extra prechecks to guarantee deterministic latency and the
railgate corner case, and deferred cleanup is never done.

In nondeterministic mode job tracking has more conditions, a power
reference is taken for the job lifetime, and deferred cleanup is
assumed.

These two paths still share some common code. Split it into two more
functions that act as easy building blocks so that the main logic is
apparent.

Jira NVGPU-4548

Change-Id: I64f91dcf09acb16f409dc04a12ad1e144d0cce56
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2333728
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
b077c6787d gpu: nvgpu: split sync and gpfifo work in submit
Make the big submit function somewhat shorter by splitting the work of
job allocation, sync command buffer creation and gpfifo writing out
into another function. To emphasize the difference between tracked and
fast submits, add two separate functions for those two cases.

Jira NVGPU-4548

Change-Id: I97432a3d70dd408dc5d7c520f2eb5aa9c76d5e41
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2333727
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Antony Clince Alex
96bea78d55 gpu: nvgpu: add init_hw, intr_enable hals to ce gops
Add following two HALs to ce gops:
- init_hw:
  Build a list of non-stall interrupt vectors and register them
  with struct nvgpu_mc.
- intr_enable:
  Enable ce engine stall, non-stall interrupts.

Jira: NVGPU-5034

Change-Id: Ibdc768c2bce778237233803ebbbd5190362b4578
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2329166
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Abdul Salam
af3311ddea gpu: nvgpu: Refactor clock_domain unit
As a part of refactoring, move the nvgpu_clk_domain struct from a
public to a private header.
This will help to have arch consistency across all units.
Use public functions to fetch the data from other units.
The following functions are added to access data in the clk_domain unit:
* nvgpu_pmu_clk_domain_get_f_points() --> to get freq points
* nvgpu_pmu_clk_domain_update_clk_info() --> to update the change seq
  script with clock domain data

NVGPU-4689

Change-Id: Idc85e3cf5bbe1b80766ce6c9f07b3305ef04cbdc
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2332185
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Tejal Kudav
25461c7621 gpu: nvgpu: Move nvlink HAL code to /hal
Remove the nvlink register read/write code from /common.
Move the register handling code to /hal and add
HALs to expose this functionality to common code.

JIRA NVGPU-2964

Change-Id: Iafba9f4e29cc0f1130dbf5dd14fbbf8b6b5bb8ec
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2329195
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Seema Khowala
d013e42f60 gpu: nvgpu: rename INTR_* defines
Rename INTR_* to MC_INTR_* defines.

JIRA NVGPU-5032

Change-Id: Iee291e2003171e3cf02b6452f1567747093e5773
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2331742
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
dd2fb50a1a gpu: nvgpu: require deferred cleanup for aggressive sync destroy
Aggressive sync destroy is used on some platforms where the amount of
syncpoints is limited. It can cause sync objects to get allocated and
freed in the submit path and when jobs are cleaned up, so require
deferred cleanup. Allocations do not belong to job tracking in a
deterministic submit path.

Although this has been technically allowed before, deterministic
channels have likely not been a priority on those old platforms with
aggressive sync destroy set.

Update virtualized gp10b platform data to match on a gp10b-vgpu compat
string instead of gk20a-vgpu. gk20a (Tegra T124) hasn't been supported
for a long time. Delete the aggressive sync destroy field from this
platform. It's got enough syncpoints to not dynamically allocate them;
having this property set for gp10b-vgpu has likely been a mistake.

This is not a completely pure cherry-pick: also extend the gpu
characteristics to not advertise full deterministic submit support when
aggressive sync destroy is off. This platform flag cannot be adjusted by
the user unlike many other flags.

Jira NVGPU-4548

Change-Id: I283f546d48b79ac94b943d88e5dce55710858330
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2322042
(cherry picked from commit b1ba2b997b2174e365bcb0782ef3e67260ff9e57)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328411
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
0b70fff5db gpu: nvgpu: fix job count calculation for non-pow2
The CIRC_SPACE and CIRC_CNT macros work as expected when the buffer size
is a power of two. The userspace-supplied number of inflight jobs is not
necessarily so. Compare the get and put pointers manually.
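For reference, a minimal sketch of the manual comparison that works for
any queue size (CIRC_CNT()/CIRC_SPACE() assume a power-of-two size):

static inline u32 ring_used(u32 put, u32 get, u32 size)
{
	/* number of occupied entries; both indices are always < size */
	return (put >= get) ? (put - get) : (size - get + put);
}

static inline u32 ring_space(u32 put, u32 get, u32 size)
{
	/* keep one slot free so that full and empty can be told apart */
	return size - ring_used(put, get, size) - 1U;
}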

Jira NVGPU-4548

Change-Id: Ifa7bd6d78f82ec8efcac21fcca391053a2f6f311
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328572
(cherry picked from commit 33dffa1cfb142eea0f28474566c31b632eee04f5)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2331340
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
47c3d4582c gpu: nvgpu: hide priv cmdbuf gva and size
Add an accessor function in the priv cmdbuf object for gva and size to
be written in a gpfifo entry once the cmdbuf build is finished. This
helps in eventually hiding the struct priv_cmd_entry as an
implementation detail.

Add a sanity check to verify that the buffer has been filled exactly to
the requested size. The cmdbufs are used to hold wait and increment
commands for syncpoints or gpu semaphores. A prefence buffer can hold a
number of wait commands of equal size, and the postfence buffer holds
exactly one increment.
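A sketch of the accessor idea (the function name and entry fields are
illustrative only):

void nvgpu_priv_cmdbuf_finish(struct gk20a *g, struct priv_cmd_entry *e,
			u64 *gva, u32 *size)
{
	/* sanity check: the entry must have been filled exactly to the
	 * size that was requested when it was allocated */
	BUG_ON(e->fill_off != e->size);

	*gva = e->gva;
	*size = e->size;
}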

Jira NVGPU-4548

Change-Id: I83132bf6de52794ecc419e033e9f4599e488fd68
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325102
(cherry picked from commit d1831463a487666017c4c80fab0292a0b85c7d83)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2331339
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Dinesh
1c1da3d6b4 gpu: nvgpu: Syncpoint invalid value to ~0.
As the QNX syncpoint invalid value is ~0, change the code
to handle this.

Bug 200603716

Change-Id: I5ec79688cd9e60066725781f1effe57692ec0c27
Signed-off-by: Dinesh <dt@nvidia.com>
(cherry picked from commit 705260565a75bc90683841c4c08e4c857bda39f0)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2331208
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
e9747d5477 gpu: nvgpu: remove wait_fence_fd from incr_user
The wait_fence_fd parameter in nvgpu_channel_sync_incr_user() has not
been used since commit 1a4647272f ("gpu: nvgpu: remove fence
dependency tracking") where it was used to save a dependency fd to
sema-based post fences. The commit probably should have removed this
param; it has no purpose in the current design.

Jira NVGPU-4548

Change-Id: Id7e68b24f8e9ba0e43ff01b7af946434580b166e
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2326604
(cherry picked from commit f8031142270fb87ac41597ae70a80505078ae6d5)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328423
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
aa1322f975 gpu: nvgpu: move syncpt priv cmd allocation
channel_sync_syncpt_gen_wait_cmd() is rather simple now and is called
from two places, one of which has the buf preallocated and the other
doesn't. Remove the preallocated flag from the function, moving the
allocation to the single place where it is needed.

Jira NVGPU-4548

Change-Id: I48083f4f6f1093d64b67c63582291392a3481932
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325101
(cherry picked from commit afb566721e2b4c15349ff79d51f5eddc49b66014)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2331338
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
39844fb27c gpu: nvgpu: hide priv cmdbuf mem writes
Add an API to append data to a priv cmdbuf entry. Hold the write pointer
offset internally in the entry instead of having the user keep track of
where those words are written to.

This helps in eventually hiding struct priv_cmd_entry from users and
provides a more consistent interface in general. The wait and incr
commands are now slightly easier to read as well when they're just
arrays of data.

A syncfd-backed prefence may be composed of several individual fences.
Some of those (or even a fence backed by just one) may be already
expired, and currently the syncfd export design releases and nulls
semaphores when expired (see gk20a_sync_pt_has_signaled()) so for those
the wait cmdbuf is appended with zeros; the specific function is for
this purpose.
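A sketch of the append API (names and entry fields are illustrative;
nvgpu_mem_wr_n() and nvgpu_memset() are the existing nvgpu_mem helpers):

void nvgpu_priv_cmdbuf_append(struct gk20a *g, struct priv_cmd_entry *e,
			u32 *data, u32 entries)
{
	/* the entry tracks its own fill offset, so sync code just hands
	 * over an array of command words */
	nvgpu_mem_wr_n(g, e->mem, (e->off + e->fill_off) * (u32)sizeof(u32),
			data, entries * (u32)sizeof(u32));
	e->fill_off += entries;
}

void nvgpu_priv_cmdbuf_append_zeros(struct gk20a *g,
			struct priv_cmd_entry *e, u32 entries)
{
	/* used for prefence slots whose backing fence has already expired */
	nvgpu_memset(g, e->mem, (e->off + e->fill_off) * (u32)sizeof(u32),
			0, entries * (u32)sizeof(u32));
	e->fill_off += entries;
}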

Jira NVGPU-4548

Change-Id: I1057f98c1b5b407460aa6e1dcba917da9c9aa9c9
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325099
(cherry picked from commit 6a00a65a86d8249cfeb06a05682abb4771949f19)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2331336
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Tejal Kudav
0c9f589f3f gpu: nvgpu: Remove TLC error regs from dev_reginit
The TLC error registers will be programmed as part of
interrupt and error initialization code. This will help move
all common.nvlink_turing_intr unit related code together.

JIRA NVGPU-4350

Change-Id: I1c291f346eee890ee973889473b44227306d0400
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2327621
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Petlozu Pravareshwar <petlozup@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
tkudav
3856381b43 gpu: nvgpu: Clear nvlink error persistent state
Error logging bits within the nvlink blocks like TLC and MIF are
persistent through reset, to enable them to be polled following
a reset event.  That means that they are in an unknown state at
cold reset, and may contain error state after a warm reset event.
Software is expected to reset them, either by writing ones to the
status bits or by writing to the DEBUG_RESET register at the IOCTRL
top level, to clear the state out before enabling error reporting.

JIRA NVGPU-4352

Change-Id: Iab4e96388fd827c0d694eada61b20f24bbddd1ff
Signed-off-by: tkudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2317683
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Tejal Kudav
5af8cedf05 gpu: nvgpu: Nvlink interrupt handling
Enable logging and error reporting for MIF, DLPL, and TLC blocks.
Configure the NVLIPT and IOCTRL interrupt registers to roll up
the MIF and TLC errors on the link-specific fatal line and the
DLPL interrupts on the link-specific intr_a (fatal) line. Both
link_err_fatal and link_intr_a are rolled up to the stall interrupt line.
In the handling ISR, clear the interrupt status registers and print
an error.
Move the interrupt handling HAL code to /common/hal.

JIRA NVGPU-4350
JIRA NVGPU-4351
JIRA NVGPU-5231
JIRA NVGPU-4354
JIRA NVGPU-4355
JIRA NVGPU-4356

Change-Id: I14812499caf506592f3ae84d6681d857730d31ff
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2313221
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
d58d6ff321 gpu: nvgpu: use job count for priv cmdbuf size
Reduce the priv cmdbuf allocation size to match the actual space needed
in the worst case when num_in_flight is not specified. Although
synchronization may indeed take up to 2/3 of the gpfifo entries, the
number of jobs is what matters and it will be the remaining 1/3.

Each job uses up at most one wait and one incr command from the pre and
post fences, so half of the 2/3 will be only wait commands and the other
half will be only incr commands.
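A worked example of the sizing (all names here are illustrative):

static u32 priv_cmdbuf_queue_words(u32 gpfifo_entries,
			u32 wait_cmd_words, u32 incr_cmd_words)
{
	/* with num_in_flight unspecified, at most a third of the gpfifo
	 * entries can be job entries; the rest is sync commands */
	u32 job_count = gpfifo_entries / 3U;

	/* each job needs at most one wait and one incr command, so size
	 * for job_count of each rather than 2/3 of the gpfifo of both */
	return job_count * (wait_cmd_words + incr_cmd_words);
}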

Jira NVGPU-4548

Change-Id: Ib3566a76b97d8f65538d961efb97408ef23ec281
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325233
(cherry picked from commit 515deae4f58fedc7d004988f0f85470a7a894ddf)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328413
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
116c385089 gpu: nvgpu: alloc priv cmdbuf based on chip
The semaphore wait and incr sizes are not 8 and 10 for gv11b onwards.
Use the specific HAL API to retrieve their sizes and compute the priv
cmdbuf queue size based on them instead of the up-to-gp10b values (a
sizing sketch follows the list below).

We likely haven't run out of space for several reasons:

1) userspace may not need both pre and post fences for absolutely each
   submitted job
2) submitted jobs may consist of more than one gpfifo entry, reducing
   the relative required sync capacity
3) the queue size is rounded up to the next power of two which leaves
   some margin for error in this calculation
4) the gpfifo size based num-in-flight guess has been twice as big as it
   needs to be (fixed in a next patch)
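A sizing sketch under the assumption of per-chip HAL accessors for the
semaphore command sizes (the accessor names are indicative, not
verbatim):

/* query the per-chip sizes instead of assuming the up-to-gp10b
 * constants (8 words for a wait, 10 for an incr) */
u32 wait_words = g->ops.sync.sema.get_wait_cmd_size();
u32 incr_words = g->ops.sync.sema.get_incr_cmd_size();
u32 queue_words = job_count * (wait_words + incr_words);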

Jira NVGPU-4548

Change-Id: I172b5c0d8bb7d2231cc45cbed5e1e8b60ce7c707
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323148
(cherry picked from commit 03fb194d105242c3eb20a9857a22743f5f64b9b9)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328412
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
00203b42f2 gpu: nvgpu: split add_sema_cmd to wait and incr
The internal add_sema_cmd() used when making cmd buf entries has so many
branches it makes sense to split it at the bool acquire flag into two
functions. The wait part doesn't even need the wfi flag, and the incr
part doesn't need offset.

Jira NVGPU-4548

Change-Id: Iab26b9bc14564e2958935ab7ffda03aa873dd9b1
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323320
(cherry picked from commit 9fe2830aa9ee2b0b165edc959defa74dfb49c6ba)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328410
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
6202ead057 gpu: nvgpu: split sema sync hal to wait and incr
Instead of one HAL op with a boolean flag to decide between two
entirely different operations, use two separate HAL ops for
filling priv cmd bufs with semaphore wait and semaphore increment
commands. It's already two ops for syncpoints, and explicit commands are
more readable than boolean flags.

Change offset into cmdbuf in sem wait HAL to be relative to the cmdbuf,
so the HAL adds the cmdbuf internal offset to it.

While at it, modify the syncpoint cmdbuf HAL ops' prototypes to be
consistent.
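A before/after sketch of the HAL shape (member names and parameter
lists are illustrative):

/* before: one op, behaviour selected by the acquire flag */
void (*add_sema_cmd)(struct gk20a *g, bool acquire,
		struct nvgpu_semaphore *s, u64 sema_va,
		struct priv_cmd_entry *cmd, u32 off);

/* after: two explicit ops, mirroring the syncpoint HALs; the wait op
 * keeps the cmdbuf-relative offset, the incr op keeps the wfi flag */
void (*add_sema_wait_cmd)(struct gk20a *g, struct nvgpu_semaphore *s,
		u64 sema_va, struct priv_cmd_entry *cmd, u32 off);
void (*add_sema_incr_cmd)(struct gk20a *g, struct nvgpu_semaphore *s,
		u64 sema_va, struct priv_cmd_entry *cmd, bool wfi);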

Jira NVGPU-4548

Change-Id: Ibac1fc5fe2ef113e4e16b56358ecfa8904464c82
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323319
(cherry picked from commit 08c1fa38c0fe4effe6ff7a992af55f46e03e77d0)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328409
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Abdul Salam
b029f3b2b0 gpu: nvgpu: Refactor clk_fll unit
As a part of refactoring, move struct nvgpu_avfsfllobjs from a public
header to a private header.
This will help to have arch consistency across all units.
Use public functions to fetch the data from other units.

NVGPU-4690

Change-Id: I73a750695c2ae7d3e46d1d692d10e40f13ec3cb3
Signed-off-by: Abdul Salam <absalam@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/#/c/linux-nvgpu/+/2326675/
(cherry picked from commit 41e374461da5dc9e2b4ac67a0855fd8dd20e1450)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328538
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Ramesh Mylavarapu <rmylavarapu@nvidia.com>
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00
Konsta Hölttä
1dcd4957f0 gpu: nvgpu: extract job from channel.c
Start moving job and job list related functionality out of the big
channel.c file. The lowest level job list stuff is moved, as is resource
preallocation which is tied to the job list. Adding and cleaning jobs
still stays in channel.c for now.

The joblist is still owned by the channel as a direct struct field.

Jira NVGPU-4548

Change-Id: I2733484d8ce6bd7b1fe0c32a867139c682616dfd
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323149
(cherry picked from commit cbd20803ee10058da9d258e9e8cb91b34d2278d5)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328408
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
72151c579f gpu: nvgpu: hide priv cmd queue type
Move struct priv_cmd_queue to priv_cmdbuf.c so that its definition does
not need to be visible to all users of channel.h. This also forces it to
be separately allocated (during channel init time).

While at it, rename the functions to allocate and free priv cmdbuf
queues now that they're not in channel.c anymore. A private command
buffer queue is a piece of dma memory from which entries for incr and
wait command lists are suballocated. As the name implies, it's a queue;
allocations and frees of the bufs must happen in a certain order.
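A sketch of the resulting opaque-type layout (the alloc/free names and
fields are indicative):

/* in the header: only a forward declaration, so channel code can hold
 * a pointer without seeing the fields */
struct priv_cmd_queue;

int nvgpu_priv_cmdbuf_queue_alloc(struct vm_gk20a *vm, u32 job_count,
			struct priv_cmd_queue **queue);
void nvgpu_priv_cmdbuf_queue_free(struct priv_cmd_queue *queue);

/* in priv_cmdbuf.c: the full definition, separately allocated at
 * channel init time */
struct priv_cmd_queue {
	struct nvgpu_mem mem;	/* backing dma memory */
	u32 size;		/* queue size in words */
	u32 put;		/* write index */
	u32 get;		/* read index */
};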

Jira NVGPU-4548

Change-Id: I1b47029f3a478e1942f24292918b7b59a5d91528
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323147
(cherry picked from commit 1fcf9b04275f44638059c0147dc16c1dc6956510)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328407
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
b3d16b23d5 gpu: nvgpu: extract priv cmdbuf from channel.c
Move private command buffer related functionality to priv_cmdbuf.c. This
is used only for kernel mode submits, so it makes sense to group it out,
and the priv cmdbuf stuff is used also by things that don't care about
channels.

Jira NVGPU-4548

Change-Id: Idbb42e3ed3984e16c654bb9aa2b7564b780048a4
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323146
(cherry picked from commit bb67bfc7ab8e87236f31bc4f6c80dab042609f21)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2328406
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Konsta Hölttä
52835c39ae gpu: nvgpu: do not skip completed syncpt prefences
A corner case has existed since ancient times for syncpoint-backed
prefences to not cause a gpu wait if the fence is found to be completed
in the submit path. This adds some unnecessary complexity, so don't
check for completion in software. Let the gpu "wait" for these
known-to-be-trivial waits too. Necessary priv cmdbuf space has been
allocated anyway.

Originally nvhost had 16-bit fences which would wrap around relatively
quickly, so waiting for an old fence could have looked like waiting for
a fence that will expire long in the future. With 32-bit thresholds,
this hasn't been the case for several Tegra generations anymore, and
nvhost doesn't ignore waits like this either.

The wait priv cmdbuf in submit path can still be missing even with a
prefence supplied because the Android sync framework supports sync fds
that contain zero fences inside; this can happen at least when merging
fences that have all been expired. In such conditions the wait cmdbuf
wouldn't even get allocated.

[this is squashed with commit 8b3b0cb12d118 (gpu: nvgpu: allow no wait
cmd with valid input fence) from
https://git-master.nvidia.com/r/c/linux-nvgpu/+/2325677]

Jira NVGPU-4548

Change-Id: Ie81fd8735c2614d0fedb7242dc9869d0961610eb
Signed-off-by: Konsta Hölttä <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321762
(cherry picked from commit 8f3dac44934eb727b1bf4fb853f019cf4c15a5cd)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2324254
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Vedashree Vidwans
c6908922e5 gpu: nvgpu: move generic preempt hals to common
- Move the fifo.preempt_runlists_for_rc and fifo.preempt_tsg HALs to a
common source file as nvgpu_fifo_preempt_runlists_for_rc and
nvgpu_fifo_preempt_tsg.

Jira NVGPU-4881

Change-Id: I31f7973276c075130d8a0ac684c6c99e35be6017
Signed-off-by: Vedashree Vidwans <vvidwans@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2323866
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
2020-12-15 14:13:28 -06:00
Dinesh
8a94781aa9 gpu: nvgpu: Change pramin lock to mutex
As spinlock contention eats CPU cycles, the pramin lock
can be changed to a mutex.
Vidmem allocation is fully protected and vidmem pending is
an atomic variable, so the lock acquisition is removed.


JIRA NVGPU-4550

Change-Id: I0cecb8f4ee7e840fd698311572aedebbc8f49177
Signed-off-by: Dinesh <dt@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2321251
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: automaticguardword <automaticguardword@nvidia.com>
Reviewed-by: Ankur Kishore <ankkishore@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
GVS: Gerrit_Virtual_Submit
2020-12-15 14:13:28 -06:00