Commit Graph

514 Commits

Author SHA1 Message Date
Deepak Nibade
43c340de54 gpu: nvgpu: add HALs to allocate/map/commit global context buffers
Add the following new HALs to allocate/map/commit global context buffers:
gops.gr.alloc_global_ctx_buffers()
gops.gr.map_global_ctx_buffers()
gops.gr.commit_global_ctx_buffers()

Set these HALs for all the supported GPUs

For now, we reuse the following APIs to implement these HALs:
gr_gk20a_alloc_global_ctx_buffers()
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_commit_global_ctx_buffers()

Jira NVGPUT-27

Change-Id: I975a54e8d1716af057f982d543787748d35a256e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1743362
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-06-14 06:44:08 -07:00
Terje Bergstrom
ed65f1f26e gpu: nvgpu: Move setting priv interrupt to priv_ring
The registers that configure priv interrupts are in priv_ring, but the code
was in the bus HAL. Move the code and related HALs to priv_ring instead.

JIRA NVGPU-588

Change-Id: I708d11f77405dbba86586a0d1da42f65bcc1de9d
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730889
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-06-14 06:44:07 -07:00
Deepak Nibade
328a7bd3ff gpu: nvgpu: initialize bundle64 state
On some platforms the ucode sends us bundles with an address and a 64-bit value
This patch adds support to handle these 64-bit values

Add struct av64_gk20a to store an address and the corresponding 64-bit value
Add struct av64_list_gk20a to store a count and a list of av64_gk20a entries

Add API alloc_av64_list_gk20a() to allocate the list that supports 64-bit
values

In gr_gk20a_init_ctx_vars_fw(), if we see NETLIST_REGIONID_SW_BUNDLE64_INIT,
load the bundle64 state into the above local structures

Add new HAL gops.gr.init_sw_bundle64() and call it from gk20a_init_sw_bundle()
if defined

Also load the bundle for the simulation case in gr_gk20a_init_ctx_vars_sim()
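
For reference, a minimal sketch of the new structures and allocation helper;
the field layout and the nvgpu_vzalloc() usage are assumptions modeled on the
existing av_gk20a pattern:

struct av64_gk20a {
        u32 addr;
        u64 value;
};

struct av64_list_gk20a {
        struct av64_gk20a *l;
        u32 count;
};

/* Allocate backing storage for 'count' address/64-bit-value pairs. */
static int alloc_av64_list_gk20a(struct gk20a *g, struct av64_list_gk20a *avl)
{
        avl->l = nvgpu_vzalloc(g, avl->count * sizeof(*avl->l));
        return (avl->l == NULL) ? -ENOMEM : 0;
}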

Jira NVGPUT-96

Change-Id: I1ab7fb37ff91c5fbd968c93d714725b01fd4f59b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1736450
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-06-14 06:44:06 -07:00
Deepak Nibade
4252e00aa6 gpu: nvgpu: fix crash due to accessing incorrect TSG pointer
In gk20a_gr_isr(), we handle various errors including GPC/TPC errors.
If BPT errors are pending, we then call gk20a_gr_post_bpt_events() at the
end and pass the channel pointer to it

gk20a_gr_post_bpt_events() extracts the TSG pointer based on ch->tsgid

But in some race conditions it is possible that we clear the error and trigger
recovery, and as a result the channel is unbound from the TSG and closed by
user space before gk20a_gr_post_bpt_events() is called

In that case the code above fetches an incorrect TSG pointer and hence
crashes as below

Unable to handle kernel paging request at virtual address ffffff8012000c08
...
[<ffffff8008081f84>] el1_da+0x24/0xb4
[<ffffff80086e72e0>] gk20a_tsg_get_event_data_from_id+0x30/0xb0
[<ffffff80086e7560>] gk20a_tsg_event_id_post_event+0x50/0xc8
[<ffffff800872922c>] gk20a_gr_isr+0x27c/0x12e0

To fix this, extract the TSG pointer before handling the errors and pass it to
gk20a_gr_post_bpt_events(), which will post the events only if they are
enabled and the TSG is still open
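
Sketched, the ordering change in gk20a_gr_isr() looks roughly like this
(variable names are illustrative, not the literal diff):

/* Resolve the TSG up front, while the channel is still known to be valid. */
struct tsg_gk20a *tsg = NULL;

if (ch != NULL && ch->tsgid != NVGPU_INVALID_TSG_ID)
        tsg = &g->fifo.tsg[ch->tsgid];

/* ... handle GPC/TPC/BPT errors, which may trigger recovery and
 * unbind/close the channel ... */

if (post_bpt_events)
        gk20a_gr_post_bpt_events(g, tsg, global_esr);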

Bug 200404720

Change-Id: I4861c72e338a2cec96f31cb9488af665c5f2be39
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1735415
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-06-14 06:44:06 -07:00
Deepak Nibade
4607098c3a gpu: nvgpu: support CAU ctxsw list
CAU (Counter Aggregation Unit) registers might be split out from the SMPC
registers and moved into their own list on some platforms

In gr_gk20a_init_ctx_vars_fw(), add support to check whether a pm_cau list is
available; if it is, its count will be set to a non-zero value here

In add_ctxsw_buffer_map_entries_gpcs(), parse the pm_cau list if the count is
non-zero

Bug 2139870

Change-Id: Ia630e7d03481a6f927c6739d28ebfe49f221326f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1733208
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Matthew Braun (SW-GPU) <matthewb@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-30 11:56:42 -07:00
Vinod G
ac687c95d3 gpu: nvgpu: Code updates for MISRA violations
Code related to the MC module is updated to fix the following
MISRA violations (an illustrative before/after follows the rule list):

Rule 10.1: Operands shall not be of an inappropriate
essential type.
Rule 10.3: The value of an expression shall not be assigned
to an object with a narrower essential type.
Rule 10.4: Both operands of an operator shall have
the same essential type.
Rule 14.4: The controlling expression of an if statement shall
have essentially Boolean type.
Rule 15.6: The body of an if statement shall be enclosed in braces.
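
For illustration, a typical before/after for rules 14.4 and 15.6 (a
hypothetical snippet, not the actual MC diff; names are illustrative):

/* Before: non-Boolean controlling expression, no braces. */
if (mc_intr_0 & pending_mask)
        handle_pending_unit(g);

/* After: explicit comparison against an unsigned zero, braces added. */
if ((mc_intr_0 & pending_mask) != 0U) {
        handle_pending_unit(g);
}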

JIRA NVGPU-646
JIRA NVGPU-659
JIRA NVGPU-671

Change-Id: Ia7ada40068eab5c164b8bad99bf8103b37a2fbc9
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1720926
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-18 14:53:58 -07:00
Seema Khowala
e540bf87ae gpu: nvgpu: dump fecs method data and push adr
Dump the FECS method data and address if the ucode times out or
errors out. This is useful for debugging.

Change-Id: I79c4bc3bc30cfd09f273f4eb6b53863653227ecd
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1707761
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-10 10:53:24 -07:00
Terje Bergstrom
dd739fcb03 gpu: nvgpu: Remove gk20a_dbg* functions
Switch all logging to nvgpu_log*(). gk20a_dbg* macros are
intentionally left there because of use from other repositories.

Because the new functions do not work without a pointer to struct
gk20a, and piping it just for logging is excessive, some log messages
are deleted.
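
A typical conversion (the message string here is illustrative):

/* Before: implicit global debug state. */
gk20a_dbg_info("golden image size = %d", size);

/* After: the nvgpu_log* API takes the gk20a pointer explicitly. */
nvgpu_log_info(g, "golden image size = %d", size);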

Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704148
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-09 18:26:04 -07:00
Seema Khowala
c9463fdbb3 gpu: nvgpu: add rc_type i/p param to gk20a_fifo_recover
Add the following rc_types to be passed to gk20a_fifo_recover:
MMU_FAULT
PBDMA_FAULT
GR_FAULT
PREEMPT_TIMEOUT
CTXSW_TIMEOUT
RUNLIST_UPDATE_TIMEOUT
FORCE_RESET
SCHED_ERR
This is useful for knowing what triggered recovery.
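
Spelled out as identifiers (the RC_TYPE_ prefix and the numbering below are
assumptions for illustration):

#define RC_TYPE_MMU_FAULT                1U
#define RC_TYPE_PBDMA_FAULT              2U
#define RC_TYPE_GR_FAULT                 3U
#define RC_TYPE_PREEMPT_TIMEOUT          4U
#define RC_TYPE_CTXSW_TIMEOUT            5U
#define RC_TYPE_RUNLIST_UPDATE_TIMEOUT   6U
#define RC_TYPE_FORCE_RESET              7U
#define RC_TYPE_SCHED_ERR                8U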

Bug 2065990

Change-Id: I202268c5f237be2180b438e8ba027fce684967b6
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662619
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-03 21:43:06 -07:00
Seema Khowala
744f7f0498 gpu: nvgpu: add gr hal for fecs_ctxsw_mailbox size
fecs_ctxsw_mailbox_size varies per chip. Use a HAL to
get the size. Also dump fecs_ctxsw_status_1 to help
debugging.

Bug 2093809

Change-Id: I5a50281e9d78fe0e4a75d03971169e3e9679967a
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1698026
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-05-01 16:33:11 -07:00
Deepak Nibade
ae04f394cf gpu: nvgpu: add HAL to set ppriv timeouts
Add a new HAL, gops.bus.set_ppriv_timeout_settings(), to set platform-specific
ppriv timeouts
Set this HAL for all supported GPUs for now

Jira NVGPUT-35

Change-Id: I88b438a7bf381d0216e0947a16cd267461d0e8d7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1699314
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Richard Zhao <rizhao@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-22 07:33:43 -07:00
Richard Zhao
5ab3524f91 Revert "gpu: nvgpu: add hal op for gr set error notifier"
This reverts commit d6c6c6c483.

RM server has moved to gops.fifo.set_error_notifier.
gops.gr.set_error_notifier is not needed anymore.

Jira VQRM-3058

Change-Id: I0fe7f914778ce66701a699aece2b36a5cd8079da
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679708
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-19 12:17:34 -07:00
Deepak Nibade
a0dfb2b911 gpu: nvgpu: gv100: consider floorswept FBPA for getting unicast list
In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept FBPAs
and just calculate the unicast list assuming all FBPAs are present
This generates an incorrect list of unicast addresses

Fix this by introducing a new HAL, gops.gr.split_fbpa_broadcast_addr
Set gr_gv100_get_active_fpba_mask() for GV100
Set gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips

gr_gv100_get_active_fpba_mask() will first get the active FBPA mask and
generate the unicast list only for active FBPAs
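
Roughly, the GV100 path reads the FBPA floorsweeping state from the fuse and
only emits unicast offsets for active units. A sketch; the helper, stride and
table names below are assumptions:

/* Sketch: expand one FBPA broadcast offset into unicast offsets,
 * skipping floorswept FBPAs. */
static void emit_active_fbpa_unicast(struct gk20a *g, u32 broadcast_offset,
                u32 num_fbpas, u32 fbpa_stride,
                u32 *priv_addr_table, u32 *count)
{
        u32 active_fbpa_mask = gr_gv100_get_active_fpba_mask(g); /* from fuse */
        u32 i;

        for (i = 0U; i < num_fbpas; i++) {
                if ((active_fbpa_mask & (1U << i)) == 0U)
                        continue;       /* floorswept: emit nothing */
                priv_addr_table[(*count)++] =
                        broadcast_offset + i * fbpa_stride;
        }
}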

Bug 200398811
Jira NVGPU-556

Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694444
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-15 22:53:29 -07:00
Deepak Nibade
d91ea322e1 gpu: nvgpu: fix gpc/tpc index for SMPC broadcast conversion
In gv11b_gr_egpc_etpc_priv_addr_table(), we call
gv11b_gr_update_priv_addr_table_smpc() to convert an SMPC broadcast address
into a list of unicast addresses

But before calling gv11b_gr_update_priv_addr_table_smpc() we sometimes
incorrectly set gpc_num/tpc_num to zero, which leads to generating an
incorrect list of unicast addresses

Remove this incorrect initialization of gpc_num/tpc_num

Also update gv11b_gr_egpc_etpc_priv_addr_table() to receive tpc_num along with
gpc_num

Bug 2099717
Jira NVGPU-580

Change-Id: Idd4e5f78dbe6ca1800efae93c66355d06417d1f2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1691373
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-10 11:23:30 -07:00
Deepak Nibade
78151bb6f9 gpu: nvgpu: use HAL for chiplet offset
We currently use hard-coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and
NV_PMM_FBP_STRIDE, which are incorrect for Volta

Add a new GR HAL get_pmm_per_chiplet_offset() to get the correct value per chip
Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips
Set gr_gv11b_get_pmm_per_chiplet_offset() for Volta

Use the HAL instead of hard-coded values wherever required

Bug 200398811
Jira NVGPU-556

Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690028
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-10 11:23:11 -07:00
Deepak Nibade
19aa748be5 gpu: nvgpu: add support to get unicast addresses on volta
We have new broadcast registers on Volta, and we need to generate correct
unicast addresses for them so that we can write those registers to the context
image

Add a new GR HAL create_priv_addr_table() to do this conversion
Set gr_gk20a_create_priv_addr_table() for older chips
Set gr_gv11b_create_priv_addr_table() for Volta

gr_gv11b_create_priv_addr_table() will use the broadcast flags and then
generate the appropriate list of unicast registers for each broadcast register

Bug 200398811
Jira NVGPU-556

Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690027
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-10 11:23:07 -07:00
Deepak Nibade
4314771142 gpu: nvgpu: add broadcast address decode support for volta
Volta has more broadcast registers than previous chips,
and we don't decode them right now in gr_gk20a_decode_priv_addr()

Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr() for all
previous chips
Add and use gr_gv11b_decode_priv_addr() for Volta

gr_gv11b_decode_priv_addr() will decode all the broadcast registers and set
the broadcast flags appropriately

Define the following new broadcast flags:
PRI_BROADCAST_FLAGS_PMMGPC
PRI_BROADCAST_FLAGS_PMM_GPCS
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
PRI_BROADCAST_FLAGS_PMMFBP
PRI_BROADCAST_FLAGS_PMM_FBPS
PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP

Bug 200398811
Jira NVGPU-556

Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-10 11:23:03 -07:00
mpurohit
5bb1de5b31 gpu: nvgpu: gk20a: Use ENOSPC instead of ENOMEM
Modify gr_gk20a_add_zbc() to return -ENOSPC instead of -ENOMEM when
all slots are already used. ENOMEM is meant for memory allocation
failures, anyway.
This is required for porting the nvgpu_gpu_zbc test case to use libnvrm_gpu
APIs.

Bug 1967537
JIRA VQRM-3348

Change-Id: Ib7bcb6ba94d2ca168bcad517a8a7260fbf278a91
Signed-off-by: Mahesh Purohit <mpurohit@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1688302
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-06 09:23:50 -07:00
Deepak Nibade
89e0745fa0 gpu: nvgpu: handle misaligned_addr SM exception
We currently do not handle the misaligned_addr SM exception explicitly and
hence incorrectly initiate CILP on this exception

Handle this exception explicitly in this sequence -
- set the error notifier first
- clear the interrupt
- return an error from gr_gv11b_handle_warp_esr_error_misaligned_addr() so that
  RC recovery is triggered by gk20a_gr_isr()

Ensure that the error value is propagated back to gk20a_gr_isr() correctly

Use nvgpu_set_error_notifier_if_empty() to set the error notifier, since this
prevents overwriting the error notifier value in case gk20a_gr_isr() also
tries to write an error notifier value
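
In outline, the handler flow is roughly (a sketch; the notifier code, register
name and clear value are placeholders, not the literal diff):

/* 1. Set the error notifier only if none is pending yet, so a later
 *    write from gk20a_gr_isr() cannot clobber it (and vice versa). */
nvgpu_set_error_notifier_if_empty(ch, error_notifier);  /* placeholder code */

/* 2. Clear the warp ESR interrupt. */
gk20a_writel(g, warp_esr_reg, warp_esr_clear_val);      /* placeholder reg */

/* 3. Return non-zero so gk20a_gr_isr() triggers RC recovery
 *    instead of starting CILP for this warp error. */
return -EFAULT;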

Bug 200388475
Jira NVGPU-554

Change-Id: I84c4d202a8068e738567ccd344e05d9d5f6ad2f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1686781
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-04 11:49:46 -07:00
seshendra Gadagottu
0ccb0bfc87 gpu: nvgpu: initialize ctxsw state for golden context creation
If golden context creation happens before any GPU railgate, then
channel creation is always fine. If a GPU railgate happens after GPU
finalize power-on but before golden context creation, then golden
context creation fails during the first channel creation with a
watchdog timeout from ctxsw because of invalid ctxsw state.

To fix this issue, if the golden context is not yet created, always query
the ctxsw image sizes during finalize power-on; this puts the ctxsw
hardware in the correct state before golden context creation.

Bug 2051863

Change-Id: I81d221100a099b12bad3adc2d252de4621c335a5
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1682265
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-03 17:05:08 -07:00
Deepak Nibade
4b8432a663 gpu: nvgpu: fix address table for GPCS_TPC6 broadcast conversion
In gr_gk20a_create_priv_addr_table() and gv11b_gr_egpc_etpc_priv_addr_table(),
we create a table of unicast addresses from broadcast addresses
For GPC broadcast addresses like NV_PGRAPH_PRI_EGPCS_ETPC6_SM_*, we generate
the table assuming there are 7 TPCs in every GPC

But this is incorrect in some cases, e.g. GV100, where GPC0/1 have only 6 TPCs
And hence we end up generating registers which do not exist

Fix this by explicitly checking the number of TPCs and ensuring that every
generated address belongs to a valid TPC
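
Conceptually, the broadcast expansion now bounds the TPC index by the per-GPC
TPC count instead of a fixed maximum (a sketch; the address helper name is
illustrative):

/* Only emit unicast addresses for TPCs that actually exist in each GPC. */
for (gpc_num = 0U; gpc_num < gr->gpc_count; gpc_num++) {
        if (tpc_num >= gr->gpc_tpc_count[gpc_num])
                continue;       /* e.g. TPC6 on a 6-TPC GPC of GV100 */

        priv_addr_table[t++] =
                unicast_tpc_addr(g, addr, gpc_num, tpc_num); /* illustrative */
}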

Bug 200400376
Jira NVGPU-564

Change-Id: I65d7d6cd7f0bf16171eb54ed71f1f3840ade3495
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1686806
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-04-03 08:23:08 -07:00
Richard Zhao
8d8ff9d34e gpu: nvgpu: add gops.fifo.set_error_notifier
RM Server overrides it for handling stall interrupts.

Jira VQRM-3058

Change-Id: I8b14f073e952d19c808cb693958626b8d8aee8ca
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1679709
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 18:54:29 -07:00
Seema Khowala
f81d83690f gpu: nvgpu: use gpc_tpc_count[gpc] for number of tpc in a gpc
Using tpc_count instead of gpc_tpc_count indexed by GPC results
in a pbus error with decode error or client floorswept error codes.
tpc_count represents the total number of TPCs, while gpc_tpc_count[gpc]
represents the number of TPCs in the indexed GPC.

Bug 1998067

Change-Id: I9adfb98a6c3e209cbb02a8cd5090f6b6adc1ec4b
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1682469
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-29 13:53:50 -07:00
Deepak Nibade
77b806fe7e gpu: nvgpu: gv100: fix PMA list alignment in ctxsw buffer
The GV100 ucode has changed so that it expects the LIST_nv_perf_pma_ctx_reg
list in the ctxsw buffer to be 256-byte aligned, but the same change is not
applied to other chips' ucode

Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register list and
define a common implementation gr_gk20a_add_ctxsw_reg_perf_pma() for all
chips except GV100

Define a separate HAL for GV100, gr_gv100_add_ctxsw_reg_perf_pma(), and fix
the required alignment in this function

Bug 1998067

Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676655
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-21 06:04:38 -07:00
Deepak Nibade
66751bc05d gpu: nvgpu: gv100: fix num_fbpas while adding ctxsw buffer entries
For LIST_nv_pm_fbpa_ctx_regs, we currently call
add_ctxsw_buffer_map_entries_subunits() to add registers corresponding
to all the FBPAs

But while configuring the total number of registers, we do not consider
floorswept FBPAs, and that causes misalignment of subsequent lists on GV100

Fix this by reading the disabled/floorswept FBPAs from the fuse and considering
only the FBPAs which are active on GV100

Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and define a
common implementation gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips except
GV100

Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the above
implementation to account for floorsweeping

Bug 1998067

Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-21 06:04:35 -07:00
Aparna Das
ae1b86ed4f gpu: nvgpu: add gpu_va to update_hwpm_ctxsw_mode() parameters
This allows the function to use a fixed mapping.

Jira VQRM-2982

Change-Id: I98159c5b199ce1854b1b40704392237cadb71ef2
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1660225
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-by: Nirav Patel <nipatel@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-16 07:34:12 -07:00
Shashank Singh
23a855b852 gpu: nvgpu: add fault_ch to record_sm_error_state
fault_ch is needed by the RM server to send the notification to the guest VM.
The RM server is going to use the gr sources from Linux.

Jira VQRM-2982

Change-Id: Ifb6e8a9630a471d07b89ffaa7f2ceb309220fd21
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1661665
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-13 14:09:33 -07:00
Shashank Singh
db089a73a5 gpu: nvgpu: add refcounting for ctxsw disable/enable
ctxsw disable could be called recursively for the RM server: suspend
contexts disables ctxsw at the beginning, then calls TSG disable and
preempt. If a preempt timeout happens, it goes down the recovery path, which
will try to disable ctxsw again. More details in Bug 200331110.
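
The idea, as a sketch (the counter, lock and FECS method names below are
assumptions; error/timeout handling omitted):

int gr_gk20a_disable_ctxsw(struct gk20a *g)
{
        int err = 0;

        nvgpu_mutex_acquire(&g->ctxsw_disable_lock);
        g->ctxsw_disable_count++;
        if (g->ctxsw_disable_count == 1)    /* only the first caller stops ctxsw */
                err = gr_gk20a_ctrl_ctxsw(g,
                        gr_fecs_method_push_adr_stop_ctxsw_v(), NULL);
        nvgpu_mutex_release(&g->ctxsw_disable_lock);
        return err;
}

int gr_gk20a_enable_ctxsw(struct gk20a *g)
{
        int err = 0;

        nvgpu_mutex_acquire(&g->ctxsw_disable_lock);
        g->ctxsw_disable_count--;
        if (g->ctxsw_disable_count == 0)    /* only the last caller restarts ctxsw */
                err = gr_gk20a_ctrl_ctxsw(g,
                        gr_fecs_method_push_adr_start_ctxsw_v(), NULL);
        nvgpu_mutex_release(&g->ctxsw_disable_lock);
        return err;
}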

Jira VQRM-2982

Change-Id: I4659c842ae73ed59be51ae65b25366f24abcaf22
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1671716
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-12 09:13:00 -07:00
Alex Waterman
418f31cd91 gpu: nvgpu: Enable IO coherency on GV100
This reverts commit 848af2ce6d.

This is a revert of a revert; it re-enables IO coherence.

JIRA EVLR-2333

Change-Id: Ibf97dce2f892e48a1200a06cd38a1c5d9603be04
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1669722
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-07 18:04:41 -08:00
Aparna Das
ca95adb2d4 gpu: nvgpu: add hal op to handle semaphore pending
The vserver variant of gr handle semaphore pending needs different
functionality to send an interrupt to the VM. Add a HAL operation so that
the vserver use case can override it.

Jira VQRM-2982

Change-Id: I5fee5a491c6e54344f9da477eaf5881c50335bbc
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658298
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:52:52 -08:00
Aparna Das
f6cac2e0c4 gpu: nvgpu: add debugger.post_events HAL op
The RM server will need to set a specific HAL op and notify the vgpu client.

Jira VQRM-2982

Change-Id: I679565831635ff3fadf0bdc1af5fd7a8679b6fdd
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1660226
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:52:39 -08:00
Richard Zhao
d6b5d74c5e gpu: nvgpu: make gr functions that are used by vsrv global
Fixes vserver link errors for gr unification.

Jira VQRM-2982

Change-Id: Icd46792191f1a9aaefbf86d2f3c0b4d5bce2384e
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1664706
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:52:30 -08:00
Aparna Das
98d91dd260 gpu: nvgpu: add hal op to handle post event id
The vserver variant of gr post event id needs different
functionality to send an interrupt to the VM. Add a HAL operation
so that the vserver use case can override it.

Jira VQRM-2982

Change-Id: I915d089ef751023968c1e8ab181c21afeec997a5
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658382
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:52:21 -08:00
Aparna Das
d654ab4863 gpu: nvgpu: add hal op to handle notify pending
The vserver variant of gr handle notify pending needs different
functionality to send an interrupt to the VM. Add a HAL operation so that
the vserver use case can override it.

Jira VQRM-2982

Change-Id: I4cb88d4d769a5d5cb98a4ee6ac3fbb74245cb5f2
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1658255
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:52:12 -08:00
Aparna Das
d6c6c6c483 gpu: nvgpu: add hal op for gr set error notifier
The vserver variant of gr set error notifier needs different
functionality to send an interrupt to the VM. Add a HAL operation
so that the vserver use case can override it.

Jira VQRM-2982

Change-Id: Ia445a27112bb6c5587dbb81100a9dafe5875b338
Signed-off-by: Aparna Das <aparnad@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1657830
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-06 14:52:03 -08:00
seshendra Gadagottu
acbbd4e357 gpu: nvgpu: modify logic for fecs arb idle check
Check these conditions for fecs arbiter idle:
1. Wait for gr_fecs_arb_ctx_cmd to idle
2. Wait for gr_fecs_ctxsw_status_1:arb_busy to idle

Waiting only for condition 1 guarantees dispatch of the
command but not completion of the work.
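
As a sketch (timeout handling omitted; the field accessors follow the usual
hw_gr_*.h naming convention but are assumptions here):

u32 val;

/* 1. Wait for the arbiter command register to go idle: the command
 *    has been dispatched. */
do {
        val = gk20a_readl(g, gr_fecs_arb_ctx_cmd_r());
} while (gr_fecs_arb_ctx_cmd_cmd_v(val) != 0U);

/* 2. Wait for ctxsw_status_1 to report the arbiter not busy: the
 *    dispatched work has actually completed. */
do {
        val = gk20a_readl(g, gr_fecs_ctxsw_status_1_r());
} while (gr_fecs_ctxsw_status_1_arb_busy_v(val) != 0U);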

Bug 2056266

Change-Id: I3fcbdb9cda91fee0ff345a24bf23ba7dabf268d0
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667550
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-05 22:22:49 -08:00
Seema Khowala
33d1e22230 gpu: nvgpu: clean up memory leaks in gr init
This resolves memory leaks seen after modifying the
tpc_fs_mask sysfs node, which sets gr.sw_ready to false and
forces gr to re-initialize:

echo 1 > /sys/devices/gpu.0/force_idle
echo 5 > /sys/devices/gpu.0/tpc_fs_mask
echo 0 > /sys/devices/gpu.0/force_idle

Bug 200393029

Change-Id: I76299f53fc87823071c672ec682c3eb51f72f513
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1666018
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-05 22:22:36 -08:00
Timo Alho
848af2ce6d Revert "Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"""
This reverts commit 89fbf39a05.

Bug 2075315

Change-Id: Id34a0376be5160b164931926ec600f77edf69667
Signed-off-by: Timo Alho <talho@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1668487
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
2018-03-05 08:39:57 -08:00
Alex Waterman
89fbf39a05 Revert "Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working""
This reverts commit 5a35a95654.

JIRA EVLR-2333

Change-Id: I923c32496c343d39d34f6d406c38a9f6ce7dc6e0
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1667167
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-02 22:10:14 -08:00
Seema Khowala
abe829338c gpu: nvgpu: do not alloc ctx buffers if already allocated
Bug 200393029

Change-Id: Ic2946958e34bcb9247179fcf2e8735c822155cce
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665338
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Rajkumar Kasirajan <rkasirajan@nvidia.com>
Reviewed-by: Shreshtha Sahu <ssahu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-03-01 12:39:08 -08:00
Alex Waterman
5a35a95654 Revert "gpu: nvgpu: Get coherency on gv100 + NVLINK working"
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.

Bug 2070609

Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
2018-02-28 13:49:22 -08:00
Alex Waterman
1170687c33 gpu: nvgpu: Use coherent aperture flag
When using a coherent DMA API we must make sure to program
any aperture fields with the coherent aperture setting. To
do this the nvgpu_aperture_mask() function was modified to
take a third aperture mask argument, a coherent setting, so
that code can use this function to generate coherent aperture
settings.

The aperture choice is somewhat tricky: the default version
of this function uses the state of the DMA API to determine
what aperture to use for SYSMEM: either coherent or
non-coherent internally. Thus a kernel user need only specify
the normal nvgpu_mem struct and the correct mask should be
chosen. Due to many uses of nvgpu_mem structs not created
directly from the DMA API wrapper it's easier to translate
SYSMEM to SYSMEM_COH after creation.

However, the GMMU mapping code will encounter buffers from
userspace with different coherency attributes than the DMA
API. Thus __nvgpu_aperture_mask() really respects the
aperture setting passed in regardless of the DMA API state.
This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT
since this is either passed in from userspace or set by the
kernel when using coherent DMA. The aperture field in attrs
is upgraded to coh if this flag is set.

This change also adds a coherent sysmem mask everywhere that
it can. There are a couple of places that do not have a coherent
register field defined yet. These need to eventually be
defined and added.

Lastly, the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code since this function has
no Linux dependencies.

Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
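
In effect the helper becomes a three-way selector. A simplified sketch of the
idea, not the exact implementation; the coherent-sysmem enum value name is an
assumption:

/* Sketch: pick the register mask matching the buffer's aperture. */
static u32 aperture_mask_sketch(enum nvgpu_aperture aperture,
                u32 sysmem_mask, u32 sysmem_coh_mask, u32 vidmem_mask)
{
        switch (aperture) {
        case APERTURE_SYSMEM_COH:
                return sysmem_coh_mask;
        case APERTURE_SYSMEM:
                return sysmem_mask;
        case APERTURE_VIDMEM:
                return vidmem_mask;
        default:
                return 0U;
        }
}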

JIRA EVLR-2333

Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 16:03:43 -08:00
Alex Waterman
71f53272b2 gpu: nvgpu: FIFO sched fixes
Miscellaneous fixes for the sched code:

1. Make sure get_addr() on an SGL respects the use phys flag since
   the runlist needs physical addresses when NVLINK is in use.
2. Ensure the runlist is contiguous. Since the runlist memory is
   not virtually addressed the buffer must be physically contiguous.
3. Use all 64 bits of the runlist address in the runlist base addr
   register (and related fields).

JIRA EVLR-2333

Change-Id: Id4fd5ba4665d3e35ff1d6ca78dea6b58894a9a9a
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654667
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Tested-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-27 16:03:34 -08:00
Alex Waterman
b5d3cf444e gpu: nvgpu: Cleanup unused variables
There are numerous places where variables are assigned to but then
never used. This patch cleans up all these unused variables and
in some cases simplifies surrounding logic.

Also delete unused header includes and add necessary header includes.

JIRA NVGPU-525

Signed-off-by: Alex Waterman <alexw@nvidia.com>
Change-Id: Ice9ec2a0e97f262d0dcfebe22f83208dbea569d9
Reviewed-on: https://git-master.nvidia.com/r/1662548
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 21:53:38 -08:00
Alex Waterman
ea7eaed843 gpu: nvgpu: Don't alloc comptags on GV100
We don't have compression enabled on GV100 so it does not make sense
to allocate a huge CBC for this device.

Change-Id: I0ff908571f28c2ba6f439b0989398bf68dce16f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655279
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-23 05:04:30 -08:00
Terje Bergstrom
ec00a6c2db gpu: nvgpu: Use preallocated VPR buffer
To prevent deadlock while allocating VPR in nvgpu, allocate all the
needed VPR memory at probe time and use an internal allocator to
hand out space for VPR buffers.

Change-Id: I584b9a0f746d5d1dec021cdfbd6f26b4b92e4412
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655324
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-02-14 21:43:43 -08:00
Seema Khowala
791ce6bd54 gpu: nvgpu: gv11b: enable more gr exceptions
- pd, scc, ds, ssync, mme and sked exceptions are
  enabled. This will be useful for debugging.
- Handle the enabled interrupts.
- Add gr ops to handle ssync HWW. For legacy
  chips, the ssync hww_esr register is gpcs_ppcs_ssync_hww_esr.
  Since ssync HWW is not enabled on legacy chips, ssync HWW
  exception handling is added for Volta only.

Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644751
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-31 13:23:30 -08:00
Seema Khowala
9beefc4551 gpu: nvgpu: add fecs_host_int_enable hal
This will be used to enable fecs interrupts per
chip.

Change-Id: Id99412ca1a9c4caad999c3458b0e9701515db4b9
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1642554
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-31 13:23:21 -08:00
Alex Waterman
4967570033 gpu: nvgpu: add speculative load barrier (ctrl IOCTLs)
Data can be speculatively loaded from memory and stay in the cache even
when a bounds check fails. This can lead to unintended information
disclosure via side-channel analysis.

To mitigate this problem, insert a speculation barrier.
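
The pattern, roughly (the barrier helper name and the surrounding identifiers
are illustrative):

/* Bounds-check a userspace-provided index, then fence speculation
 * before the index is used to load from memory. */
if (args->index >= NVGPU_MAX_ENTRIES)
        return -EINVAL;

nvgpu_speculation_barrier();

entry = table[args->index];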

bug 2039126
CVE-2017-5753

Change-Id: Ib6c4b2f99b85af3119cce3882fe35ab47509c76f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1640500
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-25 14:25:34 -08:00
Terje Bergstrom
9cbb954266 gpu: nvgpu: Report mailbox id and value on ucode timeout
When we detect a timeout waiting for ctxsw ucode method to complete,
we print an error. The error does not detail the event we are waiting for,
which makes debugging difficult. Add the missing mailbox id and value
to the error.

Bug 2049965

Change-Id: I45204a2d6f1f39919a0133b1e0867213e1a5b671
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1643709
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-01-23 11:19:10 -08:00