Add the following two new HALs:
gops.fb.enable_hub_intr() to enable hub interrupts
gops.fb.disable_hub_intr() to disable hub interrupts
Set existing APIs gv11b_fb_enable/disable_hub_intr() to these HALs
Call the HALs everywhere instead of calling the APIs directly
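A minimal sketch of the intended hook-up; the struct layout and init function here are assumptions for illustration, not the actual nvgpu headers:

    /* Sketch only: struct layout and init function are assumptions. */
    struct gk20a;

    struct gpu_ops {
        struct {
            void (*enable_hub_intr)(struct gk20a *g);
            void (*disable_hub_intr)(struct gk20a *g);
        } fb;
    };

    /* existing APIs named in the commit text */
    void gv11b_fb_enable_hub_intr(struct gk20a *g);
    void gv11b_fb_disable_hub_intr(struct gk20a *g);

    /* during gv11b HAL init, route the new hooks to the existing APIs */
    static void gv11b_init_fb_hub_intr_ops(struct gpu_ops *gops)
    {
        gops->fb.enable_hub_intr = gv11b_fb_enable_hub_intr;
        gops->fb.disable_hub_intr = gv11b_fb_disable_hub_intr;
    }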
Jira NVGPUT-44
Change-Id: Id299c6d228733ed365a71be6b180186776cc1306
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1725977
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
As part of the MISRA fixes, move all the gating_reglist files to the
common/clock_gating directory, following the suggested new directory
structure.
Remove the unused gating_reglist files for gk20a.
JIRA NVGPU-646
Change-Id: I388855befcf991ee68eeffed10fe9ac456210649
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1722330
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the NVGPU_IOCTL_CHANNEL_RESCHEDULE_RUNLIST ioctl to reschedule the
runlist, and optionally check host and FECS status to preempt a pending
load of a context not belonging to the calling channel on the GR engine
during context switch.
This should be called immediately after a submit to decrease the
worst-case submit-to-start latency for a high-interleave channel.
There is less than a 0.002% chance that the ioctl blocks for up to a
couple of milliseconds, due to a race condition where the FECS status
changes while being read.
For GV11B it will always preempt a pending load of an unwanted context,
since there is no chance that the ioctl blocks on that race condition.
Also fix a bug with host reschedule for multiple runlists, which needs to
write both runlist registers.
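A sketch of the intended call pattern from user space; the args struct and
its flag field are assumptions for illustration, not the exact uapi
definitions, and the ioctl request number is passed in rather than invented:

    #include <stdint.h>
    #include <sys/ioctl.h>

    /* assumed layout for illustration only */
    struct nvgpu_reschedule_runlist_args {
        uint32_t flags; /* e.g. a "preempt next" request, if defined */
    };

    /* call right after a submit to cut worst-case submit-to-start latency */
    static int reschedule_runlist(int channel_fd, unsigned long request)
    {
        struct nvgpu_reschedule_runlist_args args = { .flags = 0 };

        return ioctl(channel_fd, request, &args);
    }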
Bug 1987640
Bug 1924808
Change-Id: I0b7e2f91bd18b0b20928e5a3311b9426b1bf1848
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The semaphore methods currently used on Volta are deprecated for future
chips, and Volta supports both the old and new methods.
So replace the old methods with the new methods on Volta itself, so that
the new methods get tested on silicon.
Implement the HALs below for Volta with the new semaphore methods:
gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
gops.fifo.get_sema_wait_cmd_size() to get size of acquire command buffer
gops.fifo.get_sema_incr_cmd_size() to get size of release command buffer
Also use new methods in these APIs
gv11b_fifo_add_syncpt_wait_cmd()
gv11b_fifo_add_syncpt_incr_cmd()
And change the corresponding APIs to reflect the correct command buffer
sizes (sketched below):
gv11b_fifo_get_syncpt_wait_cmd_size()
gv11b_fifo_get_syncpt_incr_cmd_size()
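A sketch of how the size HALs let common code reserve pushbuffer space for
the new method sequences; the word counts are placeholders, not the real
Volta values:

    #include <stdint.h>

    struct fifo_sema_ops {
        uint32_t (*get_sema_wait_cmd_size)(void);
        uint32_t (*get_sema_incr_cmd_size)(void);
    };

    static uint32_t gv11b_fifo_get_sema_wait_cmd_size(void)
    {
        return 8;  /* placeholder: words in the new acquire sequence */
    }

    static uint32_t gv11b_fifo_get_sema_incr_cmd_size(void)
    {
        return 10; /* placeholder: words in the new release sequence */
    }

    static uint32_t wait_cmd_words(const struct fifo_sema_ops *ops)
    {
        return ops->get_sema_wait_cmd_size(); /* dispatch through the HAL */
    }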
Jira NVGPUT-16
Change-Id: Ia3a37cd0560ddb54761dfea9bd28c4384cd8a11c
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704518
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the following new HALs:
gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
gops.fifo.get_sema_wait_cmd_size() to get size of acquire command buffer
gops.fifo.get_sema_incr_cmd_size() to get size of release command buffer
Separate out a new API gk20a_fifo_add_sema_cmd() to implement the semaphore
acquire/release sequence, and set it to gops.fifo.add_sema_cmd()
Add gk20a_fifo_get_sema_wait_cmd_size() and gk20a_fifo_get_sema_incr_cmd_size()
to return respective command buffer sizes
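A sketch of the separated-out acquire/release helper; the pushbuffer helper
and the method words pushed here are placeholders for illustration:

    #include <stdbool.h>
    #include <stdint.h>

    struct priv_cmd_entry {
        uint32_t *mem;
        uint32_t off;
    };

    static void push_word(struct priv_cmd_entry *cmd, uint32_t v)
    {
        cmd->mem[cmd->off++] = v;
    }

    /* one helper for both directions, selected by 'acquire' */
    static void gk20a_fifo_add_sema_cmd(uint64_t sema_va, uint32_t payload,
                                        bool acquire,
                                        struct priv_cmd_entry *cmd)
    {
        push_word(cmd, (uint32_t)(sema_va >> 32)); /* address hi */
        push_word(cmd, (uint32_t)sema_va);         /* address lo */
        push_word(cmd, payload);                   /* value to wait on/release */
        push_word(cmd, acquire ? 0x1u : 0x2u);     /* placeholder acq/rel op */
    }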
Jira NVGPUT-16
Change-Id: Ia81a50921a6a56ebc237f2f90b137268aaa2d749
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704490
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL gops.mc.isr_nonstall() to handle nonstall interrupts.
We already handle nonstall interrupts in nvgpu_intr_nonstall(), but that
API lives entirely in Linux-specific code.
Separate out the OS-independent nonstall interrupt handling into a new API
mc_gk20a_isr_nonstall() and set it to the HAL gops.mc.isr_nonstall() for
all existing chips.
Call this HAL from nvgpu_intr_nonstall().
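A sketch of the OS split; the op struct and the return convention are
assumptions based on the commit text:

    #include <stdint.h>

    struct gk20a;

    struct mc_ops {
        uint32_t (*isr_nonstall)(struct gk20a *g);
    };

    /* OS-independent: read/clear pending nonstall interrupts and return
     * a mask of deferred work (e.g. semaphore wakeups, event posting) */
    uint32_t mc_gk20a_isr_nonstall(struct gk20a *g);

    /* the Linux ISR keeps only OS plumbing plus this dispatch */
    static uint32_t nvgpu_intr_nonstall_core(struct gk20a *g,
                                             const struct mc_ops *mc)
    {
        return mc->isr_nonstall(g);
    }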
Jira NVGPUT-8
Change-Id: Iec6a56db03158a72a256f7eee8989a0a8a42ae2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1706589
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the following two new HALs:
gops.fifo.runlist_hw_submit() to submit a new runlist to hardware
gops.fifo.runlist_wait_pending() to wait until runlist write is successful
Set existing API gk20a_fifo_runlist_wait_pending() to
gops.fifo.runlist_wait_pending HAL
Add new API gk20a_fifo_runlist_hw_submit() which submits the runlist to h/w
and set it to gops.fifo.runlist_hw_submit HAL
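A sketch of the two halves; the register offsets, field layouts and the
pending bit below are placeholders, not the real hw headers:

    #include <stdint.h>

    struct gk20a;

    void gk20a_writel(struct gk20a *g, uint32_t r, uint32_t v);
    uint32_t gk20a_readl(struct gk20a *g, uint32_t r);

    /* gops.fifo.runlist_hw_submit: program the runlist and kick hardware */
    static void gk20a_fifo_runlist_hw_submit(struct gk20a *g,
                                             uint64_t runlist_iova,
                                             uint32_t count, uint32_t id)
    {
        gk20a_writel(g, 0x2270 /* placeholder base reg */,
                     (uint32_t)(runlist_iova >> 12));
        gk20a_writel(g, 0x2274 /* placeholder submit reg */,
                     count | (id << 20) /* placeholder fields */);
    }

    /* gops.fifo.runlist_wait_pending: poll until the submit completes */
    static int gk20a_fifo_runlist_wait_pending(struct gk20a *g, uint32_t id)
    {
        int retries = 1000; /* placeholder timeout */

        while (retries-- > 0) {
            uint32_t v = gk20a_readl(g, 0x2280 + id * 8 /* placeholder */);

            if ((v & (1u << 20) /* placeholder pending bit */) == 0)
                return 0;
        }
        return -1; /* timed out */
    }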
Jira NVGPUT-20
Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700548
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept
FBPAs and just calculate the unicast list assuming all FBPAs are present.
This generates an incorrect list of unicast addresses.
Fix this by introducing a new HAL ops.gr.split_fbpa_broadcast_addr:
Set gr_gv100_get_active_fbpa_mask() for GV100
Set gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips
gr_gv100_get_active_fbpa_mask() will first get the active FBPA mask and
generate the unicast list only for the active FBPAs.
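A sketch of the mask-driven expansion; the parameterization is an assumption
for illustration:

    #include <stdint.h>

    /* expand one FBPA broadcast address into unicast addresses,
     * skipping floorswept units; returns the new table index */
    static uint32_t split_fbpa_broadcast_addr(uint32_t broadcast_addr,
                                              uint32_t active_fbpa_mask,
                                              uint32_t num_fbpas,
                                              uint32_t fbpa_stride,
                                              uint32_t *priv_addr_table,
                                              uint32_t t)
    {
        uint32_t i;

        for (i = 0; i < num_fbpas; i++) {
            if ((active_fbpa_mask & (1u << i)) == 0)
                continue; /* floorswept: no unicast entry */
            priv_addr_table[t++] = broadcast_addr + i * fbpa_stride;
        }
        return t;
    }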
Bug 200398811
Jira NVGPU-556
Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694444
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently use hard-coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and
NV_PMM_FBP_STRIDE, which are incorrect for Volta.
Add new GR HAL get_pmm_per_chiplet_offset() to get correct value per-chip
Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips
Set gr_gv11b_get_pmm_per_chiplet_offset() for Volta
Use the HAL instead of hard-coded values wherever required; a minimal
sketch follows.
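The sketch below shows the per-chip offset HAL and its use; the numeric
strides are placeholders, not the real per-chip values:

    #include <stdint.h>

    static uint32_t gr_gm20b_get_pmm_per_chiplet_offset(void)
    {
        return 0x1000; /* placeholder pre-Volta value */
    }

    static uint32_t gr_gv11b_get_pmm_per_chiplet_offset(void)
    {
        return 0x2000; /* placeholder Volta value */
    }

    /* callers compute chiplet addresses through the HAL, not a constant */
    static uint32_t pmm_chiplet_addr(uint32_t base, uint32_t chiplet,
                                     uint32_t (*per_chiplet_offset)(void))
    {
        return base + chiplet * per_chiplet_offset();
    }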
Bug 200398811
Jira NVGPU-556
Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690028
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We have new broadcast registers on Volta, and we need to generate correct
unicast addresses for them so that we can write those registers to the
context image.
Add new GR HAL create_priv_addr_table() to do this conversion
Set gr_gk20a_create_priv_addr_table() for older chips
Set gr_gv11b_create_priv_addr_table() for Volta
gr_gv11b_create_priv_addr_table() will use the broadcast flags and then
generate the appropriate list of unicast registers for each broadcast
register, as sketched below.
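A sketch of the flag-driven broadcast-to-unicast expansion for one flag; the
flag bit and strides are placeholders for illustration:

    #include <stdint.h>

    #define PRI_BROADCAST_FLAGS_PMM_GPCS (1u << 0) /* placeholder bit */

    static uint32_t add_pmm_gpcs_unicast(uint32_t addr, uint32_t flags,
                                         uint32_t num_gpcs,
                                         uint32_t gpc_stride,
                                         uint32_t *table, uint32_t t)
    {
        uint32_t gpc;

        if ((flags & PRI_BROADCAST_FLAGS_PMM_GPCS) != 0) {
            for (gpc = 0; gpc < num_gpcs; gpc++)
                table[t++] = addr + gpc * gpc_stride; /* one per GPC */
        }
        return t;
    }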
Bug 200398811
Jira NVGPU-556
Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690027
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Volta has more broadcast registers than previous chips, and we do not
currently decode them in gr_gk20a_decode_priv_addr().
Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr()
for all previous chips.
Add and use gr_gv11b_decode_priv_addr() for Volta.
gr_gv11b_decode_priv_addr() will decode all the broadcast registers and set
the broadcast flags appropriately.
Define the new broadcast types below (see the sketch after the list):
PRI_BROADCAST_FLAGS_PMMGPC
PRI_BROADCAST_FLAGS_PMM_GPCS
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
PRI_BROADCAST_FLAGS_PMMFBP
PRI_BROADCAST_FLAGS_PMM_FBPS
PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP
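A sketch of how the decode step can set these flags; the bit positions and
the range-check helpers are placeholders, not the real priv address map:

    #include <stdbool.h>
    #include <stdint.h>

    #define PRI_BROADCAST_FLAGS_PMMGPC   (1u << 8)  /* placeholder bits */
    #define PRI_BROADCAST_FLAGS_PMM_GPCS (1u << 9)
    #define PRI_BROADCAST_FLAGS_PMMFBP   (1u << 10)
    #define PRI_BROADCAST_FLAGS_PMM_FBPS (1u << 11)

    /* placeholder range checks standing in for the real pri_is_* helpers */
    static bool pri_is_pmmgpc(uint32_t addr)   { (void)addr; return false; }
    static bool pri_is_pmm_gpcs(uint32_t addr) { (void)addr; return false; }

    static void gv11b_decode_pmm_flags(uint32_t addr, uint32_t *flags)
    {
        if (pri_is_pmmgpc(addr))
            *flags |= PRI_BROADCAST_FLAGS_PMMGPC;
        if (pri_is_pmm_gpcs(addr))
            *flags |= PRI_BROADCAST_FLAGS_PMM_GPCS;
        /* ...analogous checks for the FBP/LTC/ROP variants... */
    }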
Bug 200398811
Jira NVGPU-556
Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We do not currently handle the misaligned_addr SM exception explicitly, and
hence we incorrectly initiate CILP on this exception.
Handle this exception explicitly in this sequence -
- set error notifier first
- clear the interrupt
- return error from gr_gv11b_handle_warp_esr_error_misaligned_addr() so that
RC recovery is triggered by gk20a_gr_isr()
Ensure that the error value is propagated back to gk20a_gr_isr() correctly
Use nvgpu_set_error_notifier_if_empty() to set the error notifier, since
this prevents overwriting the error notifier value in case gk20a_gr_isr()
also tries to write an error notifier value.
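A sketch of the described order; the interrupt-clear helper and the
notifier constant are placeholders for illustration:

    #include <errno.h>

    struct gk20a;
    struct channel_gk20a;

    void nvgpu_set_error_notifier_if_empty(struct channel_gk20a *ch,
                                           unsigned int error);
    void clear_warp_esr(struct gk20a *g); /* placeholder interrupt clear */

    static int handle_warp_esr_misaligned_addr(struct gk20a *g,
                                               struct channel_gk20a *ch,
                                               unsigned int notifier_error)
    {
        /* 1. set the error notifier first (only if none is pending) */
        nvgpu_set_error_notifier_if_empty(ch, notifier_error);

        /* 2. clear the interrupt */
        clear_warp_esr(g);

        /* 3. non-zero return makes gk20a_gr_isr() trigger RC recovery */
        return -EFAULT;
    }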
Bug 200388475
Jira NVGPU-554
Change-Id: I84c4d202a8068e738567ccd344e05d9d5f6ad2f0
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1686781
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Dump timeout save0 and save1 even though they could be unreliable when
fecs_tgt is set in save0. This is good to have for debug purposes.
- Add a priv_ring HAL for decode_error_code.
- Decode the FECS error code for supported error types.
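A sketch of a decode hook; the error codes and strings are placeholders for
the supported types, and printf stands in for the driver's logging:

    #include <stdint.h>
    #include <stdio.h>

    /* priv_ring HAL: turn a raw FECS error code into a readable message */
    static void decode_fecs_error_code(uint32_t error_code)
    {
        switch (error_code) {
        case 0xbadf1001u: /* placeholder code */
            printf("fecs: client timeout\n");
            break;
        case 0xbadf1002u: /* placeholder code */
            printf("fecs: decode error\n");
            break;
        default:
            printf("fecs: unrecognized error 0x%08x\n", error_code);
            break;
        }
    }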
Bug 1998067
Change-Id: I60cb6902d099df4a7df45fa624e44d9e0d46360f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1683014
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Current code does not compute priv error register offsets
properly. This leads to invalid decoding of priv errors, and
can also trigger additional priv errors.
- add GPU_LIT_GPC_PRIV_STRIDE define
- return proj_gpc_priv_stride for GPU_LIT_GPC_PRIV_STRIDE in HALs
- use GPU_LIT_GPC_PRIV_STRIDE instead of GPU_LIT_GPC_STRIDE in
g->ops.priv_ring.isr() to compute priv error register offsets.
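A sketch of the litter-value plumbing described above; the define numbering
and stride values are placeholders:

    #include <stdint.h>

    #define GPU_LIT_GPC_STRIDE      20 /* placeholder numbering */
    #define GPU_LIT_GPC_PRIV_STRIDE 21 /* new define */

    static uint32_t get_litter_value(int value)
    {
        switch (value) {
        case GPU_LIT_GPC_STRIDE:
            return 0x8000; /* placeholder proj_gpc_stride */
        case GPU_LIT_GPC_PRIV_STRIDE:
            return 0x800;  /* placeholder proj_gpc_priv_stride */
        default:
            return 0;
        }
    }

    /* priv error offset math should use the priv stride, not GPC stride */
    static uint32_t gpc_priv_offset(uint32_t gpc)
    {
        return gpc * get_litter_value(GPU_LIT_GPC_PRIV_STRIDE);
    }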
Bug 2093058
Change-Id: Ia7c36ccba0441126784bb0e00452f2cf1196ef71
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1682118
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The GV100 ucode has changed so that it expects the LIST_nv_perf_pma_ctx_reg
list in the ctxsw buffer to be 256-byte aligned, but the same change is not
applied to the ucode of other chips.
Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register list,
and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all chips
except GV100.
Define a separate HAL for GV100, gr_gv100_add_ctxsw_reg_perf_pma(), and fix
the required alignment in this function.
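A sketch of the GV100-specific alignment fix; the helper name is an
assumption:

    #include <stdint.h>

    #define PMA_REG_LIST_ALIGN 256u /* alignment the new GV100 ucode expects */

    /* round the PMA list offset up to the next 256-byte boundary */
    static uint32_t gv100_align_pma_list_offset(uint32_t offset)
    {
        return (offset + PMA_REG_LIST_ALIGN - 1u) & ~(PMA_REG_LIST_ALIGN - 1u);
    }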
Bug 1998067
Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676655
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
For LIST_nv_pm_fbpa_ctx_regs, we currently call
add_ctxsw_buffer_map_entries_subunits() to add the registers corresponding
to all the FBPAs.
But while configuring the total number of registers, we do not consider
floorswept FBPAs, and that causes misalignment in subsequent lists on GV100.
Fix this by reading the disabled/floorswept FBPAs from fuse and considering
only the active FBPAs on GV100.
Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting, and define
a common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips except GV100.
Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the
above-mentioned implementation to account for floorsweeping.
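A sketch of sizing by active FBPAs only; the fuse read is abstracted behind
a placeholder helper:

    #include <stdint.h>

    uint32_t fuse_status_opt_fbpa(void); /* placeholder floorsweeping fuse */

    static uint32_t gv100_num_active_fbpas(uint32_t max_fbpas)
    {
        uint32_t disabled = fuse_status_opt_fbpa();
        uint32_t active = ~disabled & ((1u << max_fbpas) - 1u);
        uint32_t count = 0;

        while (active != 0) { /* popcount of the active mask */
            count += active & 1u;
            active >>= 1;
        }
        return count; /* use this, not max_fbpas, when sizing the list */
    }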
Bug 1998067
Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The value of NV_PERF_PMASYS_MEM_BUMP is different for Volta, and
NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH therefore did not behave
correctly on GV11B.
The patch adds an instance of css_hw_set_handled_snapshots for Volta to
fix that.
Bug 1960846
Bug 2068936
Change-Id: Ic057338d3b1b951a66d070267e69a90f136598b9
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1668568
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The value of NV_PERF_PMASYS_MEM_BUMP is different for Volta, and
NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH therefore did not behave
correctly on GV11B.
The patch adds an instance of css_hw_set_handled_snapshots for Volta to
fix that.
The patch also renames css_hw_set_handled_snapshots to
gk20a_css_hw_set_handled_snapshots to make it clearer that the function is
arch-dependent.
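A sketch of the per-chip split; the register offset is a placeholder for
the Volta NV_PERF_PMASYS_MEM_BUMP value:

    #include <stdint.h>

    struct gk20a;

    void gk20a_writel(struct gk20a *g, uint32_t r, uint32_t v);

    #define PERF_PMASYS_MEM_BUMP_R 0x24a0u /* placeholder offset */

    /* renamed arch-dependent original */
    void gk20a_css_hw_set_handled_snapshots(struct gk20a *g, uint32_t done);

    /* Volta instance writes its own mem_bump register value */
    static void gv11b_css_hw_set_handled_snapshots(struct gk20a *g,
                                                   uint32_t done)
    {
        if (done > 0)
            gk20a_writel(g, PERF_PMASYS_MEM_BUMP_R, done);
    }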
Bug 1960846
Change-Id: I92c35a862ecd7f918dd1458c086fc7ae42ca8fc5
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes implement the initial (as per bringup) nvlink driver:
(1) SW initialization of nvlink core driver structures
(2) Nvlink interrupt handling
(3) Device initialization (IOCTRL, pll and clocks, device level intr)
(4) Falcon support for minion
(5) Minion load and bootstrapping
(6) Link initialization and DL PROD settings
(7) Device Interface init (and switching HSHUB to nvlink)
(8) HS set/get mode for both link and sublink
(9) Topology discovery and VBIOS settings.
(10) Ensures we get physically contiguous memory when nvlink is enabled
This driver includes a hack for the current single-dev/single-link
limitation.
JIRA: EVLR-2331
JIRA: EVLR-2330
JIRA: EVLR-2329
JIRA: EVLR-2328
Change-Id: Idca9a819179376cc655784482b24b575a52fa9e5
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1656790
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On GV11B, the CBC base is calculated in a similar fashion to how it is
calculated on dGPUs. Thus, remove gv11b_ltc_cbc_fix_config(), as it would
incorrectly multiply the CBC base by the LTC count.
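A before/after sketch of the removed fix-up; the names and signatures are
assumptions for illustration:

    #include <stdint.h>

    /* removed: this hook wrongly scaled the base on GV11B */
    static uint64_t gv11b_ltc_cbc_fix_config_old(uint64_t base,
                                                 uint32_t ltc_count)
    {
        return base * ltc_count; /* the incorrect multiply */
    }

    /* after the fix: the dGPU-style base is used unchanged */
    static uint64_t gv11b_cbc_base(uint64_t base)
    {
        return base;
    }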
Bug 2054860
Change-Id: Iaed717161547468c17e12236149d970c497885b3
Signed-off-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1654506
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new user space API NVGPU_AS_IOCTL_GET_SYNC_RO_MAP to get the
read-only syncpoint address map in user space.
We already map the whole syncpoint shim to each address space, with the
base address being vm->syncpt_ro_map_gpu_va.
This new API exposes this base GPU_VA address of the syncpoint map, and the
unit size of each syncpoint, to user space.
User space can then calculate the address of each syncpoint as
syncpoint_address = base_gpu_va + (syncpoint_id * syncpoint_unit_size)
Note that this syncpoint address is read-only and should only be used for
inserting semaphore acquires.
Adding a semaphore release with this address would result in an MMU_FAULT.
Define a new HAL g->ops.fifo.get_sync_ro_map and set it for all GPUs
supported on the Xavier SoC.
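A sketch of the user-space side of the formula above: query the map once,
then compute each syncpoint's GPU VA. The args struct layout is an
assumption, and the ioctl request number is passed in rather than invented:

    #include <stdint.h>
    #include <sys/ioctl.h>

    /* assumed layout for illustration only */
    struct nvgpu_as_get_sync_ro_map_args {
        uint64_t base_gpu_va; /* vm->syncpt_ro_map_gpu_va */
        uint32_t sync_size;   /* unit size of one syncpoint */
        uint32_t padding;
    };

    static uint64_t syncpoint_gpu_va(int as_fd, unsigned long request,
                                     uint32_t syncpoint_id)
    {
        struct nvgpu_as_get_sync_ro_map_args args = { 0 };

        if (ioctl(as_fd, request, &args) != 0)
            return 0;

        /* read-only address: valid for semaphore acquires only */
        return args.base_gpu_va +
               (uint64_t)syncpoint_id * args.sync_size;
    }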
Bug 200327559
Change-Id: Ica0db48fc28fdd0ff2a5eb09574dac843dc5e4fd
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1649365
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Enable pd, scc, ds, ssync, mme and sked exceptions. This will be useful
for debugging.
- Handle the enabled interrupts.
- Add a GR op to handle ssync HWW. For legacy chips, the ssync hww_esr
register is gpcs_ppcs_ssync_hww_esr. Since ssync HWW is not enabled on
legacy chips, ssync HWW exception handling is added for Volta only.
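A sketch of the Volta-only hook-up; the op struct and handler name are
assumptions based on the commit text:

    struct gk20a;

    struct gr_exception_ops {
        int (*handle_ssync_hww)(struct gk20a *g);
    };

    /* Volta-only handler; reads and clears the ssync hww_esr register */
    int gv11b_gr_handle_ssync_hww(struct gk20a *g);

    static void gv11b_init_gr_exception_ops(struct gr_exception_ops *ops)
    {
        ops->handle_ssync_hww = gv11b_gr_handle_ssync_hww;
        /* legacy chips leave this unset: ssync hww is not enabled there */
    }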
Change-Id: I63ba2eb51fa82e74832df26ee4cf3546458e5669
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1644751
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The message telling the RM server to unbind a channel has to be sent after
the client unbinds the channel and before the client calls TSG release. The
channel has to belong to a TSG on the RM server before the client submits a
runlist update to remove the channel; otherwise there is a bare-channel
problem.
By moving .tsg_unbind_channel one layer lower, gk20a_tsg_unbind_channel()
becomes a common function for all chips, and it calls TSG release after
calling .tsg_unbind_channel. So vgpu no longer needs to worry about the TSG
being released before the message is sent to the RM server.
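A sketch of the new layering; the op struct and release helper are
placeholders, but the ordering is the one described above:

    struct gk20a;
    struct channel_gk20a;
    struct tsg_gk20a;

    struct fifo_tsg_ops {
        int (*tsg_unbind_channel)(struct channel_gk20a *ch);
    };

    void tsg_release_ref(struct tsg_gk20a *tsg); /* placeholder release */

    /* common for all chips: per-chip op first, TSG release strictly after,
     * so vgpu can message the RM server while the channel is still in
     * the TSG */
    static int gk20a_tsg_unbind_channel(const struct fifo_tsg_ops *ops,
                                        struct tsg_gk20a *tsg,
                                        struct channel_gk20a *ch)
    {
        int err = ops->tsg_unbind_channel(ch);

        tsg_release_ref(tsg);
        return err;
    }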
Bug 200382695
Bug 200382785
Change-Id: I32acc122f3f9d5d0628049ccf673225f9e90c87a
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1645383
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>