Add the following new HALs:
- gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release methods
- gops.fifo.get_sema_wait_cmd_size() to get the size of the acquire command buffer
- gops.fifo.get_sema_incr_cmd_size() to get the size of the release command buffer
Separate out a new API gk20a_fifo_add_sema_cmd() that implements the semaphore
acquire/release sequence, and set it to gops.fifo.add_sema_cmd()
Add gk20a_fifo_get_sema_wait_cmd_size() and gk20a_fifo_get_sema_incr_cmd_size()
to return the respective command buffer sizes
Jira NVGPUT-16
Change-Id: Ia81a50921a6a56ebc237f2f90b137268aaa2d749
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704490
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
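For illustration, a minimal sketch of the HAL shape described above; the struct
and parameter names here are invented stand-ins, not the actual nvgpu
definitions:

    /* Sketch only: per-chip op table entry for the new semaphore HALs. */
    #include <stdint.h>
    #include <stdbool.h>

    struct gpu;        /* stand-in for struct gk20a */
    struct cmd_buf;    /* stand-in for the priv cmd entry */

    struct fifo_ops {
        /* emit a HOST semaphore acquire or release method sequence */
        void (*add_sema_cmd)(struct gpu *g, struct cmd_buf *cmd,
                             uint64_t sema_va, bool acquire);
        /* callers size the cmd buffer with these before emitting */
        uint32_t (*get_sema_wait_cmd_size)(void);
        uint32_t (*get_sema_incr_cmd_size)(void);
    };

A caller would first reserve get_sema_wait_cmd_size() (or the incr size) worth
of space, then let add_sema_cmd() fill in the methods.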
Switch all logging to nvgpu_log*(). The gk20a_dbg* macros are
intentionally left in place because they are used from other repositories.
Because the new functions do not work without a pointer to struct
gk20a, and piping one through just for logging would be excessive, some
log messages are deleted.
Change-Id: I00e22e75fe4596a330bb0282ab4774b3639ee31e
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704148
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
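To illustrate why the device pointer matters here, a minimal sketch of a
mask-gated log macro in the nvgpu_log() style (the real macro's arguments and
mask names may differ):

    /* Sketch: logging is gated on a per-device mask, so every call site
     * needs the device struct; names here are illustrative. */
    #include <stdio.h>

    struct gpu { unsigned long log_mask; const char *name; };

    #define LOG_DBG_INFO (1ul << 0)

    #define gpu_log(g, mask, fmt, ...) \
        do { \
            if ((g)->log_mask & (mask)) \
                printf("%s: " fmt "\n", (g)->name, ##__VA_ARGS__); \
        } while (0)

A function with no g in scope has nothing to pass as the first argument, hence
the deleted messages.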
PMU ucode is updated to include the LDIV slowdown factor in the
gr_init_param command.
- Defined a new version gr_init_param_v2.
- Updated the PMU FW version code.
- Set the LDIV slowdown factor to 0x1e by default.
- Added a sysfs entry to program the LDIV slowdown factor at runtime.
Bug 200391931
Change-Id: Ic66049588c3b20e934faff3f29283f66c30303e4
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1674208
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
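A hedged sketch of what a runtime knob like the new sysfs entry typically
looks like in a Linux driver; the attribute name and the PMU plumbing are
assumptions, only the sysfs boilerplate is standard kernel API:

    /* Sketch: sysfs store/show pair for an LDIV slowdown factor. */
    #include <linux/device.h>
    #include <linux/kernel.h>

    static u32 ldiv_slowdown_factor = 0x1e;  /* default from this change */

    static ssize_t ldiv_slowdown_factor_store(struct device *dev,
            struct device_attribute *attr, const char *buf, size_t count)
    {
        u32 val;

        if (kstrtou32(buf, 10, &val))
            return -EINVAL;
        ldiv_slowdown_factor = val;
        /* a real driver would resend gr_init_param_v2 to the PMU here */
        return count;
    }

    static ssize_t ldiv_slowdown_factor_show(struct device *dev,
            struct device_attribute *attr, char *buf)
    {
        return sprintf(buf, "%u\n", ldiv_slowdown_factor);
    }

    static DEVICE_ATTR_RW(ldiv_slowdown_factor);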
Add new HAL gops.mc.isr_nonstall() to handle nonstall interrupts.
We already handle nonstall interrupts in nvgpu_intr_nonstall(), but that
API lives entirely in Linux-specific code.
Separate out the OS-independent code that handles nonstall interrupts into
a new API mc_gk20a_isr_nonstall() and set it as the gops.mc.isr_nonstall()
HAL for all existing chips.
Call this HAL from nvgpu_intr_nonstall().
Jira NVGPUT-8
Change-Id: Iec6a56db03158a72a256f7eee8989a0a8a42ae2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1706589
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
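The split follows the usual nvgpu HAL pattern; a self-contained sketch with
invented names (not the real nvgpu symbols):

    /* Sketch: OS-independent handler behind a HAL op, called from the
     * Linux ISR wrapper. */
    #include <stdint.h>

    struct gpu;

    struct mc_ops {
        uint32_t (*isr_nonstall)(struct gpu *g);  /* returns ops bitmask */
    };

    struct gpu { struct mc_ops mc; };

    /* common part, one implementation shared by all existing chips */
    static uint32_t mc_common_isr_nonstall(struct gpu *g)
    {
        uint32_t ops = 0;
        /* read nonstall intr status, dispatch per-unit handlers,
         * accumulate which semaphore/wakeup ops the OS layer must run */
        return ops;
    }

    /* chip init wires the HAL */
    static void hal_init(struct gpu *g)
    {
        g->mc.isr_nonstall = mc_common_isr_nonstall;
    }

    /* Linux-specific entry point keeps only the OS work */
    static void os_intr_nonstall(struct gpu *g)
    {
        uint32_t ops = g->mc.isr_nonstall(g);
        /* wake waiters / schedule work based on 'ops' */
        (void)ops;
    }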
Mostly this just adds the necessary includes to make sure that
global function declarations actually match their implementations.
Also work around a pointer-munging warning:
/build/ddpx/linux/kernel/nvgpu/drivers/gpu/nvgpu/common/pmu/pmu.c: In function 'nvgpu_pmu_process_init_msg':
/build/ddpx/linux/kernel/nvgpu/drivers/gpu/nvgpu/common/pmu/pmu.c:348:4: error: dereferencing type-punned pointer will break strict-aliasing rules [-Werror=strict-aliasing]
(*(u32 *)gid_data.signature == PMU_SHA1_GID_SIGNATURE);
Work around this warning by simply moving the type punning.
This code is certainly dangerous - it assumes the endianness
of the header data is the same as the machine this code is
running on. Apparently it works, though, so this ignores
the warning.
JIRA NVGPU-525
Change-Id: Id704bae7805440bebfad51c8c8365e6d2b7a39eb
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1692454
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
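The warning comes from dereferencing a u8 array through a u32 pointer. The
change above just moves the punning; memcpy-based punning, sketched below, is
the common strict-aliasing-safe alternative (the signature value here is
illustrative, and the endianness caveat from the message still applies):

    #include <stdint.h>
    #include <string.h>

    #define PMU_SHA1_GID_SIGNATURE 0xDEADBEEFu  /* illustrative value */

    struct gid_data { uint8_t signature[4]; };

    /* Cast form that trips -Wstrict-aliasing:
     *   valid = (*(uint32_t *)gid->signature == PMU_SHA1_GID_SIGNATURE);
     * The memcpy form is well-defined and compiles to the same load. */
    static int signature_matches(const struct gid_data *gid)
    {
        uint32_t sig;

        memcpy(&sig, gid->signature, sizeof(sig));
        /* still assumes the header endianness matches the CPU */
        return sig == PMU_SHA1_GID_SIGNATURE;
    }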
Add the following two new HALs:
- gops.fifo.runlist_hw_submit() to submit a new runlist to hardware
- gops.fifo.runlist_wait_pending() to wait until the runlist write is
  successful
Set the existing API gk20a_fifo_runlist_wait_pending() as the
gops.fifo.runlist_wait_pending HAL.
Add a new API gk20a_fifo_runlist_hw_submit() which submits the runlist to
h/w, and set it as the gops.fifo.runlist_hw_submit HAL.
Jira NVGPUT-20
Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700548
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
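A minimal sketch of the submit/wait split; register names, offsets, and bit
positions below are invented for illustration:

    #include <stdint.h>

    struct gpu;
    uint32_t reg_read(struct gpu *g, uint32_t r);
    void reg_write(struct gpu *g, uint32_t r, uint32_t v);

    #define RUNLIST_BASE_R   0x2270u         /* illustrative offsets */
    #define RUNLIST_SUBMIT_R 0x2274u
    #define RUNLIST_PENDING  (1u << 20)

    /* runlist_hw_submit(): program base/length, kick off the fetch */
    static void runlist_hw_submit(struct gpu *g, uint64_t rl_pa,
                                  uint32_t count)
    {
        reg_write(g, RUNLIST_BASE_R, (uint32_t)(rl_pa >> 12));
        reg_write(g, RUNLIST_SUBMIT_R, count);
    }

    /* runlist_wait_pending(): poll until the fetch completes */
    static int runlist_wait_pending(struct gpu *g, int retries)
    {
        while (retries-- > 0) {
            if ((reg_read(g, RUNLIST_SUBMIT_R) & RUNLIST_PENDING) == 0)
                return 0;
        }
        return -1;  /* timed out */
    }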
In gr_gv11b/gk20a_create_priv_addr_table() we do not consider floorswept
FBPAs and just calculate the unicast list assuming all FBPAs are present.
This generates an incorrect list of unicast addresses.
Fix this by introducing a new HAL ops.gr.split_fbpa_broadcast_addr.
Set gr_gv100_get_active_fbpa_mask() for GV100.
Set gr_gk20a_split_fbpa_broadcast_addr() for the rest of the chips.
gr_gv100_get_active_fbpa_mask() will first get the active FBPA mask and
generate the unicast list only for active FBPAs.
Bug 200398811
Jira NVGPU-556
Change-Id: Idd11d6e7ad7b6836525fe41509aeccf52038321f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694444
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
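A sketch of the expansion with the floorsweeping fix; base, stride, and
function names are invented for illustration:

    /* Sketch: expand one FBPA broadcast address into unicast addresses,
     * skipping floorswept units. */
    #include <stdint.h>

    #define FBPA_BASE   0x900000u  /* illustrative broadcast base */
    #define FBPA_STRIDE 0x4000u    /* illustrative per-unit stride */

    static uint32_t split_fbpa_broadcast(uint32_t addr, uint32_t active_mask,
                                         uint32_t num_fbpas, uint32_t *out)
    {
        uint32_t i, n = 0;
        uint32_t offset = addr - FBPA_BASE;

        for (i = 0; i < num_fbpas; i++) {
            if (active_mask & (1u << i))  /* fuse: active units only */
                out[n++] = FBPA_BASE + i * FBPA_STRIDE + offset;
        }
        return n;  /* entries written to the unicast table */
    }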
We currently use hard-coded values of NV_PERF_PMMGPC_CHIPLET_OFFSET and
NV_PMM_FBP_STRIDE, which are incorrect for Volta.
Add a new GR HAL get_pmm_per_chiplet_offset() to get the correct per-chip
value.
Set gr_gm20b_get_pmm_per_chiplet_offset() for older chips.
Set gr_gv11b_get_pmm_per_chiplet_offset() for Volta.
Use the HAL instead of hard-coded values wherever required.
Bug 200398811
Jira NVGPU-556
Change-Id: I947e7febd4f84fae740a1bc74f99d72e1df523aa
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690028
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
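A small sketch of the pattern: the chiplet stride comes from a per-chip HAL
rather than a constant (the offset values are illustrative):

    #include <stdint.h>

    static uint32_t per_chiplet_offset_gm20b(void) { return 0x1000; } /* illustrative */
    static uint32_t per_chiplet_offset_gv11b(void) { return 0x800;  } /* illustrative */

    /* callers compute chiplet registers from the HAL-provided stride */
    static uint32_t pmm_chiplet_reg(uint32_t base, uint32_t chiplet,
                                    uint32_t (*per_chiplet_offset)(void))
    {
        return base + chiplet * per_chiplet_offset();
    }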
We have new broadcast registers on Volta, and we need to generate correct
unicast addresses for them so that we can write those registers to the
context image.
Add a new GR HAL create_priv_addr_table() to do this conversion.
Set gr_gk20a_create_priv_addr_table() for older chips.
Set gr_gv11b_create_priv_addr_table() for Volta.
gr_gv11b_create_priv_addr_table() uses the broadcast flags to generate the
appropriate list of unicast registers for each broadcast register.
Bug 200398811
Jira NVGPU-556
Change-Id: Id53a9e56106d200fe560ffc93394cc0e976f455f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690027
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
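A sketch of what table creation does for one broadcast aperture; base, stride,
and masks are invented for illustration:

    /* Sketch: expand one broadcast register into the unicast table. */
    #include <stdint.h>

    #define GPC_BASE   0x500000u
    #define GPC_STRIDE 0x8000u

    static uint32_t expand_gpc_broadcast(uint32_t addr, uint32_t num_gpcs,
                                         uint32_t *table)
    {
        uint32_t gpc, n = 0;
        uint32_t off = addr & 0x7FFFu;  /* offset within the aperture */

        for (gpc = 0; gpc < num_gpcs; gpc++)
            table[n++] = GPC_BASE + gpc * GPC_STRIDE + off;
        return n;  /* unicast entries for the context image list */
    }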
With Volta we have more broadcast registers than on previous chips,
and we don't decode them right now in gr_gk20a_decode_priv_addr().
Add a new GR HAL decode_priv_addr() and set gr_gk20a_decode_priv_addr()
for all previous chips.
Add and use gr_gv11b_decode_priv_addr() for Volta.
gr_gv11b_decode_priv_addr() decodes all the broadcast registers and sets
the broadcast flags appropriately.
Define the following new broadcast flags:
PRI_BROADCAST_FLAGS_PMMGPC
PRI_BROADCAST_FLAGS_PMM_GPCS
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCA
PRI_BROADCAST_FLAGS_PMM_GPCGS_GPCTPCB
PRI_BROADCAST_FLAGS_PMMFBP
PRI_BROADCAST_FLAGS_PMM_FBPS
PRI_BROADCAST_FLAGS_PMM_FBPGS_LTC
PRI_BROADCAST_FLAGS_PMM_FBPGS_ROP
Bug 200398811
Jira NVGPU-556
Change-Id: Ic673b357a75b6af3d24a4c16bb5b6bc15974d5b7
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1690026
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
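A sketch of how the decode side feeds the table creation above: the address is
classified into broadcast flags, which the table code then expands (flag
values and range checks here are invented):

    #include <stdint.h>

    #define BC_NONE     0u
    #define BC_PMM_GPCS (1u << 0)
    #define BC_PMM_FBPS (1u << 1)

    static uint32_t decode_priv_addr(uint32_t addr)
    {
        uint32_t flags = BC_NONE;

        if ((addr & 0xFFF000u) == 0x180000u)  /* illustrative range check */
            flags |= BC_PMM_GPCS;
        if ((addr & 0xFFF000u) == 0x200000u)
            flags |= BC_PMM_FBPS;
        return flags;
    }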
- Dump timeout save0 and save1 even if they could be unreliable when
  fecs_tgt is set in save0. This is good to have for debug purposes.
- Add a priv_ring HAL for decode_error_code.
- Decode the FECS error code for supported error types.
Bug 1998067
Change-Id: I60cb6902d099df4a7df45fa624e44d9e0d46360f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1683014
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
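A minimal sketch of a decode_error_code-style helper; the codes and strings
are invented, not the real FECS error types:

    /* Sketch: map raw FECS error codes to readable strings. */
    static const char *decode_fecs_error(unsigned int code)
    {
        switch (code) {
        case 1:  return "illustrative: priv timeout";
        case 2:  return "illustrative: method error";
        default: return "unsupported error code";
        }
    }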
Current code does not compute priv error register offsets
properly. This leads to invalid decoding of priv errors, and
can also trigger additional priv errors.
- add a GPU_LIT_GPC_PRIV_STRIDE define
- return proj_gpc_priv_stride for GPU_LIT_GPC_PRIV_STRIDE in the HALs
- use GPU_LIT_GPC_PRIV_STRIDE instead of GPU_LIT_GPC_STRIDE in
  g->ops.priv_ring.isr() to compute priv error register offsets
Bug 2093058
Change-Id: Ia7c36ccba0441126784bb0e00452f2cf1196ef71
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1682118
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Seema Khowala <seemaj@nvidia.com>
GVS: Gerrit_Virtual_Submit
Tested-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
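A sketch of why the stride choice matters; the base and stride values are
illustrative, not the real proj_* constants:

    #include <stdint.h>

    #define GPC_BASE        0x500000u
    #define GPC_STRIDE      0x8000u  /* full GPC aperture stride */
    #define GPC_PRIV_STRIDE 0x800u   /* priv error register stride */

    /* buggy form: scales by the general GPC stride, so the computed
     * register lands outside the priv error range and reading it can
     * itself raise another priv error */
    static uint32_t priv_error_reg_buggy(uint32_t gpc, uint32_t reg_off)
    {
        return GPC_BASE + gpc * GPC_STRIDE + reg_off;
    }

    /* fixed form: scales by the priv stride */
    static uint32_t priv_error_reg(uint32_t gpc, uint32_t reg_off)
    {
        return GPC_BASE + gpc * GPC_PRIV_STRIDE + reg_off;
    }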
GV100 ucode is changed so that it expects the LIST_nv_perf_pma_ctx_reg
list in the ctxsw buffer to be 256-byte aligned, but the same change is
not applied to other chips' ucodes.
Add a new HAL (*add_ctxsw_reg_perf_pma) to configure the PMA register
list, and define a common HAL gr_gk20a_add_ctxsw_reg_perf_pma() for all
chips except GV100.
Define a separate HAL for GV100, gr_gv100_add_ctxsw_reg_perf_pma(), and
fix the required alignment in this function.
Bug 1998067
Change-Id: Ie172fe90e2cdbac2509f2ece953cd8552e66fc56
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676655
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
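A sketch of the GV100-specific alignment step (function name and surrounding
bookkeeping are invented; the 256-byte figure is from the message above):

    #include <stdint.h>

    #define ALIGN_UP(x, a) (((x) + ((a) - 1)) & ~((uint32_t)(a) - 1))

    /* Sketch: align the running ctxsw buffer offset before placing the
     * PMA register list, as the GV100 ucode expects. */
    static uint32_t place_perf_pma_list(uint32_t offset, uint32_t list_size)
    {
        offset = ALIGN_UP(offset, 256);  /* GV100 ucode expectation */
        /* ...map the list at 'offset'... */
        return offset + list_size;       /* next free offset */
    }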
For LIST_nv_pm_fbpa_ctx_regs, we currently call
add_ctxsw_buffer_map_entries_subunits() to add registers corresponding
to all the FBPAs.
But while configuring the total number of registers, we do not consider
floorswept FBPAs, which causes misalignment in subsequent lists on GV100.
Fix this by reading disabled/floorswept FBPAs from fuse and considering
only those FBPAs which are active on GV100.
Add a new HAL (*add_ctxsw_reg_pm_fbpa) to support this setting and define
a common HAL gr_gk20a_add_ctxsw_reg_pm_fbpa() for all chips except GV100.
Define a GV100-specific gr_gv100_add_ctxsw_reg_pm_fbpa() with the above
implementation that accounts for floorsweeping.
Bug 1998067
Change-Id: Id560551bb0b8142791c117b6d27864566c90b489
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676654
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
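A sketch of the sizing fix; all names are invented, only the counting logic is
the point:

    /* Sketch: size the per-FBPA register list from the fuse mask rather
     * than the full FBPA count. */
    #include <stdint.h>

    static uint32_t pm_fbpa_list_entries(uint32_t disabled_fuse_mask,
                                         uint32_t num_fbpas,
                                         uint32_t regs_per_fbpa)
    {
        uint32_t i, active = 0;

        for (i = 0; i < num_fbpas; i++)
            if (!(disabled_fuse_mask & (1u << i)))
                active++;
        /* counting floorswept units here is what misaligned later lists */
        return active * regs_per_fbpa;
    }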
When generating the aperture field for the PDE being programmed,
we must use the next PD, not the current PD. This is important for
cases on the dGPU where VIDMEM runs out.
In such cases the page table may reside in both VIDMEM and SYSMEM.
Thus, if a PDE points to a PD in a different type of memory
(VIDMEM -> SYSMEM or SYSMEM -> VIDMEM), then the aperture will not
be programmed correctly if the code uses the current PD to pick
the next PD's aperture.
Bug 2082475
Change-Id: Ic1a8d1e2c2237712039dc298b97095d3bbc6c844
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1676831
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
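A sketch of the fix; the encodings are invented, the point is which PD the
aperture bits are derived from:

    /* Sketch: the PDE's aperture bits must describe where the *next*
     * level lives, since that is what the PDE points at. */
    #include <stdint.h>

    enum aperture { APERTURE_SYSMEM, APERTURE_VIDMEM };

    struct pd { enum aperture aperture; };

    static uint32_t pde_aperture_field(const struct pd *next_pd)
    {
        /* the bug derived this from the current PD's aperture, which is
         * wrong whenever the two levels live in different memory types */
        return next_pd->aperture == APERTURE_VIDMEM
            ? 0x1u : 0x2u;  /* illustrative encodings */
    }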
PMU ucode records the supported feature list for a particular chip
as a support mask sent via PMU_PG_PARAM_CMD_GR_INIT_PARAM.
It then enables a selected set of features through an enable mask
sent via the PMU_PG_PARAM_CMD_SUB_FEATURE_MASK_UPDATE cmd.
Until now only the ELPG state machine mask was enabled, so only the
ELPG state machine was executed while other crucial steps in the
ELPG entry/exit sequence were skipped.
Bug 200392620
Bug 200296076
Change-Id: I5e1800980990c146c731537290cb7d4c07e937c3
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665767
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Tested-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
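A sketch of widening the enable mask; the bit names below are invented (the
real bits live in the PMU PG interface headers):

    #include <stdint.h>

    #define PG_FEATURE_ELPG_SM (1u << 0)
    #define PG_FEATURE_STEP_A  (1u << 1)  /* illustrative extra steps */
    #define PG_FEATURE_STEP_B  (1u << 2)

    static uint32_t pg_enable_mask(void)
    {
        /* before: only PG_FEATURE_ELPG_SM, so the other entry/exit
         * steps advertised in the support mask never ran */
        return PG_FEATURE_ELPG_SM | PG_FEATURE_STEP_A | PG_FEATURE_STEP_B;
    }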
We currently allocate an nvgpu-managed syncpoint in c->sync and share it
with user space.
But to avoid conflicts between user space and kernel space increments,
allocate a separate "client managed" syncpoint for user space in
c->user_sync.
Add a new API nvgpu_nvhost_get_syncpt_client_managed() to request a
client-managed syncpoint from nvhost. Note that nvhost/nvgpu do not keep
track of the MAX/threshold value of such a syncpoint.
Update gk20a_channel_syncpt_create() to receive a flag indicating whether
a user space syncpoint is required or not.
Unset NVGPU_SUPPORT_USER_SYNCPOINT for gp10b since we don't want to
allocate two syncpoints per channel on that platform.
For gv11b, once we move to user space submits, support for c->sync will
be dropped, so we keep using only one syncpoint per channel.
Bug 200326065
Jira NVGPU-179
Change-Id: I78d94de4276db1c897ea2a4fe4c2db8b2a179722
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665828
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
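A sketch of the create-path flag; the struct names and exact signature are
paraphrased from the message, not copied from the source:

    #include <stdbool.h>

    struct channel;
    struct channel_sync;

    /* Sketch of the updated creator:
     *   kernel path: channel_syncpt_create(c, false) -> c->sync
     *   user path:   channel_syncpt_create(c, true)  -> c->user_sync,
     * the latter backed by a client-managed syncpoint whose MAX value
     * nvhost/nvgpu do not track. */
    struct channel_sync *channel_syncpt_create(struct channel *c,
                                               bool user_managed);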
Also revert other changes related to IO coherence. This may be the
culprit in a recent dev-kernel lockdown.
Bug 2070609
Change-Id: Ida178aef161fadbc6db9512521ea51c702c1564b
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1665914
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Srikar Srimath Tirumala <srikars@nvidia.com>
When using a coherent DMA API we must make sure to program
any aperture fields with the coherent aperture setting. To
do this, the nvgpu_aperture_mask() function was modified to
take a third aperture mask argument, a coherent setting, so
that code can use this function to generate coherent aperture
settings.
The aperture choice is somewhat tricky: the default version
of this function uses the state of the DMA API to determine
what aperture to use for SYSMEM, either coherent or
non-coherent, internally. Thus a kernel user need only specify
the normal nvgpu_mem struct and the correct mask should be
chosen. Because many nvgpu_mem structs are not created
directly from the DMA API wrapper, it's easier to translate
SYSMEM to SYSMEM_COH after creation.
However, the GMMU mapping code will encounter buffers from
userspace with different coherency attributes than the DMA
API. Thus __nvgpu_aperture_mask() really respects the
aperture setting passed in regardless of the DMA API state.
This aperture setting is pulled from NVGPU_VM_MAP_IO_COHERENT
since this is either passed in from userspace or set by the
kernel when using coherent DMA. The aperture field in attrs
is upgraded to coherent if this flag is set.
This change also adds a coherent sysmem mask everywhere that
it can. There are a couple of places that do not have a
coherent register field defined yet. These need to eventually
be defined and added.
Lastly, the aperture mask code has been moved from the Linux
vm.c code to the general vm.c code since this function has
no Linux dependencies.
Note: depends on https://git-master.nvidia.com/r/1664536 for
new register fields.
JIRA EVLR-2333
Change-Id: I4b347911ecb7c511738563fe6c34d0e6aa380d71
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1655220
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
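A sketch of the three-mask selection described above; the enum and parameter
names are paraphrased, not the exact nvgpu signature:

    #include <stdint.h>

    enum aperture { AP_SYSMEM, AP_SYSMEM_COH, AP_VIDMEM };

    static uint32_t aperture_mask(enum aperture ap,
                                  uint32_t sysmem_mask,
                                  uint32_t sysmem_coh_mask,
                                  uint32_t vidmem_mask)
    {
        switch (ap) {
        case AP_SYSMEM:     return sysmem_mask;
        case AP_SYSMEM_COH: return sysmem_coh_mask; /* coherent DMA or
                                                     * IO-coherent maps */
        case AP_VIDMEM:     return vidmem_mask;
        }
        return 0;
    }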
The value of NV_PERF_PMASYS_MEM_BUMP is different on Volta, and
because of that NVGPU_IOCTL_CHANNEL_CYCLE_STATS_SNAPSHOT_CMD_FLUSH
did not behave correctly on GV11B.
This patch adds an instance of css_hw_set_handled_snapshots
for Volta to fix that.
The patch also renames css_hw_set_handled_snapshots
to gk20a_css_hw_set_handled_snapshots to make it clearer
that the function is arch-dependent.
Bug 1960846
Change-Id: I92c35a862ecd7f918dd1458c086fc7ae42ca8fc5
Signed-off-by: Martin Radev <mradev@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1662427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>