To finish OS unification of the submit path, move the
gk20a_submit_channel_gpfifo* functions to a file that is also
accessible outside the Linux-specific code.
Also change the prefix of the submit functions from gk20a_ to nvgpu_.
Jira NVGPU-705
Change-Id: I8ca355d1eb69771fb016c7a21fc7f102ca7967d7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1760421
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- The is_preempt_pending HAL does not need the timeout_rc_type input
param: on Volta, reset_eng_bitmask is saved if the preempt times out,
and on legacy chips recovery triggers an MMU fault whose handler takes
care of resetting the engines.
- On Volta, no special input param is needed to differentiate preempt
polling in the normal case from preempt polling during recovery. The
recovery path uses the preempt_ch_tsg HAL to issue the preempt; this
HAL does not trigger recovery if the preempt times out.
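For reference, a sketch of the op after the parameter is dropped; the
struct context and the remaining argument names are assumptions based
on the existing code, not part of this message:

    /* in struct gpu_ops (surrounding members elided) */
    struct {
        /* before:
         * int (*is_preempt_pending)(struct gk20a *g, u32 id,
         *      unsigned int id_type, unsigned int timeout_rc_type);
         */
        int (*is_preempt_pending)(struct gk20a *g, u32 id,
                                  unsigned int id_type);
    } fifo;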
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: Ie76a18ae0be880cfbeee615859a08179fb974fa8
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1709799
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- While polling for engine preempt done, reset the engine only if an
engine stall interrupt is pending. Also stop polling for engine
preempt done if an engine interrupt is pending.
- Add a maximum retry count to the PBDMA and engine preempt-done
polling loops on pre-silicon platforms.
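A rough sketch of the bounded poll described above; the helper names,
the retry constant, and the delay handling are illustrative
assumptions, not the actual implementation:

    int ret = -ETIMEDOUT;
    u32 retries;

    /* pre-silicon: cap the number of polling iterations */
    for (retries = 0; retries < PRE_SI_MAX_POLL_RETRIES; retries++) {
        if (eng_intr_pending(g, act_eng_id)) {
            /* an engine interrupt is pending: stop polling for preempt done */
            break;
        }
        if (eng_preempt_done(g, act_eng_id)) {
            ret = 0;
            break;
        }
        nvgpu_usleep_range(delay, delay * 2);
    }

    if (ret != 0 && eng_stall_intr_pending(g, act_eng_id)) {
        /* reset the engine only when a stall interrupt is actually pending */
        mark_engine_for_reset(g, act_eng_id);
    }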
Bug 2125776
Bug 2108544
Bug 2105322
Bug 2092051
Bug 2048824
Bug 2043838
Bug 2039587
Bug 2028993
Bug 2029245
Bug 2065990
Bug 1945121
Bug 200401707
Bug 200393631
Bug 200327596
Change-Id: I66b07be9647f141bd03801f83e3cda797e88272f
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694137
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The biggest remaining Linuxism in the submit path is the
copy_from_user() calls that read the gpfifo entries into the HW-visible
buffer. Abstract away the copy of one such segment starting at a given
offset, and keep the wraparound logic and the vidmem proxy in the core
submit path.
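A sketch of how the core-side wraparound could look once the
per-segment copy is abstracted; copy_gpfifo_segment() and the variable
names are made up for illustration:

    /* start: current put index, n: entries to copy, len: ring length */
    if (start + n > len) {
        u32 first = len - start;

        err = copy_gpfifo_segment(c, start, 0, first);        /* up to the end */
        if (err == 0)
            err = copy_gpfifo_segment(c, 0, first, n - first); /* wrap to head */
    } else {
        err = copy_gpfifo_segment(c, start, 0, n);
    }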
Jira NVGPU-705
Change-Id: I0c6438045c695e5e3f5da4fbc0c92d2c6e7f32cb
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730480
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a remove_gr_sys() op to gpu_ops to reverse the steps done in
create_gr_sysfs().
Make gv11b_tegra_remove() specific to gv11b so that its sysfs nodes are
removed properly; this also allows gv11b-specific remove steps.
Also update the platform remove function of the dGPU,
nvgpu_pci_tegra_remove(), to remove sysfs nodes. This brings it to
parity with the iGPU platform remove.
Bug 1987855
Change-Id: Ibbaffac5c24346709347f86444a951461894354d
Signed-off-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1735987
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Removed Linux-specific includes and replaced them with nvgpu/*.h
headers.
- Made "css_hw_get_pending_snapshot" and "css_hw_get_overflow_status"
global instead of static.
- Added get_pending_snapshot and get_overflow_status to ops->css.
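A sketch of the resulting wiring, assuming the usual per-chip gops
assignment pattern (the assignment site is not named in this message):

    g->ops.css.get_pending_snapshot = css_hw_get_pending_snapshot;
    g->ops.css.get_overflow_status  = css_hw_get_overflow_status;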
JIRA: VQRM-3699
Change-Id: I177904c263e143b414924c2c28ad6fd3cfd00132
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1732783
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below HALs to set up the mmu_fault configuration registers and
to read the fault information registers, and set them on Volta:
gops.fb.write_mmu_fault_buffer_lo_hi()
gops.fb.write_mmu_fault_buffer_get()
gops.fb.write_mmu_fault_buffer_size()
gops.fb.write_mmu_fault_status()
gops.fb.read_mmu_fault_buffer_get()
gops.fb.read_mmu_fault_buffer_put()
gops.fb.read_mmu_fault_buffer_size()
gops.fb.read_mmu_fault_addr_lo_hi()
gops.fb.read_mmu_fault_inst_lo_hi()
gops.fb.read_mmu_fault_info()
gops.fb.read_mmu_fault_status()
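As an example of how the read HALs might be used, a hedged sketch of
consuming the fault buffer via its get/put indices; the argument lists,
local names, and handle_mmu_fault_entries() are assumptions:

    u32 get_idx = g->ops.fb.read_mmu_fault_buffer_get(g, index);
    u32 put_idx = g->ops.fb.read_mmu_fault_buffer_put(g, index);

    if (get_idx != put_idx) {
        /* new fault entries are available between get and put */
        handle_mmu_fault_entries(g, index, get_idx, put_idx);
        g->ops.fb.write_mmu_fault_buffer_get(g, index, put_idx);
    }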
Jira NVGPUT-13
Change-Id: Ia99568ff905ada3c035efb4565613576012f5bef
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1744063
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Starting with Volta, one TPC can have more than one SM, so
.record_sm_error_state needs to take the SM number as a parameter.
The logical TPC id should be read from gr_gpc0_gpm_pd_sm_id_r.
Let the function return the logical sm_id; the RM server needs it to
notify the client.
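A sketch of the updated op; only the added SM parameter and the logical
sm_id return come from this message, the rest of the argument list is
an assumption:

    /* returns the logical sm_id so the RM server can notify its client */
    u32 (*record_sm_error_state)(struct gk20a *g, u32 gpc, u32 tpc, u32 sm,
                                 struct channel_gk20a *fault_ch);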
Jira EVLR-2643
Bug 200405202
Change-Id: Iffaff05b89b1c5058616b8a6bf50dd73bd4e52f6
Signed-off-by: Richard Zhao <rizhao@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1742165
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below new HALs to allocate/map/commit the global context buffers:
gops.gr.alloc_global_ctx_buffers()
gops.gr.map_global_ctx_buffers()
gops.gr.commit_global_ctx_buffers()
Set these HALs for all supported GPUs. For now, re-use the below
existing APIs to back these HALs:
gr_gk20a_alloc_global_ctx_buffers()
gr_gk20a_map_global_ctx_buffers()
gr_gk20a_commit_global_ctx_buffers()
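For reference, a sketch of the wiring and the HAL-based call this
enables; the assignment site and the argument list are assumed:

    g->ops.gr.alloc_global_ctx_buffers  = gr_gk20a_alloc_global_ctx_buffers;
    g->ops.gr.map_global_ctx_buffers    = gr_gk20a_map_global_ctx_buffers;
    g->ops.gr.commit_global_ctx_buffers = gr_gk20a_commit_global_ctx_buffers;

    /* callers then go through the HAL rather than the gk20a API directly */
    err = g->ops.gr.alloc_global_ctx_buffers(g);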
Jira NVGPUT-27
Change-Id: I975a54e8d1716af057f982d543787748d35a256e
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1743362
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The VBIOS link_disable_mask should be sufficient to find the connected
links. Since the VBIOS is not yet updated with the correct mask, we
parse a DT node where the link_id is hardcoded. The DT method is not
scalable, as the same DT node is used for different dGPUs connected
over PCIe. Remove the DT parsing of the link id and use a HAL to get
the link_mask based on the GPU.
JIRA NVLINK-162
Change-Id: Idb7b639962928ce48711a0d7fc277c4c324bee91
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1738967
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The sequence of INIT* MINION DLCMDs varies between NVLink 2.0 and 2.2,
and the order is strict for 2.2. New DLCMDs have also been added to the
NVLink bringup sequence. Add a HAL to allow the sequence to be updated
for NVLink 2.2.
Old sequence:
INITLANEENABLE -> INITDLPL
New sequence:
INITDLPL -> INITDLPL_TO_CHIPA -> INITTL -> INITLANEENABLE
JIRA NVLINK-176
Change-Id: I49e0a726f56e7d6122ac4cddf0f0e021d16f1926
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1738329
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The earlier implementation of the railgate-disable config disabled
runtime pm during pm_init. This causes multiple issues:
1. The gpu rail is powered on as soon as the nvgpu driver probe is
called, while the actual gpu hw init may happen at a much later point
in time.
2. It breaks the railgate_enable sysfs node functionality:
railgate_enable does not work if runtime pm is disabled.
To avoid these issues when railgating is disabled, enable runtime pm
during pm_init and set the auto-suspend delay to a negative value (-1),
which disables runtime pm suspend calls.
Also fixed the following issues along with this:
1. Updated the railgate_enable debugfs implementation to use the
auto-suspend delay.
To disable railgating:
Set the auto-suspend delay to a negative value (-1), which disables
runtime pm suspend.
To enable railgating:
Set the auto-suspend delay to the railgate_delay value.
Also removed the redundant user_railgate_disabled gk20a device data and
replaced it with can_railgate wherever applicable.
2. Initialized the default railgate_delay to 500 ms to avoid railgate
on/off transitions when railgating is enabled from the disabled state.
3. Created the railgate_residency debugfs node irrespective of the
initial can_railgate state. This helps the case where railgating is
initially off and is later enabled through the sysfs node.
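A minimal sketch of the approach using the standard runtime-PM
autosuspend API; where exactly nvgpu makes these calls, and the local
names, are assumptions:

    #include <linux/pm_runtime.h>

    /* in pm_init: runtime PM stays enabled whether or not railgating is allowed */
    pm_runtime_use_autosuspend(dev);
    if (railgate_disabled)
        pm_runtime_set_autosuspend_delay(dev, -1);  /* negative: never auto-suspend */
    else
        pm_runtime_set_autosuspend_delay(dev, railgate_delay);  /* default 500 ms */
    pm_runtime_enable(dev);

The railgate_enable node then only needs to flip the delay between -1
and railgate_delay at runtime.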
Bug 2073029
Change-Id: I531da6d93ba8907e806f65a1de2a447c1ec2665c
Signed-off-by: seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1694944
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Before NVLink 2.2, the driver was responsible for setting the NVLink
clocks during NVLink initialization. For security, NVLink PLL handling
is moved to the MINION in NVLink 2.2, and the driver should stop
writing to these registers.
JIRA NVLINK-167
Change-Id: I18392a29c322da55053037bfde62c8f74ee75288
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1730597
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
RXDET is supported only on NVLink 2.2 devices and later. Add a HAL to
run RXDET selectively based on the chip. RXDET must be done after the
links are out of reset but before any other link-level initialization.
minion_send_cmd is also made non-static to support the RXDET
functionality.
JIRA NVLINK-160
Change-Id: Ic65b8dbc7281743f62072089ff3c805521ac9b38
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1729525
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
On some platforms we receive bundles with an address and 64-bit values
from the ucode. This patch adds support for handling the 64-bit values:
Add struct av64_gk20a to store an address and the corresponding 64-bit
value.
Add struct av64_list_gk20a to store a count and a list of av64_gk20a
entries.
Add API alloc_av64_list_gk20a() to allocate a list that supports 64-bit
values.
In gr_gk20a_init_ctx_vars_fw(), if NETLIST_REGIONID_SW_BUNDLE64_INIT is
seen, load the bundle64 state into the above local structures.
Add a new HAL gops.gr.init_sw_bundle64() and call it from
gk20a_init_sw_bundle() if defined.
Also load the bundle for simulation cases in
gr_gk20a_init_ctx_vars_sim().
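A sketch of the new structures; the exact field names and the lo/hi
split of the 64-bit value are assumptions:

    struct av64_gk20a {
        u32 addr;
        u32 value_lo;
        u32 value_hi;
    };

    struct av64_list_gk20a {
        struct av64_gk20a *l;
        u32 count;
    };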
Jira NVGPUT-96
Change-Id: I1ab7fb37ff91c5fbd968c93d714725b01fd4f59b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1736450
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add g->fifo_eng_timeout_us to define the engine timeout in
microseconds. It is initialized to GRFIFO_TIMEOUT_CHECK_PERIOD_US. In
the RM server case, it can be overridden with a value defined in the
device tree.
Jira EVLR-2674
Change-Id: I69ac2ce779fe575566c8ba48e8cd2d0e6b2d93cf
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1728391
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below two new HALs:
gops.fb.enable_hub_intr() to enable hub interrupts
gops.fb.disable_hub_intr() to disable hub interrupts
Set the existing APIs gv11b_fb_enable/disable_hub_intr() as these HALs.
Call the HALs everywhere instead of calling the APIs directly.
Jira NVGPUT-44
Change-Id: Id299c6d228733ed365a71be6b180186776cc1306
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1725977
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Timeouts are enabled only when timeouts_disabled_refcount reaches 0.
- The timeouts_enabled debugfs node changes from a u32 node to a file
node to avoid a race between enabling/disabling timeouts from debugfs
and from the ioctl.
- Unify setting timeouts_enabled from debugfs and from the ioctl.
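The intended refcount semantics, sketched; the helper names and the
atomic representation of the counter are assumptions:

    void nvgpu_timeouts_disable(struct gk20a *g)
    {
        nvgpu_atomic_inc(&g->timeouts_disabled_refcount);
    }

    void nvgpu_timeouts_enable(struct gk20a *g)
    {
        nvgpu_atomic_dec(&g->timeouts_disabled_refcount);
    }

    /* timeouts are in effect only once every "disable" has been balanced */
    bool nvgpu_is_timeouts_enabled(struct gk20a *g)
    {
        return nvgpu_atomic_read(&g->timeouts_disabled_refcount) == 0;
    }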
Bug 1982434
Change-Id: I54bab778f1ae533872146dfb8d80deafd2a685c7
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1588690
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Code related to the MC module is updated to handle MISRA violations:
Rule 10.1: Operands shall not be of an inappropriate essential type.
Rule 10.3: The value of an expression shall not be assigned to an
object with a narrower essential type.
Rule 10.4: Both operands of an operator shall have the same essential
type.
Rule 14.4: The controlling expression of an if statement shall have
essentially Boolean type.
Rule 15.6: The body of an if statement shall be enclosed in braces.
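An illustrative before/after for the kind of change these rules require
(not taken from the patch; the handler name is made up):

    /* before: non-Boolean controlling expression, no braces (rules 14.4, 15.6) */
    if (mc_intr_0 & mask)
        handle_unit_intr(g);

    /* after */
    if ((mc_intr_0 & mask) != 0U) {
        handle_unit_intr(g);
    }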
JIRA NVGPU-646
JIRA NVGPU-659
JIRA NVGPU-671
Change-Id: Ia7ada40068eab5c164b8bad99bf8103b37a2fbc9
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1720926
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below new HALs for BIOS operations:
gops.bios.devinit()
gops.bios.preos()
gops.bios.verify_devinit()
Export the existing APIs gp106_bios_devinit() and gp106_bios_preos()
and set them as the above HALs on gp106 and gv100.
Call the new HALs from gp106_bios_init(), if supported, instead of
calling the APIs directly.
Jira NVGPUT-48
Change-Id: Ic89f1c86cf6e3e0785b3663fe733b201d6f2f773
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1708382
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the NVGPU_IOCTL_CHANNEL_RESCHEDULE_RUNLIST ioctl to reschedule the
runlist and, optionally, check host and FECS status to preempt a
pending load of a context not belonging to the calling channel on the
GR engine during a context switch.
This should be called immediately after a submit to decrease the
worst-case submit-to-start latency for a high-interleave channel.
There is a less than 0.002% chance that the ioctl blocks for up to a
couple of milliseconds due to a race with the FECS status changing
while being read.
On GV11B it will always preempt the pending load of an unwanted
context, since there is no chance that the ioctl blocks on this race.
Also fix a bug with host reschedule for multiple runlists, which needs
to write both runlist registers.
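A hypothetical userspace usage sketch; only the ioctl name comes from
this message, the argument struct and flag are assumptions:

    struct nvgpu_reschedule_runlist_args args = {
        .flags = NVGPU_RESCHEDULE_RUNLIST_PREEMPT_NEXT,  /* assumed flag name */
    };

    /* submit work first, then immediately ask for a runlist reschedule */
    ret = ioctl(channel_fd, NVGPU_IOCTL_CHANNEL_RESCHEDULE_RUNLIST, &args);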
Bug 1987640
Bug 1924808
Change-Id: I0b7e2f91bd18b0b20928e5a3311b9426b1bf1848
Signed-off-by: David Li <davli@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1549050
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below new HALs:
gops.fifo.add_sema_cmd() to insert HOST semaphore acquire/release
methods
gops.fifo.get_sema_wait_cmd_size() to get the size of the acquire
command buffer
gops.fifo.get_sema_incr_cmd_size() to get the size of the release
command buffer
Separate out a new API, gk20a_fifo_add_sema_cmd(), that implements the
semaphore acquire/release sequence and set it as
gops.fifo.add_sema_cmd().
Add gk20a_fifo_get_sema_wait_cmd_size() and
gk20a_fifo_get_sema_incr_cmd_size() to return the respective command
buffer sizes.
Jira NVGPUT-16
Change-Id: Ia81a50921a6a56ebc237f2f90b137268aaa2d749
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1704490
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- Added vf change inject support for gv10x
- Updated clk_pmu_vf_inject() to fill required data
for pascal or volta vf change inject support
- Added new ctrl clk interface for gv10x clk domain list
- Added pmu interface for gv10x clk domain list &
vf change inject request
- Modified clk cmd, msg & RPC IDs to match
the chips_a_23609936 branch
Bug 200399373
Change-Id: Ib9dc10073386f63bdfd92110c7ec3e09b1c484ce
Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700746
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently the GPU sysfs node retrieves Fmax@Vmin by calling directly
into the Tegra DVFS driver, which introduces a compile-time dependency
on CONFIG_TEGRA_DVFS. In addition, an incorrect clock is used to access
the DVFS information.
Re-factored the sysfs node to use the generic GPU clock operation for
the Fmax@Vmin read. This fixes the bug in target clock selection and
allows the sysfs dependency on CONFIG_TEGRA_DVFS to be removed.
Updated the nvgpu_linux_get_fmax_at_vmin_safe operation itself so it
can be called on platforms that do not support Tegra DVFS, although 0
will still be returned as Fmax@Vmin on such platforms.
Bug 2045903
Change-Id: I32cce25320df026288c82458c913b0cde9ad4f72
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1710924
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Segregated the OS-agnostic functions from linux/sim.c and
linux/sim_pci.c into sim.c and sim_pci.c, while retaining the
OS-specific functions.
Renamed all gk20a_* APIs to nvgpu_*.
Renamed hw_sim_gk20a.h to nvgpu/hw_sim.h.
Moved hw_sim_pci.h to nvgpu/hw_sim_pci.h.
JIRA VQRM-2368
Change-Id: I040a6b12b19111a0b99280245808ea2b0f344cdd
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1702425
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The PMU ucode is updated to include the LDIV slowdown factor in the
gr_init_param command.
- Defined a new version, gr_init_param_v2.
- Updated the PMU FW version code.
- Set the LDIV slowdown factor to 0x1e by default.
- Added a sysfs entry to program the ldiv_slowdown factor at runtime.
Bug 200391931
Change-Id: Ic66049588c3b20e934faff3f29283f66c30303e4
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1674208
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a new HAL, gops.mc.isr_nonstall(), to handle nonstall interrupts.
Nonstall interrupts are already handled in nvgpu_intr_nonstall(), but
that API lives entirely in Linux-specific code.
Separate the OS-independent nonstall handling into a new API,
mc_gk20a_isr_nonstall(), set it as gops.mc.isr_nonstall() for all
existing chips, and call this HAL from nvgpu_intr_nonstall().
Jira NVGPUT-8
Change-Id: Iec6a56db03158a72a256f7eee8989a0a8a42ae2f
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1706589
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Implement a worker thread to replace the workqueue-based handling of
clk arbiter update callbacks.
The work items scheduled on the thread are of two types: update_vf_table
and arb_update. Previously, two separate workqueues handled these two
work structs; now a single worker thread processes both events.
If a work item of a particular type is already scheduled on the worker,
another instance of the same type will not be scheduled, which mirrors
the Linux workqueue behavior.
This also removes the dependency on Linux workqueues/work structs and
makes the code portable for use by QNX as well.
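A sketch of the "at most one pending instance per type" behavior; the
type, field, and helper names are illustrative, not the actual
implementation:

    enum clk_arb_work_item {
        CLK_ARB_WORK_UPDATE_VF_TABLE,
        CLK_ARB_WORK_ARB_UPDATE,
    };

    static void clk_arb_schedule_work(struct nvgpu_clk_arb *arb,
                                      enum clk_arb_work_item item)
    {
        /* mirror workqueue semantics: skip if this item type is already queued */
        if (test_and_set_bit(item, &arb->pending_work) == 0)
            wake_up_clk_arb_worker(arb);  /* illustrative wakeup helper */
    }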
Jira VQRM-3741
Change-Id: Ic27ce718c62c7d7c3f8820fbd1db386a159e28f2
Signed-off-by: Sourab Gupta <sourabg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1706032
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The clk_vin data structures are updated as a new calibration type
(V20) is added.
The GP106 header does not have a VIN calibration type; assume V10 if
the calibration type is not V20.
Add fuse calibration for the V20 type.
Bug 200399373
Change-Id: I9449de1ecb0d0873f3bc16f46660f93fab5b9eac
Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1687591
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the below two new HALs:
gops.fifo.runlist_hw_submit() to submit a new runlist to the hardware
gops.fifo.runlist_wait_pending() to wait until the runlist write is
successful
Set the existing API gk20a_fifo_runlist_wait_pending() as the
gops.fifo.runlist_wait_pending HAL.
Add a new API, gk20a_fifo_runlist_hw_submit(), which submits the
runlist to h/w, and set it as the gops.fifo.runlist_hw_submit HAL.
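A sketch of the resulting submit flow through the new HALs; the
argument lists are assumptions:

    /* write the new runlist to hardware, then wait for the write to take effect */
    g->ops.fifo.runlist_hw_submit(g, runlist_id, count, buffer_index);
    err = g->ops.fifo.runlist_wait_pending(g, runlist_id);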
Jira NVGPUT-20
Change-Id: Ic23f7d947e30883aca0b536de818e79e14733195
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1700548
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This patch contains cleanups meant to simplify the upcoming OS
abstraction patches for the sync framework. The substantial changes are
as follows:
1) sync_timeline is moved out of gk20a_fence into struct
nvgpu_channel_linux. New function pointers, named os_fence_framework,
are created to provide OS-independent methods for enabling/disabling
the timeline. These function pointers live in struct os_channel under
struct gk20a.
2) Construction of the channel_sync requires
nvgpu_finalize_poweron_linux() to be invoked before
nvgpu_init_mm_ce_context(). Hence, these methods are moved out of
gk20a_finalize_poweron() and invoked after
nvgpu_finalize_poweron_linux().
3) sync_fence creation is delinked from fence construction and moved
into channel_sync_gk20a's channel_incr methods. These sync_fences are
mainly associated with post_fences.
4) If userspace requires sync_fences to be constructed, we try to
obtain an fd before gk20a_channel_submit_gpfifo() instead of afterwards,
to avoid potential after-effects of a duplicate work submission caused
by a failure to obtain an unused fd.
JIRA NVGPU-66
Change-Id: I42a3e4e2e692a113b1b36d2b48ab107ae4444dfa
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1678400
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>