The safety build does not support vidmem. This patch compiles out the
vidmem-related code - vidmem itself, DMA alloc, and the cbc/acr/pmu
allocations based on vidmem - along with the corresponding tests such
as pramin, page allocator and gmmu_map_unmap_vidmem.
Since vidmem is applicable only to dGPUs, the code is compiled out
using CONFIG_NVGPU_DGPU.
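The pattern is the usual Kconfig guard; a schematic fragment only, with
the vidmem init helper name being an assumption rather than the actual
diff:

#ifdef CONFIG_NVGPU_DGPU
	/* vidmem exists only on discrete GPUs */
	err = nvgpu_vidmem_init(mm);
	if (err != 0) {
		return err;
	}
#endif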
JIRA NVGPU-3524
Change-Id: Ic623801112484ffc071195e828ab9f290f945d4d
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2132773
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Remove the HALs below, since the corresponding functions are the same
on all platforms and are hardware independent:
g->ops.gr.enable_ctxsw()
g->ops.gr.disable_ctxsw()
g->ops.gr.reset()
Call the functions directly at all call sites.
Remove CONFIG_NVGPU_DEBUGGER guards from the places where these
functions are called, since they are not debugger dependent.
This also makes it possible to disable CONFIG_NVGPU_DEBUGGER while
keeping the recovery sequence intact.
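The call-site change is mechanical; a sketch, assuming the direct
common-code counterparts are named nvgpu_gr_disable_ctxsw() and
nvgpu_gr_enable_ctxsw():

/* before: indirection through HALs that were identical on every chip */
err = g->ops.gr.disable_ctxsw(g);
/* ... */
err = g->ops.gr.enable_ctxsw(g);

/* after: call the common functions directly (names assumed) */
err = nvgpu_gr_disable_ctxsw(g);
/* ... */
err = nvgpu_gr_enable_ctxsw(g);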
Jira NVGPU-3506
Change-Id: Id2b208ca23dc4667e78edcd8ad242a8558e0ff64
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2137255
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add branch coverage for:
- nvgpu_channel_kill
- nvgpu_channel_close
Also, modified gk20a_free_channel as follows:
- use nvgpu_assert to check ch->g (so that it can be tested)
- make sure g is non-NULL before calling nvgpu_get_poll_timeout
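A minimal sketch of the intended checks in gk20a_free_channel (context
trimmed; the return type of nvgpu_get_poll_timeout is assumed here to
be u32):

struct gk20a *g = ch->g;
u32 timeout;

/* testable assertion on the channel's gk20a pointer */
nvgpu_assert(g != NULL);

/* only dereference g once it is known to be valid */
if (g != NULL) {
	timeout = nvgpu_get_poll_timeout(g);
}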
Jira NVGPU-3480
Change-Id: Ie1fa231b022da47b9ef9022ae67a6b3d73c28a8b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2129724
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Restrict the OS-specific channel open to kernel mode submit only.
For Linux, g->os_channel.open is just a no-op.
For QNX, nvgpu creates the sync point notifier thread for job
tracking. This job completion tracking is only required for kernel
mode submit, so the sync point notifier thread is not needed for
user mode submit.
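Conceptually the guard looks like the fragment below; the usermode
predicate name is hypothetical and the hook signature is assumed:

/* job-completion tracking (e.g. the QNX sync point notifier thread)
 * is needed only for kernel mode submit, so skip the OS hook otherwise */
if (!usermode_submit_enabled && (g->os_channel.open != NULL)) {
	g->os_channel.open(ch);
}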
JIRA NVGPU-2944
Change-Id: Ie37824aac3609090ef9da378202dbc211a2eb0b3
Signed-off-by: Lakshmanan M <lm@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2131386
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Make the nvgpu_mutex_init function return void.
In the Linux case this does not change anything, since mutex_init
returns void.
For POSIX, we assert() and die if pthread_mutex_init fails.
This removes the need to add error injection for every
nvgpu_mutex_init call in the driver.
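For the POSIX backend this amounts to roughly the following sketch; the
wrapper layout and member name are assumptions:

#include <assert.h>
#include <pthread.h>

struct nvgpu_mutex {
	pthread_mutex_t lock;	/* member name assumed */
};

void nvgpu_mutex_init(struct nvgpu_mutex *mutex)
{
	int err = pthread_mutex_init(&mutex->lock, NULL);

	/* dying here is preferable to making every caller handle an
	 * init failure that cannot reasonably be recovered from */
	assert(err == 0);
}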
Jira NVGPU-3476
Change-Id: Ibc801116dc82cdfcedcba2c352785f2640b7d54f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2130538
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The vm object's num_user_mapped_buffers was declared as an int even
though it holds an unsigned value. Being signed, it required a cast to
unsigned when calling nvgpu_big_zalloc(), which caused a CERT-C INT31
violation. So, avoid the cast and use an unsigned type, and fix the
related INT30 violations involving num_user_mapped_buffers as well.
To avoid introducing new MISRA/CERT-C violations, update the upstream
user of these changes, fifo/channel.c, and make its equivalent use of
this value, num_mapped_buffers, a u32 as well.
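The type change removes the cast entirely; the allocation call below is
shown schematically, not as the exact diff:

/* before: signed counter forced a cast at the allocation site */
int num_user_mapped_buffers;
/* ... */
buffers = nvgpu_big_zalloc(g, sizeof(*buffers) *
		(unsigned int)vm->num_user_mapped_buffers);

/* after: unsigned counter, no cast, no INT31 concern */
u32 num_user_mapped_buffers;
/* ... */
buffers = nvgpu_big_zalloc(g, sizeof(*buffers) *
		vm->num_user_mapped_buffers);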
JIRA NVGPU-3517
Change-Id: I6f6d9dfe4a0ee16789b8cd17b908a3f3f9c4a40c
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2127427
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This flag is added to compile out the following features from the
safety build:
-set_preemption_mode
-channel_enable
-channel_disable
-channel_preempt
-channel_force_reset
-tsg_enable
-tsg_disable
-tsg_preempt
-tsg_event_id_ctrl
-post_event_id
JIRA NVGPU-3516
Change-Id: I935841db766f192f62598240c0e245a2959555be
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2126829
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_channel_bind_setup is split into user_mode and kernel_mode
specific initialization. This avoids allocating resources specific to
kernel_mode submits when the channel supports user_mode submits, and
vice versa. In the future, the kernel_mode specific initialization
will be compiled out for safety builds.
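Schematically the split looks like the sketch below; the argument
struct, the predicate and the helper names are hypothetical:

int nvgpu_channel_bind_setup(struct channel_gk20a *c,
		struct nvgpu_setup_bind_args *args)
{
	if (channel_uses_usermode_submit(args)) {
		/* usermode submit: userspace owns gpfifo/userd, so none
		 * of the kernel-mode tracking resources are allocated */
		return nvgpu_channel_setup_usermode(c, args);
	}

	/* kernel mode submit: gpfifo tracking, priv_cmd buffers, job
	 * queue. This is the path a safety build can later compile out. */
	return nvgpu_channel_setup_kernelmode(c, args);
}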
Jira NVGPU-3479
Change-Id: Idf158abc24a0b969a5c40f54fee6be4301a08773
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2122844
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Renamed gk20a_channel_* APIs to nvgpu_channel_* APIs.
Removed the unused channel API gk20a_wait_channel_idle().
Renamed nvgpu_channel_free_usermode_buffers in os/linux-channel.c to
nvgpu_os_channel_free_usermode_buffers to avoid a conflict with the
API of the same name in the channel unit.
Jira NVGPU-3248
Change-Id: I21379bd79e64da7e987ddaf5d19ff3804348acca
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2121902
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Check the return value of the function below where it is needed, and
add a (void) cast where the return value is intentionally ignored:
update_gp_get
Rename:
nvgpu_get_gp_free_count -> nvgpu_channel_update_gpfifo_get_and_get_free_count
nvgpu_gp_free_count -> nvgpu_channel_get_gpfifo_free_count
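Where a caller does not need the result, the call is cast to void so
that MISRA C-2012 Rule 17.7 is still satisfied (schematic, argument
list assumed):

(void)update_gp_get(g, ch);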
JIRA NVGPU-3388
Change-Id: I6e2265882c1f34e3bb47eaeac7a2c5a9fbe9b4eb
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2115784
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
channel.c calling nvgpu_gr_flush_channel_tlb() creates a circular
dependency between gr and fifo. Avoid this by moving the channel TLB
related data into struct nvgpu_gr_intr in gr_intr_priv.h and
initializing that data in gr_intr.c.
Created the following new gr intr HAL and call it from channel.c:
void (*flush_channel_tlb)(struct gk20a *g);
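From channel.c the flush is now invoked through the ops table, roughly
as follows (the exact ops path is an assumption):

g->ops.gr.intr.flush_channel_tlb(g);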
JIRA NVGPU-3214
Change-Id: I2d259bf52db967273030680f50065af94a17f417
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109274
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The alloc_inst_block() function in the MM HAL is not really a HAL. It
does not abstract any HW accesses; it just wraps a DMA allocation.
As such, remove it from the HAL and move the single gk20a implementation
to common/mm/mm.c as nvgpu_alloc_inst_block().
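The moved function is essentially a thin wrapper over the DMA
allocator; a sketch believed to mirror the gk20a code, with the size
helper and the logging treated as assumptions:

int nvgpu_alloc_inst_block(struct gk20a *g, struct nvgpu_mem *inst_block)
{
	int err;

	/* plain DMA allocation: no HW register access anywhere */
	err = nvgpu_dma_alloc(g, ram_in_alloc_size_v(), inst_block);
	if (err != 0) {
		nvgpu_err(g, "%s: memory allocation failed", __func__);
		return err;
	}

	return 0;
}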
JIRA NVGPU-2042
Change-Id: I0a586800a11cd230ca43b85f94a35de107f5d1e1
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2109049
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- nvgpu_channel_resume_all_serviceable_ch always returns 0, so it is
  safe to change the return type of this function to void.
- This is required to fix a MISRA violation of MISRA C-2012 Rule 17.7:
  the value returned by a function having non-void return type shall
  be used.
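The change itself is mechanical (the parameter list is assumed):

/* before */
int nvgpu_channel_resume_all_serviceable_ch(struct gk20a *g);

/* after: nothing can fail, so callers have nothing to check and
 * Rule 17.7 no longer applies */
void nvgpu_channel_resume_all_serviceable_ch(struct gk20a *g);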
JIRA NVGPU-3140
Change-Id: I12930ddb21b506266664aac8905326204e9483eb
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2106989
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the following HALs from fifo to tsg
- set_timeslice
- default_timeslice_us
Renamed
- gk20a_tsg_set_timeslice -> nvgpu_tsg_set_timeslice
- min_timeslice_us -> tsg_timeslice_min_us
- max_timeslice_us -> tsg_timeslice_max_us
Scale the timeslice to take the PTIMER clock into account in
nvgpu_runlist_append_tsg.
Removed gk20a_channel_get_timescale_from_timeslice and instead moved
the timeout and scale computation into the runlist HAL used when
building the TSG entry:
- runlist.get_tsg_entry
Use the ram_rl_entry_* accessors instead of hard-coded values
for the default and max timeslices.
Added #defines for min, max and default timeslices.
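The timeout/scale computation is conceptually a mantissa/exponent
encoding of the timeslice; a simplified, hypothetical helper, with the
PTIMER scaling factor and the field width as stand-in macros:

static void tsg_timeslice_to_hw(u64 timeslice_us, u32 *timeout, u32 *timescale)
{
	/* convert microseconds into PTIMER ticks (factor is a placeholder) */
	u64 value = timeslice_us * PTIMER_TICKS_PER_US;
	u32 shift = 0U;

	/* shift right until the value fits the runlist entry timeout field */
	while (value > RL_ENTRY_TIMEOUT_MAX) {
		value >>= 1U;
		shift++;
	}

	*timeout = (u32)value;
	*timescale = shift;
}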
Jira NVGPU-3156
Change-Id: I447266c087c47c89cb6a4a7e4f30acf834b758f0
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2100052
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently, both clk_arb and channels use their own implementations of
a background worker. These implementations are almost identical, so
they can be extracted into a single self-contained unit named
nvgpu_worker.
Another advantage of a single worker unit is that it avoids
duplicating the unit tests for this functionality in other units.
The channel and clk_arb units provide their own specific behaviour
via an ops interface named nvgpu_worker_ops, which is part of the
nvgpu_worker struct.
The following high level APIs are exposed by nvgpu_worker:
nvgpu_worker_should_stop
nvgpu_worker_enqueue
nvgpu_worker_init
nvgpu_worker_deinit
The nvgpu_worker_ops contains the following function pointers:
pre_process
wakeup_early_exit
wakeup_post_process
wakeup_process_item
wakeup_condition
wakeup_timeout
The specific code in channel and clk_arb is changed to use the common
implementation instead of the separate per-unit implementations.
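Put together, the unit's interface is approximately the following; only
the names come from the lists above, the parameter lists are
assumptions:

struct nvgpu_worker_ops {
	void (*pre_process)(struct nvgpu_worker *worker);
	bool (*wakeup_early_exit)(struct nvgpu_worker *worker);
	void (*wakeup_post_process)(struct nvgpu_worker *worker);
	void (*wakeup_process_item)(struct nvgpu_list_node *work_item);
	bool (*wakeup_condition)(struct nvgpu_worker *worker);
	u32 (*wakeup_timeout)(struct nvgpu_worker *worker);
};

bool nvgpu_worker_should_stop(struct nvgpu_worker *worker);
int nvgpu_worker_enqueue(struct nvgpu_worker *worker,
		struct nvgpu_list_node *work_item);
int nvgpu_worker_init(struct gk20a *g, struct nvgpu_worker *worker,
		const struct nvgpu_worker_ops *ops);
void nvgpu_worker_deinit(struct nvgpu_worker *worker);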
Jira NVGPU-3101
Change-Id: I14a0bba6a3d61a642b858dec70d5818d5a0472a4
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2090475
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>