The compiler option -Wmissing-prototypes is being enabled globally in
the upstream Linux kernel and this causes build failures for nvgpu. The
build failures occur either because the driver is missing an include
file which provides the prototype, or because a function with no
external users is not declared static when it should be.
Fix the various build failures and enable -Wmissing-prototypes to
prevent any new instances from occurring.
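For reference, the two fix patterns look roughly like the following
(the file and function names here are made up for illustration; they
are not actual nvgpu symbols):

#include "demo_init.h"	/* assumed to declare: int demo_init(void); */

/* Pattern 1: the function has no external callers, so it is declared
 * static instead of being exported through a header.
 */
static int demo_helper(void)
{
	return 0;
}

/* Pattern 2: the function does have external callers, so its prototype
 * lives in demo_init.h; including that header satisfies
 * -Wmissing-prototypes.
 */
int demo_init(void)
{
	return demo_helper();
}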
Bug 4404965
Change-Id: I551922836e37b0c94c158232d6277f4053e9d2d3
Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/3027483
(cherry picked from commit e8cbf90db2d0db7277db9e3eec9fb88d69c7fcc7)
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/3035518
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following changes are added to fix the issue:
1) Threads with higher priority (e.g. RT) may preempt threads with
sched-normal priority. As a consequence, a higher-priority thread might
still not see the initialization of data done in another thread,
resulting in failures such as accessing a condition value before it is
initialized. Any initialization in the parent thread must be
accompanied by a barrier to make it visible to the other thread. Added
appropriate barriers to prevent reordering of the initialization in the
thread construction path (see the first sketch below).
2) There is a race condition between nvgpu_cond_signal() and
nvgpu_cond_destroy() in the asynchronous submit code and the
corresponding worker thread's process_item callback for NVS. This may
lead to data corruption, resulting in the above errors as well. Fixed
this by adding a refcount-based mechanism for sharing ownership of the
struct nvgpu_nvs_worker_item between the two threads (see the second
sketch below).
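For illustration, a minimal sketch of the publication barrier described
in 1); the structure and field names are made up, not the actual nvgpu
types:

#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/types.h>
#include <asm/barrier.h>

struct demo_thread_state {
	int data;
	bool created;
};

/* Parent thread: publish the data before the flag. */
static void demo_publish(struct demo_thread_state *s, int data)
{
	s->data = data;
	smp_wmb();		/* order the init before the flag write */
	WRITE_ONCE(s->created, true);
}

/* Possibly higher-priority thread: read the flag before the data. */
static int demo_consume(struct demo_thread_state *s)
{
	if (!READ_ONCE(s->created))
		return -EAGAIN;
	smp_rmb();		/* pairs with smp_wmb() in demo_publish() */
	return s->data;
}

And a sketch of the refcount-based ownership sharing described in 2),
using the kernel's generic kref; the item layout and the use of a
completion are assumptions, not the actual nvgpu_nvs_worker_item
definition:

#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/completion.h>

struct demo_work_item {
	struct kref ref;
	struct completion done;	/* stand-in for the nvgpu condition variable */
};

static void demo_work_item_release(struct kref *ref)
{
	struct demo_work_item *item =
		container_of(ref, struct demo_work_item, ref);

	/* Only the last owner frees the item, so signalling can never
	 * race with destruction.
	 */
	kfree(item);
}

/* Submitter: one reference for itself, one for the worker. */
static struct demo_work_item *demo_submit(void)
{
	struct demo_work_item *item = kzalloc(sizeof(*item), GFP_KERNEL);

	if (!item)
		return NULL;

	kref_init(&item->ref);		/* submitter's reference */
	init_completion(&item->done);
	kref_get(&item->ref);		/* worker's reference */
	/* ... queue the item to the worker here ... */
	return item;
}

/* Worker's process_item callback. */
static void demo_process_item(struct demo_work_item *item)
{
	complete(&item->done);		/* wake the submitter */
	kref_put(&item->ref, demo_work_item_release);
}

/* Submitter, after the wait. */
static void demo_wait_and_put(struct demo_work_item *item)
{
	wait_for_completion(&item->done);
	kref_put(&item->ref, demo_work_item_release);
}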
Bug 3778235
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: Ie9b9ba57bc1dcbb8780801be79863adc39690f72
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2771535
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-cert <svc-mobile-cert@nvidia.com>
Reviewed-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-by: Ketan Patil <ketanp@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
In Linux, threaded interrupts run with a real-time priority of 50.
This bumps the priority of bottom-half handlers above regular
kernel/user threads, even in process context.
In the current implementation the scheduler thread still runs at
normal kernel thread priority. In order to allow a seamless scheduling
experience, the worker thread is now created with a real-time priority
of 1. This allows the worker thread to run at a priority lower than
interrupt handlers but higher than regular kernel threads.
The Linux kernel allows setting this priority with the
sched_set_fifo() API. Only two modes are supported, i.e.
sched_set_fifo() and sched_set_fifo_low(). For more background, refer
to this article: https://lwn.net/Articles/818388/.
Added an implementation of nvgpu_thread_create_priority() for Linux
threads using the above two APIs.
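As a sketch of what such a helper can look like on top of those two
APIs (illustrative only; the real nvgpu_thread_create_priority()
signature and naming may differ):

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>

/*
 * Create a kthread and give it a SCHED_FIFO priority.
 * sched_set_fifo_low() sets priority 1: above SCHED_NORMAL kernel
 * threads but below the default threaded-IRQ priority of 50.
 * sched_set_fifo() picks a mid-range FIFO priority instead.
 */
static struct task_struct *demo_thread_create_rt(int (*threadfn)(void *data),
						 void *data, bool low_prio,
						 const char *name)
{
	struct task_struct *task = kthread_create(threadfn, data, "%s", name);

	if (IS_ERR(task))
		return task;

	if (low_prio)
		sched_set_fifo_low(task);
	else
		sched_set_fifo(task);

	wake_up_process(task);
	return task;
}

The scheduler worker thread described above would use the low-priority
variant so that it lands at real-time priority 1.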
Jira NVGPU-860
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Change-Id: I0a5a611bf0e0a5b9bb51354c6ff0a99e42e76e2f
Reviewed-on: https://git-master.nvidia.com/r/c/linux-nvgpu/+/2751736
Reviewed-by: Prateek Sethi <prsethi@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
GVS: Gerrit_Virtual_Submit <buildbot_gerritrpt@nvidia.com>
The pmu init thread typically returns immediately
without calling nvgpu_thread_should_stop().
pmu_pg_kill_task() checks if the thread is running, and
if it is, calls nvgpu_thread_stop().
However, there's a race condition where the init thread could
have exited between the time that kill_task() checked the
running flag and the time we actually stop the thread, leading
to a kernel crash.
Fix this by making the running flag in the nvgpu_thread struct
atomic. Both the thread proxy function and the thread_stop()
function will set the flag to false.
In the case of nvgpu_thread_proxy(), if the flag is already false,
then nvgpu_thread_stop() has already reset it, at which point we
just wait for nvgpu_thread_should_stop() to return true.
In the case of nvgpu_thread_stop(), if the flag is already false,
then the thread proxy function has already exited, and there is
nothing more to do.
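A rough sketch of that hand-off (the struct layout and names are
illustrative, not the actual nvgpu_thread definition; the flag is
assumed to be set to 1 when the thread is created):

#include <linux/atomic.h>
#include <linux/kthread.h>
#include <linux/delay.h>

struct demo_thread {
	struct task_struct *task;
	atomic_t running;
	int (*fn)(void *data);
	void *data;
};

static int demo_thread_proxy(void *arg)
{
	struct demo_thread *thread = arg;
	int ret = thread->fn(thread->data);

	if (!atomic_xchg(&thread->running, 0)) {
		/*
		 * demo_thread_stop() already cleared the flag and will
		 * call kthread_stop(); wait for it so the stop side
		 * never operates on a task that has already gone away.
		 */
		while (!kthread_should_stop())
			msleep(10);
	}
	return ret;
}

static void demo_thread_stop(struct demo_thread *thread)
{
	if (atomic_xchg(&thread->running, 0)) {
		/* The thread was still running; stop it and wait. */
		kthread_stop(thread->task);
	}
	/* Otherwise the proxy already cleared the flag: the thread has
	 * exited (or is exiting) on its own and must not be stopped
	 * again.
	 */
}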
Bug 2591298
Change-Id: I9ba6b63c30a5c3e1df11e790094836b44373122b
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2230358
GVS: Gerrit_Virtual_Submit
Reviewed-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>