SPDX-FileCopyrightText: Copyright (c) 2021-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: LicenseRef-NvidiaProprietary
NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
property and proprietary rights in and to this material, related
documentation and any modifications thereto. Any use, reproduction,
disclosure or distribution of this material and related documentation
without an express license agreement from NVIDIA CORPORATION or
its affiliates is strictly prohibited.
NvSciStream Event Loop Driven Sample App - README
---
# nvscistream_event_sample - NvSciStream Sample App
## Description
This directory contains an NvSciStream sample application that
supports a variety of use cases, using an event-loop driven model.
Once the stream is fully connected, all further setup and streaming
operations are triggered by events, processed either by a single
NvSciEvent-driven thread or by separate threads that wait for events
on each block. The former is the preferred approach for implementing
NvSciStream applications. In addition to the events that NvSci itself
generates, any other event that can be bound to an NvSciEvent can be
added to the event loop. This allows for robust applications that
handle events correctly regardless of the order in which they occur.
To use this sample for writing your own applications:
* See main.c for examples of how to do top level application setup and
how to select the blocks needed for your use case and connect them
all together.
* See the descriptions in the usecase*.h files to determine which use cases
involve the producer and consumer engines that you are interested in.
* See the appropriate block_*.c files for examples of creating the
necessary blocks and handling the events that they encounter.
* See the block_producer_*.c and block_consumer_*.c files for examples of how
to map the relevant engines to and from NvSci.
* See the appropriate event_loop_*.c file for your chosen event handling
method.
## Build the application
The NvSciStream sample includes source code and a Makefile.
Navigate to the sample application directory to build the application:
make clean
make
## Examples of how to run the sample application:
* NOTE:
* Inter-process and inter-chip test cases must be run with sudo.
* NvMedia/CUDA stream (use case 2) of the sample application is not supported
on x86 and Jetson Linux devices.
* Inter-chip use cases are not supported on Jetson Linux devices.
* Update the NvSciIpc/PCIe endpoint names in the commands below to match
your platform.
Single-process, single-consumer CUDA/CUDA stream that uses the default event
service:
./nvscistream_event_sample
Single-process, single-consumer stream that uses threaded event handling:
./nvscistream_event_sample -e t
Single-process NvMedia/CUDA stream with YUV format:
./nvscistream_event_sample -u 2 -s y
Single-process NvMedia/CUDA stream with three consumers, where the second
consumer uses mailbox mode:
./nvscistream_event_sample -u 2 -m 3 -q 1 m
Multi-process CUDA/CUDA stream with three consumers, one in the same
process as the producer and the other two sharing a second process. The
first and third consumers use mailbox mode:
./nvscistream_event_sample -m 3 -p -c 0 -q 0 m &
./nvscistream_event_sample -c 1 -c 2 -q 2 m
Multi-process CUDA/CUDA stream with three consumers, one in the same
process as the producer, and the other two in separate processes.
To simulate the case with a less trusted consumer, one of the consumer
processes is set with lower priority. A limiter block is used to restrict
this consumer to hold at most one packet. The total number of packets is
increased to five.
Linux example:
./nvscistream_event_sample -m 3 -f 5 -p -c 0 -l 2 1 &
./nvscistream_event_sample -c 1 &
nice -n 19 ./nvscistream_event_sample -c 2 &
# Makes the third process as nice as possible.
QNX example:
./nvscistream_event_sample -m 3 -f 5 -p -c 0 -l 2 1 &
./nvscistream_event_sample -c 1 &
nice -n 1 ./nvscistream_event_sample -c 2 &
# Reduces the priority level of the third process by 1.
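To confirm that the priority adjustment above took effect, a small helper like
the one below can be used. This is a sketch for Linux (procps `ps`); QNX's `ps`
options differ, so verify on your platform:

```shell
# Sketch: query the nice value of a running process (Linux procps ps).
# Pass the PID of the consumer process launched above.
check_nice() {
  ps -o ni= -p "$1" | tr -d ' '
}

# Example: print the nice value of the current shell.
check_nice $$
```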
Multi-process CUDA/CUDA stream with two consumers, one in the same
process as the producer and the other in a separate process. Both
processes enable the endpoint information option:
./nvscistream_event_sample -m 2 -p -c 0 -i &
./nvscistream_event_sample -c 1 -i
Multi-process CUDA/CUDA stream with extra validation steps for an ASIL-D process
(not supported on x86 or Jetson Linux devices):
./nvscistream_event_sample -u 3 -p &
./nvscistream_event_sample -u 3 -c 0
Multi-process CUDA/CUDA stream using an external event service to handle internal
I/O messages across the process boundary:
./nvscistream_event_sample -p -E &
./nvscistream_event_sample -c 0 -E
Multi-process CUDA/CUDA stream with one consumer on another SoC.
The consumer has a FIFO queue attached to the C2C IpcSrc block and
a three-packet pool attached to the C2C IpcDst block. It uses the IPC channel
nvscic2c_pcie_s0_c5_1 <-> nvscic2c_pcie_s0_c6_1 for C2C communication.
./nvscistream_event_sample -P 0 nvscic2c_pcie_s0_c5_1 -Q 0 f
# Run the below command on the other OS running on the peer SoC.
./nvscistream_event_sample -C 0 nvscic2c_pcie_s0_c6_1 -F 0 3
Multi-process CUDA/CUDA stream with four consumers, one in the same
process as the producer, one in another process but in the same OS as the
producer, and two in another process on another OS running on a peer SoC.
The third and fourth consumers have a mailbox queue attached to the C2C
IpcSrc block, and a five-packet pool attached to the C2C IpcDst block.
The third consumer uses nvscic2c_pcie_s0_c5_1 <-> nvscic2c_pcie_s0_c6_1 for
C2C communication. The fourth consumer uses nvscic2c_pcie_s0_c5_2 <->
nvscic2c_pcie_s0_c6_2 for C2C communication.
./nvscistream_event_sample -m 4 -c 0 -q 0 m -Q 2 m -Q 3 m -P 2 nvscic2c_pcie_s0_c5_1 -P 3 nvscic2c_pcie_s0_c5_2 &
./nvscistream_event_sample -c 1 -q 1 m
# Run the below command on the other OS running on the peer SoC.
./nvscistream_event_sample -C 2 nvscic2c_pcie_s0_c6_1 -q 2 f -F 2 5 -C 3 nvscic2c_pcie_s0_c6_2 -q 3 m -F 3 5
# Example commands for the inter-process late-attach use case
Multi-process CUDA/CUDA stream with one early consumer and one late-attached consumer.
The producer and early consumer processes are configured to stream 100000 frames, whereas
the late-attached consumer process is configured to receive 10000 frames.
# Run the below commands to launch producer and early consumer processes.
./nvscistream_event_sample -m 2 -r 1 -p &
./nvscistream_event_sample -c 0 -k 0 100000 &
# Run the below command after some delay to launch the late-attached consumer process.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -c 1 -k 1 10000 &
Multi-process CUDA/CUDA stream with one early consumer and two late-attached consumers.
The producer and early consumer processes are configured to stream 100000 frames, whereas
late-attached consumer process one is configured to receive 10000 frames and
late-attached consumer process two is configured to receive 50000 frames.
# Run the below commands to launch producer and early consumer processes.
./nvscistream_event_sample -m 3 -r 2 -p &
./nvscistream_event_sample -c 0 -k 0 100000 &
# Run the below command after some delay to launch the late-attached consumer process one.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -c 1 -k 1 10000 &
# Run the below command after some delay to launch the late-attached consumer process two.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -c 2 -k 2 50000 &
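The launch sequence above can be wrapped in a small script. The sketch below
uses the binary name and flags from this README; the `APP` and `DELAY`
variables are assumptions introduced here for convenience, so override them
for your environment:

```shell
#!/bin/sh
# Sketch of a launcher for the one-early/two-late-attached-consumer scenario.
# APP and DELAY are assumptions; override them for your setup.
APP=${APP:-./nvscistream_event_sample}
DELAY=${DELAY:-1}

launch_late_attach() {
  "$APP" -m 3 -r 2 -p &            # producer process, expecting 2 late consumers
  "$APP" -c 0 -k 0 100000 &        # early consumer
  sleep "$DELAY"                   # let the stream reach the streaming phase
  "$APP" -L -c 1 -k 1 10000 &      # late-attached consumer one
  sleep "$DELAY"
  "$APP" -L -c 2 -k 2 50000 &      # late-attached consumer two
  wait                             # block until all launched processes exit
}
```

Call `launch_late_attach` after sourcing the script; adjust the frame counts to
match your scenario.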
# Example commands for the inter-process re-attach use case
Multi-process CUDA/CUDA stream with one early consumer and two late-attached consumers.
The producer and early consumer processes are configured to stream 100000 frames, whereas
late-attached consumer process one is configured to receive 10000 frames and
late-attached consumer process two is configured to receive 50000 frames.
Once late-attached consumer process one completes streaming, re-attach it to receive
5000 frames.
# Run the below commands to launch producer and early consumer processes.
./nvscistream_event_sample -m 3 -r 2 -p &
./nvscistream_event_sample -c 0 -k 0 100000 &
# Run the below command after some delay to launch the late-attached consumer process one.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -c 1 -k 1 10000 &
# Run the below command after some delay to launch the late-attached consumer process two.
sleep 1;
./nvscistream_event_sample -L -c 2 -k 2 50000 &
# After late-attached consumer process one completes, re-attach it.
./nvscistream_event_sample -L -c 1 -k 1 5000 &
Limitations with C2C late-/re-attach:
1. This sample app does not support a configuration in which an IPC consumer is the
only early consumer and all remaining consumers are C2C late-attached, because the
logic to set static attributes for late-attach is not implemented.
2. A C2C consumer can act as an IPC consumer during late-/re-attach, but an IPC
consumer cannot be converted into a C2C consumer during late-/re-attach.
# Example commands for the inter-chip late-attach use case
Multi-process CUDA/CUDA stream with one early C2C consumer and one C2C late-attached
consumer. The producer and early C2C consumer processes are configured to stream
100000 frames, whereas the late-attached C2C consumer process is configured to
receive 10000 frames.
The early consumer uses nvscic2c_pcie_s0_c5_1 <-> nvscic2c_pcie_s0_c6_1 for
C2C communication, and the late-attached consumer uses nvscic2c_pcie_s0_c5_2 <->
nvscic2c_pcie_s0_c6_2.
# Run the below command to launch the producer on SoC1.
./nvscistream_event_sample -m 2 -r 1 -P 0 nvscic2c_pcie_s0_c5_1 -P 1 nvscic2c_pcie_s0_c5_2 &
# Run the below command to launch the early consumer process on SoC2.
./nvscistream_event_sample -C 0 nvscic2c_pcie_s0_c6_1 -k 0 100000 &
# Run the below command after some delay to launch the late-attached consumer process on SoC2.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -C 1 nvscic2c_pcie_s0_c6_2 -k 1 10000 &
Multi-process CUDA/CUDA stream with one early C2C consumer and two C2C late-attached
consumers. The producer and early C2C consumer processes are configured to stream
100000 frames, whereas late-attached C2C consumer process one is configured to
receive 10000 frames and late-attached C2C consumer process two is configured to
receive 10000 frames.
The early consumer uses nvscic2c_pcie_s0_c5_1 <-> nvscic2c_pcie_s0_c6_1 for
C2C communication, late-attached consumer one uses nvscic2c_pcie_s0_c5_2 <->
nvscic2c_pcie_s0_c6_2, and late-attached consumer two uses
nvscic2c_pcie_s0_c5_3 <-> nvscic2c_pcie_s0_c6_3.
# Run the below command to launch the producer on SoC1.
./nvscistream_event_sample -m 3 -r 2 -P 0 nvscic2c_pcie_s0_c5_1 -P 1 nvscic2c_pcie_s0_c5_2 -P 2 nvscic2c_pcie_s0_c5_3 &
# Run the below command to launch the early consumer process on SoC2.
./nvscistream_event_sample -C 0 nvscic2c_pcie_s0_c6_1 -k 0 100000 &
# Run the below command after some delay to launch late-attached consumer process one.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -C 1 nvscic2c_pcie_s0_c6_2 -k 1 10000 &
# Run the below command after some delay to launch late-attached consumer process two.
sleep 1;
./nvscistream_event_sample -L -C 2 nvscic2c_pcie_s0_c6_3 -k 2 10000 &
# Example commands for the inter-chip/process re-attach use case
Multi-process CUDA/CUDA stream with one early consumer and two late-attached consumers.
The producer and early consumer processes are configured to stream 100000 frames, whereas
late-attached consumer process one is configured to receive 10000 frames and
late-attached consumer process two is configured to receive 50000 frames.
Once late-attached consumer process one completes streaming, re-attach it to receive
5000 frames.
Once late-attached consumer process two completes streaming, re-attach it as an IPC
consumer to receive 5000 frames.
# Run the below command to launch the producer on SoC1.
./nvscistream_event_sample -m 3 -r 2 -P 0 nvscic2c_pcie_s0_c5_1 -P 1 nvscic2c_pcie_s0_c5_2 -P 2 nvscic2c_pcie_s0_c5_3 &
# Run the below command to launch the early consumer process on SoC2.
./nvscistream_event_sample -C 0 nvscic2c_pcie_s0_c6_1 -k 0 100000 &
# Run the below command after some delay to launch late-attached consumer process one.
sleep 1; # This 1s delay lets the producer and consumer enter the streaming phase.
./nvscistream_event_sample -L -C 1 nvscic2c_pcie_s0_c6_2 -k 1 10000 &
# Run the below command after some delay to launch late-attached consumer process two.
sleep 1;
./nvscistream_event_sample -L -C 2 nvscic2c_pcie_s0_c6_3 -k 2 50000 &
# Once late-attached consumer process one completes streaming,
# re-attach it for receiving 5000 frames.
./nvscistream_event_sample -L -C 1 nvscic2c_pcie_s0_c6_2 -k 1 5000 &
# Once late-attached consumer process two completes streaming,
# re-attach it as IPC consumer on SOC1 for receiving 5000 frames.
./nvscistream_event_sample -L -c 2 -k 2 5000 &