Supported Sensors

Generic Video Device / Webcam


Supported hardware versions

Hardware ID: the driver supports USB and internal webcams
Supported hardware: Video4Linux (V4L) compatible USB video devices; tested with the Microsoft LifeCam

Sensor requirements and background

A webcam is an inexpensive way to visualize your surroundings.

Sensor notes

  • The PolySync Core video device interface supports devices that are Video4Linux (V4L) compatible.

Hardware requirements

  • ECU with PolySync Core installed
  • A USB port on your ECU

Configuring the sensor

When the video device is connected to the Linux machine, the Video4Linux (V4L) drivers enumerate the device with a name similar to /dev/video0.

When the device is initialized during the boot process or connected to the host machine, you will see it in /var/log/syslog. Alternatively, you can run dmesg in a terminal after connecting the device to your host machine.

You can also see all video devices by running the command:

$ ls -l /dev/video*

Configuring the ECU

Refer to the camera manufacturer's documentation to see if the device requires any setup. Most V4L compatible video devices require no additional setup.

Configuring the PolySync driver

The default PolySync Configurator configuration has been shown to work out of the box on most systems.

Adding the sensor to the SDF

Using the Configurator tool, add a sensor node to the SDF.

The ‘Node Interface’ name is video-device.

Configuration parameters

The values in the default configuration have been shown to work for most cameras. You will need to set the Video Device 0 Name in the SDF Configurator's IO Configuration section to your video device.

SDF settings
  • Node Configuration
    • Application settings, such as the file path for the device driver
  • Sensor Configuration
    • Physical settings of the device, such as XYZ placement on the vehicle
    • Published and logged: Pixel format, image width/height, frames per second
    • Optional extended H264 encoder parameters
  • IO Configuration
    • Video device name
    • IO settings for the data, such as video frames per second
    • Source: image width/height, frames per second
    • Source device configuration parameters
Shared memory configuration

This is an optional step. Shared memory queues provide drastically faster read/write speeds than the global shared PolySync bus that exists on the Ethernet wire.

Shared memory segments exist to allow two or more host-local nodes to access the data. Commonly the data is very large, such as uncompressed image data that would normally saturate a Gigabit Ethernet bus.

  1. Configure the PolySync driver to use the shared memory queue by providing a unique shared memory queue key
    • The driver manages the creation and cleanup of the memory segment
  2. Configure your custom application to read data by referencing the unique shared memory queue key
  3. Create the shared memory segment
  4. Start the PolySync driver and custom application

If the shared memory buffer is not set up when the dynamic driver starts, the driver detects this and writes an error to polysync.log indicating the buffer size that needs to be configured.


The driver optionally allows setting the device frame rate using a time-per-frame fraction. This is useful for example when working with devices that only support 29.97 FPS (NTSC).

The value is expressed as a fraction with a numerator value (seconds) and a denominator value (frames).

Some common values are:

  • (1/30) == 30 FPS
  • (1001/30000) == 29.97 FPS
  • (1/20) == 20 FPS
  • (2/15) == 7.5 FPS

When using the time-per-frame fraction in conjunction with an encoder, set the published frame rate get/set parameter to the frame rate rounded up to the nearest whole number. For example, when using (1001/30000) for 29.97 FPS you would set the published frame rate to 30.

The published frame rate is not used when publishing the image data in its native pixel format, as received from the device.

Below is an example of the SDF configuration used for 7.5 FPS.

Example SDF configuration

Validating the sensor is properly configured

If you’re approaching a new PolySync system or need to validate an existing configuration you can use the following checklist to ensure the sensor is properly configured.

Set up checklist

If the sensor passes these checks then the PolySync dynamic driver will likely be able to communicate with the sensor.

  1. Verify your video device is enumerated in /dev/
    • $ ls -l /dev/video*
  2. Ensure you can view video with a system application.
    • The Cheese Webcam Booth application is installed by default on an Ubuntu desktop system.

Starting the PolySync driver

The configuration set in the Configurator is loaded from the SDF when the dynamic driver starts. The driver connects to the sensor, requests the data, and waits for confirmation that the sensor configuration is valid.

When the dynamic driver receives the first packet of data, it begins processing and abstracting the data from the OEM data structure into a high-level, hardware-agnostic message type. In this case the camera's data is placed in a ps_image_data_msg.

  1. Power on the ECU
  2. Plug the camera into a USB port
  3. Optionally follow the set up checklist
  4. Start your runtime
  5. Start the dynamic driver process

Starting the node manually on the command line

To start a dynamic driver node on the command line, the node must first be defined in the SDF using the Configurator application.

Each node defined in the Configurator has a unique node ID which points to the node's configuration. This article explains how to find the node ID.

Command line flags and usage

Once the node ID is known (substitute for X), the dynamic driver node for the supported sensor can be started with the base command:

$ polysync-core-dynamic-driver -n X

Each sensor supports an array of command line arguments. To see a full list of command line arguments, pass the -h help flag:

$ polysync-core-dynamic-driver -n X -h  |  less

There’s a lot of output, so we recommend piping it to less, but it’s not required.

Flag Required Description Arguments
-e No Export a JSON support string describing the interface, used by the SDF configuration tool N/A
-g No Get all available video devices on the host N/A
-h No Show the help message N/A
-i <> No Use provided PAL interface file instead of what is stored in the SDF Path to the dynamic driver interface PAL shared object library
-n <N> Yes SDF node configuration identifier for the node SDF node ID from the Configurator, [0-65536]
-o No Reserved N/A
-O No Check the node SDF configuration for required updates and exit option (returns exit status zero if no change required) N/A
-p <file.plog> No Use provided logfile in Record and Replay operations instead of the default File path to a PolySync plog logfile
-s <psync.sdf> No Use provided SDF instead of the default File path to an SDF file
-t No Perform a validation test on the Video Device interface N/A
-u No Allow updates to the SDF configuration during the normal runtime if needed (does not exit) N/A
-U No Update the node SDF configuration and exit N/A
-w No Disable the hardware interface(s), allowing the node to run without hardware connected - also known as replay mode N/A
DTC codes and common fixes
DTC value DTC name Fault description Notes
304 DTC_NOINTERFACE Interface not available Activated when the sensor is not reachable at the USB device set in the Configurator; activated when the sensor becomes unreachable during runtime

Accessing sensor data

When the dynamic driver node is operating in an OK state, data is being published to the global PolySync bus, and any node can subscribe to the high-level message type(s) output by the dynamic driver node.

There are several tools that PolySync provides to quickly validate that data exists on the bus.

Access sensor data with PolySync nodes that subscribe to the sensor’s output message types.

Input / output message types

The Generic Video Device dynamic driver node outputs the following message type to the bus.

Message API Docs Notes
Publishes pre-defined ‘ps_image_data_msg’ Sensor Data Model

By default the ps_image_data_msg is published to the bus when you create a new Generic Video Device node. You can enable and disable the publishing of specific message types in the Configurator.

Image data message fields

Data type Name Description Message field populated by this sensor
ps_msg_header header PolySync message header. Yes
ps_sensor_descriptor sensor_descriptor Sensor descriptor. Yes
ps_timestamp timestamp Image timestamp. Yes
ps_native_timestamp native_timestamp Native timestamp for the image. Provided by some devices. Check ps_native_timestamp.format for meaning. Format value PSYNC_NATIVE_TIMESTAMP_FORMAT_INVALID means not available. Yes
ps_pixel_format_kind pixel_format Image data format. Yes
ps_identifier frame_id Image counter. Value PSYNC_IDENTIFIER_INVALID means not available. Yes
DDS_unsigned_short width Image width. [pixels] Yes
DDS_unsigned_short height Image height. [pixels] Yes
DDS_sequence_char data_buffer Image data buffer. Yes

Filtering incoming data for this sensor

An application that subscribes to a given message type is able to see data from more than one sensor or source.

Applications can filter for specific sensors and data sources in the message callback in C applications, or the messageEvent in C++ applications.

Filter incoming messages for this sensor with ps_sensor_kind value 180.

You can find all sensor descriptor values in this article.

Resources and configuration tools

This section has supporting resources and tools that are referenced throughout the article.

Visualize data outside of PolySync Core Studio