Getting Started

Welcome

We built the PolySync Core platform after working with the world’s most advanced autonomous driving teams and realizing the need for faster application development. We wanted autonomy system development to feel more like mobile app development: fast backend assembly that leaves you time to focus on your algorithms. This article guides you through the core concepts and components of the platform.

The Getting Started workflow is written so that engineers can install, configure, and become efficient with PolySync Core in a short amount of time. It describes the concepts and components of the PolySync Core platform that are essential for every engineer using the system.

Getting Started with Core

Overview of Core

1. The basics

PolySync Core is a platform, or autonomy operating system, consisting of a middleware and set of software API services that are common across autonomous driving applications.

By separating the application layer from the underlying operating system (OS) and hardware, PolySync Core makes autonomous vehicle applications much more consistent and portable to build.

Middleware

The diagram above introduces the various applications that can be built on the PolySync Core middleware layer. These applications perform important autonomous driving functions, like perception and control, with a hardware-agnostic approach.

Applications access hardware and sensor data through PolySync’s software APIs. Applications and logfiles are portable across any PolySync Core ECU, even across different architectures, thanks to the abstraction provided by the middleware layer.

Let’s take a closer look at the PolySync Core layer to understand what type of functionality it provides.

1.1 A deeper look

Middleware

As you can see from the above diagram, at the lowest level there are three types of inter-process communication (IPC):

IPC API Description
Publisher/Subscriber A single node can broadcast messages to many listeners simultaneously, and a single node can listen to one or many broadcasters simultaneously
Get/Set Query and change a node’s parameters at runtime
Shared memory Save bandwidth on the PolySync Core bus by keeping large data (for example, uncompressed image data) off the wire using shared memory segments
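
As a conceptual sketch of the publisher/subscriber model (an illustration only, not the PolySync Core API), a bus delivers each published message to every registered listener:

```python
# Conceptual publish/subscribe bus (illustration only, not the
# PolySync Core API): many listeners per type, many publishers per bus.
from collections import defaultdict

class Bus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, msg_type, callback):
        # Register a callback to run for every message of this type.
        self._subscribers[msg_type].append(callback)

    def publish(self, msg_type, payload):
        # Broadcast the payload to all subscribers of this type.
        for callback in self._subscribers[msg_type]:
            callback(payload)

bus = Bus()
received = []
bus.subscribe("ps_lidar_points_msg", received.append)
bus.publish("ps_lidar_points_msg", {"points": 3})
print(received)  # [{'points': 3}]
```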

In addition to IPC, PolySync Core provides a variety of helpful services to access and process the data, including:

API Description
Hardware abstraction Sensors and their recorded logfiles are portable across all supported architectures
Host abstraction Applications written using the APIs are portable across all supported architectures
Time services Unified time-domain across all hosts and nodes
Coordinate frame transformation By default, all incoming sensor data is translated to the vehicle-centered coordinate frame for easy reference
CAN, Serial, Socket, and Video Device APIs to talk to connected hardware over the native interface, also used by the PolySync Core dynamic driver to talk to sensors
Messaging Predefined high-level message types that enable hardware agnostic access to sensor data
Record and replay Ability to record raw sensor data, and be able to recreate the environment by replaying PolySync Core logfile sessions
Diagnostics Fully traceable diagnostics for each node through PolySync Core Diagnostic Trouble Codes (DTCs)
Analytics Host and node level analytics, for example messages published per second
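
To make the coordinate frame transformation service concrete, here is a minimal sketch of the translation-only case (a hypothetical illustration with a made-up `sensor_to_vehicle` helper, not the Core transform API, which also handles rotation):

```python
# Hypothetical illustration of a translation-only coordinate transform;
# PolySync Core performs the full transform (including rotation) for you.
def sensor_to_vehicle(point, mount_offset):
    # point and mount_offset are (x, y, z) tuples in meters.
    return tuple(p + m for p, m in zip(point, mount_offset))

# A LiDAR return 10 m ahead of a sensor mounted 1.5 m forward of the
# vehicle origin is 11.5 m ahead in the vehicle-centered frame.
print(sensor_to_vehicle((10.0, 0.0, 0.0), (1.5, 0.0, 0.2)))  # (11.5, 0.0, 0.2)
```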

Now it’s time to dive into some of the finer details of PolySync Core communication.

2. Bus and runtime communication

The PolySync Core bus and runtime work together to keep your sensors and software talking.

In order to start this process, a PolySync Core runtime must be established within the boundaries of the System Design File (SDF), which makes for a convenient way to reference the full software and hardware system. The SDF represents the static runtime configuration, defining each host, every node, and all sensors connected to the autonomous vehicle system.

A runtime does not exist until one or more nodes are executing as valid domain participants, which is accomplished by calling the psync_init function. All nodes within the runtime communicate on a shared PolySync Core bus.

The PolySync Core bus exists on the Ethernet wire. On a distributed system, all hosts would be connected to the same Gigabit Ethernet switch.

System Architecture

In the diagram, the hardware-dependent nodes (the dynamic driver interfaces) have been logically separated from the pure-software application nodes. All nodes use the PolySync Core C and C++ APIs to access the PolySync Core bus.

Information and sensor data are passed around in predefined PolySync Core messages, which are published to and subscribed from the bus using the various APIs.

PolySync Generate is used to generate a node that subscribes to the desired message(s). It’s also used to generate dynamic driver interfaces for CAN sensors that have a valid DBC file.

3. Get/set and parameters

All of the provided dynamic driver interfaces ship with a complete parameter set, including parameters like mounting position, operating states, and which filters are active. This gives you the ability to manipulate features at runtime.

Any node on the PolySync Core bus can use the get/set framework to easily implement runtime configurability. This is done on a node-by-node basis by publishing and subscribing to the ps_parameters_msg, and by maintaining a real-time database of the node’s supported PolySync Core parameters.
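
Conceptually, a node’s get/set handling amounts to maintaining a parameter table and answering get and set requests. A minimal sketch (illustrative names only, not the ps_parameters_msg wire format):

```python
# Conceptual sketch of per-node get/set parameter handling
# (illustration only, not the actual ps_parameters_msg format).
class Node:
    def __init__(self, parameters):
        # The node's real-time database of supported parameters.
        self._parameters = dict(parameters)

    def handle_get(self, name):
        # Answer a "get" request; None means the parameter is unsupported.
        return self._parameters.get(name)

    def handle_set(self, name, value):
        # Answer a "set" request; only supported parameters can change.
        if name in self._parameters:
            self._parameters[name] = value
            return True
        return False

node = Node({"mount_position_x": 1.5, "filter_enabled": True})
node.handle_set("filter_enabled", False)
print(node.handle_get("filter_enabled"))  # False
```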

PolySync Core also includes some useful tools to help you create, build, debug, and manage runtime parameters.

Application Description
SDF Configurator A utility program that defines the runtime system by defining hosts, nodes, and node parameters
Studio system hierarchy plugin Real-time get/set interface for active runtime nodes
Data model generator A utility to generate custom message types and new parameters that can be used by the publisher/subscriber network

4. Abstracted data model

PolySync’s main form of communication is a set of common messages that are broadcast to and received from the PolySync Core bus through a publisher/subscriber model. PolySync Core ships with a core data model as well as other data model modules. These cover the typical types of data seen on autonomous vehicles, as well as our own messages that enable some of the API functionality.

Big Picture

The data model provides an abstraction layer that separates data consumers (application nodes) from data producers (dynamic drivers). The data within each message is in the vehicle-centered reference coordinate frame and is timestamped with UTC time by default. This makes it easy to swap out hardware and software without breaking the system or the developer’s code.

4.1 Example of sensor abstraction

Architecturally, the dynamic driver for the ibeo LUX looks like this:

Lux Node

The dynamic driver abstracts the “native bus” of the sensor (Ethernet and CAN) into a set of generalized data types (messages) on the PolySync Core bus. Once these messages are published, they are available to any other participant on the bus, alongside the data generated by other sensors and processes.

For instance, a ps_lidar_points_msg contains the LiDAR points generated by one of the LiDAR sensors on the bus. Each LiDAR point contains generalized data:

Data field Description
Sensor description String name for the source sensor
Native timestamp Some sensors provide their own timestamps
Echo number Used for LiDAR that supports multiple returns (e.g., through transparent objects)
Echo pulse width Duration of the energy return
Layer Scanning LiDAR usually returns multiple “layers” of data in horizontal lines
Positions The position of the point, returned in the vehicle-centered reference frame

There is also a header that contains:

Data field Description
Message type A string name for the message type
Timestamp Usually a UTC timestamp that comes from GPS
Source GUID A unique identifier for the node, generated by PolySync Core at runtime
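
Putting the two tables together, the generalized point and its header could be modeled roughly like this (field names are illustrative; the actual PolySync Core data-model definitions may differ in names and types):

```python
from dataclasses import dataclass

# Illustrative models of the fields listed in the tables above;
# the real data-model definitions may differ in names and types.
@dataclass
class LidarPoint:
    echo_number: int         # return index, for multi-return LiDAR
    echo_pulse_width: float  # duration of the energy return
    layer: int               # horizontal scan layer
    position: tuple          # (x, y, z) in the vehicle-centered frame

@dataclass
class MessageHeader:
    message_type: str  # string name of the message type
    timestamp: int     # UTC timestamp, usually GPS-derived
    source_guid: int   # runtime-unique node identifier

point = LidarPoint(echo_number=0, echo_pulse_width=1.2, layer=3,
                   position=(10.0, -0.5, 0.2))
print(point.layer)  # 3
```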

5. PolySync Core Components

Listed below are the core tools provided with PolySync Core release packages.

Application Executable name Description
API Docs N/A Set of APIs to enable the development of Level 4/5 autonomous vehicle applications
Dynamic driver polysync-core-dynamic-driver Communicates with the sensor and is responsible for processing the raw sensor data and abstracting it from the perception and actuation hardware
Manager polysync-core-manager Manages the runtime by starting and stopping nodes based on command messages received over the bus
SDF Configurator polysync-core-sdf-configurator A utility program that defines the runtime system by defining hosts, nodes, and node parameters
Studio polysync-core-studio Plugin based visualization node that offers real-time access to data on the PolySync Core bus
Data model generator pdm-gen A utility to generate custom message types and new parameters that can be used by the publisher/subscriber network

Conclusion

Congratulations! You now have a background in all of the core concepts and components of the PolySync Core platform.

Install Core

  1. Determine your release: lsb_release -a

  2. Right-click the link that is associated with your release and copy it to your clipboard.

    Core Packages

    Installs Core runtime with Studio & Configurator included (recommended for workstation installs).

    Name SOC Architecture Link
    Ubuntu 14.04 LTS (trusty) generic amd64 link
    Ubuntu 16.04 LTS (xenial) generic amd64 link

    Core Headless Packages

    Installs Core runtime without any GUI applications (recommended for on-car and embedded applications).

    Name SOC Architecture Link
    Ubuntu 14.04 LTS (trusty) generic amd64 link
    Yocto (pyro) rpi2 armhf link
    Debian 8 (jessie) rpi2 armhf link
    Ubuntu 16.04 LTS (xenial) generic amd64 link
    Ubuntu 16.04 LTS (xenial) generic arm64 link
    Ubuntu 16.04 LTS (xenial) drivepx2 arm64 link
    Ubuntu 16.04 LTS (xenial) generic armhf link
    Ubuntu 16.04 LTS (xenial) nanopim3 armhf link

  3. Download the package: wget <paste link here>

  4. Install the downloaded package: sudo dpkg -i <downloaded package filename>

  5. Update your package index: sudo apt-get update

  6. Install Core: sudo apt-get install polysync-core

  7. Congratulations, Core is installed! Open a new terminal window to begin using the tools, or use the one you’re in now by running: source /usr/local/polysync/utils/set_env.sh

License Core

Getting a License

If you are installing for the first time, you will need to request a license from PolySync Support. Be sure to include your Host ID in the request, which you can find by running:

polysync-license-tool -a

This should output something like:

*****************************
*** PolySync License Tool ***
API Build Version: 2.0.10-1487273235

Host ID: '(Ethernet) <your host id>'

[...]

Send that ID along with your name and organization to help@polysync.io to request your license. We’ll get back to you as soon as we can.

Activating Your License

Once you have been issued a license, you can activate it by running:

polysync-license-tool -a <your license key>

If the key is valid, you will see confirmation on the License status line.

*****************************
*** PolySync License Tool ***
API Build Version: 2.0.10-1487273235

[...]

License status: valid
*****************************
Configuring A Runtime

Single Host Setup

PolySync Core IP address

When using a static network, the DDS discovery feature searches for other hosts through the defined Ethernet interface. If other PolySync Core hosts exist on the network, the PolySync Core bus will be shared, which means other hosts’ and nodes’ data will be visible, and vice versa. The proper way to connect two or more ECUs is through a distributed host setup.

Set Address

For a single host system with a static IP address, the following command can be used:

$ polysync-core-manager -s xxx.xxx.xxx.xxx

This updates the low-level network configuration file, telling PolySync Core which IP address is associated with the PolySync Core bus.

Feel free to use the default IP address: 127.0.0.1, or select a static IP address from a private network.

For single host systems it is highly recommended to use the loopback interface with the IP address 127.0.0.1.

$ polysync-core-manager -s 127.0.0.1

Multicast

For PolySync Core nodes to discover each other on the loopback interface, multicast must be enabled:

$ sudo ifconfig lo multicast

This is set persistently by the polysync_post_install.sh script during installation.

SDF IP address

Lastly, open the SDF Configurator to ensure that the SDF configuration matches the system network configuration. If a mismatch is detected in the SDF, the Host Wizard will open.

$ polysync-core-sdf-configurator
Hooking Up Sensors

Connect USB Webcam Tutorial

This tutorial will introduce you to the process of connecting a webcam sensor.

When working with sensors, it is important to remember that every sensor is slightly different: some require a sensor identifier, others have additional Linux configuration requirements, and so on. The requirements for specific sensors can be found under Supported Sensors in the help center.

1. Define your node using the PolySync Core SDF Configurator

For detailed steps on setting up the video device, see the Generic Video Device / Webcam sensor article.

The first step towards connecting a sensor is to define your node using the PolySync Core SDF Configurator.

2. Define the node

Use the Ubuntu icon, or enter the following command into a terminal window, to start the SDF Configurator:

$ polysync-core-sdf-configurator

In the bottom left corner of the SDF Configurator window click “Add Sensor Node.” A dialog box will appear with a dropdown list for all of the currently supported sensor types. Choose “video-device” from this list.

After pressing “OK,” all of the parameters for the sensor will be visible. If using multiple cameras, be sure to name the sensor to avoid confusion.

Connecting Cam 1

The above Configurator allows you to manage three main categories of parameters, which help you with everything from physical device settings to file paths:

  • Node Configuration
    • Displays the Node ID of the sensor
    • Application settings, such as the file path for the device driver
  • Sensor Configuration
    • Physical settings of the device, such as XYZ placement on the vehicle
    • Published and logged: Pixel format, image width/height, frames per second
    • Optional extended H264 encoder parameters
  • IO Configuration
    • IO settings for the data, such as video frames per second
    • Source: Pixel format, image width/height, frames per second
    • Source device configuration parameters

Within the SDF Configurator all changes are saved after pressing “Enter.”

Take note of the node’s Node ID: located in the upper right corner of the “Node Configuration” section. This will be used later to start the node.

3. Enumerate and assess the video device name

Video device name

When the video device is connected to the Linux machine the Video4Linux drivers will enumerate the device name, similar to /dev/video0.

When the device is initialized during the boot process, or connected to the host machine, you will see it in /var/log/syslog; you can also run dmesg in a terminal after connecting the device to your host machine.

Once the device is connected to the host machine, you can add it in the SDF Configurator IO Configuration panel under the “Video Device 0 Name” field.

You can also list all video devices by running:

$ ls -l /dev/video*

4. Starting PolySync Core manager

Next, you’re going to start the PolySync Core manager daemon, which signals the runtime to start the wall clock:

$ sudo service polysync-core-manager start

Once that has completed, you can start the node manually, using the -n flag to reference the “Node ID” from the Node Configuration section of the SDF Configurator:

$ polysync-core-dynamic-driver -n 1

In an “OK” state, this would be the last line the node prints:

2018-04-15 10:57:47.09s DEBUG  [video-device-1688894064433460] - transition to state: OK - timestamp: 1473994633468834

5. Visualize data using Studio

You are now able to start PolySync Core Studio to visualize the image data. Input the following:

$ polysync-core-studio

Open the video plugin from the plugin loader on the right side of Studio. On the left-hand side of the window you can see the plugin’s control panel, where you can select the video device name and view the live input from the node.

Connecting Cam 2

6. Visualize data outside of PolySync Core Studio

The image data viewer example binds to the first image data publisher it finds.

$ cd image_data_viewer
$ make
$ ./bin/polysync-image-data-viewer

7. Access image data from the Core bus

The video-device publishes ps_image_data_msg messages.

See the Access Data from the Bus article for more detail.

To quickly access the ps_image_data_msg run the following commands in a terminal:

$ polysync-generate node -s ps_image_data_msg
$ cd MyNode
$ mkdir build && cd build
$ cmake .. && make
$ ./MyNode

If Studio is still in Hardware mode, you will see the contents of the ImageDataMessage.

Conclusion

Congratulations! You have now successfully connected a webcam sensor.

Record Logfiles

PolySync Core is able to record incoming sensor data, and allows you to replay the data on any other system. Using Studio or command line tools, you can command nodes to start and stop the recording of incoming data.

Both methods use the same underlying logic to record data: the application populates a ps_rnr_msg with the desired state and publishes the message to the bus, which active nodes receive and respond to.
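
That command-message pattern can be sketched as follows (a conceptual illustration with made-up names, not the real ps_rnr_msg definition):

```python
# Conceptual sketch of record/replay commanding (not the real
# ps_rnr_msg): one application publishes a command message, and
# every active node on the bus reacts to it independently.
class RnrNode:
    def __init__(self):
        self.recording = False

    def on_rnr_msg(self, msg):
        # React to the commanded state carried in the message.
        self.recording = (msg["command"] == "record")

nodes = [RnrNode(), RnrNode()]
command = {"command": "record", "session_id": 1000}
for node in nodes:  # the bus delivers the message to every node
    node.on_rnr_msg(command)
print(all(n.recording for n in nodes))  # True
```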

1. Recording data with Studio

1.1 Starting the runtime

Before sensor data can be recorded, runtime nodes must be running in the hardware state.

To command nodes into the hardware state, press Studio’s Icon button.

For more detailed instructions, see the Runtime section for an article detailing how to start sensor and hardware nodes in hardware mode.

1.2 Start recording data

To begin recording data, press the Icon button.

The record module will name the logfile session based on the UTC timestamp of the moment the record button is pressed.

If you cannot see the Icon button, the runtime nodes are not operating in the hardware state, and you should check Studio’s system state, which is launched from the icon.

Studio landing page - system state icon

1.3 Command active nodes to stop recording data

When you’ve finished recording your data or scenario, press the Icon button.

2. Change the Record and Replay Directory

Each host ECU in a PolySync Core runtime has a unified directory where sensor data is written to.

This is defined in the system SDF using the SDF Configurator.

$ polysync-core-sdf-configurator

Update the Record and Replay Directory on each host to change the location where logfiles are written to. By default logfiles are located in $PSYNC_USER_HOME/rnr_logs/.

Each node in the runtime records data asynchronously to a PolySync Core plog logfile. A collection of plogs is a logfile session.

3. Nodes that support record and replay

Custom nodes don’t natively record and replay data; they must use the PolySync Core APIs to manage logfile data, as the PolySync-provided nodes and the rnr_node C example application do.

PolySync-provided nodes implement file reading and writing routines using the PolySync Core Logfile API. The nodes also listen for the ps_rnr_msg, which the Record & Replay API uses to instruct nodes which files to open, and when to start and stop recording and replaying data.

Applications like Studio’s replay module implement the Record & Replay API to instruct nodes which directory to record data to, and at what time to start writing data to the disk (now, or in the near future).

4. Next steps

The logfile session is now available in Studio’s logfile management area which is opened with the icon.

The logfile session can be replayed on the local system or it can be exported from Studio, which allows you to import and replay your data on any other PolySync Core system.

Replaying Data

Replay Logfiles

PolySync Core is able to replay sensor data that’s been recorded as a logfile session.

Nodes are spawned and instructed to load logfile sessions by the Core manager and Studio’s logfile management module.

1. Start the PolySync Core manager

The Core manager is started as a service when the ECU boots. To ensure it’s running use the command:

$ sudo service polysync-core-manager start

2. Import the logfile session

Each logfile session must be imported to the system before it can be replayed.

Skip this step if the logfile session exists within Studio’s logfile manager.

3. Replay a logfile session

Start PolySync Core Studio with the application icon, or with the command:

$ polysync-core-studio

Studio will open in standby mode, signifying it is ready to replay a logfile session or enter hardware mode.

If the system is in a different state, put the system in the standby state.

  • Open the logfile manager with the icon
    Setup
  • Select one of the available logfile sessions from the logfile manager table on the left side
    • Each of the nodes listed in the logfile session is defined in the replay SDF, which is specific to this logfile session
  • Select the logfile session and drag it to the playlist pane on the right
    Studio logfile sessions
  • Replay the logfile session by double-clicking the logfile session name in the playlist pane
    Ready for Logfile Session Replay

Each node’s status is shown in the playlist dropdown menu, where you can watch nodes change state as they each load an individual plog file.

When all nodes in the selected logfile session have loaded, Studio automatically enables playback and the nodes begin publishing data to the bus.

Logfile Session 1000 Playback

The system state observer at the bottom-left changes from green to blue to indicate that a session is being replayed and messages containing sensor data are being published to the PolySync Core bus. Studio plugins allow you to visually validate the data.

4. Validating the runtime

Visually validate the data using Studio’s plugins.

Observe the system state to check for any runtime node errors. The system state module will display a banner and the icon will be red if there are observed errors.

If errors are reported, more information can be found in Studio’s console module, opened with the icon.

Debug and error messages are also visible outside of Studio in the file $PSYNC_USER_HOME/polysync.log.

5. Accessing data

Once the runtime has started, nodes are publishing data to the PolySync Core bus. This allows other application nodes to access the data by subscribing to the message types being output by dynamic driver nodes.

Determine the messages that are being published to the bus using Studio’s Trace plugin.

Generate custom applications that subscribe to these published message types using polysync-generate.

Replay Logfiles (Command Line)

To replay data, runtime nodes need to be told which logfile session to load. Sensor and hardware nodes subscribe to the ps_rnr_msg to receive commands to enter record or replay modes and know when to start reading or writing the data.

PolySync Core provides both GUI and command line tools to command runtime nodes to record and replay data.

This article shows how to use the supplied C API rnr_control example application to command nodes to replay an existing logfile session.

1. Clone the repo

The rnr_control code is in the PolySync Core C Examples GitHub repo.

$ git clone https://github.com/PolySync/PolySync-Core-C-Examples

2. Build the application

The project uses Make to build, and places the binary in the bin/ directory.

$ cd PolySync-Core-C-Examples/rnr_control
$ make
$ sudo cp ./bin/polysync-rnr-control-c /usr/local/polysync/bin

The binary is built and is named polysync-rnr-control-c.

3. Record and replay sessions directory

Each host has a record and replay (RnR) sessions directory defined in the SDF. The RnR directory can be viewed and edited in the SDF Configurator, and can also be printed by the license tool:

$ polysync-license-tool
*****************************
*** PolySync License Tool ***
API Build Version: 2.0.9-1486494674

[...]
Record and Replay Sessions Directory: '/home/$USER/.local/share/polysync/rnr_logs'
[...]

*****************************

The polysync-rnr-control-c application can only command runtime nodes to replay data contained in the RnR directory. The default directory is $PSYNC_USER_HOME/rnr_logs.

4. Replay a PolySync Core logfile

Now that the polysync-rnr-control-c application has been built, we can use it to command nodes to replay PolySync Core logfiles.

4.1 Starting runtime nodes

Use the manager to start all runtime nodes defined in the system SDF in the replay context.

$ polysync-core-manager -n -w

Nodes are now in the standby state, waiting to be told which PolySync Core logfile to load for replay.

4.2 Commanding replay

The polysync-rnr-control-c application uses the -t flag to indicate which logfile session to replay. All other flags are optional while commanding runtime nodes to replay data.

$ polysync-rnr-control-c -t 1000

This command will replay the default logfile session that ships with PolySync.

It’s important to note that the rnr_control application has no knowledge of the nodes on the PolySync Core bus. It simply packages and publishes a ps_rnr_msg that active nodes subscribe and react to.

For systems that have a couple dozen nodes, or limited compute resources, it’s nice to provide a loading grace period. Pass in the -s <opt> flag to specify a relative start time (microseconds), or the -S <opt> flag to specify an absolute UTC microsecond start time.

$ polysync-rnr-control-c -t 1000 -s 3000000   # 3 seconds

To stop replay at any time, use the application’s -q flag.

$ polysync-rnr-control-c -q

5. Access and visualize the data

Now that data is being published on the bus by dynamic driver nodes, you can use applications like Studio, the C++ Echo example, or a custom application to access the data.

Publishing & Subscribing on the bus

Publish and Subscribe Tutorial

In this tutorial, we are going to walk through how to publish a command to the PolySync Core bus and subscribe to a particular message type.

1. Become a participant on the bus

In order to begin the publish/subscribe communication process, a PolySync Core node must first become a valid participant on the bus.

You can do this by calling:

 psync_init()

Once this has occurred, the node becomes discoverable by all other nodes in the PolySync Core runtime, allowing it to publish and subscribe to any number of message types.

Nearly all LiDAR sensors provide intensity and x/y/z positions for each point in the cloud. PolySync Core defines the high-level message ps_lidar_points_msg to hold those common fields, and provides other fields, such as start and end timestamps, for the scanning LiDAR sensors that also provide that data to the Dynamic Driver interface.

The Dynamic Driver interface is written to parse the LiDAR sensor data from the native bus (the Ethernet socket, CAN channel, or serial connection), process and abstract the data into the ps_lidar_points_msg, and publish that message to the PolySync Core bus.

Any number of nodes may subscribe to the high-level message type to receive a copy of each instance that’s published to the bus.

When multiple Dynamic Driver nodes publish the same message type, subscribing nodes can filter for specific sources upon receiving the message.
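
That filtering step can be sketched like this (an illustration with hypothetical message fields and GUID values, not PolySync API code):

```python
# Illustrative source filter: keep only messages whose header's
# source GUID matches the sensor this subscriber cares about.
WANTED_GUID = 1688894064433460  # hypothetical node GUID

def on_message(msg, handler):
    # msg is assumed to carry the header's source GUID.
    if msg["source_guid"] == WANTED_GUID:
        handler(msg)

kept = []
on_message({"source_guid": WANTED_GUID, "points": 42}, kept.append)
on_message({"source_guid": 12345, "points": 7}, kept.append)
print(len(kept))  # 1
```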

2. Locate the C++ Code

You can find the C++ Publish Subscribe code as part of our public C++ examples repo here.

PolySync-Core-CPP-Examples/PublishSubscribe
├── Publish
│   ├── CMakeLists.txt
│   └── Publish.cpp
└── Subscribe
    ├── CMakeLists.txt
    └── Subscribe.cpp

2.1 Publish.cpp

#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

using namespace std;

/**
 * @brief PublisherSubscriberNode class
 *
 *
 */
class PublisherSubscriberNode : public polysync::Node
{

private:
    const string node_name = "polysync-publish-cpp";
    const string platform_motion_msg_name = "ps_platform_motion_msg";

    ps_msg_type _messageType;

public:

    PublisherSubscriberNode()
    {
        setNodeType( PSYNC_NODE_TYPE_API_USER );
        setDomainID( PSYNC_DEFAULT_DOMAIN );
        setSDFID( PSYNC_SDF_ID_INVALID );
        setFlags( PSYNC_INIT_FLAG_STDOUT_LOGGING );
        setNodeName( node_name );
    }

    ~PublisherSubscriberNode()
    {

    }

    void initStateEvent() override
    {
        _messageType = getMessageTypeByName( platform_motion_msg_name );
    }

    void releaseStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void errorStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void fatalStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void warnStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void okStateEvent() override
    {
        // Create a message
        polysync::datamodel::PlatformMotionMessage message( *this );

        // Set the message timestamp (UTC), representing when the data was
        // received or when it originated
        message.setTimestamp( polysync::getTimestamp() );

        message.setLatitude( 45.515289 );

        message.setLongitude( -122.654355 );

        // Set the message header timestamp, representing when the message
        // was published to the PolySync bus
        message.setHeaderTimestamp( polysync::getTimestamp() );

        // Publish the message to the PolySync bus
        message.publish();

        // The ok state is called periodically by the system, sleep to reduce
        // the number of messages sent
        polysync::sleepMicro( 1000000 );
    }
};
int main()
{
    // Create an instance of the PublisherSubscriberNode and connect it to
    // PolySync
    PublisherSubscriberNode publisherNode;

    // When the node has been created, it will cause an initStateEvent to be
    // sent and then proceed into the okState.  connectToPolySync does not
    // return, use Ctrl-C to exit
    publisherNode.connectPolySync();

    return 0;
}

2.2 Publish.cpp explained

Both of these headers must be included in C++ applications. The Node API header, PolySyncNode.hpp, creates a node and defines the states within the node state machine. It also allows the node to publish and subscribe to the bus.

The PolySyncDataModel.hpp defines all messages, types, and constants in the PolySync Core C++ Data Model.

#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

The following is the complete list of node state events defined as part of the PolySync Core node state machine:

Event function Execution criteria
setConfigurationEvent() Called once at the beginning of the program before any PolySync Core related tasks, and where command-line arguments should be handled
initStateEvent() Called once after the node transitions into the INIT state
okStateEvent() Called continuously while in the OK state
warnStateEvent() Called continuously while in the WARN state
errorStateEvent() Called continuously while in the ERROR state
fatalStateEvent() Called once after the node transitions into the FATAL state before the node terminates
releaseStateEvent() Called once on exit, triggered by CTRL+C
messageEvent() Called once each time a subscribed message is received; this is where the message data payload is accessed

Unless we override the node state events, each state performs little to no work.

You can then move on to subclass the C++ node object to create a PolySync Core node.

class PublisherSubscriberNode : public polysync::Node
{ ... }

It is important to remember to set the node name during object construction.

Since the node will be publishing a message to the PolySync Core bus, it needs to know the string representation of the message type. The private member _messageType holds the integer representation of the message type at runtime, which allows the node to identify and subscribe to the message type.

private:
    const string node_name = "polysync-publish-cpp";
    const string platform_motion_msg_name = "ps_platform_motion_msg";

    ps_msg_type _messageType;

The constructor then configures the node:

PublisherSubscriberNode()
    {
        setNodeType( PSYNC_NODE_TYPE_API_USER );
        setDomainID( PSYNC_DEFAULT_DOMAIN );
        setSDFID( PSYNC_SDF_ID_INVALID );
        setFlags( NODE_FLAGS_VALUE | PSYNC_INIT_FLAG_STDOUT_LOGGING );
        setNodeName( node_name );
    }

Next is the OK state event. It is important to note that this is called once per execution cycle of the node state machine, while the node is in the OK state.

This OK state event will create an instance of the platform motion message, fill it with some sample data, and publish it to the global PolySync Core bus each time it is called.

You can control the node’s execution rate by adjusting how long it sleeps.

void okStateEvent() override
    {
        ...

        polysync::sleepMicro( 1000000 );
    }

Using the predefined messages in the data model, the node creates an instance of the ps_platform_motion_msg using the C++ representation of the message type.

Now, we will set the message timestamp. Each message has a message timestamp, which represents the UTC time at which the data was received from a hardware device, PolySync Core node, or software application.

        polysync::datamodel::PlatformMotionMessage message( *this );
        message.setTimestamp( polysync::getTimestamp() );

Next, the node sets the latitude and longitude values for this instance of the platform motion message.

Check the C++ Data Model to see all data fields that exist and can be set in this message type.

        message.setLatitude( 45.515289 );

        message.setLongitude( -122.654355 );

We will now move on to setting the message header timestamp. This represents the UTC time when the message was published to the PolySync Core bus. The publishing node is always responsible for setting this directly before publishing the message to the PolySync Core bus.

        message.setHeaderTimestamp( polysync::getTimestamp() );

        message.publish();

Now we will connect the node to PolySync. The main entry point creates an instance of the custom publisher/subscriber node and immediately connects to PolySync, which triggers the setConfigurationEvent and then the initStateEvent.

All non-PolySync Core related initialization should happen in the node’s setConfigurationEvent.

The node will exit the state machine execution loop once it receives the CTRL-C signal interrupt.

int main()
{
    PublisherSubscriberNode publisherNode;

    publisherNode.connectPolySync();

    return 0;
}

2.3 Subscribe.cpp

#include <iostream>
#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

using namespace std;

/**
 * @brief Node flags to be OR'd with driver/interface flags.
 */
#ifndef NODE_FLAGS_VALUE
#define NODE_FLAGS_VALUE (0)
#endif
class PublisherSubscriberNode : public polysync::Node
{
private:
    const string node_name = "polysync-subscribe-cpp";
    const string platform_motion_msg_name = "ps_platform_motion_msg";

    ps_msg_type _messageType;

public:

    PublisherSubscriberNode()
    {
        setNodeType( PSYNC_NODE_TYPE_API_USER );
        setDomainID( PSYNC_DEFAULT_DOMAIN );
        setSDFID( PSYNC_SDF_ID_INVALID );
        setFlags( NODE_FLAGS_VALUE | PSYNC_INIT_FLAG_STDOUT_LOGGING );
        setNodeName( node_name );
    }

    ~PublisherSubscriberNode()
    {

    }

    void initStateEvent() override
    {
        _messageType = getMessageTypeByName( platform_motion_msg_name );

        // Register as a listener for the platform motion message
        registerListener( _messageType );
    }

    void releaseStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void errorStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void fatalStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void warnStateEvent() override
    {
        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro( 10000 );
    }

    void okStateEvent() override
    {
        // The ok state is called periodically by the system
        polysync::sleepMicro( 10000 );
    }

    void messageEvent( std::shared_ptr< polysync::Message > message ) override
    {
        using namespace polysync::datamodel;
        if( std::shared_ptr < polysync::datamodel::PlatformMotionMessage > platformMotionMsg =
                getSubclass< PlatformMotionMessage >( message ) )
        {  
            //platformMotionMsg->print();

            // or...

            std::cout << "Latitude: " << platformMotionMsg->getLatitude() << std::endl;

            std::cout << "Longitude: " << platformMotionMsg->getLongitude() << std::endl;

        }
    }

};

int main()
{
    // Create an instance of the PublisherSubscriberNode and connect it to
    // PolySync.
    PublisherSubscriberNode subscriberNode;
    // When the node has been created, it will cause an initStateEvent to be
    // sent and then proceed into the okState.  connectPolySync does not
    // return, use Ctrl-C to exit.
    subscriberNode.connectPolySync();

    return 0;
}

2.4 Subscribe.cpp explained

The subscriber is largely similar to the publisher, and repeated variables and identical functions will be skipped here.

Message subscriber nodes should always query the runtime for the message type’s integer value, based on the string representation of the message type.

Once the message type is known, the node can register a listener. Each time a message of this type is seen on the PolySync Core bus, the node’s message event will be executed once.

    void initStateEvent() override
    {
        _messageType = getMessageTypeByName( platform_motion_msg_name );

        registerListener( _messageType );
    }

The message event executes asynchronously in its own thread.

We use the getSubclass class template defined in the C++ API. It promotes the incoming base message to the appropriate subclass type, based on the message’s runtime type.

    void messageEvent( std::shared_ptr< polysync::Message > message ) override
    {
        using namespace polysync::datamodel;
        if( std::shared_ptr < polysync::datamodel::PlatformMotionMessage > platformMotionMsg =
                getSubclass< PlatformMotionMessage >( message ) )
        {  
            ...
        }
    }

Each message in the C++ API has a built-in print() method for quick and easy debugging. It will print each message field to stdout, or to a provided ostream. Alternatively─and more commonly─you can access the data using the get*() functions.

It’s best to do as little processing as possible in the node’s message event. If the node needs to perform intensive processing, it should be moved to another thread by queuing the incoming messages.

            //platformMotionMsg->print();

            // or...

            std::cout << "Latitude: " << platformMotionMsg->getLatitude() << std::endl;

            std::cout << "Longitude: " << platformMotionMsg->getLongitude() << std::endl;

3. Running the nodes

After building the two nodes using cmake and make, you can run them using two terminal windows.

3.1 Building the nodes

If you want to see the build dependencies of a PolySync Core node, check out the CMakeLists.txt file and the included PSYNC_HOME/BuildResources.cmake file.

$ cd PolySync-Core-CPP-Examples/PublishSubscribe/Publish
$ mkdir build && cd build
$ cmake ..
$ make

Next, you’re going to build the subscriber node in a second terminal window.

$ cd PolySync-Core-CPP-Examples/PublishSubscribe/Subscribe
$ mkdir build && cd build
$ cmake ..
$ make

4. Running the publisher node

It is time to start the publisher node. This will begin publishing the platform motion message to the global PolySync Core bus, with latitude and longitude set to our static values.

$ ./polysync-publish-cpp
2016-09-17 14:35:33.09s DEBUG  [polysync-publish-cpp-] - build version 2.0.6-1468018065
2016-09-17 14:35:34.09s DEBUG  [polysync-publish-cpp-281475340021795] - created participant - GUID: 0x0001000015A7B023 d(281475340021795)
2016-09-17 14:35:34.09s DEBUG  [polysync-publish-cpp-281475340021795] - message types visible to this node: 46
2016-09-17 14:35:34.09s DEBUG  [polysync-publish-cpp-281475340021795] - transition to state: INIT - timestamp: 1474148134509200
2016-09-17 14:35:34.09s DEBUG  [polysync-publish-cpp-281475340021795] - transition to state: OK - timestamp: 1474148134519985

5. Running the subscriber node

Now you will run the subscriber node. This connects to PolySync Core and will begin receiving data as soon as it enters the “OK” state.

$ ./polysync-subscribe-cpp
2016-09-17 14:35:36.09s DEBUG  [polysync-subscribe-cpp-] - build version 2.0.6-1468018065
2016-09-17 14:35:37.09s DEBUG  [polysync-subscribe-cpp-281476857950370] - created participant - GUID: 0x00010000702170A2 d(281476857950370)
2016-09-17 14:35:37.09s DEBUG  [polysync-subscribe-cpp-281476857950370] - message types visible to this node: 46
2016-09-17 14:35:37.09s DEBUG  [polysync-subscribe-cpp-281476857950370] - transition to state: INIT - timestamp: 1474148137993624
2016-09-17 14:35:37.09s DEBUG  [polysync-subscribe-cpp-281476857950370] - transition to state: OK - timestamp: 1474148137996558
Latitude: 45.5153
Longitude: -122.654
Latitude: 45.5153
Longitude: -122.654

6. Visualize the data

The latitude and longitude data can be visualized in Studio’s Trace plugin, by selecting the ps_platform_motion_msg.

To run Studio, enter the following:

$ polysync-core-studio

Conclusion

Congratulations! As you’ve worked through this tutorial, you’ve been introduced to the core concepts of PolySync Core communication through a message publisher/subscriber model, the basis for creating C++ nodes to access data from the global PolySync Core bus, and building the node against the PolySync Core APIs.

Build a Node

1. PolySync Core Node

PolySync Core nodes publish and subscribe to message types on the PolySync Core bus.

When a node subscribes to a message type, it receives a copy of every instance of that message type published to the PolySync Core bus. The node can access the message buffer and queue, process, convert, or visualize the data contained in the message.

PolySync Core provides a scaffolding generator that makes it easy to roll out a functioning C++ software node that subscribes to the data the node needs.

2. Scaffolding a node

To generate a new node, open a new terminal and type:

$ polysync-generate node

This will create a project called MyNode, which can be built with:

$ mkdir ./MyNode/build 
$ cd MyNode/build
$ cmake .. && make
$ ./MyNode

By default MyNode subscribes to and prints PolySync Core diagnostic messages - DiagnosticTraceMessage.

2.1 Subscribing to specific messages

The PolySync Core data model ships with pre-defined message types to store common data types seen on an autonomous vehicle, for example ps_objects_msg or ps_lidar_points_msg.

These are high-level, hardware agnostic message types. Any PolySync Core node can populate and publish messages defined in the data model.

You can view all messages currently being published to the PolySync Core bus using the Echo C++ example.

2.1.1 Sensor data

Sensor data is published to the bus as messages by dynamic driver nodes, which communicate with supported sensors.

Each supported sensor page indicates the message types the dynamic driver publishes to the bus. For example the FLIR Blackfly GigE article indicates that the ps_image_data_msg is published by the driver.

2.2 Building the node

Use the -s <message-type> flag to subscribe to the list of messages.

$ polysync-generate node -s ps_objects_msg ps_lidar_points_msg ps_image_data_msg
$ mkdir ./MyNode/build 
$ cd MyNode/build
$ cmake .. && make
$ ./MyNode

The generated C++ node will have access to all objects, LiDAR points, and image data in the messageEvent function.

2.3 Command line options

Print the command line options with the help flag. You can specify the node name, output directory, and which messages to subscribe to with a few flags.

$ polysync-generate node -h
usage: polysync-generate node [-h] [-n NAME] [-o DIR]
                              [-s MESSAGE [MESSAGE ...]] [-l]

optional arguments:
  -h, --help            show this help message and exit
  -n NAME, --name NAME  name of the generated PolySync Core node
  -o DIR, --output DIR  path in which to place generated files
  -s MESSAGE [MESSAGE ...], --subscribe MESSAGE [MESSAGE ...]
                        messages to which to subscribe
  -l, --list            list valid PolySync Core messages

3. About the code

MyNode contains all of the necessary methods every PolySync Core C++ node should have. This is the working directory:

MyNode
├── include
│   └── MyNode.hpp
├── CMakeLists.txt
└── src
    ├── MyNode.cpp
    └── main.cpp

In main.cpp, you can see how PolySync Core nodes are initialized.

connectPolySync() is a blocking operation that places the node in the node state machine. If there is a valid license, the node leaves the AUTH state and enters the INIT state, calling initStateEvent() in MyNode.cpp.

3.1 main.cpp

#include <iostream>

#include <GetOpt.hpp>
#include <PolySyncCore.hpp>
#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

#include "MyNode.hpp"


using namespace std;


int main( int argc, char * argv[] )
{
    MyNode node;

    node.setNodeName( MyNode::NODE_NAME );

    node.setCommandLineOptions( { argc, argv } );

    node.connectPolySync();

    return 0;
}

3.2 MyNode.hpp

#ifndef MYNODE_HPP
#define MYNODE_HPP


class MyNode : public polysync::Node
{

public:
    virtual void messageEvent( std::shared_ptr< polysync::Message > message );


    virtual void initStateEvent();


    virtual void setConfigurationEvent( const GetOpt & commandLineArgs );


    void okStateEvent();


    virtual void fatalStateEvent();


    virtual void errorStateEvent();


    virtual void warnStateEvent();


    virtual void releaseStateEvent();


public:

    static constexpr auto NODE_NAME = "MyNode";


private:

    ps_msg_type _messageTypeDiagnosticTraceMsg;

};

#endif // MYNODE_HPP

3.3 MyNode.cpp

#include <iostream>

#include <GetOpt.hpp>
#include <PolySyncCore.hpp>
#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

#include "MyNode.hpp"


using namespace std;


void MyNode::messageEvent( std::shared_ptr< polysync::Message > message )
{
    using namespace polysync::datamodel;

    if( auto diagnosticTraceMsg = getSubclass< DiagnosticTraceMessage >( message ) )
    {
        diagnosticTraceMsg->print();
    }
}


void MyNode::initStateEvent()
{
     try
     {
         _messageTypeDiagnosticTraceMsg = getMessageTypeByName( "ps_diagnostic_trace_msg" );
         registerListener( _messageTypeDiagnosticTraceMsg );
     }
     catch ( const polysync::DTCException & e )
     {
         polySyncLogDebug( "ps_diagnostic_trace_msg - " + e.getDtcDescription() );
         activateFault( e.getDtc(), NODE_STATE_FATAL );
     }
}


void MyNode::setConfigurationEvent( const GetOpt & commandLineArgs )
{
    commandLineArgs.printAll();
}


void MyNode::okStateEvent()
{
    // called continuously while in OK state
}


void MyNode::fatalStateEvent()
{
    // called once upon entering FATAL state
}


void MyNode::errorStateEvent()
{
    // called continuously while in ERROR state
}


void MyNode::warnStateEvent()
{
    // called continuously while in WARN state
}


void MyNode::releaseStateEvent()
{
    // called at end of node life
}

4. Additional References

The node generator is a great tool to get you developing custom nodes in PolySync Core quickly. All nodes implement the C Node API or the C++ Node API, where you can find the functions a node implements. Most examples implement the Node API, as nodes are at the core of PolySync Core; however, a few examples provide additional references for node use and construction.

node_template shows the basic “node template” that all PolySync Core nodes use. This is a reference for understanding PolySync Core nodes and the different node states.

publish_subscribe provides an overview of publish/subscribe communication in PolySync Core, which is the most common form of node-to-node communication in PolySync. This is a good example to reference for node-to-node communication. PublishSubscribe provides the same overview in C++.

Finally, ParameterGetSet provides an overview of filtering for a specific node on the PolySync Core bus and setting a specific parameter ID. It is a good reference for setting parameter IDs at runtime and for publishing and subscribing to ps_parameters_msg.

Publishing Multiple Message Types (Adv.) Tutorial

This advanced tutorial covers how to populate and publish data from a single node into a PolySync Core message on the bus, enable multiple nodes to subscribe to messages to access data asynchronously, and visualize all data being published to the PolySync Core bus.

Overview

The example program polysync-data-generator-cpp is included in the PolySync Core installation. It is designed to populate PolySync Core messages with sample data and publish those messages to the global, shared bus.

The sample data comes in multiple types, each with its own associated high-level PolySync Core message:

  • Radar Targets: ps_radar_targets_msg
  • Classified Objects: ps_objects_msg
  • LiDAR Points: ps_lidar_points_msg

PolySync Core messages are defined in multiple data model modules.

1. Running the data generator

Begin by starting the PolySync Core manager. Open a new terminal and run:

$ sudo service polysync-core-manager start

1.2 Generate data

You can find the C++ data generator code in the public PolySync-Core-CPP-Examples repository.

To build the C++ application you will use the tools cmake and make. The runtime dependencies are detailed in the local README.md file.

$ git clone git@github.com:PolySync/PolySync-Core-CPP-Examples.git
$ cd PolySync-Core-CPP-Examples/DataGenerator
$ sudo apt-get install libglib2.0-dev  # this is required for this example to build, documented in the examples README.md file
$ mkdir build && cd build
$ cmake ..
$ make

Now the polysync-data-generator-cpp node is built, and can be started with the command:

$ ./polysync-data-generator-cpp

This is an overview of the source files for the example:

PolySync-Core-CPP-Examples/DataGenerator/
├── CMakeLists.txt
├── DataGenerator.cpp
├── LidarPointGenerator.cpp
├── LidarPointGenerator.hpp
├── ObjectGenerator.cpp
├── ObjectGenerator.hpp
├── RadarTargetGenerator.cpp
├── README.md
└── RadarTargetGenerator.hpp

1.2.1 DataGenerator.cpp

This is where the PolySync Core node is defined, and where you can access the main entry point for this node/application.

#include <iostream>

#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

#include "LidarPointGenerator.hpp"
#include "RadarTargetGenerator.hpp"
#include "ObjectGenerator.hpp"

using namespace polysync::datamodel;

class DataGenerator : public polysync::Node
{
protected:
    virtual void okStateEvent()
    {
        _lidarPointGenerator->updatePoints();
        _lidarPointGenerator->publishPoints();

        _radarTargetGenerator->updateTargets();
        _radarTargetGenerator->publishTargets();

        _objectGenerator->updateObjects();
        _objectGenerator->publishObjects();

        polysync::sleepMicro( _updateInterval );
    }
    virtual void initStateEvent()
    {
        _lidarPointGenerator =
                std::unique_ptr< LidarPointGenerator >{
                    new LidarPointGenerator( *this ) };

        _radarTargetGenerator =
                std::unique_ptr< RadarTargetGenerator >{
                    new RadarTargetGenerator( *this ) };

        _objectGenerator =
                std::unique_ptr< ObjectGenerator >{
                    new ObjectGenerator( *this ) };
    }

private:
    ps_timestamp _updateInterval{ 50000 };
    std::unique_ptr< LidarPointGenerator > _lidarPointGenerator;
    std::unique_ptr< RadarTargetGenerator > _radarTargetGenerator;
    std::unique_ptr< ObjectGenerator > _objectGenerator;
};
int main()
{
    DataGenerator dataGenerator;

    dataGenerator.connectPolySync();

    return 0;
}

1.2.2 DataGenerator.cpp explained

Now let’s break down the DataGenerator.cpp code.

The header below defines the PolySync Core node base class and each of the protected event methods in the node state machine. Each state performs little or no work unless it is overridden.

#include <PolySyncNode.hpp>

The C++ data model defines all types and messages in the PolySync Core runtime. This must be included in all C++ node applications.

#include <PolySyncDataModel.hpp>

The following headers are specific to this application and define the data “generators.” They are responsible for creating, populating, and updating the respective message types with data.

#include "LidarPointGenerator.hpp"
#include "RadarTargetGenerator.hpp"
#include "ObjectGenerator.hpp"

All PolySync Core C++ applications must subclass the PolySync Core node object in order to connect to the PolySync Core bus and publish or subscribe to messages.

class DataGenerator : public polysync::Node

The following is called continuously while in the node’s OK state, and is where almost all of the action happens for the node.

virtual void okStateEvent()

For each execution loop, each of the three primitive data types will be updated in a new message instance. These are then published to the bus.

_lidarPointGenerator->updatePoints();
_lidarPointGenerator->publishPoints();

_radarTargetGenerator->updateTargets();
_radarTargetGenerator->publishTargets();

_objectGenerator->updateObjects();
_objectGenerator->publishObjects();

This is called once after the node transitions to the INIT state. The initStateEvent is typically where a node creates messages and initializes any other resources that require a PolySync Core node reference.

virtual void initStateEvent()

This creates the LiDAR point generator instance, owned through a std::unique_ptr.

_lidarPointGenerator = std::unique_ptr< LidarPointGenerator >{ new LidarPointGenerator( *this ) };

The following controls the execution speed of the state machine: at the end of each okStateEvent, the node sleeps for _updateInterval microseconds.

ps_timestamp _updateInterval{ 50000 };

This declares each of the data generator objects, which are defined in their own header and source files.

std::unique_ptr< LidarPointGenerator > _lidarPointGenerator;
std::unique_ptr< RadarTargetGenerator > _radarTargetGenerator;
std::unique_ptr< ObjectGenerator > _objectGenerator;

This creates an instance of your DataGenerator object. The connectPolySync call is a blocking operation that places the node in the node state machine. If there is a valid license, the node leaves the AUTH state and enters the INIT state, calling the initStateEvent.

DataGenerator dataGenerator;
dataGenerator.connectPolySync();

1.3 Generate LiDAR points

The LiDAR point generator defines the class used to create, initialize, populate, and publish LiDAR messages on PolySync. The same pattern is reimplemented in RadarTargetGenerator.cpp and ObjectGenerator.cpp for a RADAR target generator class and an object generator class.

#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>


class LidarPointGenerator
{
public:
    LidarPointGenerator( polysync::Node & );


    void updatePoints();


    void publishPoints();


    void initializeMessage();

private:

    polysync::datamodel::LidarPointsMessage _message;

    float _relativeTime{ 0.0 };
    const float _gridScale{ 10.0 };
    const ulong _gridSideLength{ 100 };
    const ulong _sensorID{ 11 };
    const ulong _numberOfPoints{ 10000 };
    const float _sineFrequency{ 4.0 };
};

1.3.1 LidarPointGenerator.hpp explained

Now let’s break down the LidarPointGenerator.hpp code.

#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

The three primitive data types (LiDAR, RADAR, objects) each have a class with a basic PolySync Core node constructor and three functions to initialize, update, and publish data to the bus. The LiDAR point generator class is defined below.

class LidarPointGenerator
{ ... };

This class must be constructed with a valid node reference, which happens in the initStateEvent defined in the DataGenerator.cpp file.

public:
    LidarPointGenerator( polysync::Node & );
};

The initializeMessage function populates the LidarPointsMessage with a valid sensor descriptor─describing this node as a publisher on the bus─and sets the initial header and message start/end timestamps.

Once the message is initialized, it calls updatePoints to fill in the point data.

public:
    ...
    void initializeMessage();
};

For each message published to the bus, the node needs to update each of the valid fields within the message. updatePoints iterates over a vector of PolySync Core LidarPoint and updates each LiDAR point’s:

  • Intensity
  • x, y and z position

Each point in the vector is updated to represent a sine wave.

public:
    ...
    void updatePoints();
};

After you’re done setting the header timestamp, the following will publish the LidarPointsMessage with an updated vector of points to the PolySync Core bus.

public:
    ...
    void publishPoints();
};

The _message member below is the high-level message used to store the LiDAR points.

private:

    polysync::datamodel::LidarPointsMessage _message;
};

These variables are used to calculate the sine wave and determine the number of LiDAR points required to represent it.

private:
    ...
    float _relativeTime{ 0.0 };
    const float _gridScale{ 10.0 };
    const ulong _gridSideLength{ 100 };
    const ulong _numberOfPoints{ 10000 };
    const float _sineFrequency{ 4.0 };
};

1.3.2 LidarPointGenerator.cpp

#include "LidarPointGenerator.hpp"

using namespace std;
using namespace polysync::datamodel;

LidarPointGenerator::LidarPointGenerator( polysync::Node & node )
    :
    _message( node ),
    _numberOfPoints( _gridSideLength * _gridSideLength )
{
    initializeMessage();
}

void LidarPointGenerator::initializeMessage()
{
    polysync::datamodel::SensorDescriptor descriptor;

    descriptor.setTransformParentId( PSYNC_COORDINATE_FRAME_LOCAL );
    descriptor.setType( PSYNC_SENSOR_KIND_NOT_AVAILABLE );
    _message.setSensorDescriptor( descriptor );

    auto time = polysync::getTimestamp();
    _message.setHeaderTimestamp( time );
    _message.setStartTimestamp( time );
    _message.setEndTimestamp( time );

    updatePoints();
}

void LidarPointGenerator::updatePoints()
{
    auto time = polysync::getTimestamp();
    auto timeDelta = time - _message.getStartTimestamp();
    auto timeDeltaSeconds = static_cast< float >( timeDelta ) / 1000000.0;

    _relativeTime += timeDeltaSeconds;

    _message.setStartTimestamp( time );
    _message.setEndTimestamp( time );

    std::vector< LidarPoint > outputPoints;
    outputPoints.reserve( _numberOfPoints );

    for( auto pointNum = 0U; pointNum < _numberOfPoints; ++pointNum )
    {
        polysync::datamodel::LidarPoint point;
        point.setIntensity( 255 );

        auto x = pointNum % 100;
        auto y = pointNum / 100;

        float u = static_cast< float >( x )/ 100.0;
        float v = static_cast< float >( y ) / 100.0;

        // center u/v at origin
        u = ( u * 2.0 ) - 1.0;
        v = ( v * 2.0 ) - 1.0;

        float w = sin( ( u * _sineFrequency ) + _relativeTime )
                * cos( ( v * _sineFrequency ) + _relativeTime )
                * 0.5;

        point.setPosition( { u * 10, v * 10, w * 10 } );
        outputPoints.emplace_back( point );
    }

    _message.setPoints( outputPoints );
}

void LidarPointGenerator::publishPoints()
{
    _message.setHeaderTimestamp( polysync::getTimestamp() );
    _message.publish();
}

1.3.3 LidarPointGenerator.cpp explained

Now let’s break down the LidarPointGenerator.cpp code.

During the object’s construction, the LiDAR message is initialized with the node reference provided from the DataGenerator.cpp initStateEvent function call. Once you initialize the message here, the message fields can be set to the desired values.

LidarPointGenerator::LidarPointGenerator( polysync::Node & node )
    :
    _message( node ),
    _numberOfPoints( _gridSideLength * _gridSideLength )
{
    initializeMessage();
}

The sensor descriptor is used by subscribing nodes to determine which node a message originated from. It is not mandatory, but we highly recommend populating the sensor descriptor for each message instance.

The TransformParentId represents the active coordinate frame identifier. In this case, you are setting the active coordinate frame to the local frame, which means no transform corrections are needed for outgoing data.

Messages should be initialized with the current UTC timestamp; multiple timestamp fields exist.

void LidarPointGenerator::initializeMessage()
{
    polysync::datamodel::SensorDescriptor descriptor;

    descriptor.setTransformParentId( PSYNC_COORDINATE_FRAME_LOCAL );
    descriptor.setType( PSYNC_SENSOR_KIND_NOT_AVAILABLE );
    _message.setSensorDescriptor( descriptor );

    auto time = polysync::getTimestamp();
    _message.setHeaderTimestamp( time );
    _message.setStartTimestamp( time );
    _message.setEndTimestamp( time );


    updatePoints();
}

This function is responsible for setting each point in the LiDAR points message, stored as a vector of PolySync Core LidarPoint. Note that the namespace is polysync::datamodel.

Timestamp fields are updated for each iteration that the message is updated and subsequently published by publishPoints(). PolySync Core timestamps can be referenced globally across the PolySync Core bus, and they are used here both to calculate the sine wave, as well as in the message timestamp fields.

void LidarPointGenerator::updatePoints()
{
    auto time = polysync::getTimestamp();

    auto timeDelta = time - _message.getStartTimestamp();

    auto timeDeltaSeconds = static_cast< float >( timeDelta ) / 1000000.0;

    _relativeTime += timeDeltaSeconds;

    _message.setStartTimestamp( time );
    _message.setEndTimestamp( time );
}

Each point is iterated over to set the intensity and x/y/z position values before it is emplaced in the vector. The following for loop calculates the position of each point to represent a sine wave.

Once the vector is filled, the setPoints member function copies the local vector into the message.

    std::vector< LidarPoint > outputPoints;
    outputPoints.reserve( _numberOfPoints );

    for( auto pointNum = 0U; pointNum < _numberOfPoints; ++pointNum )
    {
        polysync::datamodel::LidarPoint point;
        point.setIntensity( 255 );

        auto x = pointNum % 100;
        auto y = pointNum / 100;

        float u = static_cast< float >( x )/ 100.0;
        float v = static_cast< float >( y ) / 100.0;

        // center u/v at origin
        u = ( u * 2.0 ) - 1.0;
        v = ( v * 2.0 ) - 1.0;

        float w = sin( ( u * _sineFrequency ) + _relativeTime )
                * cos( ( v * _sineFrequency ) + _relativeTime )
                * 0.5;

        point.setPosition( { u * 10, v * 10, w * 10 } );
        outputPoints.emplace_back( point );
    }

    _message.setPoints( outputPoints );
}
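The index-to-position mapping in the loop above can be reproduced as a standalone function, with the PolySync types removed, to make the sine-surface math easy to inspect (names here are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Simple 3D point stand-in for polysync::datamodel::LidarPoint's position
struct Point3 { float x, y, z; };

// Maps a point index on a 100x100 grid to u/v in [-1, 1], computes the
// sine-surface height, and scales to a 20 m x 20 m patch as in the example
Point3 computePoint( unsigned pointNum, float sineFrequency, float relativeTime )
{
    const unsigned gridX = pointNum % 100;
    const unsigned gridY = pointNum / 100;

    // Normalize to [0, 1], then center at the origin: [-1, 1]
    float u = ( static_cast< float >( gridX ) / 100.0f ) * 2.0f - 1.0f;
    float v = ( static_cast< float >( gridY ) / 100.0f ) * 2.0f - 1.0f;

    // Height is bounded by +/- 0.5 before scaling
    const float w = std::sin( u * sineFrequency + relativeTime )
                  * std::cos( v * sineFrequency + relativeTime )
                  * 0.5f;

    return { u * 10.0f, v * 10.0f, w * 10.0f };
}
```

Because the height term is bounded by 0.5, the z value of every point stays within +/- 5 meters after scaling, while x and y sweep a 20 m square centered on the origin.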

Finally, the header timestamp is set─immediately before publishing─so that it represents the UTC time at which the message is published to the PolySync Core bus.

void LidarPointGenerator::publishPoints()
{
    _message.setHeaderTimestamp( polysync::getTimestamp() );
    _message.publish();
}

2. Visualize data

Once a node─in this case polysync-data-generator-cpp─is publishing data to the PolySync Core bus, any other application can access the data by subscribing to the PolySync Core message type(s).

PolySync Core provides two applications that are already set up to subscribe to the message types published by this node, and visualize the data in multiple ways:

  • PolySync Core Studio
    • A Qt based application that has multiple plugins providing different ways to view the data on the bus.
  • Viewer Lite
    • An OpenGL 2D visualizer application that allows you to see primitive data types being transmitted using the PolySync Core bus.

2.1 Viewer Lite

Viewer Lite is an example written in C.

In another terminal, download, build, and start the Viewer Lite application.

$ git clone git@github.com:PolySync/PolySync-Core-C-Examples.git
$ cd PolySync-Core-C-Examples/viewer_lite
$ sudo apt-get install libglib2.0-dev freeglut3-dev  # this is required for this example to build, documented in the examples README.md file
$ make
$ ./bin/polysync-viewer-lite

DataGen

Your sample node should display the following:

  • A LiDAR “point cloud” as the block in the center
  • Two “identified objects” as the rectangles on the upper right side of the point cloud. These are similar to those that you might get from an object identification sensor
  • Two circles representing RADAR targets on the upper left side of the point cloud

PolySync Core Viewer Lite does not allow for much distinction within the point cloud, but updates to the object data and RADAR pings are immediately obvious.

2.2 Studio

In this instance, the 3D View plugin is the easiest way to view the provided data.

Leave the PolySync Core Viewer Lite application running, and either double-click the PolySync Core Studio icon on your desktop or execute the following command in a new terminal:

$ polysync-core-studio

When Studio opens, select the “3D View” button on the right to visualize the data that the PolySync Core Data Generator is creating. With that item selected, PolySync Core Studio will display something similar to the image:

DataGen

Conclusion

Congratulations! You have now successfully populated and published data from a single node into a PolySync Core message on the bus, enabled multiple nodes to subscribe to messages to access data asynchronously, and visualized all data being published to the PolySync Core bus.

Workflow integrations

ROS Bridge

This bidirectional bridge enables PolySync Core messages to be transcribed to ROS topics, and ROS topics to be transcribed to PolySync Core messages, at runtime. PolySync Core provides a toolchain to build and run a ROS bridge node, which acts as a bridge between the PolySync Core bus and the ROS bus.

This bridge enables synchronized sensor data from PolySync Core to flow into existing ROS applications, allowing you to take advantage of PolySync Core’s broad suite of supported sensors─with submillisecond timing fidelity and no data loss─while using existing ROS applications and software APIs to create autonomous vehicle algorithms.

Here’s an example of LiDAR, RADAR, and object data visualized in RViz, streaming from the PolySync Core bus via the bridge.

ROS bridge example

The ROS bridge node can be generated against any PolySync Core data model using the PolySync Core provided pdm-gen tool. Once the files are generated, the ROS bridge node can be started; it subscribes to data from the PolySync Core bus and begins transcribing and republishing that data to the ROS bus.

1. Building the ROS bridge

The pdm-gen tool takes IDL files as input arguments. The IDL files define C-style structures that make up a PolySync Core high-level message, for example the ps_objects_msg and ps_lidar_points_msg.

After sourcing the ROS environment, use the pdm-gen tool and pass in the IDL files containing the message types that you want transcribed to ROS topics.


$ cd ~
$ source /opt/ros/<ros version>/setup.bash
$ pdm-gen -c -r
$ source pdm/ros_bridge/setup.sh

The command above will generate a ROS topic for every message type defined in the PolySync Core data model. Create and supply additional IDL files to generate ROS topics for your custom-defined message types.

The pdm-gen tool creates all ROS resources (scripts, headers, libraries) along with a custom ROS topic for each PolySync Core message that’s defined within the data model IDL files.

The output directory is ./pdm, and the generated ROS resources are in ./pdm/ros_bridge/. You do not need to manually copy files out of the pdm/ directory.

Notable files:

  • pdm/ros_bridge/lib/libps_ros_bridge_library.so
  • pdm/ros_bridge/include/RosMessageCopy.h
  • pdm/ros_bridge/share/polysync_ros_bridge/BridgeNode

2. Running the ROS bridge

Print help:

rosrun polysync_ros_bridge BridgeNode -h
Node Type is a required option all others are optional
Options:
Flag:     Description:                Default:
-t        Node Type
-o        PolySync Ok State Timing    1
-n        Node Name Prefix            ROS_Bridge
-w        Number of ROS Threads       1
-q        ROS Queue Size              1
-p        ROS Topic Prefix            polysync
-m        Print available messages
-h        Prints help

A node type is equivalent to the message type name, for example ps_gps_msg or ps_lidar_points_msg.

2.1 Enumerating available message types

The bridge node enumerates the supported messages that can be subscribed to on the PolySync Core side and translated to a complementary ROS topic.

$ rosrun polysync_ros_bridge BridgeNode -m
Available bridge messages are:
ps_platform_cabin_report_msg
ps_rnr_msg
ps_file_transfer_msg
ps_imu_accuracy_msg
ps_parameters_msg
ps_event_msg
ps_platform_tire_pressure_report_msg
ps_platform_motion_msg
ps_radar_targets_msg
ps_image_data_msg
ps_platform_wheel_speed_report_msg
psync_file_transfer_ext_msg_handler
ps_objects_msg
ps_platform_obd_msg
ps_imu_msg
ps_platform_suspension_report_msg
ps_platform_surround_report_msg
ps_zones_msg
ps_gps_msg
ps_file_msg
ps_byte_array_msg
ps_response_msg
ps_lane_model_msg
ps_platform_steering_report_msg
ps_platform_throttle_report_msg
ps_platform_brake_information_msg
ps_traffic_sign_msg
ps_lidar_points_msg
ps_diagnostic_trace_msg
ps_can_frame_msg
ps_platform_steering_command_msg
ps_platform_brake_report_msg
ps_gps_accuracy_msg
ps_platform_turn_signal_command_msg
ps_sdf_state_msg
psync-file-ext-msg-handler
ps_platform_brake_command_msg
ps_command_msg
ps_manager_status_msg
ps_platform_throttle_command_msg
ps_platform_gear_command_msg
ps_platform_cabin_ext_report_msg
ps_platform_gear_report_msg
ps_rnr_sessions_msg
ps_platform_control_msg

2.2 Starting the bridge node

Open two terminals.

In the first terminal:


$ source ~/pdm/ros_bridge/setup.bash
$ roscore

In the second terminal:


$ source ~/pdm/ros_bridge/setup.bash
$ sudo service polysync-core-manager start
$ rosrun polysync_ros_bridge BridgeNode -t ps_diagnostic_trace_msg

Expected output:


2017-08-17 13:51:02.08s DEBUG  [ps_to_ros-] - build version 2.3.0-1502486752
2017-08-17 13:51:03.08s DEBUG  [ps_to_ros-281475018373488] - created participant - GUID: 0x00010000027BB970 d(281475018373488)
2017-08-17 13:51:03.08s DEBUG  [ps_to_ros-281475018373488] - message types visible to this node: 83
Spinning up a node of type: ps_diagnostic_trace_msg

3. Visualize the data

Assuming you’ve left the BridgeNode running from above, you can use rostopic to visualize the data by opening a new terminal and running the following command:


rostopic echo /polysync/ps_diagnostic_trace_msg

4. Additional resources

To see an example of how to utilize the ROS bridge, see this example from the PolySync Core C++ GitHub repo.

Tutorials

In previous sections the concept of using multiple applications to publish and subscribe to data was discussed. This section develops further examples of vehicle interaction, including control, path finding, and ground plane detection.

Vehicle Control Tutorial

In this tutorial we will demonstrate how the PolySync Core bus, combined with PolySync’s publish/subscribe architecture, can be used to control a vehicle in real time.

1. The PolySync Core parrot projects

Parrot Controller and Parrot Visualizer live in our public C examples repo. After downloading the repo manually or using git to clone it, open two terminals. In the first terminal, navigate to the “parrot_controller” project in the location shown below:

PolySync-Core-C-Examples/parrot_controller
├── include
│   ├── common.h
│   ├── gl_headers.h
│   ├── grid.h
│   ├── gui.h
│   ├── ps_interface.h
│   ├── render.h
│   ├── vehicle_control.h
│   └── waypoint.h
├── Makefile
└── src
    ├── grid.c
    ├── gui.c
    ├── parrot_controller.c
    ├── ps_interface.c
    ├── render.c
    ├── vehicle_control.c
    └── waypoint.c

In the second terminal, open the “parrot_visualizer” project:

PolySync-Core-C-Examples/parrot_visualizer
├── include
│   ├── common.h
│   ├── driver_vehicle.h
│   ├── gl_headers.h
│   ├── gui.h
│   ├── ps_interface.h
│   ├── render.h
│   └── sliding_filter.h
├── Makefile
├── res
│   └── parrot.png
└── src
    ├── driver_vehicle.c
    ├── gui.c
    ├── parrot_visualizer.c
    ├── ps_interface.c
    ├── render.c
    └── sliding_filter.c

1.1 Install dependencies

$ sudo apt-get install libglib2.0-dev freeglut3-dev libsdl2-dev libsdl2-image-dev libpng12-dev 

1.2 Make the PolySync Core parrot projects

Using the existing makefiles, make both projects and then run them.

Parrot Windows

2. Run the waypoint controller

Begin manually specifying waypoints by pointing and clicking on a few different locations on the “parrot_controller” example. Waypoints become visible where clicked, and the vehicle in the “parrot_visualizer” example will start to move towards the closest one. You are now driving your first autonomous vehicle in PolySync.

Parrot Driving

3. Using Trace to view messages on the bus

Begin by opening PolySync Core Studio:

$ polysync-core-studio

Then, click on Trace in the “Launch Plugin” right-hand sidebar. Trace should open up as shown below.

Studio Trace

The drop-down list in the left-hand sidebar contains all of the message types that have been seen on the bus; it updates dynamically. Enter a name for the Trace window in the text box above the drop-down list. Once a message type and a name for the Trace are selected, hit “Start Trace.”

Now that Trace is running, you can create some waypoints by clicking in the parrot_controller example.

Trace Control

For a deeper understanding of what is going on here, take a look below. The source file “vehicle_control.c” is where the pure pursuit algorithm that controls the vehicle lives.

PolySync-Core-C-Examples/parrot_controller
└── src
    └── vehicle_control.c

The function below is called by a loop in main. It receives the list of current waypoints that have been selected in the GUI, along with the user_data structure, which contains the current position of the vehicle (parrot) on the screen.

The vehicle position is continuously updated by the parrot_visualizer example, which estimates the vehicle’s position based on steering and throttle messages, loads that estimated position into a ps_platform_motion_msg, and sends it out on the PolySync Core bus.

Using the received position information, the function calls findNextWaypoint, which determines the current waypoint goal from the queue of waypoints (and eliminates the current goal waypoint if it has been reached in this iteration). Once the waypoint goal is known, it is passed to the pure pursuit algorithm; the next code snippet explains how this works.

void send_psync_messages_for_vehicle_control( node_data_s * user_data,
        waypoint_s * waypoints )
{
    // local vars
    node_data_s     *node_data  = NULL;

    // cast
    node_data = ( node_data_s* ) user_data;

    // ignore if not valid
    if( ( node_data == NULL ) || ( waypoints == NULL ) )
    {
        return;
    }

    waypoint_s nextWaypoint;

    if( findNextWaypoint( waypoints, &nextWaypoint, node_data->current_vehicle_position ) )
    {
        return;
    }

    double steeringAngle = 0;

    if( calculate_steering_angle_based_on_goal_waypoint(
            &nextWaypoint,
            node_data->current_vehicle_position,
            &steeringAngle ) )
    {
        return;
    }

    publish_steering_command( node_data->node, steeringAngle );    
    publish_throttle_command( node_data->node, DEFAULT_THROTTLE );
}

This function calculates the angle between the vehicle’s current heading and the bearing from the vehicle’s current position to the goal waypoint. The error between these two angles is used to calculate the lateral error of the vehicle from the goal path, which─multiplied by a steering control factor─becomes the steering command.

int calculate_steering_angle_based_on_goal_waypoint(
        waypoint_s * goalWaypoint,
        waypoint_s currentPosition,
        double * steeringAngle )
{
    if( ! goalWaypoint->valid )
    {
        return 1;
    }

    double vehicleToWaypointAngle = calculate_angle_between_coordinates(
            currentPosition.x,
            currentPosition.y,
            goalWaypoint->x,
            goalWaypoint->y );

    double errorInHeading = calculate_smallest_interior_angle(
            currentPosition.heading*RAD2DEG,
            vehicleToWaypointAngle*RAD2DEG );    
    double lateralError = sin( errorInHeading*DEG2RAD );

    ( *steeringAngle ) = -STEERING_CONTROL_FACTOR*lateralError;

    return 0;
}
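The same heading-error math can be exercised in isolation. The sketch below strips out the PolySync plumbing; the angle helper and control factor are simplified stand-ins for the ones in vehicle_control.c, not the example’s actual code:

```cpp
#include <cassert>
#include <cmath>

const double kPi = 3.141592653589793;
const double kSteeringControlFactor = 1.0; // illustrative gain

// Smallest signed angle (radians) from `from` to `to`, wrapped to [-pi, pi]
double smallestInteriorAngle( double from, double to )
{
    double diff = std::fmod( to - from, 2.0 * kPi );
    if( diff > kPi )  { diff -= 2.0 * kPi; }
    if( diff < -kPi ) { diff += 2.0 * kPi; }
    return diff;
}

// Steering command toward a goal waypoint, following the example's formula:
// steering = -gain * sin( heading error )
double steeringAngleToGoal( double posX, double posY, double heading,
                            double goalX, double goalY )
{
    // Bearing from the vehicle's position to the goal waypoint
    const double vehicleToWaypointAngle =
            std::atan2( goalY - posY, goalX - posX );

    // Heading error, then lateral error as its sine
    const double errorInHeading =
            smallestInteriorAngle( heading, vehicleToWaypointAngle );
    const double lateralError = std::sin( errorInHeading );

    return -kSteeringControlFactor * lateralError;
}
```

With the goal straight ahead the heading error is zero and no steering is commanded; as the goal moves off-axis, the sine of the heading error grows toward one, saturating the correction.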

Once you have the desired steering angle and desired throttle value, these are packaged up and published to the PolySync Core bus in the code below.

    publish_steering_command( node_data->node, steeringAngle );

    publish_throttle_command( node_data->node, DEFAULT_THROTTLE );

These messages are received and processed by the parrot_visualizer example, then used to estimate the position of the “vehicle”, closing the loop.

Conclusion

Congratulations! You have now successfully controlled your first PolySync Core autonomous vehicle.

Ground Plane Detection Tutorial

This tutorial will introduce you to a common process for implementing perception algorithms, as well as using logfile data as an algorithm development acceleration tool.

1. Ground plane detection overview

The ground plane detection node is an example of a PolySync perception algorithm.

This node depends on a LiDAR point cloud as an input, so it subscribes to the ps_lidar_points_msg to receive a copy of every LiDAR message that’s seen on the PolySync bus.

For each received message, this node will access the point cloud and ignore any non-ground points using simple filtering.

LiDAR ground points are packaged into a separate ps_lidar_points_msg message, which is re-published to the bus for all other nodes to optionally consume.

studio-ground-plane-visualization

2. Locate the C++ code

You can find the C++ Ground Plane Detection code as part of our public C++ examples repo here.

PolySync-Core-CPP-Examples/GroundPlaneDetection
#include <unistd.h>  // usleep

#include <PolySyncNode.hpp>
#include <PolySyncDataModel.hpp>

using namespace std;

class GroundPlaneDetection : public polysync::Node
{
private:
    ps_msg_type _messageType;

    std::vector< polysync::datamodel::LidarPoint > _groundPlanePoints;

public:
    void initStateEvent() override
    {
        _messageType = getMessageTypeByName( "ps_lidar_points_msg" );
        registerListener( _messageType );
    }

    void okStateEvent() override
    {
        polysync::datamodel::LidarPointsMessage groundPlaneMessage( *this );
        groundPlaneMessage.setPoints( _groundPlanePoints );
        groundPlaneMessage.setHeaderTimestamp( polysync::getTimestamp() );
        groundPlaneMessage.publish();
        usleep(50);
    }

    virtual void messageEvent( std::shared_ptr< polysync::Message > message )
    {
        using namespace polysync::datamodel;
        if( std::shared_ptr< LidarPointsMessage > lidarPointsMessage = getSubclass< LidarPointsMessage >( message ) )
        {
            // Filter out this node's own messages
            if( lidarPointsMessage->getHeaderSrcGuid() != getGUID() )
            {
                _groundPlanePoints.clear();

                std::vector< polysync::datamodel::LidarPoint > lidarPoints = lidarPointsMessage->getPoints();

                std::array< float, 3 > position;

                for( polysync::datamodel::LidarPoint point : lidarPoints )
                {
                    position = point.getPosition();

                    if( pointIsNearTheGround( position ) )
                    {
                        _groundPlanePoints.push_back( point );
                    }
                }
            }
        }
    }

    bool pointIsNearTheGround( const std::array< float, 3 > & point )
    {
        return point[0] >= 2.5 and      // x is 2.5+ meters from the vehicle origin
               point[0] < 35 and        // x is less than 35 meters from the vehicle origin
               point[1] > -12 and       // y is greater than -12 meters from the vehicle origin (towards the passenger side)
               point[1] < 12 and        // y is less than 12 meters from the vehicle origin (towards the driver side)
               point[2] > -0.35 and     // z is greater than -0.35 meters from the vehicle origin (towards the ground),
                                        // this compensates for vehicle pitch as the vehicle drives
               point[2] < 0.25;         // z is less than 0.25 meters from the vehicle origin
    }

};

int main()
{
    // Create the node and connect it to the PolySync Core bus; this call
    // blocks until the node is disconnected
    GroundPlaneDetection groundPlaneDetection;
    groundPlaneDetection.connectPolySync();
    return 0;
}

2.1 GroundPlaneDetection.cpp explained

The _messageType is used to hold the integer value for the ps_lidar_points_msg.

_groundPlanePoints is used to hold the subset of LiDAR points that represent the ground plane.

private:
    ps_msg_type _messageType;

    std::vector< polysync::datamodel::LidarPoint > _groundPlanePoints;

The following functions are called─almost always in this order─by a node that’s publishing messages to the bus. After creating an instance of the ps_lidar_points_msg, the payload is set; in this instance, the ground plane points vector.

Please note that the message header timestamp is set immediately before the publish message function.

void okStateEvent() override
{
    polysync::datamodel::LidarPointsMessage groundPlaneMessage( *this );

    groundPlaneMessage.setPoints( _groundPlanePoints );

    groundPlaneMessage.setHeaderTimestamp( polysync::getTimestamp() );

    groundPlaneMessage.publish();

    usleep(50);
}

Within the messageEvent, you will promote the base class message to a LidarPointsMessage. It’s important that the node filters out any message that it has itself published to the bus.

Once the node receives a valid, new LiDAR points message it can clear out the existing vector of LiDAR ground points.

if( std::shared_ptr< LidarPointsMessage > lidarPointsMessage = getSubclass< LidarPointsMessage >( message ) )
{
    // Filter out this node's own messages
    if( lidarPointsMessage->getHeaderSrcGuid() != getGUID() )
    {
        _groundPlanePoints.clear();

Each time the node receives a LiDAR point message, it creates another instance of the LiDAR point message to hold the ground plane points. The original message and ground plane points message will have the same message start/end scan timestamps, which allows other node(s) to correlate the original and ground plane messages for further processing.

This code block shows how to create local containers for the incoming message, and iterate over the message payload in the for loop.

LidarPointsMessage groundPlaneMessage ( *this );

groundPlaneMessage.setHeaderTimestamp( polysync::getTimestamp() );

// Get the entire LiDAR point cloud from the incoming message
std::vector< polysync::datamodel::LidarPoint > lidarPoints = lidarPointsMessage->getPoints();

// Create a container that will hold all ground plane points that are found in the nodes processing
std::vector< polysync::datamodel::LidarPoint > groundPlanePoints;

// Create a container to hold a single point as the node iterates over the full point cloud
std::array< float, 3 > position;

for( polysync::datamodel::LidarPoint point : lidarPoints )
{
    // Get the x/y/z position for this point in the point cloud
    position = point.getPosition();


    if( pointIsNearTheGround( position ) )
    {
        // This point is close to the ground, place it in our point vector
        _groundPlanePoints.push_back( point );
    }
}

Since the incoming data has been transformed to a vehicle centered reference coordinate frame by default, we can use the following simple filtering techniques to determine which points in the LiDAR cloud are ground points.

bool pointIsNearTheGround( const std::array< float, 3 > & point )
{
    // The vehicle origin is at the center of the rear axle, on the ground
    // Incoming LiDAR point messages have been corrected for sensor mount position already


    return point[0] >= 2.5 and      // x is 2.5+ meters from the vehicle origin
           point[0] < 35 and        // x is less than 35 meters from the vehicle origin
           point[1] > -12 and       // y is greater than -12 meters from the vehicle origin (towards the passenger side)
           point[1] < 12 and        // y is less than 12 meters from the vehicle origin (towards the driver side)
           point[2] > -0.35 and     // z is greater than -0.35 meters from the vehicle origin (towards the ground),
                                    // this compensates for vehicle pitch as the vehicle drives
           point[2] < 0.25;         // z is less than 0.25 meters from the vehicle origin
}
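Because the filter is pure bounds-checking, it is easy to exercise outside of a node. Below it is reproduced as a free function, with no PolySync headers, using the same bounds as the example:

```cpp
#include <array>
#include <cassert>

// Bounding-box ground filter, as in the example: the vehicle origin is at
// the center of the rear axle, on the ground, and incoming points have
// already been corrected for sensor mount position
bool pointIsNearTheGround( const std::array< float, 3 > & point )
{
    return point[0] >= 2.5f  && point[0] < 35.0f &&  // x: ahead of the vehicle
           point[1] > -12.0f && point[1] < 12.0f &&  // y: side-to-side corridor
           point[2] > -0.35f && point[2] < 0.25f;    // z: near the ground plane
}
```

A point ten meters ahead at ground level passes the filter; the same point raised one meter, or a point inside the 2.5 m exclusion zone around the vehicle, does not.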

3. Build and run ground plane node

You can find the C++ Ground Plane Detection code as part of our public C++ examples repo here. You will build the node using cmake and make.

$ cd PolySync-Core-CPP-Examples/GroundPlaneDetection
$ cmake . && make
$ ./polysync-ground-plane-detection-cpp

The node will quietly wait until LiDAR data is received in the messageEvent.

Since this node requires LiDAR data as an input, it won’t do anything unless LiDAR data─ps_lidar_points_msg─is being published to the bus by another node.

The PolySync development lifecycle uses the powerful replay tools to recreate real-world scenarios on the bench.

Using PolySync Studio, which leverages the Record & Replay API, you can command active nodes to replay data from a logfile and publish high-level messages to the bus. This allows for the development of algorithms that subscribe to messages and process the data contained within the messages in real-time.

4. Replaying data

In order for the ground plane tutorial to act on a data set, you need to replay data either from the command line or through Studio. You can also connect to sensor hardware and run the application against live data.

5. Visualizing data

The data can be visualized in Studio’s 3D View and the OpenGL based Viewer Lite example node.

To view the data in Studio, open the 3D View plugin from the Studio plugin launcher. Once it is open, if you are replaying data, LiDAR points should appear in the 3D View. The ground plane LiDAR points will appear in a different color, highlighting their difference from the other LiDAR points.

studio-ground-plane-visualization

5.1 Viewer Lite

Viewer Lite is an OpenGL 2D visualizer node that allows us to see primitive data types (LiDAR points, objects, and RADAR targets) being transmitted using the PolySync bus and messages.

Viewer Lite is an example written in C.

In another terminal, download, build, and start the Viewer Lite application.

$ git clone git@github.com:PolySync/PolySync-Core-C-Examples.git
$ cd PolySync-Core-C-Examples/viewer_lite
$ sudo apt-get install libglib2.0-dev freeglut3-dev  # this is required for this example to build, documented in the examples README.md file
$ make
$ ./bin/polysync-viewer-lite

Once launched, when data is replayed from the command line or from Studio, data will appear in the Viewer Lite application.

viewer-lite-ground-plane-visualization

Conclusion

Congratulations! You have successfully implemented perception algorithms and walked through using logfile data as an algorithm development acceleration tool.

Path Planner Tutorial

This tutorial will show you how to build a basic path planner algorithm in PolySync.

We will apply a basic A* planner to a simulated search space, generate the optimal path to the goal state, and then move a simulated robot along that path to the goal.

The optimal path is sent one waypoint at a time as a PlatformMotionMessage, which is defined in the C++ Sensor Data Model Module.

1. Search algorithm

A robot is plotted on a map and tasked to find the optimal path to a chest of gold while reducing the cost of travel. The robot is given no prior information about its world, other than a heuristic, or estimate, of where the gold is located.

The robot can move in eight directions: up, down, right, left, and diagonally. Each time the robot moves, the path cost is increased by one, and the optimal path will be the path that most minimizes the path cost.

PathPlanner

As the robot searches the map using A*, it plots paths towards the goal using the heuristic cost to guide it. As it explores a cell, it adds the heuristic estimate of the remaining cost at that cell to the actual path cost so far; this sum is called the global score. The optimal path will have the lowest score along its trajectory.

Once the searcher explores a cell, it removes it from an open set and adds it to a closed set. It chooses to explore the next cell with the lowest estimated score and repeats until it has found the goal state.
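A minimal, self-contained version of this search can be sketched as follows. This is an illustrative stand-in for the example’s Planner class, not its actual code; it uses a unit move cost and a Chebyshev-distance heuristic, which is admissible for 8-way movement:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <functional>
#include <queue>
#include <set>
#include <tuple>
#include <utility>
#include <vector>

typedef std::pair<int, int> Cell;

// A* over an 8-connected grid where every move costs 1, tracking an open
// set (frontier ordered by score f = g + h) and a closed set of explored
// cells. Returns the optimal path cost, or -1 if the goal is unreachable.
int aStarPathCost( int width, int height, Cell start, Cell goal,
                   const std::set< Cell > & obstacles )
{
    // Chebyshev distance: admissible heuristic for unit-cost 8-way moves
    const int hStart = std::max( std::abs( start.first - goal.first ),
                                 std::abs( start.second - goal.second ) );

    // Open set entries: ( f = g + h, g, x, y ); lowest f is popped first
    typedef std::tuple<int, int, int, int> Entry;
    std::priority_queue< Entry, std::vector<Entry>, std::greater<Entry> > open;
    std::set< Cell > closed;

    open.push( Entry( hStart, 0, start.first, start.second ) );

    while( !open.empty() )
    {
        const Entry e = open.top();
        open.pop();
        const int g = std::get<1>( e );
        const int x = std::get<2>( e );
        const int y = std::get<3>( e );

        if( x == goal.first && y == goal.second )
        {
            return g; // total path cost to the goal
        }

        // Move the cell from the open set to the closed set
        if( !closed.insert( Cell( x, y ) ).second )
        {
            continue; // already explored via a cheaper path
        }

        for( int dx = -1; dx <= 1; ++dx )
        {
            for( int dy = -1; dy <= 1; ++dy )
            {
                const int nx = x + dx;
                const int ny = y + dy;

                if( ( dx == 0 && dy == 0 ) ||
                    nx < 0 || ny < 0 || nx >= width || ny >= height ||
                    obstacles.count( Cell( nx, ny ) ) ||
                    closed.count( Cell( nx, ny ) ) )
                {
                    continue;
                }

                const int h = std::max( std::abs( nx - goal.first ),
                                        std::abs( ny - goal.second ) );
                open.push( Entry( g + 1 + h, g + 1, nx, ny ) );
            }
        }
    }

    return -1; // goal unreachable
}
```

On an empty 5x5 grid, a corner-to-corner path is four diagonal moves; adding a wall forces the searcher to detour through the single open cell.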

1.1 Robot and searcher as PolySync Core nodes

PolySync Core can be used to bridge the communication between a path planner node and a robot node.

In this example a path planner calculates and publishes the optimal path as a set of waypoints for the robot to travel to.

In PolySync, this means that the two nodes subscribe to PlatformMotionMessages containing a set of x and y coordinates.

  • The searcher node publishes a PlatformMotionMessage with a set of x and y coordinates
  • The robot node subscribed to that message type moves to the desired location
  • Upon arrival, the robot node publishes its own PlatformMotionMessage, indicating its current position
  • The searcher node receives the robot’s current location via the PlatformMotionMessage and issues the next waypoint in another instance of the PlatformMotionMessage
  • This repeats until the robot reports that it has found the treasure
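The handshake in the steps above can be simulated without a bus at all. In the toy sketch below, direct function calls stand in for the PlatformMotionMessage exchange; all names are illustrative, not from the example:

```cpp
#include <cassert>
#include <queue>
#include <utility>

typedef std::pair<int, int> Waypoint;

// Stand-in for the searcher node: hands out the optimal path one waypoint
// at a time in response to robot position reports
struct Searcher
{
    std::queue< Waypoint > path; // optimal path, in order

    // Called when the robot reports its position; returns the next waypoint,
    // or (-1, -1) when the path is exhausted (treasure reached)
    Waypoint onRobotPosition( const Waypoint & /*position*/ )
    {
        if( path.empty() )
        {
            return Waypoint( -1, -1 );
        }
        Waypoint next = path.front();
        path.pop();
        return next;
    }
};

// Stand-in for the robot node: report position, move to the reply, repeat.
// Returns the number of waypoints visited before the goal was reached.
int driveRobot( Searcher & searcher )
{
    int hops = 0;
    Waypoint position( 0, 0 );
    for( ;; )
    {
        // Robot "publishes" its position; searcher replies with a waypoint
        Waypoint next = searcher.onRobotPosition( position );
        if( next.first < 0 )
        {
            return hops; // goal reached
        }
        position = next; // robot "moves" to the waypoint
        ++hops;
    }
}
```

In the real example each arrow of this loop is a published PlatformMotionMessage on the PolySync Core bus rather than a function call, but the request/reply rhythm is the same.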

2. Setup the example

Find the C++ Path Planner code in the public C++ examples repo.

PolySync-Core-CPP-Examples/PathPlanner/
├── include
│   ├── GridMap.hpp
│   ├── Planner.hpp
│   ├── RobotNode.hpp
│   └── SearchNode.hpp
├── resources
│   ├── gold.jpg
│   ├── robot.jpg
│   └── maze2.pgm
├── CMakeLists.txt
├── RobotNode.cpp
├── SearchNode.cpp
└── src
    ├── GridMap.cpp
    └── Planner.cpp

2.1 OpenCV installation

This example requires OpenCV to render the map.

2.2 Running the example

Build the example with cmake in the PathPlanner directory.

$ cd PolySync-Core-CPP-Examples/PathPlanner
$ mkdir build && cd build
$ cmake ..
$ make

The program is split between two nodes that run as separate processes. Since the robot node acts as a listener, start it first.

$ ./polysync-path-planner-robot-cpp

In another terminal window, run the planner node.

$ ./polysync-path-planner-algorithm-cpp

2.3 What’s happening?

When the robot node is constructed, it knows where the robot is located but not where the goal is.

When the search node is constructed, it generates a goal state but needs a start location.

Using a new PlatformMotionMessage to communicate, the search node sends a goal location to the robot node, which plots the goal state and replies with the robot’s starting location.

Once the search node receives the robot’s location, it can begin searching the map for the optimal path.

PathPlanner

After several seconds, the search node will produce the optimal path and will ask you to push the return key to execute. You will then be able to watch the robot go for the gold.

PathPlanner

PathPlanner

3. Code breakdown

SearchNode.cpp and RobotNode.cpp define the PolySync Core nodes and contain the main entry points for these applications.

The SearchNode generates an instance of a Planner class to search for the optimal path.

The RobotNode creates a GridMap object to display its current location.

3.1 Finding RobotNode code

You can find the C++ Robot Node code as part of the Path Planner project in our public C++ examples repo here.

3.2 The RobotNode.cpp code

#include <cmath>
#include <iostream>

#include <PolySyncDataModel.hpp>

#include "RobotNode.hpp"

using namespace cv;
using namespace std;
using namespace polysync::datamodel;

constexpr int INVALID_LOC = -1;


RobotNode::RobotNode( )
    :
    _world( ),
    _golLocX( INVALID_LOC ),
    _golLocY( INVALID_LOC ),
    _newRobLocX( INVALID_LOC ),
    _newRobLocY( INVALID_LOC ),
    _numWaypoints( INVALID_LOC ),
    _waypointCounter( 0 )
{
    setNodeName( "robotNode" );
}


RobotNode::~RobotNode( )
{
}


void RobotNode::initStateEvent( )
{

    // register node as a subscriber to platform motion messages from ANY node.
    registerListener( getMessageTypeByName( "ps_platform_motion_msg" ) );

    setSubscriberReliabilityQOS(
            getMessageTypeByName( "ps_platform_motion_msg" ),
            RELIABILITY_QOS_RELIABLE );
}


void RobotNode::okStateEvent( )
{

    // wait for a goal location message from searchNode to be received
    if ( _golLocX == INVALID_LOC || _golLocY == INVALID_LOC )
    {

        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro(10000);
        return;

    }

    // when goal received, generate a map with that goal state.
    else if ( _newRobLocX == INVALID_LOC || _newRobLocY == INVALID_LOC )
    {

        cout << endl << "Goal location received by robot.";
        cout << endl << "Generating map - - - - - - > " << std::flush;

        _world = std::unique_ptr< GridMap >{ new GridMap };

        // pass in goal location
        _world->generateMap( _golLocX, _golLocY );

        _newRobLocX = _world->robLoc[0][0];
        _newRobLocY = _world->robLoc[0][1];

        cout << "Map generated. " << endl;
        cout << "Sending robot start location to planner algorithm." << endl;
        cout << endl << "RobotNode waiting for waypoints." << endl;

    }

    // with map generated, begin sending location messages to searchNode
    else if ( _golLocX != INVALID_LOC && _golLocY != INVALID_LOC )
    {

        sendLocationToPlanner( );

        // have I received the final waypoint?  if so, ask the user to shut down
        if ( _waypointCounter == _numWaypoints - 2)
        {
            cout << endl << "Robot is at goal state after ";
            cout << _waypointCounter << " waypoints. " <<  endl << endl;
            cout << "Press return key or Ctrl-C to shut down RobotNode.";
            cout << endl;

            cin.get();
            cout << endl << "Shutting down RobotNode." << endl << endl;

            disconnectPolySync( );

            return;
        }

        // The ok state is called periodically by the system so sleep to reduce
        // the number of messages sent. do nothing, sleep for 1 millisecond.
        polysync::sleepMicro(1000);

    }

    else
    {

        // do nothing, sleep for 100 milliseconds
        polysync::sleepMicro(100000);
        return;

    }
}


void RobotNode::messageEvent( std::shared_ptr< polysync::Message > newMsg )
{

    // check whether new message is not your own. This check is only important
    // since robotNode and searchNode both publish and subscribe to messages.
    if ( newMsg->getSourceGuid( ) == getGuid( ) )
    {
        return;
    }

    // now check whether new message is a PlatformMotionMessage.
    if ( auto msg = getSubclass< PlatformMotionMessage >( newMsg ) )
    {

        // the first received message from searchNode will be the goal location.
        if ( _golLocX == INVALID_LOC && _golLocY == INVALID_LOC )
        {
            _golLocX = msg->getOrientation()[0];
            _golLocY = msg->getOrientation()[1];
        }

        // discard any received messages still containing the goal location
        else if ( msg->getOrientation()[0] == _golLocX &&
                  msg->getOrientation()[1] == _golLocY )
        {
            return;
        }

        // if a new waypoint, send it to the robot and move to the new location.
        else if ( msg->getPosition()[0] > _waypointCounter )
        {
            _waypointCounter = msg->getPosition()[0];
            _numWaypoints = msg->getPosition()[2];

            _newRobLocX = msg->getOrientation()[0];
            _newRobLocY = msg->getOrientation()[1];

            cout << "Received waypoint " << _waypointCounter;
            cout << " from Planner.  X = " << _newRobLocX << ", Y = ";
            cout << _newRobLocY << endl << std::flush;
        }
    }
}


void RobotNode::sendLocationToPlanner( )
{

    _world->moveRobot( _newRobLocX, _newRobLocY );

    double actualRobLocX = double( _world->robLoc[0][0] );
    double actualRobLocY = double( _world->robLoc[0][1] );

    // Create a message
    PlatformMotionMessage msg( *this );

    // Set publish time
    msg.setHeaderTimestamp( polysync::getTimestamp() );

    // Populate buffer
    msg.setPosition( { _waypointCounter, 0, 0 } );
    msg.setOrientation( { actualRobLocX, actualRobLocY, 0, 0 } );

    // Publish to the PolySync bus
    msg.publish();

}


int main( )
{

    RobotNode robotNode;
    robotNode.connectPolySync( );

    return 0;
}

3.3 The RobotNode.cpp code explained

Now let’s break down the RobotNode.cpp code.

The header below defines the PolySync Core node and each of the protected event methods in the node state machine. Each state does minimal or no work unless it is overridden.

#include <PolySyncNode.hpp>

The following data model defines all types and messages in the PolySync Core runtime. This must be included in all C++ node applications.

#include <PolySyncDataModel.hpp>

The following are specific to this application.

#include "RobotNode.hpp"

using namespace cv;
using namespace std;
using namespace polysync::datamodel;

constexpr int INVALID_LOC = -1;

The constructor initializes all private members.

RobotNode::RobotNode( )
    :
    _world( ),
    _golLocX( INVALID_LOC ),
    _golLocY( INVALID_LOC ),
    _newRobLocX( INVALID_LOC ),
    _newRobLocY( INVALID_LOC ),
    _numWaypoints( INVALID_LOC ),
    _waypointCounter( 0 )
{
    setNodeName( "robotNode" );
}

RobotNode::~RobotNode( )
{
}

All of our PolySync Core C++ applications must subclass the PolySync Core node object in order to connect to the PolySync Core bus and publish/subscribe messages.

In RobotNode.hpp, you can see how RobotNode inherits a PolySync Core node object.

class RobotNode : public polysync::Node

Back in RobotNode.cpp, after the constructor is called and the node is generated, the node transitions to the INIT state.

The initStateEvent is typically where a node creates messages and initializes any other resources that require a PolySync Core node reference. In this case, we register the RobotNode to listen for ps_platform_motion_msg.

void RobotNode::initStateEvent( )
{

    // register node as a subscriber to platform motion messages from ANY node.
    registerListener( getMessageTypeByName( "ps_platform_motion_msg" ) );

    setSubscriberReliabilityQOS(
            getMessageTypeByName( "ps_platform_motion_msg" ),
            RELIABILITY_QOS_RELIABLE );
}

The node then transitions from the INIT state to the OK state. The following is called continuously while in the node’s OK state, and is where almost all of the action happens for the node.

void RobotNode::okStateEvent( )

If the robot node has not received a goal state, do nothing and loop through again.

    if ( _golLocX == INVALID_LOC || _golLocY == INVALID_LOC )
    {

        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro(10000);
        return;

    }

If the goal has been received, the robot node can now generate the map. This happens here.

    else if ( _newRobLocX == INVALID_LOC || _newRobLocY == INVALID_LOC )
    {

        cout << endl << "Goal location received by robot.";
        cout << endl << "Generating map - - - - - - > " << std::flush;

        _world = std::unique_ptr< GridMap >{ new GridMap };

        // pass in goal location
        _world->generateMap( _golLocX, _golLocY );

        _newRobLocX = _world->robLoc[0][0];
        _newRobLocY = _world->robLoc[0][1];

        cout << "Map generated. " << endl;
        cout << "Sending robot start location to planner algorithm." << endl;
        cout << endl << "RobotNode waiting for waypoints." << endl;

    }

Once the map is generated, the robot node can wait for waypoints to arrive. This chunk of code is what will execute whenever SearchNode is sending waypoints and the robot is moving.

If the robot arrives at the goal state, disconnectPolySync( ) is called and the node shuts down.

    else if ( _golLocX != INVALID_LOC && _golLocY != INVALID_LOC )
    {

        sendLocationToPlanner( );

        // have I received the final waypoint?  if so, ask the user to shut down
        if ( _waypointCounter == _numWaypoints - 2)
        {
            cout << endl << "Robot is at goal state after ";
            cout << _waypointCounter << " waypoints. " <<  endl << endl;
            cout << "Press return key or Ctrl-C to shut down RobotNode.";
            cout << endl;

            cin.get();
            cout << endl << "Shutting down RobotNode." << endl << endl;

            disconnectPolySync( );

            return;
        }

        // The ok state is called periodically by the system so sleep to reduce
        // the number of messages sent. do nothing, sleep for 1 millisecond.
        polysync::sleepMicro(1000);

    }
}

The messageEvent is called whenever a new message is received.

void RobotNode::messageEvent( std::shared_ptr< polysync::Message > newMsg )

When a new message is received, the first thing to do is to check whether it is a message published by the same node. In this case, RobotNode should discard anything it self-published.

    if ( newMsg->getSourceGuid( ) == getGuid( ) )
    {
        return;
    }

The next line checks whether the received message is a PlatformMotionMessage. This is technically unnecessary in our case, since this problem is only structured to send and receive PlatformMotionMessage, but it is good practice as you scale up your application.

   if ( auto msg = getSubclass< PlatformMotionMessage >( newMsg ) )

After the robot node verifies it has a new PlatformMotionMessage from a different node, it can now classify whether it is the goal state (the first received message) or a new waypoint. Local variables are populated accordingly.

        // the first received message from searchNode will be the goal location.
        if ( _golLocX == INVALID_LOC && _golLocY == INVALID_LOC )
        {
            _golLocX = msg->getOrientation()[0];
            _golLocY = msg->getOrientation()[1];
        }

        // discard any received messages still containing the goal location
        else if ( msg->getOrientation()[0] == _golLocX &&
                  msg->getOrientation()[1] == _golLocY )
        {
            return;
        }

        // if a new waypoint, send it to the robot and move to the new location.
        else if ( msg->getPosition()[0] > _waypointCounter )
        {
            _waypointCounter = msg->getPosition()[0];
            _numWaypoints = msg->getPosition()[2];

            _newRobLocX = msg->getOrientation()[0];
            _newRobLocY = msg->getOrientation()[1];

            cout << "Received waypoint " << _waypointCounter;
            cout << " from Planner.  X = " << _newRobLocX << ", Y = ";
            cout << _newRobLocY << endl << std::flush;
        }

This shows how the robot updates the SearchNode with its current location. Once the new location is received, SearchNode publishes the next waypoint.

void RobotNode::sendLocationToPlanner( )
{

    _world->moveRobot( _newRobLocX, _newRobLocY );

    double actualRobLocX = double( _world->robLoc[0][0] );
    double actualRobLocY = double( _world->robLoc[0][1] );

    // Create a message
    PlatformMotionMessage msg( *this );

    // Set publish time
    msg.setHeaderTimestamp( polysync::getTimestamp() );

    // Populate buffer
    msg.setPosition( { _waypointCounter, 0, 0 } );
    msg.setOrientation( { actualRobLocX, actualRobLocY, 0, 0 } );

    // Publish to the PolySync bus
    msg.publish();

}

This creates an instance of your RobotNode object. The call to connectPolySync( ) is a blocking operation that places the node in the node state machine. If the node holds a valid license, it leaves the AUTH state and enters the INIT state, calling initStateEvent.

int main( )
{
    RobotNode robotNode;
    robotNode.connectPolySync( );

    return 0;
}

3.4 Finding SearchNode code

You can find the C++ SearchNode.cpp code as part of the Path Planner project in our public C++ examples repo here.

3.5 The SearchNode.cpp code

#include <cmath>
#include <iostream>

#include <PolySyncDataModel.hpp>

#include "SearchNode.hpp"
#include "GridMap.hpp"

using namespace cv;
using namespace std;
using namespace polysync::datamodel;

constexpr int INVALID_LOC = -1;


SearchNode::SearchNode( )
    :
    _searcher( ),
    _golLocX( INVALID_LOC ),
    _golLocY( INVALID_LOC ),
    _robLocX( INVALID_LOC ),
    _robLocY( INVALID_LOC ),
    _newRobLocX( INVALID_LOC ),
    _newRobLocY( INVALID_LOC ),
    _numWaypoints( INVALID_LOC ),
    _waypointCounter( INVALID_LOC )
{
    setNodeName( "searchNode" );

    setNodeType( PSYNC_NODE_TYPE_SOFTWARE_ALGORITHM );
}

SearchNode::~SearchNode( )
{
}


void SearchNode::initStateEvent( )
{

    // register node as a subscriber to platform motion messages from ANY node.
    registerListener( getMessageTypeByName( "ps_platform_motion_msg" ) );

    setSubscriberReliabilityQOS(
            getMessageTypeByName( "ps_platform_motion_msg" ),
            RELIABILITY_QOS_RELIABLE );
}


void SearchNode::okStateEvent( )
{

    // generate goal state at a pseudo-random location.
    if ( _golLocX == INVALID_LOC && _golLocY == INVALID_LOC )
    {

        _searcher = std::unique_ptr< Planner >{ new Planner };

        _golLocX = _searcher->getGoalX( );
        _golLocY = _searcher->getGoalY( );

        cout << endl << "Goal Location generated by Planner Algorithm. ";
        cout << endl << "Sending goal location to robot." << endl << endl;
        cout << "Waiting for Robot Location." << endl << endl << std::flush;

    }

    // send goal location to robot repeatedly until it is received.
    else if ( _robLocX == INVALID_LOC || _robLocY == INVALID_LOC )
    {

        sendGoalToRobot( );

        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro(10000);

    }

    // once robot reports its starting location, search the space for the
    // optimal path from start to goal state.
    else if ( _newRobLocX == INVALID_LOC && _newRobLocY == INVALID_LOC )
    {

        cout << "Robot start location received by planner algorithm." << endl;
        cout << "Begin searching for optimal path from start location." << endl;

        int robIndex = _searcher->world.getIndexFromState( _robLocX, _robLocY );

        // use A* search to find optimal path.
        _numWaypoints = _searcher->searchAStar( robIndex );

        _newRobLocX = int(_robLocX);
        _newRobLocY = int(_robLocY);

    }

    // wait until done searching, then send out next waypoint.
    else if ( _newRobLocX != INVALID_LOC || _newRobLocY != INVALID_LOC )
    {

        // have I sent the final waypoint? if so, shut down
        if ( _waypointCounter == _numWaypoints - 2 )
        {
            cout << endl << "Robot arrived at goal state after ";
            cout << _waypointCounter << " waypoints. " <<  endl;
            cout << "Shutting down SearchNode." << endl << endl;

            disconnectPolySync( );

            return;
        }

        cout << "Sending waypoint " << _waypointCounter + 1 << " to robot.";
        cout << endl;

        int newIndex = _searcher->getNextWaypoint( _waypointCounter + 1 );

        sendNextWaypoint( newIndex, int(_waypointCounter + 1) );

        // The ok state is called periodically by the system so sleep to reduce
        // the number of messages sent. do nothing, sleep for 1 millisecond.
        polysync::sleepMicro(1000);

    }

    else
    {

        // do nothing, sleep for 100 milliseconds
        polysync::sleepMicro(100000);
        return;

    }
}


void SearchNode::messageEvent( std::shared_ptr< polysync::Message > newMsg )
{

    // check whether new message is not your own. This check is only important
    // since robotNode and searchNode both publish and subscribe to messages.
    if ( newMsg->getSourceGuid( ) == getGuid( ) )
    {
        return;
    }

    // now check whether new message is a PlatformMotionMessage.
    if ( auto msg = getSubclass< PlatformMotionMessage >( newMsg ) )
    {

        // all received platform motion messages will be current robot location.
        // robot will also report back last waypoint received so planner can
        // send the next waypoint.
        if ( msg->getOrientation()[0] != _robLocX ||
                msg->getOrientation()[1] != _robLocY )
        {
            _robLocX = msg->getOrientation()[0];
            _robLocY = msg->getOrientation()[1];

            if ( _waypointCounter != INVALID_LOC )
            {
                cout << "New Robot Location Message received at waypoint: ";
                cout << msg->getPosition()[0] << endl << std::flush;
            }
            _waypointCounter = msg->getPosition()[0];
        }
    }
}


void SearchNode::sendGoalToRobot( )
{

    // Create a message
    PlatformMotionMessage msg( *this );

    // Set publish time
    msg.setHeaderTimestamp( polysync::getTimestamp() );

    // Populate buffer
    msg.setOrientation( { double(_golLocX), double(_golLocY), 0, 0 } );

    // Publish to the PolySync bus
    msg.publish( );

}


void SearchNode::sendNextWaypoint( int newIndex, int waypointID )
{

    _searcher->world.getStateFromIndex( newIndex );
    _newRobLocX = _searcher->world.checkedMoveIndX;
    _newRobLocY = _searcher->world.checkedMoveIndY;

    // Create a message
    PlatformMotionMessage msg( *this );

    // Set publish time
    msg.setHeaderTimestamp( polysync::getTimestamp() );

    // Populate buffer
    msg.setPosition( { double(waypointID), 0, double(_numWaypoints) } );
    msg.setOrientation( { _newRobLocX, _newRobLocY, 0, 0 } );

    // Publish to the PolySync bus
    msg.publish();

}


int main()
{

    SearchNode searchNode;
    searchNode.connectPolySync( );

    return 0;
}

3.6 The SearchNode.cpp code explained

Now let’s take a look at the SearchNode.cpp code.

The header below defines the PolySync Core node and each of the protected event methods in the node state machine. Each state does minimal or no work unless it is overridden.

#include <PolySyncNode.hpp>

The following data model defines all types and messages in the PolySync Core runtime. This must be included in all C++ node applications.

#include <PolySyncDataModel.hpp>

The following are needed for this application.

#include "SearchNode.hpp"
#include "GridMap.hpp"

using namespace cv;
using namespace std;
using namespace polysync::datamodel;

constexpr int INVALID_LOC = -1;

The constructor initializes all private members.

SearchNode::SearchNode( )
    :
    _searcher( ),
    _golLocX( INVALID_LOC ),
    _golLocY( INVALID_LOC ),
    _robLocX( INVALID_LOC ),
    _robLocY( INVALID_LOC ),
    _newRobLocX( INVALID_LOC ),
    _newRobLocY( INVALID_LOC ),
    _numWaypoints( INVALID_LOC ),
    _waypointCounter( INVALID_LOC )
{
    setNodeName( "searchNode" );

    setNodeType( PSYNC_NODE_TYPE_SOFTWARE_ALGORITHM );
}

SearchNode::~SearchNode( )
{
}

All of our PolySync Core C++ applications must then subclass the PolySync Core node object in order to connect to the PolySync Core bus and publish/subscribe messages. In SearchNode.hpp, you can see how SearchNode inherits a PolySync Core node object.

class SearchNode : public polysync::Node

Back in SearchNode.cpp, after the constructor is called and the node is generated, the node transitions to the INIT state. The initStateEvent is typically where a node creates messages and initializes any other resources that require a PolySync Core node reference. In this case, we register the SearchNode to listen for ps_platform_motion_msg.

void SearchNode::initStateEvent( )
{

    // register node as a subscriber to platform motion messages from ANY node.
    registerListener( getMessageTypeByName( "ps_platform_motion_msg" ) );

    setSubscriberReliabilityQOS(
            getMessageTypeByName( "ps_platform_motion_msg" ),
            RELIABILITY_QOS_RELIABLE );
}

The node then transitions from the INIT state to the OK state. The following is called continuously while in the node’s OK state, and is where almost all of the action happens for the node.

void SearchNode::okStateEvent( )

If no goal state exists yet, generate one at a pseudo-random location, then loop through again.

    if ( _golLocX == INVALID_LOC && _golLocY == INVALID_LOC )
    {

        _searcher = std::unique_ptr< Planner >{ new Planner };

        _golLocX = _searcher->getGoalX( );
        _golLocY = _searcher->getGoalY( );

        cout << endl << "Goal Location generated by Planner Algorithm. ";
        cout << endl << "Sending goal location to robot." << endl << endl;
        cout << "Waiting for Robot Location." << endl << endl << std::flush;

    }

Once generated, publish the goal state to the robot node until it is received.

    else if ( _robLocX == INVALID_LOC || _robLocY == INVALID_LOC )
    {

        sendGoalToRobot( );

        // do nothing, sleep for 10 milliseconds
        polysync::sleepMicro(10000);

    }

If the goal has been received, and the map generated, the robot sends its start location to the search node. Once the search node receives the start location, it can search for the optimal path.

    else if ( _newRobLocX == INVALID_LOC && _newRobLocY == INVALID_LOC )
    {

        cout << "Robot start location received by planner algorithm." << endl;
        cout << "Begin searching for optimal path from start location." << endl;

        int robIndex = _searcher->world.getIndexFromState( _robLocX, _robLocY );

        // use A* search to find optimal path.
        _numWaypoints = _searcher->searchAStar( robIndex );

        _newRobLocX = int(_robLocX);
        _newRobLocY = int(_robLocY);

    }

Once the search node has completed searching the grid for the optimal path, each successive loop through the okStateEvent( ) will publish a new waypoint for the robot node to move to. If the next waypoint is at the goal state, disconnectPolySync( ) is called and the node shuts down.

    else if ( _newRobLocX != INVALID_LOC || _newRobLocY != INVALID_LOC )
    {

        // have I sent the final waypoint?  if so, shut down
        if ( _waypointCounter == _numWaypoints - 2 )
        {
            cout << endl << "Robot arrived at goal state after ";
            cout << _waypointCounter << " waypoints. " <<  endl;
            cout << "Shutting down SearchNode." << endl << endl;

            disconnectPolySync( );

            return;
        }

        cout << "Sending waypoint " << _waypointCounter + 1 << " to robot.";
        cout << endl;

        int newIndex = _searcher->getNextWaypoint( _waypointCounter + 1 );

        sendNextWaypoint( newIndex, int(_waypointCounter + 1) );

        // The ok state is called periodically by the system so sleep to reduce
        // the number of messages sent. do nothing, sleep for 1 millisecond.
        polysync::sleepMicro(1000);

    }

The messageEvent is called whenever a new message is received.

void SearchNode::messageEvent( std::shared_ptr< polysync::Message > newMsg )

When a new message is received, the first thing to do is to check whether it is a message published by the same node. In this case, SearchNode should discard anything it has self-published.

    if ( newMsg->getSourceGuid( ) == getGuid( ) )
    {
        return;
    }

The next line checks whether the received message is a PlatformMotionMessage. This is technically unnecessary in our case, since this problem is only structured to send and receive PlatformMotionMessage, though it is good practice as you scale up your application.

   if ( auto msg = getSubclass< PlatformMotionMessage >( newMsg ) )

After the search node verifies it has a new PlatformMotionMessage from a different node, it can now check whether the robot has moved to a new location. Local variables are populated accordingly.

        if ( msg->getOrientation()[0] != _robLocX ||
                msg->getOrientation()[1] != _robLocY )
        {
            _robLocX = msg->getOrientation()[0];
            _robLocY = msg->getOrientation()[1];

            if ( _waypointCounter != INVALID_LOC )
            {
                cout << "New Robot Location Message received at waypoint: ";
                cout << msg->getPosition()[0] << endl << std::flush;
            }
            _waypointCounter = msg->getPosition()[0];
        }

This shows how the searcher updates the RobotNode with the goal state. This method is called repeatedly in the okStateEvent until the RobotNode has received it and reported back its starting position.

void SearchNode::sendGoalToRobot( )
{

    // Create a message
    PlatformMotionMessage msg( *this );

    // Set publish time
    msg.setHeaderTimestamp( polysync::getTimestamp() );

    // Populate buffer
    msg.setOrientation( { double(_golLocX), double(_golLocY), 0, 0 } );

    // Publish to the PolySync bus
    msg.publish( );

}

This shows how the search node updates the RobotNode with the next waypoint. Once the new location is received, RobotNode moves the robot to the new location and then publishes its current position.

void SearchNode::sendNextWaypoint( int newIndex, int waypointID )
{

    _searcher->world.getStateFromIndex( newIndex );
    _newRobLocX = _searcher->world.checkedMoveIndX;
    _newRobLocY = _searcher->world.checkedMoveIndY;

    // Create a message
    PlatformMotionMessage msg( *this );

    // Set publish time
    msg.setHeaderTimestamp( polysync::getTimestamp() );

    // Populate buffer
    msg.setPosition( { double(waypointID), 0, double(_numWaypoints) } );
    msg.setOrientation( { _newRobLocX, _newRobLocY, 0, 0 } );

    // Publish to the PolySync bus
    msg.publish();

}

This creates an instance of your SearchNode object. The call to connectPolySync( ) is a blocking operation that places the node in the node state machine. If the node holds a valid license, it leaves the AUTH state and enters the INIT state, calling initStateEvent.

int main()
{

    SearchNode searchNode;
    searchNode.connectPolySync( );

    return 0;
}

3.7 Planner code

This example uses A* to search for an optimal path, though there are many other search strategies. See how we implemented A* below, and for extra credit, try writing your own algorithm.

3.8 Planner::searchAStar( )

int Planner::searchAStar( int _curLoc )
{

    // start the clock to time performance.
    float beginTime = clock();

    // reset expandedNodes counter
    _expandedNodes = 1;

    cout << endl << "Searching with A* . " << std::flush;

    while ( _endGame != true )
    {

        // expand the node in the openSet with lowest current score.
        _curLoc = getSearchNode( );

        // get x and y coordinates
        world.getStateFromIndex( _curLoc );

        // find all legal moves at the current node
        getNeighbors( _curLoc );

        // grade all legal moves to fully expand a node
        for (uint j = 0; j < _moves[ _curLoc ].size(); j++)
        {

            // set move to be graded
            _newLoc = _moves[ _curLoc ][ j ];

            // A* guarantees optimality along all expanded nodes, so there is no
            // need to re-check. If move has been expanded, move along
            if ( std::find( _closedSet.begin(), _closedSet.end(), _newLoc )
                            != _closedSet.end() )
            {
                continue;
            }

            // if move is at goal state, end game and send back path
            if ( world.checkGoal( _newLoc ))
            {
                // path @ newLoc will be the optimal path
                path[ _newLoc ] = path[ _curLoc ];
                path[ _newLoc ].push_back( _newLoc );
                _curLoc = _newLoc;
                _endGame = true;
                goto doneSearching;

            }
            else
            {

                // do nothing unless the new path to the index is less than the
                // current shortest path. This ensures that the global score is
                // only modified if the searcher has found a shorter path.
                if ( path[ _curLoc ].size() + 1 < _pathScore[ _newLoc ] )
                {
                    path[ _newLoc ] = path[ _curLoc ];

                    path[ _newLoc ].push_back( _newLoc );

                    _globalScore[ _newLoc ] =
                                    _heuristic[ _newLoc ]
                                    + path[ _newLoc ].size();

                    _pathScore[ _newLoc ] = path[ _newLoc ].size();
                }
            }
        }

        // once fully expanded, set the globalScore at the expanded node to a
        // large number. This is done to discourage searching the same node.
        _globalScore[ _curLoc ] = 1e9;

        // erase the current node from the openSet (nodes to be expanded).
        _openSet.erase(
                std::remove( _openSet.begin(), _openSet.end(), _curLoc ),
                _openSet.end() );

        // add the current node to the closedSet (nodes that have been expanded)
        _closedSet.push_back( _curLoc );

        // increase expanded node counter
        ++_expandedNodes;

        // print a "." for every 1000 nodes expanded as a progress meter
        if ( _expandedNodes % 1000 == 0 )
        {
            cout << ". " << std::flush;
        }
    }

    // what to do at endGame
    doneSearching:

    float endTime = float( clock() - beginTime ) / float( CLOCKS_PER_SEC );

    cout << endl << "Path Optimized, " << _expandedNodes << " nodes expanded.";
    cout << endl << "Total Optimization time: " << endTime << " seconds"<< endl;
    cout << endl << "Press the return key to execute optimal path." << endl;

    cin.get();

    cout << "Optimal path is " << path[ _curLoc ].size() << " steps long.";
    cout << endl;

    return path[ _curLoc ].size();

}

Here’s a visualizer to demonstrate different path planning algorithms in action.

To learn even more about sample-based planning, check out the Open Motion Planning Library (OMPL).

Conclusion

You have just finished the path planner tutorial, which used an A* implementation on a toy problem to show how a planning algorithm can generate an optimal trajectory and send it to a robot platform.

You also used PolySync Core to model the planner and robot as two separate nodes communicating messages back and forth.