One of the pillars of the autonomous system is the computing platform. This essential component provides the other subsystems with the processing power they need to perform their various functions. It needs to be small, robust, and consume as little power as possible to avoid any unnecessary impact on the performance or lifetime of the vehicle. The computational units which make up this component operate at various levels: some perform high-level situational analysis and environment modelling, while others transform and transfer data, or provide low-level safety functions close to the hardware layer. To cover all of these functions, a total of four computational units are used.

First is the NVIDIA Jetson TX2. Provided to the team by Silver sponsors Xenon and NVIDIA, this credit-card-sized compute module contains 256 CUDA-enabled cores arranged in two streaming multiprocessors. It lends itself well to image processing due to the highly parallel nature of such tasks. For this reason, the TX2 is dedicated solely to processing stereo images from the car’s stereoscopic camera. The outcome of this processing is a set of cone positions in the coordinate frame of the vehicle, which is passed to the Extended Kalman Filter (EKF) for sensor fusion.
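
As a rough illustration of the geometry at play, a detected cone’s pixel location and stereo disparity can be triangulated into a camera-relative position. The sketch below is a minimal example, not the team’s actual pipeline; the function name and all calibration values are illustrative assumptions.

```python
import numpy as np

def cone_position_from_stereo(u, v, disparity, fx, fy, cx, cy, baseline):
    """Triangulate a detected cone centre into the camera frame.

    (u, v) is the cone's pixel location in the left image, disparity is
    the horizontal pixel shift between the left and right images, and
    (fx, fy, cx, cy, baseline) come from stereo calibration.
    """
    z = fx * baseline / disparity   # depth from disparity
    x = (u - cx) * z / fx           # lateral offset
    y = (v - cy) * z / fy           # vertical offset
    return np.array([x, y, z])

# Example: with a 0.12 m baseline and 700 px focal length, a cone showing
# 15 px of disparity sits roughly 5.6 m ahead of the camera.
print(cone_position_from_stereo(680, 420, 15.0, 700.0, 700.0, 640.0, 360.0, 0.12))
```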

Second is the Intel i7-8600 processor. This houses the remaining components of the high-level autonomous software system: LiDAR cone detection, SLAM (Simultaneous Localization and Mapping, using an EKF), path planning, motion control, GPS, and the vehicle interface layer. The flow of information through this machine is quite linear. First, the cone detections and GPS/IMU data are fused together in the EKF. The resultant track map and vehicle state are passed on to path planning, which generates a desired vehicle path. This is then passed to motion control, which decides how the vehicle should be actuated to follow the path. The result is finally sent to the vehicle interface layer to be executed.
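
The EKF at the centre of this chain follows the standard predict/update cycle. Below is a minimal, generic sketch of one iteration; the team’s actual SLAM state will be richer than this, but the structure is the same.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One generic EKF iteration: predict with the motion model f,
    then correct with the measurement model h.

    x, P : state estimate and covariance
    u, z : control input and measurement
    f, F : motion model and its Jacobian (functions of x, u)
    h, H : measurement model and its Jacobian (function of x)
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state through the motion model
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q

    # Update: weigh the prediction against the measurement
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```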

The vehicle interface layer communicates over UART with a Cypress PSoC 5LP. This is a highly configurable microcontroller which houses a rules-compliant state machine to track the status of the vehicle as it performs the required tasks. It performs safety-critical functions and interfaces with a custom PCB that uses non-programmable logic blocks to further ensure the system is correct and operational. Kinematic requests from motion control are validated by the state machine before being packaged into a message and transmitted on the vehicle’s autonomous CAN bus.
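
The PSoC firmware itself would be written in C, but the two steps described here, gating a request on the vehicle’s state and packing it into a CAN frame, can be sketched in Python as follows. The state name, actuation limits, and message ID are all illustrative assumptions, not the team’s actual values.

```python
import struct

MAX_STEER_RAD = 0.35      # assumed actuation limits
MAX_ACCEL_MS2 = 5.0
CAN_ID_KINEMATIC = 0x150  # assumed message ID

def validate_and_pack(state, steer_rad, accel_ms2):
    """Gate a kinematic request on the vehicle state machine, then
    pack it into an 8-byte CAN payload (two little-endian floats)."""
    if state != "AS_DRIVING":               # only act while driving;
        steer_rad, accel_ms2 = 0.0, 0.0     # otherwise command neutral
    steer_rad = max(-MAX_STEER_RAD, min(MAX_STEER_RAD, steer_rad))
    accel_ms2 = max(-MAX_ACCEL_MS2, min(MAX_ACCEL_MS2, accel_ms2))
    return CAN_ID_KINEMATIC, struct.pack("<ff", steer_rad, accel_ms2)

can_id, payload = validate_and_pack("AS_DRIVING", 0.1, 2.0)
```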

These requests are read off the CAN bus and decoded by the MoTeC M150 Engine Control Unit (ECU), which controls the vehicle itself. The M150 performs its own validation before finally communicating with the required components of the vehicle to service the requested acceleration and steering angle.
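
Conceptually, the decode-and-revalidate step on the ECU side is the inverse of the packing sketch above. The M150 is in reality programmed through MoTeC’s own tooling; this is purely an illustrative sketch, with the same assumed limits.

```python
import struct

MAX_STEER_RAD = 0.35   # same assumed limits as in the packing sketch
MAX_ACCEL_MS2 = 5.0

def decode_and_revalidate(payload):
    """Unpack a kinematic request and independently re-check its limits,
    so a fault upstream cannot command an out-of-range actuation."""
    steer_rad, accel_ms2 = struct.unpack("<ff", payload)
    if abs(steer_rad) > MAX_STEER_RAD or abs(accel_ms2) > MAX_ACCEL_MS2:
        raise ValueError("request outside the safe envelope")
    return steer_rad, accel_ms2
```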

At the high-level layers, the Robot Operating System (ROS) was used to aid in the integration of the various components. Contrary to its name, this is not a conventional operating system, but rather a set of tools and libraries which help in the development of robotic systems. It provides methods to isolate components into logical blocks called ‘nodes’, which communicate by passing messages over the network. This allows the various elements to be developed in isolation, and for nodes to be easily swapped out for newer versions, or for simulated versions during testing.
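
A minimal example of the pattern, using the standard rospy client library: the node below publishes velocity commands on a topic, and because its only contract with the rest of the system is that topic, it can be replaced without touching its neighbours. The node name, topic, and message type here are chosen purely for illustration.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def planner_node():
    """A toy 'node': publishes a desired velocity at 10 Hz. Any node
    subscribed to /cmd_vel receives it, regardless of who publishes."""
    rospy.init_node('planner')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        msg = Twist()
        msg.linear.x = 1.0  # placeholder command
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    planner_node()
```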

To ensure the system is operational, each ROS component contains an internal state machine which helps a master controller keep track of the running subcomponents. Each subcomponent, or ‘child node’, publishes heartbeats to the master controller. These are monitored, collated, and forwarded to the PSoC microcontroller, which in turn monitors the heartbeats and generates a periodic pulse to a physical watchdog IC (Integrated Circuit) that stops the car if the pulse ceases. This chain of active safety ensures that the vehicle remains operational only while all components are actively working and not blocked.
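
A simplified sketch of the collation step: each child node timestamps a periodic heartbeat, and the master keeps pulsing downstream only while every node has been heard from recently. The node names and the 500 ms timeout are assumptions for illustration.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5  # assumed: a node is 'dead' after 500 ms of silence

class HeartbeatMonitor:
    """Collates child-node heartbeats; all_alive() drives the pulse
    that is forwarded down to the PSoC and the watchdog IC."""
    def __init__(self, node_names):
        self.last_seen = {name: 0.0 for name in node_names}

    def beat(self, name):
        """Called whenever a heartbeat arrives from a child node."""
        self.last_seen[name] = time.monotonic()

    def all_alive(self):
        """True only while every node has beaten within the timeout."""
        now = time.monotonic()
        return all(now - t < HEARTBEAT_TIMEOUT_S
                   for t in self.last_seen.values())

monitor = HeartbeatMonitor(['slam', 'path_planning', 'motion_control'])
monitor.beat('slam')  # each child calls this periodically
# the master only pulses the hardware watchdog while all_alive() is True
```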

All of these processing units come together to create the autonomous computing system, ultimately one of the foundations of the autonomous vehicle. In the future we’re looking towards an NVIDIA Xavier to increase visual processing throughput, allowing more accurate landmark detection and ultimately a faster, more reliable car that will help pave the way for Monash Motorsport in this new and exciting field of engineering.

by James Wyatt