Unmanned System Data Protocol and Format
All unmanned systems, regardless of their operating environment, rely on onboard sensors designed specifically to support the unmanned vehicle's operations within its given domain. As the number of sensors and cameras needed to support these operations continues to increase, so too does the amount of data collected (CHI Corporation, 2017).
Because real-time analysis of this data supports situational awareness for both human and machine elements, further command and control (C2) considerations regarding data formats, protocols, and storage methods must be addressed to ensure the operating system is effective and functional.
This research paper addresses the sensors essential to autonomous vehicle operation, as well as the necessary power and storage requirements. In addition, it examines four data considerations that support autonomous vehicles, more commonly referred to as self-driving cars.
Sensors
Realization of level 4 or 5 fully autonomous vehicles by 2021/2022 will require multiple redundant sensory systems (Rudolph & Voelzke, 2017). Unfortunately, cost-effective, high-resolution light detection and ranging (LiDAR) systems with sensing capabilities up to 300 meters, essential to L4/5 operations, are still in development. However, the sensory platforms that support current level 1 and 2 driver-assisted operations consist primarily of camera and radar systems.
Camera/Imaging
Single and multiple camera applications working in unison with radar-based systems enhance driver situational awareness, using sensor fusion algorithms to display speed and distance as well as images of fixed and moving objects (Rudolph & Voelzke, 2017). Current image processing requires a three-stage approach: images captured by the camera must be sent to the camera electronic control unit (ECU) for image decoding, lens correction, geometrical transformation, overlay, and video streaming before the image is finally displayed on the head unit (Rudolph & Voelzke, 2017).
The latest smart camera
technologies eliminate the ECU, as image processing is initiated in the camera
itself and finalized in the display unit.
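To make the data flow concrete, the minimal Python sketch below mirrors the three-stage chain described above (camera capture, ECU processing, head-unit display). The function names and frame structure are illustrative assumptions, not an actual automotive implementation.

    # Minimal sketch of the three-stage camera chain described above. The stage
    # names mirror the text; the image operations are placeholders (a real ECU
    # uses dedicated image-processing hardware, not Python dictionaries).

    def capture_frame(camera_id: int) -> dict:
        """Stage 1: the camera captures and transmits an encoded frame."""
        return {"camera": camera_id, "encoded": True, "pixels": [[0] * 4] * 4}

    def ecu_process(frame: dict) -> dict:
        """Stage 2: the camera ECU decodes, corrects, transforms, and overlays."""
        frame = dict(frame, encoded=False)      # image decoding
        frame["lens_corrected"] = True          # lens correction
        frame["geometry_transformed"] = True    # geometrical transformation
        frame["overlay"] = "speed/distance"     # overlay fused from radar data
        return frame

    def display_on_head_unit(frame: dict) -> None:
        """Stage 3: the head unit renders the processed video stream."""
        print(f"camera {frame['camera']}: overlay={frame['overlay']}")

    display_on_head_unit(ecu_process(capture_frame(camera_id=1)))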
RADAR
Radio Detection and Ranging (RADAR) provides recognition of objects using radio waves operating in either the 24 GHz or 77 GHz frequency spectrum. The latter offers advantages in more accurate speed and distance measurement, a smaller antenna, and lower rates of interference (Rudolph & Voelzke, 2017).
Raw data collected by
the radar sensor is sent directly to a process controller, providing several
distinct advantages:
· Reduced silicon surface-area requirements and associated costs
· Power loss is relocated to the control unit, which is physically larger than the radar sensor
· No data is lost to filtering or compression; access to the radar sensor's unfiltered raw data provides more possibilities for signal processing and greater flexibility (Rudolph & Voelzke, 2017)
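The advantage of the 77 GHz band can be illustrated with two standard radar relations: the Doppler velocity equation v = (f_d * c) / (2 * f_c) and the FMCW range-resolution equation dR = c / (2 * B). The short Python sketch below applies them; the sweep bandwidths used are assumptions chosen for illustration, not figures from the cited source.

    # Two standard radar relations, applied with illustrative numbers. The
    # carrier frequencies come from the text; the sweep bandwidths below are
    # assumptions chosen only to show the trend, not figures from the source.

    C = 3.0e8  # speed of light, m/s

    def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
        """Target speed implied by a measured Doppler shift at a given carrier."""
        return doppler_shift_hz * C / (2 * carrier_hz)

    def range_resolution(bandwidth_hz: float) -> float:
        """Smallest separable target spacing for a given sweep bandwidth (FMCW)."""
        return C / (2 * bandwidth_hz)

    # The same 1 kHz Doppler shift maps to ~6.25 m/s at 24 GHz but ~1.95 m/s at
    # 77 GHz, i.e. the higher carrier resolves finer speed differences.
    print(radial_velocity(1_000, 24e9), radial_velocity(1_000, 77e9))

    # A wider sweep (easier to realize at 77 GHz) sharpens distance measurement.
    print(range_resolution(200e6), range_resolution(4e9))  # ~0.75 m vs ~0.04 m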
LiDAR
Light Detection and Ranging (LiDAR) is a laser-based system capable of measuring distances from the unmanned vehicle to both fixed and moving objects. LiDAR systems are not new and have been used to enhance industrial and military operations for years. However, as previously noted, these systems are very costly, and large-scale deployment across the automotive industry is not feasible at this time.
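LiDAR ranging itself reduces to a time-of-flight calculation, d = c * t / 2, as the brief sketch below illustrates; the 2-microsecond round trip shown corresponds to the 300-meter sensing range noted earlier.

    # Time-of-flight sketch: a LiDAR unit measures the round trip of a laser
    # pulse and converts it to distance. A 2-microsecond round trip corresponds
    # to the 300 m sensing range cited above.

    C = 3.0e8  # speed of light, m/s

    def distance_from_round_trip(round_trip_s: float) -> float:
        """Distance to the reflecting object from the pulse's round-trip time."""
        return C * round_trip_s / 2

    print(distance_from_round_trip(2.0e-6))  # 300.0 m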
Data Management
The most critical by-product of sensor-based applications is the data collected and how it is allocated. Data architectures must be designed to manage the data as it is collected, processed, and stored to support real-time command and control and/or future comparative analysis operations. Recent technological improvements have brought about central data processing units that allow the data from all sensors to be shared across multiple functions.
As noted in the AZO Sensors article, Automotive Sensor Technology for Autonomous Driving (2017):
The sensor modules then perform only sensory and
data transmission tasks without any processing and decision-making tasks, thus
eliminating data losses because of pre-processing or compression in the sensor
module. Consequently, the sensor modules can become smaller, energy saving and
more cost effective.
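A minimal sketch of the centralized arrangement the quote describes is given below, with sensor modules limited to capture and transmission and all processing concentrated in a central unit. The class and field names are hypothetical and serve only to illustrate the architecture.

    # Hypothetical sketch of the centralized arrangement described in the quote:
    # sensor modules only capture and transmit raw readings, and a central
    # processing unit shares the fused data among downstream functions. Class
    # and field names are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class RawReading:
        sensor_type: str      # "camera", "radar", "lidar", "ultrasound"
        timestamp_s: float
        payload: bytes        # unfiltered, uncompressed sensor output

    class CentralProcessingUnit:
        def __init__(self) -> None:
            self.readings: List[RawReading] = []

        def ingest(self, reading: RawReading) -> None:
            """Sensor modules push raw data here; no pre-processing in the module."""
            self.readings.append(reading)

        def summarize(self) -> dict:
            """Make every sensor's contribution visible to multiple functions."""
            return {r.sensor_type: len(r.payload) for r in self.readings}

    cpu = CentralProcessingUnit()
    cpu.ingest(RawReading("radar", 0.00, b"\x01\x02"))
    cpu.ingest(RawReading("camera", 0.01, b"\x03\x04\x05"))
    print(cpu.summarize())  # {'radar': 2, 'camera': 3}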
Improvements to sensors alone do not address how data is managed. Therefore, four areas should be addressed when designing data management systems:
· Data Acquisition
· Data Storage
· Data Labeling
· Data Management
Acquisition
A plan should be developed that balances three critical factors: 1) the scenario coverage portfolio, 2) the urgency of collection, and 3) available resources. Such a plan will eliminate redundant data and ensure “data acquisition meets comprehensive needs while running as fast and efficiently as possible given available resources” (Accenture, 2018).
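One hypothetical way to operationalize that balance is a simple weighted score over the three factors, as sketched below; the weights and scenario values are assumptions for illustration only and do not come from the Accenture report.

    # Hypothetical weighted-score sketch of balancing the three factors above.
    # The weights and scenario values are assumptions for illustration only.

    def priority(coverage_gap: float, urgency: float, resource_cost: float,
                 weights=(0.5, 0.3, 0.2)) -> float:
        """Higher score = collect sooner; all inputs are normalized to 0..1."""
        w_gap, w_urg, w_cost = weights
        return w_gap * coverage_gap + w_urg * urgency - w_cost * resource_cost

    scenarios = {
        "night highway in rain": priority(coverage_gap=0.9, urgency=0.7, resource_cost=0.4),
        "daytime parking lot": priority(coverage_gap=0.2, urgency=0.3, resource_cost=0.1),
    }
    for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
        print(f"{score:.2f}  {name}")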
Storage
Early design considerations should address whether data storage will be self-contained or cloud-based, how data will be off-loaded, how it will be secured during each stage of collection, annotation, and use, and how to identify whether data is still usable (Accenture, 2018).
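These decisions can be captured up front in a storage policy record, as in the hypothetical sketch below; the field names and values are assumptions and do not reflect any particular product's schema.

    # Hypothetical storage policy record capturing the decisions listed above.
    # Field names and values are assumptions, not any product's actual schema.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class StoragePolicy:
        location: str                 # "onboard" (self-contained) or "cloud"
        offload_method: str           # e.g. "depot_wifi_sync" or "drive_swap"
        security_by_stage: Dict[str, str] = field(default_factory=lambda: {
            "collection": "encrypted at rest on the vehicle",
            "annotation": "encrypted in transit to the labeling team",
            "use": "role-based access control",
        })
        usable: bool = True           # flag data that fails validation as unusable

    policy = StoragePolicy(location="cloud", offload_method="depot_wifi_sync")
    print(policy)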
Labeling
Accenture noted in their report, Autonomous
Vehicles: The Race Is On (2018):
Many vehicles
have multiple sensors (radar, ultrasound, LiDAR, cameras), each gathering
different, complementary data. In just
one frame from one camera there can be hundreds of objects to label accurately.
By some estimates each hour of data
collected takes almost 800 human hours to annotate. The massive scale of this challenge is
impeding many companies from moving as quickly as they would like.
In that regard, a few considerations for how to label the data are important:
· Provide clarity on what to capture
· Determine the toolsets needed to best label and annotate objects across data formats
· Consider economies of scale
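The quoted 800-to-1 annotation ratio can be put in perspective with a back-of-the-envelope calculation, sketched below; the fleet size and shift length are illustrative assumptions.

    # Back-of-the-envelope view of the quoted 800-to-1 annotation ratio. The
    # fleet size and shift length are illustrative assumptions.

    ANNOTATION_HOURS_PER_DATA_HOUR = 800   # figure quoted above

    fleet_vehicles = 10          # assumed test fleet
    drive_hours_per_day = 6      # assumed sensor hours per vehicle per day
    annotator_shift_hours = 8    # assumed working day per annotator

    daily_data_hours = fleet_vehicles * drive_hours_per_day
    annotation_hours = daily_data_hours * ANNOTATION_HOURS_PER_DATA_HOUR
    annotators_needed = annotation_hours / annotator_shift_hours

    print(f"{daily_data_hours} h of data/day -> {annotation_hours:,} annotation "
          f"hours (~{annotators_needed:,.0f} annotators just to keep pace)")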
Management
A who, what, when, where, and why approach to data collection, storage, and use should be taken in order to maintain data integrity and usability. How these considerations are communicated to the research and development teams will determine how easily relevant data can be accessed.
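A hypothetical sketch of that approach, expressed as a metadata record attached to each collected data set, is shown below; the field names and values are assumptions.

    # Hypothetical metadata record applying the who/what/when/where/why approach
    # to each collected data set. Field names and values are assumptions.

    from dataclasses import dataclass

    @dataclass
    class DatasetRecord:
        who: str      # collecting team or vehicle operator
        what: str     # sensors and scenario captured
        when: str     # collection timestamp (ISO 8601)
        where: str    # geographic region or test route
        why: str      # scenario-coverage goal the collection supports

    record = DatasetRecord(
        who="R&D fleet vehicle 07",
        what="camera + radar, highway merge scenarios",
        when="2018-05-14T09:30:00Z",
        where="interstate test corridor",
        why="fill a gap in wet-weather merge coverage",
    )
    print(record)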
Recommendation
Traditional data storage and processing techniques are no longer capable of handling the amount of data or the power required to support autonomous operations, nor do these techniques remain cost-effective. Therefore, an open-source design architecture that promotes sharing of data across different operating platforms and media infrastructures on an as-needed basis is strongly encouraged.
One of many data storage solutions offered by CHI Corporation, the StorageCraft OneBlox architecture is a recovery/replication solution capable of backing up data, applications, and systems over wide area networks or via the cloud (CHI Corporation, 2017), realizing savings in development costs and power consumption as they relate to on-board data processing.
References
Accenture. (2018). Autonomous Vehicles: The Race Is On. Retrieved from Accenture: https://www.accenture.com/t20180309T092359Z__w__/id-en/_acnmedia/PDF-73/Accenture-Autonomous-Vehicles-The-Race-Is-On.pdf
AZO Sensors. (2017, June 20). Automotive Sensor Technology for Autonomous Driving. Retrieved from AZO Sensors: https://www.azosensors.com/article.aspx?ArticleID=847
CHI Corporation. (2017). More Sensors, More Cameras, More Challenges. Retrieved from Autonomous Vehicle Development: https://chicorporation.com/solutions/autonomous-vehicle-development/
Rudolph, G., & Voelzke, U. (2017, November 10). Three Sensor Types Drive Autonomous Vehicles. Retrieved from Sensors Online: https://www.sensorsmag.com/components/three-sensor-types-drive-autonomous-vehicles