Wednesday, July 11, 2018

7.5 Sense and Avoid Sensor Selection


Sense and Avoid Sensor Selection
            The realization of commercial operations of small unmanned aircraft systems (sUAS) within the National Airspace System (NAS) is restricted by archaic operating rules written for manned aircraft.  One rule in particular, Title 14 of the Code of Federal Regulations (14 CFR) §91.113, requires a pilot operating an aircraft to maintain vigilance so as to see and avoid other aircraft (FAA, 2004).  At the time of its publication, this rule did not account for unmanned operations and was intended strictly to regulate manned aircraft.
The Federal Aviation Administration (FAA), under Congressional mandate in the FAA Modernization and Reform Act of 2012, published the Part 107 sUAS operating rules in an effort to promote commercial sUAS operations in the NAS while offering an alternative means of compliance (AMOC) to manned-aircraft regulations not previously attainable.  Specific to §91.113 is §107.31, Visual line of sight aircraft operation, which states in part that "the remote pilot in command, the visual observer (if one is used), and the person manipulating the flight control of the small unmanned aircraft system must be able to see the unmanned aircraft throughout the entire flight…" (FAA, 2016).  While this rule introduced a means of accessing the NAS to conduct commercial operations, it limited those operations to within visual line of sight (VLOS) only.
As sUAS operational capabilities continue to improve, it has become evident that operations conducted beyond VLOS (BVLOS) will be allowed only on a case-by-case basis, and only after the operator presents a BVLOS concept-of-operations safety case the FAA finds acceptable.  This manner of gaining access to the NAS is proving unsustainable and requires efforts by the UAS industry and regulators worldwide to develop and recognize acceptable standards and sensor-based systems capable of providing an AMOC to current see-and-avoid requirements.  This research paper offers a promising mitigation, ground-based sense-and-avoid (GBSAA) radar technology, and describes how it supports sUAS BVLOS operations.
GBSAA
            Ground-based sense-and-avoid radar technologies allow UAS to operate in U.S. or international civil airspace by providing an equivalent level of safety (ELOS) to current see-and-avoid regulations (MIT, 2017).  GBSAA is currently the only SAA technological solution certified by the FAA or other international regulatory bodies that supports routine UAS operations in civil airspace (MIT, 2017).
            Where onboard payload capacity and regulated weight restrictions prevent sUAS from carrying onboard SAA sensors, GBSAA uses mobile ground-based radars and existing FAA radars to identify and track ADS-B-compliant aircraft (MIT, 2017).
            SRC, Inc., a research and development corporation located in Syracuse, New York, offers a GBSAA system certified to DO-178 standards that provides operational benefits not realized using typical VLOS solutions:
·       Reliable performance in low-visibility weather and at night
·       Expanded operational area
·       Increased operational time (SRC, 2018)

SRC’s GBSAA system provides a scalable approach to the many demands of UAS operators, offering 3-D target positioning, greater than 98% track reliability, high mean time between failures (MTBF), fully integrated logistics support, flexible installation options (i.e., tripod/pedestal, rooftop/tower, vehicle mount), flexible power options (i.e., AC grid, generator, or 24 VDC vehicle), unattended remote operation over IP networks, and ASTERIX or custom interfaces (SRC, 2018).  Benefits include low lifecycle costs, ease of mobility, and the ability to integrate supporting technologies (i.e., cueing of visible/IR cameras and ADS-B or secondary surveillance radar) (SRC, 2018).
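To illustrate how a ground-radar track feed could support separation decisions for a BVLOS sUAS operator, the following sketch computes the horizontal distance between the sUAS and a radar-reported intruder and flags a potential conflict. This is my own minimal illustration, not SRC's software or its ASTERIX interface; the field names and threshold values are assumptions.

```python
import math

# Illustrative only: positions, field names, and thresholds are assumptions,
# not part of any GBSAA product or its track-data interface.

def horizontal_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in meters (haversine)."""
    r_earth = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_earth * math.asin(math.sqrt(a))

def check_conflict(ownship, intruder, horizontal_limit_m=1852, vertical_limit_ft=500):
    """Flag a potential conflict when an intruder track is inside both limits."""
    dist_m = horizontal_distance_m(ownship["lat"], ownship["lon"],
                                   intruder["lat"], intruder["lon"])
    dz_ft = abs(ownship["alt_ft"] - intruder["alt_ft"])
    return dist_m < horizontal_limit_m and dz_ft < vertical_limit_ft

ownship = {"lat": 43.05, "lon": -76.15, "alt_ft": 400}
intruder = {"lat": 43.06, "lon": -76.14, "alt_ft": 600}
print(check_conflict(ownship, intruder))  # True for this geometry
```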
            International efforts have been realized using a GBSAA system called Harrier, developed by DeTect (Figure 1).

Figure 1 DeTect Harrier GBSAA adapted from https://www.uasvision.com/wp-content/uploads/2018/05/Detect-insert.jpg

Incorporating the latest Doppler radar technology and real-time, web-based displays that enhance an operator's situational awareness, these systems are deployed throughout regions of Spain and Europe to support wildfire and fire-suppression efforts, oil and gas site inspections, aerial surveys, and extended law enforcement applications during BVLOS operations (UAS Vision, 2018).
Conclusion
            In May 2017, the FAA's Center of Excellence for UAS Research published a report, Small UAS Detect and Avoid Requirements Necessary for Limited Beyond Visual Line of Sight (BVLOS) Operations, in which the authors (Askelson & Cathey, 2017) outlined the need for, and the approach taken in, the development of a standardized and globally accepted GBSAA radar system.  The report offered the following conclusions:
Focusing on the Safety Risk Management (SRM) pillar of the SMS process, this effort (1) identified hazards related to the operation of sUAS in BVLOS, (2) offered a preliminary risk assessment considering existing controls, and (3) recommended additional controls and mitigations to further reduce risk to the lowest practical level. The risk assessment began with a set of sponsor provided assumptions and limitations. Generally speaking, operations in day, VMC conditions, within Class G and E airspace over other than densely populated areas were considered within scope. These operations were to be limited from the surface to 500 ft. AGL (although flight up to 1000 ft. could be considered), further than 3 miles from an airport or heliport, and within RLOS of a fixed ground-based transmitter. Following its release, several eligibility requirements and conditions of 14 CFR §107 were added to this list of assumptions for consideration as existing controls in the risk assessment.
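The scoping assumptions quoted above lend themselves to a simple pre-flight screening check. The sketch below is illustrative only; the field names and encoded limits are my paraphrase of the quoted assumptions, not software from the report.

```python
# Illustrative screening of a proposed sUAS BVLOS operation against the
# scoping assumptions quoted from Askelson & Cathey (2017). Field names
# and data structure are assumptions made for this example.

def within_report_scope(op):
    checks = {
        "day_vmc": op["daytime"] and op["vmc"],
        "airspace": op["airspace_class"] in ("G", "E"),
        "altitude": op["max_alt_agl_ft"] <= 500,          # up to 1000 ft "could be considered"
        "airport_distance": op["dist_to_airport_nm"] > 3,
        "rlos": op["within_rlos_of_ground_transmitter"],
        "population": not op["densely_populated_area"],
    }
    return all(checks.values()), checks

proposed = {
    "daytime": True, "vmc": True, "airspace_class": "G",
    "max_alt_agl_ft": 400, "dist_to_airport_nm": 5.2,
    "within_rlos_of_ground_transmitter": True,
    "densely_populated_area": False,
}
ok, detail = within_report_scope(proposed)
print(ok, detail)
```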
Hopefully, these efforts to identify and develop AMOCs supporting BVLOS UAS operations will lead to FAA-accepted standards and published regulations that further UAS integration into the NAS.  Further collection and analysis of applicable data will be needed to bear this conclusion out.


References
Askelson, M., & Cathey, H. (2017). Small UAS Detect and Avoid Requirements Necessary for Limited Beyond Visual Line of Sight (BVLOS) Operations. ASSURE. Retrieved July 10, 2018, from http://www.assureuas.org/projects/deliverables/a2/Final_Report_A2_sUAS_BVLOS_Requirements.pdf
FAA. (2004, July 27). Sec. 91.113 Right-of-way rules: Except water operations. Retrieved June 29, 2018, from Code of Federal Regulations: http://rgl.faa.gov/Regulatory_and_Guidance_Library/rgFAR.nsf/0/934F0A02E17E7DE086256EEB005192FC?OpenDocument
FAA. (2016, August 29). Sec. 107.31 Visual Line of Sight Aircraft Operation. Retrieved July 10, 2018, from Code of Federal Regulations: http://rgl.faa.gov/Regulatory_and_Guidance_Library/rgFAR.nsf/0/3550A22F6001FDCE86258028006067B7?OpenDocument
MIT. (2017). Ground-Based Sense-and-Avoid System (GBSAA) for Unmanned Aircraft Systems (UAS). Retrieved from R&D 100 Conference: https://www.rd100conference.com/awards/winners-finalists/6825/ground-based-sense-and-avoid-system-gbsaa-unmanned-aircraft-systems-uas/
SRC. (2018). Ground-Based Sense and Avoid Radar System. Retrieved from SRC, Inc.: https://www.srcinc.com/what-we-do/radar-and-sensors/gbsaa-radar-system.html
UAS Vision. (2018, May 9). DeTect Installs Ground Based Sense-and-Avoid Radar at Aerodrome in Spain. Retrieved from UAS Vision: https://www.uasvision.com/2018/05/09/detect-installs-ground-based-sense-and-avoid-radar-at-aerodrome-in-spain/


Saturday, July 7, 2018

6.4 Control Station Analysis

Control Station Analysis
            Recent advancements in unmanned surface and underwater vehicles have shown their ability to relieve humans of the dull, dirty, dangerous, and deep hazards associated with maritime operations.  However, as with other unmanned systems (UxS), unmanned surface vehicles (USVs) and unmanned underwater vehicles (UUVs) are unable to realize their full potential due to limitations in providing the operator with the data and communication strategies essential to situational awareness (SA).  To mitigate loss of operator SA, unmanned systems must depict and present data in a form and manner that supports the operator's decision-making process without promoting information overload or saturation.  Recent developments in control station design architecture have shown promising results in minimizing data overload while enhancing operator SA.
Limitations
A lack of commonality among UxS control stations has restricted operators' ability to quickly adapt to the command and control infrastructure of different operating platforms.  This inability to adapt quickly to different command architectures requires specialized training for each operating system and is therefore neither cost effective nor sustainable (Raytheon, 2011).
Increased use of USVs by the Navy has revealed limitations in the command and control of these systems due to limited communication capabilities from one surface vehicle to another.  To extend range and enhance communication strategies, the Navy relies on a communication network comprised of manned/unmanned aircraft and/or satellite communications (Gonzales & Harting, 2014).
The portable ground control station (PGCS) described below essentially allows one operator to efficiently control a USV during beyond line-of-sight (BLOS) operations while also controlling a UAS, via multiple displays and waypoint navigation, effectively managing the communication bridge from a single control station.  One operator + one control station × multiple unmanned platforms = reduced costs.
Solutions
The UAV Factory developed a portable ground control station (Figure 1) designed specifically for controlling all manner of unmanned vehicles.
Figure 1 UAV Factory PGCS adapted from http://www.uavfactory.com/product/16
The control station, or PGCS, is composed of commercial-off-the-shelf (COTS) products, providing a flexible yet universal solution to UxS command and control (C2) (UAV Factory, n.d.).
            Equipped with a durable Panasonic Toughbook CF-31 and configurable with dual 17” touch-screen displays, the PGCS presents the operator with data he or she has determined to be essential to C2 of the particular UxS platform, regardless of the operating environment.
            Using a point-and-click command architecture, the operator navigates the unmanned platform via waypoint navigation software.  This mouse-based C2 strategy allows the operator to quickly transition between different unmanned systems without extensive training or concerns about currency in command of a particular operating platform.
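As a minimal sketch of what point-and-click waypoint entry involves under the hood, the example below converts a mouse click on a north-up map view into a latitude/longitude waypoint. The linear pixel-to-coordinate mapping, map extents, and screen size are my own assumptions for illustration, not the PGCS's actual software.

```python
# Illustrative conversion of a mouse click on a north-up map view into a
# latitude/longitude waypoint. Map extents and screen size are assumptions.

def click_to_waypoint(px, py, screen_w, screen_h,
                      lat_top, lat_bottom, lon_left, lon_right):
    """Linearly interpolate a pixel position into map coordinates."""
    lon = lon_left + (px / screen_w) * (lon_right - lon_left)
    lat = lat_top - (py / screen_h) * (lat_top - lat_bottom)
    return lat, lon

# Operator clicks near the middle-right of a 1920x1080 display.
lat, lon = click_to_waypoint(1500, 540, 1920, 1080,
                             lat_top=29.00, lat_bottom=28.90,
                             lon_left=-80.70, lon_right=-80.50)
print(f"Commanded waypoint: {lat:.5f}, {lon:.5f}")
```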
            Housed in a portable, environmentally protected case, the electronic equipment can be configured to the operator's needs through a comprehensive set of connections that allows the user to install application-specific hardware such as autopilot RF modems, video receivers, data links, and data storage and recording devices (UAV Factory, n.d.).
            Power options allow the PGCS to be used anytime, anywhere, using a 10–32 VDC input or dual hot-swappable lithium-ion battery ports capable of providing up to two hours of operation, while the integrated power distribution system provides two 12 VDC, 50 W power outputs for the equipment in the electronics compartment as well as external devices used in conjunction with the GCS (UAV Factory, n.d.).
Recommendation
Use of proprietary, closed-platform control systems results in higher development costs, specialized training for operators and maintenance personnel, and increased costs associated with support infrastructure.
Realization of an open control interface command structure, like that of the PGCS, can reduce unnecessary costs, encourage innovation, improve quality of operations and maximize operator qualifications (Raytheon, 2011).
This synergistic design logic will increase ground station functionality and facilitate C2 of multiple vehicles and sensors, while enhancing an operator's situational awareness across all levels of unmanned systems (Raytheon, 2011).
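To picture what an open, common control interface buys the operator, a generic sketch follows; the class and method names are my own illustration under the "one station, many platforms" idea, not the PGCS or Raytheon framework.

```python
# Generic illustration of a common C2 interface shared by different
# unmanned platforms; names and behaviors are assumptions for this sketch.
from abc import ABC, abstractmethod

class UnmannedVehicle(ABC):
    """Minimal common interface a control station could target."""
    @abstractmethod
    def goto_waypoint(self, lat: float, lon: float, alt_or_depth_m: float) -> None: ...
    @abstractmethod
    def telemetry(self) -> dict: ...

class SurfaceVehicle(UnmannedVehicle):
    def goto_waypoint(self, lat, lon, alt_or_depth_m):
        print(f"USV steering to {lat:.5f}, {lon:.5f}")
    def telemetry(self):
        return {"type": "USV", "speed_kt": 8.0}

class AerialVehicle(UnmannedVehicle):
    def goto_waypoint(self, lat, lon, alt_or_depth_m):
        print(f"UAS flying to {lat:.5f}, {lon:.5f} at {alt_or_depth_m} m AGL")
    def telemetry(self):
        return {"type": "UAS", "alt_m": 120.0}

# One operator, one station, multiple platforms through one interface.
fleet = [SurfaceVehicle(), AerialVehicle()]
for vehicle in fleet:
    vehicle.goto_waypoint(28.5623, -80.5774, 100.0)
    print(vehicle.telemetry())
```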
References
Gonzales, D., & Harting, S. (2014). Designing Unmanned Systems with Greater Autonomy. RAND Corporation. Retrieved from https://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR626/RAND_RR626.pdf
Raytheon. (2011, June 20). Common Ground Control Framework: More Efficient. Less Costly. Retrieved July 7, 2018, from https://www.youtube.com/watch?v=XQYW5q4qAJQ

UAV Factory. (n.d.). Portable Ground Control Station. Retrieved July 7, 2018, from UAV Factory: http://www.uavfactory.com/product/16

Sunday, June 24, 2018

4.5 Unmanned Ground System Data Protocol and Format

Unmanned System Data Protocol and Format
All unmanned systems, regardless of their operating environment, rely on proprioceptive and exteroceptive sensors designed specifically to support the vehicle's operations within its given domain.  As the number of sensors and cameras needed to support these operations continues to increase, so too does the amount of data collected (CHI Corporation, 2017).
While real-time analysis of data supports situational awareness for both human and mechanical elements, further command and control (C2) considerations regarding data format, protocols, and storage methods must be addressed to ensure the operating system is effective and functional.
This research paper addresses the sensors essential to supporting autonomous vehicle operation, as well as the necessary power and storage requirements.  In addition, it presents four data-management considerations that support autonomous vehicles, more commonly referred to as self-driving cars.
Sensors
            Realization of Level 4 or 5 fully autonomous vehicles by 2021/2022 will require multiple redundant sensory systems (Rudolph & Voelzke, 2017).  Unfortunately, the cost-effective, high-resolution light detection and ranging (LiDAR) systems with sensing ranges up to 300 meters that are essential to L4/L5 operations are still in development.  However, the sensory platforms used to support current Level 1 and 2 driver-assisted operations consist primarily of camera and radar systems.
Camera/Imaging
            Single and multiple camera applications working in unison with radar-based systems enhance driver situational awareness, using sensor fusion algorithms to display speed and distance as well as images of fixed and moving objects (Rudolph & Voelzke, 2017).  Current image processing requires a three-stage approach in which images captured by the camera are sent to the camera electronic control unit (ECU) for image decoding, lens correction, geometrical transformation, video streaming, overlay, and image streaming before the image is finally displayed on the head unit (Rudolph & Voelzke, 2017).
            The latest smart camera technologies eliminate the ECU, as image processing is initiated in the camera itself and finalized in the display unit.
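As a rough, single-process illustration of the stages described above (decode, lens correction, geometric transformation, overlay), a minimal OpenCV-style sketch follows. The intrinsic matrix, distortion coefficients, warp geometry, and overlay text are placeholder assumptions, not values from any production camera or ECU.

```python
# Minimal illustration of the classic camera -> ECU -> head-unit pipeline.
# All calibration values and the overlay text are placeholders.
import numpy as np
import cv2

def process_frame(raw_frame):
    # 1) "Decode": here the frame is already a numpy array from the camera driver.
    k = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])                 # placeholder intrinsics
    dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # placeholder distortion

    # 2) Lens correction.
    undistorted = cv2.undistort(raw_frame, k, dist)

    # 3) Geometrical transformation (a simple trapezoid-to-rectangle warp).
    h, w = undistorted.shape[:2]
    src = np.float32([[0, h], [w, h], [w * 0.4, h * 0.6], [w * 0.6, h * 0.6]])
    dst = np.float32([[0, h], [w, h], [0, 0], [w, 0]])
    warp = cv2.getPerspectiveTransform(src, dst)
    birdseye = cv2.warpPerspective(undistorted, warp, (w, h))

    # 4) Overlay for the head-unit display.
    cv2.putText(birdseye, "OBJ 25 m, closing 3 m/s", (30, 50),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    return birdseye

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a captured frame
display_frame = process_frame(frame)
```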
RADAR
            Radio detection and ranging (RADAR) provides recognition of objects using radio waves operating in either the 24 GHz or the 77 GHz frequency spectrum; the latter offers advantages in more accurate speed and distance measurement, smaller antennas, and lower rates of interference (Rudolph & Voelzke, 2017).
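To make the 24 GHz versus 77 GHz comparison concrete, a small worked sketch of common FMCW radar relationships follows. The bandwidth figures are typical assumed values, not numbers from the cited article.

```python
# Worked illustration of why a 77 GHz automotive radar can resolve targets
# more finely than a 24 GHz one. Bandwidth figures are typical, assumed values.
C = 3.0e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """FMCW range resolution: delta_R = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

def wavelength_m(carrier_hz):
    """Shorter wavelength supports smaller antennas and finer velocity measurement."""
    return C / carrier_hz

for name, carrier, bandwidth in [("24 GHz ISM", 24.0e9, 250.0e6),
                                 ("77 GHz automotive", 77.0e9, 4.0e9)]:
    print(f"{name}: wavelength ~{wavelength_m(carrier)*1000:.1f} mm, "
          f"range resolution ~{range_resolution_m(bandwidth)*100:.1f} cm")
# 24 GHz: ~12.5 mm wavelength, ~60 cm range resolution
# 77 GHz: ~3.9 mm wavelength, ~3.8 cm range resolution
```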
            Raw data collected by the radar sensor is sent directly to a process controller, providing several distinct advantages:
·       Reduces silicon surface space requirements and associated costs
·       Relocation of power loss is facilitated by the control unit's larger size compared to the radar sensor
·       There is no loss of data from filtering or compression; access to the radar sensor's unfiltered raw data provides more possibilities for signal processing and greater flexibility (Rudolph & Voelzke, 2017).
LiDAR
            Light detection and ranging (LiDAR) is a laser-based system capable of measuring distances from the unmanned vehicle to both fixed and moving objects.  LiDAR systems are not new and have been used to enhance industrial and military operations for years.  However, as previously noted, these systems are very costly, and large-scale deployment in the automotive industry is not feasible at this time.
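The core measurement LiDAR relies on is straightforward time-of-flight. A minimal sketch with assumed timing values follows, tied to the 300-meter sensing range mentioned above.

```python
# Minimal time-of-flight illustration of how a LiDAR unit converts a laser
# pulse's round-trip time into range. Timing values are assumptions.
C = 3.0e8  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Range = c * t / 2 (the pulse travels out and back)."""
    return C * round_trip_s / 2.0

# A pulse returning after 2 microseconds corresponds to a target ~300 m away,
# the long-range figure cited above for L4/L5 sensing.
print(f"{tof_range_m(2.0e-6):.0f} m")  # 300 m
```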
Data Management
            The most critical by-product of sensor-based applications is the data collected and how it is allocated.  Data architectures must be designed to manage the data as it is collected, processed, and stored to support real-time command and control and/or future comparative analysis.  Recent technological improvements have brought about central data processing units that allow data from all the sensors to be shared for multiple functions.  As noted in the AZO Sensors article, Automotive Sensor Technology for Autonomous Driving (2017):
The sensor modules then perform only sensory and data transmission tasks without any processing and decision-making tasks, thus eliminating data losses because of pre-processing or compression in the sensor module. Consequently, the sensor modules can become smaller, energy saving and more cost effective.
Improvements to sensors alone do not address how data is managed.  Therefore, four areas should be addressed when designing data management systems:
·       Data Acquisition
·       Data Storage
·       Data Labeling
·       Data Management

Acquisition
            A plan that balances three critical factors (1. scenario coverage portfolio, 2. urgency of collection, and 3. available resources) should be developed, as it will eliminate redundant data and ensure "data acquisition meets comprehensive needs while running as fast and efficiently as possible given available resources" (Accenture, 2018).
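One simple way to operationalize that three-factor balance is a weighted score per candidate collection scenario. The weights, field names, and scenario values below are illustrative assumptions, not Accenture's method.

```python
# Illustrative prioritization of data-collection scenarios by coverage gap,
# urgency, and resource cost. Weights and field values are assumptions.
def priority_score(scenario, w_gap=0.5, w_urgency=0.3, w_cost=0.2):
    """Higher coverage gap and urgency raise priority; higher cost lowers it."""
    return (w_gap * scenario["coverage_gap"]
            + w_urgency * scenario["urgency"]
            - w_cost * scenario["resource_cost"])

scenarios = [
    {"name": "night highway merges", "coverage_gap": 0.9, "urgency": 0.7, "resource_cost": 0.4},
    {"name": "dry daytime suburbs", "coverage_gap": 0.2, "urgency": 0.3, "resource_cost": 0.2},
    {"name": "heavy rain roundabouts", "coverage_gap": 0.8, "urgency": 0.9, "resource_cost": 0.7},
]
for s in sorted(scenarios, key=priority_score, reverse=True):
    print(f"{s['name']}: {priority_score(s):.2f}")
```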
Storage
            Early design consideration should address whether data storage will be self-contained or cloud based, how data will be off-loaded, how it will be secured during each stage of collection, annotation, and use, and how to identify when data is usable or not (Accenture, 2018).
Labeling
Accenture noted in their report, Autonomous Vehicles: The Race Is On (2018):
Many vehicles have multiple sensors (radar, ultrasound, LiDAR, cameras), each gathering different, complementary data.  In just one frame from one camera there can be hundreds of objects to label accurately.  By some estimates each hour of data collected takes almost 800 human hours to annotate.  The massive scale of this challenge is impeding many companies from moving as quickly as they would like.
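A quick back-of-the-envelope calculation using the quoted 800-to-1 estimate shows why labeling dominates the effort. The fleet size, collection hours, and annotator availability below are assumptions made purely for illustration.

```python
# Back-of-the-envelope annotation workload using the ~800 human hours per
# collected hour estimate quoted above. Fleet size and hours are assumptions.
ANNOTATION_HOURS_PER_DATA_HOUR = 800

vehicles = 20
collection_hours_per_vehicle_per_day = 6
days = 30

data_hours = vehicles * collection_hours_per_vehicle_per_day * days
labeling_hours = data_hours * ANNOTATION_HOURS_PER_DATA_HOUR
annotators_needed = labeling_hours / (30 * 8)  # one annotator-month ~ 240 hours

print(f"{data_hours} hours of data -> {labeling_hours:,} labeling hours "
      f"(~{annotators_needed:,.0f} annotators working for one month)")
# 3600 hours of data -> 2,880,000 labeling hours (~12,000 annotators for a month)
```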
In that regard, a few considerations of how to label the data are rather important:
·       Provide clarity on what to capture
·       Determine the toolsets needed to best label and annotate objects across data formats
·       Consider economies of scale

Management
            A who, what, when, where, and why approach to data collection, storage, and use should be taken in order to maintain data integrity and usability.  How these considerations are communicated to the research and development teams will make accessing relevant data much easier.
Recommendation
            Traditional data storage and processing techniques are no longer capable of handling the amount of data, or supplying the power, required to support autonomous operations, nor do these techniques remain cost effective.  Therefore, an open design architecture that promotes sharing of data across different operating platforms and media infrastructures on an as-needed basis is strongly encouraged.
One of many data storage solutions offered through CHI Corporation, the StorageCraft OneBlox architecture is a recovery/replication solution capable of backing up data, applications, and systems over wide area networks or via the cloud (CHI Corporation, 2017), realizing savings in development costs and power consumption related to on-board data processing.
 References
Accenture. (2018). Autonomous Vehicles: The Race Is On. Retrieved from Accenture: https://www.accenture.com/t20180309T092359Z__w__/id-en/_acnmedia/PDF-73/Accenture-Autonomous-Vehicles-The-Race-Is-On.pdf
AZO Sensors. (2017, June 20). Automotive Sensor Technology for Autonomous Driving. Retrieved from AZO Sensors: https://www.azosensors.com/article.aspx?ArticleID=847
CHI Corporation. (2017). More Sensors, More Cameras, More Challenges. Retrieved from Autonomous Vehicle Development: https://chicorporation.com/solutions/autonomous-vehicle-development/

Rudolph, G., & Voelzke, U. (2017, November 10). Three Sensor Types Drive Autonomous Vehicles. Retrieved from Sensors Online: https://www.sensorsmag.com/components/three-sensor-types-drive-autonomous-vehicles

Sunday, June 17, 2018

3.4 UAS Sensor Placement

UAS Sensor Placement
Sensor placement is a critical design decision, driven by the objectives an unmanned system will be tasked to perform.  Unmanned aircraft systems of all shapes, sizes, and platform configurations can be equipped with a variety of sensory platforms to accommodate any number of defined commercial operations or hobbyist/modeler activities.  This research paper discusses camera sensor applications and placement considerations for a system designed to provide full-motion video and still aerial photography and for a first-person-view (FPV) racer.
DJI Mavic Pro
            The DJI Mavic Pro (Figure 1) is a compact, portable quadcopter capable of providing professional-grade still and ultra-high-definition (UHD) video.  Equipped with a 4K camera stabilized by a 3-axis mechanical gimbal, the Mavic Pro is supported by a semi-autonomous flight control system that allows the operator to focus on the photo op (DJI, 2018).
Figure 1 DJI Mavic Pro adapted from http://www.directd.com.my/dji-mavic-pro
The sensory platform consists of 5 cameras, GPS & GLONASS, 2 ultrasonic range finders, redundant IMU sensors and 24 high-performance computing cores supporting obstacle avoidance and precise hover capabilities (DJI, 2018).
Camera
            The camera, located on the bottom front of the operating platform, provides unobstructed, professional-grade stills and UHD video, with operating specs of 4K/30 fps, 12-megapixel photos (JPEG, DNG), and 1080p video [MP4, MOV (MPEG-4 AVC/H.264)] at 96 fps (DJI, 2018).
            To obtain quality 4K video, the Mavic Pro is outfitted with a high-precision 3-axis gimbal capable of eliminating vibrations incurred during airborne operations.  Similar to professional sports cameras, the Mavic Pro carries a 1/2.3-inch CMOS image sensor and an aerially optimized lens with a 28 mm focal length (DJI, 2018).
FPV Racer
            First-person view, or FPV, racing refers to a hobbyist/modeler form of sport and recreation in which the pilot in command tests his or her piloting skills against other drone pilots.  Using a platform-mounted, forward-looking sensor (camera) capable of transmitting real-time data to the pilot via a video monitor or specialized goggles, the view realized by the pilot is as if they were sitting in the cockpit (DRL, n.d.), hence the term FPV.  The racing platforms are typically small quadcopters (Figure 2) equipped with platform-mounted, forward-looking cameras (DRL, n.d.).
Figure 2 FPV Quadcopter adapted from https://www.rcgeeks.co.uk/image/cache/catalog/category-images/newmenu/drones_hobby-drones_fpv-racers-750x430.jpg
In addition to the camera, operational control relies on onboard flight-stabilizing sensors to account for the aggressive manual command and control (C2) inputs necessary to navigate the obstacle course at speeds up to 120 mph (DRL, n.d.).
Camera
Critical to all FPV operations, the camera (Figure 3) must be capable of collecting and transmitting real-time, high-definition video to the operator without latency issues.
Figure 3 Typical FPV Camera adapted from https://www.arrishobby.com/runcam-swift-mini-fpv-camera-for-fpv-racing-drones-p-3597

The camera must also be mounted in a frame that allows it to be tilted at an angle capable of providing the pilot with the best field of vision when the drone is operating at racing speed (Figure 4).
Figure 4 FPV Camera Tilt adapted from https://learnassets.getfpv.com/learn/wp-content/uploads/2018/04/30013237/CameraTilt123.jpg
As depicted in Figure 4 (third image), when the drone reaches full throttle/forward pitch and the camera tilt has not been adjusted, the operator will see only the ground and not the drone's relation to the horizon (Escalante, 2018).
            With camera tilt and operating angles of attack accounted for, another significant concern is video transmission latency.  In his article, What Is FPV Camera Technology In Drones And Best Uses (2018), F. Corrigan noted:
When flying at 50 mph (a typical speed for an experienced FPV racer), a 100 ms delay can mean your drone will travel about 6 feet before you receive the video, which could mean the difference in you missing an obstacle or hitting it.  By using a dedicated FPV camera, your FPV system will have a much lower latency.  A latency of less than 40 ms is what you can expect.
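The relationship in the quotation is simply distance = speed × latency. A short sketch with the quoted figures follows; the exact output depends on rounding but is the same order of magnitude as Corrigan's "about 6 feet."

```python
# Distance a racing drone travels before the pilot sees the video,
# using distance = speed * latency with the figures quoted above.
MPH_TO_FPS = 5280 / 3600  # feet per second per mile per hour

def blind_distance_ft(speed_mph, latency_ms):
    """Feet traveled during the video-link latency at a given ground speed."""
    return speed_mph * MPH_TO_FPS * (latency_ms / 1000.0)

print(f"{blind_distance_ft(50, 100):.1f} ft at 100 ms latency")  # ~7.3 ft
print(f"{blind_distance_ft(50, 40):.1f} ft at 40 ms latency")    # ~2.9 ft
```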
The camera must also be capable of providing high-quality resolution at 60 fps with a wide field of view (FOV) to allow the pilot to see around turns (Smith, 2015).  Since most FPV cameras are mounted on the front of the racing platform, the camera must also be durable enough to withstand impacts (Corrigan, 2018).
Summary
            Concept of operations is an essential part of any UAS design process.  An iterative process, it requires design engineers to have a complete understanding of the intended application.  Where cameras are identified as a primary sensor in realizing or supporting a specific operation (i.e., photography, FPV racing), engineers must identify the most cost-effective sensor on the market, the most suitable location and placement on the given platform, and the camera's ability to withstand the risks associated with the platform's operating environment (i.e., vibration, weather, latency, durability, etc.).  One thing remains constant: camera placement on any operating platform must support an unobstructed view for the operator, enhancing situational awareness and producing optimum operational results.
References
Corrigan, F. (2018, April 21). What Is FPV Camera Technology In Drones And Best Uses. Retrieved from DroneZon: https://www.dronezon.com/learn-about-drones-quadcopters/what-is-fpv-camera-fov-tvl-cmos-ccd-technology-in-drones/
DJI. (2018). Mavic Pro Wherever You Go. Retrieved from DJI: https://www.dji.com/mavic
DRL. (n.d.). What is FPV Drone Racing? Retrieved from DRL: https://thedroneracingleague.com/learn-more/
Escalante, J. (2018, May 2). FPV Camera Angle: Solving the Mystery of Fast Flight. Retrieved from GetFPV: https://www.getfpv.com/learn/fpv-flight-academy/fpv-camera-angle-full-throttle-flight/

Smith, K. (2015, July 7). Drone Racing: What is it? Retrieved from MyFirstDRone: https://myfirstdrone.com/blog/drone-racing-what-is-it

Thursday, June 7, 2018

2.5 Unmanned Systems Maritime Search and Rescue

Unmanned Systems Maritime Search and Rescue (SAR)
Underwater search and rescue operations present significant risks to the human element as well as to the underwater operating platform.  However, recent technological improvements in unmanned underwater vehicles (UUVs) and autonomous underwater vehicles (AUVs) are proving to be viable mitigations to the dull, dirty, dangerous, and deep (D4) risks associated with these operations.  Underwater search and rescue, by definition, assumes that after a thorough search has been conducted to locate a specific target, a rescue follows.  Unfortunately, in deep-water operations rescues are rarely realized and recovery operations become the norm.  Recent headlines have highlighted these facts, and operating platforms with specific capabilities are chosen to conduct these search and recovery operations.
This research paper presents a UUV/AUV that was deployed to assist in locating the wreckage of Malaysia Airlines Flight MH370, presumed to have been lost in deep waters of the southern Indian Ocean (Varandani, 2018).
A description of the UUV/AUV is provided, as well as a detailed description of the system's sensors and how they were designed specifically for the maritime environment.
In conclusion, questions regarding system and operational enhancements, and any advantages of UUV/AUV systems over manned platforms, are addressed.
Bluefin-21 AUV
The Bluefin-21 (Figure 1), built by General Dynamics, is a self-contained autonomous vehicle equipped with a highly accurate sensor payload and capable of extended deep-water operations typically reserved for larger, more cumbersome platforms (General Dynamics, 2018).
Figure 1 Bluefin-21 Autonomous Underwater Vehicle (AUV) adapted from http://d2fuv70sajz51d.cloudfront.net/publish/3495-b4EKktiE/Bluefin-21.png
With operating speeds between 2 and 4.5 knots, the Bluefin-21 is capable of operating at depths of 1,500 meters for approximately 20 hours at an average speed of approximately 3 knots (Chand, 2014).  Its exteroceptive sensor payload consists of a side-scan sonar, a sub-bottom profiler, a multi-beam echo-sounder, and a digital camera (Chand, 2014).  A suite of proprioceptive sensors provides data essential to the onboard inertial navigation system, and an ultra-short baseline system supports autonomous navigation and positioning of the vehicle (Chand, 2014).
Sensors
            Like other unmanned systems, the Bluefin-21 is equipped with both proprioceptive and exteroceptive sensors essential to supporting SAR operations.  Exteroceptive sensors collect and analyze data significant to the operating domain of the unmanned vehicle.  The exteroceptive payload is described below.
Side-Scan Sonar
            The EdgeTech 2200-M 120/410 kHz side-scan sonar is an acoustic sensing technology that supports GPS mapping applications at depths between 0.5 and 11,000 meters (Bloss, 2013).  These sensors are specifically adapted for use in water, where visibility and zero-to-low light conditions limit typical camera capabilities.  Using sound, the side-scan sonar builds an image based on the strength of the returning echo (NOAA, 2017).
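Because side-scan sonar works on echo timing, the basic slant-range relationship can be sketched simply. The sound speed used is a typical seawater value assumed for illustration; it is not a specification of the EdgeTech unit.

```python
# Basic sonar slant-range calculation: range = (sound speed * echo time) / 2.
# The sound speed is a typical seawater value, assumed for illustration only.
SOUND_SPEED_SEAWATER_MS = 1500.0  # m/s; varies with temperature, salinity, depth

def slant_range_m(two_way_travel_s):
    """Distance to a reflector from the two-way travel time of the ping."""
    return SOUND_SPEED_SEAWATER_MS * two_way_travel_s / 2.0

# An echo received 0.4 s after the ping corresponds to a target ~300 m away.
print(f"{slant_range_m(0.4):.0f} m")  # 300 m
```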
Sub-Bottom Profiler
            Also a sonar-based sensor, the EdgeTech DW-216 sub-bottom profiler is used to define and characterize layers of sediment, rock, and other objects buried beneath the seafloor.  Using reflected and refracted sound, this system relies on low-frequency pulses to penetrate deep into the sea floor, providing lower-resolution pictures than high-frequency systems, which deliver better imagery but are limited in scan depth (Substructure, n.d.).
Multi-beam Echo Sounder
            A significant improvement over side-scan sonar, the Reson 7125 400 kHz multi-beam echo-sounder employs numerous sonar beams to provide ultra-high-resolution images of the seafloor (Substructure 2, n.d.).  In their online article, Multibeam Sonar (n.d.), Substructure noted:
Multibeam SONAR offers considerable advantages over conventional systems, including increased detail of the seafloor (100 percent coverage), confidence that all features and hazards are mapped without voids, the ability to map inaccessible areas (e.g., under jetties, structures, and vessels near breakwaters, in shoal areas, and adjacent to retaining walls), fewer survey lines (which shortens survey time), optimum seafloor detail for route and dredge programs, and the ability to comply with the highest order International Hydrographic Organization (IHO) and US Army Corps of Engineers (USACE) hydrographic standards.
Digital Camera
            Configured with a Prosilica GE1900 camera system, the Bluefin-21 is capable of capturing high-resolution black-and-white images at up to three fps (Naval Technology, 2018); the resulting images provide a visual perspective of target data.
Navigation and Communication
            As previously noted, the Bluefin-21 is also equipped with a robust set of proprioceptive sensors used to support navigation and communication.  Positive stability and control are realized using an inertial navigation system (INS), which is further enhanced with a Doppler velocity log (DVL), sound velocity sensors (SVS), and a state-of-the-art global positioning system (GPS) (Naval Technology, 2018).  Communication with outside entities is facilitated by an externally mounted antenna supported by GPS and communication systems employing acoustic modems, radio-frequency (RF) serial links, an Iridium satellite modem, and direct Ethernet (Naval Technology, 2018).
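Underwater, where GPS is unavailable, the INS/DVL combination essentially dead-reckons the vehicle's position between surface fixes. The following is a highly simplified 2-D sketch of that idea; the heading, speed, and update rate are assumed values, not Bluefin-21 parameters.

```python
# Highly simplified 2-D dead-reckoning of an AUV position from heading and
# DVL ground speed between GPS fixes. All values are assumed for illustration.
import math

def dead_reckon(x_m, y_m, heading_deg, speed_ms, dt_s):
    """Advance the position estimate one time step (x east, y north)."""
    heading_rad = math.radians(heading_deg)
    x_m += speed_ms * math.sin(heading_rad) * dt_s
    y_m += speed_ms * math.cos(heading_rad) * dt_s
    return x_m, y_m

x, y = 0.0, 0.0                     # start at the last surface GPS fix
for _ in range(600):                # 10 minutes of 1 Hz DVL updates
    x, y = dead_reckon(x, y, heading_deg=45.0, speed_ms=1.5, dt_s=1.0)
print(f"Estimated offset from fix: {x:.0f} m east, {y:.0f} m north")
# ~636 m east, ~636 m north after 10 minutes at ~3 knots on heading 045
```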
Conclusion
            The risks associated with manned deep-water operations are apparent, and mitigations to those risks are realized using unmanned and autonomous underwater vehicles.  Unfortunately, search and rescue operations in extreme or deep operating domains inevitably become search and recovery operations.  The Bluefin-21 is especially suited to meet the specific needs of these types of operations; timely target acquisition and recovery in shallower waters, on the other hand, would more likely be accomplished using tethered remotely operated vehicles (ROVs), which enable real-time situational awareness and are supported by a robotic arm.  Further ROV discussions are saved for another time and assignment.



References
Bloss, R. (2013). Lasers, radar, acoustics and magnetic sensors come to the aid of unmanned vehicles. Sensor review, 33(3), 197-201. Retrieved from https://search-proquest-com.ezproxy.libproxy.db.erau.edu/docview/1365745582/fulltextPDF?accountid=27203
Chand, N. (2014). Unmanned/Autonomous Underwater Vehicles. Retrieved from SP's Naval Forces: http://www.spsnavalforces.com/story/?id=328
General Dynamics. (2018). Bluefin-21 Autonomous Underwater Vehicle (AUV). Retrieved from Mission Systems: https://gdmissionsystems.com/products/underwater-vehicles/bluefin-21-autonomous-underwater-vehicle
Naval Technology. (2018). Bluefin-21 Autonomous Underwater Vehicle (AUV). Retrieved from Naval Technology: https://www.naval-technology.com/projects/bluefin-21-autonomous-underwater-vehicle-auv/
NOAA. (2017, July 06). Side Scan Sonar. Retrieved from NOAA Ocean Service Education: https://oceanservice.noaa.gov/education/seafloor-mapping/how_sidescansonar.html
Substructure 2. (n.d.). Multibeam SONAR. Retrieved from Substructure - Hydrographic Surveys. Diving. Marine Services: http://substructure.com/about/marine-services-information/hydrographic-surveys/what-is-sonar/multibeam-sonar/
Substructure. (n.d.). Sub-Bottom Profiling. Retrieved from Substructure - Hydrographic Surveys. Diving. Marine Services: http://substructure.com/about/marine-services-information/hydrographic-surveys/tools-used-to-examine-the-area-below-the-seafloor/

Varandani, S. (2018, June 05). MH370 Search Vessel Still Scanning Area Of Suspected Black Box Pings: Report. Retrieved from International Business Times: http://www.ibtimes.com/mh370-search-vessel-still-scanning-area-suspected-black-box-pings-report-2687462

Friday, June 1, 2018

1.5 Research Blog: UAV Sensor Enhancements

Unmanned systems in all manner of operating domains (i.e., marine, ground, air, and space) have realized significant technological improvements in both the public and civil sectors.  However, current Federal Aviation Administration (FAA) regulations appear to be the single most restrictive factor in the full integration of UAS into the National Airspace System (NAS).
The greatest hurdle for unmanned aircraft systems is demonstrating an equivalent level of safety against regulations that were written for manned operations.  Within Title 14 of the Code of Federal Regulations (14 CFR), §91.113, Right-of-way rules: Except water operations, is just one of these rules.  Written at a time when unmanned aircraft were not a reality or even a possibility, the rule's primary intent is that an aircraft operating in the NAS relies on the ability of the person operating that aircraft to see and avoid other aircraft and other potential hazards, whether airborne or fixed.  With the remote pilot in command (RPIC) removed from the aircraft, the RPIC must rely on sensors not yet shown to meet the intent of the original rule.  However, in an effort to demonstrate an acceptable level of safety against this rule, recent technological improvements have been realized that may very well prove beneficial.
In his article, Terabee Showcases LED Distance Sensor for UAVs (2018), author M. Rees introduced a new detect-and-avoid sensor developed specifically for UAS.  Manufactured by Terabee, these sensors rely on light-emitting diodes (LEDs) capable of measuring and returning distance values in millimeters at high rates of speed (Terabee, 2018).  Rees provided the following additional information:
The TeraRanger Evo is a lightweight LED distance sensor. Weighing just 9g (12g with communication board), it has a unique modular design allowing multiple sensors to be used on one platform, with simple plug and play functionality. Ideal for use on UAVs, for high-speed collision avoidance and object detection, the new TeraRanger Evo sensor has a 60m distance range with centimeter-level accuracy.
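A minimal sketch of how millimeter-range readings from such a sensor might feed a simple collision-avoidance check follows. The read function, its return values, and the safety thresholds are hypothetical stand-ins for illustration, not the TeraRanger Evo's actual API or performance figures.

```python
# Hypothetical use of an LED time-of-flight distance sensor for simple
# obstacle avoidance. read_distance_mm() is a stand-in, not Terabee's real API.
import random

def read_distance_mm():
    """Placeholder for a real sensor driver returning distance in millimeters."""
    return random.uniform(500, 60000)   # the sensor advertises roughly a 60 m range

def needs_evasive_action(speed_ms, reaction_time_s=0.5, margin_m=2.0):
    """Maneuver when the obstacle is inside stopping distance plus a margin."""
    distance_m = read_distance_mm() / 1000.0
    stopping_distance_m = speed_ms * reaction_time_s + margin_m
    return distance_m < stopping_distance_m

if needs_evasive_action(speed_ms=15.0):
    print("Obstacle inside safety envelope - commanding avoidance maneuver")
else:
    print("Path clear")
```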
One can be sure that, as micro-technologies similar to these become instrumental to RPIC situational awareness and provide mitigations essential to addressing the risks of unmanned operations in the NAS, full integration will be realized.
References
Rees, M. (2018, May 15). Terabee Showcases LED Distance Sensor for UAVs. Retrieved from UST - Unmanned Systems Technology: http://www.unmannedsystemstechnology.com/2018/05/terabee-showcases-led-distance-sensor-for-uavs/

Terabee. (2018). Distance Sensors. Retrieved from Terabee: https://www.terabee.com/distance-sensors/