PixelPlex

Synthetic Vision and AR/VR in Unmanned Aerial Systems and Aviation

In the emerging world of unmanned aerial systems (UAS), often called drones, there is a wide array of technologies that can be built into system operations. Among these are augmented reality (AR) and virtual reality (VR). In manned aviation, there are few augmented systems that seek to replicate the outside world beyond what has already appeared in avionics. In UAS, there are entire subset industries dedicated to first-person-view (FPV) technology, and meshing avionics with headset technology may create a better environment for the remote pilot. In this paper, we will compare and contrast these technologies between manned aircraft and UAS and cover the problems associated with each.

Starting with manned aircraft, there are no current uses of headset technologies. The extent of the technology lies with dynamic screens such as the Garmin G1000 unit, which features terrain, next-generation weather radar (NEXRAD), and other aircraft in 2D space via the traffic alert and collision avoidance system (TCAS). These systems aid pilots of all types in navigating the national airspace system (NAS) and offer alternative ways of looking at continually updating data. The problem with these already advanced avionics systems is that they require a lot of looking up and down. As a pilot lands under visual flight rules (VFR), they must monitor what is outside the window and the instruments simultaneously.

The situation described above has been adequately solved by the addition of heads-up displays (HUDs), which blend avionics into a pull-down tray. When necessary, the pilot can pull down the HUD (much like a car sun visor) to view important data such as airspeed, altitude, vertical speed, attitude, turn coordination, and more. While these systems solve the problem, they are expensive and rare, found mostly on newer Boeing 747s and 787s (and some other models). For many operators, the cost of these features is hard to justify when a pilot can simply move their head up and down.

The final area within manned aircraft is the use of navigation systems, again in landing operations. One specific array of technologies comprises the enhanced flight vision system (EFVS). EFVS is used in low- to zero-visibility conditions, such as a special CAT III ILS landing. This system can be viewed through a HUD, if equipped, or may have its own digital display alongside the avionics. EFVS creates a synthetic image by way of a high-powered camera, a large collection of real-world positional data (photogrammetry), infrared, radar, and visual references. The data are blended to create a (typically) green image of what the pilot would see if there were no visibility constraints. Again, problems arise from cost, research and design limitations, and the special training required to use the system.
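
To make the blending idea concrete, here is a toy sketch that combines an infrared frame with a synthetic terrain render and tints the result green, loosely mimicking the look of an EFVS image. This is purely illustrative: the file names, weights, and single-channel tint are assumptions, not a real certified EFVS pipeline.

```python
# Toy sketch of the blending idea behind EFVS-style imagery: combine an
# infrared frame with a synthetic terrain render and tint the result green.
# Input images are assumed to exist and share the same resolution.
import cv2
import numpy as np

infrared = cv2.imread("infrared_frame.png", cv2.IMREAD_GRAYSCALE)
terrain = cv2.imread("synthetic_terrain.png", cv2.IMREAD_GRAYSCALE)

# Weighted blend of the two sources into a single luminance image.
blended = cv2.addWeighted(infrared, 0.6, terrain, 0.4, 0)

# Map luminance into the green channel only, giving the familiar HUD tint.
efvs_view = np.zeros((*blended.shape, 3), dtype=np.uint8)
efvs_view[:, :, 1] = blended  # OpenCV uses BGR ordering; index 1 is green

cv2.imwrite("efvs_view.png", efvs_view)
```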

With these ideas in mind, there is considerable overlap to be found in the world of UAS. The main current usage of headsets in UAS operations is FPV flying. An RGB camera is affixed to the aircraft and video-linked to a headset with a screen. The pilot is limited in these systems by video quality (usually no higher than 1080p), video range (which varies with the quality of the equipment), aircraft range (Federal Aviation Administration requirements and physical range), and cost. These systems can cost as little as $100 or run into the thousands of dollars, depending on the quality the pilot wants. They show what the camera sees, but none of the additional data sets found in manned aircraft.

Flight controller systems linked to GUI-based ground control software, such as Pixhawk flight controllers running ArduPilot with Mission Planner, produce reasonably precise positioning data on a monitor. Typically, the pilot views this flight-critical data on a laptop or an established ground control station, not within inches of their face. This arises from two main factors: regulation and operations.
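
As a rough illustration of the kind of telemetry such a setup already exposes, the sketch below uses pymavlink to pull position and attitude messages from an ArduPilot-style autopilot over MAVLink; these are exactly the values a headset overlay would need. The connection string is an assumption and depends on the actual radio or simulator setup.

```python
# Minimal sketch: reading position and attitude telemetry from an ArduPilot
# flight controller over MAVLink using pymavlink.
from pymavlink import mavutil

# Connect to the vehicle (e.g., a telemetry radio or a SITL UDP endpoint).
master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()  # block until the autopilot announces itself

while True:
    # GLOBAL_POSITION_INT carries fused GPS/EKF position as scaled integers.
    pos = master.recv_match(type="GLOBAL_POSITION_INT", blocking=True)
    # ATTITUDE carries roll/pitch/yaw in radians.
    att = master.recv_match(type="ATTITUDE", blocking=True)

    lat = pos.lat / 1e7                   # degrees
    lon = pos.lon / 1e7                   # degrees
    alt_agl = pos.relative_alt / 1000.0   # metres above home
    print(f"lat={lat:.6f} lon={lon:.6f} alt={alt_agl:.1f} m "
          f"roll={att.roll:.2f} pitch={att.pitch:.2f} yaw={att.yaw:.2f}")
```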

The first is the regulatory requirements for commercial UAS in the United States. Recreational pilots fly for fun (not money) and are supposed to use visual observers (VOs) when flying FPV. Commercial remote pilots fly for value (monetary, trade, etc.) and are encouraged to utilize VOs. This is further broken down into visual line of sight (VLOS) and beyond visual line of sight (BVLOS). VLOS, as the acronym suggests, relies on the visual range of the remote pilot's own eyes under the conditions of their geographical environment (weather, time of day, etc.). The standard average sighting distance of an aircraft in visual conditions is seven statute miles, and it cannot be aided by binoculars or other devices. Much like recreational FPV flight, a VO is required when headsets are used, and the operation is limited to the VO's sight. Beyond this is BVLOS, which may not require a VO, depending on the operation. The maximum potential of headsets would be found under commercial BVLOS operations, as few visual restrictions exist.

Setting aside weather conditions, it is fair to say that satellite-based control links would be highly effective when paired with headsets. Large, open operations would be more ideal, and this is where the second factor, operations, comes in. Inspection, surveying, and routine monitoring flights (such as fixed flight paths) would not be well suited to headset-based flying. Instead, unplanned flights such as search and rescue (SAR), law enforcement, and other reconnaissance would do well. The remote pilot would be aided by the sensor equipment's perspective in a variety of conditions. While the potential is there, the merging of the technologies and systems previously described has not yet been carried out.

Special sensors found on UAS can alter the pilot's view. The best systems would involve RGB cameras, infrared or thermal cameras, and light detection and ranging (LiDAR). These systems can create a reference image that humans readily understand and that would not limit flight. Some current UAS employ audible systems that annunciate flight parameters in flight (battery life, altitude, etc., every few seconds). Integrating similar glass-avionics symbology into headset displays would reduce the need for extra systems by blending them into a single view. Night operations with UAS can be tricky given additional regulatory requirements and limited reference points, and the special sensors listed above can be tuned to create a more fitting view for the remote pilot. If operations are pre-planned, cached flight data in the form of orthomosaics, photogrammetry, and other point maps can generate a quick EFVS-style view that maps both real-world references and current conditions.
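
As a simple sketch of what "glass avionics in a headset" could look like in software, the snippet below draws a HUD-style telemetry overlay onto frames from an FPV camera feed using OpenCV. The fixed telemetry values and the capture device index are placeholders; in practice the numbers would come from a live MAVLink stream like the one shown earlier.

```python
# Minimal sketch: overlaying glass-avionics-style symbology onto an FPV video
# frame with OpenCV. Telemetry values here are placeholders for illustration.
import cv2

def draw_overlay(frame, alt_m, battery_pct, heading_deg):
    """Draw a simple HUD-style readout in the corners of the frame."""
    h, w = frame.shape[:2]
    green = (0, 255, 0)  # classic EFVS/HUD green (BGR)
    font = cv2.FONT_HERSHEY_SIMPLEX

    cv2.putText(frame, f"ALT {alt_m:5.1f} m", (10, 30), font, 0.8, green, 2)
    cv2.putText(frame, f"BAT {battery_pct:3.0f} %", (10, 60), font, 0.8, green, 2)
    cv2.putText(frame, f"HDG {heading_deg:3.0f}", (w - 160, 30), font, 0.8, green, 2)

    # A fixed horizon reference line through the centre of the image.
    cv2.line(frame, (w // 2 - 60, h // 2), (w // 2 + 60, h // 2), green, 2)
    return frame

cap = cv2.VideoCapture(0)  # FPV receiver exposed as a capture device (assumption)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = draw_overlay(frame, alt_m=42.0, battery_pct=76, heading_deg=270)
    cv2.imshow("FPV with overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```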

The human factors and physiological effects of headsets and the surrounding AR and VR technologies are interesting subjects and could also be explored for flight. These considerations include spatial orientation, nighttime effects (autokinesis, flicker vertigo, false horizons, etc.), the human body's reaction to prolonged headset use, and more. Other considerations could be made to raise industry standards and requirements, driven more by industry and society than by the FAA. Finally, expanding NextGen roadmaps (future aviation safety, efficiency, and capacity) could help with research and development (R&D) of systems crucial to UAS and their operations, possibly including headset technologies for flight.

To conclude, a vast range of component systems operates within manned aircraft and UAS operations. AR/VR systems could be further refined by combining the resources already available to create optimized flights for commercial BVLOS operations. Finally, other considerations that need further exploration include health, industry standardization, continued R&D, and reining in prices so these systems fit better into operations.
