The US Air Force is making rapid progress on AI-piloted fighter jets

During testing in December, two AI programs were loaded into the system: the Air Force Research Laboratory’s Autonomous Air Combat Operations (AACO) and the Defense Advanced Research Projects Agency’s (DARPA) Air Combat Evolution (ACE). AACO’s AI agents focused on combat against a single opponent beyond visual range (BVR), while ACE focused on dogfight-style maneuvers against a closer, “visible” simulated enemy.

While VISTA requires a certified pilot in the rear cockpit as a back-up, during test flights an engineer trained in AI systems manned the front cockpit to deal with any technical issues that arose. In the end, these problems were minor. While unable to elaborate on the intricacies, DARPA program manager Lt. Col. Ryan Hefron says any setbacks were “to be expected when transitioning from virtual to live.” Overall, this was an important step towards achieving Skyborg’s goal of getting autonomous aircraft into the air as soon as possible.

The Department of Defense emphasizes that AACO and ACE are designed to complement human pilots, not replace them. In some cases, the AI co-pilot systems could act as a support mechanism for pilots in active combat. Because AACO and ACE can analyze millions of data inputs per second and take control of the aircraft at critical moments, that support could prove vital in life-or-death situations. For more routine missions that do not require human intervention, flights could be fully autonomous, with the aircraft’s nose section swapped out when a cockpit is not needed for a human pilot.

“We are not trying to replace pilots, we are trying to augment them, to give them an extra tool,” says Cotting. He draws an analogy to soldiers of past campaigns who rode horses into battle. “The horse and the human had to work together,” he says. “The horse can run the track very well, so the rider doesn’t have to worry about getting from point A to point B. His brain can be freed up to think bigger thoughts.” For example, says Cotting, a junior pilot with 100 hours of cockpit experience could artificially gain the same advantage as a much higher-ranking officer with 1,000 hours of flight experience, thanks to AI augmentation.

For Bill Gray, chief test pilot at the USAF Test Pilot School, integrating AI is a natural extension of the work he does with human students. “Every time we [pilots] talk to engineers and scientists about the difficulties of training and qualifying AI agents, they usually treat this as a new problem,” he says. “It bothers me, because I’ve been training and qualifying highly nonlinear and unpredictable natural intelligence agents – students – for decades. For me, the question isn’t, ‘Can we train and qualify AI agents?’ It’s, ‘Why can we train and qualify humans, and what can this teach us about how to do the same for AI agents?’”

Gray believes that AI is “not a miracle tool that can solve all problems,” but rather something that should be developed through a balanced approach, with built-in safety measures to avoid costly accidents. Over-reliance on AI – a “reliance on autonomy” – can be dangerous, Gray says, pointing to the failures of Tesla’s Autopilot program despite Tesla’s insistence that the driver remain behind the wheel as a backup. Cotting agrees, calling the ability to test AI programs in VISTA a “risk reduction plan.” By training the AI on conventional systems such as the VISTA X-62, rather than building an entirely new aircraft, automatic limits and, if necessary, intervention by the safety pilot can help keep the AI from endangering the aircraft as it learns.
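To make the idea of “automatic limits” concrete, here is a purely illustrative sketch – not the actual VISTA X-62 software, and every name in it (EnvelopeGuard, Command, the specific limit values) is an assumption for the example. It shows the general pattern described above: an AI agent’s control commands pass through a guard that enforces a fixed safety envelope, and any command outside that envelope disengages the agent so the safety pilot can take over.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Command:
    """A simplified control command from the AI agent (hypothetical fields)."""
    pitch_deg: float  # commanded pitch attitude, degrees
    roll_deg: float   # commanded roll attitude, degrees
    g_load: float     # commanded load factor, g


class EnvelopeGuard:
    """Pass AI commands through only while they stay inside a fixed safety envelope."""

    def __init__(self, max_pitch: float = 25.0, max_roll: float = 60.0, max_g: float = 7.0):
        # Illustrative limits only; real envelope protection would be far more involved.
        self.max_pitch = max_pitch
        self.max_roll = max_roll
        self.max_g = max_g
        self.ai_engaged = True

    def filter(self, cmd: Command) -> Optional[Command]:
        """Return the command if it is within limits; otherwise disengage the AI
        and return None to signal that the safety pilot should take control."""
        if (abs(cmd.pitch_deg) > self.max_pitch
                or abs(cmd.roll_deg) > self.max_roll
                or cmd.g_load > self.max_g):
            self.ai_engaged = False
            return None
        return cmd


# Example: a benign command passes through; an aggressive one trips the guard.
guard = EnvelopeGuard()
print(guard.filter(Command(pitch_deg=10.0, roll_deg=30.0, g_load=4.0)))  # within limits
print(guard.filter(Command(pitch_deg=40.0, roll_deg=30.0, g_load=4.0)))  # None -> pilot takes over
print(guard.ai_engaged)  # False after the violation
```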
