How does AI transform aerial combat?
In times when aerial combat was a major deciding factor in war, a fighter pilot with five or more victories in close-range combat (also called dogfighting) was considered an ace. Now DARPA, the research and development arm of the U.S. military, wants AI to accumulate victory ratios of 50-to-1 or more through its Air Combat Evolution (ACE) program. DARPA and several contractors, including Dynetics, a subsidiary of Leidos, hope the multi-phased ACE program will demonstrate AI can reliably outperform human pilots, starting with this month's AlphaDogfight Trials, a tournament-style contest anticipated as one of the next great AI competitions between man and machine.
Kevin Albarado, a senior engineer at Dynetics and chief engineer on the program, said dogfighting is surprisingly well-suited for algorithms because it's a bounded task with defined goals and measurable outcomes. He believes the advanced AI used on ACE could create a significant imbalance in favor of the U.S. because of its ability to process huge amounts of data from the various sensors on modern military aircraft and act on split-second decisions.
This sort of autonomy is the technology backbone that will enable the Defense Department's vision of Mosaic Warfare, which relies on many diverse pieces to confuse and overwhelm the adversary. In this type of combat, human pilots become cockpit-based commanders, orchestrating the larger battle while trusting AI to pilot their own planes. By networking unmanned platforms, assets far more expendable than human pilots, into a system of systems, these commanders can amass forces more easily and affordably to present greater complications to the enemy. DARPA compares this new role to a football coach calling plays based on the players on the field, a vision that requires trust in their various skills. To learn more, we welcome Albarado.
Q: Why train AI to dogfight when conventional wisdom says these types of battles are obsolete? What makes this use case so relevant?
Albarado: Dogfighting represents the pinnacle of air combat. Even if we don’t expect it to be heavily utilized in the future, it still represents the most difficult and stressing combat environment a pilot can face. It requires a high cognitive workload. It’s a highly dynamic scenario. The situation can turn in an instant. If you can competently dogfight against increasingly competent and complex adversaries without killing yourself or your wingmen, you can be trusted to fly any other air combat mission a commander might ask of you. In a similar vein, employing AI against increasingly skilled adversaries is primarily focused on building trust that AI can handle high-intensity combat.
Dogfighting represents the most difficult and stressing combat environment a pilot can face.
Kevin Albarado, Sr. Aerospace Engineer
Q: What is your team's primary role on the ACE program?
Albarado: Our primary role is to develop the AI that enables pilots to apply their tactical expertise to swarms of vehicles as a commander in a battle management role. If you’re a pilot who can delegate the more mechanical aspects of flying your plane to a machine, what are you then capable of doing? Now you can oversee other aircraft and significantly multiply your effectiveness in battle. In fact, we are aiming for kill ratios as high as 30-to-1 and even 50-to-1.
As commander, you don't decide in detail what each plane should do; rather, you give them very high-level objectives to carry out autonomously. Dynetics' role on ACE is to scale the single-aircraft autonomy algorithms up to handle collaboration among a large force in very complex scenarios while achieving a high kill ratio. Our job is to make an AI competent enough for the pilot to stay engaged and manage the larger battle as it unfolds.
This requires the pilot to trust the AI to expertly handle the flying, freeing up cognitive workload so the pilot can focus on dynamic battle management, where human creativity, interpretation of intent, and other legal, moral, and ethical judgments are best left to a person.
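To make "very high-level objectives" concrete, here is a minimal sketch of what such a tasking message might look like in software. Everything in it, the Objective names, the Tasking fields, and the assign helper, is a hypothetical illustration rather than an ACE interface: the commander specifies the what, and each vehicle's autonomy works out the how.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Objective(Enum):
    """Hypothetical high-level tasks a cockpit-based commander might assign."""
    DEFEND_POINT = auto()
    ESCORT = auto()
    INTERCEPT = auto()

@dataclass
class Tasking:
    """One objective handed to one vehicle; the 'how' is left to its autonomy."""
    vehicle_id: str
    objective: Objective
    area: tuple  # (lat, lon) the objective applies to

def assign(flight, objective, area):
    """Commander issues a single objective to an entire flight; each
    vehicle's onboard autonomy decides how to maneuver to satisfy it."""
    return [Tasking(vehicle_id=v, objective=objective, area=area) for v in flight]

taskings = assign(["uav-1", "uav-2", "uav-3"], Objective.INTERCEPT, (34.7, -86.6))
```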
Dogfighting represents the pinnacle of air combat, making it an ideal use case to test our most advanced AI. (Image from DARPA)
Q: What's the biggest challenge in building this trust?
Albarado: The biggest builder of trust is for the warfighter to have experience with the autonomous system, to interact with it and see that it performs the way it is expected to perform. Deviating from those expectations reduces trust in any system, especially if it leads to poor outcomes. So the biggest challenge is making an AI that is competent even in the more complex scenarios but retains a high degree of effectiveness in simpler ones. As AI pushes into more complex and sophisticated domains, it's really important to build trust systematically over progressively harder scenarios so that humans have sufficient time and evidence to gain and retain trust. Of course, we're pushing the envelope with DARPA, but doing it in such a way as to be a convincing pathfinder for future AI efforts.
Q: How do you measure or quantify trust?
Albarado: Our ability to keep the cockpit-based commander’s attention on the human-machine interface (HMI) depicting the larger air battle is part of how you measure how much he or she trusts the AI flying the plane. It would be akin to getting in a self-driving car and having a computer screen playing a movie. The more you’re engaged in that movie, the more you trust that car to drive itself. If instead you are looking at the road or monitoring what the car is doing, you likely don’t trust the autonomous car.
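As a rough sketch of how that attention-based proxy could be quantified (purely illustrative; the gaze labels and metric are assumptions, not the program's actual instrumentation), you could log where the pilot is looking at a fixed sampling rate and compute the fraction of samples spent on the battle-management display:

```python
def hmi_attention_ratio(gaze_samples):
    """Fraction of gaze samples spent on the battle-management HMI.

    gaze_samples: one label per sampling interval, e.g. "hmi",
    "out_the_window", or "flight_instruments". A ratio near 1.0
    suggests the pilot trusts the AI to fly the plane; a ratio
    near 0.0 suggests the pilot is supervising it instead.
    """
    if not gaze_samples:
        return 0.0
    return sum(1 for s in gaze_samples if s == "hmi") / len(gaze_samples)

print(hmi_attention_ratio(["hmi", "hmi", "out_the_window", "hmi"]))  # 0.75
```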
Q: How do human pilots learn to dogfight? Will AI learn in a similar way?
Albarado: A human pilot doesn't just get in the cockpit of a fighter jet and start dogfighting. You build up to it, starting with takeoff and landing. Then you learn how to get into proper position against an easy, slow, straight-flying target. Once you master that, you've earned enough trust from your leadership to move the needle up a little more. Perhaps next, you perform the same mission, only this time against a target that's maneuvering. Once you master that, you move up and learn how to fly with your wingman. You scale from there with 2 vs. 2 battles, then progress to 4 vs. 4 and so on. As a human, if you can prove time and again that you can handle those scenarios, your commanding officer is going to trust you and your wingman to get the job done in a wider range of scenarios. The overarching hypothesis of the ACE program is that we can build trust with AI in the same manner. So it's AI that is now in the cockpit as the rookie pilot, having to prove its trustworthiness to the skilled human pilot in the actual seat.
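In machine learning terms, that build-up is a curriculum. Here is a minimal sketch of the idea, with scenario names and an 80% promotion threshold chosen purely for illustration (assumptions, not ACE specifics): train against the current scenario until the agent's win rate clears the bar, then advance it to the next, harder one.

```python
# Minimal curriculum-training loop: the agent is promoted to a harder
# scenario only after it masters the current one, mirroring how a rookie
# pilot earns progressively harder training missions.
CURRICULUM = [
    "straight_flying_target",  # easy, non-maneuvering adversary
    "maneuvering_target",
    "1v1_dogfight",
    "2v2_with_wingman",
    "4v4_engagement",
]

WIN_RATE_TO_ADVANCE = 0.80  # illustrative mastery threshold

def train_through_curriculum(agent, run_episode, episodes_per_eval=100):
    """run_episode(agent, scenario) runs one training episode (the agent
    learns during it) and returns True on a win; both arguments are
    hypothetical stand-ins for a real training harness."""
    for scenario in CURRICULUM:
        win_rate = 0.0
        while win_rate < WIN_RATE_TO_ADVANCE:
            wins = sum(run_episode(agent, scenario) for _ in range(episodes_per_eval))
            win_rate = wins / episodes_per_eval
        print(f"Mastered {scenario} at {win_rate:.0%}; advancing.")
```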
Q: What’s the primary AI technique you’re using in the Dynetics phase of the ACE program?
Albarado: We're using a lot of reinforcement learning. It's not unlike getting your kids to achieve small objectives you know will lead to some bigger objective, like putting a few toys back in the bin as a step toward eventually being able to clean an entire room on their own, or at least having the skill to do so, regardless of how many times they have to be told. At its core, reinforcement learning shares a foundation with traditional machine learning techniques. For a typical machine learning problem, you have a big data set with inputs and desired outputs, and your goal is to develop a model that predicts the outputs given the inputs. Reinforcement learning is the same problem, only we don't have the data yet. Through trial and error, we develop policy models that dictate actions (the output) given a current state (the input) to maximize some reward.
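To make the state, action, and reward loop concrete, here is a tiny tabular Q-learning sketch on a toy one-dimensional chase: through trial and error alone, the agent learns a policy that moves it toward a target position. The problem, rewards, and hyperparameters are all illustrative assumptions; ACE's state and action spaces are vastly richer.

```python
import random

# Toy problem: agent at position 0-9 learns to reach the target at 9.
# State = agent position; actions = move left (-1) or right (+1).
N_STATES, ACTIONS, GOAL = 10, (-1, +1), 9
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else -0.01  # reward shapes the learned behavior
        # Q-learning update: nudge Q toward reward + discounted best future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# The learned policy model: the best action (output) for each state (input).
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)}
```

In a real dogfight the state would be the full sensor picture and the actions stick-and-throttle commands, but the mechanics are the same: act, observe the reward, update the policy.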
AUTHOR
Brandon Buckner, Senior Editorial Manager
Brandon is a writer and content marketer based in the Washington, D.C. area. He loves to cover emerging technology and its power to improve society.
POSTED
August 5, 2020
ESTIMATED READ TIME
4 minutes
TAGS
Q&A
Artificial Intelligence