From @Tipu7
“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This is how artificial intelligence (AI) was first conceived in 1956 at a conference held at Dartmouth College. In general, AI is the capability of computer systems to undertake functions that are often associated with human intelligence (HI). It uses fast, iterative algorithms to process large volumes of data and automatically uncover new patterns. This capability offers immense potential in every aspect of the modern world, including the military.
Elon Musk once said, “Robots will be able to do everything better than us.” His remarks are not far-fetched, and a growing number of experts now share a similar belief. AI is gradually outperforming HI in a multitude of tasks. In 1997, IBM’s Deep Blue computer defeated the world chess champion, Garry Kasparov. In 2016, Lee Sedol, one of the world’s top players of the Asian game Go, was beaten by Google DeepMind’s AlphaGo system. A similar pattern is now being observed in the military domain. In August 2020, during the AlphaDogfight Trials organised by the US Defense Advanced Research Projects Agency (DARPA), an AI algorithm outclassed a human pilot in a virtual dogfight between two F-16 fighter aircraft. The event symbolised the maturity AI has attained and the growing feasibility of its employment in the military sphere.
The distinction between automated and autonomous systems is also essential. A system that operates strictly within a pre-defined set of rules is termed an automated system. In contrast, a system designed to act independently of pre-defined rules, adapting its behaviour to the situation, is called an autonomous system. Loitering munitions with autonomous engagement features, often discussed under the rubric of Lethal Autonomous Weapon Systems (LAWS), such as Israel’s Harop and the US Switchblade, have already been used by Azerbaijan in Nagorno-Karabakh and by Ukraine in the Russia-Ukraine conflict, respectively, with encouraging results.
Launch of a Harop loitering munition from a truck-based launcher.
Launch of a Switchblade loitering munition from a man-portable launcher.
A variety of military systems with varying degrees of autonomy are now being developed by defence industries around the globe. The US Long Range Anti-Ship Missile (LRASM), which can autonomously plot its flight path in a communication-denied environment, is an example of AI incorporated into a traditional weapon system. Similarly, AI supports the sensor fusion suite of the F-35 fighter aircraft by automating Mission Data Files (MDF) and transforming a static threat library into a dynamic threat knowledge base. Moreover, the MQ-25 Stingray, the first carrier-based unmanned aerial tanker, uses automation to execute the complex carrier landing sequence.
An MQ-25 carrier-based drone landing on an aircraft carrier.
It can be argued that the extent of autonomy will determine the feasibility of AI-based weapons in future warfare. Depending on the nature of human interaction within the Observe, Orient, Decide, and Act (OODA) loop, autonomy can be divided into three categories: human-in-the-loop (semi-autonomy), human-on-the-loop (supervised autonomy), and human-out-of-the-loop (full autonomy). The level of human involvement in the OODA loop determines the degree of human control over an AI-based system.
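As a toy illustration of these three categories (not any fielded system's logic; the function and parameter names are invented for this sketch), the "Decide" step of the OODA loop can be modelled as a gate whose behaviour depends on where the human sits:

```python
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = "semi-autonomy"        # human must approve each engagement
    HUMAN_ON_THE_LOOP = "supervised autonomy"  # system acts unless the human vetoes
    HUMAN_OUT_OF_THE_LOOP = "full autonomy"    # system acts with no human gate

def may_engage(level, human_approves=False, human_vetoes=False):
    """Toy OODA 'Decide' gate: returns True if the system may proceed to 'Act'."""
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return human_approves          # no action without explicit approval
    if level is AutonomyLevel.HUMAN_ON_THE_LOOP:
        return not human_vetoes        # acts by default, human can intervene
    return True                        # full autonomy: decision is machine-only

print(may_engage(AutonomyLevel.HUMAN_IN_THE_LOOP))                    # False
print(may_engage(AutonomyLevel.HUMAN_ON_THE_LOOP, human_vetoes=True)) # False
```

The sketch makes the control gradient concrete: moving the human from "in" to "on" to "out" of the loop flips the default from "hold fire" to "engage".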
AI technologies are rapidly being integrated into a wide range of military applications. In general, AI offers six major advantages in the military domain. First, AI significantly speeds up decision-making. Threats that unfold faster than human judgment can respond can be neutralised by incorporating more autonomy into the OODA loop. This feature is especially important for defensive applications such as missile defence systems.
Second, AI mitigates manpower constraints. A smart machine can perform tasks more effectively in high-risk environments, keeping personnel out of harm’s way while improving the overall efficiency of the force.
Third, AI enhances situational awareness by incorporating Big Data into command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) systems. Owing to their processing speed, AI-based systems can digest large volumes of data quickly and paint a more precise battlespace picture. This thins the fog of war and improves net combat efficiency.
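To make the fusion idea concrete, here is a minimal sketch, not any specific C4ISR implementation, using the classical inverse-variance weighting rule: reports from more accurate sensors (lower variance) get more influence on the combined track. The function name and sample figures are invented for illustration.

```python
def fuse_tracks(reports):
    """Fuse one-dimensional position reports, given as (position, variance)
    pairs, by inverse-variance weighting: weight = 1 / variance."""
    weights = [1.0 / var for _, var in reports]
    fused = sum(w * pos for (pos, _), w in zip(reports, weights)) / sum(weights)
    return fused

# Hypothetical example: a precise radar track and a noisier secondary report.
radar = (100.0, 4.0)    # position estimate, variance
esm = (110.0, 16.0)
print(fuse_tracks([radar, esm]))  # 102.0 — pulled mostly toward the radar
```

The fused estimate sits closer to the low-variance sensor, which is exactly the behaviour a fusion suite needs when combining sensors of unequal quality.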
Fourth, AI boosts the performance of weapon systems themselves. Data-linked weapons, particularly long-range cruise missiles, will rely increasingly on AI to improve target identification and strike precision in electronically contested conditions. AI is already being integrated into unmanned systems to sustain operations in communication-denied environments.
Fifth, machine learning can be used in training to close human skill gaps. AI can generate unpredictable scenarios in wargames and training exercises to sharpen trainees’ skills. The same techniques can also help optimise manned-unmanned teaming operations.
And lastly, AI can strengthen cyber defence. Machine-learning systems can adapt with little human involvement, spotting vulnerabilities and anomalies in cyberspace that escape human analysts.
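One common building block behind such claims is statistical anomaly detection: learn what "normal" traffic looks like, then flag deviations. The sketch below, a deliberately simple z-score detector with invented data, is only an illustration of the principle, not a production defence tool.

```python
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Flag observed values lying more than `threshold` standard deviations
    above the mean of a baseline of normal measurements."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if (x - mean) / stdev > threshold]

# Hypothetical baseline: normal hourly failed-login counts on a network.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
print(flag_anomalies(baseline, [5, 6, 40]))  # [40] — a possible brute-force burst
```

Real systems replace the z-score with learned models, but the workflow, baseline, deviation, alert, is the same, and it runs continuously without a human scanning the logs.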
There are, however, several risks associated with building autonomy into military systems. Autonomous systems are vulnerable to errors and miscalculation, are constrained by the quality and quantity of data they receive, may lower the threshold for the use of force, struggle with tasks outside their narrow design, and can produce inadvertent escalation. Moreover, machines lack human cognition and the ability to understand context. Unlike humans, AI-based machines merely carry out their assigned functions to the best of their capacity.
To mitigate these risks, there is an emerging need to formulate safeguards against the inadvertent dangers of AI militarisation. Such weapons should be developed and used within a mutually agreed framework and in compliance with International Humanitarian Law. Semi-autonomous and supervised autonomous systems should be preferred over fully autonomous ones to preserve the requisite level of precaution. The key, then, is the extent of human control over these machines: if controls are effective, even AI-based machines can be made to comply with human regulation.
The rapid proliferation of AI technologies in the military indicates that the benefits AI offers in this domain are too tempting to ignore. It is therefore very likely that the future battlespace will be dominated by autonomy. AI-based technologies will redefine the nature of force projection and, with it, power politics.