
The Effect of Hypersonic Missile Reality on Surface Battles in the Seas

RED SKY IN MORNING: NAVAL COMBAT AT THE DAWN OF HYPERSONICS / FEBRUARY 28, 2019

USS Stark steams quietly near Bahrain on May 17, 1987. The Tanker Wars between Iran and Iraq are ongoing, and the United States is trying to keep commerce flowing in the Persian Gulf. Unbeknownst to the crew, it is the final hour of life for 37 sailors onboard the Stark.

20:00 hours: USS Coontz gains contact on an Iraqi F-1 Mirage attack aircraft and sends the Stark the track.

20:05 Range: 200 nautical miles. Stark’s commanding officer is informed.

20:55 The commanding officer asks why the Mirage isn’t on radar. The operator increases range scales. Target acquired. Range: 70 nautical miles.

21:02 The Mirage illuminates Stark with her radar, locking on. “Request permission to warn the incoming bogey,” the radar operator asks the tactical action officer — the person responsible for the defense of the ship. He replies, “No, wait.”

21:05 Range: 32.5 nautical miles. The Mirage turns to intercept Stark. The team in the Combat Information Center misses the turn.

21:07 Range: 22.5 nautical miles. The Iraqi pilot launches the first Exocet anti-ship cruise missile. Two minutes to impact. The forward lookout on Stark sees the launch but misidentifies it as a distant surface contact.

21:08 The Stark begins hailing the Mirage on the radio. The Iraqi pilot does not respond; he is busy launching his second missile. Stark’s systems detect another radar lock on their ship. The tactical action officer gives permission to arm the counter-measure launchers and place the Phalanx close-in weapons system, which would mount the final defense for the ship, on standby. One minute to impact.

21:09 Stark locks onto the Mirage with her radar. The lookout reports an inbound missile, but the report is not relayed to the tactical action officer. Seconds later, the first Exocet slams into Stark but fails to explode; Lt. Art Conklin, the damage control assistant, would later recall the moment.

21:09:30 The second Exocet rips into Stark and explodes. In less than a minute, nearly a fifth of the crew is killed and many more wounded as the fuel from the unexploded first missile continues to burn. Heroic efforts throughout the night are all that keep Stark from sinking.

The effects of the Exocet, the most lethal anti-ship cruise missile at the time the above story took place, were devastating for a ship that, by the Navy's ship survivability standards, was not expected to survive even a single missile hit. But the Navy will not be so lucky in the future. As the team on the Stark demonstrated, even a capable crew can fail to adequately respond to the speed of naval combat in the missile age. An hour of confusing and threatening behavior by the Iraqi aircraft was followed by two short minutes to realize that Stark was under attack.

The inability to keep up with the increasing speed of naval combat is only going to get worse. The advent of hypersonic weapons, particularly anti-ship cruise missiles, represents a grave threat to U.S. surface forces. Today, the Kalibr missile system, used by Russia, China, and many other countries, accelerates to Mach 3 in the minutes prior to impact. It took almost two minutes for the subsonic Exocets to cover the 20 nautical miles to Stark; the Kalibr missile could cover that same distance in a mere 45 seconds. Russia announced initial operational capability for the Mach 8 Zircon cruise missile in mid-2017 as well.
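The timing claims above can be sanity-checked with simple arithmetic. Here is a minimal sketch assuming constant speed and a sea-level speed of sound of roughly 340 m/s; real missiles accelerate, climb, and maneuver, which is why the article's 45-second figure for the Kalibr is longer than the constant-Mach-3 result below.

```python
# Rough time-to-cover calculator for anti-ship missiles.
# Assumes constant speed and a sea-level speed of sound of ~340 m/s;
# real missiles accelerate and fly varied profiles, so treat these
# numbers as illustrative, not as engagement data.

NM_TO_M = 1852.0        # meters per nautical mile
SPEED_OF_SOUND = 340.0  # m/s, approximate at sea level

def time_to_cover(distance_nm: float, mach: float) -> float:
    """Seconds for a missile at a constant Mach number to cover distance_nm."""
    return distance_nm * NM_TO_M / (mach * SPEED_OF_SOUND)

for name, mach in [("Exocet (subsonic)", 0.93),
                   ("Kalibr terminal sprint", 3.0),
                   ("Hypersonic threshold", 5.0),
                   ("Zircon (claimed)", 8.0)]:
    print(f"{name:24s} Mach {mach:4.2f}: {time_to_cover(20, mach):6.1f} s over 20 nm")
```

At a constant Mach 0.93, the 20-nautical-mile run takes roughly two minutes, matching the Stark timeline; the supersonic and hypersonic rows shrink the window to tens of seconds.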

Hypersonic missiles, which travel at speeds greater than Mach 5, shorten John Boyd’s famous observe-orient-decide-act loop, making it nearly impossible for human minds and teams to even comprehend the information, let alone defend against a short-range attack. These new weapons represent a paradigm shift in naval combat. When discussing salvos of anti-ship cruise missile attacks against U.S. warships, some brush off the threat, intimating that various anti-missile missiles and counter-measures will protect us. But a closer examination of U.S. defensive systems and adversaries’ offensive missiles calls such sanguine assessments into question. Combat leaders should strive to more deeply understand the nature of the threat — specifically the science behind naval combat and missile attacks. The mathematics shows that in a missile war, mere survival is difficult, let alone success. It only gets worse as the number of missiles inbound increases. This problem should prompt the Navy to take a hard look at how it will defend against this threat and how it should train its sailors — examinations that are underway but not adequate to address the magnitude of the current threat. A review of the salvo equations, another historical case study, and a logical next technological step are in order.

Doing the Math

Salvo equations provide a mathematical model of combat losses in missile warfare by relating the number of missiles fired and the probabilities of a miss, of shooting missiles down, and of damage resulting in a mission or catastrophic kill. The models are simple and can be implemented on a spreadsheet, making it easy to toy with “what if?” scenarios to see how each element of the equation can affect survival.

Capt. (Ret.) Wayne Hughes, the father of modern naval tactics, analyzed the historical record of missile attacks against various ships — both merchant shipping vessels and warships. He found that warships that failed to provide a defense when attacked faced a 68 percent probability of being hit. Mounting a credible defense reduced the chance of being hit to 26 percent. The salvo equations can be further analyzed to show the effects of poor human performance, improper use of radar and detection systems, and the inability to eliminate all incoming missiles in the salvo. Hughes sums up the conclusions succinctly: Attacking effectively first is paramount. Failing that, if the ship expects to survive and avoid taking hits, the human team has to mount a timely and effective defense, which is increasingly challenging and will only get more difficult as hypersonic weapons drive faster engagement speeds and observe-orient-decide-act loops.
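The basic salvo model described above really is spreadsheet-simple; it can be sketched in a few lines. The variable names follow the common textbook form of Hughes' equation, and the scenario numbers in the example are invented for illustration, not drawn from his historical data set.

```python
# A minimal sketch of the basic salvo equation popularized by Hughes'
# Fleet Tactics. The example numbers are illustrative assumptions.

def defenders_out_of_action(A, alpha, B, b3, b1):
    """
    A     : number of attacking units
    alpha : well-aimed missiles fired per attacker
    B     : number of defending units
    b3    : incoming missiles each defender can shoot down
    b1    : hits required to put one defender out of action
    Returns the (possibly fractional) number of defenders put out of
    action, clamped to the interval [0, B].
    """
    leakers = max(A * alpha - B * b3, 0)  # missiles that get through the defense
    return min(leakers / b1, B)

# Illustrative: two attackers fire four missiles each at a lone defender
# that can stop five inbound missiles; two hits put it out of action.
print(defenders_out_of_action(A=2, alpha=4, B=1, b3=5, b1=2))  # 1.0 (defender lost)
```

Because the defense subtracts before damage divides, small changes in b3 (defensive effectiveness) or alpha (salvo size) swing the outcome sharply — exactly the "what if?" sensitivity the paragraph describes.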

Naval Combat in the Missile Age

S.L.A. Marshall, the eminent Army historian, asserted in his landmark 1947 study Men Against Fire that, even in the best infantry companies, only 25 percent of soldiers actually fired their weapons in combat. The statistic seems bizarre and counterintuitive, but Marshall provides solid evidence from World War II. Even in some of the grimmest, close-quarters combat in the Pacific islands and during the landings at Normandy, only an average of 15 percent of infantrymen actually fired their weapons. The underlying reasons most soldiers did not engage, according to Marshall, were a lack of definite targets, worries about fratricide, unwillingness to reveal their position, and the fog of war.

Do Marshall’s conclusions hold true onboard a modern warship? Stark proved that a ready, well-trained Navy combat team could similarly fail to fire its weapons despite sufficient indications of impending attack. The HMS Sheffield (D80), sunk by the Argentinians during the Falklands War in May 1982, is another example of warships failing to engage in combat during wartime.

In both cases, watch teams reacted with human imprecision. Psychological biases, team dynamics, and personalities all play a role. A timid watch officer may hesitate to call the captain, even though he knows he should, let alone launch weapons or counter-measures to defend the ship. As the Navy moves into the age of hypersonic weapons, the lack of time available to observe, orient, decide, and act on the battlefield will overwhelm even the best watch teams. Humans simply cannot cope with the speed of future naval combat.

The Vincennes: Decision-Making in the Fog of Naval Warfare

The next war, fought at an accelerating cognitive tempo thanks to hypersonic weapons, cyber effects, and an even more saturated information environment, will require even more of human watch teams. The accidental downing of Iran Air Flight 655 on July 3, 1988 by the USS Vincennes demonstrates the dangers of a more chaotic combat environment and the increased cognitive demand that it requires.

03:30 USS Elmer Montgomery detects 13 Iranian gunboats nearby. They split into three groups, and one takes station off Montgomery’s port quarter.

04:11 Montgomery reports 5-7 explosions to the north near merchant shipping. Vincennes is ordered to assist. She orders her helicopter, OCEAN LORD 25, on ahead.

06:15 OCEAN LORD 25 is attacked by rockets and small arms fire. Vincennes sets general quarters and all hands man their battle stations.

06:20 Vincennes takes tactical control of Montgomery.

06:39 Vincennes calls her operational commander and requests to engage. Permission is granted.

06:43 Vincennes opens fire with her main 5-inch guns. She comes under small arms fire from the gunboats, now inside 8,000 yards.

06:47 The radar operator gains a new air contact taking off from Bandar Abbas, 47 nautical miles away and heading for the ship. Classification unknown.

06:48 USS Sides also detects the aircraft, now designated Track 4131, and locks on with a missile on her forward launcher.

06:49 Vincennes begins challenging the aircraft continuously on international and military distress frequencies.

06:50 Someone reports Track 4131 as an Iranian F-14, despite it transmitting a civilian transponder code. The forward gun suffers a casualty and can no longer fire. The tactical action officer orders the continuous challenging of Track 4131 over the radio.

06:51 Vincennes reports her intention to engage the F-14 at 20 nautical miles to her operational commander. Vincennes takes tactical control of Sides. Vincennes begins maneuvering radically, using maximum rudder and speed in an attempt to keep the aft gun mount engaged on the Iranian gunboats. Books, loose equipment, and other items begin falling from the shelves around the ship.

06:52 Several watchstanders incorrectly report Track 4131 descending in altitude.

06:53 Track 4131 closes to 12 nautical miles, still on an intercept course with Vincennes. Vincennes requests to engage — granted.

06:54 Vincennes launches two missiles.

One minute later, both missiles destroy Iran Air Flight 655. The commanding officer of the Sides sees the explosions and the debris falling. It would be several hours, long after Flight 655 is reported overdue, before the mistake is realized.

In the inquiry that followed this real incident, the chairman of the Joint Chiefs of Staff concluded that, given the “pressure-filled environment” on board the Vincennes, the outcome was a “reasonable performance under the circumstances” and that “it is imperative to have an emotional and intellectual feel for that picture.” At the time, Vincennes had been battling several groups of Iranian fast-attack craft for three hours — each capable of damaging the ship and killing personnel, tracking an Iranian P-3 maritime patrol and reconnaissance aircraft flying a classic targeting profile to provide targeting information for Iranian attack aircraft, fighting with a gun mount out of commission, maneuvering violently to keep the aft gun shooting, and being tactically responsible for two additional warships (the Sides and the Montgomery).

This was a complex, cognitively taxing scenario for all involved, especially the decision-makers. The fog of war was thick. Fatigue from hours of close combat wore down the officers and sailors. Rear Adm. William M. Fogarty, the senior investigating officer, realized he needed professional advice from Medical Corps personnel specializing in combat stress to help analyze the interview and physical data for this event — the effects were that strong. He found that the tactical information coordinator, an enlisted watchstander in the combat information center, emerged as a de facto leader based on his perception of a weaker supervisor; his recommendations were accepted by all, and he firmly believed he was tracking an inbound Iranian F-14 — the Anchoring Effect in full force.

Both case studies show that Navy human teams, even with a reasonable expectation of combat, can take several minutes to recognize the situation and act. Factoring in hypersonic weapons, watch teams must now process the threat even more rapidly, take defensive measures, and perhaps, succeed in knocking the one incoming missile from the sky, to say nothing of a salvo of missiles. Factor in any friction, such as whether the aircraft detected is hostile or not, as in the case of Vincennes, and the decision cycle will be even further lengthened, assuming action is ordered at all.
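The time pressure described here can be made concrete with a back-of-the-envelope calculation. The 22-nautical-mile detection range and the 60-second human decision latency below are illustrative assumptions, not figures from the case studies.

```python
# Back-of-the-envelope decision window: flight time from detection to
# impact, minus an assumed human observe-orient-decide-act latency.
# The 22 nm detection range and 60 s latency are illustrative assumptions.

NM_TO_M = 1852.0
SPEED_OF_SOUND = 340.0  # m/s, approximate at sea level

def decision_window(detect_range_nm: float, mach: float,
                    human_latency_s: float) -> float:
    """Seconds left to engage after the humans finish deciding (negative = too late)."""
    flight_time = detect_range_nm * NM_TO_M / (mach * SPEED_OF_SOUND)
    return flight_time - human_latency_s

for mach in (0.9, 3.0, 8.0):
    print(f"Mach {mach}: {decision_window(22, mach, 60):+7.1f} s left to engage")
```

Under these assumptions, a subsonic missile leaves the watch team over a minute to engage after deciding; at Mach 3 and above, the window is already negative — the decision loop finishes after impact.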

Searching for a Solution

The related fields of artificial intelligence, machine learning, and deep learning have unique applications for naval warfare in the hypersonic age. Deep-learning machines formed the core of the ATHENA ship self-defense system in Peter Singer and August Cole’s Ghost Fleet, responding to incoming attacks with a precision and speed unachievable by human operators, automatically identifying, tracking, and assigning the best weapon to counter the threat, and, if in automatic mode, launching weapons to defeat it. Moving away from fiction, machine-learning algorithms would help the Navy baseline its systems and spaces to aid in damage control, detect and track contacts (especially in highly uncertain areas like sonar), and much more. Artificial intelligence algorithms can be trained to pair with humans to complete tasks like mission planning and developed to provide greater information, not just data, across the decision cycle. In particular, deep learning shows a remarkable ability to distill patterns across many sets of data. In the case of Vincennes, an algorithm could have parsed airline flight schedules or historical air traffic patterns and dispassionately evaluated Iran Air 655 as a non-threat. Similar pattern analysis would have aided Stark in identifying the attack profile presented by the Iraqi Mirage.
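As a toy illustration of the Vincennes point, the rule below cross-references an air track against a flight schedule. The class, the schedule entries, the transponder labels, and the tolerance are all invented for this sketch; a real system would fuse far more sources (kinematics, emitter data, historical traffic density).

```python
# A hypothetical sketch of the pattern check described above: flag an air
# track as a probable scheduled airliner when its transponder mode, origin,
# and timing match a published schedule. Every name and number here is
# invented for illustration.

from dataclasses import dataclass

@dataclass
class AirTrack:
    origin: str              # airfield the track departed from
    squawk_mode: str         # "civilian" (Mode III/C) or "military" (Mode II)
    minutes_after_hour: int  # departure time within the hour

# Invented schedule of civilian departures: (airfield, minute past the hour).
SCHEDULE = {("Bandar Abbas", 27), ("Bandar Abbas", 55)}

def probable_airliner(track: AirTrack, tolerance_min: int = 10) -> bool:
    """True if the track plausibly matches a scheduled civilian departure."""
    if track.squawk_mode != "civilian":
        return False
    return any(origin == track.origin and
               abs(track.minutes_after_hour - minute) <= tolerance_min
               for origin, minute in SCHEDULE)

track = AirTrack(origin="Bandar Abbas", squawk_mode="civilian", minutes_after_hour=47)
print(probable_airliner(track))  # True: within 10 minutes of a scheduled departure
```

A dispassionate check of this kind never gets tired, anchored, or rushed — which is precisely the failure mode the case studies document.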

The Navy has begun to invest in artificial intelligence, but the efforts have been lackluster. Writing in War on the Rocks recently, Connor McLemore and Hans Lauzen provided great advice for the Navy as it looks to invest heavily in artificial intelligence. Many other authors have already proposed applications of artificial intelligence in other maritime areas, but the service’s urgency to deploy algorithms has lagged significantly behind. The institution’s overall advance into artificial intelligence is everything you would expect from a major, cross-cutting new development effort in a military branch — unfocused and lacking vision. Any proposed “strategy” looks more like a shopping list than a coherent strategy linking a diagnosis of the problem with a guiding policy and set of coordinated actions to achieve it. Navy leaders have come out openly to discuss the pursuit of artificial intelligence, but those statements do not indicate that the Navy will fundamentally change the way that it develops and acquires software or delivers capability to sailors. Given the proposed timelines for fielding modernization packages to ships and aircraft, and for developing and maturing new technologies, the Navy’s pursuit of artificial intelligence will likely be insufficient to counter the threat. America’s adversaries are publicly disclosing the successes of their hypersonic weapons. Their new capabilities seem to be deploying with greater speed and agility than the Navy is capable of right now. Artificial intelligence algorithms are what we train them to be based on the data sets we have. Regardless of the Department of Defense’s recently released artificial intelligence strategy, the lack of a coherent vision for what the Navy’s research and development or acquisition enterprises should be building will only hamper efforts to deploy capabilities that are credible in the eyes of the warfighters.

At the dawn of hypersonic weapons, the Navy is woefully unprepared. Nearly a decade of budgetary cuts and nearly two decades of operations in support of the global war on terror have exhausted the fleet and slowed technological development. American sailors will be involved in missile combat in the future, whether from a terrorist-launched Kalibr missile or a classic state-on-state war, and lives will be lost. Just how many lives will depend on the willingness of the Navy to use America’s technological and intellectual might to pair man and machine against the threats. The Navy cannot afford to wait any longer to develop and deploy algorithms for offense and defense. Its very existence as a service depends on it.
https://warontherocks.com/2019/02/red-sky-in-morning-naval-combat-at-the-dawn-of-hypersonics/
 
TOWARDS POST-STRATEGY? THE HARD INTERVENTION OF ARTIFICIAL INTELLIGENCE ON MILITARY THOUGHT

Bulend Ozen

This essay considers the influence of artificial intelligence (AI) on the future of strategic thought. Specifically, it assesses the prospects of a new ‘post-strategy’ era dominated by AI and new technologies.

Introduction

Considering the adage “the rational answer is hidden in the right question,” there are critical questions that security and military experts and theorists need to answer: Is strategy transforming into post-strategy? Will the long-established doctrines of war surrender to the big data of artificial intelligence (AI)? Will businesses apply the accepted principles of the military thinkers of ‘classical war’ better than the military itself?

The great and rapid transformation of civilization caused by the 19th-century industrial revolution in Europe and the discovery of oil created destructive effects as well as opportunities in the military and security spheres. The strategy, tactics, and doctrinal literature that evolved over centuries of war has been threatened by the rapid change in security parameters from the 20th century onwards, as exemplified by the devastating effects of nuclear weapons.

Moreover, the threat of nuclear weapons disrupted the harmony between (political) ends, (strategic) ways, and (military) means in strategic efforts.[1] While states and armies had not yet fully adapted to this situation in the 20th century — with the half-life of their systems and equipment not yet reached — the security and military world entered a new, uncontrollable era in the 21st century with the introduction of ‘super’ advanced technology and AI. Soon, the destructive capacity of AI-powered military technology could be far greater than the total destructive effect of all previous capabilities combined. The question “In what direction are strategy and tactics evolving in such an unpredictable security environment?” must be the focus of security and military experts.[2]

In that sense, it should be noted that there is little general awareness that modern strategic thought has failed to pursue or appreciate current developments — a phenomenon that has repeated itself for centuries.[3]

Awakening to New Era

The triple design that produces, maintains, and develops strategy and tactics in war has not changed for centuries: namely, military organization, weapons-vehicles-equipment (materiel), and doctrine (MO-WVE-D). The strategic culture of every country’s army also lends this triple design a characteristic contribution and difference. The synergistic design that brought victory to countries and armies for centuries resembles a master chef — with his own style and secrets — in a perfect kitchen. However, in the coming period, computer software, coding, algorithms, big data, machine learning (ML), robotic task forces, and AI have the potential to constrain and shape ‘organic’ strategy and the synergistic triple design of MO-WVE-D.

The laser weapon, which today’s commanders accept as routine, would have been a miracle to the Prussian king and commander Frederick the Great in the 18th century, when scientists were first discovering electricity. Adapting the same timeline to today, it is not possible to predict what ‘miracles’ will be seen 200 years down the road. In the recent past, laboratories have transferred an artificial muscle to a human and a human muscle to a robot. All these hopeful but uncontrollable developments show that not only the body of the soldier but perhaps also his reasoning, his judgement, and his strategic and tactical thinking processes will be influenced, or even controlled, by algorithms in the future.

Despite technological advances in recent decades, generals and commanders still practice strategic and tactical maneuver under camouflage, in command and control vehicles, and on traditional military maps — essentially a continuation of the classical era. Although this centuries-old tradition has a mystical, traditional character, the commanders of the future will have to revise their purposes — think of Hans Delbrück’s ‘Niederwerfungsstrategie’ (destruction strategy)[4] or Clausewitz’s “Destruction of Military Power and Removal of Enemy Wills”[5] — because they will find fewer enemy units, weapons systems, and ammunition depots to take under fire on the battlefield. The human factor that generates strategy and tactics is shifting from the battlefield to offices, and from soldiers to software developers and operators. The type, capability, and task design of weapons are increasingly shaped by the enemy’s technological threat capabilities rather than by soldiers’ professional warrior abilities. Soldiers must approach this subject critically, without retreating into dogmatic positions such as “soldiers wage the war and commanders decide” or “civilians do not understand war,” and must deepen their cooperation with expert civilians to make correct predictions and receive sound scientific advice.

Organic Strategy or Artificial Strategy

It is possible to summarize, in the following two points, how AI and information technology can transform ‘strategy’ — and its field-level counterpart, tactics — into a ‘post-strategy’ under the pressure of advanced technology:

* The “thinking, producing and decision-making mechanism” required for the emergence of strategy, tactics, and doctrine is shifting from commanders’ minds to machine-based AI systems;

* The human (soldier) factor, which can degrade the hit ratio of any weapon, is giving way to more precise control and decision-making mechanisms governed by AI.

When these two points are taken together, it becomes clear that the “thinking, deciding and acting” chain carried out today by commanders and their troops will increasingly be fulfilled by AI decisions and by mixed (human, robot, cybernetic organism) troops. Given the reduction in error, soldiers will gladly accept algorithmic systems that reduce human losses on the battlefield, make commanders’ jobs easier, and at least prevent ‘logical’ mistakes. On the other hand, they can be expected to complain that such developments kill professionalism.

In the near future, AI systems may come to influence the organic ties among strategy, tactics, and doctrine, with both advantages and disadvantages. For the AI that directly affects these ties, experts generally posit a three-stage development: narrow artificial intelligence, general artificial intelligence, and super artificial intelligence.[6] In the narrow period we have just entered, it is possible to make some predictions about the coming ‘general AI’ period, but what the ‘super AI’ period will bring is not yet known. It can be assumed that these three stages apply to military artificial intelligence [(M)AI] as well.

The main problem for the future of war is who will hold the initiative and control over military judgement and decision-making. ‘General AI’ will continue to penetrate military weapons, vehicles, and equipment with increasing speed, leading to the inception of ‘military artificial intelligence’ [(M)AI]. These developments have been anticipated since ‘Deep Blue’ beat Kasparov at chess in 1997.

The staff officer’s craft of “offering the commander a decision that leaves no option other than accepting or refusing” will become a natural function of (M)AI in the future, especially at the tactical/operational level, forcing commanders to choose “the most strongly recommended algorithmic option.” This can create long-term negative effects on tactics and doctrine.

Since human decision-making cannot match the speed of AI, ‘(intelligent) speed’ will become the prerequisite tactical principle on the battlefield — determining victory for the attacker and survival for the defender. Moreover, it seems that responsibility will rest with AI systems more than with human minds. In this context, AI versions that prove as successful as commanders in tactical and operational actions will become objects of curiosity and imitation in the military circles of the future. It is worth noting that the late French theorist Paul Virilio predicted in the early 1980s that ‘speed’ would become one of the basic parameters of future war.[7]

AI operating at the tactical level will produce consequences that erode the commander’s thinking and decision-making processes by automatically offering the orders that the commander must give quickly during an operation. Recently, many international laboratory tests have supported the hypothesis that learning machines can carry out battlefield analyses indistinguishable from those of subject matter experts (SMEs).[8] This suggests that AI can develop its own ‘doctrine’ and data in the arena of strategy and tactics, promulgating ‘loyal’ algorithms[9] as in a simple chess game. In other words, the organic human mind, which plays with force, time, and space to produce strategy and tactics, has begun to share this task with AI. It is therefore imperative that military thinkers, SMEs, and professional soldiers working in national security explore and stay informed about new technological developments, including artificial intelligence, data mining, machine learning, and deep learning.

Considering the points above, the effect of AI on strategy, tactics, doctrine, and the military decision-making process in the future is depicted in the following figure.



Figure 1 - The Effect of Artificial Intelligence on Strategy, Tactics, Doctrine and Military Decision Making Process

Conclusion

Considering the possible effects of AI on future operations, it is still difficult to argue that a ‘post-strategy’ situation has replaced strategy, but we can argue that its symptoms have begun to appear. More data and practice are required to defend this claim, and the fact that AI in the security arena is still in its introductory (narrow) phase also matters. However, it is clear that the ‘organic minds’ that produce strategy, tactics, and doctrine cede ground day by day as strategy and tactics are increasingly digitized into AI algorithms. That today’s professional soldiers, security workers, and military theorists still ignore this issue does not change the situation.

There is no indication that the digitization of strategy and tactics differs across land, sea, and air operations. It is likely that AI and integrated robotics technology will develop simultaneously according to land, sea, air, special operations, and joint operational needs, and these needs will simultaneously influence the emergence of doctrine. The clearest studies on this subject today are conceptual studies that address operations and doctrine together in order to identify the requirements of the human-machine interface in reconnaissance and attack by drone swarms.[10]

The human factor — in other words, military command — gives strategy and tactics their greatest flexibility, aesthetics, and unpredictability in war. Commanders continuously face thinking and decision-making processes (i.e., the MDMP) while leading their troops. In this process, commanders can make rational, emotional, intuitive, autocratic, dependent, or immediate decisions, or postpone decision-making, especially when those decisions have potentially lethal consequences.[11] In the future, despite the predictable decisions of AI and learning machines, the existence of ‘organic’ strategy seems likely to continue thanks to the unpredictable decisions of commanders who use these decision-making styles. In any case, because decision-making times are shrinking, commanders will depend more on data screens and algorithms dictated by AI and will be forced (and constrained) by these factors when making tactical decisions.

Today, low- and mid-level military leaders still rely on command and control (C2) vehicles, camouflage, and traditional maps, and perform their duties with great motivation. In the near future, however, they may handle and manage operations remotely — away from the battlefront, in comfortable offices, perhaps in casual clothing, together with their teammates, IT specialists, and operators, even while eating pizza. There is no doubt that this will differentiate perceptions of strategy, tactics, doctrine, and military leadership, creating both destructive and ethical problems.

With the introduction of AI onto the battlefield, the number of ‘civilianized’ soldiers and ‘militarized’ civilians will increase. As this heterogeneous structure emerges, the number of ‘civilian’ generals who ‘understand’ strategy and tactics should also be expected to grow.

All of the issues above concern AI in conventional army units and their role in conflict. The issues facing professional terrorist organizations, and the benefits they may draw from AI and other 21st-century technologies, deserve a separate article because of their different dimensions and parameters. Here it is sufficient to recall Max Boot’s warning: “Technology in guerrilla warfare is not as important as conventional warfare, but this may change... A terrorist cell capable of reaching chemical, biological, and nuclear weapons may be capable of killing more than the Armed Forces of Brazil and Egypt, which do not possess nuclear weapons.”[12] This reality is enough to overturn the entire accumulated edifice of strategy, tactics, and doctrine.

It is now apparent that some thinkers working on strategy focus more on political strategies and avoid the unpredictability of developing technology. Meanwhile, strategy and its representative element in national security — military strategy — have been nearly abandoned since the end of the Cold War. The gap is being filled, with great enthusiasm, by business management science, which is searching for new philosophies in its own field. Many different ideas, such as marketing strategy, branding strategy, growth strategy, guerrilla strategy, and competitive strategy, have been born and conceptualized under the heading of business management strategies. The business world now pays more attention than the national and global security arena to the strategic and tactical principles of military thinkers and commanders such as Sun Tzu and Napoleon. As a result, those who study the art and science of strategy now drown in the bibliography of the business world and are consequently confused. The way to prevent this confusion and preserve strategy-tactics-doctrine is to retain the triple design (organization, weapons/equipment, doctrine) of the operational arena in a classical, philosophical, and holistic sense, while searching for new ways to develop a contemporary understanding of strategy and tactics that minimizes intellectual competition with AI.

As Gray has pointed out, strategy is an important part of political solutions and is a story with no permanent ending;[13] on this view, strategy is a phenomenon that will never be finished. However, whether strategy will transform into a ‘post-strategy’ model — and what role the human factor (politician, security specialist, soldier) will then play — remains to be seen in the ‘super AI’ period. The risk of humans being isolated from strategy cannot be underestimated in an age so close to cybernetic (human-machine) organisms.

End Notes

[1] Colin S. Gray, The Future of Strategy. New York: Polity, 2015, Kindle Edition.

[2] See for example Michael Horowitz and Casey Mahoney, “Artificial Intelligence and the Military: Technology is Only Half the Battle.” War on the Rocks, 25 December 2018,
https://warontherocks.com/2018/12/a...-military-technology-is-only-half-the-battle/.

[3] See for example Lukas Milevski, The Evolution of Modern Grand Strategic Thought. New York: Oxford University Press, 2016 and Lawrence Freedman, Strategy: A History. New York: Oxford University Press, 2013, and Martin van Creveld, A History of Strategy: From Sun Tzu to William S. Lind. Kouvola, Finland: Castalia House, 2015.

[4] Lawrence Freedman, Strateji [Strategy]. Istanbul: Alfa Publishing, 2017, p. 201.

[5] Carl von Clausewitz, Savaş Üzerine [On War]. Istanbul: Eris Publishing, 2003, p. 34.

[6] Stephan De Spiegeleire, Matthijs Maas and Tim Sweijs. Artificial Intelligence And The Future Of Defense: Strategic Implications For Small- And Medium-Sized Force Providers. The Hague: The Hague Centre for Strategic Studies (HCSS), p. 100, https://www.researchgate.net/publication/316983844_Artificial_Intelligence_and_the_Future_of_Defense.

[7] See Paul Virilio, Speed and Politics. Los Angeles: Semiotext(e), 2006 (first published in France in 1977) and Paul Virilio and Sylvère Lotringer. Pure War, Los Angeles: Semiotext(e), 1998 for insights on speed in conflict. Virilio recently passed away in September 2018.

[8] Deepak Kumar Gupta, “Military Applications Of Artificial Intelligence,” CLAWS Articles #1878. New Delhi: The Centre for Land Warfare Studies (CLAWS), 17 March 2018, http://www.claws.in/1878/military-applications-of-artificial-intelligence-deepak-kumar-gupta.html.

[9] Michael C. Horowitz, “The Promise and Peril Of Military Applications Of Artificial Intelligence.” Bulletin of the Atomic Scientists, 23 April 2018, https://thebulletin.org/2018/04/the...tary-applications-of-artificial-intelligence/.

[10] Sean M. Williams, “Swarm Weapons: Demonstrating a Swarm Intelligent Algorithm for Parallel Attack.” OTH (Over the Horizon), 13 August 2018, https://othjournal.com/2018/08/13/s...rm-intelligent-algorithm-for-parallel-attack/.

[11] Bulend Ozen, “A Study on the Effects of Strategic, Critical, and Creative Thinking Skills on Decision Making Styles.” (Doctorate Thesis). Istanbul: Halic University, 2017, p. 152.

[12] Max Boot, Görünmeyen Ordular – Gerilla Tarihi [Invisible Armies: An Epic History of Guerrilla Warfare from Ancient Times to the Present]. Istanbul: Inkilap Publishing, 2014, p. 500.

[13] Colin S. Gray, The Future of Strategy (see note 1).

