
Fearing Bombs That Can Pick Whom to Kill

Levina


Animation of How New Missiles May Work - Video - NYTimes.com

On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.

As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control — or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.

Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control. After launch, so-called autonomous weapons rely on artificial intelligence and sensors to select targets and to initiate an attack.


Britain’s “fire and forget” Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets.
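
As an illustration only, here is a minimal sketch of what that kind of inter-missile target sharing could look like in principle. Everything in it — the contact classes, confidence scores and greedy claim scheme — is an invented assumption, not Brimstone's actual (classified) logic:

```python
# Illustrative sketch of swarm target deconfliction: each missile classifies
# its contacts, then the group greedily assigns one unique target per missile.
# Classes, scores and the claim scheme are assumptions, not Brimstone's logic.
from dataclasses import dataclass

@dataclass
class Contact:
    contact_id: int
    category: str      # classifier output, e.g. "tank", "car", "bus"
    confidence: float  # classifier confidence in [0, 1]

def assign_targets(missiles: dict[int, list[Contact]],
                   engageable: set[str]) -> dict[int, int]:
    """Greedily give each missile its best unclaimed, engageable contact."""
    claimed: set[int] = set()          # contact ids already "shared" as taken
    assignment: dict[int, int] = {}    # missile id -> contact id
    for missile_id, contacts in sorted(missiles.items()):
        candidates = [c for c in contacts
                      if c.category in engageable and c.contact_id not in claimed]
        if not candidates:
            continue                   # nothing valid left: this missile holds fire
        best = max(candidates, key=lambda c: c.confidence)
        assignment[missile_id] = best.contact_id
        claimed.add(best.contact_id)   # broadcast the claim to the swarm
    return assignment

if __name__ == "__main__":
    picture = {
        1: [Contact(10, "tank", 0.92), Contact(11, "bus", 0.88)],
        2: [Contact(10, "tank", 0.85), Contact(12, "tank", 0.90)],
    }
    print(assign_targets(picture, engageable={"tank"}))
    # -> {1: 10, 2: 12}: two tanks engaged once each, the bus left alone
```

The key idea is simply that a claimed contact is announced to the rest of the swarm, so no two missiles waste themselves on the same target.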

Armaments with even more advanced self-governance are on the drawing board, although the details usually are kept secret. “An autonomous weapons arms race is already taking place,” said Steve Omohundro, a physicist and artificial intelligence specialist at Self-Aware Systems, a research center in Palo Alto, Calif. “They can respond faster, more efficiently and less predictably.”

Concerned by the prospect of a robotics arms race, representatives from dozens of nations will meet on Thursday in Geneva to consider whether development of these weapons should be restricted by the Convention on Certain Conventional Weapons. Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, last year called for a moratorium on the development of these weapons.

The Pentagon has issued a directive requiring high-level authorization for the development of weapons capable of killing without human oversight. But fast-moving technology has already made the directive obsolete, some scientists say.

“Our concern is with how the targets are determined, and more importantly, who determines them,” said Peter Asaro, a co-founder and vice chairman of the International Committee for Robot Arms Control, a group of scientists that advocates restrictions on the use of military robots. “Are these human-designated targets? Or are these systems automatically deciding what is a target?”

Weapons manufacturers in the United States were the first to develop advanced autonomous weapons. An early version of the Tomahawk cruise missile had the ability to hunt for Soviet ships over the horizon without direct human control. It was withdrawn in the early 1990s after a nuclear arms treaty with Russia.

Back in 1988, the Navy test-fired a Harpoon antiship missile that employed an early form of self-guidance. The missile mistook an Indian freighter that had strayed onto the test range for its target. The Harpoon, which did not have a warhead, hit the bridge of the freighter, killing a crew member.

Despite the accident, the Harpoon became a mainstay of naval armaments and remains in wide use.

In recent years, artificial intelligence has begun to supplant human decision-making in a variety of fields, such as high-speed stock trading and medical diagnostics, and even in self-driving cars. But technological advances in three particular areas have made self-governing weapons a real possibility.

New types of radar, laser and infrared sensors are helping missiles and drones better calculate their position and orientation. “Machine vision,” resembling that of humans, identifies patterns in images and helps weapons distinguish important targets. This nuanced sensory information can be quickly interpreted by sophisticated artificial intelligence systems, enabling a missile or drone to carry out its own analysis in flight. And computer hardware hosting it all has become relatively inexpensive — and expendable.
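
As a toy illustration of "carrying out its own analysis in flight," the sketch below fuses confidence scores from several assumed sensor channels into one engage/no-engage decision. The sensor names, weights and threshold are invented; real guidance software is classified and vastly more sophisticated:

```python
# Toy sensor-fusion sketch: combine per-sensor evidence that a contact is a
# valid target into one score. Weights and the threshold are invented for
# illustration; real missile guidance software is classified.

SENSOR_WEIGHTS = {"radar": 0.4, "infrared": 0.3, "vision": 0.3}  # assumed; sum to 1

def fused_target_score(readings: dict[str, float]) -> float:
    """Weighted average of per-sensor confidences, each in [0, 1]."""
    return sum(SENSOR_WEIGHTS[s] * readings.get(s, 0.0) for s in SENSOR_WEIGHTS)

def should_engage(readings: dict[str, float], threshold: float = 0.75) -> bool:
    """Engage only if the fused evidence clears an assumed threshold."""
    return fused_target_score(readings) >= threshold

if __name__ == "__main__":
    contact = {"radar": 0.9, "infrared": 0.8, "vision": 0.7}
    print(fused_target_score(contact))  # 0.81
    print(should_engage(contact))       # True
```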

The missile tested off the coast of California, the Long Range Anti-Ship Missile, is under development by Lockheed Martin for the Air Force and Navy. It is intended to fly for hundreds of miles, maneuvering on its own to avoid radar, and out of radio contact with human controllers.


Images from a computer showing a strike by a Brimstone missile, a British weapon, on an Islamic State armed truck in Iraq. The “fire and forget” missile can distinguish among tanks and cars and buses without human assistance. Credit: Ministry of Defense/Crown Copyright, via Associated Press

In a directive published in 2012, the Pentagon drew a line between semiautonomous weapons, whose targets are chosen by a human operator, and fully autonomous weapons that can hunt and engage targets without intervention.

Weapons of the future, the directive said, must be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

The Pentagon nonetheless argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.




“It will be operating autonomously when it searches for the enemy fleet,” said Mark A. Gubrud, a physicist and a member of the International Committee for Robot Arms Control, and an early critic of so-called smart weapons. “This is pretty sophisticated stuff that I would call artificial intelligence outside human control.”

Paul Scharre, a weapons specialist now at the Center for a New American Security who led the working group that wrote the Pentagon directive, said, “It’s valid to ask if this crosses the line.”


Some arms-control specialists say that requiring only “appropriate” human control of these weapons is too vague, speeding the development of new targeting systems that automate killing.

Mr. Heyns, of the United Nations, said that nations with advanced weapons should agree to limit their weapons systems to those with “meaningful” human control over the selection and attack of targets. “It must be similar to the role a commander has over his troops,” Mr. Heyns said.

Systems that permit humans to override the computer’s decisions may not meet that criterion, he added. Weapons that make their own decisions move so quickly that human overseers soon may not be able to keep up. Yet many of them are explicitly designed to permit human operators to step away from controls. Israel’s antiradar missile, the Harpy, loiters in the sky until an enemy radar is turned on. It then attacks and destroys the radar installation on its own.
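
In control-flow terms, the Harpy's publicly described behavior is a loiter-until-emission state machine: circle until an emitter lights up, then commit. A minimal sketch, with invented states and a stubbed-out sensor:

```python
# Minimal state-machine sketch of a loiter-until-emission weapon like the
# Harpy, as publicly described: circle until a radar emission is detected,
# then home on it. The states and the radar_detected() stub are invented.
import enum
import random

class State(enum.Enum):
    LOITER = "loiter"
    ATTACK = "attack"
    EXPIRED = "expired"   # fuel exhausted without finding an emitter

def radar_detected() -> bool:
    """Stand-in for an emission sensor; real seekers are classified."""
    return random.random() < 0.1

def mission(fuel_steps: int = 50) -> State:
    state = State.LOITER
    for _ in range(fuel_steps):
        if radar_detected():
            state = State.ATTACK   # commit to the emitter; no human in the loop
            break
    else:
        state = State.EXPIRED      # loitered until fuel ran out
    return state

if __name__ == "__main__":
    random.seed(0)
    print(mission())
```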

Norway plans to equip its fleet of advanced jet fighters with the Joint Strike Missile, which can hunt, recognize and detect a target without human intervention. Opponents have called it a “killer robot.”

Military analysts like Mr. Scharre argue that automated weapons like these should be embraced because they may result in fewer mass killings and civilian casualties. Autonomous weapons, they say, do not commit war crimes.

On Sept. 16, 2011, for example, British warplanes fired two dozen Brimstone missiles at a group of Libyan tanks that were shelling civilians. Eight or more of the tanks were destroyed simultaneously, according to a military spokesman, saving the lives of many civilians.

It would have been difficult for human operators to coordinate the swarm of missiles with similar precision.

“Better, smarter weapons are good if they reduce civilian casualties or indiscriminate killing,” Mr. Scharre said.

@OrionHunter @Abingdonboy
I'm sure you guys know of such weapons already, but this news left me flummoxed. The points both for and against AI weapons are very strong.

Mods, if I've posted this thread in the wrong forum, then please move it to a more appropriate section.
 
Check out India's Nirbhay missile. This two-stage cruise missile has Artificial Intelligence (AI), a special embedded variation of the RC-4 algorithm that can pick out a target from among multiple targets and attack it. The missile has a loitering capability, i.e., it can go round several targets and perform manoeuvres to select the target and then engage it without human intervention!

Needless to say, an image or images of the target(s) are fed into the on-board computer before launch. But the amazing thing is that once over the general target area it maneuvers around, selects a target according to priority and then destroys it.

The 'intelligent' Nirbhay
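
Taken at face value, that description — reference images loaded before launch, then a priority-ordered match over whatever the seeker sees — boils down to a match-and-rank loop. A hedged sketch, with the similarity measure and priority table entirely invented:

```python
# Sketch of priority-ordered target selection from pre-loaded reference
# images, as the post above describes for Nirbhay. The similarity measure
# and priority numbers are invented; the real algorithm is not public.

def similarity(seeker_image: list[int], reference: list[int]) -> float:
    """Toy measure: fraction of matching pixels (stand-in for real matching)."""
    matches = sum(a == b for a, b in zip(seeker_image, reference))
    return matches / len(reference)

def select_target(detections: dict[str, list[int]],
                  references: dict[str, tuple[int, list[int]]],
                  min_match: float = 0.8) -> str | None:
    """Return the detection matching the highest-priority (lowest number) reference."""
    best_name, best_priority = None, float("inf")
    for det_name, image in detections.items():
        for _ref, (priority, ref_image) in references.items():
            if similarity(image, ref_image) >= min_match and priority < best_priority:
                best_name, best_priority = det_name, priority
    return best_name

if __name__ == "__main__":
    refs = {"radar_site": (1, [1, 1, 0, 0]),   # priority 1: hit first if seen
            "fuel_depot": (2, [0, 0, 1, 1])}   # priority 2: fallback
    seen = {"contact_a": [0, 0, 1, 1],         # matches the depot reference
            "contact_b": [1, 1, 0, 0]}         # matches the radar reference
    print(select_target(seen, refs))           # -> contact_b (priority 1 wins)
```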
 
Nirbhay was tested without its seeker or radar. They just tested launching and maneuvering, letting the missile burn through its fuel and fall without a target to check how far it goes. Nothing much; it will take at least 5 years for a production version, if everything goes all right.


Very exciting article... The war of machines is not going to happen like the Hollywood flick I, Robot, even in the future, because lots of new safeguards will be invented, like the ability to kill a missile in flight, safety warheads, etc.
But technical glitches happen with all electronic/mechanical equipment. An intelligent or automatic missile can make an error and hit targets for no reason, and that could end human civilization. Saying this, we also have to consider human error as well. Mistakes do happen, but they are kept in check by a human interface, which is a must.
 
It's pretty much expected that future autonomous drone aircraft are going to have to carry autonomous weapons.

The Triad turns into a Tetrad.

So can we expect X-Men: Days of Future Past in reality? That will be cool, being burned or stabbed by that robot.
 
Very exciting article... The war of machines is not going to happen like the Hollywood flick I, Robot, even in the future, because lots of new safeguards will be invented, like the ability to kill a missile in flight, safety warheads, etc.
But technical glitches happen with all electronic/mechanical equipment. An intelligent or automatic missile can make an error and hit targets for no reason, and that could end human civilization. Saying this, we also have to consider human error as well. Mistakes do happen, but they are kept in check by a human interface, which is a must.
Humanity must always have control of the on/off switch that will reboot the AI to its 'manufacturer's configuration'.
Maybe there should also be a backdoor routine, accessible remotely, that can never be modified by the AI.
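
As a toy version of that idea, the sketch below keeps the factory baseline in read-only storage so the runtime (or the AI) can mutate only a working copy, and a remote reset always restores the known state. All names are invented, and in a real system this would have to be enforced in hardware or firmware, not in application code:

```python
# Toy sketch of the "immutable factory reset" idea from the post above:
# the live configuration is mutable, but the factory baseline is exposed
# read-only, so a reset always restores a known state. All names invented;
# a real safeguard would live in hardware/firmware, not application code.
from types import MappingProxyType

_FACTORY_CONFIG = MappingProxyType({"autonomy": "off", "max_range_km": 0})

class WeaponController:
    def __init__(self) -> None:
        self.config = dict(_FACTORY_CONFIG)  # mutable working copy

    def factory_reset(self) -> None:
        """Remote 'off switch': discard all runtime changes."""
        self.config = dict(_FACTORY_CONFIG)

if __name__ == "__main__":
    ctrl = WeaponController()
    ctrl.config["autonomy"] = "full"   # runtime drift
    ctrl.factory_reset()
    print(ctrl.config)                 # {'autonomy': 'off', 'max_range_km': 0}
```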
 
Humanity must always have control of the on/off switch that will reboot the AI to its 'manufacturer's configuration'.
Maybe there should also be a backdoor routine, accessible remotely, that can never be modified by the AI.

Don't worry about the AI modifying anything yet. If it could change one byte, I'm sure that would be the one that causes a nosedive into the ground.
 
Check out India's Nirbhay missile. This two-stage cruise missile has Artificial Intelligence (AI), a special embedded variation of the RC-4 algorithm that can pick out a target from among multiple targets and attack it. The missile has a loitering capability, i.e., it can go round several targets and perform manoeuvres to select the target and then engage it without human intervention!

Needless to say, an image or images of the target(s) are fed into the on-board computer before launch. But the amazing thing is that once over the general target area it maneuvers around, selects a target according to priority and then destroys it.

The 'intelligent' Nirbhay


The Israelis have the same kind of thing for their SPICE kits.



Small, smart, precise glide bombs are the future of drone warfare and suicide drones.
 