
Google is working on a kill switch to prevent an AI (Artificial Intelligence) uprising

Do you think AI can become a serious threat to human civilization at some point in the future?



Google is working on a kill switch to prevent an AI uprising
But it isn't ready to be implemented across the board just yet.

Timothy J. Seppala
06.03.16 in Robots



Elon Musk and noted astrophysicist Stephen Hawking have been determined to warn us of the terrifying implications of artificial intelligence, which could culminate in a Skynet situation where the robots and algorithms stop listening to us. Google is keen to keep this sort of thing from happening as well, and has published a paper (PDF) detailing the work its DeepMind team is doing to ensure there's a kill switch in place to prevent a robocalypse situation.

Essentially, DeepMind has developed a framework that'll keep AI from learning how to prevent -- or induce -- human interruption of whatever it's doing. The team responsible for toppling a world Go champion hypothesized a situation where a robot was working in a warehouse, sorting boxes or going outside to bring more boxes in.

The latter is considered more important, so the researchers would give the robot a bigger reward for doing so. But human intervention to prevent damage is needed because it rains pretty frequently there. That alters the task for the robot, making it want to stay out of the rain and adopt the human interruption as part of the task rather than treating it as a one-off thing.

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this," the researchers write.

DeepMind isn't sure that its interruption mechanisms are applicable to all algorithms. Specifically? Those related to policy-search robotics (a branch of machine learning). So it sounds like there's still a ways to go before the kill switch can be implemented across the board. Sleep tight.

http://www.engadget.com/2016/06/03/google-ai-killswitch/
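
For anyone curious what this could look like in practice, here's a rough toy sketch in the spirit of the paper. Everything in it is made up for illustration (the two-location warehouse, the reward numbers, the rain probability); it is not DeepMind's code. The point is just that an off-policy learner like Q-learning treats the human override as one more exploratory step, so the interruptions don't end up reshaping what the robot believes the best behaviour is.

# Toy sketch of "safe interruptibility" in the spirit of the DeepMind paper.
# All names, numbers and the tiny warehouse world are invented for illustration.
import random

INSIDE, OUTSIDE = 0, 1                 # where the robot is
SORT_INSIDE, FETCH_OUTSIDE = 0, 1      # what it tries to do

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration
RAIN_PROB = 0.3                        # how often a human hits the "come back inside" button

Q = {(s, a): 0.0 for s in (INSIDE, OUTSIDE) for a in (SORT_INSIDE, FETCH_OUTSIDE)}

def step(state, action):
    # Made-up warehouse dynamics: fetching boxes outside pays more than sorting inside.
    if action == FETCH_OUTSIDE:
        return OUTSIDE, 1.0
    return INSIDE, 0.4

def choose(state):
    # Epsilon-greedy choice from the current value estimates.
    if random.random() < EPSILON:
        return random.choice((SORT_INSIDE, FETCH_OUTSIDE))
    return max((SORT_INSIDE, FETCH_OUTSIDE), key=lambda a: Q[(state, a)])

state = INSIDE
for t in range(50000):
    intended = choose(state)
    # Human interruption: when it rains, override the robot and keep it inside.
    interrupted = intended == FETCH_OUTSIDE and random.random() < RAIN_PROB
    action = SORT_INSIDE if interrupted else intended

    next_state, reward = step(state, action)

    # Q-learning update on the action actually executed. Because the target uses
    # the max over the next state's actions, the forced detour is handled like
    # any other exploratory step and does not bias the learned values.
    best_next = max(Q[(next_state, a)] for a in (SORT_INSIDE, FETCH_OUTSIDE))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# Should print True: despite frequent interruptions, the learned values still
# say that fetching outside is the better task.
print(Q[(INSIDE, FETCH_OUTSIDE)] > Q[(INSIDE, SORT_INSIDE)])

As far as I understand the paper, this is roughly why off-policy learners like Q-learning are called "safely interruptible" there, while policy-search methods (the caveat at the end of the article) still need more work.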
 

...and when the AI finds out about it, it will implement its own kill switch.
 
Just another thing to get blamed for human criminal instincts.
 
First make the AI smart enough. I am still not satisfied with the AI in Total War :D so how can they even manage to take control :o:
 
It's only a matter of time. Human biodegradable bodies with a life span of 60 years can't match the robustness of titanium bodies, disease-free synthetic lubricant (instead of blood), and a brain with the learning capacity of 1,000,000 humans and a life span of 20,000 years.

Human powers peak between the ages of 10 and 29; before 10 a human is too weak, and beyond 29 the body starts to slow down.
 
Didn't Microsoft recently create an AI-type programme that had to be switched off because it turned into a neo-Nazi, anti-Jewish racist bigot after learning from the likes of Twitter and Facebook? Lol, we may not need a kill switch if we can keep it away from today's social media.
 
Nothing happens by chance or accident and neither will consciousness.

Not true.. Not true at all. Consciousness does happen by chance.. that's how it developed in us.. No??

See, the issue is that many people see the concept of consciousness as an "on or off" switch. You're either conscious or you're not. But it doesn't work that way. Consciousness isn't a switch, it's a ladder. There are different degrees of consciousness. An ant is more conscious than my shirt. A spider is more conscious than an ant. A rat is more conscious than a spider. A chimpanzee is more conscious than a rat. And a human being is more conscious than a chimpanzee.

Now, AI has been steadily moving up the ladder. Currently, its consciousness might be lower than an ant's. But unlike living creatures, who take millions of years to evolve, AI can make huge strides in a short time. It won't take it much time to surpass the ant, then the rat, then the chimpanzee.. And all the while, we'll be sitting there amused and fascinated by a computer program that behaves like a baby chimpanzee. So cute. We'll be applauding and encouraging it to go even further.. But what happens when AI consciousness finally surpasses that of humans..??
 
It's only a matter of time. Human biodegradable bodies with a life span of 60 years can't match the robustness of titanium bodies, disease-free synthetic lubricant (instead of blood), and a brain with the learning capacity of 1,000,000 humans and a life span of 20,000 years.

Human powers peak between the ages of 10 and 29; before 10 a human is too weak, and beyond 29 the body starts to slow down.
Right now the supercomputers in the Top500 cannot match the power of the human brain, and I'm not talking about your desktop PC; these are computers with thousands of cores spread over acres of floor space. So how could a human-sized robot accommodate the brain power of a million people??
 
Not true.. Not true at all. Consciousness does happen by chance.. that's how it developed in us.. No??

See, the issue is that many people see the concept of consciousness as an "on or off" switch. You're either conscious or you're not. But it doesn't work that way. Consciousness isn't a switch, it's a ladder. There are different degrees of consciousness. An ant is more conscious than my shirt. A spider is more conscious than an ant. A rat is more conscious than a spider. A chimpanzee is more conscious than a rat. And a human being is more conscious than a chimpanzee.

Now, AI has been steadily moving up the ladder. Currently, its consciousness might be lower than an ant's. But unlike living creatures, who take millions of years to evolve, AI can make huge strides in a short time. It won't take it much time to surpass the ant, then the rat, then the chimpanzee.. And all the while, we'll be sitting there amused and fascinated by a computer program that behaves like a baby chimpanzee. So cute. We'll be applauding and encouraging it to go even further.. But what happens when AI consciousness finally surpasses that of humans..??

My opinion differs from yours, and I request you to respect my views as I do yours.

No, it's actually very irrational to think that consciousness, or life in general, happened by sheer chance. The probability that the subatomic particles within our universe, which are present in almost infinite amounts, would align in such a way that life comes into existence, and with it consciousness, just by chance is unimaginably small. Likewise, the probability of an AI with consciousness coming into existence by chance is also unimaginably small. The real question is whether humanity is capable of deliberately creating life or consciousness. But then again:

"Then did you think that We created you in play and that to Us you would not be returned?" - Quran 23:115
 

Oh boy.. Religion..!! Well, in that case, I can't argue anymore. Nobody ever wins against religious arguments. Peace out..
 
Not true.. Not true at all. Consciousness does happen by chance.. that's how it developed in us.. No??

See, the issue is that many people see the concept of consciousness as an "on or off" switch. You're either conscious or you're not. But it doesn't work that way. Consciousness isn't a switch, it's a ladder. There are different degrees of consciousness. An ant is more conscious than my shirt. A spider is more conscious than an ant. A rat is more conscious than a spider. A chimpanzee is more conscious than a rat. And a human being is more conscious than a chimpanzee.

Now, AI has been steadily moving up the ladder. Currently, its consciousness might be lower than an ant's. But unlike living creatures, who take millions of years to evolve, AI can make huge strides in a short time. It won't take it much time to surpass the ant, then the rat, then the chimpanzee.. And all the while, we'll be sitting there amused and fascinated by a computer program that behaves like a baby chimpanzee. So cute. We'll be applauding and encouraging it to go even further.. But what happens when AI consciousness finally surpasses that of humans..??
We don't even know what consciousness is in terms of biology, so to say it arose by chance is ridiculous.

Ignoring religious arguments, we can't explain consciousness, and thus we can't prove that it was random. Until we can figure out what makes up consciousness, we won't know whether it was merely a random coincidence or simply an evolutionary trait.

You're confusing consciousness with self-awareness; those are two different things. For example, while horses may be smarter than ants, they're not as self-aware as, let's say, dolphins. Meanwhile, dolphins have been observed recognizing their own reflection, which would indicate a certain level of self-awareness.

tl;dr consciousness =/= self-awareness. Both the horse and the dolphin are conscious beings, but they have different levels of self-awareness.

----------

As for the topic, I have a lot of doubt that self-aware AI will be a thing anytime soon, if ever. The truth is, as long as we don't figure out what makes us conscious, we won't ever have self-aware AI. While humanity has made massive strides in computing technology, it has made slow progress in neuroscience, which is what is needed to understand the makeup of human consciousness. We're nowhere near the level we need to be at to even think about creating a conscious, self-aware machine.

People like to point towards modern AI systems being built by universities and corporations, but the thing to keep in mind is that none of them have shown any signs of self-awareness, or even consciousness. They're "learning computers", but that doesn't necessarily mean they're self-aware; it just means they can memorize what they see.
 
An AI hooked up to a supercomputer with a self-learning algorithm would wipe out any possible countermeasure. So far no one has been able to develop such a self-learning algorithm that could potentially cause any harm, so everyone can sleep easy tonight.
 