
[Skynet] Google Is Making AI That Can Make More AI

Hamartia Antidote

https://www.yahoo.com/news/google-making-ai-more-ai-175338896.html

Designing a good artificial intelligence is hard. For a company like Google, which relies heavily on AI, designing the best possible AI software is crucial. And who better to design an AI than another AI?

If you said "literally anyone else" you might be right, but folks at Google's AI research lab, Google Brain, would disagree. The lab is reportedly building AI software that can build more AI software, with the goal of making future AI cheaper and easier.

Currently, building a powerful AI is hard work. It takes time to carefully train AIs using machine-learning, and money to hire experts who know the tools required to do it. Google Brain's ultimate aim is to reduce these costs and make artificial intelligence more accessible and efficient. If a university or corporation looking to build an AI of their own could simply rent an AI builder instead of hiring a team of experts, it would lower the cost and increase the number of AIs, spreading the benefits of the technology far and wide.

What's more, using AIs to build more AIs may also increase the speed at which new AIs can be made. Currently, AIs can require weeks or months to learn how to do tasks by using unfathomably large amounts of computing power to try things over and over again, quite literally starting with no understanding of anything they're doing. AI trainers might find ways to optimize that process that no human could hope to discover.

The downside is that AI building more AIs sure seems like it's inviting a runaway cascade and, eventually, Skynet. After all, one of the things that makes machine-learning AIs so effective is that they learn in ways that are completely unlike, and often completely opaque to, humans. Once you've trained an AI to accomplish a certain goal, you can't necessarily crack it open and see how it is doing it. Its understanding of the world is utterly alien. That's why Google's plans to prevent a Skynet-type catastrophe involve gently discouraging AIs from disabling their own killswitches as they are being trained.

For now, Google says its AI maker is not advanced enough yet to compete with human engineers. However, given the rapid pace of AI development, it may only be a few years before that is no longer true. Hopefully it doesn't happen before we are ready.
 
The shallowness of popular debate in the West goes unnoticed amid its overbearing achievements. It's impossible for machines to behave like humans, but this idea was pandered about a great deal. Humans will simply never let machines loose on the whole world. Machines will keep processing limited data forever, unlike humans, who process the whole universe.
 

Google will be responsible for mankind's demise
 
I doubt anyone can make an actual AI. What they all have is bounded AI. The actually dangerous AI would be one that is able to evolve itself, an AI without a programming loop. So there are no AIs.
 
Whilst people have every right to fear these machines, I'm sure we can find ways to prevent them from killing us off.
 
https://futurism.com/googles-new-ai-is-better-at-creating-ai-than-the-companys-engineers/

Google’s New AI Is Better at Creating AI Than the Company’s Engineers

One of the more noteworthy remarks to come out of Google I/O ’17 conference this week was CEO Sundar Pichai recalling how his team had joked that they have achieved “AI inception” with AutoML. Instead of crafting layers of dreams like in the Christopher Nolan flick, however, the AutoML system layers artificial intelligence (AI), with AI systems creating better AI systems.

The AutoML project focuses on deep learning, a technique that involves passing data through layers of neural networks. Creating these layers is complicated, so Google’s idea was to create AI that could do it for them.

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” the company explains on the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”
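The loop described in that quote can be sketched in a few lines. This is only a toy illustration, not Google's implementation: here the "controller" is a simple mutate-the-best-so-far sampler rather than a neural net, and "training a child model" is replaced by a made-up scoring function.

```python
import random

# Toy stand-in for the AutoML loop: the controller proposes a child
# architecture, the child is "trained and evaluated" (here: a fake score),
# and the result is fed back so the next proposals improve.

SEARCH_SPACE = {
    "layers": [2, 4, 8, 16],
    "width": [32, 64, 128],
}

def evaluate_child(arch):
    """Hypothetical quality score: pretend deeper/wider is better, with a cap."""
    return min(arch["layers"] * arch["width"], 1024)

def controller_propose(history):
    """Propose a child architecture, biased toward the best result so far."""
    if history and random.random() < 0.7:
        best_arch, _ = max(history, key=lambda h: h[1])
        # Exploitation: mutate one dimension of the current best.
        arch = dict(best_arch)
        key = random.choice(list(SEARCH_SPACE))
        arch[key] = random.choice(SEARCH_SPACE[key])
        return arch
    # Exploration: sample a fresh architecture uniformly.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def automl_loop(rounds=50):
    history = []
    for _ in range(rounds):
        arch = controller_propose(history)
        score = evaluate_child(arch)   # "train and evaluate" the child
        history.append((arch, score))  # feedback to the controller
    return max(history, key=lambda h: h[1])

best_arch, best_score = automl_loop()
```

The real system repeats this thousands of times with a reinforcement-learned controller; the shape of the loop (propose, evaluate, feed back) is the same.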

So far, they have used the AutoML tech to design networks for image and speech recognition tasks. In the former, the system matched Google’s experts. In the latter, it exceeded them, designing better architectures than the humans were able to create.

AI FOR EVERYTHING
AI that can supplement human efforts to develop better machine learning technologies could democratize the field as the relatively few experts wouldn’t be stretched so thin. “If we succeed, we think this can inspire new types of neural nets and make it possible for non-experts to create neural nets tailored to their particular needs, allowing machine learning to have a greater impact to everyone,” according to Google’s blog post.


AutoML has the potential to impact many of the other AI and machine-learning-driven software products that were discussed at the conference. It could lead to improvements in the speech recognition tech required for a voice-controlled Google Home, the facial recognition software powering the Suggested Sharing feature in Google Photos, and the image recognition technology utilized by Google Lens, which allows the user to point their phone at an object (such as a flower) in order to identify it.

Truly, AI has the potential to affect far more than just our homes and phones. It's already leading to dramatic advancements in healthcare, finance, agriculture, and many other fields. If we can use an already remarkable technology to improve that same kind of technology, every advancement made by humans can lead to machine-powered advancements, which lead to better tools for humans, and so on. The potential of AI then calls to mind the title of another sci-fi film: Limitless.
 
Give almond milkshakes to the coders and advise them to sleep at 10 PM daily. They are doing a task bigger than the Titanic.
 
http://www.ml4aad.org/automl/

Here is more information about what they are doing. It is not a general AI like Skynet; it is an AI tool to help build AI faster. In that sense it is a specific AI that makes other specific AIs, but it still cannot create an AI without being told what type of AI you want to make.

Tools like these have long existed in almost every other field. They work on principles of optimization. They have long been used in code optimization, in the sense that modern compilers can produce faster machine code than hand-written assembly. They are used in electronic circuit design as design-space exploration tools, which find the best design for a component like a CPU or a cache. These things are more like a sewing machine for tailors than an automated fashion-design AI.
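The kind of design-space exploration tool described above can be sketched minimally: enumerate a small space of configurations, score each with a cost model, and keep the best. The design space and cost function here are invented for illustration; real EDA tools use simulators or detailed analytical models.

```python
import itertools

# Toy design-space exploration: exhaustively score every cache configuration
# and return the cheapest one under an invented cost model.

DESIGN_SPACE = {
    "cache_kb": [16, 32, 64],
    "associativity": [2, 4, 8],
}

def cost(cfg):
    """Hypothetical cost: bigger/more-associative caches miss less but cost area."""
    miss_penalty = 100.0 / (cfg["cache_kb"] * cfg["associativity"])
    area_penalty = 0.01 * cfg["cache_kb"] * cfg["associativity"]
    return miss_penalty + area_penalty

def explore(space):
    keys = list(space)
    configs = [dict(zip(keys, vals))
               for vals in itertools.product(*space.values())]
    return min(configs, key=cost)

best = explore(DESIGN_SPACE)
```

Like a compiler's optimizer, the tool does not decide *what* to build; it only searches mechanically for the best-scoring variant of a design the human has already specified.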
 
I doubt anyone can make an actual AI. What they all have is bounded AI. The actually dangerous AI would be one that is able to evolve itself, an AI without a programming loop. So there are no AIs.

Now AI is being developed to evolve on its own, and it can even lie!

https://www.technologyreview.com/s/414934/robots-evolve-the-ability-to-deceive/

AI can now write its own program by stealing / copying from other programs.
https://www.newscientist.com/articl...its-own-code-by-stealing-from-other-programs/
 
Whilst people have every right to fear these machines, I'm sure we can find ways to prevent them from killing us off.

Follow what the genius British did: make them fight among themselves. More precisely, divide and rule.
 