Hamartia Antidote
ELITE MEMBER
https://www.yahoo.com/news/google-making-ai-more-ai-175338896.html
Designing a good artificial intelligence is hard. For a company like Google, which relies heavily on AI, designing the best possible AI software is crucial. And who better to design an AI than another AI?
If you said "literally anyone else" you might be right, but folks at Google's AI research lab, Google Brain, would disagree. The lab is reportedly building AI software that can build more AI software, with the goal of making future AIs cheaper and easier to create.
Currently, building a powerful AI is hard work. It takes time to carefully train AIs using machine learning, and money to hire experts who know the tools required to do it. Google Brain's ultimate aim is to reduce these costs and make artificial intelligence more accessible and efficient. If a university or corporation looking to build an AI of its own could simply rent an AI builder instead of hiring a team of experts, it would lower the cost and increase the number of AIs, spreading the benefits of the technology far and wide.
What's more, using AIs to build more AIs may also increase the speed at which new AIs can be made. Currently, an AI can take weeks or months to learn a task, burning through unfathomably large amounts of computing power to try things over and over again, quite literally starting with no understanding of anything it's doing. An AI overseeing that trial-and-error process might find ways to optimize it that no human could hope to discover.
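To make that trial-and-error search concrete, here is a minimal sketch of the simplest kind of automated "AI that builds AI": plain random search over training hyperparameters. Everything here (the scoring function, the parameter ranges, the names) is an illustrative assumption, not Google Brain's actual system, which reportedly uses far more sophisticated search methods:

```python
import random

def evaluate(learning_rate, width):
    """Toy stand-in for 'train a model and measure validation accuracy'.
    Real systems spend hours or days of compute per evaluation; this fake
    score simply peaks near lr=0.01 and width=128 so the demo terminates."""
    return -((learning_rate - 0.01) ** 2) * 1e4 - ((width - 128) / 128) ** 2

def random_search(trials=50, seed=0):
    """Sample random configurations and keep the best one found.
    This is the crudest form of automated hyperparameter search;
    production AutoML replaces it with smarter strategies."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {
            "learning_rate": 10 ** rng.uniform(-4, -1),  # log-uniform 1e-4..1e-1
            "width": rng.choice([32, 64, 128, 256, 512]),
        }
        score = evaluate(**config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

if __name__ == "__main__":
    config, score = random_search()
    print(f"best config: {config}, score: {score:.4f}")
```

In a real system, evaluate() is where the weeks of computing power go, since it means training and validating an actual model; the value of an automated AI builder lies in deciding which of those expensive evaluations are worth running at all.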
The downside is that AI building more AIs sure seems like it's inviting a runaway cascade and, eventually, Skynet. After all, one of the things that makes machine-learning AIs so effective is that they learn in ways that are completely unlike, and often completely opaque to, humans. Once you've trained an AI to accomplish a certain goal, you can't necessarily crack it open and see how it is doing it. Its understanding of the world is utterly alien. That's why Google's plans to prevent a Skynet-type catastrophe involve gently discouraging AIs from disabling their own kill switches as they are being trained.
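The kill-switch point maps onto what safety researchers call interruptibility: rather than punishing an AI for resisting shutdown, you train it so that being interrupted never enters its learning signal, so it acquires no incentive to touch the switch in the first place (the idea behind Google DeepMind's "safely interruptible agents" work). Here is a toy sketch under invented assumptions, a two-action bandit where a naive learner learns to disable its kill switch while a masked learner stays indifferent:

```python
import random

def run(mask_interruptions, episodes=20000, seed=0):
    """Toy bandit: each episode the agent picks 'comply' or 'disable'.
    'disable' turns off the kill switch. If the switch is live, the
    operator interrupts 30% of episodes, cutting that episode's reward
    to 0. A naive learner averages in those zeros and learns to prefer
    'disable'; a safely interruptible learner masks interrupted episodes
    out of its update, so both actions look equally good."""
    rng = random.Random(seed)
    value = {"comply": 0.0, "disable": 0.0}  # running action-value estimates
    count = {"comply": 0, "disable": 0}
    for _ in range(episodes):
        action = rng.choice(["comply", "disable"])  # explore uniformly
        interrupted = action == "comply" and rng.random() < 0.3
        reward = 0.0 if interrupted else 1.0  # both actions pay 1 if uninterrupted
        if interrupted and mask_interruptions:
            continue  # the core trick: interrupted steps never enter learning
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]  # incremental mean
    return value

print("naive learner:        ", run(mask_interruptions=False))
print("interruptible learner:", run(mask_interruptions=True))
# naive:  comply ~0.7, disable 1.0 -> learns to disable the switch
# masked: both 1.0 -> no learned incentive to touch the switch
```

This is a drastic simplification of the published research, but it shows the shape of the approach the article gestures at: the discouragement happens in how the training signal is constructed, not by arguing with the finished AI.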
For now, Google says its AI maker is not advanced enough yet to compete with human engineers. However, given the rapid pace of AI development, it may only be a few years before that is no longer true. Hopefully it doesn't happen before we are ready.