'Godfather of Deep Learning' quits Google and warns of AI dangers: 'I don’t think they should scale this up more until they have understood whether they can control it'

Geoffrey Hinton, known colloquially as the "Godfather of Deep Learning," spent the past decade working on artificial intelligence development at Google. But in an interview with The New York Times, Hinton announced that he has resigned from his position, and said he's worried about the rate of AI development and its potential for harm.

Hinton is one of the foremost researchers in the field of AI development. The Royal Society, to which he was elected as a Fellow in 1998, describes him as "distinguished for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher," and said that his work "may well be the start of autonomous intelligent brain-like machines."

In 2012, he and students Alex Krizhevsky and Ilya Sutskever developed a system called AlexNet, a "convolutional neural network" able to recognize and identify objects in images with far greater accuracy than any preceding system. Shortly after using AlexNet to win the 2012 ImageNet challenge, they launched a startup company called DNN Research, which Google quickly snapped up for $44 million.
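
For readers who haven't met the term, a convolutional neural network learns small reusable image filters and stacks them, alternating with downsampling, before a final classification layer. Below is a minimal, hypothetical sketch of that pattern in PyTorch; it is not AlexNet itself, and the layer sizes are arbitrary choices made purely for illustration.

```python
# Minimal illustrative sketch of a convolutional image classifier in PyTorch.
# This is NOT AlexNet; the layer sizes are arbitrary and chosen only to show
# the conv -> pool -> fully-connected pattern that AlexNet popularized.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn 16 image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a batch of four random 32x32 RGB "images".
model = TinyConvNet()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10]): one score per class
```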

Hinton, who is also a professor at the University of Toronto, kept up his AI work on a part-time basis at Google and continued to lead advancements in the field: in 2018, for instance, he was a co-winner of the Turing Award for "major breakthroughs in artificial intelligence."

"He was one of the researchers who introduced the back-propagation algorithm and the first to use backpropagation for learning word embeddings," his presumably soon-to-be-deleted Google employee page says. "His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His research group in Toronto made major breakthroughs in deep learning that have revolutionized speech recognition and object classification."

More recently, though, he's apparently had a dramatic change of heart about the nature of his work. Part of Hinton's new concern arises from the "scary" rate at which AI development is moving forward. "The idea that this stuff could actually get smarter than people—a few people believed that," Hinton said. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

That's happening at least in part as a result of competing corporate interests, as Microsoft and Google race to develop more advanced AI systems. It's unclear what can be done about it: Hinton said he believes the race can only be managed through some form of global regulation, but that may be impossible because there's no way to know what companies are working on behind closed doors. Thus, he thinks it falls to the scientific community to take action.

"I don’t think they should scale this up more until they have understood whether they can control it," he said.

But even if scientists elect to take a slower and more deliberate approach to AI (which I think is unlikely), the inevitable outcome of continued development obviously worries Hinton too: "It is hard to see how you can prevent the bad actors from using it for bad things," he said.

Hinton's latest comments stand in interesting contrast to a 2016 interview with Maclean's, in which he expressed a need for caution but said that caution shouldn't be allowed to hinder the future development of AI.

"It’s a bit like… as soon as you have good mechanical technology, you can make things like backhoes that can dig holes in the road. But of course a backhoe can knock your head off," Hinton said. "But you don’t want to not develop a backhoe because it can knock your head off, that would be regarded as silly.

"Any new technology, if it’s used by evil people, bad things can happen. But that’s more a question of the politics of the technology. I think we should think of AI as the intellectual equivalent of a backhoe. It will be much better than us at a lot of things. And it can be incredibly good—backhoes can save us a lot of digging. But of course, you can misuse it."

People should be thinking about the impact that AI will have on humanity, he said, but added, "the main thing shouldn’t be, how do we cripple this technology so it can’t be harmful, it should be, how do we improve our political system so people can’t use it for bad purposes?"

Hinton made similar statements in a 2016 interview with TVO, in which he acknowledged the potential for problems but said he expected them to be much further down the road than they're actually proving to be.

Interestingly, Hinton was not one of the signatories to recent open letters calling for a six-month "pause" on the development of new AI systems. According to the Times, he didn't want to publicly criticize Google or other companies until after he had resigned. Hinton clarified on Twitter that he did not leave Google so he could speak out about the company, however, but so that he could "talk about the dangers of AI without considering how this impacts Google."

"Google has acted very responsibly," he added.

Be that as it may, it's a very big deal that one of the foremost minds in AI development is now warning that it could all be very bad for us one day. Hinton's new outlook has obvious parallels to Oppenheimer's regret about his role in developing nuclear weapons. Of course, Oppenheimer's second thoughts came after the development and use of the atomic bomb, when it was easy to see just how dramatically the world had changed. It remains to be seen whether Hinton's regrets also come after the horse has bolted, or if there's still time (and sufficient regulatory capability in global governments) to avoid the worst.