
AI industry begs someone to please stop the AI industry before all human life is extinguished by the AI industry


The people making artificial intelligence say that artificial intelligence is an existential threat to all life on the planet and we could be in real trouble if somebody doesn't do something about it.

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," the prelude to the Center for AI Safety's Statement on AI Risk states. "Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. 

"The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."

And then, finally, the statement itself:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

(Image credit: Center for AI Safety)

It's a real banger, alright, and more than 300 researchers, university professors, institutional chairs, and the like have put their names to it. The top two signatories, Geoffrey Hinton and Yoshua Bengio, have both been referred to in the past as "godfathers" of AI; other notable names include Google Deepmind CEO (and former Lionhead lead AI programmer) Demis Hassabis, OpenAI CEO Sam Altman, and Microsoft CTO Kevin Scott.

It's a veritable bottomless buffet of big brains, which makes me wonder how they seem to have collectively overlooked what I think is a pretty obvious question: If they seriously think their work threatens the "extinction" of humanity, then why not, you know, just stop? 

Maybe they'd say that they intend to be careful, but that others will be less scrupulous. And there are legitimate concerns about the risks posed by runaway, unregulated AI development, of course. Still, it's hard not to suspect that this sensational statement is also strategic. Implying that we're looking at a Skynet scenario unless government regulators step in could benefit already-established AI companies by making it more difficult for upstarts to get in on the action. It could also provide an opportunity for major players like Google and Microsoft—again, the established AI research companies—to have a say in how such regulation is shaped, which could also work to their benefit.

Professor Ryan Calo of the University of Washington School of Law suggested a couple of other possible reasons for the warning: distraction from more immediate, addressable problems with AI, and hype building.

"The first reason is to focus the public's attention on a far fetched scenario that doesn’t require much change to their business models. Addressing the immediate impacts of AI on labor, privacy, or the environment is costly. Protecting against AI somehow 'waking up' is not," Calo tweeted.

"The second is to try to convince everyone that AI is very, very powerful. So powerful that it could threaten humanity! They want you to think we've split the atom again, when in fact they’re using human training data to guess words or pixels or sounds."

Calo said that to the extent AI does threaten the future of humanity, "it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources."

"I get that many of these folks hold a sincere, good faith belief," Calo said. "But ask yourself how plausible it is. And whether it's worth investing time, attention, and resources that could be used to address privacy, bias, environmental impacts, labor impacts, that are actually occurring."

Professor Emily M. Bender was somewhat blunter in her assessment, calling the letter "a wall of shame—where people are voluntarily adding their own names."

"We should be concerned by the real harms that corps and the people who make them up are doing in the name of 'AI', not abt Skynet," Bender wrote.


Hinton, who recently resigned from his research position at Google, expressed more nuanced thoughts about the potential dangers of AI development in April, when he compared AI to "the intellectual equivalent of a backhoe," a powerful tool that can save a lot of work but that's also potentially dangerous if misused. A single-sentence statement like this one can't carry any real degree of complexity, but—as we can see from the widespread discussion it has generated—it sure does get attention.

Interestingly, Hinton also suggested in April that governmental regulation of AI development may be pointless because it's virtually impossible to track what individual research agencies are up to, and no corporation or national government will want to risk letting someone else gain an advantage. Because of that, he said it's up to the world's leading scientists to work collaboratively to control the technology—presumably by doing more than just firing off a tweet asking someone else to step in.


