AI industry begs someone to please stop the AI industry before all human life is extinguished by the AI industry

The people making artificial intelligence say that artificial intelligence is an existential threat to all life on the planet and we could be in real trouble if somebody doesn't do something about it.

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," the prelude to the Center for AI Safety's Statement on AI Risk states. "Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. 

"The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI's most severe risks seriously."

And then, finally, the statement itself:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

(Image credit: Center for AI Safety)

It's a real banger, alright, and more than 300 researchers, university professors, institutional chairs, and the like have put their names to it. The top two signatories, Geoffrey Hinton and Yoshua Bengio, have both been referred to in the past as "godfathers" of AI; other notable names include Google DeepMind CEO (and former Lionhead lead AI programmer) Demis Hassabis, OpenAI CEO Sam Altman, and Microsoft CTO Kevin Scott.

It's a veritable bottomless buffet of big brains, which makes me wonder how they've all managed to overlook what I think is a pretty obvious question: If they seriously think their work threatens the "extinction" of humanity, then why not, you know, just stop?

Maybe they'd say that they intend to be careful, but that others will be less scrupulous. And there are legitimate concerns about the risks posed by runaway, unregulated AI development, of course. Still, it's hard not to think that this sensational statement is also strategic. Implying that we're looking at a Skynet scenario unless government regulators step in could benefit already-established AI companies by making it more difficult for upstarts to get in on the action. It could also give major players like Google and Microsoft—again, the established AI research companies—a say in how such regulation is shaped, which could likewise work to their benefit.

Professor Ryan Calo of the University of Washington School of Law suggested a couple of other possible reasons for the warning: distraction from more immediate, addressable problems with AI, and hype building.

"The first reason is to focus the public's attention on a far fetched scenario that doesn’t require much change to their business models. Addressing the immediate impacts of AI on labor, privacy, or the environment is costly. Protecting against AI somehow 'waking up' is not," Calo tweeted.

"The second is to try to convince everyone that AI is very, very powerful. So powerful that it could threaten humanity! They want you to think we've split the atom again, when in fact they’re using human training data to guess words or pixels or sounds."

Calo said that to the extent AI does threaten the future of humanity, "it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources."

"I get that many of these folks hold a sincere, good faith belief," Calo said. "But ask yourself how plausible it is. And whether it's worth investing time, attention, and resources that could be used to address privacy, bias, environmental impacts, labor impacts, that are actually occurring."

Professor Emily M. Bender was somewhat blunter in her assessment, calling the letter "a wall of shame—where people are voluntarily adding their own names."

"We should be concerned by the real harms that corps and the people who make them up are doing in the name of 'AI', not abt Skynet," Bender wrote.


Hinton, who recently resigned from his research position at Google, expressed more nuanced thoughts about the potential dangers of AI development in April, when he compared AI to "the intellectual equivalent of a backhoe": a powerful tool that can save a lot of work, but one that's also potentially dangerous if misused. A single-sentence statement like this can't carry any real degree of complexity, but—as we can see from the widespread discussion of the statement—it sure does get attention.

Interestingly, Hinton also suggested in April that governmental regulation of AI development may be pointless because it's virtually impossible to track what individual research agencies are up to, and no corporation or national government will want to risk letting someone else gain an advantage. Because of that, he said it's up to the world's leading scientists to work collaboratively to control the technology—presumably by doing more than just firing off a tweet asking someone else to step in.


