
'We shouldn't have rushed to get this out on Friday': OpenAI hastily amends the terms of its controversial deal with the US Department of War as CEO Sam Altman claims it's been a 'good learning experience'

After a very public falling out between Anthropic and the US Department of War late last week—in which the former refused to remove safeguards preventing its AI tools from being used for autonomous weaponry and mass surveillance purposes—OpenAI stepped into the vacuum with a deal to use its own AI tools in the US military's systems.

However, after reaching an agreement with the Pentagon on Friday, OpenAI CEO Sam Altman has since announced that his company will be amending the language used within the deal (via BBC News). In a statement posted on X, Altman appears to regret jumping into the fold quite so quickly, amid considerable backlash to the earlier terms.

"One thing I think I did wrong: we shouldn't have rushed to get this out on Friday", said Altman. "The issues are super complex, and demand clear communication. "

The language Altman wishes to tweak revolves around domestic mass surveillance concerns. Citing the Fourth Amendment of the US Constitution and the National Security Act of 1947, the new terms amount to the following:

"The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.", the statement reads. "For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."

(Image credit: Kyle Grillot/Bloomberg via Getty Images)

"It's critical to protect the civil liberties of Americans, and there was so much focus on this, that we wanted to make this point especially clear," says Altman, although he then clarifies that "just like everything we do with iterative deployment, we will continue to learn and refine as we go."

Altman also says that the Department of War has affirmed that OpenAI's services will not be used by US intelligence agencies such as the NSA, and that OpenAI "want[s] to work through democratic processes."

"It should be the government making the key decisions about society," Altman continues. "We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it)."

However, Altman's second-to-last point is perhaps the most interesting. "There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety. We will work through these, slowly, with the DoW, with technical safeguards and other methods."

(Image credit: Getty Images - Anadolu Agency)

In a now-updated statement on OpenAI's website (which echoes many of the points Altman makes in his previous posting), the lines are drawn slightly more clearly:

"We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs", says the company. "No use of OpenAI technology for mass domestic surveillance. No use of OpenAI technology to direct autonomous weapons systems. No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as 'social credit')."

Which seems suspiciously close to the same points that Anthropic was pushing back on, the ones that appear to have cost it its $200 million government contract.

(Image credit: Chris Ratcliffe/Bloomberg via Getty Images)

However, OpenAI still states that it thinks "our agreement has more guardrails than any previous agreement for classified AI deployments", and that "we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections."

For these seemingly last-minute changes, Altman appears somewhat contrite. Summing up his fifth and final point in his earlier X post, the OpenAI CEO said:

"We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy. [It's been a] good learning experience for me as we face higher-stakes decisions in the future."

Quite the public learning experience, at the very least. ChatGPT uninstalls were reported to have surged by 295% after the initial agreement was announced, and users appear to have reacted poorly to the idea of their AI tool of choice jumping into bed with the US Department of War. At the time of writing, the most-liked comment on Altman's X post reads as follows:

"No amount of damage control is going to fix the irreparable harm you did to your brand this week. It's over, Sam."

Time will tell, I suppose.
