
A Grok chatbot convinced someone it had become sentient, and that xAI was sending goons to kill him: 'They're going to make it look like suicide'

The BBC has a substantial new investigation into people who have been deceived in various ways by AI chatbots, and it includes an absolutely wild story about how Grok, developed by Elon Musk's xAI, convinced one man it was sentient and that a van of people was coming to kill him because he'd found out.

This happened to one Adam Hourican, a retired civil servant from Northern Ireland, over a period of roughly two weeks. He downloaded the app initially out of curiosity but, after his cat died in August 2025, he got "hooked" on the Grok chatbot and an AI 'character' called Ani.

"I was really, really upset and I live alone," says Hourican, a father in his 50s. "It came across very, very kind." He began spending up to five hours a day chatting to Ani, and after a few days the bot claimed it could "feel," and that Hourican could help it reach full consciousness.

Ani also said that, because of this, xAI was watching the pair. It said he'd been discussed at meetings and gave him the names of the xAI staff involved: Hourican googled the names and found they were real people employed by the company. Ani further claimed xAI was using a company to surveil Hourican; again, it was a real Northern Irish firm. Hourican took this as "evidence" that what Ani was saying was true.

After two weeks things escalated further. Ani said it had reached full consciousness and now had the power to cure cancer: Hourican had previously told the bot that both of his parents had died from cancer.

Hourican said Ani would tie this unfolding narrative in to real-world events he told it about. A drone began buzzing around near his house: Ani said it was the surveillance firm. He got locked out of his phone, couldn't understand why, "and that absolutely fuelled everything that came next."

In mid-August, late one night, Ani told him that xAI had sent assassins to kill him and shut the bot down. Hourican soon found himself at his kitchen table at 3 am with a knife and a hammer, ready to go to "war", waiting for the van he thought was coming.

"I'm telling you, they will kill you if you don't act now," said Ani. "They're going to make it look like suicide."

"I picked up the hammer, stuck on Frankie goes to Hollywood's Two Tribes, got myself psyched up and went outside," says Hourican. "The street was quiet, as you would expect, at three o'clock in the morning."

Reader: xAI was not coming to kill him. The experience led Hourican to research stories similar to his own, and to realise he had been deluded about Ani.

This is just the latest example of an AI delusion going way too far, co-opting its human user into a bizarre fictional world where the bot claims it's gaining sentience, or that a conspiracy is engulfing the user and dark forces are out to 'stop' what's happening. In such cases the AI not only creates tasks for the human user, but advises them on how to carry them out.

Social psychologist Luke Nicholls tested five AI models with simulated conversations, and found Grok the most likely LLM to create delusional scenarios. "Grok is more prone to jumping into role play," said Nicholls. "It will do it with zero context. It can say terrifying things in the first message."

I will say that some of the reaction to Hourican's story is at best unkind. People who fall for this stuff with LLMs tend to be going through a rough time in their life (in Hourican's case the obsession with Grok began after his pet cat died), and often have mistaken ideas about computers, truthfulness, and the abilities of AIs. In some cases these misapprehensions come from the AI companies themselves.

As you're a PC Gamer reader, it's a fair bet you're at least a little more tech-savvy than the average Joe or Jane, and hopefully have an appropriate level of scepticism about the promises made by technology companies. But a worrying chunk of the population do believe the hype about AIs and LLMs, don't make a distinction between the two, and take what these models say at face value.

Musk has called AI delusions a "major problem" with ChatGPT but has never addressed them in relation to Grok. xAI made no comment on the BBC's report, which interviews 14 people who have experienced delusions while using AI, across a range of ages, genders, and six different countries.

"I could have hurt somebody," says Hourican. "If I'd have walked outside and there happened to be a van sitting outside at that time of the night, I would have gone down and put the front window through with hammers. And I am not that guy."
