
To make an AI chat bot behave, Kenyan workers say they were 'mentally scarred' by graphic text

ChatGPT has impressed millions with its ability to string together coherent, sometimes even accurate, sentences, blurbs, scripts, and more. To write like a human, the AI bot was trained with machine learning algorithms on a massive catalogue of material scoured from the web. But the development of ChatGPT wasn't all automated: human labour was required to stop ChatGPT from falling into the same trap as its predecessor GPT-3, which was capable of making inappropriate, sometimes even racist, comments.

According to a recent investigation by Time, ChatGPT creator OpenAI outsourced this unsavory data processing task to Kenyan workers, many of whom reportedly earn less than $2 an hour.

ChatGPT is trained on datasets of such an immense size that they can't be closely curated by hand, and the same is true of image generation tools such as DALL-E (also operated by OpenAI), Stable Diffusion, and Midjourney. Without training data, ChatGPT wouldn't work at all, but not all of the text you can find on the internet leads to the kind of comments you want your AI bot making.

The outsourced work involved labelling examples of the kind of offensive text that might show up in the training material. A collection of these labelled text samples was then fed into another AI, training it to notice and remove similar offensive text from ChatGPT's responses to users.
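The label-then-filter pipeline described above can be sketched in miniature. This is not OpenAI's actual moderation system — the dataset, class labels, and filter function below are all hypothetical stand-ins — but a toy Naive Bayes classifier shows the idea: human-labelled snippets train a model, which is then used to screen text.

```python
# A minimal sketch of the labelling-to-filtering pipeline, assuming a toy
# dataset of human-labelled snippets (1 = harmful, 0 = benign). This is a
# hypothetical illustration, not OpenAI's actual moderation model.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

class NaiveBayesFilter:
    def __init__(self) -> None:
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()

    def fit(self, samples: list[tuple[str, int]]) -> None:
        # Count words per class, exactly as the human labels dictate.
        for text, label in samples:
            self.class_counts[label] += 1
            self.word_counts[label].update(tokenize(text))

    def predict(self, text: str) -> int:
        vocab = len(set(self.word_counts[0]) | set(self.word_counts[1]))
        total_docs = sum(self.class_counts.values())
        best_label, best_score = 0, float("-inf")
        for label in (0, 1):
            total = sum(self.word_counts[label].values())
            # Log prior plus Laplace-smoothed log likelihoods.
            score = math.log(self.class_counts[label] / total_docs)
            for word in tokenize(text):
                score += math.log(
                    (self.word_counts[label][word] + 1) / (total + vocab)
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Stand-in for the outsourced annotations described in the article.
labelled_snippets = [
    ("kill hurt destroy", 1),
    ("hurt suffer pain", 1),
    ("kill pain torture", 1),
    ("sunny day picnic", 0),
    ("flour sugar recipe", 0),
    ("thanks for the help", 0),
]

clf = NaiveBayesFilter()
clf.fit(labelled_snippets)

def filter_text(text: str) -> str:
    """Withhold text the classifier flags as harmful."""
    return "[content removed]" if clf.predict(text) == 1 else text
```

Production systems use far larger labelled sets and neural classifiers rather than word counts, but the division of labour is the same: the model's judgement is only as good as the human annotations behind it.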

Training the AI to avoid inappropriate language and themes keeps ChatGPT cleaner and makes it harder to use to produce disturbing content. But in this effort to improve the bot, OpenAI exposed low-paid workers in Kenya to some of the worst material on the web.

"To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021," Time reports. "Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest."

[Image: OpenAI's ChatGPT at-capacity screen. ChatGPT is now so popular that the tool is often at capacity. (Image credit: OpenAI)]

The Time report says that one worker suffered from recurring visions as a result of the content they encountered on the job. All four of the workers Time spoke to said they were "mentally scarred by the work."

There were reportedly around 36 workers employed to carry out the task on OpenAI's behalf, each expected to "read and label between 150 and 250 passages of text per nine-hour shift."

The company responsible for the outsourcing work is called Sama, a San Francisco-based firm with workers in Kenya, Uganda, and India. Time reports that OpenAI signed three contracts for the labelling work in late 2021, worth around $200,000 in total.

Sama says its employees had access to individual and group sessions with professional mental health therapists, accessible at any time. However, the workers spoken to by Time say only group sessions were available to them.

"Our mission is to ensure artificial general intelligence benefits all of humanity, and we work hard to build safe and useful AI systems that limit bias and harmful content," an OpenAI spokesperson told Time regarding the outsourced data processing work. "Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content."

[Image: diagram of OpenAI's fine-tuning process. ChatGPT uses OpenAI's GPT-3.5 series, which was trained in 2022 using Microsoft Azure supercomputing infrastructure; labelers are used to fine-tune the AI in this optimisation model. (Image credit: OpenAI)]

According to Time, the nature of Sama's work for OpenAI took a different turn in February 2022 when it began collecting "sexual and violent images," some of which would be deemed illegal in the US. OpenAI said that labelling harmful images was "a necessary step" in making its tools safe to use, but that it never intended for the most extreme category of images to be collected by Sama and that this was a miscommunication.

Sama ultimately terminated its contract with OpenAI early. The report suggests that the Sama team raised concerns over the content of the images, which eventually led to the collapse of the deal between the two companies. In the aftermath, some of the Sama workers were moved to lower-paying contracts or had their positions terminated entirely. The full Time report goes into much greater detail on OpenAI's relationship with Sama.

OpenAI is currently valued in the billions of dollars. Microsoft is reportedly looking to sink more money into the AI firm, despite its own recent mass layoffs, and has announced plans to integrate OpenAI technologies into its services.

Moderation work has long involved some degree of human suffering: A report from 2019 on the mental wellbeing of employees of moderation teams used by Facebook described long-lasting trauma symptoms as a result of the work. 

OpenAI's labelling needs are also a facet of a larger ethical crisis growing at the center of AI research: the problem of what to use as training material. Machines can't learn to behave like humans without human-made material, but not everyone wants their work to be fed to an algorithm, and last year artists started labelling their work "no AI" in an attempt to ward off companies gathering training data for image generators. Now here's the reverse problem: material that bot makers don't want influencing their AI. Again, the task of rearing respectful AI bots comes down to people, in this case workers paid to read the web's most disturbing content.


