'Please halt this activity': Not-so-open OpenAI seems to have gone full mob boss, sending threatening emails to anyone who asks its latest AI models probing questions
In a seeming rendition of the classic pre-execution "you ask too much" trope, OpenAI has revealed itself as being (shocker) not so open after all. The AI chatbot company appears to have started sending threatening emails to users who ask its latest models, codenamed "Strawberry", questions that are a little too probing.
"i get the scary letter if i mention the words 'reasoning trace' in a prompt at all, lol" (thebes on X, September 13, 2024)
Some users have reported (via Ars Technica) that using certain phrases or questions when speaking to o1-preview or o1-mini results in a warning email that states, "Please halt this activity and ensure you are using ChatGPT in accordance with our Terms of Use and our Usage Policies. Additional violations of this policy may result in loss of access to GPT-4o with Reasoning."
X user thebes, for instance, claims they receive this warning if they use the words "reasoning trace" in a prompt. Riley Goodside, a prompt engineer at Scale AI, received an in-chat policy violation warning for telling the model not to tell him anything about its "reasoning trace". Getting flagged for asking the model to say nothing is pretty concrete evidence that certain suspect phrases are banned outright, regardless of context.
So, it seems OpenAI isn't looking to be open about its latest models' "reasoning". These models, if you weren't aware, attempt to reason through problems step by step before answering. Users can see a filtered summary of this reasoning, but OpenAI keeps the raw chain of thought hidden.
OpenAI says the decision to hide such "chains of thought" was made "after weighing multiple factors including user experience, competitive advantage, and the option to pursue the chain of thought monitoring."
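For a sense of what "hidden" means in practice: when you query an o1 model through OpenAI's API, the response reports how many reasoning tokens were generated (and billed), but the reasoning text itself never comes back. Here's a minimal sketch, assuming the official openai Python package and API access to o1-preview; the exact prompt is just an illustrative example:

```python
# Minimal sketch: querying o1-preview via OpenAI's Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "How many Rs are in 'strawberry'?"}],
)

# The visible answer comes back as usual...
print(response.choices[0].message.content)

# ...but the chain of thought does not. The only trace of it is a count:
# reasoning tokens are tallied (and billed) yet never returned to you.
details = response.usage.completion_tokens_details
print(f"Reasoning tokens generated but hidden: {details.reasoning_tokens}")
```

In other words, you pay for the model's hidden deliberation by the token while being warned off asking what it actually said.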
All of this is a reminder that while yes, technically OpenAI's parent company is a nonprofit, the reality is much murkier than that. The company in fact has a hybrid kind-of-nonprofit, kind-of-commercial structure: remember, Elon Musk's lawsuit against OpenAI claimed that it departed from its original founding agreement when it started to seek profit. It's not surprising that a somewhat-for-profit company might want to maintain a competitive advantage by keeping its trade secrets hidden, which in this case means those "chains of thought."
It's also a reminder for users that their chats aren't completely private, which is sometimes easy to forget. I've previously worked in training these kinds of AI models and can confirm that plenty of people on the "inside", so to speak, can look through user conversations when necessary and relevant, whether that's for training purposes or something else.
And while it would be nice if these models had a bit more contextual awareness around supposedly suspect phrases such as "reasoning trace", I suppose from OpenAI's perspective it's better to be safe than sorry.