
The complex math of counterfactuals could help Spotify pick your next favorite song
A new kind of machine-learning model built by a team of researchers at the music-streaming firm Spotify captures for the first time the complex math behind counterfactual analysis, a precise technique that can be used to identify the causes of past events and predict the effects of future ones. The model, described earlier this year…
Read More
Three ways AI chatbots are a security disaster
AI language models are the shiniest, most exciting thing in tech right now. But they’re poised to create a major new problem: they are ridiculously easy to misuse and to deploy as powerful phishing or scamming tools. No programming skills are needed. What’s worse is that there is no known fix. Tech companies are racing…
Read More
What if we could just ask AI to be less biased?
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Think of a teacher. Close your eyes. What does that person look like? If you ask Stable Diffusion or DALL-E 2, two of the most popular AI image generators, it’s a white…
Read More
ChatGPT is about to revolutionize the economy. We need to decide what that looks like.
Whether it’s based on hallucinatory beliefs or not, an artificial-intelligence gold rush has started over the last several months to mine the anticipated business opportunities from generative AI models like ChatGPT. App developers, venture-backed startups, and some of the world’s largest corporations are all scrambling to make sense of the sensational text-generating bot released by…
Read More
March 20 ChatGPT outage: Here’s what happened
An update on our findings, the actions we’ve taken, and technical details of the bug.
Read More
ChatGPT plugins
We’ve implemented initial support for plugins in ChatGPT. Plugins are tools designed specifically for language models with safety as a core principle, and help ChatGPT access up-to-date information, run computations, or use third-party services.
Read More
These new tools let you see for yourself how biased AI image models are
Popular AI image-generating systems notoriously tend to amplify harmful biases and stereotypes. But just how big a problem is it? You can now see for yourself using interactive new online tools. (Spoiler alert: it’s big.) The tools, built by researchers at AI startup Hugging Face and Leipzig University and detailed in a non-peer-reviewed paper, allow…
Read More
The bearable mediocrity of Baidu’s ChatGPT competitor
China Report is MIT Technology Review’s newsletter about technology developments in China. Sign up to receive it in your inbox every Tuesday. Did you stay up late last week to watch the release of Ernie Bot, the first Chinese rival to ChatGPT? It felt like the most anticipated event in China’s tech world so far this year,…
Read More
How AI experts are using GPT-4
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. WOW, last week was intense. Several leading AI companies had major product releases. Google said it was giving developers access to its AI language models, and AI startup Anthropic unveiled its AI assistant Claude…
Read More
Language models might be able to self-correct biases—if you ask them
Large language models are infamous for spewing toxic biases, thanks to the reams of awful human-produced content they get trained on. But if the models are large enough, and humans have helped train them, then they may be able to self-correct for some of these biases. Remarkably, all we have to do is ask. That’s…
Read More