March 3, 2022

Lessons learned on language model safety and misuse

We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models.

February 2, 2022

Solving (some) formal math olympiad problems

We built a neural theorem prover for Lean that learned to solve a variety of challenging high-school olympiad problems, including problems from the AMC12 and AIME competitions, as well as two problems adapted from the IMO.

January 27, 2022

Aligning language models to follow instructions

We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment research. These InstructGPT models, which are trained with humans in the loop, are now deployed as the default language models on our API.

December 7, 2021

Hello world!

October 21, 2021

How Accountability Practices Are Pursued by AI Engineers in the Federal Government  

By John P. Desmond, AI Trends Editor. Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event, held virtually and in person this week in Alexandria, Va. Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency […]
