Microsoft is investing $1 billion in OpenAI to support us in building artificial general intelligence (AGI) with widely distributed economic...
We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry ...
We hosted the first OpenAI Robotics Symposium on April 27, 2019.
Our second class of OpenAI Scholars has concluded, with all eight scholars producing an exciting final project showcased at Scholars Demo...
Our second class of OpenAI Fellows has wrapped up, with each Fellow going from a machine learning beginner to core OpenAI contributor in ...
We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combi...
We’ve developed the Sparse Transformer, a deep neural network which sets new records at predicting what comes next in a sequence—whether ...
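The Sparse Transformer's efficiency comes from restricting each position to a fixed, sparse set of earlier positions rather than full causal attention. As a rough illustration (not the paper's exact implementation), a strided sparsity pattern can be sketched in numpy; the function name and the specific combination of a local window plus a strided pattern are illustrative:

```python
import numpy as np

def strided_sparse_mask(n, stride):
    """Boolean (n, n) attention mask for a strided sparse pattern:
    each query position i may attend to (a) the `stride` most recent
    positions and (b) earlier positions j with (i - j) divisible by
    `stride`. True = attention allowed; the mask is causal."""
    i = np.arange(n)[:, None]  # query index
    j = np.arange(n)[None, :]  # key index
    local = (i - j >= 0) & (i - j < stride)          # recent window
    strided = ((i - j) % stride == 0) & (j <= i)     # periodic "summary" positions
    return local | strided
```

Each row of the mask has O(stride + n/stride) allowed entries instead of O(n), which is the source of the memory and compute savings on long sequences.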
OpenAI Five is the first AI to beat the world champions in an esports game, having won two back-to-back games versus the world champion D...
We’ll be holding our final live event for OpenAI Five at 11:30am PT on April 13.
We’ve made progress towards stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalizat...
Our class of eight scholars (out of 550 applicants) brings together collective expertise in literature, philosophy, cell biology, statist...
We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while inc...
We’ve created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between ne...
We’re releasing Neural MMO, a massively multiagent game environment for reinforcement learning agents. Our platform supports a large, v...
On February 2, we held our first Spinning Up Workshop as part of our new education initiative at OpenAI.
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when ac...
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performanc...
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in t...
We’ve discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on...
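The "simple" gradient noise scale from that work can be estimated directly from per-example gradients as tr(Σ)/|G|², where G is the mean gradient and Σ the per-example covariance. A minimal numpy sketch (the function name is illustrative, and this is the simplified estimator, not the full accounting in the paper):

```python
import numpy as np

def simple_noise_scale(per_example_grads):
    """Estimate B_simple = tr(Sigma) / |G|^2 from per-example gradients.

    per_example_grads: array of shape (batch, n_params), one flattened
    gradient per training example. A large value suggests training can
    benefit from a larger batch size; a small value suggests it cannot.
    """
    g_mean = per_example_grads.mean(axis=0)            # estimate of the true gradient G
    g_sq_norm = np.dot(g_mean, g_mean)                 # |G|^2
    trace_cov = per_example_grads.var(axis=0, ddof=1).sum()  # tr(Sigma)
    return trace_cov / g_sq_norm

# Toy example: a fixed true gradient plus small per-example noise
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
grads = true_grad + 0.1 * rng.standard_normal((512, 3))
b = simple_noise_scale(grads)
```

With small per-example noise relative to the mean gradient, the estimate is close to zero, matching the intuition that noisy gradients are what make large batches useful.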
We’re releasing CoinRun, a training environment which provides a metric for an agent’s ability to transfer its experience to novel situat...
We’re releasing Spinning Up in Deep RL, an educational resource designed to let anyone learn to become a skilled practitioner in deep rei...
We’ve developed an energy-based model that can quickly learn to identify and generate instances of concepts, such as near, above, between...
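Energy-based models of this kind define a scalar energy E(x) and treat low-energy points as instances of a concept; generation is then approximate sampling from p(x) ∝ exp(−E(x)), commonly via Langevin dynamics. A minimal numpy sketch of that sampling step on a toy quadratic energy (the helper name and hyperparameters are illustrative, not OpenAI's implementation):

```python
import numpy as np

def langevin_sample(energy_grad, x0, steps=2000, step_size=0.05,
                    noise=0.01, rng=None):
    """Approximately sample from p(x) ∝ exp(-E(x)) via Langevin dynamics:
    x <- x - step_size * dE/dx + noise * N(0, I) at each step."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    for _ in range(steps):
        x = x - step_size * energy_grad(x) + noise * rng.standard_normal(x.shape)
    return x

# Toy concept: a quadratic energy well centered at mu, so dE/dx = x - mu.
# Samples should concentrate near mu, the lowest-energy point.
mu = np.array([1.0, -1.0])
sample = langevin_sample(lambda x: x - mu, np.zeros(2))
```

In a learned EBM the gradient of a neural-network energy function replaces the hand-written `energy_grad` here; the sampling loop itself is unchanged.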