Our third class of OpenAI Scholars presented their final projects at virtual Demo Day, showcasing their research results from over the pa...
We’re excited to announce that OpenAI is co-organizing two NeurIPS 2020 competitions with AIcrowd, Carnegie Mellon University, and DeepMi...
We find that, just as a large transformer model trained on language can generate coherent text, the exact same model trained on pixel seq...
We’re releasing an API for accessing new AI models developed by OpenAI.
We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNe...
We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and arti...
We’ve contributed to a multi-stakeholder report by 58 co-authors at 30 organizations, including the Centre for the Future of Intelligence...
We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision “model organism...
We are standardizing OpenAI’s deep learning framework on PyTorch.
We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and th...
We’re releasing Procgen Benchmark, 16 simple-to-use procedurally generated environments that provide a direct measure of how quickly a r...
We’re releasing Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect s...
As the final step in GPT-2’s staged release, we’re publishing the largest version (1.5B parameters) of GPT-2 along with code and ...
We’ve trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. The neural networks are trained entirely ...
We are now accepting applications for our third class of OpenAI Scholars.
We’ve fine-tuned the 774M parameter GPT-2 language model using human feedback for various tasks, successfully matching the preferences of...
We’ve observed agents discovering progressively more complex tool use while playing a simple game of hide-and-seek. Through training in o...
We’ve developed a method to assess whether a neural network classifier can reliably defend against adversarial attacks not seen during tr...
We’re releasing the 774M parameter GPT-2 language model after the release of our small 124M model in February, staged release of o...
At OpenAI, each Thursday is Learning Day: a day on which employees have the option to self-study technical skills that will make them better...