
Greg Brockman on OpenAI's Road to AGI

Show Notes

Greg Brockman, co-founder and president of OpenAI, joins us to talk about GPT-5 and GPT-OSS, the future of software engineering, why reinforcement learning is still scaling, and how OpenAI is planning to get to AGI.

Highlights

In this episode of the Latent Space podcast, Greg Brockman, co-founder and president of OpenAI, joins the conversation to explore the cutting-edge developments shaping the future of AI. From the evolution of reasoning in large language models to the practical applications of reinforcement learning, Brockman offers insights into how OpenAI is pushing the boundaries of what's possible in artificial intelligence.
00:04
Greg Brockman is proud of the team's achievements.
01:04
After training GPT-4, the team realized it could do chat and began questioning why it wasn’t AGI.
05:10
Reinforcement learning enables models to learn from fewer examples.
06:44
Compute remains the limiting factor in AI despite efficient algorithms.
08:16
Compute is a fundamental fuel for intelligence, turning energy into reusable models.
13:21
Algorithms can now handle complex environments like Dota.
16:34
A 40B-parameter neural net trained on DNA achieves early GPT-level performance.
19:33
GPT-5 can perform great intellectual feats in hard domains like math and is more reliable than previous versions.
22:51
GPT-5 excels at solving complex intellectual problems and uses interactive coding with user feedback for training.
29:43
OpenAI uses defense in depth with an instruction hierarchy for agent robustness.
31:48
Models are trained on human thought and refined through reinforcement learning.
38:17
Leveraging preference data improves model training.
39:13
GPT-5's router selects between reasoning and non-reasoning models based on application needs.
45:20
A 1,000x cost improvement for the same intelligence since GPT-4.
48:37
Practical engineering decisions demonstrate the model's capabilities using cutting-edge techniques.
49:11
GPT-5 can work with GPT-OSS and Codex infrastructure, enabling seamless interplay and multiplayer workflows.
51:34
AI tools offer a productivity boost, enabling teams to accomplish more work within their current structures.
54:16
Building codebases around LLM strengths with modular design and quick unit tests.
55:27
AI projects are compared to the New Deal and Apollo program in scale, signaling a major economic shift.
58:42
OpenAI focuses on long-term bets for major breakthroughs in AI research.
1:01:45
Teams shifted from robotics to digital tools for faster progress.
1:03:05
Connecting AI models to real-world applications is valuable, despite the feeling that all ideas are taken.
1:04:20
Greg hasn't done angel investing in years, seeing it as a distraction from OpenAI.
1:06:00
Compute will be the key resource shaping future societies.
1:07:07
Greg wishes he had internalized earlier that amazing tools are now available to revolutionize fields.

Chapters

Introductions
00:00
The Evolution of Reasoning at OpenAI
01:04
Online vs Offline Learning in Language Models
04:01
Sample Efficiency and Human Curation in Reinforcement Learning
06:44
Scaling Compute and Supercritical Learning
08:16
Wall-Clock Time Limitations in RL and Real-World Interactions
13:21
Experience with ARC Institute and DNA Neural Networks
16:34
Defining the GPT-5 Era
19:33
Evaluating Model Intelligence and Task Difficulty
22:46
Practical Advice for Developers Using GPT-5
25:06
Model Specs
31:48
Challenges in RL Preferences (e.g., try/catch)
37:21
Model Routing and Hybrid Architectures in GPT-5
39:13
GPT-5 Pricing and Compute Efficiency Improvements
43:58
Self-Improving Coding Agents and Tool Usage
46:04
On-Device Models and Local vs Remote Agent Systems
49:11
Engineering at OpenAI and Leveraging LLMs
51:34
Structuring Codebases and Teams for AI Optimization
54:16
The Value of Engineers in the Age of AGI
55:27
Current State of AI Research and Lab Diversity
58:42
OpenAI’s Prioritization and Focus Areas
01:01:11
Advice for Founders: It's Not Too Late
01:03:05
Future Outlook and Closing Thoughts
01:04:20
Time Capsule to 2045: Future of Compute and Abundance
01:04:33
Time Capsule to 2005: More Problems Will Emerge
01:07:07

Transcript

Alessio: Hey, everyone. Welcome to the Latent Space: The AI Engineer Podcast. This is Alessio, founder of Kernel Labs, and I'm joined by swyx, founder of Small AI.

swyx: Hello, hello. And we are so excited to have Greg Brockman join us.

Greg Brockman: We...