ChatGPT & the Inflection Point
- ChatGPT launch (Nov 2022) was the fastest product to 100M users in history
- Hit 1M users in 5 days; 100M in 60 days — Instagram took 2.5 years
- Sam describes it as “releasing something we didn’t know how well it worked into the wild”
- Why the timing mattered
- RLHF (Reinforcement Learning from Human Feedback) had been developed over years (a minimal reward-model sketch follows this list)
- InstructGPT showed models could be made helpful, harmless, and honest simultaneously
- The interface — a simple chat box — was the key design decision
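The RLHF recipe mentioned above reduces to three steps: collect human preference comparisons, fit a reward model on them, then fine-tune the language model against that reward (PPO in the InstructGPT paper). The snippet below is a minimal, hypothetical sketch of just the reward-model step using a pairwise preference loss; the model, names, and random toy data are illustrative and not anything from the episode or OpenAI's code.

```python
# Minimal, illustrative sketch of the reward-model step in RLHF
# (pairwise preference loss). All names and data are hypothetical.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Scores a (prompt + response) embedding with a single scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry style objective: the human-preferred response should
    # receive a higher reward than the rejected one.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

# Toy training loop on random "embeddings" standing in for model activations.
model = TinyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    chosen, rejected = torch.randn(8, 16), torch.randn(8, 16)
    loss = preference_loss(model(chosen), model(rejected))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the full pipeline, this learned reward then scores the language model's sampled responses during the RL fine-tuning stage.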
GPT-4 Capabilities
- Passes the bar exam in the top 10% of test takers
- Scores 163 on the LSAT (roughly the 88th percentile of test takers)
- Sam’s view: “We are building something that is genuinely powerful and we are being careful about it”
- Multimodality (vision) was a significant architectural leap
- Allows GPT-4 to understand charts, handwritten equations, images in context
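For a concrete sense of what the multimodal interface looks like in practice, the sketch below asks a vision-capable model about a chart via the OpenAI Python SDK's chat-completions call; the model identifier, image URL, and prompt are placeholders, and the episode itself does not go into API details.

```python
# Hedged sketch: asking a vision-capable GPT-4-class model about a chart.
# Model name, image URL, and API key handling are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model; identifier is a placeholder
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```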
AGI — Definition and Timeline
- Sam refuses to commit to a specific AGI date
- “I think we could be pretty close, or we could be further than people think”
- His working definition: AGI = “a system that can do the work of the median human across most cognitive domains”
- The path he sees
- Current models: very good at specific tasks, inconsistent across domains
- Next milestone: models that can reason reliably and plan over long horizons
- “Agents” — AI that can take actions in the world — as a key step toward AGI (a toy agent loop follows this list)
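The "agents" idea above, AI that can take actions in the world, is essentially a loop: observe, choose an action, execute it with a tool, feed the result back, repeat. The toy loop below is a hypothetical stand-in; the policy and environment are stubs, not an LLM or real tools.

```python
# Toy agent loop: the policy picks an action, the environment returns an
# observation, and the loop continues until the task is done.
def policy(observation: str) -> str:
    # Stand-in for an LLM deciding the next action from the latest observation.
    return "search:flight prices" if "start" in observation else "finish"

def environment(action: str) -> str:
    # Stand-in for tools the agent can call (search, code execution, browsing).
    return f"result of {action}"

observation, history = "start: book a trip", []
for _ in range(10):                      # hard step limit keeps the loop bounded
    action = policy(observation)
    history.append((observation, action))
    if action == "finish":
        break
    observation = environment(action)
print(history)
```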
AI Safety — OpenAI’s Approach
- The “capped profit” structure
- OpenAI is a non-profit that controls a for-profit subsidiary
- Investors get capped returns (currently ~100× on investment); returns above the cap go to the mission (a worked example follows this list)
- Sam: “We need a lot of capital to do this safely. We don’t want to just give the company away.”
- Iterative deployment as safety strategy
- Releasing progressively more powerful models to the public — learning in the open
- “Each deployment teaches us something we couldn’t learn in a lab”
- Red-teaming
- Hundreds of external researchers try to break each model before release
- They specifically look for CBRN (chemical, biological, radiological, nuclear) uplift risks
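To make the capped-return structure above concrete, here is a toy calculation with hypothetical numbers (a $10M investment and a 100× cap; the actual terms vary by round and are not fully public):

```python
# Toy illustration of a capped-return structure (numbers are hypothetical).
def split_proceeds(investment: float, cap_multiple: float, gross_return: float):
    """Return (investor_share, nonprofit_share) for a capped-profit deal."""
    cap = investment * cap_multiple                 # maximum the investor can receive
    investor_share = min(gross_return, cap)         # investor is paid up to the cap
    nonprofit_share = max(gross_return - cap, 0.0)  # excess flows to the mission
    return investor_share, nonprofit_share

# A $10M investment with a 100x cap that eventually returns $2B gross:
investor, nonprofit = split_proceeds(10e6, 100, 2e9)
print(f"Investor receives ${investor / 1e6:.0f}M, non-profit receives ${nonprofit / 1e6:.0f}M")
# -> Investor receives $1000M, non-profit receives $1000M
```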
OpenAI’s Competition & Strategy
- Google, Anthropic, Mistral, Meta — a crowded field
- Sam is sanguine: “Competition is good — it increases the pace of safety research too”
- The key differentiator he points to: compute scale + alignment research depth
- The Microsoft partnership
- The ~$10B investment gave OpenAI the Azure compute needed to train at scale, with Azure as its exclusive cloud for training
- In exchange, Microsoft gets deeply integrated models (Copilot, Bing, etc.)
- Sam: “We needed compute at a scale no one else was willing to fund”
- Open source vs. closed development debate
- OpenAI’s position: closed development until safety is better understood
- Meta (LLaMA) takes the opposite stance — Sam respectfully disagrees
What Sam Altman Thinks About Most
- The alignment problem remains unsolved at the frontier
- “We don’t really know how to make a model that is robustly aligned across all situations”
- Scalable oversight (using AI to supervise AI) as a candidate solution (a toy critique-loop sketch follows this list)
- The “principal-agent” problem of AI
- Who does the AI serve when there are conflicts? The user? The company? Society?
- Sam sees this as the central political question of the next decade
- Abundance vs. power concentration
- His biggest fear: a small group (including OpenAI itself) gaining disproportionate power through AI
- “If we ever find ourselves doing things that concentrate power inappropriately, treat that as a signal something has gone badly wrong”
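The scalable-oversight idea flagged above, using AI to supervise AI, can be pictured as a critique loop: a strong model proposes an answer, a second model writes a short critique, and the human adjudicates only the critique. The sketch below is purely illustrative; the "models" are stubs, not a technique described in the episode.

```python
# Illustrative sketch of a scalable-oversight loop: an assistant model answers,
# a critic model critiques, and a human only reviews the short critique.
from dataclasses import dataclass

@dataclass
class Review:
    answer: str
    critique: str
    approved: bool

def assistant(task: str) -> str:
    return f"Proposed answer to: {task}"              # stand-in for a strong model

def critic(task: str, answer: str) -> str:
    return f"Possible issue with '{answer[:30]}...'"  # stand-in for a second model

def human_judges(critique: str) -> bool:
    # The human reads only the compressed critique, not the full work product.
    return "fatal" not in critique.lower()

def oversee(task: str) -> Review:
    answer = assistant(task)
    critique = critic(task, answer)
    return Review(answer, critique, approved=human_judges(critique))

print(oversee("Summarize this 500-page safety report"))
```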
science • AI
Sam Altman — OpenAI and the Future of AI
Lex Fridman Podcast • Ep. 367
Lex Fridman sits down with OpenAI CEO Sam Altman to discuss GPT-4, AGI timelines, AI safety, the ChatGPT phenomenon, and what it means to build the most transformative technology in human history.