scripod.com

Superintelligence: To Ban or Not to Ban? Max Tegmark & Dean Ball join Liron Shapira on Doom Debates

Show Notes

Max Tegmark and Dean Ball debate whether we should ban the development of superintelligence in a crossover episode from Doom Debates hosted by Liron Shapira. PSA for AI builders: Interested in alignment, governance, or AI safety? Learn more about the MATS ...

Highlights

In a thought-provoking debate hosted by Liron Shapira, Max Tegmark and Dean Ball present opposing views on whether the development of artificial superintelligence should be halted. The discussion cuts to the core of one of the most pressing technological dilemmas of our time: how to balance rapid innovation with existential risk mitigation. With starkly different assessments of AI’s potential dangers, the two experts explore what responsible governance might look like in an era where machines could surpass human cognitive capabilities.
03:29
Dean Ball estimates p(doom) at 0.01%, significantly lower than the host's estimate of over 1%.
05:43
AI's downside may be greater than that of hydrogen bombs.
09:21
Defining superintelligence in a law could ban beneficial technologies.
17:11
Companies should provide evidence of AI benefits and harms to independent experts, as with thalidomide.
32:35
Digital gain-of-function research lacks binding regulations unlike its biological counterpart.
36:33
Common-law liability systems fail to address tail risks in AI-enabled biothreats.
42:47
Superintelligence has a much greater downside and requires a new regulatory approach.
47:03
A System 2 reasoning model demonstrated significant advances in bio and cyber capabilities, altering the risk assessment.
52:27
Creating an 'FDA for AI' with tiered safety levels could lead to an AI golden age.
1:02:40
Banning unwanted AI outcomes can spur innovation and safety.
1:17:14
If superintelligence takes over, it will be too late to regulate.
1:25:30
Max calculates that controlling superintelligence fails 92% of the time, even in optimistic scenarios.
1:34:40
Pursuing uncontrollable superintelligence is a suicide race.
1:54:17
Respectful debate on AI policy is crucial social infrastructure.
2:00:36
The gap in p(doom) estimates between experts must be resolved quickly to enable effective AI policy decisions.

Chapters

00:00
About the Episode
05:43
Cold open and intro
09:21
Opening statements: ban debate (Part 1)
14:49
Sponsors: Framer | Agents of Scale
17:11
Opening statements: ban debate (Part 2)
26:52
Liability, tail risks (Part 1)
33:24
Sponsors: Tasklet | Shopify
36:32
Liability, tail risks (Part 2)
39:23
Timelines and precautionary regulation
47:03
Defining superintelligence and risk
52:26
Risk-based safety standards
56:28
Current regulations and definitions
1:05:23
Max's doom scenario
1:19:46
P-doom gap and adaptation
1:34:40
National security and China
1:43:57
Closing statements and reflections
1:55:22
Host debrief and outro
2:02:10
Outro

Transcript

Speaker 4: Hello, and welcome back to The Cognitive Revolution. A couple of quick notes before getting started today. First, if you're interested in a career in AI alignment and safety, you should know that MATS will soon be opening applications for their...