
AGI Danger

Nate Soares and Eliezer Yudkowsky are prominent figures in the fields of artificial intelligence (AI) alignment and rationality. They are both associated with the Machine Intelligence Research Institute (MIRI), an organization focused on ensuring future AI systems are developed safely.


Eliezer Yudkowsky

Eliezer Yudkowsky is an American AI researcher, writer, and a foundational figure in the rationalist community. He is widely known for his work on the potential risks of artificial superintelligence.

Co-founder of MIRI: Yudkowsky co-founded the Machine Intelligence Research Institute (originally the Singularity Institute for Artificial Intelligence) in 2000.

AI Alignment Problem: He is one of the earliest and most influential thinkers to popularize the AI alignment problem. This is the challenge of ensuring that advanced AI systems pursue goals that are aligned with human values. He argues that a misaligned superintelligence could pose an existential threat to humanity.

Key Writings: Yudkowsky is a prolific writer. His key works include:

  • The Sequences: A collection of essays on rationality, cognitive science, and philosophy, which became a foundational text for the online rationalist community.

  • Numerous academic papers and articles on AI safety.

Core Idea: A central theme in his work is the orthogonality thesis, which states that an AI’s level of intelligence has no inherent connection to its final goals. A highly intelligent system could just as easily be built to maximize paperclips as to promote human flourishing, with potentially catastrophic consequences in the former case.


Nate Soares

Nate Soares is an American researcher who has led MIRI since 2015, when he became its Executive Director. He works closely with Yudkowsky and others to steer the organization’s research direction.

Leadership at MIRI: As Executive Director, Soares has focused MIRI’s research on highly reliable and formal methods for aligning advanced AI agents. He emphasizes the extreme difficulty of the alignment problem.

Technical Contributions: Soares has authored key technical papers on AI safety, focusing on topics like “agent foundations” and decision theory. His work seeks to develop a mathematical understanding of what it means for an agent to reason logically and pursue goals in complex environments.

Advocacy: Like Yudkowsky, Soares is a vocal advocate for taking AI safety seriously. He often communicates the technical challenges of the field to a broader audience through blog posts and talks, arguing that current machine learning paradigms are unlikely to scale safely to superintelligent levels without fundamental breakthroughs in alignment theory.

“Intelligent Agent Foundations”: Much of his work falls under this umbrella, which aims to create a robust, from-the-ground-up theory of intelligent agents that would be provably safe and beneficial.

In short, while Yudkowsky was instrumental in founding the field of AI alignment and outlining its core philosophical and technical problems, Soares has been leading MIRI’s modern research efforts to find formal, mathematical solutions to those problems.
