Eliezer Yudkowsky is an American artificial intelligence researcher, writer, and decision theorist best known for his work on the risks posed by advanced AI systems, particularly the concepts of friendly AI and AI alignment.
Source: Lex Fridman interview, March 2023
Source: Multiple interviews and posts in 2023
Source: Multiple recent writings and interviews
Source: Various Alignment Forum discussions
Source: AGI Ruin: A List of Lethalities and related posts
Source: COVID analysis posts
Source: COVID prediction posts on LessWrong
Source: Predictions about AI development patterns
Source: Various Alignment Forum posts
Source: Historical skepticism about LLMs referenced in interviews
Source: Hanson-Yudkowsky FOOM debate 2008
Source: Historical positions referenced in critiques
Source: Hanson-Yudkowsky debate context
Source: Early 2000s SIAI documentation
Source: SIAI early documentation
Source: Forum post from 1999, original link not found