Eliezer Yudkowsky
Eliezer S. Yudkowsky (/ˌɛliˈɛzər ˌjʌdˈkaʊski/ EH-lee-EH-zər YUD-KOW-skee; born September 11, 1979) is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.
| Eliezer Yudkowsky | |
|---|---|
| Yudkowsky at Stanford University in 2006 | |
| Born | Eliezer Shlomo Yudkowsky, September 11, 1979 |
| Organization | Machine Intelligence Research Institute |
| Known for | Coining the term friendly artificial intelligence; research on AI safety; rationality writing; founding LessWrong |
| Website | www |