My Thoughts on AI Safety
- #Existential Risk
- #AI Safety
- #Technological Progress
- The author had an unexpectedly productive discussion about AI safety at a Christmas party.
- The author reflects on the unknowable nature of superintelligent AI and its potential risks.
- AI safety discussions often revolve around existential risks, such as AI causing human extinction.
- The author compares AI risks to other technological risks, such as nuclear weapons and synthetic biology.
- The author argues against technological stagnation driven by fear of existential risks, advocating continued progress instead.
- One extreme measure proposed to prevent AI risk is arresting anyone who attempts to create superintelligent AI.
- The author was surprised that AI safety researchers found their thoughts on the topic novel and interesting.
- The author plans to write a blog post explaining why they are not overly afraid of AI destroying humanity and how the risks can be minimized.
- The post will also cover what genuinely scares the author about AI.