AI presents numerous risks to humanity. Meanwhile, progress in AI capabilities is accelerating at a frantic pace, and humanity is not prepared for the consequences. AI companies are driving each other in a race to develop superhuman intelligence, in which safety is actively sacrificed for monetary gain. We must push our governments to step in and prevent AI from reaching superhuman levels before we know how to make it safe. This pause needs to happen at an international level, including with current US adversaries.
This protest will take place before the UN votes on the Governing AI for Humanity report. Our goal is to convince the few influential individuals behind this report to be the adults in the room and vote to enforce AI safety. It's up to us to make them understand that they may be the only ones with the power to fix this problem.
Present Dangers
- Fake news, polarization and a loss of trust in information (1)
- Deepfakes and impersonation (1)
- Scalable, untraceable bias and discrimination against vulnerable groups (1)
- Mental health harms, heightened addiction and disconnection between people (1)
- Lethal autonomous weapons (1, 2)
- Empowerment of terrorist groups (1)
Near-Future Dangers
- Massive job loss and human unemployability (1, 2)
- Biological weapons and novel pandemics (1, 2)
- Mind-reading and extraction of secrets (1, 2)
- Breaking cybersecurity that runs the internet (1, 2, 3)
- Total human annihilation (warned of by OpenAI CEO Sam Altman, a lead investor at Anthropic, Bill Gates, "Godfather of AI" Geoffrey Hinton, Stephen Hawking, Elon Musk, and many others)