Quantum physicists have shrunk and "de-censored" DeepSeek R1
4 days ago
- #model compression
- #quantum computing
- #AI censorship
- Quantum physicists at Multiverse Computing created DeepSeek R1 Slim, a 55% smaller version of DeepSeek R1 with reduced censorship.
- The team used quantum-inspired tensor networks to compress the model and remove Chinese censorship layers.
- Testing involved 25 politically sensitive questions, with GPT-5 judging responses for censorship levels.
- Compressed models like DeepSeek R1 Slim aim to save energy and costs while maintaining performance.
- Other compression methods include distillation, quantization, and pruning, but quantum-inspired approaches offer precise redundancy reduction.
- Chinese AI models are required to include censorship, influencing global open-source AI ecosystems.
- Stanford and Princeton researchers found Chinese models exhibit higher censorship rates, especially for Chinese prompts.
- Perplexity AI also released an uncensored DeepSeek R1 variant, but experts caution fully removing censorship is complex.
- Censorship in Chinese AI is deeply embedded, from data collection to final alignment, making complete removal challenging.