How Does a Blind Model See the Earth?
- #AI
- #LLMs
- #Geography
- The author laments the loss of incomplete maps, which once reflected personal perspectives and the limits of knowledge.
- An experiment is described to visualize how a large language model (LLM) "sees" the Earth by asking it whether specific coordinates lie on land or in water.
- The procedure samples coordinates across the globe, asks the model to classify each point as 'Land' or 'Water', and compiles the answers into a world map (a minimal sketch of this loop follows the list).
- Different LLMs (e.g., Qwen, DeepSeek, GPT, Claude, Gemini) are tested, showing varying degrees of accuracy and detail in their geographical knowledge.
- Results reveal that larger models generally produce more accurate maps, some with surprising detail, while smaller models struggle; accuracy can be scored against a reference land mask (second sketch below).
- The author notes performance differences between base models and fine-tuned variants, and between dense and sparse mixture-of-experts (MoE) models.
- The experiment raises questions about how LLMs internally represent geographical knowledge and how training methods shape that representation (one probing approach is sketched after the list).
- Future directions include exploring base model performance, internal knowledge structures, and expert activation maps in MoE models.
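The sampling loop described above is straightforward to reconstruct. Below is a minimal sketch assuming an OpenAI-compatible chat endpoint; the model name, prompt wording, and grid resolution are illustrative guesses, not the author's exact setup.

```python
# Sketch: query an LLM for land/water at each point of a global grid.
# Model name, prompt, and 4-degree resolution are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Is the point at latitude {lat:.2f}, longitude {lon:.2f} on land or "
    "in water? Answer with exactly one word: Land or Water."
)

def classify(lat: float, lon: float, model: str = "gpt-4o-mini") -> bool:
    """Return True if the model calls the coordinate 'Land'."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(lat=lat, lon=lon)}],
        max_tokens=2,
        temperature=0,
    )
    return reply.choices[0].message.content.strip().lower().startswith("land")

# Sample a coarse regular grid and fill in a boolean land mask.
lats = np.linspace(-90, 90, 46)    # 4-degree steps
lons = np.linspace(-180, 180, 91)
mask = np.array([[classify(lat, lon) for lon in lons] for lat in lats])
np.save("llm_land_mask.npy", mask)
```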
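To score how accurate a resulting map is, the model's answers can be compared against a reference land mask on the same grid and rendered side by side. This sketch uses the global-land-mask package as one convenient ground truth; the post does not say which reference the author used.

```python
# Sketch: render the sampled grid and score it against a reference mask.
import numpy as np
import matplotlib.pyplot as plt
from global_land_mask import globe

mask = np.load("llm_land_mask.npy")        # from the sampling sketch above
lats = np.linspace(-90, 90, mask.shape[0])
lons = np.linspace(-180, 180, mask.shape[1])

# Reference mask on the same grid (globe.is_land broadcasts over arrays);
# clip latitudes since the exact poles fall outside the package's grid.
lon_grid, lat_grid = np.meshgrid(lons, lats)
truth = globe.is_land(np.clip(lat_grid, -89.99, 89.99), lon_grid)

accuracy = (mask == truth).mean()
print(f"agreement with reference mask: {accuracy:.1%}")

fig, axes = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
for ax, grid, title in [(axes[0], mask, "model"), (axes[1], truth, "reference")]:
    ax.imshow(grid, origin="lower", extent=[-180, 180, -90, 90], cmap="terrain")
    ax.set_title(title)
plt.savefig("llm_world_map.png", dpi=150)
```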
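As for how a model internally represents geography, one common technique (an assumption here, not necessarily the author's plan) is a linear probe: collect the final-token hidden state for each coordinate prompt and test whether a linear map recovers land vs. water. The model choice, layer, and prompt below are illustrative.

```python
# Sketch: linear probe over hidden states for land/water, under the
# assumptions named above (small open model, last layer, last token).
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from global_land_mask import globe

name = "Qwen/Qwen2.5-0.5B"  # any small open model works for the sketch
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True).eval()

def hidden(lat: float, lon: float, layer: int = -1) -> np.ndarray:
    """Hidden state of the last token at the chosen layer."""
    ids = tok(f"latitude {lat:.1f}, longitude {lon:.1f}", return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[layer][0, -1].numpy()

# Random coordinates with land/water labels from the reference mask.
rng = np.random.default_rng(0)
lats, lons = rng.uniform(-60, 60, 200), rng.uniform(-180, 180, 200)
X = np.stack([hidden(a, o) for a, o in zip(lats, lons)])
y = globe.is_land(lats, lons)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"held-out probe accuracy: {probe.score(Xte, yte):.1%}")
```

A probe score well above chance would suggest the model carries a linearly decodable land/water signal, which is one concrete way to approach the "internal knowledge structures" question raised above.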