Solving Semantle with the Wrong Embeddings
- #Embedding Models
- #Word Game
- #Semantle
- A Semantle solver uses only the relative ranking of guess scores to find the target word, without access to the exact embedding model Semantle uses.
- Each pairwise comparison between two guesses ("A scored higher than B") cuts the sphere of possible target embeddings roughly in half, rapidly narrowing the candidate set.
- The solver works robustly even with different underlying embedding models, taking about 10-15 guesses to find the target.
- A probabilistic approach handles cases where the solver's model and Semantle's model disagree on rankings; it takes 100-200 guesses but still converges toward the target.
- The solver's behavior mimics human play, gradually homing in on the target by leveraging semantic relationships.
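The core idea above can be sketched as a toy simulation. This is an assumption-laden illustration, not the author's code: it uses random unit vectors as a stand-in vocabulary (a real solver would load word embeddings), and it assumes the feedback oracle and the solver share one model, so every ranking constraint is exact. Each pair of scored guesses becomes a half-space cut that prunes candidates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in vocabulary: random unit vectors instead of a real word
# embedding model (an assumption; the technique only needs relative rankings).
VOCAB, DIM = 5000, 50
emb = rng.normal(size=(VOCAB, DIM))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

target = int(rng.integers(VOCAB))

def semantle_score(word: int) -> float:
    """Semantle-style feedback: cosine similarity of a guess to the target."""
    return float(emb[word] @ emb[target])

candidates = np.ones(VOCAB, dtype=bool)  # words still consistent with feedback
history: list[tuple[int, float]] = []    # (guess, score) pairs seen so far
found = False

for step in range(1, 51):
    # Guess any word still consistent with all constraints so far.
    guess = int(rng.choice(np.flatnonzero(candidates)))
    if guess == target:
        found = True
        break
    history.append((guess, semantle_score(guess)))
    candidates[guess] = False  # already tried, not the target
    # Every ordered pair of guesses (a ranked above b) keeps only candidates t
    # with cos(t, a) > cos(t, b) -- a half-space cut through the sphere.
    # Reapplying all pairs each step is redundant but keeps the sketch simple.
    for a, sa in history:
        for b, sb in history:
            if sa > sb:
                candidates &= (emb @ emb[a]) > (emb @ emb[b])

print(f"found target: {found} after {step} guesses")
```

With g guesses yielding about g(g-1)/2 pairwise cuts, the consistent set shrinks far faster than one elimination per guess, which is why the summary's 10-15 guess figure is plausible when the models match. Handling mismatched models would replace the hard `&=` cut with a soft, probabilistic weight per candidate.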