Reasoning Models Can Be Effective Without Thinking
- #Artificial Intelligence
- #LLMs
- #Reasoning Models
- Recent LLMs improve reasoning by generating an explicit, lengthy thinking process before the final answer.
- The paper questions the necessity of explicit thinking, showing that bypassing it (NoThinking) can be effective.
- Under comparable token budgets, NoThinking outperforms Thinking across seven reasoning datasets, especially in low-budget settings.
- NoThinking becomes increasingly competitive under pass@k evaluation as k grows.
- A parallel scaling approach using NoThinking to generate N outputs independently and aggregating them is highly effective.
- This parallel approach outperforms baselines at similar latency and is comparable to Thinking runs that take up to 9x longer.
- The research encourages reconsidering the necessity of lengthy thinking processes for strong reasoning performance.
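The NoThinking idea amounts to prefilling the model's thinking block with a short dummy statement so generation jumps straight to the answer. A minimal sketch of such a prompt builder follows; the `<think>` delimiters and the filler text are illustrative assumptions, since the exact tokens are model-specific and not quoted here from the paper.

```python
def build_nothinking_prompt(question: str) -> str:
    """Build a prompt that bypasses explicit reasoning (NoThinking).

    The assistant turn is prefilled with an already-closed thinking
    block, so the model continues directly with the final answer.
    The chat-template markers and filler sentence below are
    illustrative assumptions, not the paper's verbatim prompt.
    """
    prefill = "<think>\nOkay, I think I have finished thinking.\n</think>\n"
    return f"<|user|>\n{question}\n<|assistant|>\n{prefill}"
```

Feeding this string to a reasoning model as a partial completion makes it treat the thinking phase as already done.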
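The pass@k comparison above is typically computed with the standard unbiased estimator (from the Codex evaluation setup): given n samples of which c are correct, estimate the chance that at least one of k drawn samples is correct.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total generations sampled per problem
    c: number of correct generations among them
    k: samples drawn; returns P(at least one of k is correct).
    """
    if n - c < k:
        return 1.0  # too few incorrect samples: some draw must hit a correct one
    return 1.0 - comb(n - c, k) / comb(n, k)
```

As k grows, this estimate rises whenever c > 0, which is why NoThinking, with its cheaper samples, closes the gap at larger k.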
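The parallel-scaling step samples N NoThinking outputs independently and aggregates them into one answer. A minimal sketch using majority voting as the aggregator (the paper also considers verifier-based selection, so this is one possible instantiation, not the only one):

```python
from collections import Counter

def aggregate_majority(final_answers: list[str]) -> str:
    """Aggregate N independently sampled final answers by majority vote.

    Each element is the extracted final answer of one NoThinking
    sample; ties resolve to whichever answer Counter saw first.
    """
    counts = Counter(final_answers)
    answer, _ = counts.most_common(1)[0]
    return answer
```

Because the N samples are independent, they can be generated concurrently, which is what keeps the wall-clock latency low relative to one long Thinking trace.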