Hasty Briefs (beta)

The Appeal and Reality of Recycling LoRAs with Adaptive Merging

4 hours ago
  • #Adaptive Merging
  • #Machine Learning
  • #LoRA
  • The paper explores adaptive merging of LoRA modules to improve performance in machine learning tasks.
  • It investigates recycling user-contributed LoRAs from repositories like the Hugging Face Hub, using nearly 1,000 LoRAs fine-tuned from Llama 3.1 8B-Instruct.
  • Findings suggest adaptive merging improves performance over the base model but offers limited benefits compared to training a new LoRA on the same data.
  • The study finds that the choice of LoRAs to merge has little impact; even randomly initialized LoRAs yield similar performance.
  • This suggests adaptive merging may work via regularization rather than positive cross-task transfer.
  • Positive cross-task transfer is observed only when highly relevant LoRAs are available in the pool.
  • The paper releases model checkpoints and code for further research.
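The core idea behind LoRA merging can be sketched in a few lines: each LoRA contributes a low-rank update ΔW = B·A to a base weight matrix, and a merge combines those updates with per-LoRA coefficients (in adaptive merging these coefficients would be learned on the target task; here they are fixed constants for illustration, and all names are hypothetical, not from the paper's released code):

```python
import numpy as np

def lora_delta(A, B):
    """Low-rank update contributed by one LoRA: delta_W = B @ A."""
    return B @ A

def merge_loras(loras, alphas):
    """Weighted sum of LoRA updates: sum_i alpha_i * (B_i @ A_i).

    In adaptive merging, the alphas would be optimized on held-out
    target-task data instead of being fixed up front.
    """
    return sum(a * lora_delta(A, B) for (A, B), a in zip(loras, alphas))

rng = np.random.default_rng(0)
d, k, r = 8, 8, 2  # output dim, input dim, LoRA rank (r << d, k)

# Three toy LoRAs, each a pair (A: r x k, B: d x r).
loras = [(rng.normal(size=(r, k)), rng.normal(size=(d, r))) for _ in range(3)]
alphas = [0.5, 0.3, 0.2]

W_base = rng.normal(size=(d, k))
W_merged = W_base + merge_loras(loras, alphas)
print(W_merged.shape)
```

The paper's "randomly initialized LoRAs perform similarly" finding suggests that, in this picture, the merged low-rank term may act more like a regularizer on W_base than as a carrier of task knowledge.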