Zeta-Alpha-E5-Mistral: Finetuning LLMs for Retrieval (with Arthur Câmara)
In the 30th episode of Neural Search Talks, we have our very own Arthur Câmara, Senior Research Engineer at Zeta Alpha, presenting a 20-minute guide on how we fine-tune Large Language Models for effective text retrieval. Arthur discusses the common issues with embedding models in a general-purpose RAG pipeline, how to tackle the lack of retrieval-oriented data for fine-tuning with InPars, and how we adapted E5-Mistral to rank in the top 10 on the BEIR benchmark.
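For a concrete picture of the InPars approach Arthur describes, here is a minimal sketch of few-shot synthetic query generation: prompt an instruction-tuned LLM to write a plausible search query for an unlabeled document, yielding (query, document) pairs that can be filtered and used as positives when fine-tuning a retriever. The model name, prompt, and helper names below are illustrative assumptions, not the exact setup from the episode or the papers linked under Sources.

```python
# Illustrative InPars-style synthetic query generation (a sketch, not the
# exact recipe from the InPars papers or repo): few-shot prompt an LLM to
# produce a query for an unlabeled document.
from transformers import pipeline

# Model choice is an assumption; any capable instruction-tuned LLM works here.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

FEW_SHOT = """Example 1:
Document: The Eiffel Tower was completed in 1889 for the World's Fair in Paris.
Relevant query: when was the eiffel tower built

Example 2:
Document: Photosynthesis converts sunlight, water, and CO2 into glucose and oxygen.
Relevant query: how does photosynthesis work
"""

def generate_query(document: str) -> str:
    """Ask the LLM for one plausible query that the document would answer."""
    prompt = f"{FEW_SHOT}\nExample 3:\nDocument: {document}\nRelevant query:"
    continuation = generator(
        prompt,
        max_new_tokens=32,
        do_sample=True,
        temperature=0.7,
        return_full_text=False,  # keep only the generated continuation
    )[0]["generated_text"]
    return continuation.strip().splitlines()[0]  # first line = the query

corpus_sample = [
    "BEIR is a heterogeneous benchmark for zero-shot evaluation of retrieval models.",
]
# Synthetic (query, document) training pairs; in practice these are filtered
# (e.g. by generation likelihood or a reranker) before being used for fine-tuning.
synthetic_pairs = [(generate_query(doc), doc) for doc in corpus_sample]
```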
## Sources
### InPars
- https://github.com/zetaalphavector/InPars
- https://dl.acm.org/doi/10.1145/3477495.3531863
- https://arxiv.org/abs/2301.01820
- https://arxiv.org/abs/2307.04601
### Zeta-Alpha-E5-Mistral
- https://zeta-alpha.com/post/fine-tuning-an-llm-for-state-of-the-art-retrieval-zeta-alpha-s-top-10-submission-to-the-the-mteb-be
- https://huggingface.co/zeta-alpha-ai/Zeta-Alpha-E5-Mistral
### NanoBEIR