Using LLMs as embedding models
Hey there, I was wondering whether using LLMs like Llama 3.1, Gemma 2, etc. just to embed the information could yield better results than using dedicated embedding models like BGE or Cohere, since much more training goes into those LLMs. Does anybody have experience with that? I guess the main problem is the substantial cost of running a whole large language model compared to running only an embedding model?
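
To clarify what I mean, here's a rough sketch (Python/transformers) of the kind of setup I'm imagining: take a decoder-only LLM, run a forward pass, and pool the hidden states into a sentence vector. The model name and the mean-pooling choice are just placeholders for illustration, not a claim that this is the best way to do it:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder model; any causal LM with accessible hidden states would do.
model_name = "meta-llama/Llama-3.1-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

def embed(texts):
    # One forward pass, keeping only the final hidden states.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, seq, dim)
    # Mean-pool over non-padding tokens (one common pooling choice;
    # last-token pooling is another).
    mask = batch["attention_mask"].unsqueeze(-1)
    emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    # L2-normalize so dot products act as cosine similarities.
    return torch.nn.functional.normalize(emb, p=2, dim=1)

vecs = embed(["What is RAG?", "Retrieval-augmented generation explained"])
print(vecs.shape, (vecs[0] @ vecs[1]).item())
```

That's roughly the cost concern: every embedding call is a full forward pass through an 8B+ model, versus a few hundred million parameters for something like BGE.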