RAG is an important technique, but there are many questions it simply can’t answer. How LlamaIndex is helping provide those answers.
Retrieval augmented generation (RAG) is an important technique that pulls context from external knowledge bases to improve the quality of large language model (LLM) outputs. It also provides transparency into the sources a model drew on, which humans can cross-check.
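In practice, a basic RAG pipeline looks something like the following sketch, which uses LlamaIndex's document loading and vector-index classes (package layout assumes a recent llama-index release; the local `data` folder and the question are hypothetical placeholders, and an LLM API key is assumed to be configured):

```python
# Minimal RAG sketch: index external documents, then answer a query
# grounded in the retrieved passages.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder into the knowledge base.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# At query time, relevant chunks are retrieved and passed to the LLM
# alongside the question before generation.
query_engine = index.as_query_engine()
response = query_engine.query("What does our return policy say about refunds?")
print(response)

# The retrieved source nodes show which documents the answer drew on,
# which is what enables human cross-checking.
for source in response.source_nodes:
    print(source.node.metadata, source.score)
```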
However, according to Jerry Liu, co-founder and CEO of LlamaIndex, basic RAG systems can have primitive interfaces and poor-quality understanding and planning, lack function calling or tool use, and are stateless, with no memory. Data silos only exacerbate these problems. Liu spoke during VB Transform in San Francisco yesterday.
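To illustrate the gap Liu describes, one common way to layer tool use and memory on top of a plain query engine is to wrap it as a tool for an agent. The sketch below reuses the `query_engine` from the earlier example and relies on LlamaIndex's ReAct agent and tool classes (again assuming a recent llama-index release; the tool name and prompt are hypothetical):

```python
# Sketch: an agent that can plan, call the RAG pipeline as a tool,
# and keep conversation state between turns.
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool

# Expose the RAG query engine as one callable tool among potentially many.
rag_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="company_docs",
    description="Answers questions over the indexed company documents.",
)

# Unlike a stateless one-shot RAG query, the agent decides when to call
# the tool and retains chat history across turns.
agent = ReActAgent.from_tools([rag_tool], verbose=True)
print(agent.chat("Summarize the refund policy, then compare it to last year's."))
```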