Data is the foundation of generative AI (GenAI) applications in the enterprise: it is what teaches large language models (LLMs) a company's knowledge and internal workings. One approach rapidly gaining traction in the enterprise is Retrieval-Augmented Generation (RAG), a method that grounds LLMs in an accurate knowledge base, reducing hallucinations and improving relevance.
For many data teams new to RAG — or even those already dipping their toes in — it’s a paradigm shift in everything from data modeling to the systems used.
This talk by Akash Sagar, Engineering Manager at Glean, covers five key considerations when implementing RAG, including:
Embedding models: How your data model feeds into your embedding model
Search systems: How data systems are evolving into quasi-search systems and what that means for data teams
Ranking: What data should land in your embedding model versus your ranking model
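To make the split between embedding-based retrieval and prompt assembly concrete, here is a minimal, self-contained sketch of the RAG retrieval loop. It is illustrative only: it uses toy bag-of-words vectors and cosine similarity in place of a learned embedding model and a production search system, and all function names (`embed`, `retrieve`, `build_prompt`) are hypothetical, not drawn from the talk or from Glean's platform.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call a
    # learned embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    # A production pipeline would add a separate ranking model here.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the LLM by inlining the retrieved context into the prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Even in this simplified form, the sketch shows why the talk's considerations matter: the quality of `embed` determines what can be found, `retrieve` is where data systems start behaving like search systems, and the cut between retrieval and a downstream ranking model decides what context reaches the LLM.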
Sagar draws from hands-on experience developing a Work AI platform, offering data leaders actionable insights for adopting RAG. CDOs and data teams attending will gain practical tools to harness the power of RAG, positioning their organizations for AI-driven success.
Don’t miss this opportunity to learn from one of the industry’s leading voices on RAG and generative AI in the enterprise.