If generative AI is so intelligent, why are off-the-shelf LLMs so bad at writing a personalized email or answering a customer service question? The simple answer: there’s a lot that AI models simply don’t know. Although LLMs are trained on billions of data points, the information they lack tends to be precisely what you would need to generate a meaningful email or an accurate service reply to one of your customers. Worse, a lack of contextual data can cause LLMs to hallucinate, giving you entirely inaccurate information in their responses.
To avoid this, organizations use a process called grounding that infuses LLM prompts with your internal data — including structured data (like Excel spreadsheets and CRM data) and unstructured data (like PDFs, chat logs, email messages, and blog posts) — “grounding” the prompt in relevant context. It’s what turns a generic generative output into something you might have written yourself.
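The grounding flow described above can be sketched in a few lines: retrieve the internal snippets most relevant to a query, then inject them into the prompt before it reaches the model. This is a minimal illustration, not any vendor's API; the function names, the keyword-overlap retrieval, and the sample documents are all assumptions standing in for a real search or vector index.

```python
# Illustrative sketch of grounding: inject retrieved internal data into an LLM
# prompt. All names here (retrieve_context, build_grounded_prompt) are
# hypothetical, not a specific product's API.

def retrieve_context(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval, standing in for a real search/vector index."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_grounded_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved snippets so the model answers from your data, not guesswork."""
    context_block = "\n".join(f"- {snippet}" for snippet in context)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}\nAnswer:"
    )


# Sample internal documents (unstructured data an off-the-shelf LLM would not know).
docs = [
    "Order #4821 shipped on March 3 via FedEx, tracking 7788-1234.",
    "Our refund policy allows returns within 30 days of delivery.",
    "Premium support hours are 8am-8pm ET, Monday through Friday.",
]

query = "When did order 4821 ship"
prompt = build_grounded_prompt(query, retrieve_context(query, docs))
print(prompt)
```

The grounded prompt would then be sent to the LLM in place of the bare question; because the relevant record travels with the query, the model can answer from your data instead of hallucinating.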
Read the full article on Salesforce.org blog.