As a Salesforce Admin, you know how important it is to get timely feedback on the solutions your users rely on. With a solution like Einstein Copilot—where the user interface (UI) is pervasive, operates in real time, and might give slightly different answers to different users—how can you stay on top of the robustness of your prompts and actions? If a user sees an answer that contains a hallucination, or that provides data irrelevant to the question, how can they easily report that behavior?
You need visibility into how trust mechanisms, like data masking and toxicity detection, are working to ensure the safety and accuracy of generated responses. You also need the flexibility to add custom metrics that measure how artificial intelligence (AI) is performing for you. Together, these capabilities help you identify areas for improvement and optimization across your predictive and generative AI-powered apps.
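To make the idea of a custom metric concrete, here is a minimal sketch of one you might compute from exported user feedback: the share of responses that users flagged as hallucinated or irrelevant. This is an illustration only, not a Salesforce API; the `FeedbackRecord` fields and names are assumptions for the example.

```python
from dataclasses import dataclass

# Hypothetical feedback record; the field names are illustrative
# and do not reflect an actual Salesforce schema.
@dataclass
class FeedbackRecord:
    response_id: str
    flagged_hallucination: bool
    flagged_irrelevant: bool

def flagged_response_rate(records: list[FeedbackRecord]) -> float:
    """Custom metric: fraction of responses users flagged as
    hallucinated or irrelevant."""
    if not records:
        return 0.0
    flagged = sum(
        1 for r in records
        if r.flagged_hallucination or r.flagged_irrelevant
    )
    return flagged / len(records)

# Example: 1 of 3 responses was flagged, so the rate is ~33%.
sample = [
    FeedbackRecord("a1", False, False),
    FeedbackRecord("a2", True, False),
    FeedbackRecord("a3", False, False),
]
print(f"Flagged response rate: {flagged_response_rate(sample):.2%}")
```

Tracked over time, a rising rate like this can point you to the specific prompts and actions that need attention.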