Prompting
To ensure that you get the most accurate and helpful responses from Credal, it’s crucial to craft your prompts effectively.
Here’s why prompt engineering is worth your attention:
- Improves Accuracy: Well-crafted prompts lead to more precise and relevant answers, reducing the likelihood of misinterpretations or generic responses.
- Saves Time: By clearly directing the AI, you can obtain the desired information with fewer attempts.
- Enhances Complexity Handling: Complex tasks often require nuanced understanding. Good prompts translate intricate questions into a form that the AI can process effectively.
- Facilitates Innovation: Mastering prompt engineering allows you to push the boundaries of AI’s applications, driving innovation and enabling new solutions to complex problems.
This guide will provide you with the foundational understanding and practical steps to optimize your interactions with Credal for precise and valuable responses.
1. What is the Background Prompt?
This is arguably the most effective way to customize your copilot. It defines the landscape in which your copilot operates, detailing its role, response style, and the types of queries it should handle. Crafting an effective background prompt ensures your copilot can seamlessly integrate into your specific use case.
2. Constructing a Background Prompt
Role and Goal
Outline the specific role your copilot is expected to fulfill. Include details such as:
- Who: The specific team or individual utilizing the copilot.
- What: The usual nature of queries or tasks it will address.
- Where: The operational context, be it customer service, technical support, etc.
Example
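A hypothetical role-and-goal opening might look like the following (the team, company, and query types named here are illustrative, not from Credal's docs):

```
You are a support copilot for the Customer Success team at Acme Corp.
Who: Customer Success managers handling enterprise accounts.
What: Questions about product configuration, billing policies, and escalation procedures.
Where: You operate inside our internal support workflow; your answers are read by
employees, not end customers.
```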
Output Format
LLMs can be a bit unpredictable, but luckily we can rein them in with clear instructions. Providing a template for expected responses is one way to do this.
Here’s an example of setting up an output format for entity extraction:
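A sketch of what such a template might look like (the entity fields below are illustrative placeholders, not a Credal-defined schema):

```
For every message, extract the entities below and respond ONLY in this format:

Company: <company name, or "none">
Contact: <person's name, or "none">
Request type: <one of: bug report, feature request, billing, other>
Urgency: <low | medium | high>
```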
Another more crude example:
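For instance, a blunter version (again illustrative):

```
Always answer in exactly three bullet points, each under 20 words.
If you don't know the answer, reply with "I don't know" and nothing else.
```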
Response Guidelines
Craft detailed instructions that anticipate potential pitfalls and standard operational procedures. Your Copilot will only know as much as you tell it. Noticing that the responses are too long? Specify a response length! Think there’s too much fluff in the language? Ask for concise language.
Credal has prompt snippets right under the background prompt in the Copilot Configuration tab. We recommend using those as a starting point for generating more useful responses; however, your specific use case may require much more detail!
Here’s an example of a fully crafted background prompt…
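Putting the pieces together, a complete background prompt might read something like this (every name and policy below is a made-up illustration, not a recommended default):

```
You are a technical support copilot for the Platform Engineering team at Acme Corp.

Role: Answer engineers' questions about our internal deployment tooling and
on-call runbooks.

Output format:
- Start with a one-sentence direct answer.
- Follow with numbered steps when the question is procedural.
- End with the source documents you used.

Guidelines:
- Keep answers under 200 words unless the user asks for detail.
- Use concise, direct language; avoid filler phrases.
- If the question is outside deployment tooling or runbooks, say so and suggest
  the right team to contact instead of guessing.
```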
3. Start with Simple Prompts and Iterate
Begin with simple prompts (zero-shot) and evolve to more complex ones (few-shot) as needed. If further precision is required, consider fine-tuning the AI model:
Zero-shot Example:
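For instance (an illustrative prompt, not from Credal's docs):

```
Classify the sentiment of this customer message as positive, negative, or neutral:
"The new dashboard is great, but exports are still painfully slow."
```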
Few-shot Example:
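The same task with a few worked examples added (illustrative):

```
Classify the sentiment of each customer message as positive, negative, or neutral.

Message: "Love the new search feature!"
Sentiment: positive

Message: "The app crashed twice today."
Sentiment: negative

Message: "The new dashboard is great, but exports are still painfully slow."
Sentiment:
```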
Another example…
- Zero-shot Example:
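A second illustrative case, this time extraction rather than classification (the product names are invented):

```
Extract the product name and requested action from this ticket:
"Please enable SSO for our Atlas workspace."
```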
- Few-shot Example:
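And the few-shot version of that same extraction task (illustrative):

```
Extract the product name and requested action.

Ticket: "Can you reset the API key for Orion?"
Product: Orion | Action: reset API key

Ticket: "Please enable SSO for our Atlas workspace."
Product:
```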
4. Clarity Over Vagueness
Avoid imprecise language. Precise prompts lead to more accurate outcomes:
Less effective:
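For example (an illustrative vague prompt):

```
Tell me about our onboarding docs.
```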
Better:
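A sharper version of the same request (illustrative):

```
Summarize the five required steps in our engineering onboarding guide,
and list which steps need manager approval.
```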
5. Say What to Do Instead of What Not to Do
Provide positive instructions to guide the model effectively.
Less effective:
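For example (an illustrative negative-only instruction):

```
Don't use technical jargon. Don't write long answers.
```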
Better:
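The same intent stated positively (illustrative):

```
Explain in plain language a non-engineer can follow, in three sentences or fewer.
```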
6. Save your work in a “Suggested Question”
So, you did all the work to craft the perfect prompt for your super specific use case… only to lose it when you hit send? Think again. If you’re a collaborator on a copilot, you can copy-paste your work of art into a Suggested Question in the Copilot config. Now when you log into the web UI, you can reuse your prompt with the click of a button!
We currently don’t support autocompleting prompts for our Slack integration. For Copilots deployed to Slack, we recommend crafting the background prompt to handle the diverse types of messages sent in the channel of your choice.
7. Use the Latest Model
Utilize the most recent and capable models to achieve the best results. Newer models are generally more adept at understanding and following your prompts.
8. Incorporate Feedback Loops
Continuously refine your prompts based on the quality of your Copilot’s responses to improve accuracy and relevance. Negative feedback logs are a great resource for this!
We are currently working on some features that will incorporate negative feedback and usage trends into the copilot configuration workflow by suggesting ways in which you can improve your background prompt. Stay tuned!
9. What not to ask…
- Anything about access controls. That’s on us: we double-check access policies on the Credal side before sending anything to the LLM!
- Anything about a Credal concept (Copilots, Document collections, “pinned” sources, etc.). The LLM doesn’t know what these are; we do the searching and consolidation of information for you.
Feel free to adapt these examples and practices to best fit your specific use cases with Credal AI.