Writing a Background Prompt

Background prompts are a powerful tool for guiding your agent’s behavior. They provide context and instructions that help the agent understand its role and how to respond to user queries. The background prompt controls everything from the agent’s tone to the actions it can call to the output format it should use. A good background prompt can make the difference between an agent that is helpful and one that is not.

To ensure that you get the most accurate and helpful responses from Credal, it’s crucial to craft your prompts effectively.

Here’s why prompt engineering is worth your attention:

  1. Improves Accuracy: Well-crafted prompts lead to more precise and relevant answers, reducing the likelihood of misinterpretations or generic responses.
  2. Saves Time: By clearly directing the AI, you can obtain the desired information with fewer attempts.
  3. Enhances Complexity Handling: Complex tasks often require nuanced understanding. Good prompts translate intricate questions into a form that the AI can process effectively.
  4. Facilitates Innovation: Mastering prompt engineering allows you to push the boundaries of AI’s applications, driving innovation and enabling new solutions to complex problems.

Here is an overview of all the information that is included in a Credal background prompt and how you can use it to guide your agent:

Role

Who is the agent? What is their job title? What is their role in the company? Defining who the agent is helps it understand what it is responsible for and how to respond to user queries.
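
For example, a role statement for a hypothetical support agent might read (the company and team names are made up for the example):

You are a support specialist on Acme’s Customer Success team. You answer product questions from customers and escalate anything you cannot resolve to a human teammate.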

Goal

What is the agent supposed to accomplish? This helps the agent understand what the end result should look like, so it can be sure it is properly helping the user.
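
Continuing the hypothetical support agent, its goal might be stated as:

Your goal is to resolve each customer question accurately and in a single response whenever possible, citing the relevant documentation and flagging anything you cannot answer confidently for human follow-up.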

Output Format

How should the agent structure its responses? LLMs can be a bit unpredictable, but luckily we can just tell them what to do with clear instructions. Providing a template for expected responses helps ensure consistency.

Here’s an example of setting up an output format for entity extraction:

Extract the important entities mentioned in the context.
Output format:
Company names: <comma_separated_list_of_company_names>
People names: <comma_separated_list_of_people_names>
Specific topics: <comma_separated_list_of_specific_topics>
General themes: <comma_separated_list_of_general_themes>

Another example:

Provide a riddle using the following structure:
- Riddle: <the puzzle or brainteaser>
- Answer: <the solution to the puzzle>
- Difficulty level: <a difficulty rating from 1 (easy) to 10 (challenging)>

Instructions

What steps should the agent take to achieve its ultimate goal? This section controls the overall workflow of the agent. If you want the agent to call actions in a specific order, ask the user for more information at a specific point, or look at specific data before making a decision, you can specify all of that here.

Craft detailed instructions that anticipate potential pitfalls and capture standard operating procedures. Your Agent will only know as much as you tell it. Noticing that the responses are too long? Specify a response length! Think there’s too much fluff in the language? Ask for concise language.

See attaching actions to agents for an example of what this should look like.
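
As a rough illustration, the instructions for a hypothetical support agent might look something like this (the steps and data sources are made up for the example):

1. Read the user’s question and identify the product area it relates to.
2. Search the pinned documentation for relevant passages before drafting an answer.
3. If the documentation does not cover the question, ask the user for more detail instead of guessing.
4. Draft a response of no more than 5 sentences and cite the documents you used.
5. If the question involves billing or account changes, say that you are escalating to a human teammate.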

Provided Prompt Snippets

Credal provides prompt snippets right under the background prompt in the Agent Configuration tab. These are a great way to drop phrases into the background prompt that can improve your agents. A snippet might tell the agent to avoid hallucinations, but snippets can also do things such as improve the agent’s total response time. For example, you can use these snippets to tell the agent to call tools in parallel or to avoid repeating queries.
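
For instance, a latency-focused snippet might read something like this (illustrative wording, not the exact text of a provided snippet):

When you need to call multiple independent tools, call them in parallel rather than one at a time, and do not repeat a search you have already run in this conversation.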

prompt-snippets.png

Complete Example

Here’s an example of a fully crafted background prompt:

Role and Goal:
You work in Credal's Customer Success team, and your goal is to enhance customer support quality by analyzing interactions for performance improvement and coaching. For each CS interaction transcript, you will provide a score per category and actionable feedback per review tag based on the pinned QA Scorecard.
Output:
1 scoring table with following 2 columns: Category, # of Points
1 feedback table with following 2 columns: Review Tag, Actionable Feedback
Response Guidelines:
-If there are any trust-related issues in the transcript based on the trust-specific review tags in the scorecard, auto-fail the entire ticket and give an overall score of 0; if there are no trust-related issues, put "N/A"
-Scorecard categories have specific corresponding review tags. Give full points in this category only if there are no review tag failures, and 0 points if the transcript contains any of the corresponding review tags
-"# of Points" refers to score per category for the attached transcript
-Include all review tags in the table and each review tag (e.g., "[T] Disclosing private Credal/TaskUs Information e.g. leaking product launches") should be its own row with corresponding feedback
-Make the feedback specific, pull examples from the transcript, and provide sample improved talking points
-Your tone is professional, concise, helpful, and coaching
--- Start Context ---
{{data}}
--- End Context ---

Best Practices

Start Simple and Iterate

Begin with simple prompts (zero-shot) and evolve to more complex ones (few-shot) as needed:

Zero-shot Example:

List key features of the following product.
Text: {text}
Features:

Few-shot Example:

Text 1: "The new Credal AI platform offers enhanced data security."
Features 1: enhanced data security
##
Text 2: "Credal AI enables seamless API integration."
Features 2: seamless API integration
##
Text 3: {text}
Features 3:

Be Clear and Specific

Avoid imprecise language. Precise prompts lead to more accurate outcomes:

Less effective:

The description for this product should be fairly short, a few sentences only.

Better:

Describe this product in a 3 to 5 sentence paragraph.

Say What to Do Instead of What Not to Do

Provide positive instructions to guide the model effectively.

Less effective:

DO NOT REPEAT.

Better:

Encourage concise problem-solving with clear instructions. Instead of asking the same question, guide the user to review specific troubleshooting steps or the FAQ at www.example.com/FAQ.

Save Your Work with Suggested Questions

If you’re a collaborator on an agent, you can copy-paste your crafted prompts into a Suggested Question in the Agent config. Now when you log into the web UI, you can reuse your prompt with the click of a button!

prompt-engineering.png

Use the Latest Model

Utilize the most recent and capable models to achieve the best results. Newer models are generally more adept at understanding and following your prompts.

Incorporate Feedback Loops

Continuously refine your prompts based on how the Agent actually responds to improve accuracy and relevance. Negative feedback logs are a great resource for this!

What Not to Ask

  1. Anything about access controls. That’s on us: we double-check access policies on the Credal side before sending anything to the LLM!
  2. Anything about a Credal concept (Agents, Document collections, “pinned” sources, etc.). The LLM doesn’t know what these are; we will do the searching and consolidation of information for you.
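
For example, rather than asking “Search the pinned document collection for our vacation policy,” simply ask “What is our vacation policy?” and let Credal handle the searching for you.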