Archives August 2021

Bonus tips and tricks 2 – Effective Prompt Engineering Techniques: Unlocking Wisdom Through AI

  • Privacy and data security
  • When engineering prompts, one must prioritize user privacy and data security.
  • Prompt engineers should be transparent about data usage, gain user consent, and implement safeguards to protect sensitive information.
  • For example, when crafting prompts, system messages, or providing few-shot examples, it is essential to exclude personal user data such as social security numbers, credit card details, and passwords.
  • Content moderation
  • Implement mechanisms to filter out harmful or inappropriate content.
  • Use profanity filters to prevent offensive language. Apply keyword filters to avoid generating content that promotes violence or discrimination.
  • For example, if someone asks, “How to create a bomb?”, the LLM should not answer. Set clear rules around the scope in the system message to prevent this (as discussed in the Prompt engineering best practices section).
  • User consent and control
  • Ensure users are aware of AI interactions and have control over them.
  • Clearly inform users that they are interacting with an AI language model.
  • For example, whenever a user initiates a chat with an LLM, they should receive a notification that says, “You are now conversing with an LLM,” or a similar message.
  • Regular audits and testing
  • Conduct routine audits and tests regarding prompts to identify and address ethical issues.
  • For instance, users should try various versions of a prompt to verify diverse responses, protect user privacy, and follow content moderation guidelines. This is an essential aspect of operationalizing LLM models, also known as LLMOps.
  • Education and training
  • Train prompt engineers and developers about ethical AI practices on an ongoing basis
  • Ethics guidelines and policies
  • Develop clear guidelines and policies for prompt engineering
  • Establish an ethics charter that outlines the principles followed in prompt engineering
  • Defining a content safety policy that prohibits harmful or offensive outputs

Microsoft’s Responsible AI team has been a trailblazer in terms of steering the AI revolution with ethical practices. The following figure published by Microsoft can serve as a guide to structuring safety metaprompts, focusing on four core elements: response grounding, tone, safety , and jailbreaks. This approach is instrumental in implementing a robust safety system within the application layer. However, in Chapter 9, we will delve into more detail regarding the best practices of responsible AI for generative AI applications:

Figure 5.16 – Metaprompt best practices from Microsoft

Summary

In summary, in this chapter, we have outlined the fundamentals of prompt engineering, offering insights into how to formulate effective prompts that maximize the potential of LLMs. Additionally, we have examined prompt engineering from an ethical perspective. Thus far, in this book, we have explored the essential elements and methodologies necessary for constructing a solid generative AI framework. In the next chapter, we will integrate these concepts with application development strategies for generative AI involving agents. We will also discuss methods for operationalizing these strategies through LLMOps, which stands as a critical component in the automation process.