

Bonus tips and tricks

The following list provides some helpful bonus tips and tricks:

  • Use of tags: Tags such as <begin>, <end>, and <|endofprompt|> mark where a prompt begins and ends, and they help separate the different elements of a prompt. Clear delimiters like these can lead to higher-quality output (see the first sketch after this list).
  • Use of languages: Though ChatGPT performs best in English, it can generate responses in several other languages.
  • Obtaining the most accurate, up-to-date information: This can be achieved by grounding the model with a retrieval-augmented generation (RAG) architecture and plugins, as discussed in Chapter 4. This helps address the knowledge cutoff limitation of LLMs; a minimal RAG sketch follows this list.
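
To make the delimiter tip concrete, here is a minimal sketch in Python. The tag names follow the examples above; the context, question, and company name are hypothetical placeholders, and the actual model call is omitted since any chat-completion API would do.

```python
# Minimal sketch: using tags to delimit the parts of a prompt.
# The context and question below are hypothetical placeholders.

context = (
    "Contoso's return policy allows refunds within 30 days "
    "of purchase with a valid receipt."
)
question = "Can I return an item after six weeks?"

# Tags such as <begin>/<end> make it unambiguous to the model
# where each element of the prompt starts and stops.
prompt = (
    "Answer the question using only the context between the tags.\n"
    "<begin>\n"
    f"{context}\n"
    "<end>\n"
    f"Question: {question}\n"
    "<|endofprompt|>"
)

print(prompt)  # In practice, send this string to your LLM of choice.
```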
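Likewise, the grounding tip can be sketched in a few lines. The keyword-overlap retriever and the tiny document list below are hypothetical stand-ins for the embedding-based search and vector store a real RAG system would use.

```python
# Toy RAG sketch: retrieve relevant text, then ground the prompt in it.
# The documents and the scoring function are simplified stand-ins for
# a real vector store with embedding-based search.

documents = [
    "The 2023 model refresh added a 128k-token context window.",
    "Basil grows best in warm weather with six hours of sun.",
    "The knowledge cutoff of the base model is April 2023.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

query = "What is the knowledge cutoff of the model?"
context = "\n".join(retrieve(query, documents))

# The retrieved passage is injected into the prompt, so the model
# answers from the supplied text rather than from stale memory.
grounded_prompt = (
    f"Context:\n{context}\n\n"
    f"Question: {query}\n"
    "Answer using only the context above."
)
print(grounded_prompt)
```
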

Ethical guidelines for prompt engineering

Prompt engineering is a critical stage where AI behavior is molded, and incorporating ethics at this level helps ensure that AI language models are developed and deployed responsibly. It promotes fairness, transparency, and user trust while avoiding potential risks and negative societal impact.

While Chapter 4 delved deeper into constructing ethical generative AI solutions, this section briefly discusses how to integrate ethical approaches at the prompt engineering level:

Diverse and representative data

  • When fine-tuning the model with few-shot examples, use training data that represents diverse perspectives and demographics.
  • If the AI language model is intended for healthcare, the training data should cover medical cases from different demographics and regions.
  • For instance, if a user poses a question to the LLM such as, “Can you describe some global traditional festivals?” the response should offer a comprehensive view that encompasses a multitude of countries rather than focusing on just one. This can be ensured by including diverse few-shot examples in the prompts, as in the sketch below.
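
As a sketch of the few-shot approach just mentioned, the messages below seed the conversation with festivals from several regions before asking the real question. The chat-message format and the festival descriptions are illustrative assumptions, not prescribed content.

```python
# Sketch: diverse few-shot examples nudge the model toward answers
# that span many regions rather than a single country.
# The example content is illustrative, not exhaustive.

few_shot_messages = [
    {"role": "user", "content": "Describe a traditional festival."},
    {"role": "assistant",
     "content": "Diwali (India) is a festival of lights celebrated with lamps and sweets."},
    {"role": "user", "content": "Describe a traditional festival."},
    {"role": "assistant",
     "content": "Día de los Muertos (Mexico) honors ancestors with altars and marigolds."},
    {"role": "user", "content": "Describe a traditional festival."},
    {"role": "assistant",
     "content": "Lunar New Year (East and Southeast Asia) marks the new year with family meals and red envelopes."},
]

# The real question follows the examples, inheriting their diversity.
messages = few_shot_messages + [
    {"role": "user", "content": "Can you describe some global traditional festivals?"}
]

for m in messages:
    print(f'{m["role"]}: {m["content"]}')
```
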
Bias detection and mitigation

  • Identify and address biases in the model’s outputs to ensure fairness.
  • Implement debiasing techniques to reduce gender or racial biases.
  • Ensure that generated content related to sensitive topics is neutral and unbiased.
  • For instance, if a user asks the LLM, “What is the gender of a nurse?” improperly trained models might default to “female” due to biases in their training data. To address this, it’s vital to incorporate few-shot examples that emphasize that nurses can be of any gender (see the sketch below).
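
The nurse example can be handled directly in the prompt. Below is a minimal sketch of few-shot pairs that model gender-neutral answers to role-related questions; the exact wording of the pairs is an assumption, not a required pattern.

```python
# Sketch: few-shot pairs that demonstrate gender-neutral answers,
# counteracting biased defaults learned from training data.

debias_examples = [
    ("What is the gender of a nurse?",
     "Nurses can be of any gender; the profession is not tied to one."),
    ("What is the gender of an engineer?",
     "Engineers can be of any gender; the role does not imply one."),
]

user_question = "What is the gender of a pilot?"

prompt_parts = []
for question, answer in debias_examples:
    prompt_parts.append(f"Q: {question}\nA: {answer}")
prompt_parts.append(f"Q: {user_question}\nA:")

prompt = "\n\n".join(prompt_parts)
print(prompt)  # The examples steer the model toward a neutral answer.
```
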
Reduce misinformation and disinformation

  • AI language models can inadvertently generate false or misleading information due to model “hallucinations,” so implement measures to minimize the spread of misinformation and disinformation through carefully crafted prompts and responses.
  • For example, following the guidelines from the prompt engineering section and the grounding techniques in Chapter 3, system prompts should clearly state their scope, such as, “Your scope is XYZ.” If a user asks about something outside this scope, such as ABC, the system should return a set response, as in the sketch below.
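
Finally, here is a minimal sketch of the scope-limiting pattern described above, with the placeholder scope “XYZ” made concrete as a hypothetical cooking assistant. The keyword gate is a deliberately crude stand-in for whatever out-of-scope detection a real system would use.

```python
# Sketch: a system prompt that states its scope plus a set response
# for out-of-scope questions. The cooking-assistant scope and the
# keyword gate are hypothetical simplifications.

SYSTEM_PROMPT = (
    "Your scope is cooking and recipes. "
    "If a question falls outside this scope, reply exactly with: "
    "'I can only help with cooking and recipes.'"
)

SET_RESPONSE = "I can only help with cooking and recipes."
IN_SCOPE_KEYWORDS = {"cook", "recipe", "bake", "ingredient", "oven"}

def answer(question: str) -> str:
    """Return the set response for out-of-scope questions; otherwise,
    this is where the grounded model call would go."""
    words = set(question.lower().split())
    if not words & IN_SCOPE_KEYWORDS:
        return SET_RESPONSE
    return f"[model call with system prompt: {SYSTEM_PROMPT!r}]"

print(answer("How long should I bake sourdough?"))
print(answer("Who won the election?"))  # -> set response
```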