Semantic Kernel – Developing and Operationalizing LLM-based Apps: Exploring Dev Frameworks and LLMOps

Semantic Kernel

Semantic Kernel (SK) is a lightweight, open-source software development kit (SDK) and a modern AI application development framework. It lets software developers build AI orchestration: creating agents, writing code that interacts with those agents, and working with generative AI tooling and concepts such as natural language processing (NLP), which we covered in Chapter 2.

“Kernel” is at the core of everything!

Semantic Kernel revolves around the concept of a “kernel.” The kernel is equipped with the services and plugins needed to execute both native code and AI services, which makes it the central element of nearly every SDK component.

Every prompt or piece of code executed with Semantic Kernel passes through this kernel, giving developers a unified place to configure and monitor their AI applications.

For instance, when a prompt is invoked through the kernel, the kernel selects the optimal AI service, constructs the prompt from a prompt template, dispatches it to the service, and processes the response before delivering it back to the application. The kernel also allows events and middleware to be plugged in at various stages, facilitating tasks such as logging, user updates, and the implementation of responsible AI practices, all from a single, centralized location: the kernel.
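To make this concrete, here is a minimal sketch of registering an AI service with the kernel and invoking a prompt through it. It assumes the Python SDK (`pip install semantic-kernel`, roughly the 1.x line); the model name and API key are placeholders, and exact class or parameter names may vary between SDK versions.

```python
# Minimal sketch: the kernel as the central hub for services and prompts.
# Assumes the semantic-kernel Python package (~1.x); names may vary by version.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion


async def main():
    # The kernel is the single place where AI services and plugins are registered.
    kernel = Kernel()
    kernel.add_service(
        OpenAIChatCompletion(
            service_id="chat",
            ai_model_id="gpt-4o-mini",  # placeholder model choice
            api_key="YOUR_OPENAI_KEY",  # placeholder key
        )
    )

    # Every prompt flows through the kernel: it picks the service, renders the
    # prompt, calls the model, and returns the processed response.
    result = await kernel.invoke_prompt(prompt="Explain what an SDK is in one sentence.")
    print(result)


asyncio.run(main())
```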

Moreover, SK allows developers to define the syntax and semantics of natural language expressions and use them as variables, functions, or data structures in their code. SK also provides tools for parsing, analyzing, and generating natural language from code and, vice versa, generating code from natural language.
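Building on the previous setup, the following sketch shows how a natural-language template with {{$...}} variables behaves like a parameterized function. The template, variable names, and model are illustrative assumptions, not a prescribed API surface.

```python
# Sketch: a prompt template acting like a function with parameters.
# Assumes the semantic-kernel Python package (~1.x); names may vary by version.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import KernelArguments


async def main():
    kernel = Kernel()
    kernel.add_service(
        OpenAIChatCompletion(
            service_id="chat",
            ai_model_id="gpt-4o-mini",  # placeholder
            api_key="YOUR_OPENAI_KEY",  # placeholder
        )
    )

    # {{$style}} and {{$text}} are template variables the kernel resolves at
    # invocation time, so the prompt is effectively a reusable function.
    prompt = "Rewrite the following text in a {{$style}} tone:\n{{$text}}"
    result = await kernel.invoke_prompt(
        prompt=prompt,
        arguments=KernelArguments(style="concise", text="Our meeting moved to 3 PM."),
    )
    print(result)


asyncio.run(main())
```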

You can build sophisticated and complex agents without having to be an AI expert by using the Semantic Kernel SDK! The fundamental building blocks for agents in Semantic Kernel are plugins, planners, and personas.

Fundamental components

Let’s dive into each one of them and understand what each one means.

  • Plugins enhance your agent’s functionality by allowing you to incorporate additional code. This enables the integration of new functions into plugins, written in native programming languages such as C# or Python. Plugins can also interact with LLMs through prompts or connect to external services via REST API calls. As an example, consider a plugin for a calendar application’s virtual assistant that allows it to schedule appointments, remind you of upcoming events, or cancel meetings (see the sketch after this list for a minimal calendar plugin). If you have used ChatGPT, you may already be familiar with the concept of plugins, as they are integrated into it (for example, Code Interpreter or the Bing search plugin).
  • Planners: To use a plugin effectively and chain it with subsequent actions, the system must first design a plan, and this is where planners come in. Planners are sophisticated instructions that enable an agent to formulate a strategy for accomplishing a given task, often encapsulated in a simple prompt that guides the agent through function calling to achieve the objective.
  • As an example, take the development of a MeetingEventPlanner. This planner would guide the agent through the detailed process of organizing a meeting. It includes steps such as reviewing the availability of attendees’ calendars, sending out confirmation emails, drafting an agenda, and, finally, scheduling the meeting. Each step is carefully outlined to ensure the agent comprehensively addresses all the necessary actions for successful meeting preparation.
  • Personas: Personas are sets of instructions that shape the behavior of agents by imbuing them with distinct personalities. Often referred to as “meta prompts,” these guidelines endow agents with characters that can range from friendly and professional to humorous, and so forth. Additionally, they direct agents on the type of response to generate, which can vary from verbose to concise. We have explored meta prompts in great detail in Chapter 5; this concept is closely related.
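The sketch below ties the plugin and persona ideas together: a hypothetical CalendarPlugin exposes a native function to the kernel, and a short meta prompt acts as the persona. As before, it assumes the Python SDK (~1.x); the class, function, and argument names are illustrative assumptions rather than part of the SDK.

```python
# Sketch: a native plugin plus a simple persona (meta prompt).
# Assumes the semantic-kernel Python package (~1.x); CalendarPlugin and its
# function names are hypothetical and only illustrate the concepts.
import asyncio
from typing import Annotated

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import KernelArguments, kernel_function


class CalendarPlugin:
    """Native code the agent can call alongside the LLM."""

    @kernel_function(
        name="schedule_meeting",
        description="Schedule a meeting with a title and start time.",
    )
    def schedule_meeting(
        self,
        title: Annotated[str, "Meeting title"],
        time: Annotated[str, "ISO-8601 start time"],
    ) -> str:
        # A real plugin would call a calendar REST API here.
        return f"Scheduled '{title}' at {time}."


async def main():
    kernel = Kernel()
    kernel.add_service(
        OpenAIChatCompletion(
            service_id="chat",
            ai_model_id="gpt-4o-mini",  # placeholder
            api_key="YOUR_OPENAI_KEY",  # placeholder
        )
    )
    kernel.add_plugin(CalendarPlugin(), plugin_name="calendar")

    # Native plugin functions can be invoked through the kernel directly...
    booked = await kernel.invoke(
        plugin_name="calendar",
        function_name="schedule_meeting",
        arguments=KernelArguments(title="Design sync", time="2025-01-15T10:00"),
    )
    print(booked)

    # ...while a persona is simply a meta prompt that shapes tone and verbosity.
    persona = (
        "You are a friendly, concise scheduling assistant. "
        "Confirm each action in one short sentence."
    )
    reply = await kernel.invoke_prompt(
        prompt=f"{persona}\nConfirm that the design sync is booked for tomorrow at 10 AM."
    )
    print(reply)


asyncio.run(main())
```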
