Semantic Kernel: An Open-Source SDK for Building AI Agents with Microsoft’s Copilot Stack
In the world of large language models (LLMs), the term “stochastic parrots” is often used to suggest that they are unreliable and prone to “hallucinations.” When grounded in your own data through retrieval-augmented generation (RAG), however, these models become more trustworthy. This is where Microsoft’s Semantic Kernel comes into play: an open-source SDK for building agents that can call your existing code.
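To make that concrete, here is a minimal sketch of wiring a kernel to an LLM and sending it a prompt. It assumes the 1.x semantic-kernel Python package and an OPENAI_API_KEY in the environment; the model name and exact constructor arguments are placeholders and may differ between SDK versions.

```python
# A minimal sketch, assuming the 1.x semantic-kernel Python package and an
# OPENAI_API_KEY in the environment. Model name and constructor arguments
# are placeholders and may differ between SDK versions.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion


async def main() -> None:
    kernel = Kernel()
    # Register an LLM connector with the kernel; the API key is read from the environment.
    kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4o-mini"))

    # Send a natural-language prompt through the kernel and print the model's reply.
    answer = await kernel.invoke_prompt(prompt="Explain retrieval-augmented generation in one sentence.")
    print(answer)


if __name__ == "__main__":
    asyncio.run(main())
```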
Semantic Kernel acts as the glue that ties together LLMs, data, and code, creating a more reliable system that is less likely to go off the rails. With its planners, you can have an LLM combine your prompt templates and functions into a multi-step plan for a goal, rather than scripting every step yourself.
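As a rough illustration of the planner idea, the sketch below uses the FunctionCallingStepwisePlanner that shipped with early 1.x releases of the Python SDK (later releases lean on automatic function calling instead), so treat the class names and signatures as version-dependent assumptions rather than a fixed API.

```python
# A sketch of plan generation, assuming the FunctionCallingStepwisePlanner from
# early 1.x releases of the Python SDK; names and signatures may differ in your version.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.core_plugins import TimePlugin
from semantic_kernel.planners import FunctionCallingStepwisePlanner


async def main() -> None:
    kernel = Kernel()
    kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4o-mini"))
    # Give the planner something to call: a built-in plugin with date/time functions.
    kernel.add_plugin(TimePlugin(), plugin_name="time")

    # The planner asks the LLM to break the goal into steps that call registered plugins.
    planner = FunctionCallingStepwisePlanner(service_id="chat")
    result = await planner.invoke(kernel, "What is today's date, written out in words?")
    print(result.final_answer)


if __name__ == "__main__":
    asyncio.run(main())
```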
Semantic Kernel currently supports C#, Python, and Java, though not every feature is available in every language, and the documentation and examples can lag behind for some of them. The GitHub repository, however, provides helpful samples to get you started.
One of Semantic Kernel’s key features is its use of plugins, connectors, and planners to orchestrate AI applications. By defining functions inside plugins and referencing them from natural-language prompts, you can pass information back and forth between model output and your own code in a single flow, as in the sketch below.
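Here is a hedged sketch of that flow in Python: a plain class is registered as a plugin, and its function is invoked both directly and from a prompt template. The WeatherPlugin class, its get_temperature function, and the prompt text are illustrative and not part of the SDK; the decorator, method names, and template syntax assume the 1.x Python package.

```python
# A sketch of exposing existing code as a plugin, assuming the 1.x Python SDK.
# WeatherPlugin, get_temperature, and the prompt text are illustrative, not part of the SDK.
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import KernelArguments, kernel_function


class WeatherPlugin:
    """Plain Python code wrapped so the kernel (and the LLM) can call it."""

    @kernel_function(name="get_temperature", description="Get the temperature in Celsius for a city.")
    def get_temperature(self, city: str) -> str:
        # A real plugin would call your existing service or database here.
        return f"21 degrees Celsius in {city}"


async def main() -> None:
    kernel = Kernel()
    kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4o-mini"))
    kernel.add_plugin(WeatherPlugin(), plugin_name="weather")

    # Call the native function directly through the kernel...
    direct = await kernel.invoke(
        plugin_name="weather",
        function_name="get_temperature",
        arguments=KernelArguments(city="Oslo"),
    )
    print(direct)

    # ...or reference it from a natural-language prompt template, so the model
    # sees the function's output as part of its input.
    prompt = "The sensor reports: {{weather.get_temperature city=$city}}. Rewrite this as a friendly forecast."
    reply = await kernel.invoke_prompt(prompt=prompt, arguments=KernelArguments(city="Oslo"))
    print(reply)


if __name__ == "__main__":
    asyncio.run(main())
```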
Semantic Kernel does have drawbacks: planners spend extra tokens and can add noticeable latency, because building and executing a plan requires additional LLM calls. Even so, it is a powerful tool for building AI agents that interact with your existing code, and worth exploring whether you’re a developer looking to enhance your AI applications or simply curious about what LLMs can do.
Overall, Semantic Kernel presents a promising opportunity to delve into the world of AI orchestration and create more reliable and efficient AI applications. With its open-source nature and support for multiple programming languages, it is a valuable tool for developers looking to harness the power of large language models in their projects.