In this tutorial, we walk you through the seamless integration of AutoGen and Semantic Kernel with Google's Gemini Flash model. We begin by setting up our GeminiWrapper and SemanticKernelGeminiPlugin classes to bridge the generative power of Gemini with AutoGen's multi-agent orchestration. From there, we configure specialist agents, ranging from code reviewers to creative analysts, demonstrating how we can leverage AutoGen's ConversableAgent API alongside Semantic Kernel's decorated functions for text analysis, summarization, code review, and creative problem-solving. By combining AutoGen's robust agent framework with Semantic Kernel's function-driven approach, we create a sophisticated AI assistant that adapts to a wide range of tasks with structured, actionable insights.
!pip install pyautogen semantic-kernel google-generativeai python-dotenv
import os
import asyncio
from typing import Dict, Any, List
import autogen
import google.generativeai as genai
from semantic_kernel import Kernel
from semantic_kernel import KernelArguments
from semantic_kernel.functions.kernel_function_decorator import kernel_function
We start by installing the core dependencies: pyautogen, semantic-kernel, google-generativeai, and python-dotenv, ensuring we have all the necessary libraries for our multi-agent and semantic-function setup. Then we import the essential Python modules (os, asyncio, typing) along with autogen for agent orchestration, genai for Gemini API access, and the Semantic Kernel classes and decorators to define our AI functions.
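Although python-dotenv is installed above, the tutorial never actually uses it. A hedged sketch of how you could load the key from the environment instead of hard-coding it (the `.env` workflow and the fallback logic are our assumptions, not part of the original code):

```python
import os

# Sketch: prefer an environment variable over a hard-coded key.
# python-dotenv, if present, simply populates os.environ from a .env
# file (containing a line like GEMINI_API_KEY=...) before the lookup.
try:
    from dotenv import load_dotenv  # optional dependency
    load_dotenv()
except ImportError:
    pass

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY", "")
if not GEMINI_API_KEY:
    print("GEMINI_API_KEY not set; add it to your environment or .env file")
```

This keeps the notebook shareable without leaking credentials.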
GEMINI_API_KEY = "Use Your API Key Here"
genai.configure(api_key=GEMINI_API_KEY)
config_list = [
    {
        "model": "gemini-1.5-flash",
        "api_key": GEMINI_API_KEY,
        "api_type": "google",
        "api_base": "https://generativelanguage.googleapis.com/v1beta",
    }
]
We define our GEMINI_API_KEY placeholder and immediately configure the genai client so all subsequent Gemini calls are authenticated. Then we build a config_list containing the Gemini Flash model settings (model name, API key, endpoint type, and base URL), which we hand off to our agents for LLM interactions.
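Since a malformed config_list entry only surfaces later as a confusing agent error, a quick sanity check before wiring up the agents can save debugging time. This helper is our own sketch, not part of AutoGen:

```python
# Minimal sanity check (our own helper, not an AutoGen API): verify each
# config entry carries the fields the agents will need.
REQUIRED_KEYS = {"model", "api_key"}

def validate_config_list(config_list):
    """Return (index, missing_keys) pairs for malformed entries."""
    problems = []
    for i, entry in enumerate(config_list):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            problems.append((i, sorted(missing)))
    return problems

sample = [{"model": "gemini-1.5-flash", "api_key": "..."}]
print(validate_config_list(sample))  # [] when every entry is complete
```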
class GeminiWrapper:
    """Wrapper for the Gemini API to work with AutoGen"""

    def __init__(self, model_name="gemini-1.5-flash"):
        self.model = genai.GenerativeModel(model_name)

    def generate_response(self, prompt: str, temperature: float = 0.7) -> str:
        """Generate a response using Gemini"""
        try:
            response = self.model.generate_content(
                prompt,
                generation_config=genai.types.GenerationConfig(
                    temperature=temperature,
                    max_output_tokens=2048,
                )
            )
            return response.text
        except Exception as e:
            return f"Gemini API Error: {str(e)}"
We encapsulate all Gemini Flash interactions in a GeminiWrapper class, where we initialize a GenerativeModel for our chosen model and expose a simple generate_response method. In this method, we pass the prompt and temperature into Gemini's generate_content API (capped at 2048 output tokens) and return the raw text or a formatted error.
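Because transient API failures are common in notebook workflows, you may want to layer retries on top of generate_response. A hedged sketch of such a wrapper; the attempt count and backoff schedule are our choices, not from the original tutorial:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Wrap a generation callable so transient failures are retried
    with simple exponential backoff (base_delay * 1, 2, 4, ...)."""
    def wrapped(*args, **kwargs):
        last_error = None
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as e:
                last_error = e
                time.sleep(base_delay * (2 ** attempt))
        return f"Gemini API Error after {attempts} attempts: {last_error}"
    return wrapped

# Usage sketch: safe_generate = with_retries(GeminiWrapper().generate_response)
```

Note that generate_response already catches exceptions internally, so in practice you would either let it raise or retry on the error-string return value.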
class SemanticKernelGeminiPlugin:
    """Semantic Kernel plugin using Gemini Flash for advanced AI operations"""

    def __init__(self):
        self.kernel = Kernel()
        self.gemini = GeminiWrapper()

    @kernel_function(name="analyze_text", description="Analyze text for sentiment and key insights")
    def analyze_text(self, text: str) -> str:
        """Analyze text using Gemini Flash"""
        prompt = f"""
        Analyze the following text comprehensively:

        Text: {text}

        Provide analysis in this format:
        - Sentiment: [positive/negative/neutral with confidence]
        - Key Themes: [main topics and concepts]
        - Insights: [important observations and patterns]
        - Recommendations: [actionable next steps]
        - Tone: [formal/informal/technical/emotional]
        """
        return self.gemini.generate_response(prompt, temperature=0.3)

    @kernel_function(name="generate_summary", description="Generate a comprehensive summary")
    def generate_summary(self, content: str) -> str:
        """Generate a summary using Gemini's advanced capabilities"""
        prompt = f"""
        Create a comprehensive summary of the following content:

        Content: {content}

        Provide:
        1. Executive Summary (2-3 sentences)
        2. Key Points (bullet format)
        3. Important Details
        4. Conclusion/Implications
        """
        return self.gemini.generate_response(prompt, temperature=0.4)

    @kernel_function(name="code_analysis", description="Analyze code for quality and improvements")
    def code_analysis(self, code: str) -> str:
        """Analyze code using Gemini's code understanding"""
        prompt = f"""
        Analyze this code comprehensively:

        ```
        {code}
        ```

        Provide analysis covering:
        - Code Quality: [readability, structure, best practices]
        - Performance: [efficiency, optimization opportunities]
        - Security: [potential vulnerabilities, security best practices]
        - Maintainability: [documentation, modularity, extensibility]
        - Suggestions: [specific improvements with examples]
        """
        return self.gemini.generate_response(prompt, temperature=0.2)

    @kernel_function(name="creative_solution", description="Generate creative solutions to problems")
    def creative_solution(self, problem: str) -> str:
        """Generate creative solutions using Gemini's creative capabilities"""
        prompt = f"""
        Problem: {problem}

        Generate creative solutions:
        1. Conventional Approaches (2-3 standard solutions)
        2. Innovative Ideas (3-4 creative solutions)
        3. Hybrid Solutions (combining different approaches)
        4. Implementation Strategy (practical steps)
        5. Potential Challenges and Mitigation
        """
        return self.gemini.generate_response(prompt, temperature=0.8)
We encapsulate our Semantic Kernel logic in the SemanticKernelGeminiPlugin, where we initialize both the Kernel and our GeminiWrapper to power custom AI functions. Using the @kernel_function decorator, we declare methods such as analyze_text, generate_summary, code_analysis, and creative_solution, each of which constructs a structured prompt and delegates the heavy lifting to Gemini Flash. This plugin lets us seamlessly register and invoke advanced AI operations within our Semantic Kernel environment.
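The four kernel functions share one pattern: a task-specific prompt template paired with a task-specific temperature (low for deterministic analysis, high for creative work). That pattern can be captured as a dispatch table; the table below is our own abstraction for illustration, not a Semantic Kernel API:

```python
# Our own abstraction of the plugin's shared pattern. The temperatures
# mirror the plugin methods: 0.3 text, 0.4 summary, 0.2 code, 0.8 creative.
ANALYSIS_PROMPTS = {
    "text":     ("Analyze the following text comprehensively:\n{payload}", 0.3),
    "summary":  ("Create a comprehensive summary of:\n{payload}", 0.4),
    "code":     ("Analyze this code comprehensively:\n```\n{payload}\n```", 0.2),
    "creative": ("Problem: {payload}\nGenerate creative solutions.", 0.8),
}

def build_prompt(kind: str, payload: str):
    """Return a (prompt, temperature) pair for a given analysis kind."""
    template, temperature = ANALYSIS_PROMPTS[kind]
    return template.format(payload=payload), temperature
```

Centralizing templates this way makes it easy to add a new analysis kind without writing another near-identical method.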
class AdvancedGeminiAgent:
    """Advanced AI agent using Gemini Flash with AutoGen and Semantic Kernel"""

    def __init__(self):
        self.sk_plugin = SemanticKernelGeminiPlugin()
        self.gemini = GeminiWrapper()
        self.setup_agents()

    def setup_agents(self):
        """Initialize AutoGen agents with Gemini Flash"""
        gemini_config = {
            "config_list": [{"model": "gemini-1.5-flash", "api_key": GEMINI_API_KEY}],
            "temperature": 0.7,
        }

        self.assistant = autogen.ConversableAgent(
            name="GeminiAssistant",
            llm_config=gemini_config,
            system_message="""You are an advanced AI assistant powered by Gemini Flash with Semantic Kernel capabilities.
            You excel at analysis, problem-solving, and creative thinking. Always provide comprehensive, actionable insights.
            Use structured responses and consider multiple perspectives.""",
            human_input_mode="NEVER",
        )

        self.code_reviewer = autogen.ConversableAgent(
            name="GeminiCodeReviewer",
            llm_config={**gemini_config, "temperature": 0.3},
            system_message="""You are a senior code reviewer powered by Gemini Flash.
            Analyze code for best practices, security, performance, and maintainability.
            Provide specific, actionable feedback with examples.""",
            human_input_mode="NEVER",
        )

        self.creative_analyst = autogen.ConversableAgent(
            name="GeminiCreativeAnalyst",
            llm_config={**gemini_config, "temperature": 0.8},
            system_message="""You are a creative problem solver and innovation expert powered by Gemini Flash.
            Generate innovative solutions and offer fresh perspectives.
            Balance creativity with practicality.""",
            human_input_mode="NEVER",
        )

        self.data_specialist = autogen.ConversableAgent(
            name="GeminiDataSpecialist",
            llm_config={**gemini_config, "temperature": 0.4},
            system_message="""You are a data analysis expert powered by Gemini Flash.
            Provide evidence-based recommendations and statistical perspectives.""",
            human_input_mode="NEVER",
        )

        self.user_proxy = autogen.ConversableAgent(
            name="UserProxy",
            human_input_mode="NEVER",
            max_consecutive_auto_reply=2,
            is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
            llm_config=False,
        )
    def analyze_with_semantic_kernel(self, content: str, analysis_type: str) -> str:
        """Bridge function between AutoGen and Semantic Kernel with Gemini"""
        try:
            if analysis_type == "text":
                return self.sk_plugin.analyze_text(content)
            elif analysis_type == "code":
                return self.sk_plugin.code_analysis(content)
            elif analysis_type == "summary":
                return self.sk_plugin.generate_summary(content)
            elif analysis_type == "creative":
                return self.sk_plugin.creative_solution(content)
            else:
                return "Invalid analysis type. Use 'text', 'code', 'summary', or 'creative'."
        except Exception as e:
            return f"Semantic Kernel Analysis Error: {str(e)}"
    def multi_agent_collaboration(self, task: str) -> Dict[str, str]:
        """Orchestrate multi-agent collaboration using Gemini"""
        results = {}
        agents = {
            "assistant": (self.assistant, "comprehensive analysis"),
            "code_reviewer": (self.code_reviewer, "code review perspective"),
            "creative_analyst": (self.creative_analyst, "creative solutions"),
            "data_specialist": (self.data_specialist, "data-driven insights"),
        }
        for agent_name, (agent, perspective) in agents.items():
            try:
                prompt = f"Task: {task}\n\nProvide your {perspective} on this task."
                response = agent.generate_reply([{"role": "user", "content": prompt}])
                results[agent_name] = response if isinstance(response, str) else str(response)
            except Exception as e:
                results[agent_name] = f"Agent {agent_name} error: {str(e)}"
        return results
    def run_comprehensive_analysis(self, query: str) -> Dict[str, Any]:
        """Run a comprehensive analysis using all Gemini-powered capabilities"""
        results = {}
        analyses = ["text", "summary", "creative"]
        for analysis_type in analyses:
            try:
                results[f"sk_{analysis_type}"] = self.analyze_with_semantic_kernel(query, analysis_type)
            except Exception as e:
                results[f"sk_{analysis_type}"] = f"Error: {str(e)}"
        try:
            results["multi_agent"] = self.multi_agent_collaboration(query)
        except Exception as e:
            results["multi_agent"] = f"Multi-agent error: {str(e)}"
        try:
            results["direct_gemini"] = self.gemini.generate_response(
                f"Provide a comprehensive analysis of: {query}", temperature=0.6
            )
        except Exception as e:
            results["direct_gemini"] = f"Direct Gemini error: {str(e)}"
        return results
We add our end-to-end AI orchestration in the AdvancedGeminiAgent class, where we initialize our Semantic Kernel plugin and Gemini wrapper and configure a suite of specialist AutoGen agents (assistant, code reviewer, creative analyst, data specialist, and user proxy). With simple methods for Semantic Kernel bridging, multi-agent collaboration, and direct Gemini calls, we enable a seamless, comprehensive analysis pipeline for any user query.
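Since run_comprehensive_analysis returns a dict that nests another dict under "multi_agent", downstream display code must handle both shapes. A hedged sketch of flattening that structure into a readable report; the helper name, layout, and truncation width are our own choices:

```python
def format_report(results: dict, width: int = 300) -> str:
    """Render the nested results dict from run_comprehensive_analysis
    as a plain-text report, truncating long responses for readability."""
    lines = []
    for key, value in results.items():
        lines.append(f"== {key.upper().replace('_', ' ')} ==")
        if isinstance(value, dict):  # the multi_agent entry nests per-agent replies
            for agent_name, response in value.items():
                lines.append(f"  [{agent_name}] {str(response)[:width]}")
        else:
            lines.append(f"  {str(value)[:width]}")
    return "\n".join(lines)

print(format_report({"direct_gemini": "fine", "multi_agent": {"assistant": "ok"}}))
```

This mirrors the branching the main loop performs when printing, but keeps the formatting reusable outside of main.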
def main():
    """Main execution function for Google Colab with Gemini Flash"""
    print("🚀 Initializing Advanced Gemini Flash AI Agent...")
    print("⚡ Using Gemini 1.5 Flash for high-speed, cost-effective AI processing")

    try:
        agent = AdvancedGeminiAgent()
        print("✅ Agent initialized successfully!")
    except Exception as e:
        print(f"❌ Initialization error: {str(e)}")
        print("💡 Make sure to set your Gemini API key!")
        return

    demo_queries = [
        "How can AI transform education in developing countries?",
        "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)",
        "What are the most promising renewable energy technologies for 2025?"
    ]

    print("\n🔍 Running Gemini Flash Powered Analysis...")
    for i, query in enumerate(demo_queries, 1):
        print(f"\n{'='*60}")
        print(f"🎯 Demo {i}: {query}")
        print('='*60)
        try:
            results = agent.run_comprehensive_analysis(query)
            for key, value in results.items():
                if key == "multi_agent" and isinstance(value, dict):
                    print(f"\n🤖 {key.upper().replace('_', ' ')}:")
                    for agent_name, response in value.items():
                        print(f"  👤 {agent_name}: {str(response)[:200]}...")
                else:
                    print(f"\n📊 {key.upper().replace('_', ' ')}:")
                    print(f"  {str(value)[:300]}...")
        except Exception as e:
            print(f"❌ Error in demo {i}: {str(e)}")

    print(f"\n{'='*60}")
    print("🎉 Gemini Flash AI Agent Demo Completed!")
    print("💡 To use your own API key, replace the GEMINI_API_KEY placeholder")
    print("🔗 Get your free Gemini API key at: https://makersuite.google.com/app/apikey")

if __name__ == "__main__":
    main()
Finally, we run the main function, which initializes the AdvancedGeminiAgent, prints status messages, and iterates through a set of demo queries. As we run each query, we collect and display results from the Semantic Kernel analyses, multi-agent collaboration, and direct Gemini responses, providing a clear, step-by-step showcase of our multi-agent AI workflow.
In conclusion, we showcased how AutoGen and Semantic Kernel complement each other to produce a versatile, multi-agent AI system powered by Gemini Flash. We highlighted how AutoGen simplifies the orchestration of diverse expert agents, while Semantic Kernel provides a clean, declarative layer for defining and invoking advanced AI functions. By uniting these tools in a Colab notebook, we've enabled rapid experimentation and prototyping of complex AI workflows without sacrificing clarity or control.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among readers.