AimactGrow
A Coding Implementation to Build a Hierarchical Planner AI Agent Using Open-Source LLMs with Tool Execution and Structured Multi-Agent Reasoning

By Admin
February 28, 2026
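The code below calls several pieces defined earlier in the full article and not shown in this excerpt: the `StepResult` record, the `run_python` helper, the `llm_chat` wrapper, `planner_agent`, and the `EXECUTOR_SYSTEM`/`AGGREGATOR_SYSTEM` prompts. As a rough sketch only, here is one way the first two could look, assuming `StepResult` is a plain dataclass and `run_python` captures stdout and tracebacks from `exec`:

```python
# Sketch of scaffolding assumed by the agents below; the article's actual
# definitions are not included in this excerpt.
import contextlib
import io
import traceback
from dataclasses import dataclass
from typing import Any, Dict


@dataclass
class StepResult:
    """Result of one executed plan step."""
    step_id: int
    title: str
    tool: str
    output: str


def run_python(code: str) -> Dict[str, Any]:
    """Run generated code with exec(), capturing stdout and any traceback.

    Note: plain exec() is NOT a sandbox; acceptable for a Colab demo only.
    """
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {"__name__": "__exec__"})
        return {"ok": True, "stdout": buf.getvalue(), "error": ""}
    except Exception:
        return {"ok": False, "stdout": buf.getvalue(), "error": traceback.format_exc()}
```

`llm_chat(system, user=..., max_new_tokens=..., temperature=...)` is assumed to wrap a local open-weight chat model (for example via `transformers`) and return the generated text as a string.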


def executor_agent(step: Dict[str, Any], context: Dict[str, Any]) -> StepResult:
    step_id = int(step.get("id", 0))
    title = step.get("title", f"Step {step_id}")
    tool = step.get("tool", "llm")

    # Compact view of the shared context; truncate prior outputs to keep the prompt small.
    ctx_compact = {
        "purpose": context.get("purpose"),
        "assumptions": context.get("assumptions", []),
        "prior_results": [
            {"step_id": r.step_id, "title": r.title, "tool": r.tool, "output": r.output[:1500]}
            for r in context.get("outcomes", [])
        ],
    }

    if tool == "python":
        code = llm_chat(
            EXECUTOR_SYSTEM,
            user=(
                f"Step:\n{json.dumps(step, indent=2)}\n\n"
                f"Context:\n{json.dumps(ctx_compact, indent=2)}\n\n"
                f"Write Python code that completes the step. Output ONLY code."
            ),
            max_new_tokens=700,
            temperature=0.2,
        )
        py = run_python(code)
        out = []
        out.append("PYTHON_CODE:\n" + code)
        out.append("\nEXECUTION_OK: " + str(py["ok"]))
        if py["stdout"]:
            out.append("\nSTDOUT:\n" + py["stdout"])
        if py["error"]:
            out.append("\nERROR:\n" + py["error"])
        return StepResult(step_id=step_id, title=title, tool=tool, output="\n".join(out))

    # Default: pure-LLM step with no code execution.
    result_text = llm_chat(
        EXECUTOR_SYSTEM,
        user=(
            f"Step:\n{json.dumps(step, indent=2)}\n\n"
            f"Context:\n{json.dumps(ctx_compact, indent=2)}\n\n"
            f"Return the step result."
        ),
        max_new_tokens=700,
        temperature=0.3,
    )
    return StepResult(step_id=step_id, title=title, tool=tool, output=result_text)




def aggregator_agent(task: str, plan: Dict[str, Any], results: List[StepResult]) -> str:
    payload = {
        "task": task,
        "plan": plan,
        "results": [
            {"step_id": r.step_id, "title": r.title, "tool": r.tool, "output": r.output[:2500]}
            for r in results
        ],
    }
    return llm_chat(
        AGGREGATOR_SYSTEM,
        user=f"Combine everything into the final answer.\n\nINPUT:\n{json.dumps(payload, indent=2)}",
        max_new_tokens=900,
        temperature=0.2,
    )




def run_hierarchical_agent(task: str, verbose: bool = True) -> Dict[str, Any]:
    plan = planner_agent(task)

    if verbose:
        print("\n====================")
        print("PLAN (from Planner)")
        print("====================")
        print(json.dumps(plan, indent=2))

    context = {
        "purpose": plan.get("purpose", task),
        "assumptions": plan.get("assumptions", []),
        "outcomes": [],
    }

    outcomes: List[StepResult] = []
    for step in plan.get("steps", []):
        res = executor_agent(step, context)
        outcomes.append(res)
        context["outcomes"].append(res)  # make this step's result visible to later steps

        if verbose:
            print("\n--------------------")
            print(f"STEP {res.step_id}: {res.title}  [tool={res.tool}]")
            print("--------------------")
            print(res.output)

    final = aggregator_agent(task, plan, outcomes)
    if verbose:
        print("\n====================")
        print("FINAL (from Aggregator)")
        print("====================")
        print(final)

    return {"task": task, "plan": plan, "outcomes": outcomes, "final": final}




demo_task = """
Create a practical checklist to launch a small multi-agent system in Python for coordinating logistics:
- One planner agent that decomposes tasks
- Two executor agents (routing + inventory)
- A simple memory store for past decisions
Keep it lightweight and runnable in Colab.
"""


_ = run_hierarchical_agent(demo_task, verbose=True)


print("\n\nType your own task (or press Enter to skip):")
user_task = input().strip()
if user_task:
    _ = run_hierarchical_agent(user_task, verbose=True)
Β© 2025 https://blog.aimactgrow.com/ - All Rights Reserved