
# CrewAI Multi-Agent Framework Cheat Sheet

## Overview

CrewAI is a multi-agent orchestration framework that transforms how developers build and deploy AI-powered applications. Created by João Moura, this Python-based framework enables multiple AI agents to work together as a cohesive unit, each taking on a distinct role and sharing responsibilities to accomplish complex tasks that would be difficult for a single agent to handle alone.

What sets CrewAI apart is its ability to orchestrate sophisticated multi-agent systems in which agents can autonomously hand tasks off to one another, solve problems, and use specialized tools and capabilities. The framework offers both high-level simplicity for rapid development and precise low-level control for complex scenarios, making it well suited for building autonomous AI agents tailored to any business or technical requirement.

CrewAI addresses the growing need for AI systems that can handle diverse challenges by breaking them down into manageable components, assigning specialized agents to each component, and coordinating their efforts to achieve better results than traditional single-agent approaches.

## Core Concepts

### Agents

Agents are the fundamental building blocks of CrewAI systems. Each agent is configured with a specific role, goal, and set of capabilities, and operates as an autonomous entity that can plan, make decisions, and execute tasks within its area of expertise.

### Crews

A crew is a collection of agents that collaborate toward a shared goal. Crews define the structure and workflow of multi-agent collaboration by specifying how agents interact, delegate tasks, and share information.

### Tasks

Tasks represent specific objectives or activities that need to be completed. They can be assigned to individual agents or distributed across multiple agents within a crew, depending on their complexity and requirements.

### Tools

Tools extend agent capabilities by providing access to external services, APIs, databases, or specialized functions. Agents use tools to perform actions beyond their core language-model capabilities.
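
To make these concepts concrete before diving into the detailed sections, here is a minimal sketch that wires one agent, one task, and one crew together. It assumes an LLM API key is configured as shown in the setup section below; the `{topic}` placeholder in the task description is filled in via `kickoff(inputs=...)`, and tools could be attached to the agent via its `tools=[...]` parameter.

```python
from crewai import Agent, Task, Crew, Process

# Agent: a single, focused role
summarizer = Agent(
    role='Summarizer',
    goal='Summarize the given topic clearly and concisely',
    backstory='An experienced editor who distills information into short summaries.',
    verbose=True
)

# Task: one concrete objective assigned to that agent
summary_task = Task(
    description='Summarize the topic "{topic}" in three bullet points.',
    expected_output='Three concise bullet points',
    agent=summarizer
)

# Crew: wires agents and tasks into a workflow
crew = Crew(
    agents=[summarizer],
    tasks=[summary_task],
    process=Process.sequential
)

result = crew.kickoff(inputs={'topic': 'multi-agent orchestration'})
print(result)
```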

## Installation and Setup

### Basic Installation

```bash
# Install CrewAI using pip
pip install crewai

# Install with additional tools
pip install 'crewai[tools]'

# Install development version
pip install git+https://github.com/crewAIInc/crewAI.git
```

### Environment Setup
```python
import os
from crewai import Agent, Task, Crew, Process

# Set up API keys for LLM providers
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
# or
os.environ["ANTHROPIC_API_KEY"] = "your-anthropic-api-key"
# or other supported providers
```

### Project Structure

```
my_crew_project/
├── agents/
│   ├── __init__.py
│   ├── researcher.py
│   └── writer.py
├── tasks/
│   ├── __init__.py
│   ├── research_tasks.py
│   └── writing_tasks.py
├── tools/
│   ├── __init__.py
│   └── custom_tools.py
├── crews/
│   ├── __init__.py
│   └── content_crew.py
└── main.py
```

## Agent Configuration

### Basic Agent Creation

```python
from crewai import Agent

# Create a basic agent
researcher = Agent(
    role='Research Specialist',
    goal='Conduct thorough research on given topics',
    backstory="""You are an experienced researcher with expertise in gathering,
    analyzing, and synthesizing information from multiple sources. You have a
    keen eye for detail and can identify reliable sources.""",
    verbose=True,
    allow_delegation=False
)
```

### Advanced Agent Configuration

```python
from crewai import Agent
from crewai_tools import SerperDevTool, WebsiteSearchTool

# Create an agent with tools and advanced settings
research_agent = Agent(
    role='Senior Research Analyst',
    goal='Provide comprehensive analysis and insights on market trends',
    backstory="""You are a senior research analyst with 10+ years of experience
    in market research and competitive analysis. You excel at identifying
    patterns, trends, and actionable insights from complex data sets.""",
    tools=[SerperDevTool(), WebsiteSearchTool()],
    verbose=True,
    allow_delegation=True,
    max_iter=5,
    memory=True,
    step_callback=lambda step: print(f"Agent step: {step}"),
    system_template="""You are {role}. {backstory}
    Your goal is: {goal}
    Always provide detailed analysis with supporting evidence."""
)
```

### Agent with a Custom LLM

```python
from langchain.llms import OpenAI
from crewai import Agent

# Use custom LLM configuration
custom_llm = OpenAI(temperature=0.7, model_name="gpt-4")

analyst = Agent(
    role='Data Analyst',
    goal='Analyze data and provide statistical insights',
    backstory='Expert in statistical analysis and data interpretation',
    llm=custom_llm,
    verbose=True
)
```

### Multimodal Agent

```python
from crewai import Agent

# Agent with multimodal capabilities
visual_analyst = Agent(
    role='Visual Content Analyst',
    goal='Analyze images and visual content for insights',
    backstory='Specialist in visual content analysis and interpretation',
    multimodal=True,  # Enable multimodal capabilities
    tools=[image_analysis_tool],
    verbose=True
)
```

## Task Definition and Management

### Basic Task Creation

```python
from crewai import Task

# Define a simple task
research_task = Task(
    description="""Conduct comprehensive research on artificial intelligence
    trends in 2024. Focus on:
    1. Emerging AI technologies
    2. Market adoption rates
    3. Key industry players
    4. Future predictions

    Provide a detailed report with sources and citations.""",
    agent=researcher,
    expected_output="A comprehensive research report with citations"
)
```

### Advanced Task Configuration

```python
from crewai import Task

# Task with dependencies and callbacks
analysis_task = Task(
    description="""Analyze the research findings and create strategic
    recommendations for AI adoption in enterprise environments.""",
    agent=analyst,
    expected_output="Strategic recommendations document with actionable insights",
    context=[research_task],  # Depends on research_task completion
    callback=lambda output: save_to_database(output),
    async_execution=False,
    output_file="analysis_report.md"
)
```

### Task with Custom Output Parsing

```python
from crewai import Task
from pydantic import BaseModel
from typing import List

class ResearchOutput(BaseModel):
    title: str
    summary: str
    key_findings: List[str]
    sources: List[str]
    confidence_score: float

structured_task = Task(
    description="Research AI market trends and provide structured output",
    agent=researcher,
    expected_output="Structured research findings",
    output_pydantic=ResearchOutput
)
```

### Conditional Task Execution

```python
from crewai import Task

def should_execute_task(context):
    # Custom logic to determine if task should execute
    return len(context.get('findings', [])) > 5

conditional_task = Task(
    description="Perform detailed analysis if sufficient data is available",
    agent=analyst,
    expected_output="Detailed analysis report",
    condition=should_execute_task
)
```

## Crew Orchestration

### Basic Crew Setup

```python
from crewai import Crew, Process

# Create a basic crew
content_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=2,
    process=Process.sequential
)

# Execute the crew
result = content_crew.kickoff()
print(result)
```

### Advanced Crew Configuration

```python
from crewai import Crew, Process
from crewai.memory import LongTermMemory

# Advanced crew with memory and custom settings
advanced_crew = Crew(
    agents=[researcher, analyst, writer, reviewer],
    tasks=[research_task, analysis_task, writing_task, review_task],
    process=Process.hierarchical,
    memory=LongTermMemory(),
    verbose=2,
    manager_llm=manager_llm,
    function_calling_llm=function_llm,
    max_rpm=10,
    share_crew=True,
    step_callback=crew_step_callback,
    task_callback=crew_task_callback
)
```

### Hierarchical Process

```python
from crewai import Crew, Process, Agent

# Manager agent for hierarchical process
manager = Agent(
    role='Project Manager',
    goal='Coordinate team activities and ensure quality deliverables',
    backstory='Experienced project manager with strong leadership skills',
    allow_delegation=True
)

hierarchical_crew = Crew(
    agents=[manager, researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.hierarchical,
    manager_agent=manager,
    verbose=2
)
```

### Parallel Task Execution

```python
from crewai import Crew, Process

# Crew with parallel task execution
parallel_crew = Crew(
    agents=[researcher1, researcher2, researcher3],
    tasks=[task1, task2, task3],
    process=Process.sequential,  # Overall sequential, but tasks can run in parallel
    max_execution_time=3600,  # 1 hour timeout
    verbose=2
)

# Execute with parallel capabilities
result = parallel_crew.kickoff(inputs={
    'topic': 'AI in Healthcare',
    'deadline': '2024-12-31'
})
```

## Tool Integration

### Built-in Tools

```python
from crewai_tools import (
    SerperDevTool,
    WebsiteSearchTool,
    FileReadTool,
    DirectoryReadTool,
    CodeDocsSearchTool,
    YoutubeVideoSearchTool
)

# Configure built-in tools
search_tool = SerperDevTool()
web_tool = WebsiteSearchTool()
file_tool = FileReadTool()
code_tool = CodeDocsSearchTool()

# Agent with multiple tools
multi_tool_agent = Agent(
    role='Research Assistant',
    goal='Gather information from multiple sources',
    backstory='Versatile researcher with access to various information sources',
    tools=[search_tool, web_tool, file_tool, code_tool],
    verbose=True
)
```

### Custom Tool Development

```python
from crewai_tools import BaseTool
from typing import Type
from pydantic import BaseModel, Field

class DatabaseQueryInput(BaseModel):
    query: str = Field(description="SQL query to execute")
    database: str = Field(description="Database name")

class DatabaseQueryTool(BaseTool):
    name: str = "Database Query Tool"
    description: str = "Execute SQL queries against specified databases"
    args_schema: Type[BaseModel] = DatabaseQueryInput

    def _run(self, query: str, database: str) -> str:
        # Implement database query logic
        try:
            # Connect to database and execute query
            result = execute_database_query(database, query)
            return f"Query executed successfully: {result}"
        except Exception as e:
            return f"Query failed: {str(e)}"

# Use custom tool
db_tool = DatabaseQueryTool()
database_agent = Agent(
    role='Database Analyst',
    goal='Query and analyze database information',
    backstory='Expert in database operations and SQL',
    tools=[db_tool],
    verbose=True
)
```

### API Integration Tool

```python
from crewai_tools import BaseTool
import requests

class APIIntegrationTool(BaseTool):
    name: str = "API Integration Tool"
    description: str = "Make HTTP requests to external APIs"

    def _run(self, endpoint: str, method: str = "GET", data: dict = None) -> str:
        try:
            if method.upper() == "GET":
                response = requests.get(endpoint)
            elif method.upper() == "POST":
                response = requests.post(endpoint, json=data)

            return response.json()
        except Exception as e:
            return f"API request failed: {str(e)}"

# Agent with API capabilities
api_agent = Agent(
    role='API Integration Specialist',
    goal='Interact with external services via APIs',
    backstory='Expert in API integration and data retrieval',
    tools=[APIIntegrationTool()],
    verbose=True
)
```

## Memory and Context Management

### Long-Term Memory

```python
from crewai.memory import LongTermMemory
from crewai import Crew

# Crew with persistent memory
memory_crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    memory=LongTermMemory(),
    verbose=2
)

# Memory persists across executions
result1 = memory_crew.kickoff(inputs={'topic': 'AI Ethics'})
result2 = memory_crew.kickoff(inputs={'topic': 'AI Regulation'})
```

### Context Sharing

```python
from crewai import Task, Agent

# Tasks that share context
context_task1 = Task(
    description="Research market trends",
    agent=researcher,
    expected_output="Market trend analysis"
)

context_task2 = Task(
    description="Analyze the market trends and provide recommendations",
    agent=analyst,
    expected_output="Strategic recommendations",
    context=[context_task1]  # Uses output from context_task1
)
```

### Custom Memory Implementation

```python
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory

class CustomMemory(LongTermMemory):
    def __init__(self, storage_path: str = "./custom_memory"):
        super().__init__(storage_path=storage_path)
        self.custom_entities = {}

    def save_entity(self, entity_name: str, entity_data: dict):
        self.custom_entities[entity_name] = entity_data
        # Implement custom storage logic

    def retrieve_entity(self, entity_name: str) -> dict:
        return self.custom_entities.get(entity_name, {})

# Use custom memory
custom_memory = CustomMemory("./project_memory")
crew_with_custom_memory = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    memory=custom_memory
)
```

## Advanced Features

### Agent Delegation

```python
from crewai import Agent

# Senior agent that can delegate
senior_researcher = Agent(
    role='Senior Research Director',
    goal='Oversee research projects and delegate tasks',
    backstory='Experienced research director with team management skills',
    allow_delegation=True,
    max_delegation=3,
    verbose=True
)

# Junior agents that can receive delegated tasks
junior_researcher1 = Agent(
    role='Junior Researcher - Technology',
    goal='Research technology trends and innovations',
    backstory='Specialized in technology research',
    allow_delegation=False
)

junior_researcher2 = Agent(
    role='Junior Researcher - Market Analysis',
    goal='Analyze market conditions and competitive landscape',
    backstory='Specialized in market research and analysis',
    allow_delegation=False
)

# Crew with delegation hierarchy
delegation_crew = Crew(
    agents=[senior_researcher, junior_researcher1, junior_researcher2],
    tasks=[complex_research_task],
    process=Process.hierarchical,
    verbose=2
)
```

### Reasoning and Planning

```python
from crewai import Agent

# Agent with enhanced reasoning capabilities
reasoning_agent = Agent(
    role='Strategic Planner',
    goal='Develop comprehensive strategies with detailed reasoning',
    backstory='Expert strategic planner with strong analytical skills',
    reasoning=True,  # Enable reasoning capabilities
    planning=True,   # Enable planning capabilities
    verbose=True,
    max_iter=10
)

# Task that requires complex reasoning
strategic_task = Task(
    description="""Develop a comprehensive 5-year strategic plan for AI adoption
    in the healthcare industry. Consider:
    1. Current market conditions
    2. Regulatory environment
    3. Technology readiness
    4. Competitive landscape
    5. Implementation challenges

    Provide detailed reasoning for each recommendation.""",
    agent=reasoning_agent,
    expected_output="Comprehensive strategic plan with detailed reasoning"
)
```

### Callbacks and Monitoring

```python
from crewai import Crew, Agent, Task

def agent_step_callback(agent_output):
    print(f"Agent {agent_output.agent} completed step: {agent_output.step}")
    # Log to monitoring system
    log_agent_activity(agent_output)

def task_completion_callback(task_output):
    print(f"Task completed: {task_output.description}")
    # Send notification or update dashboard
    notify_task_completion(task_output)

def crew_step_callback(crew_output):
    print(f"Crew step completed: {crew_output}")
    # Update progress tracking
    update_progress(crew_output)

# Crew with comprehensive monitoring
monitored_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    step_callback=crew_step_callback,
    task_callback=task_completion_callback,
    verbose=2
)

# Agents with individual monitoring
monitored_agent = Agent(
    role='Monitored Researcher',
    goal='Conduct research with detailed monitoring',
    backstory='Researcher with comprehensive activity tracking',
    step_callback=agent_step_callback,
    verbose=True
)
```

## Error Handling and Resilience

### Retry Logic

```python
from crewai import Task, Agent
import time

def retry_callback(attempt, error):
    print(f"Task failed on attempt {attempt}: {error}")
    time.sleep(2 ** attempt)  # Exponential backoff

resilient_task = Task(
    description="Perform web research with retry logic",
    agent=researcher,
    expected_output="Research findings",
    max_retries=3,
    retry_callback=retry_callback
)
```

### Error Recovery

```python
from crewai import Crew, Process

def error_handler(error, context):
    print(f"Error occurred: {error}")
    # Implement recovery logic
    if "rate_limit" in str(error).lower():
        time.sleep(60)  # Wait for rate limit reset
        return True  # Retry
    return False  # Don't retry

error_resilient_crew = Crew(
    agents=[researcher, analyst],
    tasks=[research_task, analysis_task],
    error_handler=error_handler,
    max_retries=3,
    verbose=2
)
```

### Fallback Agents

```python
from crewai import Agent, Task, Crew

# Primary agent
primary_researcher = Agent(
    role='Primary Researcher',
    goal='Conduct comprehensive research',
    backstory='Expert researcher with specialized tools',
    tools=[advanced_search_tool, database_tool]
)

# Fallback agent with basic capabilities
fallback_researcher = Agent(
    role='Backup Researcher',
    goal='Conduct basic research when primary agent fails',
    backstory='Reliable researcher with basic tools',
    tools=[basic_search_tool]
)

# Task with fallback logic
research_with_fallback = Task(
    description="Conduct research with fallback support",
    agent=primary_researcher,
    fallback_agent=fallback_researcher,
    expected_output="Research findings"
)
```

## Performance Optimization

### Parallel Execution

```python
from crewai import Crew, Process
import asyncio

# Async crew execution
async def run_crew_async():
    crew = Crew(
        agents=[researcher1, researcher2, researcher3],
        tasks=[task1, task2, task3],
        process=Process.sequential,
        verbose=2
    )

    result = await crew.kickoff_async(inputs={'topic': 'AI Trends'})
    return result

# Run multiple crews in parallel
async def run_multiple_crews():
    crews = [create_crew(topic) for topic in ['AI', 'ML', 'NLP']]
    results = await asyncio.gather(*[crew.kickoff_async() for crew in crews])
    return results
```

### Resource Management

```python
from crewai import Crew
import threading

class ResourceManager:
    def __init__(self, max_concurrent_agents=5):
        self.semaphore = threading.Semaphore(max_concurrent_agents)
        self.active_agents = 0

    def acquire_agent_slot(self):
        self.semaphore.acquire()
        self.active_agents += 1

    def release_agent_slot(self):
        self.semaphore.release()
        self.active_agents -= 1

resource_manager = ResourceManager(max_concurrent_agents=3)

# Crew with resource management
resource_managed_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    resource_manager=resource_manager,
    verbose=2
)
```

### Caching and Optimization

```python
from crewai import Agent, Task
from functools import lru_cache

# Agent with caching capabilities
class CachedAgent(Agent):
    @lru_cache(maxsize=100)
    def cached_execution(self, task_description):
        return super().execute_task(task_description)

cached_researcher = CachedAgent(
    role='Cached Researcher',
    goal='Perform research with caching',
    backstory='Efficient researcher with caching capabilities'
)

# Task with caching
cached_task = Task(
    description="Research AI trends (cached)",
    agent=cached_researcher,
    expected_output="Cached research results",
    cache_results=True
)
```

## Integration Patterns

### Web Application Integration

```python
from flask import Flask, request, jsonify
from crewai import Crew, Agent, Task

app = Flask(__name__)

# Initialize crew components
researcher = Agent(
    role='API Researcher',
    goal='Research topics via web API',
    backstory='Researcher accessible via web API'
)

@app.route('/research', methods=['POST'])
def research_endpoint():
    data = request.json
    topic = data.get('topic')

    # Create dynamic task
    research_task = Task(
        description=f"Research the topic: {topic}",
        agent=researcher,
        expected_output="Research findings"
    )

    # Execute crew
    crew = Crew(agents=[researcher], tasks=[research_task])
    result = crew.kickoff()

    return jsonify({'result': result})

if __name__ == '__main__':
    app.run(debug=True)
```

### Celery Background Tasks

```python
from celery import Celery
from crewai import Crew, Agent, Task

app = Celery('crewai_tasks')

@app.task
def execute_crew_task(topic, agents_config, tasks_config):
    # Reconstruct agents and tasks from config
    agents = [create_agent_from_config(config) for config in agents_config]
    tasks = [create_task_from_config(config) for config in tasks_config]

    # Execute crew
    crew = Crew(agents=agents, tasks=tasks)
    result = crew.kickoff(inputs={'topic': topic})

    return result

# Usage
result = execute_crew_task.delay(
    topic="AI in Healthcare",
    agents_config=[researcher_config, analyst_config],
    tasks_config=[research_config, analysis_config]
)
```

### Database Integration

```python
from sqlalchemy import create_engine, Column, Integer, String, Text, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from crewai import Crew, Agent, Task
import datetime

Base = declarative_base()

class CrewExecution(Base):
    __tablename__ = 'crew_executions'

    id = Column(Integer, primary_key=True)
    crew_name = Column(String(100))
    input_data = Column(Text)
    output_data = Column(Text)
    execution_time = Column(DateTime, default=datetime.datetime.utcnow)
    status = Column(String(50))

# Database-integrated crew
class DatabaseCrew(Crew):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.engine = create_engine('sqlite:///crew_executions.db')
        Base.metadata.create_all(self.engine)
        Session = sessionmaker(bind=self.engine)
        self.session = Session()

    def kickoff(self, inputs=None):
        # Log execution start
        execution = CrewExecution(
            crew_name=self.__class__.__name__,
            input_data=str(inputs),
            status='running'
        )
        self.session.add(execution)
        self.session.commit()

        try:
            result = super().kickoff(inputs)
            execution.output_data = str(result)
            execution.status = 'completed'
        except Exception as e:
            execution.status = f'failed: {str(e)}'
            raise
        finally:
            self.session.commit()

        return result
```

## Best Practices

### Agent Design Principles

- **Single Responsibility**: Each agent should have one clear, focused role (see the sketch after this list)
- **Clear Goals**: Define specific, measurable goals for each agent
- **Rich Backstories**: Provide detailed context to improve agent behavior
- **Appropriate Tools**: Equip agents with tools relevant to their roles
- **Delegation Strategy**: Use delegation thoughtfully to avoid unnecessary complexity
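
As a rough illustration of the single-responsibility principle, the sketch below (role names are hypothetical) splits one overloaded agent into two focused ones, each with a clear goal:

```python
from crewai import Agent

# Too broad: a single agent responsible for research, writing, and review
# generalist = Agent(role='Researcher/Writer/Reviewer', ...)

# Better: focused agents, each with a single clear responsibility
researcher = Agent(
    role='Market Researcher',
    goal='Collect and verify facts about a given market segment',
    backstory='A detail-oriented analyst who documents sources for every claim.',
    allow_delegation=False
)

writer = Agent(
    role='Report Writer',
    goal='Turn verified research notes into a clear, well-structured report',
    backstory='A technical writer focused on clarity and consistent structure.',
    allow_delegation=False
)
```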

### Task Organization

- **Clear Descriptions**: Write detailed, unambiguous task descriptions
- **Expected Outputs**: Specify exactly what output format is expected (see the sketch after this list)
- **Context Dependencies**: Clearly define task dependencies and context sharing
- **Error Handling**: Implement robust error handling and recovery mechanisms
- **Performance Monitoring**: Track task execution and performance metrics
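
As a small sketch of these points (the `writer` agent and `market_research_task` are placeholders defined elsewhere), the task below pairs a detailed description with a precise expected output format and declares its dependency explicitly through `context`:

```python
from crewai import Task

summary_task = Task(
    description=(
        "Summarize the key findings from the market research. "
        "Cover pricing trends, the top three competitors, and regulatory risks."
    ),
    expected_output=(
        "A Markdown document with three sections: 'Pricing', 'Competitors', "
        "and 'Risks', each containing 3-5 bullet points with sources."
    ),
    agent=writer,                   # placeholder agent defined elsewhere
    context=[market_research_task]  # explicit dependency on the upstream research task
)
```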

### Crew Orchestration

- **Process Selection**: Choose the appropriate process type (sequential, hierarchical, parallel); see the sketch after this list
- **Memory Management**: Use memory strategically for context retention
- **Resource Limits**: Set appropriate limits on execution time and iterations
- **Monitoring**: Implement comprehensive logging and monitoring
- **Testing**: Thoroughly test crew behavior with a variety of inputs
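
For process selection specifically, a minimal comparison might look like the sketch below (the agent and task variables are placeholders). As in the orchestration examples earlier in this sheet, the hierarchical process expects either a `manager_llm` or a `manager_agent`:

```python
from crewai import Crew, Process

# Sequential: tasks run in the order they are listed
sequential_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential
)

# Hierarchical: a manager model plans, delegates, and reviews the work
hierarchical_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o"  # assumed model identifier; a manager_agent also works
)
```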

### Performance Optimization

- **Agent Specialization**: Create specialized agents for specific domains
- **Tool Optimization**: Use efficient tools and minimize external API calls
- **Caching**: Implement caching for frequently accessed data
- **Parallel Execution**: Use parallel processing where appropriate
- **Resource Management**: Monitor and manage compute resources

## Troubleshooting

### Common Issues

#### Agent Not Responding

```python
# Debug agent configuration
agent = Agent(
    role='Debug Agent',
    goal='Test agent responsiveness',
    backstory='Agent for debugging purposes',
    verbose=True,  # Enable verbose output
    max_iter=1,  # Limit iterations for testing
    allow_delegation=False
)

# Test with simple task
test_task = Task(
    description="Say hello and confirm you are working",
    agent=agent,
    expected_output="Simple greeting message"
)
```

#### Memory Issues

```python
# Clear memory if needed
crew.memory.clear()

# Check memory usage
print(f"Memory entities: {len(crew.memory.entities)}")
print(f"Memory size: {crew.memory.get_memory_size()}")
```

#### Tool Integration Issues

```python
# Test tool functionality
tool = SerperDevTool()
try:
    result = tool.run("test query")
    print(f"Tool working: {result}")
except Exception as e:
    print(f"Tool error: {e}")
```

#### Performance Issues

```python
# Monitor execution time
import time

start_time = time.time()
result = crew.kickoff()
execution_time = time.time() - start_time
print(f"Execution time: {execution_time} seconds")

# Profile memory usage
import tracemalloc

tracemalloc.start()
result = crew.kickoff()
current, peak = tracemalloc.get_traced_memory()
print(f"Current memory usage: {current / 1024 / 1024:.2f} MB")
print(f"Peak memory usage: {peak / 1024 / 1024:.2f} MB")
```

---

*This CrewAI cheat sheet provides everything you need to build sophisticated multi-agent AI systems. From basic setup to advanced orchestration patterns, use these examples and best practices to create powerful AI applications that harness the combined strength of multiple specialized agents.*