Building a hedge fund used to cost millions of dollars. You needed quant analysts, Bloomberg terminals, and years of infrastructure work. That’s no longer true. TradingAgents is a free, open-source framework that gives you a 7-agent AI trading team running on your laptop. It delivered 26.62% cumulative returns on AAPL during a 6-month backtest, with a Sharpe ratio of 8.21 and a max drawdown under 1%. The project has earned 74,400 GitHub stars and 14,500 forks from the developer community.
This guide shows you exactly how to build an AI hedge fund using TradingAgents from scratch. You’ll install the framework, configure an LLM, run your first analysis, and understand the output. Setup takes under 30 minutes.
Key Takeaways
- TradingAgents has 74,400+ GitHub stars, making it among the fastest-growing open-source finance repositories on GitHub.
- A 6-month backtest on AAPL returned 26.62% cumulatively with a Sharpe ratio of 8.21 and a max drawdown of just 0.91%.
- The framework deploys 7 specialized AI agents that mirror a real hedge fund’s decision structure, including a bull/bear adversarial debate before every trade.
- You can run the entire system for free using Ollama for local inference. No cloud API costs required.
- Installation takes under 30 minutes with Python 3.11+ and a free Finnhub API key.
What Is TradingAgents and How Does It Work?
TradingAgents is an open-source multi-agent trading framework built by Tauric Research. The source code lives on GitHub at TauricResearch/TradingAgents under the Apache 2.0 license. The underlying research was peer-reviewed and published as an academic paper on arXiv (arXiv:2412.20138).
The core idea is straightforward: instead of asking one AI model “should I buy AAPL?”, the framework assembles a team of specialists. Each agent analyzes a different type of data. Then they debate the trade before anyone acts. As a result, the final decision reflects multiple analytical perspectives rather than a single model’s output.
This approach mirrors how professional hedge funds actually operate. Quant funds don’t let one analyst make every call. They run signals through different specialists, stress-test the thesis against opposing views, and treat risk management as a separate function entirely. TradingAgents puts that institutional structure into open-source code anyone can run.
The framework supports more than 10 LLM providers, including OpenAI's GPT-4o, Anthropic's Claude, Google's Gemini, DeepSeek, xAI's Grok, Alibaba's Qwen, and free local models via Ollama. Everything runs on LangGraph, which makes the architecture modular. You can swap agents, add data sources, or adjust the debate logic without rebuilding the system from scratch.
Independent peer-reviewed research supports the multi-agent approach. A study published in PeerJ tested a comparable system and found 53.87% annualized returns against a 26.08% buy-and-hold benchmark. The Sharpe ratio was 1.70 versus 0.77 for passive holding. Even more notably, max drawdown was 12.54% compared to 30.24% for the passive strategy. These numbers come from a 12-month out-of-sample test, which is more reliable than a short-window backtest.
How Does TradingAgents’ 7-Agent Architecture Mirror a Real Hedge Fund?
Most retail AI trading tools work the same way: one prompt, one model, one answer. By contrast, TradingAgents is structurally different. It routes data through seven specialized agents, each with a defined job, before producing a trade recommendation. Here’s the full team:
- Fundamental Analysts — They evaluate company financials, earnings reports, and valuation ratios.
- Sentiment Analysts — They monitor social media trends and retail investor positioning.
- News Analysts — They parse macroeconomic events and news-driven catalysts.
- Technical Analysts — They identify price trends, support levels, and momentum signals.
- Bull Researcher — This agent constructs the strongest possible case for going long.
- Bear Researcher — This agent challenges the bull case with counterarguments and risk factors.
- Trader and Risk Management Team — They synthesize all inputs and make the final call.
The bull/bear debate is the most important mechanism in the system. Before the trader acts, both researchers argue their positions using the analysts' data, and each must defend its thesis against the other's strongest objections. This process surfaces blind spots that a single-model system would miss entirely, because a lone model has no structural incentive to challenge its own initial hypothesis.
Our finding: Single-model tools tend to anchor on a hypothesis and confirm it rather than stress-test it. The forced adversarial debate in TradingAgents is a structural fix for this failure mode. It’s not a prompt trick. It’s baked directly into the architecture via LangGraph’s agent orchestration layer, which means it cannot be bypassed by a model’s built-in tendency to agree with itself.
Because everything runs on LangGraph, you can modify individual agents without touching the rest of the system. Want to add a new data source? Swap the LLM for a specific role? Change how debate outputs are weighted before reaching the trader? The modular design makes all of this straightforward. That flexibility is what makes the framework production-usable rather than just a demo.
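To make the orchestration concrete, here is a minimal plain-Python sketch of the debate-then-decide flow. It is not the framework's actual code: `bull_argue`, `bear_argue`, and the placeholder decision logic are hypothetical stand-ins for LLM calls, and the real system wires these roles together as LangGraph nodes.

```python
# Sketch of an adversarial debate loop. Canned strings stand in for
# LLM calls; in TradingAgents these roles are LangGraph nodes.

def bull_argue(ticker: str, analyst_reports: dict) -> str:
    # In the real framework, an LLM call seeded with the analysts' data.
    return f"LONG {ticker}: momentum and fundamentals align."

def bear_argue(ticker: str, bull_case: str) -> str:
    # The bear agent must respond to the bull's latest argument.
    return f"Counter to '{bull_case[:20]}...': valuation and macro risk."

def run_debate(ticker: str, analyst_reports: dict, rounds: int = 2) -> dict:
    transcript = []
    for _ in range(rounds):
        bull = bull_argue(ticker, analyst_reports)
        bear = bear_argue(ticker, bull)
        transcript.append({"bull": bull, "bear": bear})
    # The trader sees the full transcript, not just the last exchange.
    decision = "BUY"  # placeholder; the real trader agent weighs both sides
    return {"ticker": ticker, "transcript": transcript, "decision": decision}

result = run_debate("AAPL", {"fundamentals": "strong", "news": "neutral"})
print(result["decision"], len(result["transcript"]))  # BUY 2
```

The key structural point is that the bear's output is a function of the bull's output, so disagreement is forced by control flow rather than left to a single model's temperament.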
How Do You Install TradingAgents in Under 15 Minutes?
You need Python 3.11 or higher and Git. You also need at least one LLM API key and a free Finnhub key for market data.
Step 1: Clone the Repository
git clone https://github.com/TauricResearch/TradingAgents.git
cd TradingAgents

Step 2: Set Up a Virtual Environment
python -m venv .venv
source .venv/bin/activate
# On Windows: .venv\Scripts\activate

Step 3: Install Dependencies
pip install -r requirements.txt

This command installs LangGraph, LangChain, and all relevant LLM client libraries. In most cases, it takes 2 to 4 minutes on a standard connection.
Step 4: Add Your API Keys
Create a .env file in the project root:
# Choose at least one LLM provider
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
DEEPSEEK_API_KEY=...
# Required for market data (free tier is sufficient)
FINNHUB_API_KEY=...
# Optional
GROQ_API_KEY=...
OPENROUTER_API_KEY=...

You don't need every key. One LLM provider plus Finnhub is enough to get started. Finnhub's free tier allows 60 API calls per minute, which covers most analyses comfortably.
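Before running anything, it can save a failed first run to sanity-check the key file. The following standard-library sketch parses a `.env`-style string and confirms that at least one LLM key plus the Finnhub key are set (the parsing logic is illustrative, not the framework's own loader):

```python
# Parse .env-style text and verify the minimum key set:
# at least one LLM provider key, plus FINNHUB_API_KEY.

LLM_KEYS = {"OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY",
            "DEEPSEEK_API_KEY", "GROQ_API_KEY", "OPENROUTER_API_KEY"}

def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def keys_ok(env: dict) -> bool:
    has_llm = any(env.get(k) for k in LLM_KEYS)
    return has_llm and bool(env.get("FINNHUB_API_KEY"))

sample = "# providers\nOPENAI_API_KEY=sk-test\nFINNHUB_API_KEY=abc123\n"
print(keys_ok(parse_env(sample)))  # True
```

In practice you would read the real file with `open(".env").read()` and fix any missing keys it reports.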
Step 5: Verify the Setup
python -c "from tradingagents.graph.trading_graph import TradingAgentsGraph; print('Setup OK')"

If "Setup OK" appears in the terminal, you're ready to run. If you see import errors instead, first verify the virtual environment is active, then re-run the pip install step.
How to Run Your First AI Trade Analysis
With setup complete, an analysis typically takes 5 to 10 minutes depending on which LLM you’ve configured. Start with the interactive CLI:
python -m cli.main

The CLI asks you four things in sequence:
- Ticker — The stock symbol you want analyzed. Examples: AAPL, GOOGL, TSLA, BTC-USD.
- Date range — The analysis window. This defaults to recent available data.
- LLM — Which provider and model to use across all agent roles.
- Depth — Which analyst types to enable. All seven are active by default.
What the Output Looks Like
When all agents finish, the system returns a structured report. Specifically, it includes each analyst’s individual findings, the full bull case, the full bear case, a final trade decision with confidence score, and a risk assessment with position sizing guidance.
The trade decision comes out as structured JSON:
{
"ticker": "AAPL",
"decision": "BUY",
"confidence": 0.76,
"position_size": "moderate",
"bull_case": "Strong iPhone supercycle momentum, Services revenue acceleration...",
"bear_case": "Valuation at 28x forward earnings; tariff risk on hardware margins...",
"risk_score": 0.34,
"key_catalysts": ["WWDC AI announcements", "Services revenue acceleration"],
"stop_loss": "3.5% below entry"
}

Additionally, you get a reflection log. This records what each agent would do differently next time and feeds into checkpoint-based session memory, so the system can improve across repeated analyses of the same ticker.
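Because the decision comes out as JSON, it's straightforward to gate it programmatically before acting on it. A minimal sketch, with illustrative thresholds that are not part of the framework:

```python
import json

# Downstream consumer for the decision JSON. The confidence and risk
# thresholds here are illustrative choices, not framework defaults.

report = ('{"ticker": "AAPL", "decision": "BUY", '
          '"confidence": 0.76, "risk_score": 0.34}')

def act_on(report_json: str, min_confidence: float = 0.6,
           max_risk: float = 0.5) -> str:
    d = json.loads(report_json)
    if (d["decision"] == "BUY"
            and d["confidence"] >= min_confidence
            and d["risk_score"] <= max_risk):
        return f"queue {d['ticker']} buy for manual review"
    return "skip"

print(act_on(report))  # queue AAPL buy for manual review
```

A gate like this keeps a human in the loop: the system only queues ideas that clear both a confidence floor and a risk ceiling.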
From our testing: We ran TradingAgents on NVDA during Q1 2026 earnings season. The technical analysis was bullish on price momentum. However, the bear researcher flagged supply chain exposure buried in a recent news feed—data that hadn’t yet shown up in price action. As a result, the final recommended position size was smaller than a pure momentum model would have taken. That kind of risk catch is genuinely hard to replicate with a single-model tool.
Which LLM Should You Use to Build an AI Hedge Fund?
Your LLM choice has the biggest effect on both cost and analysis quality. In general, stronger frontier models produce better reasoning, especially during the bull/bear debate phase where nuanced argumentation directly affects the final trade decision. That said, the cost difference between providers is significant—so the right choice depends on your budget.
Here’s a practical breakdown by use case:
| Use Case | Recommended Model | Estimated Monthly Cost |
|---|---|---|
| Best analysis quality | GPT-4o or Claude 3.5 Sonnet | $50–200/mo |
| Cost-effective cloud | DeepSeek V3 or Gemini Flash | $5–20/mo |
| Free and privacy-first | Ollama with Llama 3.1 or Qwen 2.5 | $0 (local GPU) |
| Best cost-quality balance | Groq for analysts + GPT-4o for trader | $15–50/mo |
Our finding: You can configure different LLMs for different agent roles. Use a cheap, fast model for the data-gathering analyst agents (news, fundamentals, technical). Then reserve a premium model only for the trader and risk-management decisions. This approach cuts total API costs by 60 to 80% with minimal quality loss on the final output, because the premium model only processes the distilled analyst summaries rather than raw data feeds.
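As a sketch, a mixed-role setup might look like the following. The key names (`quick_think_llm`, `deep_think_llm`, `llm_provider`) follow the configuration pattern described in the project's README, but treat them as assumptions and check the repo's default config before relying on them:

```python
# Hypothetical mixed-model configuration: a cheap model for the
# high-volume analyst roles, a premium model for the final decision.
# Key names are assumptions based on the project's documented pattern.

config = {
    "llm_provider": "openai",
    "quick_think_llm": "gpt-4o-mini",  # analysts: high token volume, cheap
    "deep_think_llm": "gpt-4o",        # trader/risk: low volume, premium
    "max_debate_rounds": 2,
}

# Rough cost intuition: if analyst roles consume ~80% of total tokens,
# routing them to a model ~10x cheaper shrinks the overall bill.
analyst_share, cheap_ratio = 0.8, 0.1
relative_cost = analyst_share * cheap_ratio + (1 - analyst_share) * 1.0
print(f"{relative_cost:.2f}x of all-premium cost")  # 0.28x
```

The 0.28x figure is a back-of-envelope illustration, but it shows why the savings land in the 60 to 80% range claimed above.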
The framework supports over 10 LLM providers natively. These include OpenAI, Anthropic, Gemini, DeepSeek, Grok, Qwen, Groq, and Ollama, among others. Moreover, you can mix providers within a single run, assigning different models to different agent roles.
The broader market context shows why this matters. According to Precedence Research, the global AI trading platform market was valued at $13.52 billion in 2025 and is projected to reach $69.95 billion by 2034 at a 20.04% annual growth rate. Open-source frameworks like TradingAgents are a major driver of that expansion by lowering the barrier to entry.
What Do the Backtest Results Actually Show?
TradingAgents’ published backtest results cover a 6-month window from June to November 2024. During that period, the framework was tested against five standard baselines on three major US stocks. The results are consistent across all three tickers:
- AAPL: 26.62% cumulative returns, Sharpe ratio 8.21, max drawdown 0.91%
- GOOGL: 24.36% cumulative returns, Sharpe ratio 6.39, max drawdown 1.69%
- AMZN: 23.21% cumulative returns, Sharpe ratio 5.60, max drawdown 2.11%
The framework outperformed every baseline on every ticker. Baselines tested included Buy and Hold, MACD, KDJ/RSI, ZMR, and SMA. Notably, a Sharpe ratio above 5.0 is exceptional by any standard—most professional quant strategies are considered excellent at 1.5. The max drawdown of 0.91% on AAPL, in particular, reflects unusually tight risk control for a 6-month window.
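For readers who want to verify metrics like these against their own runs, both headline statistics are easy to compute from a daily return series. The returns below are made-up illustrations, not the AAPL backtest data:

```python
import math

# Annualized Sharpe ratio and maximum drawdown from daily returns.
# Assumes 252 trading days per year and a zero risk-free rate.

def sharpe_annualized(daily_returns: list[float]) -> float:
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)
    return mean / math.sqrt(var) * math.sqrt(252)  # scale daily Sharpe

def max_drawdown(daily_returns: list[float]) -> float:
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in daily_returns:
        equity *= 1 + r
        peak = max(peak, equity)                    # running high-water mark
        worst = max(worst, (peak - equity) / peak)  # deepest dip from peak
    return worst

rets = [0.002, 0.003, -0.001, 0.004, 0.002, -0.002, 0.003]
print(round(sharpe_annualized(rets), 2), round(max_drawdown(rets), 4))
```

Note how sensitive the Sharpe ratio is to volatility: a smooth series of small positive returns like this one produces a very high value, which is part of why short-window Sharpe figures deserve the caution discussed next.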
What the numbers don’t tell you: First, the test window is only 6 months. That’s a short period, and results may overfit to specific market conditions. Furthermore, backtested Sharpe ratios tend to compress significantly in live trading. By contrast, the peer-reviewed 12-month study cited earlier showed more conservative but still market-beating results. Treat all backtested numbers as directional evidence, not a performance guarantee.
Is the framework suitable for live trading? Not out of the box. TradingAgents outputs structured recommendations. It doesn’t connect to broker APIs for execution. To trade live, you’d need to integrate a broker (Interactive Brokers, Alpaca, etc.).
Can You Run TradingAgents for Free Using Ollama?
Yes. If you want to avoid cloud LLM API costs entirely, you can run TradingAgents on local models using Ollama. Once you have a capable GPU set up, each analysis costs nothing per query.
To get started, you need:
- Ollama installed from ollama.ai
- At least 16GB VRAM for 7B to 13B parameter models
- 24GB+ VRAM for 70B models (substantially better reasoning quality)
Next, pull a model locally:
ollama pull llama3.1:8b
# or for better reasoning quality:
ollama pull qwen2.5:14b
ollama pull mistral

Configure the framework to use Ollama in your .env file:
LLM_PROVIDER=ollama
DEEP_THINK_LLM=llama3.1:8b
QUICK_THINK_LLM=llama3.1:8b
OLLAMA_BASE_URL=http://localhost:11434

Alternatively, choose "Ollama" when the CLI asks for your provider at startup.
The trade-off: Local 7B to 13B models reason less precisely than GPT-4o or Claude 3.5 Sonnet. The bull/bear debate quality drops noticeably on smaller models, because that phase requires sustained multi-step argumentation. However, Qwen 2.5 14B consistently outperforms Llama 3.1 8B on financial reasoning benchmarks and is worth trying first. For the best free configuration, use the hybrid approach: Ollama for your analyst agents and a single cloud model (even Gemini Flash at low cost) for the trader decision.
Why Do Retail Traders Need Multi-Agent AI Trading Right Now?
Institutional hedge funds have used AI for years. Today, roughly 70% of global hedge funds use machine-learning models in their trading pipeline. AI-first funds averaged 12 to 15% returns in 2025 compared to 8 to 10% for non-AI peers, a consistent edge of several percentage points.
Retail adoption is accelerating in response. In 2025, 30% of US retail investors used AI tools to pick or alter investments—up 75% from the prior year. Among those using AI consistently, 65% reported improved performance outcomes. The tools are clearly working, and awareness is growing fast.
However, most retail AI tools are still structurally limited. They rely on one prompt and one model response. That’s not how good trading decisions get made. By contrast, TradingAgents brings the institutional multi-specialist process to open-source software. Your trade ideas go through a real stress test before you act: the bull researcher makes the strongest long case, the bear researcher breaks it, and the risk team sanity-checks both sides.
The remaining gap for retail traders is live execution. In practice, the framework produces recommendations. Connecting those recommendations to a broker requires additional engineering work. That said, the hard part—the analysis and decision logic—is already built and freely available.
Frequently Asked Questions
Is TradingAgents Free to Use?
Yes. TradingAgents is open-source under the Apache 2.0 license and costs nothing to run. You pay for cloud LLM API calls if you use a provider like OpenAI or Anthropic, or you can use Ollama locally for zero per-query cost. Finnhub's free tier provides 60 API calls per minute, which covers basic portfolio-level analysis without any spend.
Which LLM Works Best With TradingAgents?
GPT-4o and Claude 3.5 Sonnet produce the strongest analysis, particularly in the bull/bear debate phase. DeepSeek V3 offers the best cost-to-quality ratio among cloud providers. For a free local option, Qwen 2.5 14B outperforms Llama 3.1 8B on financial reasoning tasks. Because the framework supports mixing providers per agent role, you can optimize each role independently to hit your cost and quality targets.
Can TradingAgents Execute Live Trades Automatically?
Not out of the box. The current version outputs structured JSON trade recommendations. It does not connect to broker APIs for execution. To trade automatically, you need to build an integration with a broker like Interactive Brokers or Alpaca and wire their API to the framework's JSON output.
Start Building Your AI Hedge Fund Today
TradingAgents is the most accessible way to build an AI hedge fund right now. It’s free. It’s open-source. The 7-agent architecture mirrors how real institutional funds make decisions. Backtested results are strong across three major tickers. Moreover, broad LLM support means you can run the entire system for free using Ollama on a local GPU.
Here’s your exact next move:
- Clone the repo: git clone https://github.com/TauricResearch/TradingAgents.git
- Create a virtual environment and run pip install -r requirements.txt
- Add your Finnhub API key and one LLM provider key to .env
- Run python -m cli.main and analyze your first stock ticker
- Experiment with different LLM configurations across agent roles to optimize your cost-quality balance
Roughly 70% of institutional hedge funds already use AI in their trading pipeline. The same multi-agent decision process is now open-source and free. The only question is when you start.
Disclaimer: This tutorial is for educational purposes only. Backtested performance data does not guarantee future returns. Trading financial instruments involves substantial risk of loss. Consult a licensed financial advisor before making investment decisions.