54 changes: 54 additions & 0 deletions .gitignore
@@ -0,0 +1,54 @@
# Environment variables
.env
.env.local
.env.development.local
.env.test.local
.env.production.local

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
.venv/
venv/
ENV/

# Node.js
node_modules/
npm-debug.log
yarn-debug.log
yarn-error.log

# IDE
.idea/
.vscode/
*.swp
*.swo

# Logs
logs
*.log

# OS specific
.DS_Store
Thumbs.db

# Application specific
nash_equilibrium_log_*.json
1 change: 1 addition & 0 deletions LICENSE
@@ -1,6 +1,7 @@
MIT License

Copyright (c) 2025 PhialsBasement
Copyright (c) 2025 Faramarz Hashemi - Nash Equilibrium Integration

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
107 changes: 99 additions & 8 deletions README.md
@@ -66,17 +66,108 @@ The magic is in:
- Dynamic thinking depth


## Star History (THANK YOU SO MUCH)

<a href="https://www.star-history.com/#PhialsBasement/Chain-of-Recursive-Thoughts&Timeline">
 <picture>
   <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=PhialsBasement/Chain-of-Recursive-Thoughts&type=Timeline&theme=dark" />
   <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=PhialsBasement/Chain-of-Recursive-Thoughts&type=Timeline" />
   <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=PhialsBasement/Chain-of-Recursive-Thoughts&type=Timeline" />
 </picture>
</a>

# NECoRT (Nash-Equilibrium Chain of Recursive Thoughts) 🧠🔄🎮

## TL;DR: AI agents compete and collaborate to reach optimal equilibrium responses. Evolution meets Game Theory.

### What is NECoRT?

NECoRT extends the Chain of Recursive Thoughts (CoRT) framework by integrating Nash Equilibrium concepts from game theory. It creates a multi-agent ecosystem where AI instances:

1. Generate diverse responses to the same prompt
2. Evaluate each other's responses
3. Improve their responses based on group feedback
4. Converge on a stable equilibrium where no agent would unilaterally change their strategy

The result is responses that are not just recursive improvements but represent optimal consensus points where competing strategies reach equilibrium.
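The four steps above can be sketched as a single loop. This is an illustrative sketch, not the actual implementation in `nash_recursive_thinking`; the `generate` and `evaluate` callables are hypothetical stand-ins for the underlying LLM calls:

```python
def necort_loop(prompt, agents, generate, evaluate, threshold=0.05, max_rounds=10):
    """Sketch of the generate/evaluate/improve/converge cycle.

    `generate(agent, prompt, feedback)` and `evaluate(agent, response)` are
    hypothetical stand-ins for the LLM calls the real system would make.
    """
    feedback = None
    prev_scores = None
    for round_no in range(1, max_rounds + 1):
        # 1. Each agent answers the same prompt (using last round's feedback).
        responses = {a: generate(a, prompt, feedback) for a in agents}
        # 2. Every agent rates every response, forming a utility matrix.
        utility = {a: {b: evaluate(a, responses[b]) for b in agents} for a in agents}
        # 3. Group feedback: the mean score each agent's response received.
        scores = {b: sum(utility[a][b] for a in agents) / len(agents) for b in agents}
        feedback = scores
        # 4. Stop once no agent's standing moved by more than `threshold`.
        if prev_scores and all(abs(scores[b] - prev_scores[b]) <= threshold for b in agents):
            break
        prev_scores = scores
    best = max(scores, key=scores.get)
    return {"response": responses[best],
            "convergence_round": round_no,
            "final_response_agent": best}
```

The returned dictionary mirrors the keys shown in the API usage example below, but the loop body is only a schematic of the idea.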

### How is this different from regular CoRT?

| Feature | CoRT | NECoRT |
|---------|------|--------|
| Thinking strategy | Single agent refining own thoughts | Multiple agents competing and evaluating |
| Improvement mechanism | Generate alternatives & pick best | Game-theoretic utility optimization |
| Termination condition | Fixed rounds | Dynamic convergence to equilibrium |
| Theoretical foundation | Self-reflection | Nash Equilibrium in game theory |
| Output stability | Varies with each run | Converges to stable equilibria |

## The Nash Equilibrium Advantage

In game theory, a Nash Equilibrium is a state where no player can gain advantage by changing only their own strategy, given what others are doing. NECoRT applies this to AI reasoning by:

1. **Multiple Perspectives**: Creates a utility matrix of how agents rate each other's responses
2. **Strategic Improvements**: Agents learn from highest-rated responses
3. **Convergence Detection**: Automatically identifies when the system reaches equilibrium
4. **Optimal Selection**: Chooses the response that represents the best equilibrium point
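As a concrete illustration of the definition itself (not of NECoRT's internal algorithm), a pure-strategy Nash equilibrium of a two-player game can be found by checking for mutual best responses:

```python
def pure_nash_equilibria(A, B):
    """Pure-strategy Nash equilibria of a two-player bimatrix game.

    A[i][j] is the row player's payoff, B[i][j] the column player's.
    (i, j) is an equilibrium iff neither player gains by deviating alone.
    """
    rows, cols = len(A), len(A[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            row_best = A[i][j] >= max(A[r][j] for r in range(rows))
            col_best = B[i][j] >= max(B[i][c] for c in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma payoffs (row 0 / col 0 = cooperate, 1 = defect).
payoff_row = [[3, 0], [5, 1]]
payoff_col = [[3, 5], [0, 1]]
```

`pure_nash_equilibria(payoff_row, payoff_col)` yields `[(1, 1)]`: mutual defection, the classic result that no single player can improve on alone.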

## How to Use NECoRT

### Quick Start

```bash
# On Windows
start-necort.bat

# On Linux
pip install -r requirements.txt
cd frontend && npm install
cd ..
python ./necort_web.py

# In a separate terminal
cd frontend
npm start
```

### API Usage

```python
from nash_recursive_thinking import NashEquilibriumRecursiveChat

# Initialize with your API key
necort = NashEquilibriumRecursiveChat(
    api_key="your_openrouter_api_key",
    num_agents=3,
    convergence_threshold=0.05
)

# Get an equilibrium-optimized response
result = necort.think_and_respond("Your complex question here")
print(result["response"])

# Examine the Nash Equilibrium process
print(f"Converged in {result['convergence_round']} rounds")
print(f"Final response from agent {result['final_response_agent']}")
```

## Technical Implementation

NECoRT implements:

1. **Utility Matrix Construction**: Each agent evaluates all other agents' responses
2. **Nash Equilibrium Detection**: Identifies response sets that represent stable equilibria
3. **Convergence Monitoring**: Tracks changes in utility matrix until stabilization
4. **Equilibrium Response Selection**: Picks optimal response from the equilibrium set
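Convergence monitoring (step 3) amounts to checking that the utility matrix has stopped moving between rounds. A minimal sketch, assuming the matrix is kept as nested lists and using the same threshold as the `convergence_threshold` parameter above:

```python
def matrix_delta(prev, curr):
    """Largest entry-wise change between two utility matrices (nested lists)."""
    return max(abs(p - c)
               for prev_row, curr_row in zip(prev, curr)
               for p, c in zip(prev_row, curr_row))

def has_converged(history, threshold=0.05):
    """Equilibrium is declared once consecutive matrices differ by < threshold."""
    if len(history) < 2:
        return False
    return matrix_delta(history[-2], history[-1]) < threshold
```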

## Comparison to Other Methods

| Method | Strengths | Weaknesses |
|--------|-----------|------------|
| Standard LLM | Fast, single response | Limited reflection |
| Chain of Thought | Shows reasoning steps | Linear thought process |
| CoRT | Recursive improvement | Single perspective |
| NECoRT | Multi-agent equilibrium, stability, handles divergent ideas | More compute-intensive |

## Future Directions

- **Mixed Strategy Equilibria**: Allow probabilistic combinations of responses
- **Evolutionary Dynamics**: Implement replicator dynamics for response evolution
- **Coalition Formation**: Allow agent groups to form voting blocs
- **Subgame Perfection**: Extend to multi-stage reasoning games
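The replicator-dynamics direction can be sketched in a few lines: each strategy's probability share grows in proportion to how far its fitness exceeds the population average. This is a standard discrete-time approximation for illustration, not existing NECoRT code:

```python
def replicator_step(shares, fitness, dt=0.1):
    """One discrete replicator-dynamics step.

    `shares` is a probability vector over strategies; strategies fitter than
    the population average gain share, the rest lose it.
    """
    avg = sum(x * f for x, f in zip(shares, fitness))
    updated = [x + dt * x * (f - avg) for x, f in zip(shares, fitness)]
    total = sum(updated)  # renormalise so shares still sum to 1
    return [x / total for x in updated]
```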

---

*"Let your thoughts argue, evolve, and stabilize."*


### Contributing
113 changes: 113 additions & 0 deletions README_NECoRT.md
@@ -0,0 +1,113 @@
# NECoRT (Nash-Equilibrium Chain of Recursive Thoughts) 🧠🔄🎮

## TL;DR: AI agents compete and collaborate to reach optimal equilibrium responses. Evolution meets Game Theory.

### What is NECoRT?

NECoRT extends the Chain of Recursive Thoughts (CoRT) framework by integrating Nash Equilibrium concepts from game theory. It creates a multi-agent ecosystem where AI instances:

1. Generate diverse responses to the same prompt
2. Evaluate each other's responses
3. Improve their responses based on group feedback
4. Converge on a stable equilibrium where no agent would unilaterally change their strategy

The result is responses that are not just recursive improvements but represent optimal consensus points where competing strategies reach equilibrium.
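The four steps above can be sketched as a single loop. This is an illustrative sketch, not the actual implementation in `nash_recursive_thinking`; the `generate` and `evaluate` callables are hypothetical stand-ins for the underlying LLM calls:

```python
def necort_loop(prompt, agents, generate, evaluate, threshold=0.05, max_rounds=10):
    """Sketch of the generate/evaluate/improve/converge cycle.

    `generate(agent, prompt, feedback)` and `evaluate(agent, response)` are
    hypothetical stand-ins for the LLM calls the real system would make.
    """
    feedback = None
    prev_scores = None
    for round_no in range(1, max_rounds + 1):
        # 1. Each agent answers the same prompt (using last round's feedback).
        responses = {a: generate(a, prompt, feedback) for a in agents}
        # 2. Every agent rates every response, forming a utility matrix.
        utility = {a: {b: evaluate(a, responses[b]) for b in agents} for a in agents}
        # 3. Group feedback: the mean score each agent's response received.
        scores = {b: sum(utility[a][b] for a in agents) / len(agents) for b in agents}
        feedback = scores
        # 4. Stop once no agent's standing moved by more than `threshold`.
        if prev_scores and all(abs(scores[b] - prev_scores[b]) <= threshold for b in agents):
            break
        prev_scores = scores
    best = max(scores, key=scores.get)
    return {"response": responses[best],
            "convergence_round": round_no,
            "final_response_agent": best}
```

The returned dictionary mirrors the keys shown in the API usage example below, but the loop body is only a schematic of the idea.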

### How is this different from regular CoRT?

| Feature | CoRT | NECoRT |
|---------|------|--------|
| Thinking strategy | Single agent refining own thoughts | Multiple agents competing and evaluating |
| Improvement mechanism | Generate alternatives & pick best | Game-theoretic utility optimization |
| Termination condition | Fixed rounds | Dynamic convergence to equilibrium |
| Theoretical foundation | Self-reflection | Nash Equilibrium in game theory |
| Output stability | Varies with each run | Converges to stable equilibria |

## The Nash Equilibrium Advantage

In game theory, a Nash Equilibrium is a state where no player can gain advantage by changing only their own strategy, given what others are doing. NECoRT applies this to AI reasoning by:

1. **Multiple Perspectives**: Creates a utility matrix of how agents rate each other's responses
2. **Strategic Improvements**: Agents learn from highest-rated responses
3. **Convergence Detection**: Automatically identifies when the system reaches equilibrium
4. **Optimal Selection**: Chooses the response that represents the best equilibrium point
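As a concrete illustration of the definition itself (not of NECoRT's internal algorithm), a pure-strategy Nash equilibrium of a two-player game can be found by checking for mutual best responses:

```python
def pure_nash_equilibria(A, B):
    """Pure-strategy Nash equilibria of a two-player bimatrix game.

    A[i][j] is the row player's payoff, B[i][j] the column player's.
    (i, j) is an equilibrium iff neither player gains by deviating alone.
    """
    rows, cols = len(A), len(A[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            row_best = A[i][j] >= max(A[r][j] for r in range(rows))
            col_best = B[i][j] >= max(B[i][c] for c in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma payoffs (row 0 / col 0 = cooperate, 1 = defect).
payoff_row = [[3, 0], [5, 1]]
payoff_col = [[3, 5], [0, 1]]
```

`pure_nash_equilibria(payoff_row, payoff_col)` yields `[(1, 1)]`: mutual defection, the classic result that no single player can improve on alone.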

## How to Use NECoRT

### Quick Start

```bash
# On Windows
start-necort.bat

# On Linux
pip install -r requirements.txt
cd frontend && npm install
cd ..
python ./necort_web.py

# In a separate terminal
cd frontend
npm start
```

### API Usage

```python
from nash_recursive_thinking import NashEquilibriumRecursiveChat

# Initialize with your API key
necort = NashEquilibriumRecursiveChat(
    api_key="your_openrouter_api_key",
    num_agents=3,
    convergence_threshold=0.05
)

# Get an equilibrium-optimized response
result = necort.think_and_respond("Your complex question here")
print(result["response"])

# Examine the Nash Equilibrium process
print(f"Converged in {result['convergence_round']} rounds")
print(f"Final response from agent {result['final_response_agent']}")
```

## Technical Implementation

NECoRT implements:

1. **Utility Matrix Construction**: Each agent evaluates all other agents' responses
2. **Nash Equilibrium Detection**: Identifies response sets that represent stable equilibria
3. **Convergence Monitoring**: Tracks changes in utility matrix until stabilization
4. **Equilibrium Response Selection**: Picks optimal response from the equilibrium set
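Convergence monitoring (step 3) amounts to checking that the utility matrix has stopped moving between rounds. A minimal sketch, assuming the matrix is kept as nested lists and using the same threshold as the `convergence_threshold` parameter above:

```python
def matrix_delta(prev, curr):
    """Largest entry-wise change between two utility matrices (nested lists)."""
    return max(abs(p - c)
               for prev_row, curr_row in zip(prev, curr)
               for p, c in zip(prev_row, curr_row))

def has_converged(history, threshold=0.05):
    """Equilibrium is declared once consecutive matrices differ by < threshold."""
    if len(history) < 2:
        return False
    return matrix_delta(history[-2], history[-1]) < threshold
```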

## Comparison to Other Methods

| Method | Strengths | Weaknesses |
|--------|-----------|------------|
| Standard LLM | Fast, single response | Limited reflection |
| Chain of Thought | Shows reasoning steps | Linear thought process |
| CoRT | Recursive improvement | Single perspective |
| NECoRT | Multi-agent equilibrium, stability, handles divergent ideas | More compute-intensive |

## Future Directions

- **Mixed Strategy Equilibria**: Allow probabilistic combinations of responses
- **Evolutionary Dynamics**: Implement replicator dynamics for response evolution
- **Coalition Formation**: Allow agent groups to form voting blocs
- **Subgame Perfection**: Extend to multi-stage reasoning games
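The replicator-dynamics direction can be sketched in a few lines: each strategy's probability share grows in proportion to how far its fitness exceeds the population average. This is a standard discrete-time approximation for illustration, not existing NECoRT code:

```python
def replicator_step(shares, fitness, dt=0.1):
    """One discrete replicator-dynamics step.

    `shares` is a probability vector over strategies; strategies fitter than
    the population average gain share, the rest lose it.
    """
    avg = sum(x * f for x, f in zip(shares, fitness))
    updated = [x + dt * x * (f - avg) for x, f in zip(shares, fitness)]
    total = sum(updated)  # renormalise so shares still sum to 1
    return [x / total for x in updated]
```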

## Contributing

Contributions are welcome! Areas particularly in need of improvement:
- Optimization of Nash Equilibrium search algorithms
- UI improvements for visualizing agent interactions
- Integration with more LLM providers

## License

MIT License - See LICENSE file for details

---

*"Let your thoughts argue, evolve, and stabilize."*