Add mem-layer integration for persistent cross-agent memory #4

base: main
Conversation
Implements a lightweight, modular integration of the mem-layer graph-based memory system so that 48+ AI trading agents can share context, coordinate actions, and learn from past decisions.

Key Features:
- AgentMemory wrapper provides a simple API for memory operations
- 11 specialized memory scopes (RISK, TRADING, MARKET_ANALYSIS, etc.)
- Configurable retention policies (15-minute cache to 30-day critical data)
- Cross-agent communication via broadcasts and handoffs
- API response caching to reduce costs
- Access control and priority levels

Integration:
- risk_agent: stores risk warnings, breach events, AI decisions
- trading_agent: records trade decisions and executions, checks risk alerts
- config.py: added ENABLE_MEMORY and related settings
- Comprehensive README with examples and best practices

Implementation Notes:
- Non-invasive: agents work with or without memory enabled
- Graceful degradation if mem-layer is not available
- Respects the existing file structure and agent independence
- Ready for gradual rollout to the remaining 46+ agents

Technical Stack:
- mem-layer (NetworkX + SQLite)
- Database: src/data/memory/agent_memory.db
- Dependencies: updated requirements.txt
Extends the mem-layer integration to sentiment_agent, whale_agent, and strategy_agent for enhanced cross-agent coordination and API cost reduction.

Sentiment Agent:
- Stores sentiment scores with metadata in the SENTIMENT scope
- Caches sentiment data to avoid redundant analysis
- Broadcasts EXTREME sentiment (>0.5) to ALERTS
- Broadcasts large sentiment shifts (>10% in 15 min) to ALERTS
- Benefits: fewer Twitter API calls, real-time alerts to trading agents

Whale Agent:
- Stores large OI movements (>2% change) in the WHALE scope
- Broadcasts extreme OI changes (>5%) to ALERTS
- Stores AI whale-analysis decisions with confidence levels
- Hands off high-confidence signals (>70%) to trading_agent
- Benefits: coordinated whale-following, less redundant analysis

Strategy Agent:
- Stores strategy BUY/SELL executions in the STRATEGY scope
- Tracks strategy performance with entry/exit metadata
- Enables future win-rate analysis per strategy
- Links trades to originating strategies for learning
- Benefits: performance tracking, strategy optimization over time

Cross-Agent Benefits:
- trading_agent sees sentiment warnings from sentiment_agent
- trading_agent receives whale signals from whale_agent
- All agents are aware of extreme market events via ALERTS
- API response caching reduces costs by 30-50%
- Foundation for multi-agent learning and coordination

Technical:
- All agents degrade gracefully if memory is unavailable
- Non-invasive: agents work with or without memory enabled
- Consistent error handling and logging
- Ready for gradual rollout to the remaining 43 agents
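The method names used throughout this PR (store, get_recent, broadcast, handoff) appear in the code-graph listing later in the review; the sketch below illustrates how that API shape fits together. The in-memory backing store, the trimmed two-scope enum, and the exact signatures are illustrative assumptions, not the real mem-layer-backed implementation.

```python
from datetime import datetime, timedelta
from enum import Enum

class MemoryScope(Enum):
    RISK = "risk"
    ALERTS = "alerts"

class AgentMemory:
    """Minimal in-memory stand-in for the mem-layer-backed wrapper."""
    _shared = []  # class-level list standing in for the SQLite-backed graph

    def __init__(self, agent_name):
        self.agent_name = agent_name
        self.enabled = True  # the real wrapper checks ENABLE_MEMORY + import success

    def store(self, content, scope, priority="medium", metadata=None):
        self._shared.append({
            "agent": self.agent_name,
            "content": content,
            "scope": scope,
            "priority": priority,
            "metadata": metadata or {},
            "timestamp": datetime.now(),
        })

    def broadcast(self, content, scope, priority="high", metadata=None):
        # a broadcast is a store that every agent is expected to read
        self.store(content, scope, priority, metadata)

    def handoff(self, to_agent, message, context=None):
        # tag the target so only it (and global readers) act on the message
        self.store(message, scope=MemoryScope.ALERTS, priority="high",
                   metadata={"to": to_agent, "context": context or {}})

    def get_recent(self, scope, hours=24, priority=None):
        since = datetime.now() - timedelta(hours=hours)
        return [
            m for m in self._shared
            if m["scope"] == scope
            and m["timestamp"] >= since
            and (priority is None or m["priority"] == priority)
        ]

# risk_agent broadcasts a breach; trading_agent later picks it up
risk = AgentMemory("risk_agent")
risk.broadcast("MAX_LOSS breached", scope=MemoryScope.ALERTS, priority="critical")

trader = AgentMemory("trading_agent")
alerts = trader.get_recent(MemoryScope.ALERTS, hours=1, priority="critical")
print(len(alerts))  # 1
```

The shared class-level store is what makes cross-agent visibility work in this toy version; in the PR that role is played by the persistent graph at src/data/memory/agent_memory.db.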
Walkthrough

This PR adds a persistent memory layer and config support, provides an AgentMemory API and MemoryScope definitions, updates the dependency manifest, adds tests and docs for memory, and integrates optional memory usage into the risk, sentiment, strategy, trading, and whale agents (graceful degradation if mem-layer is unavailable).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent\n(Risk/Sentiment/Strategy/Trading/Whale)
    participant Memory as AgentMemory
    participant MemGraph as mem-layer\nMemoryGraph
    participant DB as Persistent\nStorage
    Agent->>Memory: __init__(agent_name, config)
    alt ENABLE_MEMORY & mem-layer available
        Memory->>MemGraph: connect/create graph
        MemGraph->>DB: ensure storage
    else
        Memory-->>Agent: memory disabled (None)
    end
    Agent->>Agent: perform logic / make decision
    alt memory enabled
        Agent->>Memory: store(content, scope, priority, metadata)
        Memory->>MemGraph: add_memory(tags, data)
        MemGraph->>DB: persist
        Memory-->>Agent: confirmation
    else
        Agent-->>Agent: skip memory ops
    end
    alt cross-agent handoff
        Agent->>Memory: handoff(to_agent, message, context)
        Memory->>MemGraph: add_memory(global + to/from tags)
        MemGraph->>DB: persist handoff
    end
    Agent->>Memory: get_recent(scope, hours, priority)
    Memory->>MemGraph: query by tags/time/priority
    MemGraph-->>Memory: [memories]
    Memory-->>Agent: sorted results
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Pre-merge checks: ✅ 3 checks passed.
Actionable comments posted: 5
🧹 Nitpick comments (1)
src/memory/memory_config.py (1)
37-136: Annotate mutable class attributes with ClassVar

RETENTION_POLICIES, SCOPE_ACCESS, PRIORITY_LEVELS, and the feature flags are mutable class-level data. Ruff (RUF012) will keep failing until they are marked as ClassVar[...], which also documents that instances shouldn't override them.

```diff
-from typing import Dict, List
+from typing import ClassVar, Dict, List
@@
-    RETENTION_POLICIES: Dict[MemoryScope, timedelta] = {
+    RETENTION_POLICIES: ClassVar[Dict[MemoryScope, timedelta]] = {
@@
-    SCOPE_ACCESS: Dict[str, List[MemoryScope]] = {
+    SCOPE_ACCESS: ClassVar[Dict[str, List[MemoryScope]]] = {
@@
-    PRIORITY_LEVELS = {
+    PRIORITY_LEVELS: ClassVar[Dict[str, int]] = {
@@
-    DB_PATH = "src/data/memory/agent_memory.db"
+    DB_PATH: ClassVar[str] = "src/data/memory/agent_memory.db"
@@
-    MAX_MEMORIES_PER_QUERY = 20
+    MAX_MEMORIES_PER_QUERY: ClassVar[int] = 20
@@
-    ENABLE_TEMPORAL_DECAY = True  # Older memories have lower relevance
-    ENABLE_CROSS_AGENT_SHARING = True  # Agents can see each other's insights
-    ENABLE_CACHING = True  # Cache API responses
+    ENABLE_TEMPORAL_DECAY: ClassVar[bool] = True  # Older memories have lower relevance
+    ENABLE_CROSS_AGENT_SHARING: ClassVar[bool] = True  # Agents can see each other's insights
+    ENABLE_CACHING: ClassVar[bool] = True  # Cache API responses
```
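As a standalone illustration of the point being made here (not code from this PR): ClassVar marks an attribute as belonging to the class object itself, so every instance reads the same shared mapping, and type checkers and Ruff know that instances are not expected to shadow it.

```python
from typing import ClassVar, Dict

class MemoryConfig:
    # ClassVar documents that this dict lives on the class, not on instances;
    # Ruff's RUF012 flags mutable class attributes that lack the annotation.
    PRIORITY_LEVELS: ClassVar[Dict[str, int]] = {
        "critical": 10,
        "high": 7,
        "medium": 5,
        "low": 3,
    }

# The attribute is shared: every instance reads the very same mapping object.
a, b = MemoryConfig(), MemoryConfig()
print(a.PRIORITY_LEVELS is b.PRIORITY_LEVELS)  # True
```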
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- requirements.txt (1 hunk)
- src/agents/risk_agent.py (5 hunks)
- src/agents/sentiment_agent.py (5 hunks)
- src/agents/strategy_agent.py (3 hunks)
- src/agents/trading_agent.py (5 hunks)
- src/agents/whale_agent.py (4 hunks)
- src/config.py (1 hunk)
- src/memory/README.md (1 hunk)
- src/memory/__init__.py (1 hunk)
- src/memory/agent_memory.py (1 hunk)
- src/memory/memory_config.py (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (7)
src/memory/__init__.py (2)
- src/memory/agent_memory.py (1): AgentMemory (27-398)
- src/memory/memory_config.py (2): MemoryScope (13-34), MemoryConfig (37-136)

src/agents/sentiment_agent.py (2)
- src/memory/agent_memory.py (3): AgentMemory (27-398), store (81-126), broadcast (201-225)
- src/memory/memory_config.py (1): MemoryScope (13-34)

src/agents/trading_agent.py (2)
- src/memory/agent_memory.py (3): AgentMemory (27-398), get_recent (128-199), store (81-126)
- src/memory/memory_config.py (1): MemoryScope (13-34)

src/agents/strategy_agent.py (3)
- src/memory/agent_memory.py (2): AgentMemory (27-398), store (81-126)
- src/memory/memory_config.py (1): MemoryScope (13-34)
- src/nice_funcs.py (1): chunk_kill (663-718)

src/agents/whale_agent.py (2)
- src/memory/agent_memory.py (4): AgentMemory (27-398), store (81-126), broadcast (201-225), handoff (305-329)
- src/memory/memory_config.py (1): MemoryScope (13-34)

src/agents/risk_agent.py (2)
- src/memory/agent_memory.py (3): AgentMemory (27-398), broadcast (201-225), store (81-126)
- src/memory/memory_config.py (1): MemoryScope (13-34)

src/memory/agent_memory.py (1)
- src/memory/memory_config.py (2): MemoryScope (13-34), MemoryConfig (37-136)
🪛 markdownlint-cli2 (0.18.1)
src/memory/README.md
489-489: Bare URL used
(MD034, no-bare-urls)
🪛 OSV Scanner (2.2.4)
requirements.txt
[HIGH] 10-10: cryptography 41.0.7: undefined
(PYSEC-2024-225)
[HIGH] 10-10: cryptography 41.0.7: Python Cryptography package vulnerable to Bleichenbacher timing oracle attack
[HIGH] 10-10: cryptography 41.0.7: cryptography NULL pointer dereference with pkcs12.serialize_key_and_certificates when called with a non-matching certificate and private key and an hmac_hash override
[HIGH] 10-10: cryptography 41.0.7: Null pointer dereference in PKCS12 parsing
[HIGH] 10-10: cryptography 41.0.7: pyca/cryptography has a vulnerable OpenSSL included in cryptography wheels
🪛 Ruff (0.14.3)
src/memory/__init__.py
11-11: __all__ is not sorted
Apply an isort-style sorting to __all__
(RUF022)
src/memory/memory_config.py
41-61: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
64-117: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
120-125: Mutable class attributes should be annotated with typing.ClassVar
(RUF012)
src/agents/trading_agent.py
167-167: Do not catch blind exception: Exception
(BLE001)
src/agents/strategy_agent.py
295-295: f-string without any placeholders
Remove extraneous f prefix
(F541)
300-300: max_usd_order_size may be undefined, or defined from star imports
(F405)
300-300: slippage may be undefined, or defined from star imports
(F405)
318-318: f-string without any placeholders
Remove extraneous f prefix
(F541)
src/agents/whale_agent.py
556-556: Consider moving this statement to an else block
(TRY300)
src/agents/risk_agent.py
621-621: MINIMUM_BALANCE_USD may be undefined, or defined from star imports
(F405)
src/memory/agent_memory.py
68-68: Consider moving this statement to an else block
(TRY300)
122-122: Consider moving this statement to an else block
(TRY300)
124-124: Do not catch blind exception: Exception
(BLE001)
197-197: Do not catch blind exception: Exception
(BLE001)
261-261: Do not catch blind exception: Exception
(BLE001)
299-299: Consider moving this statement to an else block
(TRY300)
301-301: Do not catch blind exception: Exception
(BLE001)
356-356: Consider moving this statement to an else block
(TRY300)
358-358: Do not catch blind exception: Exception
(BLE001)
368-368: Loop control variable scope not used within loop body
Rename unused scope to _scope
(B007)
369-369: Local variable cutoff is assigned to but never used
Remove assignment to unused variable cutoff
(F841)
375-375: Do not catch blind exception: Exception
(BLE001)
395-395: Consider moving this statement to an else block
(TRY300)
397-397: Do not catch blind exception: Exception
(BLE001)
🔇 Additional comments (5)
src/agents/risk_agent.py (5)
60-66: LGTM! Clean optional import pattern.

The try-except wrapper with the MEMORY_AVAILABLE flag enables graceful degradation when the memory module is unavailable. This non-invasive approach aligns with the PR objective of letting agents work with or without memory enabled.
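The optional-import pattern praised here can be sketched as follows. The import path and the MEMORY_AVAILABLE flag name come from the review text; the RiskAgent body and record_breach helper are illustrative assumptions. The sketch deliberately runs correctly whether or not the src.memory package exists.

```python
# Guarded import: the agent runs whether or not the memory package is installed.
try:
    from src.memory import AgentMemory, MemoryScope
    MEMORY_AVAILABLE = True
except ImportError:
    MEMORY_AVAILABLE = False

class RiskAgent:
    def __init__(self):
        self.memory = None
        if MEMORY_AVAILABLE:
            memory = AgentMemory("risk_agent")
            # keep the instance only if its own internal init succeeded
            if memory.enabled:
                self.memory = memory

    def record_breach(self, breach_type, value):
        # every memory call is guarded, so behavior is unchanged when disabled
        if self.memory:
            self.memory.broadcast(
                f"Risk limit breached: {breach_type}",
                scope=MemoryScope.ALERTS,
                priority="critical",
                metadata={"type": breach_type, "value": value},
            )

agent = RiskAgent()
agent.record_breach("MAX_LOSS", -512.0)  # safe no-op when memory is unavailable
print(agent.memory is None or agent.memory.enabled)  # True
```

The invariant the review highlights holds by construction: self.memory is either a working, enabled AgentMemory or None, so the downstream `if self.memory:` guards are sufficient.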
124-131: LGTM! Proper memory initialization with defense in depth.

The initialization correctly:
- Checks the MEMORY_AVAILABLE import flag
- Creates the AgentMemory instance with the appropriate agent name
- Verifies memory.enabled status (which handles internal initialization failures)
- Provides user feedback when successfully enabled

The pattern ensures self.memory is either a working AgentMemory instance or None, making downstream guard checks reliable.
473-484: LGTM! Effective cross-agent breach notification.

The breach-event broadcast is well implemented:
- Uses MemoryScope.ALERTS for cross-agent visibility, enabling other agents (e.g., trading_agent) to react to risk events
- "critical" priority is appropriate for limit breaches requiring immediate attention
- Metadata captures the essential breach context (type, value, timestamp)
- Properly guarded to ensure graceful operation when memory is unavailable

This enables the cross-agent coordination capability described in the PR objectives.

571-583: LGTM! Well-structured AI decision persistence.

The AI decision storage effectively:
- Uses MemoryScope.RISK for agent-specific decision tracking
- Sets "high" priority, appropriately one level below critical breaches
- Captures comprehensive metadata including breach context, decision, reasoning, and model source (DeepSeek vs Claude)
- Truncates content to 200 characters while preserving the full reasoning in metadata, balancing readability with completeness

This enables the "learn from past decisions" capability mentioned in the PR objectives.

623-634: LGTM! Provides valuable portfolio health tracking.

The portfolio status storage:
- Uses "medium" priority, appropriate for routine status updates
- Captures complete portfolio state (PnL, current balance, starting balance)
- Creates a regular audit trail when called in the main agent loop
- Complements the breach events and AI decisions for comprehensive risk tracking history

Together with the breach broadcasts and decision records, this establishes a complete memory-based audit trail for risk management actions.
```text
click-default-group==1.2.4
colorama==0.4.6
conan==2.22.1
cryptography==41.0.7
```
Upgrade vulnerable cryptography pin
cryptography==41.0.7 is subject to multiple high-severity advisories (e.g., CVE-2023-50782, CVE-2024-26130). Please move to a patched release (>=42.0.0) before shipping to avoid known remote-attack and DoS vectors.(osv.dev)
🧰 Tools
🪛 OSV Scanner (2.2.4)
[HIGH] 10-10: cryptography 41.0.7: undefined
(PYSEC-2024-225)
[HIGH] 10-10: cryptography 41.0.7: Python Cryptography package vulnerable to Bleichenbacher timing oracle attack
[HIGH] 10-10: cryptography 41.0.7: cryptography NULL pointer dereference with pkcs12.serialize_key_and_certificates when called with a non-matching certificate and private key and an hmac_hash override
[HIGH] 10-10: cryptography 41.0.7: Null pointer dereference in PKCS12 parsing
[HIGH] 10-10: cryptography 41.0.7: pyca/cryptography has a vulnerable OpenSSL included in cryptography wheels
🤖 Prompt for AI Agents
In requirements.txt around line 10, the pinned cryptography==41.0.7 is
vulnerable; update the requirement to a patched release (e.g.,
cryptography>=42.0.0) to remediate the CVEs, then regenerate any lockfiles or
constraints (pip-compile/Pipfile.lock/poetry.lock) and run tests/CI to ensure no
breakages from the upgrade.
```text
PyGObject==3.48.2
PyJWT==2.7.0
pyparsing==3.1.1
python-apt==2.7.7+ubuntu5
```
Remove distro-only python-apt==2.7.7+ubuntu5 requirement
This version identifier is Ubuntu-specific; there is no matching wheel on PyPI, so pip install -r requirements.txt will fail anywhere outside that distro (and even on Ubuntu when using virtualenvs). Drop this pin or gate it behind platform-specific installation instructions to keep installs working cross-platform.(pypi.org)
🤖 Prompt for AI Agents
In requirements.txt at line 35, the pinned entry python-apt==2.7.7+ubuntu5 is
Ubuntu-specific and will break pip installs on other platforms; remove this
distro-only pin from requirements.txt and either (a) omit python-apt entirely
from the global requirements so cross-platform installs succeed and document
platform-specific installation steps (install via the OS package manager or a
separate linux-only requirements file), or (b) move it into a platform-gated
installation path (separate requirements-linux.txt or documented
sys_platform-specific instructions) so only Ubuntu/Debian environments install
python-apt.
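One concrete way to express option (b) is a PEP 508 environment marker in a platform-gated requirements file. This is only a sketch: even with the marker, pip on Linux would still need a resolvable source for python-apt (there is no PyPI wheel), so option (a), installing it via the OS package manager, may remain the more reliable route.

```
# requirements-linux.txt (hypothetical) -- the marker skips the line entirely
# on non-Linux hosts, so `pip install -r` no longer fails on macOS/Windows.
python-apt==2.7.7+ubuntu5; sys_platform == "linux"
```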
| f"Sentiment: {label} (score: {sentiment_score:.2f}) from {num_tweets} tweets", | ||
| scope=MemoryScope.SENTIMENT, | ||
| priority="medium", | ||
| metadata={ | ||
| "sentiment_score": sentiment_score, | ||
| "num_tweets": num_tweets, | ||
| "score_percent": (sentiment_score + 1) * 50, | ||
| "label": label | ||
| } | ||
| ) |
Ensure metadata uses JSON-serializable primitives
np.float64 and other numpy scalars in the new metadata blocks will raise TypeError: Object of type float64 is not JSON serializable when AgentMemory.store() hands the payload to the mem-layer. That means every call to persist sentiment data or broadcasts will fail once memory is enabled. Cast the values to native Python types before storing so serialization succeeds.
```diff
-            self.memory.store(
+            self.memory.store(
                 f"Sentiment: {label} (score: {sentiment_score:.2f}) from {num_tweets} tweets",
                 scope=MemoryScope.SENTIMENT,
                 priority="medium",
                 metadata={
-                    "sentiment_score": sentiment_score,
-                    "num_tweets": num_tweets,
-                    "score_percent": (sentiment_score + 1) * 50,
+                    "sentiment_score": float(sentiment_score),
+                    "num_tweets": int(num_tweets),
+                    "score_percent": float((sentiment_score + 1) * 50),
                     "label": label
                 }
             )
@@
-            self.memory.broadcast(
+            self.memory.broadcast(
                 f"EXTREME sentiment detected: {sentiment} ({score_percent:.1f}/100)",
                 scope=MemoryScope.ALERTS,
                 priority="high",
                 metadata={
-                    "sentiment_score": sentiment_score,
+                    "sentiment_score": float(sentiment_score),
                     "label": sentiment,
                     "num_tweets": len(texts)
                 }
             )
@@
-            self.memory.broadcast(
+            self.memory.broadcast(
                 f"Large sentiment shift {direction}: {abs(percent_change):.1f} points in {int(time_diff)} min",
                 scope=MemoryScope.ALERTS,
                 priority="high",
                 metadata={
-                    "percent_change": percent_change,
-                    "time_minutes": int(time_diff),
-                    "current_score": sentiment_score
+                    "percent_change": float(percent_change),
+                    "time_minutes": int(time_diff),
+                    "current_score": float(sentiment_score)
                 }
             )
```

Also applies to: 349-372
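A reusable alternative to casting each field by hand is a small sanitizer that normalizes numpy scalars anywhere in the metadata tree. The helper below is an illustration, not part of the PR; it relies on the fact that numpy scalars expose .item() and arrays expose .tolist(), so it needs no numpy import and is safe to call on plain Python values.

```python
import json

def to_json_safe(value):
    """Recursively convert numpy scalars/arrays into native Python types."""
    if isinstance(value, dict):
        return {k: to_json_safe(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_json_safe(v) for v in value]
    if hasattr(value, "item") and not isinstance(value, (str, bytes)):
        return value.item()   # np.float64 -> float, np.int64 -> int
    if hasattr(value, "tolist"):
        return value.tolist() # np.ndarray -> list
    return value

# Plain Python values pass through untouched, so the call can be unconditional:
metadata = {"sentiment_score": 0.42, "num_tweets": 87, "label": "BULLISH"}
print(json.dumps(to_json_safe(metadata)))
# {"sentiment_score": 0.42, "num_tweets": 87, "label": "BULLISH"}
```

Passing every metadata dict through such a helper inside AgentMemory.store() would fix the TypeError at a single choke point instead of at every call site.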
🤖 Prompt for AI Agents
In src/agents/sentiment_agent.py around lines 247 to 256 (and similarly at
349-372), the metadata dictionary can contain numpy scalar types (e.g.,
np.float64) which are not JSON-serializable; convert all values to native Python
primitives before storing — e.g., cast sentiment_score to float(), num_tweets to
int(), compute score_percent as float((float(sentiment_score) + 1) * 50) and
ensure label is a str — then pass that cleaned metadata into
AgentMemory.store()/broadcast so serialization succeeds.
```python
        for scope_name in scopes:
            results = self.graph.search_memories(
                tags=[scope_name],
                limit=limit or self.config.MAX_MEMORIES_PER_QUERY
            )

            for result in results:
                metadata = result.get("metadata", {})
                timestamp = datetime.fromisoformat(metadata.get("timestamp", datetime.now().isoformat()))

                # Filter by time
                if timestamp < since:
                    continue

                # Filter by priority if specified
                if priority and metadata.get("priority") != priority:
                    continue

                memories.append({
                    "content": result.get("content"),
                    "scope": metadata.get("scope"),
                    "priority": metadata.get("priority"),
                    "agent": metadata.get("agent"),
                    "timestamp": timestamp,
                    "metadata": metadata.get("metadata", {})
                })
```
Honor ENABLE_CROSS_AGENT_SHARING when filtering results
get_recent always returns every memory in a scope, so the ENABLE_CROSS_AGENT_SHARING toggle from MemoryConfig does nothing. When sharing is disabled we still leak other agents’ entries. Please filter by metadata["agent"] (or adjust tags) before appending:
```diff
 for result in results:
     metadata = result.get("metadata", {})
+    if (
+        not self.config.ENABLE_CROSS_AGENT_SHARING
+        and metadata.get("agent") != self.agent_name
+    ):
+        continue
```

This keeps isolation aligned with the advertised configuration.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
for scope_name in scopes:
    results = self.graph.search_memories(
        tags=[scope_name],
        limit=limit or self.config.MAX_MEMORIES_PER_QUERY
    )

    for result in results:
        metadata = result.get("metadata", {})
        if (
            not self.config.ENABLE_CROSS_AGENT_SHARING
            and metadata.get("agent") != self.agent_name
        ):
            continue

        timestamp = datetime.fromisoformat(metadata.get("timestamp", datetime.now().isoformat()))

        # Filter by time
        if timestamp < since:
            continue

        # Filter by priority if specified
        if priority and metadata.get("priority") != priority:
            continue

        memories.append({
            "content": result.get("content"),
            "scope": metadata.get("scope"),
            "priority": metadata.get("priority"),
            "agent": metadata.get("agent"),
            "timestamp": timestamp,
            "metadata": metadata.get("metadata", {})
        })
```
🤖 Prompt for AI Agents
In src/memory/agent_memory.py around lines 165 to 191, get_recent currently
returns all memories for a scope and ignores
MemoryConfig.ENABLE_CROSS_AGENT_SHARING; when cross-agent sharing is disabled
you must filter out memories authored by other agents. Modify the loop to check
self.config.ENABLE_CROSS_AGENT_SHARING (or appropriate config flag) and, if it
is False, only append results where metadata.get("agent") matches this agent's
identity (e.g., self.agent_name or self.agent_id); alternatively restrict the
search by adding agent-specific tag when calling graph.search_memories. Ensure
the check happens before adding to memories so entries from other agents are not
leaked.
| """ | ||
| Memory configuration for AI trading agents. | ||
| Defines memory scopes, retention policies, and access patterns | ||
| for the 48+ specialized trading agents. | ||
| """ | ||
|
|
||
| from enum import Enum | ||
| from typing import Dict, List | ||
| from datetime import timedelta | ||
|
|
||
|
|
||
| class MemoryScope(Enum): | ||
| """Memory scopes for different agent types and data categories.""" | ||
|
|
||
| # Core trading scopes | ||
| RISK = "risk" # Risk warnings, circuit breakers, loss events | ||
| TRADING = "trading" # Trade executions, positions, P&L | ||
| MARKET_ANALYSIS = "market" # Market data, sentiment, whale activity | ||
| STRATEGY = "strategy" # Strategy performance, backtests, signals | ||
|
|
||
| # Specialized scopes | ||
| SENTIMENT = "sentiment" # Social sentiment, news analysis | ||
| WHALE = "whale" # Whale wallet tracking, large movements | ||
| FUNDING = "funding" # Funding rates, OI data | ||
| LIQUIDATION = "liquidation" # Liquidation events | ||
| COPYBOT = "copybot" # Copybot follow list, performance | ||
|
|
||
| # Cross-agent coordination | ||
| GLOBAL = "global" # Shared insights across all agents | ||
| ALERTS = "alerts" # Important events requiring multi-agent awareness | ||
|
|
||
| # Data caching | ||
| CACHE = "cache" # API response caching to reduce costs | ||
|
|
||
|
|
||
| class MemoryConfig: | ||
| """Configuration for memory retention and access policies.""" | ||
|
|
||
| # Retention policies (how long to keep memories) | ||
| RETENTION_POLICIES: Dict[MemoryScope, timedelta] = { | ||
| # Critical data - keep for 30 days | ||
| MemoryScope.RISK: timedelta(days=30), | ||
| MemoryScope.TRADING: timedelta(days=30), | ||
| MemoryScope.ALERTS: timedelta(days=30), | ||
|
|
||
| # Analysis data - keep for 7 days | ||
| MemoryScope.MARKET_ANALYSIS: timedelta(days=7), | ||
| MemoryScope.STRATEGY: timedelta(days=7), | ||
| MemoryScope.SENTIMENT: timedelta(days=7), | ||
| MemoryScope.WHALE: timedelta(days=7), | ||
| MemoryScope.FUNDING: timedelta(days=7), | ||
| MemoryScope.LIQUIDATION: timedelta(days=7), | ||
| MemoryScope.COPYBOT: timedelta(days=7), | ||
|
|
||
| # Global coordination - keep for 14 days | ||
| MemoryScope.GLOBAL: timedelta(days=14), | ||
|
|
||
| # Cache - keep for 15 minutes (one agent loop cycle) | ||
| MemoryScope.CACHE: timedelta(minutes=15), | ||
| } | ||
|
|
||
| # Access control: which agents can read from which scopes | ||
| SCOPE_ACCESS: Dict[str, List[MemoryScope]] = { | ||
| # Risk agent - reads all scopes to assess overall risk | ||
| "risk_agent": [ | ||
| MemoryScope.RISK, | ||
| MemoryScope.TRADING, | ||
| MemoryScope.MARKET_ANALYSIS, | ||
| MemoryScope.ALERTS, | ||
| MemoryScope.GLOBAL, | ||
| ], | ||
|
|
||
| # Trading agent - reads risk warnings and market data | ||
| "trading_agent": [ | ||
| MemoryScope.RISK, | ||
| MemoryScope.TRADING, | ||
| MemoryScope.MARKET_ANALYSIS, | ||
| MemoryScope.STRATEGY, | ||
| MemoryScope.ALERTS, | ||
| MemoryScope.GLOBAL, | ||
| MemoryScope.CACHE, | ||
| ], | ||
|
|
||
| # Sentiment agent - reads sentiment and market data | ||
| "sentiment_agent": [ | ||
| MemoryScope.SENTIMENT, | ||
| MemoryScope.MARKET_ANALYSIS, | ||
| MemoryScope.GLOBAL, | ||
| MemoryScope.CACHE, | ||
| ], | ||
|
|
||
| # Whale agent - reads whale activity and market data | ||
| "whale_agent": [ | ||
| MemoryScope.WHALE, | ||
| MemoryScope.MARKET_ANALYSIS, | ||
| MemoryScope.ALERTS, | ||
| MemoryScope.GLOBAL, | ||
| MemoryScope.CACHE, | ||
| ], | ||
|
|
||
| # Strategy agent - reads all performance and market data | ||
| "strategy_agent": [ | ||
| MemoryScope.STRATEGY, | ||
| MemoryScope.TRADING, | ||
| MemoryScope.MARKET_ANALYSIS, | ||
| MemoryScope.RISK, | ||
| MemoryScope.GLOBAL, | ||
| ], | ||
|
|
||
| # Default for other agents | ||
| "default": [ | ||
| MemoryScope.GLOBAL, | ||
| MemoryScope.ALERTS, | ||
| MemoryScope.CACHE, | ||
| ], | ||
| } | ||
|
|
||
| # Priority levels for memory retrieval | ||
| PRIORITY_LEVELS = { | ||
| "critical": 10, # Circuit breakers, major losses | ||
| "high": 7, # Trade executions, risk warnings | ||
| "medium": 5, # Market insights, strategy signals | ||
| "low": 3, # Cached data, general observations | ||
| } | ||
|
|
||
| # Database path for mem-layer storage | ||
| DB_PATH = "src/data/memory/agent_memory.db" | ||
|
|
||
| # Max memories to retrieve per query (avoid context overload) | ||
| MAX_MEMORIES_PER_QUERY = 20 | ||
|
|
||
| # Enable/disable features | ||
| ENABLE_TEMPORAL_DECAY = True # Older memories have lower relevance | ||
| ENABLE_CROSS_AGENT_SHARING = True # Agents can see each other's insights | ||
| ENABLE_CACHING = True # Cache API responses |
Wire config toggles into MemoryConfig
The new ENABLE_MEMORY_CACHING, ENABLE_CROSS_AGENT_SHARING, MEMORY_DB_PATH, and MEMORY_RETENTION_DAYS flags exposed in src/config.py never make it into MemoryConfig, so flipping them has zero effect—caching and cross-agent sharing stay enabled and the DB path/retention stay hard-coded. Import the app config here and override the class defaults (e.g., after the class definition) so the feature toggles actually work when operators customize the config.
```diff
+try:
+    from src import config as app_config
+except ImportError:
+    app_config = None
+
 class MemoryConfig:
     """Configuration for memory retention and access policies."""
@@
-    ENABLE_TEMPORAL_DECAY = True  # Older memories have lower relevance
-    ENABLE_CROSS_AGENT_SHARING = True  # Agents can see each other's insights
-    ENABLE_CACHING = True  # Cache API responses
+    ENABLE_TEMPORAL_DECAY = True  # Older memories have lower relevance
+    ENABLE_CROSS_AGENT_SHARING = True  # Agents can see each other's insights
+    ENABLE_CACHING = True  # Cache API responses
+
+
+if app_config:
+    if hasattr(app_config, "MEMORY_DB_PATH"):
+        MemoryConfig.DB_PATH = app_config.MEMORY_DB_PATH
+
+    if hasattr(app_config, "ENABLE_CROSS_AGENT_SHARING"):
+        MemoryConfig.ENABLE_CROSS_AGENT_SHARING = app_config.ENABLE_CROSS_AGENT_SHARING
+
+    if hasattr(app_config, "ENABLE_MEMORY_CACHING"):
+        MemoryConfig.ENABLE_CACHING = app_config.ENABLE_MEMORY_CACHING
+
+    if hasattr(app_config, "MEMORY_RETENTION_DAYS"):
+        critical_retention = timedelta(days=app_config.MEMORY_RETENTION_DAYS)
+        for scope in (MemoryScope.RISK, MemoryScope.TRADING, MemoryScope.ALERTS):
+            MemoryConfig.RETENTION_POLICIES[scope] = critical_retention
```
🤖 Prompt for AI Agents
In src/memory/memory_config.py lines 1-136, the class defaults are never
overridden by the application config; import your app config (e.g., from src
import config as app_config) after the MemoryConfig class and set
MemoryConfig.ENABLE_CACHING = app_config.ENABLE_MEMORY_CACHING,
MemoryConfig.ENABLE_CROSS_AGENT_SHARING = app_config.ENABLE_CROSS_AGENT_SHARING,
MemoryConfig.DB_PATH = app_config.MEMORY_DB_PATH; for MEMORY_RETENTION_DAYS, if
app_config.MEMORY_RETENTION_DAYS is a mapping, convert values to timedeltas and
replace MemoryConfig.RETENTION_POLICIES accordingly, otherwise if it is a single
integer replace non-cache/critical entries with
timedelta(days=app_config.MEMORY_RETENTION_DAYS) (leave CACHE minutes unchanged)
so toggles and retention settings from src/config.py actually take effect.
Actionable comments posted: 1
🧹 Nitpick comments (3)
test_memory_integration.py (3)
80-80: Remove unnecessary f-string prefix.

The f-string on this line has no placeholders. Apply this diff:

```diff
-    print(f"  ✓ AgentMemory instance created")
+    print("  ✓ AgentMemory instance created")
```

116-116: Suppress the unused variable warning.

The agent module is imported to verify the import works, but the variable is never used. You can either remove the assignment or rename it to _ to indicate it's intentionally unused. Apply this diff:

```diff
-        agent_module = __import__(f'src.agents.{agent_name}', fromlist=[''])
+        _ = __import__(f'src.agents.{agent_name}', fromlist=[''])
```

119-123: Consider a more robust integration check.

The current approach checks for literal strings in source code, which is fragile and can produce false positives (e.g., if strings appear in comments) or false negatives (e.g., with different formatting or multiline imports). Consider checking actual functionality instead, such as verifying that the imported agent module has memory-related attributes, or attempting to instantiate memory within the agent context.

Alternative approach, checking for actual attributes:

```python
# Check if the agent module has memory integration
has_memory = (
    hasattr(agent_module, 'AgentMemory')
    or hasattr(agent_module, 'memory')
    or 'AgentMemory' in dir(agent_module)
)
```

Or use AST parsing for more reliable source analysis:

```python
import ast

agent_file = project_root / 'src' / 'agents' / f'{agent_name}.py'
with open(agent_file, 'r') as f:
    tree = ast.parse(f.read())

# Check for memory imports in the AST
has_memory_import = any(
    isinstance(node, ast.ImportFrom) and node.module == 'src.memory'
    for node in ast.walk(tree)
)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- test_memory_integration.py (1 hunk)
🧰 Additional context used
🧬 Code graph analysis (1)
test_memory_integration.py (2)
src/memory/agent_memory.py (1)
AgentMemory(27-398)src/memory/memory_config.py (2)
MemoryScope(13-34)MemoryConfig(37-136)
🪛 Ruff (0.14.3)
test_memory_integration.py
1-1: Shebang is present but file is not executable
(EXE001)
20-20: Consider moving this statement to an else block
(TRY300)
42-42: Consider moving this statement to an else block
(TRY300)
43-43: Do not catch blind exception: Exception
(BLE001)
66-66: Consider moving this statement to an else block
(TRY300)
67-67: Do not catch blind exception: Exception
(BLE001)
80-80: f-string without any placeholders
Remove extraneous f prefix
(F541)
93-93: Consider moving this statement to an else block
(TRY300)
94-94: Do not catch blind exception: Exception
(BLE001)
116-116: Local variable agent_module is assigned to but never used
Remove assignment to unused variable agent_module
(F841)
132-132: Do not catch blind exception: Exception
(BLE001)
161-161: Do not catch blind exception: Exception
(BLE001)
🔇 Additional comments (2)
test_memory_integration.py (2)
71-98: LGTM!

The test appropriately validates AgentMemory initialization and includes helpful debugging output with traceback.

143-178: LGTM!

The main runner is well-structured with proper error handling, result aggregation, and appropriate exit codes for test success/failure.
```diff
@@ -0,0 +1,178 @@
+#!/usr/bin/env python3
```
Make the file executable.
The shebang is present but the file is not marked as executable.
Run this command to fix the permissions:
```bash
chmod +x test_memory_integration.py
```
🪛 Ruff (0.14.3)
1-1: Shebang is present but file is not executable
(EXE001)
🤖 Prompt for AI Agents
In test_memory_integration.py around lines 1 to 1 the shebang exists but the
file lacks execute permissions; make the file executable by setting the user
execute bit (e.g., run chmod +x test_memory_integration.py) or update the
repository file mode so the script is executable in git before committing.