
Conversation

@d4mr (Member) commented Oct 11, 2025

Summary by CodeRabbit

  • New Features

    • Added Solana support: POST /solana/transaction to queue/send Versioned transactions with chain, commitment, priority fee, compute limits, idempotency, and webhook hooks.
    • New Solana execution pipeline: signer integration, RPC client cache, Redis-backed transaction storage & locking, worker with retries, resubmission, and webhook notifications.
  • Improvements

    • Richer Solana RPC error mapping to HTTP statuses.
    • OpenAPI and config now expose Solana RPC endpoints and worker sizing.
  • Chores

    • Consolidated dependencies under workspace and added Solana/AWS/serde-related libraries.

coderabbitai bot commented Oct 11, 2025

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 42.86%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title "add solana transaction support" succinctly summarizes the primary feature and matches the module, configuration, and execution-logic changes in this PR; it is concise and unambiguous for reviewers and future readers.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch pb/solana

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
executors/src/webhook/envelope.rs (1)

330-365: Stage override not reflected in headers/logs (uses default stage)

queue_webhook_envelope always uses Self::stage_name() for User-Agent and logs, which disagrees with envelopes queued via stage_override (e.g., queue_success_webhook_with_stage). Use envelope.stage_name to avoid confusion.

Apply this diff:

         headers: Some(
             [
                 ("Content-Type".to_string(), "application/json".to_string()),
                 (
                     "User-Agent".to_string(),
-                    format!("{}/{}", Self::executor_name(), Self::stage_name()),
+                    format!("{}/{}", Self::executor_name(), envelope.stage_name.as_str()),
                 ),
             ]
             .into_iter()
             .collect(),
         ),
@@
         tracing::info!(
             transaction_id = job.job.transaction_id(),
             executor = Self::executor_name(),
-            stage = Self::stage_name(),
+            stage = %envelope.stage_name,
             event = ?envelope.event_type,
             notification_id = envelope.notification_id,
             "Queued webhook notification"
         );
🧹 Nitpick comments (19)
twmq/src/job.rs (1)

187-193: Clarify time units and consider a newtype to prevent sec/ms confusion

  • Update docs to “Unix epoch seconds (UTC)” on created_at, processed_at, and finished_at.
  • Optional: introduce struct EpochSeconds(u64) and use EpochSeconds/Option<EpochSeconds> for these fields.
-    /// Timestamp in seconds
+    /// Unix epoch seconds (UTC)
     pub created_at: u64,
-    /// Timestamp in seconds
+    /// Unix epoch seconds (UTC)
     pub processed_at: Option<u64>,
-    /// Timestamp in seconds
+    /// Unix epoch seconds (UTC)
     pub finished_at: Option<u64>,
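
A minimal sketch of the optional newtype from the second bullet (assuming serde derive and chrono, both already workspace dependencies); #[serde(transparent)] keeps the wire format identical to the current bare u64:

use serde::{Deserialize, Serialize};

/// Unix epoch seconds (UTC).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
#[serde(transparent)]
pub struct EpochSeconds(pub u64);

impl EpochSeconds {
    /// Current time, clamped to zero for pre-epoch clocks.
    pub fn now() -> Self {
        Self(chrono::Utc::now().timestamp().max(0) as u64)
    }

    /// Explicit conversion point removes sec/ms ambiguity at call sites.
    pub fn as_millis(&self) -> u64 {
        self.0 * 1000
    }
}
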
core/src/signer.rs (1)

568-605: Verify credential support strategy for Solana.

The implementation currently only supports Vault-based signing for Solana transactions, returning ValidationError for IAW, AWS KMS, and PrivateKey credentials. While this is acceptable for an initial implementation, consider:

  1. Whether IAW support for Solana is planned (currently returns error at line 589)
  2. If AWS KMS support is feasible (Solana uses Ed25519, need to verify KMS support)
  3. Whether PrivateKey signing should be prioritized for development/testing scenarios

Would you like me to investigate whether AWS KMS supports Ed25519 signing for potential future implementation?

eip7702-core/Cargo.toml (1)

8-15: Confirm workspace features match previous pins (tokio, serde, tracing, etc.)

Switching to workspace = true is good, but ensure the workspace’s [workspace.dependencies] enables required features (e.g., tokio macros/rt-multi-thread, serde derive, tracing features) previously set here. Otherwise, you may see missing trait impls or runtime behavior changes.

thirdweb-core/Cargo.toml (1)

15-23: Reqwest/utoipa/tracing via workspace: verify feature flags

Good consolidation. Please verify the workspace enables the expected features:

  • reqwest: chosen TLS backend (prefer rustls), json, gzip/brotli if used.
  • utoipa: any extras (e.g., axum_extras, openapi_extensions) needed by routes.
  • tracing: log level filters or features as before.

Prevents subtle runtime/compile feature mismatches.
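
For illustration, the corresponding [workspace.dependencies] entries in the root Cargo.toml might look like this (versions and feature sets here are placeholders; they must match what each crate previously pinned):

[workspace.dependencies]
reqwest = { version = "0.12", default-features = false, features = ["rustls-tls", "json"] }
utoipa = { version = "5", features = ["macros", "axum_extras"] }
tracing = "0.1"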

executors/src/solana_executor/rpc_cache.rs (1)

31-37: Doc says TTL but none configured; also unused Duration import

Either configure TTL/TTI on the cache or update docs/imports. Suggest adding a conservative idle TTL.

Apply this diff to configure TTI and use the existing Duration import:

-    /// Create a new RPC cache with specified TTL and max capacity
+    /// Create a new RPC cache with max capacity and idle TTL
     pub fn new(urls: SolanaRpcUrls) -> Self {
-        let cache = Cache::new(3);
+        let cache = Cache::builder()
+            .max_capacity(3)
+            .time_to_idle(Duration::from_secs(600))
+            .build();
 
         Self { cache, urls }
     }

Based on learnings

executors/src/webhook/envelope.rs (2)

180-246: Duplicate queuing logic; prefer using the helper to avoid drift

queue_webhook_with_custom_payload re-implements envelope->payload->job building. Consider reusing queue_webhook_envelope after making it accept a stage argument or derive stage from the envelope for headers/logs (see previous comment). Reduces duplication and keeps behavior consistent.


163-163: Avoid unwrap on timestamp conversion

Using .try_into().unwrap() can panic in edge cases. Prefer safe conversion or store timestamps as i64 to match chrono’s API.

Example:

let ts_i64 = chrono::Utc::now().timestamp();
let timestamp = u64::try_from(ts_i64).unwrap_or_default(); // or propagate an error

Also applies to: 199-199, 266-266, 302-302

server/configuration/server_base.yaml (1)

19-25: Endpoints verified; consider adding a local configuration
Devnet and mainnet-beta URLs match the official Solana documentation.

  • No changes needed for existing endpoints.
  • Optional: add a local config (e.g., http://localhost:8899, ws://localhost:8900) for development/testing.
server/src/http/error.rs (1)

87-99: Solana RPC error-to-HTTP mapping looks correct

Mappings align with EVM patterns and appropriate HTTP semantics (e.g., TransactionError → 400, transport → 502, unknown → 503). One optional consideration: large simulation logs in RpcError data can bloat responses; consider truncating/redacting in production responses to keep payloads lean.
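
One possible shape for that truncation, sketched as a standalone helper (not an existing API in this repo); it caps both line count and line length before the error is serialized into the response body:

fn truncate_simulation_logs(logs: &mut Vec<String>, max_lines: usize, max_line_len: usize) {
    if logs.len() > max_lines {
        let dropped = logs.len() - max_lines;
        logs.truncate(max_lines);
        logs.push(format!("... {dropped} log lines truncated"));
    }
    for line in logs.iter_mut() {
        if line.len() > max_line_len {
            // Back up to a char boundary so truncate() cannot panic on UTF-8.
            let mut cut = max_line_len;
            while !line.is_char_boundary(cut) {
                cut -= 1;
            }
            line.truncate(cut);
            line.push('…');
        }
    }
}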

server/src/config.rs (2)

15-21: Fix type name typo: SolanRpcConfigData → SolanaRpcConfigData

Spelling inconsistency hurts readability and discoverability. Rename the type and its uses in this file.

- pub struct SolanRpcConfigData {
+ pub struct SolanaRpcConfigData {
     pub http_url: String,
     pub ws_url: String,
 }
 
-fn default_local_rpc_config() -> SolanRpcConfigData {
-    SolanRpcConfigData {
+fn default_local_rpc_config() -> SolanaRpcConfigData {
+    SolanaRpcConfigData {
         http_url: "http://127.0.0.1:8899".to_string(),
         ws_url: "ws://127.0.0.1:8900".to_string(),
     }
 }
 
 pub struct SolanaConfig {
-    pub devnet: SolanRpcConfigData,
-    pub mainnet: SolanRpcConfigData,
+    pub devnet: SolanaRpcConfigData,
+    pub mainnet: SolanaRpcConfigData,
     #[serde(default = "default_local_rpc_config")]
-    pub local: SolanRpcConfigData,
+    pub local: SolanaRpcConfigData,
 }

Also applies to: 23-29, 30-34


36-53: Consider a default for solana_executor_workers

If older configs are used, deserialization will fail without this new field. Either ensure all env/base YAMLs define it or add a default here.

Option A (docs/config only): Confirm all config files set solana_executor_workers.

Option B (code): Provide a default.

 pub struct QueueConfig {
     pub webhook_workers: usize,
     ...
-    pub solana_executor_workers: usize,
+    #[serde(default = "default_solana_executor_workers")]
+    pub solana_executor_workers: usize,
 }
+
+fn default_solana_executor_workers() -> usize { 100 }
server/src/queue/manager.rs (2)

275-307: Unify queue name with a constant (like other queues)

For consistency and easier reuse, define a SOLANA_EXECUTOR_QUEUE_NAME constant alongside others.

-const EOA_EXECUTOR_QUEUE_NAME: &str = "eoa_executor";
+const EOA_EXECUTOR_QUEUE_NAME: &str = "eoa_executor";
+const SOLANA_EXECUTOR_QUEUE_NAME: &str = "solana_executor";
@@
-        let solana_executor_queue_name =
-            get_queue_name_for_namespace(&queue_config.execution_namespace, "solana_executor");
+        let solana_executor_queue_name =
+            get_queue_name_for_namespace(&queue_config.execution_namespace, SOLANA_EXECUTOR_QUEUE_NAME);

439-447: Add Solana queue to stats for monitoring completeness

Stats omit solana_executor. Consider adding to QueueStats for observability parity.

core/src/execution_options/solana.rs (1)

93-105: Consider validating Percentile bounds (0–100)

percentile: u8 allows >100. Add validation (deserialize guard or runtime check) to prevent invalid configurations.
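
A minimal deserialize-time guard, assuming the field is deserialized with serde (the helper name is illustrative):

use serde::{Deserialize, Deserializer};

fn deserialize_percentile<'de, D>(deserializer: D) -> Result<u8, D::Error>
where
    D: Deserializer<'de>,
{
    let value = u8::deserialize(deserializer)?;
    if value > 100 {
        return Err(serde::de::Error::custom(format!(
            "percentile must be in 0..=100, got {value}"
        )));
    }
    Ok(value)
}

// On the field:
// #[serde(deserialize_with = "deserialize_percentile")]
// pub percentile: u8,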

executors/src/solana_executor/worker.rs (4)

232-232: Remove or implement commented-out code.

The queued_at_ms calculation is commented out. If this metric is not needed, remove the comment. If it will be used in the future, consider adding a TODO comment explaining its purpose.

Apply this diff to remove the unused code:

-        // let queued_at_ms = job.job.created_at * 1000;
         let data = &job.job.data;

543-869: Consider refactoring this complex method.

The execute_transaction method is 326 lines with deeply nested conditionals. Consider extracting sub-flows into helper methods for improved readability and testability:

  • check_confirmation_status (lines 573-720)
  • send_new_transaction (lines 723-868)
  • validate_blockhash_and_resubmit (lines 643-705)

Example refactor outline:

async fn execute_transaction(...) -> JobResult<...> {
    self.verify_lock(lock, transaction_id).await?;
    
    let stored_attempt = self.fetch_stored_attempt(transaction_id, lock).await?;
    
    if let Some(attempt) = stored_attempt {
        return self.check_confirmation_or_resubmit(rpc_client, &attempt, ...).await;
    }
    
    self.send_new_transaction(rpc_client, job_data, lock, ...).await
}

632-641: Verify minimum wait time before resubmission.

The 30-second minimum wait before resubmitting (min_wait_before_resubmit_ms = 30_000) assumes transactions take time to propagate. However, if a transaction is dropped immediately by the RPC (e.g., due to low priority fees), this could delay legitimate retries.

Consider:

  • Making this configurable per chain
  • Reducing the wait time for known quick-failure scenarios
  • Adding metrics to track how often transactions are confirmed before 30s
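
For the first point, a sketch of what a per-chain override could look like (the config struct and names are hypothetical, not existing repo types):

use std::collections::HashMap;

#[derive(Debug, Clone)]
pub struct ResubmitConfig {
    /// Global fallback for the minimum wait before resubmission, in ms.
    pub default_min_wait_ms: u64,
    /// Per-chain overrides, keyed by chain identifier ("mainnet", "devnet", ...).
    pub per_chain_min_wait_ms: HashMap<String, u64>,
}

impl ResubmitConfig {
    pub fn min_wait_ms(&self, chain: &str) -> u64 {
        self.per_chain_min_wait_ms
            .get(chain)
            .copied()
            .unwrap_or(self.default_min_wait_ms)
    }
}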

907-961: LGTM with note: Error classification relies on message parsing.

The is_send_error_retryable function provides comprehensive error classification. However, the string matching for permanent errors (lines 928-936) could be brittle if RPC error messages change.

Consider:

  • Documenting known RPC error message formats
  • Adding logging when classifications are made based on message matching
  • Periodically reviewing this logic as Solana RPC evolves
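
For the second point, an illustrative wrapper that logs every message-based classification so drift in RPC error text stays observable (the marker strings here are assumptions; the real list lives in is_send_error_retryable):

fn classify_permanent_by_message(message: &str) -> bool {
    const PERMANENT_MARKERS: &[&str] = &["invalid blockhash", "transaction too large"];
    for marker in PERMANENT_MARKERS {
        if message.contains(marker) {
            tracing::debug!(
                marker = *marker,
                message,
                "classified send error as permanent via message match"
            );
            return true;
        }
    }
    false
}
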
executors/src/solana_executor/storage.rs (1)

396-435: Consider adding integration tests for storage operations.

The current tests only cover serialization. Consider adding integration tests (perhaps in a separate test file with Redis testcontainers) to verify:

  • Lock acquisition and release
  • Concurrent lock attempts
  • Lock expiry behavior
  • Atomic storage operations under race conditions
  • Lua script correctness

Example test outline:

#[tokio::test]
async fn test_concurrent_lock_acquisition() {
    let redis = /* setup test redis */;
    let storage = SolanaTransactionStorage::new(redis, None);
    
    let lock1 = storage.try_acquire_lock("tx1").await.unwrap();
    let lock2_result = storage.try_acquire_lock("tx1").await;
    
    assert!(matches!(lock2_result, Err(LockError::AlreadyLocked)));
}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 0311adc and eda0f2c.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (32)
  • Cargo.toml (1 hunks)
  • aa-core/Cargo.toml (1 hunks)
  • aa-types/Cargo.toml (1 hunks)
  • core/Cargo.toml (1 hunks)
  • core/src/error.rs (5 hunks)
  • core/src/execution_options/mod.rs (1 hunks)
  • core/src/execution_options/solana.rs (1 hunks)
  • core/src/signer.rs (1 hunks)
  • eip7702-core/Cargo.toml (1 hunks)
  • executors/Cargo.toml (1 hunks)
  • executors/src/lib.rs (1 hunks)
  • executors/src/solana_executor/mod.rs (1 hunks)
  • executors/src/solana_executor/rpc_cache.rs (1 hunks)
  • executors/src/solana_executor/storage.rs (1 hunks)
  • executors/src/solana_executor/worker.rs (1 hunks)
  • executors/src/webhook/envelope.rs (2 hunks)
  • server/Cargo.toml (1 hunks)
  • server/configuration/server_base.yaml (1 hunks)
  • server/src/config.rs (3 hunks)
  • server/src/execution_router/mod.rs (3 hunks)
  • server/src/http/error.rs (2 hunks)
  • server/src/http/routes/mod.rs (1 hunks)
  • server/src/http/routes/solana_transaction.rs (1 hunks)
  • server/src/http/server.rs (1 hunks)
  • server/src/main.rs (2 hunks)
  • server/src/queue/manager.rs (8 hunks)
  • solana-core/Cargo.toml (1 hunks)
  • solana-core/src/lib.rs (1 hunks)
  • solana-core/src/transaction.rs (1 hunks)
  • thirdweb-core/Cargo.toml (1 hunks)
  • twmq/Cargo.toml (1 hunks)
  • twmq/src/job.rs (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (13)
server/src/main.rs (1)
server/src/config.rs (1)
  • config (149-149)
server/src/http/routes/solana_transaction.rs (2)
core/src/execution_options/solana.rs (1)
  • as_str (32-38)
server/src/http/server.rs (1)
  • new (47-112)
core/src/execution_options/solana.rs (1)
server/src/config.rs (1)
  • as_str (165-171)
server/src/queue/manager.rs (3)
core/src/signer.rs (2)
  • new (211-216)
  • new (561-566)
twmq/src/queue.rs (1)
  • builder (212-214)
executors/src/solana_executor/worker.rs (1)
  • webhook_queue (211-213)
solana-core/src/transaction.rs (4)
executors/src/solana_executor/rpc_cache.rs (1)
  • new (33-37)
executors/src/solana_executor/storage.rs (2)
  • new (29-42)
  • new (129-131)
server/src/queue/manager.rs (1)
  • new (52-318)
executors/src/solana_executor/worker.rs (1)
  • signature (182-188)
server/src/http/server.rs (1)
server/src/http/routes/solana_transaction.rs (1)
  • send_solana_transaction (35-67)
executors/src/solana_executor/storage.rs (3)
executors/src/solana_executor/worker.rs (3)
  • signature (182-188)
  • from (153-158)
  • from (162-166)
server/src/queue/manager.rs (1)
  • new (52-318)
executors/src/metrics.rs (1)
  • current_timestamp_ms (236-238)
core/src/signer.rs (3)
server/src/queue/manager.rs (1)
  • new (52-318)
core/src/error.rs (7)
  • from (434-455)
  • from (459-496)
  • from (500-514)
  • from (518-522)
  • from (812-816)
  • from (820-824)
  • from (828-832)
executors/src/solana_executor/worker.rs (1)
  • signature (182-188)
executors/src/solana_executor/worker.rs (5)
executors/src/webhook/envelope.rs (4)
  • webhook_options (113-113)
  • transaction_id (117-117)
  • transaction_id (121-123)
  • webhook_queue (128-128)
core/src/error.rs (9)
  • from (434-455)
  • from (459-496)
  • from (500-514)
  • from (518-522)
  • from (812-816)
  • from (820-824)
  • from (828-832)
  • to_engine_solana_error (570-570)
  • to_engine_solana_error (575-679)
executors/src/solana_executor/storage.rs (4)
  • from (116-118)
  • new (29-42)
  • new (129-131)
  • redis (384-384)
twmq/src/job.rs (4)
  • id (221-223)
  • data (225-227)
  • new (216-218)
  • new (257-263)
executors/src/metrics.rs (1)
  • current_timestamp_ms (236-238)
server/src/execution_router/mod.rs (3)
core/src/execution_options/mod.rs (2)
  • transaction_id (184-186)
  • chain_id (180-182)
executors/src/solana_executor/worker.rs (1)
  • transaction_id (67-69)
core/src/execution_options/solana.rs (1)
  • as_str (32-38)
core/src/error.rs (2)
executors/src/solana_executor/worker.rs (1)
  • to_engine_solana_error (391-393)
core/src/chain.rs (2)
  • chain_id (31-31)
  • chain_id (74-76)
executors/src/solana_executor/rpc_cache.rs (1)
server/src/queue/manager.rs (1)
  • new (52-318)
executors/src/webhook/envelope.rs (5)
twmq/src/lib.rs (2)
  • job (157-162)
  • serde_json (780-780)
executors/src/solana_executor/worker.rs (3)
  • webhook_options (61-63)
  • stage_name (201-203)
  • executor_name (205-207)
executors/src/eip7702_executor/confirm.rs (3)
  • webhook_options (50-52)
  • stage_name (149-151)
  • executor_name (145-147)
executors/src/external_bundler/send.rs (4)
  • webhook_options (64-66)
  • stage_name (229-231)
  • executor_name (225-227)
  • serde_json (627-627)
executors/src/webhook/mod.rs (1)
  • webhook_options (441-460)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: test
  • GitHub Check: coverage
🔇 Additional comments (26)
aa-core/Cargo.toml (1)

8-14: LGTM! Workspace dependency consolidation.

The transition to workspace-based dependency resolution for tokio, serde, and tracing is consistent with the broader workspace refactoring across the repository. This centralizes version management and ensures consistency.

twmq/Cargo.toml (1)

7-20: LGTM! Appropriate feature flags for queue system.

The workspace consolidation is correct, and the feature flags are appropriate:

  • tokio with macros feature for runtime dependencies
  • tokio with full feature for comprehensive testing in dev-dependencies
aa-types/Cargo.toml (1)

7-12: LGTM! Clean workspace consolidation.

All dependencies correctly transitioned to workspace-based resolution.

core/Cargo.toml (1)

7-30: LGTM! Solana dependencies correctly integrated.

The workspace consolidation is consistent, and the new Solana dependencies (solana-sdk, solana-commitment-config, solana-client, and engine-solana-core) are appropriately added to support the new Solana transaction functionality.

Cargo.toml (3)

9-9: LGTM! New Solana support crate added.

The solana-core workspace member is correctly added to support Solana transaction functionality.


24-100: Significant dependency surface expansion.

This PR introduces a comprehensive set of new dependencies across multiple categories (Solana, AWS, serialization, web frameworks, caching, etc.). While the versions appear reasonable and up-to-date, this significantly expands the attack surface and maintenance burden.

Key observations:

  • Solana ecosystem (lines 24-31): version 3.0 for core crates
  • AWS integration (lines 33-36): recent stable versions
  • Serialization stack (lines 38-43): standard and well-maintained
  • Web/API stack (lines 57-67): axum 0.8.4, utoipa 5.4.0 (per learnings, these are current stable versions)
  • Moka caching (line 70): 0.12.10 (learnings mention 0.12.11 is latest, but this is a minor patch difference)

Based on learnings, ensure your migration from moka <0.12 to 0.12.x includes:

  • Enabling the "sync" feature if using synchronous caches
  • Updating method calls to .await methods that became async in 0.12
  • Reviewing notification delivery semantics if using eviction listeners
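
As a quick reminder of the 0.12 future-cache API shape (published moka API, shown with placeholder key/value types):

use moka::future::Cache;

async fn demo() {
    let cache: Cache<String, u64> = Cache::builder().max_capacity(100).build();
    cache.insert("slot".to_string(), 42).await; // insert is async in 0.12
    let hit = cache.get("slot").await;          // get also became async in 0.12
    assert_eq!(hit, Some(42));
}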

17-22: Verify compatibility of alloy upgrade and vault branch change

  • Alloy bumped from 1.0.23 to 1.0.36 and is used throughout the codebase; configure a default Rust toolchain (e.g., rustup default stable) and run cargo check and cargo test to catch any breaking changes.
  • Vault deps now point to the main branch; confirm that branch is stable and compatible with alloy 1.0.36.
server/Cargo.toml (1)

8-48: LGTM! Consistent workspace consolidation.

All dependencies correctly transitioned to workspace-based resolution, maintaining existing feature flags where appropriate.

core/src/signer.rs (1)

552-566: LGTM! Well-structured Solana signer.

The SolanaSigner struct and constructor follow the same pattern as EoaSigner, maintaining consistency across the codebase by reusing VaultClient and IAWClient.

executors/src/lib.rs (1)

5-5: LGTM!

The Solana executor module is properly exported, following the established pattern for other executor modules in this crate.

server/configuration/server_base.yaml (1)

32-32: LGTM!

The worker count of 100 is consistent with other executor workers in the configuration.

core/src/execution_options/mod.rs (1)

12-12: LGTM!

The Solana execution options module is properly exported alongside other execution option modules.

server/src/http/routes/mod.rs (1)

8-8: LGTM!

The Solana transaction route module is properly exported alongside other route modules.

server/src/http/server.rs (1)

66-68: LGTM!

The Solana transaction route is properly registered using the standard routes! macro pattern, consistent with other HTTP routes in the router.

solana-core/src/lib.rs (1)

1-3: LGTM!

The public API surface is clean and minimal, exposing only the essential transaction-related types. This follows best practices for library crate design.

server/src/main.rs (2)

74-84: LGTM!

The QueueManager initialization is properly updated to include the Solana configuration, following the established pattern for other queue components.


98-112: LGTM!

The ExecutionRouter construction correctly includes the solana_executor_queue, properly wiring the Solana execution pipeline into the router. The pattern is consistent with other executor queues.

executors/src/solana_executor/mod.rs (1)

1-7: Module wiring and re-exports look good

Clear, minimal public surface exposing worker, cache, and storage. No issues.

server/src/http/routes/solana_transaction.rs (1)

18-67: Endpoint looks solid

Extractor usage, logging, error mapping to ApiEngineError, and 202 Accepted response are consistent with existing patterns.

core/src/error.rs (1)

21-50: Well-structured Solana error modeling and conversion

Comprehensive mapping from solana_client::ClientErrorKind with structured data and safe redaction of request URL. Fits EngineError ergonomics.

Optional: ensure large simulation logs won’t overwhelm API responses; if needed, truncate logs at HTTP layer.

Also applies to: 52-109, 568-680

executors/src/solana_executor/worker.rs (4)

122-129: Intentional design: TransactionSent as a retryable "error".

Using TransactionSent as an error variant to trigger NACK and webhook notification is an intentional design pattern in this executor. The is_send_success() method confirms this is by design. This approach leverages the framework's retry mechanism to continue polling for confirmation after a successful send.


269-387: LGTM: Webhook handlers correctly implement cleanup and notification patterns.

The lifecycle hooks properly clean up storage, update the transaction registry, and queue appropriate webhooks. The special handling of TransactionSent in on_nack to send a success webhook with stage="send" is a good design for tracking transaction submission separately from confirmation.


420-435: Verify percentile calculation edge cases.

The percentile index calculation ((percentile as f64 / 100.0) * fee_history.len() as f64).round() could produce unexpected results for edge cases:

  • Empty fee history (len=0)
  • Single element (len=1) with various percentiles

The fallback logic handles out-of-bounds gracefully, but ensure the calculation matches the intended percentile semantics (e.g., 0th percentile = min, 100th percentile = max).

Consider adding a test to verify percentile calculation:

#[test]
fn test_percentile_calculation() {
    let fee_history = vec![10, 20, 30, 40, 50];
    // Test 0th, 50th, 75th, 100th percentiles
    // Verify behavior with empty and single-element vectors
}
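
A fleshed-out version of that outline, with the index formula extracted into a helper (the helper name and clamping behavior are assumptions; the formula mirrors the one quoted above). Note how round() already biases the 50th percentile upward on a 5-element history:

fn percentile_index(percentile: u8, len: usize) -> usize {
    let idx = ((percentile as f64 / 100.0) * len as f64).round() as usize;
    idx.min(len.saturating_sub(1)) // clamp; empty slices map to index 0
}

#[test]
fn test_percentile_calculation() {
    let fees = vec![10u64, 20, 30, 40, 50];
    assert_eq!(fees[percentile_index(0, fees.len())], 10);   // 0th = min
    assert_eq!(fees[percentile_index(50, fees.len())], 40);  // round(2.5) = 3: not the median
    assert_eq!(fees[percentile_index(100, fees.len())], 50); // 100th clamps to last element
    assert_eq!(percentile_index(50, 0), 0);                  // empty history must be guarded by caller
    let single = vec![7u64];
    assert_eq!(single[percentile_index(75, single.len())], 7); // single element always selected
}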

727-736: Review job attempt limit logic for edge cases.

The condition stored_attempt.is_none() && job_attempt_number > MAX_SEND_ATTEMPTS_WITHOUT_TRANSACTION only limits retries when there's no stored attempt. This might not cover scenarios where:

  • Redis failures prevented storing attempts but transactions were sent
  • Attempts were stored but later expired from Redis
  • Lock was repeatedly lost and regained

Consider tracking total send attempts independently or documenting why this specific check is sufficient.
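
One way to make the bound independent of stored attempt state is a Redis-side counter (sketch only; the key name and TTL are assumptions):

async fn increment_send_attempts(
    conn: &mut redis::aio::ConnectionManager,
    transaction_id: &str,
) -> Result<u64, redis::RedisError> {
    let key = format!("solana:send_attempts:{transaction_id}");
    let attempts: u64 = redis::cmd("INCR").arg(&key).query_async(conn).await?;
    // Refresh the TTL so the counter expires with the transaction's lifecycle.
    let _: i64 = redis::cmd("EXPIRE").arg(&key).arg(3600).query_async(conn).await?;
    Ok(attempts)
}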

executors/src/solana_executor/storage.rs (2)

9-43: LGTM: Well-designed attempt data model.

The SolanaTransactionAttempt structure properly captures all necessary state for tracking transaction attempts. Using DisplayFromStr for Signature and Hash ensures human-readable serialization while maintaining type safety.


381-386: Verify delete_attempt should not require lock.

The delete_attempt method doesn't verify lock ownership before deleting. This is called from:

  • on_success and on_fail hooks (after lock is released)
  • Error paths in execute_transaction (while holding lock)

Ensure this is intentional and document why lock verification isn't needed here. If multiple workers could race on deletion, consider whether this could cause issues.

Comment on lines +78 to +84
    /// Commitment level for transaction confirmation
    /// Options: "processed", "confirmed", "finalized"
    /// Default: "finalized"
    #[serde(default)]
    pub commitment: CommitmentLevel,

    /// Priority fee for the transaction. If omitted, no priority fee will be used. If provided, engine will add the compute budget instruction to the transaction.

⚠️ Potential issue | 🟡 Minor

Doc inconsistency: mentions “processed” but enum lacks it

CommitmentLevel supports Confirmed and Finalized only. Update the comment or add a Processed variant.

-    /// Options: "processed", "confirmed", "finalized"
+    /// Options: "confirmed", "finalized"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 /// Commitment level for transaction confirmation
-/// Options: "processed", "confirmed", "finalized"
+/// Options: "confirmed", "finalized"
 /// Default: "finalized"
 #[serde(default)]
 pub commitment: CommitmentLevel,
🤖 Prompt for AI Agents
In core/src/execution_options/solana.rs around lines 78 to 84, the doc comment
mentions "processed" but the CommitmentLevel enum only supports Confirmed and
Finalized; either remove "processed" from the comment and list only "confirmed"
and "finalized" (and keep the Default: "finalized") or, if RPC semantics require
"processed", add a Processed variant to the CommitmentLevel enum (with
appropriate serde attributes, tests, and ensure default/validation logic handles
it). Ensure the comment and enum stay consistent and update any related serde
defaults or usages accordingly.

Comment on lines +7 to +37
 hex = { workspace = true }
 alloy = { workspace = true, features = ["serde"] }
 thirdweb-core = { version = "0.1.0", path = "../thirdweb-core" }
-hmac = "0.12.1"
-reqwest = "0.12.15"
-serde = "1.0.219"
-serde_json = "1.0.140"
-sha2 = "0.10.9"
-thiserror = "2.0.12"
-tracing = "0.1.41"
+hmac = { workspace = true }
+reqwest = { workspace = true }
+serde = { workspace = true }
+serde_json = { workspace = true }
+serde_with = { workspace = true }
+sha2 = { workspace = true }
+thiserror = { workspace = true }
+tracing = { workspace = true }
 twmq = { version = "0.1.0", path = "../twmq" }
 engine-aa-types = { version = "0.1.0", path = "../aa-types" }
 engine-eip7702-core = { version = "0.1.0", path = "../eip7702-core" }
 engine-core = { version = "0.1.0", path = "../core" }
 engine-aa-core = { version = "0.1.0", path = "../aa-core" }
-rand = "0.9.1"
-uuid = { version = "1.17.0", features = ["v4"] }
-chrono = "0.4.41"
-tokio = { version = "1.45.0", features = ["full"] }
-futures = "0.3.31"
-moka = { version = "0.12.10", features = ["future"] }
-prometheus = "0.13.4"
-lazy_static = "1.5.0"
+rand = { workspace = true }
+uuid = { workspace = true }
+chrono = { workspace = true }
+tokio = { workspace = true, features = ["full"] }
+futures = { workspace = true }
+moka = { workspace = true }
+prometheus = { workspace = true }
+lazy_static = { workspace = true }
+engine-solana-core = { version = "0.1.0", path = "../solana-core" }
+solana-sdk = { workspace = true }
+solana-client = { workspace = true }
+solana-transaction-status = { workspace = true }
+solana-commitment-config = { workspace = true }
+spl-memo-interface = { workspace = true }
+base64 = { workspace = true }

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Verify workspace dependency definitions.

All dependencies have been converted to workspace-based resolution. Ensure that the root workspace Cargo.toml defines all these dependencies with appropriate versions, especially the newly added Solana-related crates (lines 31-37).

Run the following script to verify the workspace dependencies:


🏁 Script executed:

#!/bin/bash
# Description: Verify all workspace dependencies are defined in root Cargo.toml

# List of dependencies to check
deps=(
  "hex"
  "alloy"
  "hmac"
  "reqwest"
  "serde"
  "serde_json"
  "serde_with"
  "sha2"
  "thiserror"
  "tracing"
  "rand"
  "uuid"
  "chrono"
  "tokio"
  "futures"
  "moka"
  "prometheus"
  "lazy_static"
  "engine-solana-core"
  "solana-sdk"
  "solana-client"
  "solana-transaction-status"
  "solana-commitment-config"
  "spl-memo-interface"
  "base64"
)

echo "Checking workspace dependencies in root Cargo.toml..."
for dep in "${deps[@]}"; do
  if ! rg -q "^\s*${dep}\s*=" Cargo.toml; then
    echo "WARNING: $dep not found in workspace Cargo.toml"
  fi
done

echo "Done checking dependencies."

Length of output: 143


engine-solana-core missing in root workspace dependencies
Add engine-solana-core under [workspace.dependencies] in the root Cargo.toml to ensure the executor builds correctly.

🤖 Prompt for AI Agents
In executors/Cargo.toml around lines 7 to 37, the review notes that
engine-solana-core is referenced by path here but is not declared in the root
workspace's [workspace.dependencies], which can break builds; add
engine-solana-core = { version = "0.1.0", path = "path/to/engine-solana-core" }
(matching other engine entries) to the root Cargo.toml's
[workspace.dependencies] section so the workspace resolver can find and use the
crate.

Comment on lines +81 to +103
impl Drop for TransactionLock {
    fn drop(&mut self) {
        // Best effort release on drop (fire and forget)
        let redis = self.redis.clone();
        let lock_key = self.lock_key.clone();
        let lock_value = self.lock_value.clone();

        tokio::spawn(async move {
            let script = r#"
                if redis.call("get", KEYS[1]) == ARGV[1] then
                    return redis.call("del", KEYS[1])
                else
                    return 0
                end
            "#;

            let _: Result<i32, _> = redis::Script::new(script)
                .key(&lock_key)
                .arg(&lock_value)
                .invoke_async(&mut redis.clone())
                .await;
        });
    }

⚠️ Potential issue | 🟠 Major

Add error handling to Drop implementation.

The Drop implementation spawns a Tokio task for cleanup with no error handling. tokio::spawn panics if no runtime is active (e.g., when the lock is dropped during shutdown or from a non-async context), and even a successfully spawned task silently discards any Redis error, so the lock may never be released, potentially causing deadlocks until the TTL expires.

Consider these options:

Option 1: Log failures (recommended for Drop):

 impl Drop for TransactionLock {
     fn drop(&mut self) {
         let redis = self.redis.clone();
         let lock_key = self.lock_key.clone();
         let lock_value = self.lock_value.clone();
 
-        tokio::spawn(async move {
+        // tokio::spawn panics outside a runtime; check for one first.
+        let Ok(handle) = tokio::runtime::Handle::try_current() else {
+            tracing::warn!(lock_key = %lock_key, "No Tokio runtime; relying on lock TTL expiry");
+            return;
+        };
+        handle.spawn(async move {
             let script = r#"
                 if redis.call("get", KEYS[1]) == ARGV[1] then
                     return redis.call("del", KEYS[1])
                 else
                     return 0
                 end
             "#;
 
-            let _: Result<i32, _> = redis::Script::new(script)
-                .key(&lock_key)
-                .arg(&lock_value)
-                .invoke_async(&mut redis.clone())
-                .await;
+            let result: Result<i32, _> = redis::Script::new(script)
+                .key(&lock_key)
+                .arg(&lock_value)
+                .invoke_async(&mut redis.clone())
+                .await;
+            if let Err(e) = result {
+                tracing::warn!(lock_key = %lock_key, "Failed to release transaction lock: {e}");
+            }
         });
     }
 }

Option 2: Use blocking release (if acceptable):
Replace async spawn with blocking Redis call in Drop (may impact performance).

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
executors/src/solana_executor/storage.rs around lines 81 to 103: the Drop impl
currently fires-and-forgets an async release without any error handling; update
it so the spawned task logs any Redis/script invocation errors (capture the
Result from invoke_async and process it with the crate logger), and also handle
the case where spawning an async task is not possible (detect no active Tokio
runtime using Handle::try_current or equivalent) and in that case perform a
synchronous/blocking Redis release as a fallback and log its outcome; ensure all
error paths emit clear log messages including the error details.

            Ok(None) => {
                // Step 4: Not confirmed yet, check if blockhash is still valid
                let time_since_sent = crate::metrics::current_timestamp_ms().saturating_sub(attempt.sent_at);
                let min_wait_before_resubmit_ms = 30_000; // Wait at least 30 seconds before resubmitting

🛠️ Refactor suggestion | 🟠 Major

Extract magic numbers into named constants.

Several magic numbers appear throughout the code:

  • Line 633: 30_000 (min wait before resubmit)
  • Line 235, 361, 825: 600 (10 minutes expiry in storage.rs)
  • Line 169 (storage.rs): 120 (lock expiry)

Extract these into named constants with clear documentation of their purpose and rationale.

Apply this diff to add constants:

+const MIN_WAIT_BEFORE_RESUBMIT_MS: u64 = 30_000; // Wait at least 30s before resubmitting
+const ATTEMPT_EXPIRY_SECONDS: i64 = 600; // 10 minutes
+const LOCK_EXPIRY_SECONDS: i64 = 120; // 2 minutes to prevent deadlocks
+
 const CONFIRMATION_RETRY_DELAY: Duration = Duration::from_millis(200);

Also applies to: 754-769, 235-235, 825-825

🤖 Prompt for AI Agents
In executors/src/solana_executor/worker.rs around line 633 (and also update
occurrences at 754-769) and in executors/src/solana_executor/storage.rs at lines
169, 235 and 825, replace the inline numeric literals with descriptive named
constants: define a constant MIN_WAIT_BEFORE_RESUBMIT_MS = 30_000 with a doc
comment explaining it is the minimum backoff before resubmitting a transaction;
define constants for STORAGE_LOCK_EXPIRY_SECS = 120 and
STORAGE_ENTRY_EXPIRY_SECS = 600 with comments stating they represent lock expiry
and storage entry expiry durations respectively; add these constants at the top
of each module (or in a shared constants module if appropriate), update all
usages to reference the constants, and ensure units are clear in names and docs
(e.g., _MS or _SECS) so behavior and rationale are documented.

Comment on lines +512 to +527
        let transaction_id = request.idempotency_key.clone();
        let chain_id = request.execution_options.chain_id;
        let signer_address = request.execution_options.signer_address;

        let transaction = SolanaTransactionOptions {
            instructions: request.instructions,
            execution_options: request.execution_options,
        };

        let job_data = SolanaExecutorJobData {
            transaction_id: transaction_id.clone(),
            transaction,
            signing_credential,
            webhook_options: request.webhook_options,
        };



⚠️ Potential issue | 🔴 Critical

Fix partial move: request is used after moving fields (compile error)

You move request.instructions and request.execution_options, then use request.webhook_options. This is a partial move of request, which makes later access invalid.

Apply this refactor to destructure once and avoid partial moves:

-        let transaction_id = request.idempotency_key.clone();
-        let chain_id = request.execution_options.chain_id;
-        let signer_address = request.execution_options.signer_address;
-
-        let transaction = SolanaTransactionOptions {
-            instructions: request.instructions,
-            execution_options: request.execution_options,
-        };
+        let engine_core::execution_options::solana::SendSolanaTransactionRequest {
+            idempotency_key: transaction_id,
+            instructions,
+            execution_options,
+            webhook_options,
+        } = request;
+        let chain_id = execution_options.chain_id;
+        let signer_address = execution_options.signer_address;
+        let transaction = SolanaTransactionOptions {
+            instructions,
+            execution_options,
+        };
@@
-        let job_data = SolanaExecutorJobData {
+        let job_data = SolanaExecutorJobData {
             transaction_id: transaction_id.clone(),
             transaction,
             signing_credential,
-            webhook_options: request.webhook_options,
+            webhook_options,
         };

Also applies to: 528-546

Comment on lines +17 to +23
utoipa = { workspace = true, features = [
    "macros",
    "chrono",
    "uuid",
    "axum_extras",
    "preserve_order",
] }

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

Verify if all utoipa features are necessary.

The configuration enables five features: macros, chrono, uuid, axum_extras, and preserve_order. While macros and axum_extras are likely essential for API documentation with Axum, verify whether chrono, uuid, and preserve_order are actually used in this crate to minimize compile-time dependencies.

Run the following script to check for usage of these features in the crate:


🏁 Script executed:

#!/bin/bash
# Check for chrono usage
echo "=== Checking for chrono usage ==="
rg -n --type=rust 'chrono::' solana-core/

# Check for uuid usage
echo "=== Checking for uuid usage ==="
rg -n --type=rust 'uuid::' solana-core/

# Check if preserve_order is beneficial (check for HashMap/Map usage)
echo "=== Checking for HashMap/Map usage that might benefit from preserve_order ==="
rg -n --type=rust 'HashMap|BTreeMap|serde_json::Map' solana-core/

Length of output: 143


🏁 Script executed:

#!/bin/bash
echo "=== Searching entire repository for chrono usage ==="
rg -n --type=rust 'chrono::|DateTime|Utc' .

echo "=== Searching entire repository for uuid usage ==="
rg -n --type=rust 'uuid::|Uuid' .

echo "=== Searching entire repository for Map usage ==="
rg -n --type=rust 'HashMap<|BTreeMap<|serde_json::Map' .

Length of output: 4630


Remove unused utoipa features in solana-core
The chrono, uuid and preserve_order flags aren’t referenced anywhere under solana-core/src; drop them from the utoipa feature list in solana-core/Cargo.toml.

🤖 Prompt for AI Agents
In solana-core/Cargo.toml around lines 17 to 23, the utoipa dependency declares
unused features ("chrono", "uuid", "preserve_order"); remove those three feature
names from the feature list so only the actually used features remain (e.g.,
keep "macros" and "axum_extras" if still needed), and ensure the resulting TOML
remains syntactically valid (no trailing commas, proper brackets).

Comment on lines +76 to +89
    /// Optional recent blockhash (if not provided, will be fetched)
    #[serde_as(as = "DisplayFromStr")]
    pub recent_blockhash: solana_sdk::hash::Hash,

    /// Compute budget limit (compute units)
    /// If not provided, the transaction will use default compute budget
    #[serde(skip_serializing_if = "Option::is_none")]
    pub compute_unit_limit: Option<u32>,

    /// Compute budget price (micro-lamports per compute unit)
    /// This is the priority fee - higher values increase transaction priority
    #[serde(skip_serializing_if = "Option::is_none")]
    pub compute_unit_price: Option<u64>,
}

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

recent_blockhash field is unused; method takes a separate param; docs say “Optional” but field is non-Option

This is inconsistent and error-prone. Either remove the struct field or use it. Suggest using the struct’s field and removing the method parameter; also fix the doc comment.

Apply this diff and update call sites:

@@
-    /// Optional recent blockhash (if not provided, will be fetched)
-    #[serde_as(as = "DisplayFromStr")]
-    pub recent_blockhash: solana_sdk::hash::Hash,
+    /// Recent blockhash used to build the transaction
+    #[serde_as(as = "DisplayFromStr")]
+    pub recent_blockhash: solana_sdk::hash::Hash,
@@
-    pub fn to_versioned_transaction(
-        &self,
-        payer: Pubkey,
-        recent_blockhash: solana_sdk::hash::Hash,
-    ) -> Result<VersionedTransaction, SolanaTransactionError> {
+    pub fn to_versioned_transaction(
+        &self,
+        payer: Pubkey,
+    ) -> Result<VersionedTransaction, SolanaTransactionError> {
@@
-        let message = v0::Message::try_compile(
+        let message = v0::Message::try_compile(
             &payer,
             &instructions,
             &[], // No address lookup tables
-            recent_blockhash,
+            self.recent_blockhash,
         )

Also applies to: 126-131, 150-155


🏁 Script executed:

#!/bin/bash
set -e
echo "Recent_blockhash context:"
rg -n -C3 'recent_blockhash' solana-core/src/transaction.rs || true
echo
echo "to_versioned_transaction signature:"
rg -n -C3 'pub fn to_versioned_transaction' solana-core/src/transaction.rs || true
echo
echo "Call sites of to_versioned_transaction:"
rg -n 'to_versioned_transaction\(' .

Length of output: 1418


🏁 Script executed:

#!/bin/bash
set -e
echo "SolanaTransaction struct definition:"
rg -n -C5 'struct SolanaTransaction' solana-core/src/transaction.rs

Length of output: 464


Align recent_blockhash field and method usage
The recent_blockhash field on SolanaTransaction is never used (the method still takes it as a parameter), and its doc claims it’s optional even though the type is non-Option. Either remove the extra parameter and use self.recent_blockhash in to_versioned_transaction, or make the field an Option<Hash> and update the docs accordingly.

🤖 Prompt for AI Agents
In solana-core/src/transaction.rs around lines 76 to 89, the recent_blockhash
field is documented as optional but typed as a non-Option and is not used by
to_versioned_transaction; change recent_blockhash to
Option<solana_sdk::hash::Hash> and update its doc and serde attributes (keep
serde_as DisplayFromStr and add skip_serializing_if = "Option::is_none"), then
update to_versioned_transaction to read self.recent_blockhash and handle the
None case (either fetch a recent blockhash or return an error) instead of taking
a separate recent_blockhash parameter so the struct field and method usage are
consistent.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
solana-core/Cargo.toml (1)

17-23: Trim unused utoipa features.

chrono, uuid, and preserve_order aren’t exercised anywhere in this crate, so they only bloat build time and dependency surface. Drop them from the feature list.

 utoipa = { workspace = true, features = [
   "macros",
-  "chrono",
-  "uuid",
   "axum_extras",
-  "preserve_order",
 ] }
solana-core/src/transaction.rs (1)

76-135: Use the recent_blockhash field you expose.

SolanaTransaction advertises an optional recent_blockhash, yet the method ignores it and forces callers to pass a separate argument. Any blockhash supplied via the struct is silently dropped. Please either remove the field or wire it through the builder so the API behaves as documented.

@@
-    /// Optional recent blockhash (if not provided, will be fetched)
-    #[serde_as(as = "DisplayFromStr")]
-    pub recent_blockhash: solana_sdk::hash::Hash,
+    /// Optional recent blockhash (if not provided, caller must fetch one)
+    #[serde_as(as = "Option<DisplayFromStr>")]
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub recent_blockhash: Option<solana_sdk::hash::Hash>,
@@
-    pub fn to_versioned_transaction(
-        &self,
-        payer: Pubkey,
-        recent_blockhash: solana_sdk::hash::Hash,
-    ) -> Result<VersionedTransaction, SolanaTransactionError> {
+    pub fn to_versioned_transaction(
+        &self,
+        payer: Pubkey,
+    ) -> Result<VersionedTransaction, SolanaTransactionError> {
+        let recent_blockhash = self.recent_blockhash.ok_or_else(|| SolanaTransactionError::InvalidBlockhash {
+            error: "missing recent_blockhash; fetch one before building the transaction".into(),
+        })?;
🧹 Nitpick comments (2)
core/src/signer.rs (2)

568-574: Enhance documentation for this complex method.

The doc comment is too brief. Consider documenting:

  • Why transaction parameter is mutable
  • Behavior for each credential type
  • Signature injection mechanism
  • Error conditions (signer not found, unsupported credentials)

Example enhancement:

/// Sign a Solana transaction using the appropriate credential type
///
/// # Arguments
/// * `transaction` - The transaction to sign (mutable as signature is injected in-place)
/// * `from` - The public key of the signer
/// * `credentials` - The signing credentials (currently only Vault is supported)
///
/// # Returns
/// The signed transaction with the signature injected at the appropriate index
///
/// # Errors
/// * `ValidationError` if the signer is not found in the transaction's account keys
/// * `ValidationError` if using an unsupported credential type (IAW, AWS KMS, PrivateKey)
/// * `VaultError` if Vault signing fails

588-604: Standardize error messages for unsupported credential types.

The error messages are inconsistent:

  • Lines 590, 596: "does not support" (definitive)
  • Line 602: "not yet implemented" (implies future support)

For clarity, use consistent wording. If these features are planned, use "not yet implemented" for all. If they're not planned, use "does not support" for all.

Example (if all are unplanned):

 SigningCredential::PrivateKey(_) => {
-    // Private key signing for Solana would require a different signer type
     return Err(EngineError::ValidationError {
-        message: "Private key signing not yet implemented for Solana".to_string(),
+        message: "Private key does not support Solana transaction signing".to_string(),
     });
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between eda0f2c and a21ae89.

📒 Files selected for processing (3)
  • core/src/signer.rs (1 hunks)
  • solana-core/Cargo.toml (1 hunks)
  • solana-core/src/transaction.rs (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
core/src/signer.rs (3)
server/src/queue/manager.rs (1)
  • new (52-318)
thirdweb-core/src/iaw/mod.rs (4)
  • new (185-201)
  • sign_transaction (361-427)
  • from (54-58)
  • from (62-66)
core/src/error.rs (7)
  • from (434-455)
  • from (459-496)
  • from (500-514)
  • from (518-522)
  • from (812-816)
  • from (820-824)
  • from (828-832)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: coverage
  • GitHub Check: test
🔇 Additional comments (4)
core/src/signer.rs (4)

552-557: LGTM!

The struct definition follows the same pattern as EoaSigner (lines 204-207), maintaining consistency across the codebase. The public fields are acceptable given the established pattern.


559-566: LGTM!

The constructor follows the same pattern as EoaSigner::new (lines 211-216), maintaining consistency in the codebase.


607-625: Bug fix confirmed - signature assignment now works correctly!

The critical bug flagged in the previous review has been resolved. The signature assignment on line 625 now occurs unconditionally after the resize check, ensuring the signature is properly injected regardless of whether the array needed resizing.

Current flow (correct):

  • If signer_index >= signatures.len(): resize array (lines 618-623), then assign signature (line 625)
  • If signer_index < signatures.len(): skip resize, assign signature (line 625)

Both branches correctly assign the signature. Well done!


575-587: Consider removing the transaction.clone() in core/src/signer.rs:579.
If VaultClient::sign_solana_transaction can accept a &Transaction instead of taking ownership, update its signature and call site to avoid the clone.

@d4mr d4mr merged commit 6c601d8 into main Oct 11, 2025
4 of 5 checks passed
@d4mr d4mr deleted the pb/solana branch October 11, 2025 10:41