diff --git a/.github/ISSUE_TEMPLATE/suggest_faq_item.yaml b/.github/ISSUE_TEMPLATE/suggest_faq_item.yaml
index cb07e9acd..86deece12 100644
--- a/.github/ISSUE_TEMPLATE/suggest_faq_item.yaml
+++ b/.github/ISSUE_TEMPLATE/suggest_faq_item.yaml
@@ -6,7 +6,7 @@ body:
- type: markdown
attributes:
value: |
- Before submitting this suggestion, be sure to read our expectations for [FAQ content](https://docs.optimism.io/contribute/style-guide#faqs).
For an example FAQ guide with question+answer pairs, see [Security Model FAQ](https://docs.optimism.io/security/faq#security-model-faq).
+ Before submitting this suggestion, be sure to read our expectations for [FAQ content](https://docs.optimism.io/connect/contribute/style-guide#faqs).
For an example FAQ guide with question+answer pairs, see [Security Model FAQ](https://docs.optimism.io/stack/security/faq#faq).
- type: markdown
id: project_info
attributes:
diff --git a/.github/ISSUE_TEMPLATE/suggest_troubleshooting_item.yaml b/.github/ISSUE_TEMPLATE/suggest_troubleshooting_item.yaml
index 16b983b60..95ebb7d80 100644
--- a/.github/ISSUE_TEMPLATE/suggest_troubleshooting_item.yaml
+++ b/.github/ISSUE_TEMPLATE/suggest_troubleshooting_item.yaml
@@ -6,7 +6,7 @@ body:
- type: markdown
attributes:
value: |
- Before submitting this suggestion, be sure to read our expectations for [troubleshooting content](https://docs.optimism.io/contribute/style-guide#troubleshooting-guides).
For an example troubleshooting guide with problem+solution pairs, see [Troubleshooting: L2 Rollup](https://docs.optimism.io/operators/chain-operators/management/troubleshooting).
+ Before submitting this suggestion, be sure to read our expectations for [troubleshooting content](https://docs.optimism.io/connect/contribute/style-guide#troubleshooting-guides).
For an example troubleshooting guide with problem+solution pairs, see [Troubleshooting: L2 Rollup](https://docs.optimism.io/operators/chain-operators/management/troubleshooting).
- type: markdown
id: project_info
attributes:
diff --git a/.github/ISSUE_TEMPLATE/suggest_tutorial.yaml b/.github/ISSUE_TEMPLATE/suggest_tutorial.yaml
index 890af9b74..d854928d5 100644
--- a/.github/ISSUE_TEMPLATE/suggest_tutorial.yaml
+++ b/.github/ISSUE_TEMPLATE/suggest_tutorial.yaml
@@ -6,7 +6,7 @@ body:
- type: markdown
attributes:
value: |
- We'll consider [our defined content types](https://docs.optimism.io/contribute/style-guide#content-types/) when reviewing the tutorial, so please take a look there first.
+ We'll consider [our defined content types](https://docs.optimism.io/connect/contribute/style-guide#content-types) when reviewing the tutorial, so please take a look there first.
- type: markdown
id: tutorial_info
attributes:
diff --git a/pages/app-developers/tutorials/bridging/standard-bridge-custom-token.mdx b/pages/app-developers/tutorials/bridging/standard-bridge-custom-token.mdx
index 95c67c2ca..673bd2c9d 100644
--- a/pages/app-developers/tutorials/bridging/standard-bridge-custom-token.mdx
+++ b/pages/app-developers/tutorials/bridging/standard-bridge-custom-token.mdx
@@ -50,7 +50,7 @@ You will need to get some ETH on both of these testnets.
You can use [this faucet](https://sepoliafaucet.com/) to get ETH on Sepolia.
- You can use the [Superchain Faucet](https://console.optimism.io/faucet?utm_source=op-docs&utm_medium=docs) to get ETH on OP Sepolia.
+ You can use the [Superchain Faucet](https://console.optimism.io/faucet?utm_source=op-docs\&utm_medium=docs) to get ETH on OP Sepolia.
## Add OP Sepolia to your wallet
@@ -135,5 +135,5 @@ This is exactly what this tutorial was meant to demonstrate.
## Add to the Superchain Token List
The [Superchain Token List](https://github.com/ethereum-optimism/ethereum-optimism.github.io#readme) is a common list of tokens deployed on chains within the Optimism Superchain.
-This list is used by services like the [Superchain Bridges UI](https://app.optimism.io/bridge?utm_source=op-docs&utm_medium=docs).
+This list is used by services like the [Superchain Bridges UI](https://app.optimism.io/bridge?utm_source=op-docs\&utm_medium=docs).
If you want your OP Mainnet token to be included in this list, take a look at the [review process and merge criteria](https://github.com/ethereum-optimism/ethereum-optimism.github.io#review-process-and-merge-criteria).
diff --git a/pages/app-developers/tutorials/bridging/standard-bridge-standard-token.mdx b/pages/app-developers/tutorials/bridging/standard-bridge-standard-token.mdx
index 1fa7db3bd..31b973271 100644
--- a/pages/app-developers/tutorials/bridging/standard-bridge-standard-token.mdx
+++ b/pages/app-developers/tutorials/bridging/standard-bridge-standard-token.mdx
@@ -27,7 +27,7 @@ Tokens created by this factory contract are compatible with the Standard Bridge
If you want to include specialized logic within your L2 token, see the tutorial on [Bridging Your Custom ERC-20 Token Using the Standard Bridge](./standard-bridge-custom-token) instead.
-The Standard Bridge **does not** support [**fee on transfer tokens**](https://github.com/d-xo/weird-erc20#fee-on-transfer) or [**rebasing tokens**](https://github.com/d-xo/weird-erc20#balance-modifications-outside-of-transfers-rebasingairdrops) because they can cause bridge accounting errors.
+ The Standard Bridge **does not** support [**fee on transfer tokens**](https://github.com/d-xo/weird-erc20#fee-on-transfer) or [**rebasing tokens**](https://github.com/d-xo/weird-erc20#balance-modifications-outside-of-transfers-rebasingairdrops) because they can cause bridge accounting errors.
## About OptimismMintableERC20s
@@ -47,8 +47,8 @@ This tutorial explains how to create a bridged ERC-20 token on OP Sepolia.
You will need to get some ETH on both of these testnets.
-You can use [this faucet](https://sepoliafaucet.com) to get ETH on Sepolia.
-You can use the [Superchain Faucet](https://console.optimism.io/faucet?utm_source=op-docs&utm_medium=docs) to get ETH on OP Sepolia.
+ You can use [this faucet](https://sepoliafaucet.com) to get ETH on Sepolia.
+ You can use the [Superchain Faucet](https://console.optimism.io/faucet?utm_source=op-docs\&utm_medium=docs) to get ETH on OP Sepolia.
## Get an L1 ERC-20 token address
@@ -63,43 +63,41 @@ Once you have an L1 ERC-20 token, you can use the [`OptimismMintableERC20Factory
All tokens created by the factory implement the `IOptimismMintableERC20` interface and are compatible with the Standard Bridge system.
+ {<h3>Add a private key to your environment</h3>}
-{<h3>Add a private key to your environment</h3>}
+ You'll need a private key in order to sign transactions.
+ Set your private key as an environment variable with the `export` command.
+ Make sure this private key corresponds to an address that has ETH on OP Sepolia.
-You'll need a private key in order to sign transactions.
-Set your private key as an environment variable with the `export` command.
-Make sure this private key corresponds to an address that has ETH on OP Sepolia.
+ ```bash
+ export TUTORIAL_PRIVATE_KEY=0x...
+ ```
-```bash
-export TUTORIAL_PRIVATE_KEY=0x...
-```
+ {<h3>Add an OP Sepolia RPC URL to your environment</h3>}
-{<h3>Add an OP Sepolia RPC URL to your environment</h3>}
+ You'll need an RPC URL in order to connect to OP Sepolia.
+ Set your RPC URL as an environment variable with the `export` command.
-You'll need an RPC URL in order to connect to OP Sepolia.
-Set your RPC URL as an environment variable with the `export` command.
+ ```bash file=/public/tutorials/standard-bridge-standard-token.sh#L1 hash=a1c505198f7753f7d2114f4a018d7889
+ ```
-```bash file=/public/tutorials/standard-bridge-standard-token.sh#L1 hash=a1c505198f7753f7d2114f4a018d7889
-```
+ {<h3>Add your L1 ERC-20 token address to your environment</h3>}
-{<h3>Add your L1 ERC-20 token address to your environment</h3>}
+ You'll need to know the address of your L1 ERC-20 token in order to create a bridged representation of it on OP Sepolia.
+ Set your L1 ERC-20 token address as an environment variable with the `export` command.
-You'll need to know the address of your L1 ERC-20 token in order to create a bridged representation of it on OP Sepolia.
-Set your L1 ERC-20 token address as an environment variable with the `export` command.
+ ```bash file=/public/tutorials/standard-bridge-standard-token.sh#L3-L4 hash=c505f3eb6ddd80d8fbdddf4d7b17852a
+ ```
-```bash file=/public/tutorials/standard-bridge-standard-token.sh#L3-L4 hash=c505f3eb6ddd80d8fbdddf4d7b17852a
-```
+ {<h3>Deploy your L2 ERC-20 token</h3>}
-{<h3>Deploy your L2 ERC-20 token</h3>}
-
-You can now deploy your L2 ERC-20 token using the [`OptimismMintableERC20Factory`](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/universal/OptimismMintableERC20Factory.sol).
-Use the `cast` command to trigger the deployment function on the factory contract.
-This example command creates a token with the name "My Standard Demo Token" and the symbol "L2TKN".
-The resulting L2 ERC-20 token address is printed to the console.
-
-```bash file=/public/tutorials/standard-bridge-standard-token.sh#L6 hash=1ecfdc6106e0c5179b182d66b5171c2c
-```
+ You can now deploy your L2 ERC-20 token using the [`OptimismMintableERC20Factory`](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/universal/OptimismMintableERC20Factory.sol).
+ Use the `cast` command to trigger the deployment function on the factory contract.
+ This example command creates a token with the name "My Standard Demo Token" and the symbol "L2TKN".
+ The resulting L2 ERC-20 token address is printed to the console.
+ ```bash file=/public/tutorials/standard-bridge-standard-token.sh#L6 hash=1ecfdc6106e0c5179b182d66b5171c2c
+ ```
## Bridge some tokens
@@ -110,5 +108,5 @@ Check out the tutorial on [Bridging ERC-20 tokens with viem](./cross-dom-bridge-
## Add to the Superchain Token List
The [Superchain Token List](https://github.com/ethereum-optimism/ethereum-optimism.github.io#readme) is a common list of tokens deployed on chains within the Optimism Superchain.
-This list is used by services like the [Superchain Bridges UI](https://app.optimism.io/bridge?utm_source=op-docs&utm_medium=docs).
+This list is used by services like the [Superchain Bridges UI](https://app.optimism.io/bridge?utm_source=op-docs\&utm_medium=docs).
If you want your OP Mainnet token to be included in this list, take a look at the [review process and merge criteria](https://github.com/ethereum-optimism/ethereum-optimism.github.io#review-process-and-merge-criteria).
diff --git a/pages/notices/upgrade-14.mdx b/pages/notices/upgrade-14.mdx
index ab5c2ead2..0e42e8dc7 100644
--- a/pages/notices/upgrade-14.mdx
+++ b/pages/notices/upgrade-14.mdx
@@ -58,8 +58,7 @@ Key changes:
* Extended syscall support for multi-threading
* Improved exception handling for unrecognized syscalls
-After this upgrade, the on-chain implementation of the fault proof VM will be [MIPS64.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/cannon/MIPS64.sol) instead of [MIPS.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/cannon/MIPS.sol).
-
+After this upgrade, the on-chain implementation of the fault proof VM will be [MIPS64.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/cannon/MIPS64.sol) (deployed via [DeployMIPS.s.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/scripts/deploy/DeployMIPS.s.sol)) instead of [MIPS.sol](https://github.com/ethereum-optimism/optimism/blob/develop/packages/contracts-bedrock/src/cannon/MIPS.sol).
### Operator Fee
This introduces two new rollup operator configured scalars:
diff --git a/pages/operators/chain-operators/deploy.mdx b/pages/operators/chain-operators/deploy.mdx
index a707b873d..ee408564e 100644
--- a/pages/operators/chain-operators/deploy.mdx
+++ b/pages/operators/chain-operators/deploy.mdx
@@ -30,11 +30,13 @@ This section provides information on OP Stack genesis creation, deployment overv
-
+
+
+
diff --git a/pages/operators/chain-operators/deploy/_meta.json b/pages/operators/chain-operators/deploy/_meta.json
index 0a9c15516..da35b2252 100644
--- a/pages/operators/chain-operators/deploy/_meta.json
+++ b/pages/operators/chain-operators/deploy/_meta.json
@@ -4,6 +4,7 @@
"genesis": "Chain artifacts creation",
"validate-deployment": "Validate your contract deployment",
"sequencer-node": "Spinning up the sequencer",
- "proposer-setup-guide": "Spinning up the proposer"
+ "proposer-setup-guide": "Spinning up the proposer",
+ "spin-batcher": "Spinning up the batcher"
}
diff --git a/pages/operators/chain-operators/deploy/spin-batcher.mdx b/pages/operators/chain-operators/deploy/spin-batcher.mdx
new file mode 100644
index 000000000..ec16cd85d
--- /dev/null
+++ b/pages/operators/chain-operators/deploy/spin-batcher.mdx
@@ -0,0 +1,449 @@
+---
+title: Spinning up the batcher
+lang: en-US
+description: Learn how to set up and configure an OP Stack batcher to submit L2 transaction batches to L1.
+content_type: tutorial
+topic: batcher-setup
+personas:
+ - chain-operator
+categories:
+ - testnet
+ - mainnet
+ - op-batcher
+ - batch-submission
+ - l2-to-l1-data
+ - transaction-batching
+is_imported_content: 'false'
+---
+
+import { Callout, Steps } from 'nextra/components'
+
+# Spinning up the batcher
+
+After you have spun up your [sequencer](/operators/chain-operators/deploy/sequencer-node), you need to configure a batcher to submit L2 transaction batches to L1. The batcher is a critical component that publishes L2 transaction data to L1, providing the data availability needed to reconstruct the L2 state.
+
+This guide assumes you already have a functioning sequencer and the necessary L1 contracts deployed using [`op-deployer`](/operators/chain-operators/tools/op-deployer). If you haven't set up your sequencer yet, please refer to the [sequencer guide](/operators/chain-operators/deploy/sequencer-node) first.
+
+## Understanding the batcher's role
+
+The batcher (`op-batcher`) serves as a crucial component that bridges your L2 chain data to L1. Its primary responsibilities include:
+
+* **Batch submission**: Collecting L2 transactions and submitting them as batches to L1
+* **Data availability**: Ensuring L2 transaction data is available on L1 for verification
+* **Cost optimization**: Compressing and efficiently packing transaction data to minimize L1 costs
+* **Channel management**: Managing data channels for optimal batch submission timing
+
+The batcher reads transaction data from your sequencer and submits compressed batches to the `BatchInbox` contract on L1.
+
+## Prerequisites
+
+Before setting up your batcher, ensure you have:
+
+**Running infrastructure:**
+
+* An operational sequencer node
+* Access to an L1 RPC endpoint
+
+**Network information:**
+
+* Your L2 chain ID and network configuration
+* L1 network details (chain ID, RPC endpoints)
+* `BatchInbox` contract address from your deployment
+
+## Software installation
+
+### Finding the current stable releases
+
+To ensure you're using the latest compatible versions of OP Stack components, always check the official [releases page](https://github.com/ethereum-optimism/optimism/releases).
+
+Look for the latest `op-batcher/v*` release that's compatible with your sequencer setup.
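+
+If you prefer the command line, you can list recent batcher release tags via the public GitHub releases API (a sketch; assumes `curl` and `jq` are installed):
+
+```bash
+# Show the five most recent op-batcher release tags
+curl -s "https://api.github.com/repos/ethereum-optimism/optimism/releases?per_page=100" \
+  | jq -r '.[].tag_name | select(startswith("op-batcher/"))' \
+  | head -n 5
+```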
+
+
+ This guide uses `op-batcher/v1.13.1` which is compatible with op-node/v1.13.3 and op-geth/v1.101511.0 from the sequencer setup.
+ Always check the [release notes](https://github.com/ethereum-optimism/optimism/releases) for compatibility information.
+
+
+### Build from source
+
+Clone and build op-batcher:
+
+```bash
+# If you don't already have the optimism repository from the sequencer setup
+git clone https://github.com/ethereum-optimism/optimism.git
+cd optimism
+
+# Checkout the latest release tag
+git checkout op-batcher/v1.13.1
+
+# Build op-batcher
+cd op-batcher
+just
+
+# Binary will be available at ./bin/op-batcher
+```
+
+### Verify installation
+
+Run this command to verify the installation:
+
+```bash
+./bin/op-batcher --version
+```
+
+### Docker alternative (for containerized environments)
+
+If you prefer containerized deployment, you can use the official Docker images.
+
+### Complete Docker setup guide
+
+
+ Complete Docker setup guide
+
+ If you choose the Docker approach, you'll need to:
+
+ 1. **Set up directory structure and copy configuration files:**
+
+ ```bash
+ # Create your batcher working directory
+ mkdir ~/batcher-node
+ cd ~/batcher-node
+
+ # Copy configuration files from op-deployer output
+ # Note: Adjust the path if your .deployer directory is located elsewhere
+ cp ~/.deployer/state.json .
+
+ # Extract the BatchInbox address
+ BATCH_INBOX_ADDRESS=$(cat state.json | jq -r '.opChainDeployments[0].systemConfigProxyAddress')
+ echo "BatchInbox Address: $BATCH_INBOX_ADDRESS"
+ ```
+
+ 2. **Create environment variables file:**
+
+ ```bash
+ # Create .env file with your actual values
+ cat > .env << 'EOF'
+ # L1 Configuration - Replace with your actual RPC URLs
+ L1_RPC_URL=https://sepolia.infura.io/v3/YOUR_ACTUAL_INFURA_KEY
+
+ # L2 Configuration - Should match your sequencer setup
+ L2_RPC_URL=http://sequencer-node:8545
+ ROLLUP_RPC_URL=http://sequencer-node:8547
+
+ # Contract addresses - Extract from your op-deployer output
+ BATCH_INBOX_ADDRESS=YOUR_ACTUAL_BATCH_INBOX_ADDRESS
+
+ # Private key - Replace with your actual private key
+ BATCHER_PRIVATE_KEY=0xYOUR_ACTUAL_PRIVATE_KEY
+
+ # Batcher configuration
+ POLL_INTERVAL=1s
+ SUB_SAFETY_MARGIN=6
+ NUM_CONFIRMATIONS=1
+ SAFE_ABORT_NONCE_TOO_LOW_COUNT=3
+ RESUBMISSION_TIMEOUT=30s
+ MAX_CHANNEL_DURATION=25
+
+ # RPC configuration
+ BATCHER_RPC_PORT=8548
+ EOF
+ ```
+
+ **Important**: Replace ALL placeholder values (`YOUR_ACTUAL_*`) with your real configuration values.
+
+ 3. **Create docker-compose.yml:**
+
+
+ This configuration assumes your sequencer is running in a Docker container named `sequencer-node` on the same `op-stack` network.
+ Make sure your sequencer is running before starting the batcher.
+
+
+ ```yaml
+ version: '3.8'
+
+services:
+ op-batcher:
+ image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:v1.13.1
+ volumes:
+ - .:/workspace
+ working_dir: /workspace
+ ports:
+ - "8548:8548"
+ env_file:
+ - .env
+ command:
+ - "op-batcher"
+ - "--l2-eth-rpc=${L2_RPC_URL}"
+ - "--rollup-rpc=${ROLLUP_RPC_URL}"
+ - "--poll-interval=${POLL_INTERVAL}"
+ - "--sub-safety-margin=${SUB_SAFETY_MARGIN}"
+ - "--num-confirmations=${NUM_CONFIRMATIONS}"
+ - "--safe-abort-nonce-too-low-count=${SAFE_ABORT_NONCE_TOO_LOW_COUNT}"
+ - "--resubmission-timeout=${RESUBMISSION_TIMEOUT}"
+ - "--rpc.addr=0.0.0.0"
+ - "--rpc.port=${BATCHER_RPC_PORT}"
+ - "--rpc.enable-admin"
+ - "--max-channel-duration=${MAX_CHANNEL_DURATION}"
+ - "--l1-eth-rpc=${L1_RPC_URL}"
+ - "--private-key=${BATCHER_PRIVATE_KEY}"
+ - "--batch-type=1"
+ - "--data-availability-type=blobs"
+ - "--compress"
+ - "--log.level=info"
+ restart: unless-stopped
+ ```
+
+ 4. **Start the batcher service:**
+
+ ```bash
+ # Make sure your sequencer network exists
+ docker network create op-stack 2>/dev/null || true
+
+ # Start the batcher
+ docker-compose up -d
+
+ # View logs
+ docker-compose logs -f op-batcher
+ ````
+
+ 5. **Verify batcher is running:**
+
+ ```bash
+ # Check batcher RPC is responding
+ curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"admin_startBatcher","params":[],"id":1}' \
+ http://localhost:8548
+
+ # Check container status
+ docker-compose ps
+ ```
+
+ 6. **Final directory structure:**
+
+ ```bash
+ ~/batcher-node/
+ ├── state.json # Copied from ~/.deployer/
+ ├── .env # Environment variables
+ └── docker-compose.yml # Docker configuration
+ ```
+
+
+
+ The rest of this guide assumes you're using the **build-from-source** approach.
+ If you chose Docker, refer to the collapsible section.
+
+
+## Configuration setup
+
+### 1. Organize your workspace
+
+Create your batcher working directory:
+
+```bash
+# Create batcher directory at the same level as your sequencer
+mkdir batcher-node
+cd batcher-node
+
+# Create scripts directory
+mkdir scripts
+```
+
+Your final directory structure should look like:
+
+```bash
+~/
+├── optimism/ # Contains op-batcher binary
+├── sequencer-node/ # Your sequencer setup
+├── proposer-node/ # Your proposer setup
+├── .deployer/ # From op-deployer
+│ └── state.json
+└── batcher-node/ # Your batcher working directory
+ ├── state.json # Copied from .deployer
+ ├── .env
+ └── scripts/
+ └── start-batcher.sh
+```
+
+### 2. Extract `BatchInbox` address
+
+Extract the `BatchInbox` contract address from your op-deployer output:
+
+```bash
+# Navigate to batcher directory
+cd ~/batcher-node
+
+# Copy the deployment state file from op-deployer
+# Update the path if your .deployer directory is located elsewhere
+cp ../.deployer/state.json .
+
+# Extract the BatchInbox address
+BATCH_INBOX_ADDRESS=$(cat state.json | jq -r '.opChainDeployments[0].systemConfigProxyAddress')
+echo "BatchInbox Address: $BATCH_INBOX_ADDRESS"
+```
+
+
+ The batcher submits transaction batches to the `BatchInbox` address on L1. Rollup nodes read the data posted to this address to reconstruct the L2 chain.
+
+
+### 3. Set up environment variables
+
+Create your `.env` file with the actual values:
+
+```bash
+# Create .env file with your actual values
+# L1 Configuration - Replace with your actual RPC URL
+L1_RPC_URL=https://sepolia.infura.io/v3/YOUR_ACTUAL_INFURA_KEY
+
+# L2 Configuration - Should match your sequencer setup
+L2_RPC_URL=http://localhost:8545
+ROLLUP_RPC_URL=http://localhost:8547
+
+# Contract addresses - Extract from your op-deployer output
+BATCH_INBOX_ADDRESS=YOUR_ACTUAL_BATCH_INBOX_ADDRESS
+
+# Private key - Replace with your actual private key
+BATCHER_PRIVATE_KEY=0xYOUR_ACTUAL_PRIVATE_KEY
+
+# Batcher configuration
+POLL_INTERVAL=1s
+SUB_SAFETY_MARGIN=6
+NUM_CONFIRMATIONS=1
+SAFE_ABORT_NONCE_TOO_LOW_COUNT=3
+RESUBMISSION_TIMEOUT=30s
+MAX_CHANNEL_DURATION=25
+
+# RPC configuration
+BATCHER_RPC_PORT=8548
+```
+
+**Important**: Replace ALL placeholder values (`YOUR_ACTUAL_*`) with your real configuration values!
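+
+A quick way to catch any placeholder you missed (a simple sketch using `grep`; it prints nothing when every value has been replaced):
+
+```bash
+# Any line printed here still contains a placeholder
+grep "YOUR_ACTUAL" .env
+```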
+
+### 4. Get your private key
+
+Get a private key from your wallet that will be used for submitting batches to L1. This account needs sufficient ETH to pay for L1 gas costs.
+
+
+ The batcher account needs to be funded with ETH on L1 to pay for batch submission transactions. Monitor this account's balance regularly as it will consume ETH for each batch submission.
+
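+
+Before starting the batcher, you can confirm the account is funded (a sketch using Foundry's `cast`, assuming the values from your `.env` file are exported in your shell):
+
+```bash
+# Derive the batcher address from the private key
+BATCHER_ADDRESS=$(cast wallet address --private-key $BATCHER_PRIVATE_KEY)
+
+# Check its L1 balance (printed in wei)
+cast balance $BATCHER_ADDRESS --rpc-url $L1_RPC_URL
+```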
+
+## Batcher configuration
+
+Create `scripts/start-batcher.sh`:
+
+```bash
+#!/bin/bash
+
+source .env
+
+# Path to the op-batcher binary we built
+../optimism/op-batcher/bin/op-batcher \
+ --l2-eth-rpc=$L2_RPC_URL \
+ --rollup-rpc=$ROLLUP_RPC_URL \
+ --poll-interval=$POLL_INTERVAL \
+ --sub-safety-margin=$SUB_SAFETY_MARGIN \
+ --num-confirmations=$NUM_CONFIRMATIONS \
+ --safe-abort-nonce-too-low-count=$SAFE_ABORT_NONCE_TOO_LOW_COUNT \
+ --resubmission-timeout=$RESUBMISSION_TIMEOUT \
+ --rpc.addr=0.0.0.0 \
+ --rpc.port=$BATCHER_RPC_PORT \
+ --rpc.enable-admin \
+ --max-channel-duration=$MAX_CHANNEL_DURATION \
+ --l1-eth-rpc=$L1_RPC_URL \
+ --private-key=$BATCHER_PRIVATE_KEY \
+ --batch-type=1 \
+ --data-availability-type=blobs \
+ --compress \
+ --log.level=info
+```
+
+### Batcher parameters explained
+
+* **`--poll-interval`**: How frequently the batcher checks for new L2 blocks to batch
+* **`--sub-safety-margin`**: Number of confirmations to wait before considering L1 transactions safe
+* **`--max-channel-duration`**: Maximum time (in L1 blocks) to keep a channel open
+* **`--batch-type`**: Type of batch encoding (1 for span batches, 0 for singular batches)
+* **`--data-availability-type`**: Whether to use blobs or calldata for data availability
+* **`--compress`**: Enable compression to reduce L1 data costs
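+
+To put the channel-duration units in perspective (a back-of-the-envelope sketch assuming 12-second L1 slots, as on Sepolia and Ethereum mainnet), `MAX_CHANNEL_DURATION=25` keeps a channel open for at most about five minutes:
+
+```bash
+# 25 L1 blocks x 12 seconds per block = 300 seconds (5 minutes)
+echo "$((25 * 12)) seconds"
+```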
+
+## Starting the batcher
+
+### 1. Verify prerequisites
+
+Ensure your sequencer and rollup node are running:
+
+
+ ### Test L1 connectivity
+
+ ```bash
+ # Note: Make sure you have exported these environment variables to your current shell session:
+ # export L1_RPC_URL="https://sepolia.infura.io/v3/YOUR_KEY"
+ # export L2_RPC_URL="http://localhost:8545"
+ # export ROLLUP_RPC_URL="http://localhost:8547"
+
+ curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
+ $L1_RPC_URL
+ ```
+
+ ### Test L2 connectivity
+
+ ```bash
+ curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
+ $L2_RPC_URL
+ ```
+
+ ### Test rollup node connectivity
+
+ ```bash
+ curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
+ $ROLLUP_RPC_URL
+ ```
+
+
+### 2. Start the batcher
+
+```bash
+# Make the script executable
+chmod +x scripts/start-batcher.sh
+
+# Start the batcher
+./scripts/start-batcher.sh
+```
+
+## Verification
+
+Verify your batcher is working correctly:
+
+### Check batcher status
+
+```bash
+# Check batcher RPC is responding
+curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"admin_startBatcher","params":[],"id":1}' \
+ http://localhost:8548
+
+# Monitor batch submission activity (check L1 for recent transactions from your batcher address)
+# Replace with your actual batcher address
+curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params":["0xYOUR_BATCHER_ADDRESS","latest"],"id":1}' \
+ $L1_RPC_URL
+
+# Check if your batcher address has enough ETH for gas
+curl -X POST -H "Content-Type: application/json" \
+ --data '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0xYOUR_BATCHER_ADDRESS","latest"],"id":1}' \
+ $L1_RPC_URL
+```
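+
+Beyond balances, the clearest signal that batches are landing on L1 is the rollup node's safe head advancing. A sketch using the `optimism_syncStatus` query from earlier (assumes `jq` is installed; field names follow the standard op-node sync status response):
+
+```bash
+# safe_l2 trails unsafe_l2 but should keep advancing;
+# if safe_l2 stalls while unsafe_l2 grows, batches are not being submitted.
+curl -s -X POST -H "Content-Type: application/json" \
+  --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
+  $ROLLUP_RPC_URL | jq '{unsafe_l2: .result.unsafe_l2.number, safe_l2: .result.safe_l2.number}'
+```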
+
+
+ For detailed cost analysis and optimization strategies, refer to the [Fee calculation tools](/operators/chain-operators/tools/fee-calculator).
+
+
+## Next steps
+
+* For detailed parameter documentation, see the [batcher configuration reference](/operators/chain-operators/configuration/batcher).
+* For monitoring and metrics setup, check the [chain monitoring guide](/operators/chain-operators/tools/chain-monitoring).
+* For cost optimization strategies, refer to the [Fee calculation tools](/operators/chain-operators/tools/fee-calculator).
+* Consider setting up the [op-challenger](/operators/chain-operators/tutorials/dispute-games) for a complete fault proof system.
+
+Your batcher is now operational and will continuously submit L2 transaction batches to L1!
diff --git a/pages/operators/chain-operators/tools/op-deployer.mdx b/pages/operators/chain-operators/tools/op-deployer.mdx
index 6abf5eb8c..0c450d04c 100644
--- a/pages/operators/chain-operators/tools/op-deployer.mdx
+++ b/pages/operators/chain-operators/tools/op-deployer.mdx
@@ -355,6 +355,16 @@ op-deployer inspect deploy-config --workdir .deployer # outputs the
op-deployer inspect l2-semvers --workdir .deployer # outputs the semvers for all L2 chains
```
+## Upgrade usage
+
+The `upgrade` command in `op-deployer` simplifies the process of upgrading existing OP Stack chains from one version to the next. It works similarly to database migrations: each upgrade command takes a chain from exactly one version to the next version in the sequence.
+
+Unlike the `bootstrap` or `apply` commands, the `upgrade` command doesn't interact with the chain directly. Instead, it generates calldata that you then execute with tools like `cast`, a Gnosis Safe, or whatever wallet management system you use for L1 operations. This approach gives you flexibility in how you execute the upgrade while keeping security within your existing operational procedures.
+
+Chains that are several versions behind the latest can be upgraded by running multiple upgrade commands in sequence, with each command handling one version increment. For compatibility, the upgrade process requires that your chain uses the standard OP Contracts Manager and the standard shared SuperchainConfig contract.
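+
+As a rough sketch of the execution step (the variable names below are illustrative placeholders; the actual target address and calldata come from the `upgrade` command's output, and a Gnosis Safe flow would replace the direct `cast send`):
+
+```bash
+# Execute upgrade calldata generated by `op-deployer upgrade` using cast.
+# UPGRADE_TARGET and UPGRADE_CALLDATA are placeholders taken from the command's output.
+cast send "$UPGRADE_TARGET" "$UPGRADE_CALLDATA" \
+  --rpc-url "$L1_RPC_URL" \
+  --private-key "$PRIVATE_KEY"
+```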
+
+For detailed instructions on using the upgrade command, including configuration examples and step-by-step procedures, see the [upgrade documentation](https://docs.optimism.io/stack/smart-contracts/op-deployer-upgrade#using-upgrade).
+
## Bootstrap usage
`op-deployer` provides a set of bootstrap commands specifically designed for initializing a new superchain target on an L1 network. These commands are essential when you're setting up a completely new superchain target environment rather than deploying a new chain on an existing superchain.
diff --git a/words.txt b/words.txt
index ef1614cb0..19a182f51 100644
--- a/words.txt
+++ b/words.txt
@@ -1,7 +1,7 @@
-accountqueue
ACCOUNTQUEUE
-accountslots
+accountqueue
ACCOUNTSLOTS
+accountslots
ACDC
ADDI
ADDIU
@@ -9,58 +9,58 @@ ADDU
airgap
Allnodes
allocs
-alphanet
Alphanet
-alphanets
+alphanet
Alphanets
+alphanets
altda
ANDI
Ankr
Apeworx
Arweave
authrpc
-autorelay
Autorelay
+autorelay
autorelayer
basefee
bcde
-betanet
Betanet
-betanets
+betanet
Betanets
+betanets
BGEZ
BGTZ
Biconomy
BLEZ
-blobpool
BLOBPOOL
+blobpool
blobspace
Blockdaemon
blockhash
blocklists
-blocklogs
BLOCKLOGS
-blockprofilerate
+blocklogs
BLOCKPROFILERATE
+blockprofilerate
Blockscout
-blockspace
Blockspace
+blockspace
blocktime
-blocktimes
Blocktimes
-bloomfilter
+blocktimes
BLOOMFILTER
+bloomfilter
BLTZ
Bootcamp
bootnode
-bootnodes
-Bootnodes
BOOTNODES
+Bootnodes
+bootnodes
bottlenecked
-brotli
Brotli
-callouts
+brotli
Callouts
+callouts
CCIP
cdef
Celestia
@@ -73,66 +73,66 @@ chaosnet
Chugsplash
Clabby
codebases
-collateralized
Collateralized
+collateralized
compr
Comprensive
-computependingblock
COMPUTEPENDINGBLOCK
+computependingblock
confs
corsdomain
counterfactually
-crosschain
Crosschain
+crosschain
Crossmint
daserver
-datacap
DATACAP
-datadir
+datacap
DATADIR
-delegatecall
+datadir
Defi
Defillama's
-devnet
+delegatecall
Devnet
-devnets
+devnet
Devnets
+devnets
devs
direnv
-disabletxpoolgossip
DISABLETXPOOLGOSSIP
-discv
+disabletxpoolgossip
Discv
+discv
DIVU
Drand
dripcheck
Drippie
Eigen
EIPs
-enabledeprecatedpersonal
ENABLEDEPRECATEDPERSONAL
+enabledeprecatedpersonal
enginekind
-erigon
Erigon
-etherbase
+erigon
ETHERBASE
+etherbase
Ethernity
Ethernow
-ethstats
ETHSTATS
-evmtimeout
+ethstats
EVMTIMEOUT
+evmtimeout
executability
exfiltrate
-exitwhensynced
EXITWHENSYNCED
+exitwhensynced
extensibly
-extradata
EXTRADATA
+extradata
Farcaster
Faultproof
-fdlimit
FDLIMIT
+fdlimit
Flashblocks
Flashbots
forkable
@@ -141,51 +141,51 @@ FPVM
FPVMs
Fraxtal
Funct
-gascap
GASCAP
+gascap
gaslessly
-gcmode
GCMODE
+gcmode
Gelato
gifs
-globalqueue
GLOBALQUEUE
-globalslots
+globalqueue
GLOBALSLOTS
+globalslots
gokzg
growthepie
hardfork
hardforks
-healthcheck
HEALTHCHECK
+healthcheck
healthchecks
-historicalrpc
HISTORICALRPC
-historicalrpctimeout
+historicalrpc
HISTORICALRPCTIMEOUT
-holesky
-Holesky
+historicalrpctimeout
HOLESKY
+Holesky
+holesky
IERC
-ignoreprice
IGNOREPRICE
+ignoreprice
Immunefi
-inator
Inator
-influxdbv
+inator
INFLUXDBV
+influxdbv
initcode
-ipcdisable
IPCDISABLE
+ipcdisable
ipcfile
-ipcpath
IPCPATH
+ipcpath
IPFS
JALR
-journalremotes
JOURNALREMOTES
-jspath
+journalremotes
JSPATH
+jspath
jwtsecret
Keccak
leveldb
@@ -194,34 +194,34 @@ Lisk
logfile
logfmt
Mainnets
-maxage
MAXAGE
-maxbackups
+maxage
MAXBACKUPS
-maxpeers
+maxbackups
MAXPEERS
-maxpendpeers
+maxpeers
MAXPENDPEERS
-maxprice
+maxpendpeers
MAXPRICE
-memprofilerate
+maxprice
MEMPROFILERATE
-merkle
+memprofilerate
Merkle
+merkle
MFHI
MFLO
Mgas
Minato
-minfreedisk
MINFREEDISK
-minsuggestedpriorityfee
+minfreedisk
MINSUGGESTEDPRIORITYFEE
+minsuggestedpriorityfee
Mintable
Mintplex
MIPSEVM
Mitigations
-monitorism
Monitorism
+monitorism
Moralis
Mordor
mountpoint
@@ -231,144 +231,144 @@ MTHI
MTLO
MULT
multiaddr
-multichain
Multichain
+multichain
multiclient
multisigs
MULTU
nethermind
-netrestrict
NETRESTRICT
-networkid
+netrestrict
NETWORKID
-newpayload
+networkid
NEWPAYLOAD
+newpayload
nextra
-nocompaction
NOCOMPACTION
-nodekey
+nocompaction
NODEKEY
-nodekeyhex
+nodekey
NODEKEYHEX
+nodekeyhex
nodename
Nodies
-nodiscover
NODISCOVER
-nolocals
+nodiscover
NOLOCALS
-noprefetch
+nolocals
NOPREFETCH
-nopruning
+noprefetch
NOPRUNING
-nosyncserve
+nopruning
NOSYNCSERVE
+nosyncserve
Numba
NVME
-offchain
Offchain
+offchain
opchaina
opchainb
-opcm
OPCM
+opcm
Openfort
oplabs
opnode's
outfile
outperformance
pcscdpath
-pectra
Pectra
+pectra
Pectra's
-peerstore
Peerstore
+peerstore
peerstores
-permissioned
Permissioned
+permissioned
permissioning
-permissionless
Permissionless
+permissionless
permissionlessly
Perps
Peta
Pimlico
POAP
POAPs
-pprof
PPROF
-precommitments
+pprof
Precommitments
+precommitments
preconfigured
predeploy
-predeployed
Predeployed
-predeploys
+predeployed
Predeploys
+predeploys
prefunded
-preimage
Preimage
-preimages
+preimage
PREIMAGES
+preimages
preinstall
-preinstalls
Preinstalls
-prestate
+preinstalls
Prestate
+prestate
prestates
PREVRANDAO
-pricebump
PRICEBUMP
-pricelimit
+pricebump
PRICELIMIT
+pricelimit
productionize
productionized
Protip
Proxied
-proxyd
Proxyd
+proxyd
Pyth
Pyth's
QRNG
-quicknode
Quicknode
+quicknode
quickstarts
rebalancing
reemit
Reemitting
-regenesis
Regenesis
+regenesis
Reimagine
-rejournal
REJOURNAL
-remotedb
+rejournal
REMOTEDB
+remotedb
Reown
Reown's
replayability
replayor
reposts
reproven
-requiredblocks
REQUIREDBLOCKS
+requiredblocks
rollouts
-rollups
Rollups
+rollups
Routescan
rpckind
-rpcprefix
RPCPREFIX
+rpcprefix
rpcs
RPGF
-runbooks
Runbooks
+runbooks
RWAs
safedb
Schnorr
-sepolia
-Sepolia
SEPOLIA
+Sepolia
+sepolia
seqnr
-sequencerhttp
SEQUENCERHTTP
+sequencerhttp
serv
signup
SLLV
@@ -377,16 +377,16 @@ SLTIU
SLTU
smartcard
snapshotlog
-snapsync
Snapsync
+snapsync
Solana
Soneium
soyboy
Spearbit
SRAV
SRLV
-stablecoins
Stablecoins
+stablecoins
statefulset
structs
subcomponents
@@ -395,21 +395,21 @@ subheaders
subsecond
SUBU
Sunnyside
-superchain
-Superchain
SUPERCHAIN
+Superchain
+superchain
Superchain's
superchainerc
Superlend
Superloans
Superscan
Superseed
-supersim
Supersim
-syncmode
+supersim
SYNCMODE
-synctarget
+syncmode
SYNCTARGET
+synctarget
syscalls
SYSCON
thirdweb
@@ -423,8 +423,8 @@ Twei
txfeecap
txmgr
txns
-txpool
TXPOOL
+txpool
txproxy
txproxyd
uncensorable
@@ -435,21 +435,21 @@ Unprotect
unsubmitted
UPNP
upstreaming
-verkle
VERKLE
-vhosts
+verkle
VHOSTS
-viem
+vhosts
Viem
-viem's
+viem
Viem's
-vmdebug
+viem's
VMDEBUG
-vmodule
+vmdebug
VMODULE
+vmodule
xlarge
XORI
ZKPs
ZKVM
-zora
Zora
+zora