
Commit 66039cc

core: increase the size limit of header RLP for chain accessor
**Problem**

Chain skeleton sync (batched header sync) is impossible with the current block header size. Once a new stale peer tries to sync its headers, the following errors occur during skeleton filling (irrelevant logs are omitted):

```
DEBUG[02-20|14:00:46.499] Fetching batch of headers id=8205ce1a7d69b885 conn=staticdial count=1 fromnum=4 skip=0 reverse=false
DEBUG[02-20|14:00:46.908] Fetching batch of headers id=8205ce1a7d69b885 conn=staticdial count=1 fromnum=2 skip=0 reverse=false
DEBUG[02-20|14:00:47.153] Fetching batch of headers id=8205ce1a7d69b885 conn=staticdial count=1 fromnum=1 skip=0 reverse=false
DEBUG[02-20|14:00:47.426] Found common ancestor peer=8205ce1a number=0 hash=000000..000000
DEBUG[02-20|14:00:47.426] Directing header downloads peer=8205ce1a origin=1
TRACE[02-20|14:00:47.427] Fetching skeleton headers peer=8205ce1a count=192 from=1
DEBUG[02-20|14:00:47.427] Fetching batch of headers id=8205ce1a7d69b885 conn=staticdial count=128 fromnum=192 skip=191 reverse=false
DEBUG[02-20|14:00:47.427] Downloading receipts origin=1
DEBUG[02-20|14:00:47.427] Downloading block bodies origin=1
DEBUG[02-20|14:00:48.255] Filling up skeleton from=1
TRACE[02-20|14:00:48.256] Requesting new batch of headers peer=8205ce1a from=1
DEBUG[02-20|14:00:48.256] Fetching batch of headers id=8205ce1a7d69b885 conn=staticdial count=192 fromnum=1 skip=0 reverse=false
...
TRACE[02-20|14:00:48.657] Skeleton filling not accepted peer=8205ce1a7d69b885 from=1
DEBUG[02-20|14:00:48.657] Failed to deliver retrieved headers peer=8205ce1a err="delivery not accepted"
TRACE[02-20|14:00:48.657] Requesting new batch of headers peer=8205ce1a from=193
DEBUG[02-20|14:00:48.657] Fetching batch of headers id=8205ce1a7d69b885 conn=staticdial count=192 fromnum=193 skip=0 reverse=false
...
```

So a "delivery not accepted" error occurs when the node attempts to build the skeleton from the received batch of headers. Extended logs show that every delivered batch contains only one header (the last one of the requested batch):

```
TRACE[02-20|14:00:48.656] Delivering headers accepted=false len(headers)=1 headers[0].Hash=5bb77b..06997d headers[0].Number=192 MaxHeaderFetch=192
```

Based on this, it is impossible to build the skeleton.

**Explanation**

The problem traces back to the node's peer. During header retrieval, the peer fetches the last header of the batch from its cache and tries to retrieve the rest from ancients. Given the old maximum RLP header size constraint of 700 bytes per header, the peer cannot retrieve the whole set of requested headers (MaxHeaderFetch, i.e. 192 headers, each at least 1006 bytes). Since the number of retrieved headers does not match the requested one, the chain accessor returns only the single header that was retrieved from the cache:

```go
// read remaining from ancients
max := count * 700
data, err := db.AncientRange(ChainFreezerHeaderTable, i+1-count, count, max)
if err == nil && uint64(len(data)) == count {
	// the data is on the order [h, h+1, .., n] -- reordering needed
	for i := range data {
		rlpHeaders = append(rlpHeaders, data[len(data)-1-i])
	}
}
return rlpHeaders
```

**Solution**

Since the `ReadHeaderRange` function specifies that the caller should limit the number of requested headers to prevent DoS attacks, it is safe to remove the maximum-bytes constraint.

**Additional notes**

After the fix I found that almost the same code change is present in the original Geth commit ported to the NeoX node implementation in #130: 447945e. Thus, the problem should not be reproducible on a post-sync NeoX node.

Signed-off-by: Anna Shaleva <[email protected]>
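As a purely illustrative aside, the arithmetic behind the failure can be checked with a few lines of Go. The 700-byte cap comes from the removed line, while the 192-header batch (MaxHeaderFetch) and the 1006-byte minimum header size come from the logs above; none of the names below exist in the code base.

```go
package main

import "fmt"

func main() {
	const (
		maxHeaderFetch   = 192  // skeleton batch size seen in the logs (MaxHeaderFetch)
		minHeaderRLPSize = 1006 // smallest observed header RLP size on this chain
		oldPerHeaderCap  = 700  // per-header byte budget from the removed `max := count * 700`
	)

	budget := maxHeaderFetch * oldPerHeaderCap  // what AncientRange was allowed to return
	needed := maxHeaderFetch * minHeaderRLPSize // what a full batch actually occupies

	fmt.Printf("byte budget %d < bytes needed %d: %v\n", budget, needed, budget < needed)
	// Because the budget is exceeded, AncientRange returns fewer than `count`
	// items, the length check in ReadHeaderRange fails, and the caller is left
	// with only the single header that was read from the live key-value store.
}
```

Running it prints a budget of 134400 bytes against 193152 bytes actually required, which is why only the cached header ever reaches the skeleton filler.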
1 parent 81ea1ce

File tree

1 file changed: +1, -2 lines


core/rawdb/accessors_chain.go

Lines changed: 1 addition & 2 deletions
```diff
@@ -334,8 +334,7 @@ func ReadHeaderRange(db ethdb.Reader, number uint64, count uint64) []rlp.RawValue
 		return rlpHeaders
 	}
 	// read remaining from ancients
-	max := count * 700
-	data, err := db.AncientRange(ChainFreezerHeaderTable, i+1-count, count, max)
+	data, err := db.AncientRange(ChainFreezerHeaderTable, i+1-count, count, 0)
 	if err == nil && uint64(len(data)) == count {
 		// the data is on the order [h, h+1, .., n] -- reordering needed
 		for i := range data {
```
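With the byte cap gone, the DoS protection mentioned in the function's documentation rests entirely on the caller bounding `count`. Below is a minimal sketch of what such a guard could look like; `maxHeadersServe` and `serveHeaderRange` are hypothetical names invented for this example, and the import paths assume the upstream go-ethereum layout rather than the NeoX fork's module path.

```go
package example

import (
	"github.com/ethereum/go-ethereum/core/rawdb"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/rlp"
)

// maxHeadersServe is an assumed upper bound on a single headers request; in a
// real node this limit belongs to the networking layer that serves peers.
const maxHeadersServe = 1024

// serveHeaderRange clamps the requested count before touching the database,
// so that dropping the byte limit inside ReadHeaderRange cannot be abused to
// pull an unbounded amount of ancient data in one call.
func serveHeaderRange(db ethdb.Reader, last, count uint64) []rlp.RawValue {
	if count > maxHeadersServe {
		count = maxHeadersServe
	}
	return rawdb.ReadHeaderRange(db, last, count)
}
```

The fix passes 0 as AncientRange's byte budget, i.e. no size cap on the freezer read, which is exactly why the request-size bound has to live on the caller's side.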
