Mempool Archive

Blocknative actively maintains the most comprehensive historical dataset of mempool transaction events within the Ethereum ecosystem. This collection contains transaction detection events since November 1st, 2019.

  • Blocknative logs all mempool transactions from nodes in multiple geographical regions for the Ethereum mainnet blockchain.

  • It is updated daily at 13:00 UTC. A typical update contains around 11M events at roughly 12 GB, though the heaviest days on the network can reach 41M events and 300 GB.

  • This uninterrupted dataset covers major scenarios the network has encountered over the years, including massive surges in traffic, huge gas spikes, bidding wars, the launch of MEV-boost, the price of ETH collapsing, EIP-1559, Black Thursday, and major hacks.

  • This data covers 27 data fields, such as gas details, input data, time pending in the mempool, failure reasons, and regional timestamps for each instance seen by our global network of nodes.

  • Our self-operated infrastructure provides the earliest detection times from North America, Asia, and Europe.

Getting Started

Each date is partitioned into its own folder named in YYYYMMDD format. Within each date partition there are 24 files, named by the two-digit hour (e.g., 02.csv.gz) in which the transaction event was detected. These files are tab-delimited, gzipped CSVs.

For example, to access transactions from June 16th, 2023, 12pm-1pm UTC, the URL would be: archive.blocknative.com/20230616/12.csv.gz
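The same path can be assembled in a shell, which is handy when scripting downloads. A minimal sketch using the example date and hour above:

```shell
# Build the hourly slice URL for June 16th, 2023, 12pm-1pm
DATE="20230616"   # YYYYMMDD partition
HOUR="12"         # two-digit hour, 00-23
URL="https://archive.blocknative.com/${DATE}/${HOUR}.csv.gz"
echo "$URL"
```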

How to download

Query, download, and store the data slices locally using the command below (-O saves the response under its remote filename):

 curl -O https://archive.blocknative.com/YYYYMMDD/HH.csv.gz

Fetching a full day of data

Here is a script you can use to download all slices for a day. Just set DATE to the day you want:

#!/bin/bash

# Set the date
DATE="YYYYMMDD"
DOMAIN="https://archive.blocknative.com/"
BASE_URL="${DOMAIN}${DATE}/"

# Initialize a variable to track successful downloads
SUCCESSFUL_DOWNLOADS=0

# Loop through each hour (00 to 23)
for HOUR in {00..23}; do
    # {00..23} is already zero-padded, so the hour can be used directly
    FILENAME="${HOUR}.csv.gz"
    URL="${BASE_URL}${FILENAME}"

    # Initialize a variable to keep track of retries
    RETRIES=0

    # Loop to handle retries on 404, 429, and 504 responses
    while true; do
        # Download the data and check the response status code
        HTTP_STATUS=$(curl -s -o "$FILENAME" -w "%{http_code}" "$URL")

        # Check the status code and print a message
        if [ "$HTTP_STATUS" -eq 200 ]; then
            echo "Downloaded $FILENAME"
            ((SUCCESSFUL_DOWNLOADS++))
            break  # Exit the retry loop on success
        elif [ "$HTTP_STATUS" -eq 429 ] || [ "$HTTP_STATUS" -eq 504 ]; then
            echo "Received $HTTP_STATUS. Retrying in 1 second..."
            sleep 1  # Wait for 1 second before retrying
            ((RETRIES++))
            if [ $RETRIES -ge 3 ]; then
                echo "Retry limit reached. Exiting."
                exit 1
            fi
        elif [ "$HTTP_STATUS" -eq 404 ]; then
            echo "File not found (404). Skipping $FILENAME."
            rm -f "$FILENAME"  # Remove the file containing the error body
            break  # Exit the retry loop on 404
        else
            echo "Error downloading $FILENAME - Status code: $HTTP_STATUS"
            rm "$FILENAME"  # Remove the empty file
            break  # Exit the retry loop on other errors
        fi
    done
done

if [ "$SUCCESSFUL_DOWNLOADS" -eq 24 ]; then
    echo "All slices downloaded successfully!"
else
    echo "Some slices were not downloaded successfully."
fi

Save this script to a file, for example, download_slices.sh, and make it executable using the following command:

chmod +x download_slices.sh

Then, run the script by executing:

./download_slices.sh

Fetching on a custom range

Here is a script you can use to download (1) all hourly slices for a range of days, or (2) specific hourly slices on a single day.

Options:

  1. --date-range: downloads all hourly slices for every day in the range (both dates inclusive). Format: YYYYMMDD-YYYYMMDD

    • For date range: ./download_mempool.sh --date-range YYYYMMDD-YYYYMMDD

  2. --hour-range: downloads data for specific hours on a particular day. Format: YYYYMMDD:HH-HH

    • For hour range: ./download_mempool.sh --hour-range YYYYMMDD:HH-HH


#!/bin/bash

# Fetch arguments
while [[ $# -gt 0 ]]; do
    key="$1"
    case $key in
        --date-range)
            DATE_RANGE="$2"
            shift; shift
            ;;
        --hour-range)
            HOUR_RANGE="$2"
            shift; shift
            ;;
        *)
            shift
            ;;
    esac
done

DOMAIN="https://archive.blocknative.com/"
SUCCESSFUL_DOWNLOADS=0

download_data() {
    local DATE=$1
    local HOUR_START=$2
    local HOUR_END=$3
    local BASE_URL="${DOMAIN}${DATE}/"

    for HOUR in $(seq -w $HOUR_START $HOUR_END); do
        URL="${BASE_URL}${HOUR}.csv.gz"
        FILENAME="${DATE}_${HOUR}.csv.gz"
        RETRIES=0

        while true; do
            HTTP_STATUS=$(curl -s -o "$FILENAME" -w "%{http_code}" "$URL")

            if [ "$HTTP_STATUS" -eq 200 ]; then
                echo "Downloaded $FILENAME"
                ((SUCCESSFUL_DOWNLOADS++))
                break
            elif [ "$HTTP_STATUS" -eq 429 ] || [ "$HTTP_STATUS" -eq 504 ]; then
                echo "Received $HTTP_STATUS. Retrying in 1 second..."
                sleep 1
                ((RETRIES++))
                if [ $RETRIES -ge 3 ]; then
                    echo "Retry limit reached. Exiting."
                    exit 1
                fi
            elif [ "$HTTP_STATUS" -eq 404 ]; then
                echo "File not found (404). Skipping $FILENAME."
                rm -f "$FILENAME"  # Remove the file containing the error body
                break
            else
                echo "Error downloading $FILENAME - Status code: $HTTP_STATUS"
                rm "$FILENAME"
                break
            fi
        done
    done
}

# Date Range Mode
if [ ! -z "$DATE_RANGE" ]; then
    IFS='-' read -ra DATES <<< "$DATE_RANGE"
    START_DATE=${DATES[0]}
    END_DATE=${DATES[1]}

    # Iterate calendar days; seq would produce invalid dates across month
    # boundaries, so use date arithmetic instead (requires GNU date)
    DATE=$START_DATE
    while [ "$DATE" -le "$END_DATE" ]; do
        download_data "$DATE" 00 23
        DATE=$(date -d "$DATE + 1 day" +%Y%m%d)
    done
fi

# Hour Range Mode
if [ ! -z "$HOUR_RANGE" ]; then
    IFS=':' read -ra PARTS <<< "$HOUR_RANGE"
    DATE=${PARTS[0]}
    IFS='-' read -ra HOURS <<< "${PARTS[1]}"
    HOUR_START=${HOURS[0]}
    HOUR_END=${HOURS[1]}

    download_data $DATE $HOUR_START $HOUR_END
fi

if [ "$SUCCESSFUL_DOWNLOADS" -gt 0 ]; then
    echo "Downloaded $SUCCESSFUL_DOWNLOADS slice(s)."
else
    echo "No slices were downloaded."
fi

Save this script to a file, for example, download_mempool.sh, and make it executable using the following command:

chmod +x download_mempool.sh

Then, run the script using one of the invocation examples listed in the options above.

Data Schema

Blocknative logs all mempool transactions from nodes in multiple geographical regions for the Ethereum mainnet blockchain. The Archive contains historic events for all transactions:

  • entering the mempool

  • denied entry into the mempool (rejection with reason)

  • exiting the mempool (eviction with reason)

  • replacing existing mempool transaction (speedup or cancel)

  • finalized on chain (confirmed or failed)

The number of times a transaction appears in the Archive corresponds to the number of status changes it undergoes. The detecttime field indicates the time when the status change was first observed.
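Since each status change is its own row, counting rows per hash gives the number of events recorded for that transaction. A minimal sketch over fabricated sample rows (the hash column position and values here are illustrative only; consult the schema for the real layout):

```shell
# Count events per transaction hash; column 1 is assumed to hold the hash
printf '0xaaa\tpending\n0xaaa\tconfirmed\n0xbbb\tconfirmed\n' |
awk -F'\t' '{count[$1]++} END {for (h in count) print h, count[h]}' | sort
```

The same one-liner works on a real slice by piping `gzip -dc HH.csv.gz` into it, with the field index adjusted to wherever the hash column actually sits.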

Below you can find the complete schema for the data:

Frequently Asked Questions

What attribution must I provide when using the Blocknative Data Archive?

The archive is publicly available according to open data standards and licenses datasets under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license.

  • 2.1 Attribution — End Users must give appropriate credit, provide a link to the license, and indicate if changes were made. End Users may do so in any reasonable manner, but not in any way that suggests the licensor endorses End Users or their use.

  • 2.2 NonCommercial — End Users may not use the material for commercial purposes.

  • 2.3 ShareAlike — If End Users remix, transform, or build upon the material, End Users must distribute their contributions under the same license as the original.

Please use the following as a guideline for attribution:

  1. Papers: Data provided by Blocknative

If you have any questions please reach out to us on Discord.

What format is the data?

The data is stored in hourly slices as gzipped CSV files (*.csv.gz). Fields are tab-delimited.
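To sanity-check the format after downloading, you can decompress a slice and count tab-separated fields. The sketch below fabricates a two-row sample so it runs without a download (the column names are illustrative, not the actual schema):

```shell
# Fabricate a tiny tab-delimited, gzipped sample in the archive's format
printf 'hash\tstatus\tdetecttime\n0xabc\tconfirmed\t2023-06-16 12:00:01\n' | gzip > sample.csv.gz

# Decompress and report the number of tab-separated fields in the header row
gzip -dc sample.csv.gz | awk -F'\t' 'NR==1 {print NF " fields"}'
```

Running the `gzip -dc ... | awk` line against a real slice should report 27 fields, matching the field count stated above.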

How many nodes are gathering mempool data?

We run highly redundant node infrastructure in each region to ensure strong uptime.

How can I identify on-chain transactions?

On-chain transactions have a confirmed status.

SELECT *
FROM mempool_archive
WHERE status = 'confirmed'

How can I identify private transactions?

A private transaction bypasses the public mempool, so it never has a pending event. Because timepending is computed as the difference between a transaction's pending event and its confirmed event, private transactions have a timepending of 0.

SELECT *
FROM mempool_archive
WHERE timepending = 0
AND status = 'confirmed'

What is the difference between dropreason and rejectionreason?

A dropped transaction might have been valid but was deemed lower-priority. A rejected transaction is fundamentally flawed or invalid according to Ethereum protocol rules.

A drop reason could be that there isn't enough ETH in the EOA to cover gas fees. A rejection reason could be an incorrect transaction signature. Dropped transactions existed in the mempool but were evicted to make room for incoming transactions. Rejected transactions never make it into the mempool.
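To separate the two cases in a slice, you can filter on the status column. A sketch over made-up rows (the column positions, status values, and reason strings are assumptions for illustration, not the actual schema):

```shell
# Column 2 is assumed to hold the status; print hash and reason of dropped rows
printf '0xaaa\tdropped\tinsufficient funds\n0xbbb\trejected\tbad signature\n' |
awk -F'\t' '$2 == "dropped" {print $1, $3}'
```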
