Introducing Faster Ethereum Logs and Events
An important part of writing a good decentralized application on Ethereum is knowing when state inside a smart contract changes. That’s why Ethereum has always had “logs and events”, two names for the same thing: the ability for your smart contract to emit data during a transaction.
However, the popularity of this mechanism has led to some interesting problems:
- As the number of events published by DApps has grown, the indexing strategy used by Ethereum clients has begun to burst at the seams.
- Likewise, DApp developers have become more and more reliant on speedy logs access, especially at app startup.
- As more DApps have gained popularity over time, so too has the number of eth_getLogs requests handled by Infura, sometimes topping out at over 100 million a day!
And, as many of our developers have noticed, these factors have all led to a gradual slowdown in eth_getLogs response times from Infura.
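For context, here is what a typical eth_getLogs call looks like over JSON-RPC. This is a minimal sketch in Python; the endpoint URL and contract address are placeholders, and the topic filter is hypothetical.

```python
import requests

# Placeholder endpoint -- substitute your own project URL.
INFURA_URL = "https://mainnet.infura.io/v3/YOUR-PROJECT-ID"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getLogs",
    "params": [{
        # Filter dimensions: block range, emitting contract, and indexed topics.
        "fromBlock": "0x0",      # "from the beginning of Ethereum..."
        "toBlock": "latest",     # "...until now"
        "address": "0x0000000000000000000000000000000000000000",  # hypothetical contract
        "topics": []             # an empty list matches any event signature
    }]
}

response = requests.post(INFURA_URL, json=payload, timeout=30)
logs = response.json().get("result", [])
print(f"fetched {len(logs)} log entries")
```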
Today, we are very happy to announce the rollout of our new logs and events infrastructure.
Why have logs and events sometimes been so slow?
Before I get into the new architecture, let's talk about what we've tried in the past. Previously, our logs and events pipeline was largely handled by go-ethereum ("geth") clients. Geth is very good at handling the request volume of a single user, and it performs admirably even at one hundred million requests a day, but there are a couple of reasons it has trouble scaling.
The main reason is that Ethereum clients have always been built for single-user use, and the way logs and events are accessed inside geth reflects that. I won't get into the specifics of geth's use of bloom filters, but the key point is that even though eth_getLogs can filter across many dimensions, at the lowest levels inside geth, logs are really only accessible by block number. If a user queries "give me all of the events for my contract from the beginning of Ethereum until now", a couple of things would happen:
- The geth node would compare the bloom filter of every block with the log filter.
- For every block that is a potential match (and bloom filters often include false positives), the transaction receipts for that block would be loaded.
- Finally, the logs generated by said receipts would be compared against the filter one by one.
Even on an otherwise unloaded Ethereum node, a big query like this can take anywhere from hundreds of milliseconds to a couple of seconds to complete.
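To make the three steps above concrete, here is a rough sketch of that scan in Python. It is not geth's actual code; bloom_matches, load_receipts, and log_matches_filter are stand-ins for geth internals.

```python
def get_logs(log_filter, chain):
    """Simplified illustration of the per-block scan described above."""
    matches = []
    for block in chain.blocks(log_filter.from_block, log_filter.to_block):
        # Step 1: cheap bloom-filter check; false positives are possible,
        # so a hit only means this block *might* contain matching logs.
        if not bloom_matches(block.logs_bloom, log_filter):
            continue
        # Step 2: load every transaction receipt in the candidate block.
        receipts = load_receipts(block)   # a disk read unless cached
        # Step 3: compare each log in each receipt against the filter.
        for receipt in receipts:
            for log in receipt.logs:
                if log_matches_filter(log, log_filter):
                    matches.append(log)
    return matches
```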
Luckily, geth has an in-memory caching system, so adding a cache for transaction receipts helps to alleviate some of that pressure, but too many queries for different blocks still lead to cache contention.
To avoid cache contention, our next step was to segment traffic. Since most log requests are for the most recent blocks, and all of those blocks share the same cache, we split requests into two "buckets":
- If your eth_getLogs request covered a small number of recent blocks, it went to the "near head" segment.
- Otherwise, your request went to the general eth_getLogs pool.
Routing near-head logs requests to the same set of Ethereum nodes greatly reduced cache contention. This helped a lot with overall response times (average response times dropped from over a second to under 100 milliseconds), but it still did not address the "long tail" of requests languishing in the general pool. Something else had to be done.
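The routing rule itself is simple. Here is a sketch of the idea in Python; the 128-block window is an illustrative number, not the threshold we actually use.

```python
NEAR_HEAD_WINDOW = 128  # illustrative threshold, not Infura's real setting

def pick_pool(from_block: int, to_block: int, head: int) -> str:
    """Route an eth_getLogs request to one of the two node pools."""
    covers_recent_blocks_only = from_block >= head - NEAR_HEAD_WINDOW
    small_range = (to_block - from_block) <= NEAR_HEAD_WINDOW
    if covers_recent_blocks_only and small_range:
        return "near-head"  # nodes whose caches stay hot for recent blocks
    return "general"        # everything else, including deep historical scans
```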
Logs and Events Caching
Today we’re happy to announce the general availability of that “something else”: real-time logs and events caching. Now, when you send an eth_getLogs request to Infura, the RPC is actually handled by an off-chain index of Ethereum logs and events rather than directly by an Ethereum node. This index is backed by a traditional database, which allows us to index and query on more data without the overhead of the false positives that come with a bloom filter.
Because these databases are tuned for real-world eth_getLogs queries, we've been able to reduce the infrastructure footprint for servicing eth_getLogs by over 90%, and we can continue to offer access to this RPC to all users.
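As a toy illustration of why a conventional database helps, here is roughly what such an index could look like, shown with SQLite. This is not Infura's actual schema; it only shows that logs can be indexed directly on the dimensions an eth_getLogs filter uses, so queries never pay for bloom-filter false positives.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE logs (
        block_number INTEGER NOT NULL,
        address      TEXT    NOT NULL,   -- emitting contract
        topic0       TEXT,               -- event signature hash
        data         BLOB
    );
    -- Index on exactly the dimensions an eth_getLogs filter uses,
    -- so lookups hit matching rows directly instead of scanning blocks.
    CREATE INDEX logs_by_address_topic_block
        ON logs (address, topic0, block_number);
""")

# Hypothetical filter values bound as query parameters.
rows = db.execute(
    "SELECT block_number, data FROM logs "
    "WHERE address = ? AND topic0 = ? AND block_number BETWEEN ? AND ?",
    ("0xContractAddress", "0xEventSignatureHash", 0, 10_000_000),
).fetchall()
```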
Furthermore, this new architecture addresses a major issue with load-balanced responses: inconsistency between different Ethereum nodes. We are excited to finally resolve that issue for eth_getLogs. The new caching layer provides a consistent view into log data and reacts to chain re-org events in real time.
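For the curious, one common way an index like this can react to a re-org is to remember the hash of each block it has indexed and roll back whenever a new block's parent hash disagrees. The sketch below shows that general idea; it is an assumption about a typical approach, not a description of our implementation, and fetch_canonical_hash plus the store object are stand-ins.

```python
# indexed: dict mapping block number -> block hash we last indexed at that height
def handle_new_block(block, indexed, store):
    parent = indexed.get(block.number - 1)
    if parent is not None and parent != block.parent_hash:
        # Re-org detected: unwind indexed logs until our view rejoins the new chain.
        height = block.number - 1
        while height in indexed and indexed[height] != fetch_canonical_hash(height):
            store.delete_logs(block_number=height)
            del indexed[height]
            height -= 1
    store.insert_logs(block)            # index the new canonical block's logs
    indexed[block.number] = block.hash  # remember what we indexed at this height
```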
Next steps
We hope to share more details about the implementation of this caching layer in a later post. Furthermore, we are working hard on adding more features. For example, soon we will be using the same index to power eth_subscribe for logs, just as we've done for "newHeads" on our WebSocket API endpoint. As we roll out this caching feature to more RPCs, it will help to alleviate any inconsistencies seen in repeated RPC responses.
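For reference, here is what a logs subscription looks like over the WebSocket endpoint. This is a minimal Python sketch using the websockets package; the URL and the address filter are placeholders.

```python
import asyncio
import json
import websockets

# Placeholder WebSocket endpoint -- substitute your own project URL.
WS_URL = "wss://mainnet.infura.io/ws/v3/YOUR-PROJECT-ID"

async def watch_logs():
    async with websockets.connect(WS_URL) as ws:
        # Subscribe to logs emitted by one (hypothetical) contract address.
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "eth_subscribe",
            "params": ["logs", {"address": "0x0000000000000000000000000000000000000000"}]
        }))
        print("subscription id:", json.loads(await ws.recv())["result"])
        async for message in ws:
            # Each notification carries one matching log entry.
            print(json.loads(message)["params"]["result"])

asyncio.run(watch_logs())
```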
We will continue to iterate on and optimize the Ethereum logs and events experience for you. Please share your feedback on our community site.