I'm testing EdgeDB locally; my host is a decent MacBook Pro and the database runs in Docker:
version: "3.7"
services:
  edgedb-server:
    image: edgedb/edgedb
    ports:
      - "5656:5656"
      - "8888:8888"
    volumes:
      - type: bind
        source: /Users/dima.tisnek/edgedb-data
        target: /var/lib/edgedb/data
        consistency: delegated
I've created an object type with ~20 properties: 10 str, 3 bool, 2 int16, 3 datetime (mostly populated), and 2 MULTI str (not populated).
I've loaded 35k rows, total JSON data size 18MB.
I'm testing read throughput using this function:
import asyncio
import logging

import edgedb

async def main():
    c = await edgedb.async_connect("edgedb://edgedb@localhost")
    d = await c.fetchall("""
        SELECT User {
            domain,
            username,
            # 16 more columns
        };
    """)
    logging.warning("got %s records", len(d))

asyncio.run(main())
And I'm getting ~1.1s for 35k rows. That's 30k rows/s or <20MB/s.
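Spelling out that arithmetic (row count, payload size, and elapsed time are the figures above):

rows = 35_000
payload_mb = 18          # total JSON size of the dataset
elapsed_s = 1.1          # observed wall-clock time for fetchall()

print(f"{rows / elapsed_s:,.0f} rows/s")     # -> 31,818 rows/s
print(f"{payload_mb / elapsed_s:.1f} MB/s")  # -> 16.4 MB/s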
Is this slow? Is this fast?
To be fair, I've recently discovered that production AWS DynamoDB tops out at about 1 MB/s in such a setup (per an Amazon blog post), so EdgeDB wins roughly ten-fold. At the same time, I vaguely recall running a MySQL/InnoDB server and thinking about performance in millions of rows/s a decade ago. So EdgeDB seems slow, maybe thirty-fold?
I reproduced the benchmark with a few changes: 1) I only measured the actual query runtime (connection time excluded); 2) the EdgeDB server was running directly on a Linux host, not in Docker.
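For reference, a query-only timing can be taken with a sketch like this; it assumes the same pre-1.0 edgedb-python API as the snippet above and abbreviates the column list:

import asyncio
import time

import edgedb

async def main():
    conn = await edgedb.async_connect("edgedb://edgedb@localhost")
    try:
        start = time.monotonic()  # connection setup deliberately excluded from the timing
        records = await conn.fetchall("SELECT User { domain, username };")  # abbreviated shape
        elapsed = time.monotonic() - start
    finally:
        await conn.aclose()
    print("%d records in %.3fs: %d records/s"
          % (len(records), elapsed, len(records) / elapsed))

asyncio.run(main())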
My result:
35038 records in 0.286s: 122314 records/s
To compare, I loaded the same dataset directly into Postgres and ran a similar query with psycopg2. The result was nearly identical:
35038 records in 0.285s: 122986 records/s
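The psycopg2 side can be measured with a similar sketch; the connection string, table, and column names here are assumptions, not the actual schema used:

import time

import psycopg2

conn = psycopg2.connect("dbname=test user=postgres host=localhost")
cur = conn.cursor()

start = time.monotonic()
cur.execute("SELECT domain, username /* 16 more columns */ FROM users")
rows = cur.fetchall()
elapsed = time.monotonic() - start

print("%d records in %.3fs: %d records/s" % (len(rows), elapsed, len(rows) / elapsed))

conn.close()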
This is not surprising: once the query is compiled to SQL, the I/O overhead of EdgeDB over raw Postgres is negligible. Additionally, to stress-test the server properly you need multiple concurrent clients, as we've done in our benchmarks.
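For illustration, a multi-client load can be sketched like this, again assuming the pre-1.0 edgedb-python API; the client and query counts are arbitrary:

import asyncio
import time

import edgedb

QUERY = "SELECT User { domain, username };"  # abbreviated column list

async def worker(n_queries):
    # Each worker holds its own connection, acting as an independent client.
    conn = await edgedb.async_connect("edgedb://edgedb@localhost")
    try:
        for _ in range(n_queries):
            await conn.fetchall(QUERY)
    finally:
        await conn.aclose()

async def main(clients=10, n_queries=20):
    start = time.monotonic()
    await asyncio.gather(*(worker(n_queries) for _ in range(clients)))
    elapsed = time.monotonic() - start
    total = clients * n_queries
    print("%d queries from %d clients in %.2fs (%.1f queries/s)"
          % (total, clients, elapsed, total / elapsed))

asyncio.run(main())

Aggregate throughput across concurrent clients is a better indicator of server capacity than a single sequential fetch.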