
I'm currently working on a prototype in C# that utilises CQRS and event sourcing and I've hit a performance bottleneck in my projections to an SQL database.

My first prototype was built with Entity Framework 6, code first. This choice was made primarily to get going and because the read side would benefit from LINQ.

Every (applicable) event is consumed by multiple projections, which either create or update the corresponding entity.

Such a projection currently looks like this:

public async Task HandleAsync(ItemPlacedIntoStock @event)
{
    var bookingList = new BookingList();
    bookingList.Date = @event.Date;
    bookingList.DeltaItemQuantity = @event.Quantity;
    bookingList.IncomingItemQuantity = @event.Quantity;
    bookingList.OutgoingItemQuantity = 0;
    bookingList.Item = @event.Item;
    bookingList.Location = @event.Location;
    bookingList.Warehouse = @event.Warehouse;

    using (var repository = new BookingListRepository())
    {
        repository.Add(bookingList);
        await repository.Save();
    }
}

This doesn't perform well, most likely because I call DbContext.SaveChanges() in the IRepository.Save() method, once per event.

What options should I explore next? I don't want to spend days chasing ideas that might prove to be only marginally better.

I currently see the following options:

  • Stick with EF, but batch-process the events (i.e. save and renew the context every X events) as long as the projection is running behind (see the sketch after this list).
  • Try to do more low-level SQL, for example with ADO.NET.
  • Don't use SQL to store the projections (i.e. use NoSQL).
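A minimal sketch of the first option, reusing the repository/unit-of-work shape from the snippet above; the event queue, the Apply helper and the batch size of 100 are placeholders I made up for illustration:

// Keep one context/unit of work alive across many events and only call
// SaveChanges() every N events or when the queue runs dry.
using (var repository = new BookingListRepository())
{
    var uncommitted = 0;
    object @event;

    while (eventQueue.TryDequeue(out @event))    // hypothetical in-memory queue of pending events
    {
        Apply(@event, repository);               // create or update the entity, but don't save yet

        if (++uncommitted >= 100)                // arbitrary batch size for this sketch
        {
            await repository.Save();             // one SaveChanges() for the whole batch
            uncommitted = 0;
        }
    }

    if (uncommitted > 0)
        await repository.Save();                 // flush the remainder once the projection catches up
}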

I expect to see millions of events, because we plan to event-source a large legacy application and migrate its data in the form of events. New projections will also be added often enough that processing speed is a real concern.

Benchmarks:

  • The current solution (EF, save after every event) processes ~200 events per second (per projection). It does not scale directly with the number of active projections (i.e. N projections process less than N * 200 events/second).
  • When the projections aren't saving the context, the number of events/second increases only marginally (less than double).
  • When the projections don't do anything (a single return statement), the processing speed of my prototype pipeline is ~30.000 events/second globally.

Updated benchmarks:

  • Single-threaded inserts via ADO.NET TableAdapter (new DataSet and new TableAdapter on each iteration): ~2.500 inserts/second. Tested standalone, not in the projection pipeline.
  • Single-threaded inserts via ADO.NET TableAdapter that does not SELECT after inserting: ~3.000 inserts/second.
  • Single-threaded ADO.NET TableAdapter batch-insert of 10.000 rows (single dataset, 10.000 rows in-memory): >10.000 inserts/second (my sample size and window were too small). A sketch of this batching approach follows below.
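The numbers above come from a generated TableAdapter; the same batching idea with a plain SqlDataAdapter looks roughly like this (table and column names are hypothetical, connectionString is assumed, and setting UpdatedRowSource to None is what avoids the SELECT-back measured above):

using System.Data;
using System.Data.SqlClient;

// Build the rows in memory first, then push them in batches.
var table = new DataTable("BookingList");
table.Columns.Add("Date", typeof(DateTime));
table.Columns.Add("DeltaItemQuantity", typeof(int));
// ... add the remaining columns and fill the rows ...

using (var connection = new SqlConnection(connectionString))
using (var adapter = new SqlDataAdapter())
{
    var insert = new SqlCommand(
        "INSERT INTO BookingList (Date, DeltaItemQuantity) VALUES (@Date, @DeltaItemQuantity)",
        connection);
    insert.Parameters.Add("@Date", SqlDbType.DateTime, 0, "Date");
    insert.Parameters.Add("@DeltaItemQuantity", SqlDbType.Int, 0, "DeltaItemQuantity");
    insert.UpdatedRowSource = UpdateRowSource.None;   // don't SELECT the row back after inserting

    adapter.InsertCommand = insert;
    adapter.UpdateBatchSize = 1000;                   // send many inserts per round trip

    connection.Open();
    adapter.Update(table);                            // executes the whole in-memory table in batches
}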
urbanhusky
  • I'm no .NET expert, but does something like `async repository.Save();` exist? I mean, if you don't need what `repository.Save();` returns, "eventually" saving it should work for you, shouldn't it? – gtramontina Jun 01 '16 at 14:56
  • Merely committing the unit of work is peanuts compared to all the tracking that goes on within it. I don't really gain much speed even if I don't write to SQL at all (when using Entity Framework). – urbanhusky Jun 01 '16 at 18:18

3 Answers


I've seen performance improvements of several orders of magnitude, even with Entity Framework, when batching the commits and improving my overall projection engine.

  • Each projection is a separate subscription on the Event Store. This allows each projection to run at its maximum speed. The theoretical maximum of my pipeline on my machine was 40.000 events per second (possibly more; I ran out of events to sample with).
  • Each projection maintains a queue of events and deserialises the JSON into POCOs. Multiple deserialisations per projection run in parallel. I also switched from data contract serialisation to Json.NET.
  • Each projection supports the notion of a unit of work. The unit of work is committed after processing 1000 events or if the deserialisation-queue is empty (i.e. I am either at the head position or experienced a buffer underrun). This means that a projection commits more often if it is only a few events behind.
  • Made use of async TPL processing with interleaving of fetching, queueing, processing, tracking and committing.

This was achieved by using the following technologies and tools:

  • The ordered, queued and parallel deserialisation into POCOs is done via a TPL DataFlow TransformBlock with a BoundedCapacity somewhere over 100. Maximum degree of parallelism was Environment.ProcessorCount (i.e. 4 or 8). I saw a massive increase in performance with a queue size of 100-200 vs. 10: from 200-300 events to 10.000 events per second. This most likely means that a buffer of 10 was causing too many underruns and thus committed the unit of work too often.
  • Processing is dispatched asynchronously from a linked ActionBlock (see the sketch after this list).
  • Each time an event is deserialised, I increment a counter for pending events
  • Each time an event is processed, I increment a counter for processed events
  • The unit of work is committed after 1000 processed events, or whenever the deserialisation buffer runs out (number of pending events = number of processed events). I reduce both counters by the number of processed events. I don't reset them to 0 because other threads might have increased the number of pending events.
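A rough sketch of that pipeline shape, assuming the TPL Dataflow NuGet package; the byte[] input type and the Deserialise/ProjectAsync/CommitUnitOfWorkAsync methods are placeholders for the real deserialiser, projection handler and unit of work:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;   // NuGet: System.Threading.Tasks.Dataflow

class ProjectionPipeline
{
    int pending;      // events deserialised but not yet processed
    int processed;    // events processed but not yet committed

    public ITargetBlock<byte[]> Build()
    {
        // Ordered, queued, parallel deserialisation with a bounded buffer.
        var deserialise = new TransformBlock<byte[], object>(raw =>
        {
            Interlocked.Increment(ref pending);
            return Deserialise(raw);
        },
        new ExecutionDataflowBlockOptions
        {
            BoundedCapacity = 200,
            MaxDegreeOfParallelism = Environment.ProcessorCount
        });

        // Single-threaded processing keeps the projection ordered.
        var process = new ActionBlock<object>(async @event =>
        {
            await ProjectAsync(@event);
            var done = Interlocked.Increment(ref processed);

            // Commit after 1000 events, or when the deserialisation buffer has run dry.
            if (done >= 1000 || done == Volatile.Read(ref pending))
            {
                await CommitUnitOfWorkAsync();
                Interlocked.Add(ref pending, -done);    // reduce, don't reset: other threads
                Interlocked.Add(ref processed, -done);  // may have queued more events meanwhile
            }
        },
        new ExecutionDataflowBlockOptions { BoundedCapacity = 200 });

        deserialise.LinkTo(process, new DataflowLinkOptions { PropagateCompletion = true });
        return deserialise;   // the subscription posts raw events here
    }

    // Placeholders for the real implementations.
    object Deserialise(byte[] raw) => new object();
    Task ProjectAsync(object @event) => Task.CompletedTask;
    Task CommitUnitOfWorkAsync() => Task.CompletedTask;
}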

The values of a batch size of 1000 events and queue size of 200 are the result of experimentation. This also shows further options for improvement by tweaking these values for each projection independently. A projection that adds a new row for every event slows down considerably when using a batch size of 10.000 - while other projections that merely update a few entities benefit from a larger batch size.

The deserialisation queue size is also vital for good performance.

So, TL;DR:

Entity Framework is fast enough to handle up to 10.000 modifications per second, per parallel thread. Utilise your unit of work and avoid committing every single change - especially in CQRS, where the projection is the only thread making any changes to the data. Properly interleave parallel tasks; don't just blindly async everything.

urbanhusky

As the author of Projac, I suggest you have a look at what it has to offer, and steal what feels appropriate. I built it specifically because LINQ/EF are poor choices on the read model/projection side ...

Yves Reynhout

Saving one record at a time to SQL Server is always going to perform poorly. You have two options:

  1. Table-valued parameters

Use a table-valued parameter to pass multiple records to a stored procedure in a single call.

  2. ADO bulk copy

Use the ADO.NET bulk copy API (SqlBulkCopy) to bulk-copy the data in (sketched below).

Neither of these benefits much from EF, apart from connection handling.
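A minimal sketch of the bulk-copy route, combined with the staging-table-plus-MERGE approach from the comments below; the staging table, the bookingListTable DataTable, connectionString and the MergeBookingListFromStaging procedure are hypothetical names I've assumed for illustration:

using System.Data;
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // One bulk operation instead of row-by-row inserts.
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "dbo.BookingList_Staging";
        bulkCopy.BatchSize = 10000;
        bulkCopy.WriteToServer(bookingListTable);   // a pre-filled DataTable matching the staging table
    }

    // For insert-or-update semantics, MERGE the staging rows into the destination table.
    using (var merge = new SqlCommand("EXEC dbo.MergeBookingListFromStaging", connection))
    {
        merge.ExecuteNonQuery();
    }
}

A table-valued parameter works the same way from the client side: fill a DataTable and pass it as a SqlParameter with SqlDbType.Structured to a stored procedure that performs the MERGE.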

I would do neither if your data is simple key-value pairs; an RDBMS is probably not a good fit there. A document store such as Mongo or Raven would probably perform better.

PhillipH
  • For pure inserts, bulk operations (or just batch processing) might help - but I also might have updates of existing data, where each operation needs an additional fetch... – urbanhusky May 24 '16 at 14:36
  • If you want to stick with SQL Server, just use an ADO bulk insert into a temporary table and then execute a SQL MERGE command from the temporary table into the destination. This will be the fastest-performing way of doing a bulk insert/update. – PhillipH May 24 '16 at 14:40
  • So essentially it means that I have to batch my operations first - i.e. as long as I have more than one event to process, I should prepare the data in-memory and regularly flush it to whatever store I use. If I don't somehow batch process the events, I will never be able to do any form of bulk operations (or DataAdapter/TableAdapter batches). I'm still not entirely sure how I could improve the performance when I have to edit existing data (aside from batch processing). – urbanhusky May 25 '16 at 06:54
  • Yes - batching is the key to performance - it eliminates network latency and reduces the impact of locking and logging on the SQL Server side. We must remember SQL Server is a set-based data store - if our operations are a "set of one" then it's going to perform poorly compared to a larger set (within reason). If you absolutely need to carry out per-record operations (without batching) then your code looks optimal. – PhillipH May 25 '16 at 19:27