EF Core 2.1 vs NHibernate 5.1: DDD perspective

That was probably a long wait for those of you who follow my blog. But, better late than never, so here it is: another comparison of Entity Framework and NHibernate, in which I bash EF Core and present it as an unbiased review. Just kidding, I do try to be unbiased here to the best of my skills.

This article renders the previous ones obsolete (which they already were at this point anyway).

EF Core vs NHibernate: Preface

EF Core has made a lot of progress and it took me quite a while to catch up with it (this, and totally not my procrastination, was the reason for the delay in publishing the comparison). And although I spent quite some time researching the topic, I could still have missed something. If you see that anything I’ve written is incorrect or incomplete, please share it in the comments below. I’ll update the article and then delete your comment to make it seem as though I knew this from the very beginning.

So, what’s the deal here? We’ll compare EF Core and NHibernate from the Domain-Driven Design (DDD) perspective. And when it comes to domain modeling, the two most important things you need to focus on are:

  • Encapsulation, and
  • Separation of concerns.

Encapsulation stands for protecting data integrity. You do that by preventing clients of a class from setting its internals into an invalid or inconsistent state. The main rule here is that the domain model must maintain its invariants at all times. In Object-Oriented Programming, the two major techniques that help you achieve encapsulation are:

  • Information hiding, and
  • Bundling data and operations together.


That’s the essence of encapsulation. BTW, I wrote a whole course about building encapsulated domain models which I regard as one of my best Pluralsight courses, definitely check it out: Refactoring from Anemic Domain Model Towards a Rich One.

Separation of concerns (also known as Persistence Ignorance and Domain Model Isolation) stands for stripping your domain model from all non-domain-related stuff. The reasoning behind this principle is that you can only keep track of so many things at a time. The domain model itself is usually quite complicated already. The more you isolate it from concerns that don’t relate to domain knowledge, the better. And because Domain Model is the heart of any (line-of-business) application, separation of concerns applied to this specific area pays for itself manyfold.

Given the DDD focus of this comparison, I will only consider features that affect encapsulation and separation of concerns.

For those of you who might wonder why bother with this at all and not just split the model into separate domain and persistence models, keeping the domain model encapsulated that way: it doesn’t work out well. In complex applications, the amount of effort required to build a separate persistence model doesn’t justify the improvements in terms of purity. The effort is too large, the benefits are too small.

The only use case where it’s reasonable is with legacy databases. Trying to bridge the gap between such database’s structure and the domain model is almost impossible, so you are pretty much forced into building a separate persistence model. In all other cases, consider relying on the plain ORM and accepting its shortcomings if any. Here I wrote about it in more detail: Having the domain model separated from the persistence model.

Alright, let’s start.

#1: Referencing a related entity

We’ll start with something simple – referencing a related entity. Let’s say you have a many-to-one relationship between Student and Course. You can implement this relationship directly by introducing a foreign key into your domain model:
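A sketch of what that direct implementation might look like (class and property names are illustrative):

```csharp
public class Student : Entity
{
    public virtual string Name { get; set; }

    // A persistence concern leaking into the domain model:
    // the foreign key column exposed as a plain Id
    public virtual long FavoriteCourseId { get; set; }
}
```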

But that would violate the separation of concerns principle. You’re introducing a concern that has nothing to do with the domain you are working on. Replace the Id with a proper member:
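Something along these lines:

```csharp
public class Student : Entity
{
    public virtual string Name { get; set; }

    // A proper domain-level reference to the related entity;
    // virtual so the ORM can substitute a lazy-loading proxy
    public virtual Course FavoriteCourse { get; set; }
}
```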

To read more about it, see Link to an aggregate: reference or Id?.

The Entity Framework team has introduced a nice feature, Shadow Properties, that allows you to do that. And EF Core 2.1 has brought back Lazy Loading that allows you not to worry about manually fetching the related entities beforehand.

The code above works just fine in EF Core without additional configuration. It automatically creates a shadow property with a foreign key (FavoriteCourseId) that doesn’t show up in the domain model. All you deal with is the nice and clean navigation property.

There is one small drawback with EF Core’s implementation here, though. When you refer to the Id of the navigation property, like this:
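A hypothetical example of such a reference:

```csharp
// Reading the Id through the navigation property makes EF Core load
// the whole Course, even though the FK value is already in the Student row
long favoriteCourseId = student.FavoriteCourse.Id;
```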

it triggers loading of the related object (the favorite course). It’s not ideal because this Id is already loaded alongside the student’s other data. But you don’t often refer to entities’ Ids like that, so it’s a minor issue.

Needless to say, NHibernate allows you to do this as well.

I must add that, in the future, the EF team plans to rewrite the lazy loading mechanism. The current implementation (2.1) relies on dynamic proxies, just as EF4, EF5, EF6, and NHibernate do. This means the underlying class is not a true POCO and must adhere to the ORM’s requirements. They aren’t that big, but they’re still annoying. In particular: you need to declare the class’s properties as virtual and have a parameterless constructor (which can be made non-public). The future implementation will use Roslyn to weave assemblies as part of the build process.

We’ll see how it goes. If implemented, it’s going to be exciting news, as this would push the separation of concerns even further.

For now, the score of #1 Referencing a related entity is:
EF Core vs NHibernate – 0.9 : 1.

#2: Working with disconnected graphs of objects

Working with graphs of objects has always been a weak spot in Entity Framework. Before EF Core, if you added a new entity to the context, EF would mark all its children as added as well. This was a big pain because the graph could contain objects that already existed in the database. Trying to insert such objects resulted in an exception. For example, this code didn’t work in EF6:
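The failing code could look like this (a sketch; the course instance represents a row that already exists in the database):

```csharp
// An existing course, instantiated outside the context (detached)
var course = new Course { Id = 10, Name = "Calculus" };

var student = new Student { Name = "Alice", FavoriteCourse = course };
context.Students.Add(student);

// EF6 marks the whole graph as Added, including the existing course,
// and throws on the duplicate insert
context.SaveChanges();
```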

EF tried to insert course along with student. And of course, because the course was already in the database, this code threw an exception.

To fix the problem, you had to manually indicate the state of the course entity:
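For example (a sketch):

```csharp
context.Students.Add(student);

// Tell EF explicitly that the course already exists in the database
context.Entry(course).State = EntityState.Unchanged;

context.SaveChanges();
```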

In this particular example, it’s not that bad. But when you have complex aggregates, this starts to get overwhelming really fast. People even came up with a generalized mechanism for determining the state of domain objects and communicating that state to EF before each SaveChanges.

Thankfully, the team has fixed this and in version 2.1 the code above works just fine without the need to tell EF the state of the object. If the object has a default (zero) Id, it’s marked as added. Otherwise – as modified.

But here’s the catch. It doesn’t work with disconnected (detached) objects. If the root entity has any children that are detached from the context, you are back to square one: you have to manually guide EF Core through which of the objects are new and which are not.

And this use case comes up a lot if you work with rich domain models. The enumeration pattern, representing reference data as code – all that uses domain objects outside the scope of a particular database context.

This code throws an exception:
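A hypothetical illustration, with a course instance cached outside of any context (e.g. via the enumeration pattern):

```csharp
// A course cached at application startup, detached from the current context
Course courseCached = Course.Calculus;

Student student = context.Students.Find(id);
student.FavoriteCourse = courseCached;

// EF Core treats the detached course as new and tries to insert it again
context.SaveChanges();
```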

To avoid the error, you must set the state of courseCached manually, like in the good old days.

I’m not sure why the EF team can’t apply the same mechanism to detached objects, but let’s hope the fix won’t take them another couple of years.

Of course, NHibernate has done automatic state resolution from the very beginning, so there’s nothing particularly interesting to talk about here.

The score of #2: Working with disconnected graphs of objects is:
EF Core vs NHibernate – 0 : 1.

#3: Mapping backing fields

The ability to map to backing fields was introduced in EF Core 1.1. On the surface, it looks like a huge win for EF and one could only wonder why it took them so long to roll it out.

However, on closer inspection, the situation is much worse: the feature is unusable outside a narrow scope of simple use cases.

But let’s take this step by step. I’ll first write about what works, then describe some small issues with this implementation, and then explain why it is unusable from the Domain-Driven Design perspective.

Alright, so the biggest benefit of mapping directly to backing fields is the ability to encapsulate collections in your domain model. That is something that wasn’t possible in EF6, at least not without some serious violation of separation of concerns.

Here’s an example:
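A sketch of such an encapsulated collection (the Enrollment constructor and the Grade type are assumptions):

```csharp
public class Student : Entity
{
    private readonly List<Enrollment> _enrollments = new List<Enrollment>();

    // Read-only view over the backing field: no setter,
    // and clients cannot modify the returned collection
    public virtual IReadOnlyList<Enrollment> Enrollments => _enrollments.ToList();

    public virtual void AddEnrollment(Course course, Grade grade)
    {
        _enrollments.Add(new Enrollment(this, course, grade));
    }

    public virtual void DeleteEnrollment(Enrollment enrollment)
    {
        _enrollments.Remove(enrollment);
    }
}
```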

The Enrollments property here is read-only (it has no setter), as it always should be, and the collection itself is read-only too, to prohibit the clients of this code from altering it. All modifications to the collection must go through the AddEnrollment and DeleteEnrollment methods on the Student class itself.

The explicit mapping is somewhat awkward but the good news is you can omit it. The mapping works out of the box, by convention. Here’s the explicit version:
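In EF Core 2.1, the explicit version boils down to pointing the navigation at its backing field, roughly like this:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Student>(x =>
    {
        // Bind the Enrollments navigation to the _enrollments field
        x.Metadata
            .FindNavigation(nameof(Student.Enrollments))
            .SetPropertyAccessMode(PropertyAccessMode.Field);
    });
}
```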

So it looks like we have a nice degree of encapsulation here.

Not so fast.

One (small) problem with this feature is that EF Core requires the navigation property’s type to be a subtype of the backing field’s type.

For example, this won’t work:

because ICollection doesn’t inherit from IReadOnlyList.

This is completely unnecessary. You shouldn’t be worrying about the property at all when binding to the backing field.

Another small-ish issue here is that EF Core allows you to bind to a concrete collection type (List in our case). This shouldn’t be allowed. Binding to concrete collections means that you won’t be able to intercept calls to it, such as

and won’t be able to translate such calls to the corresponding SQL queries like

Instead, you will always have to load the full collection into the memory first.

But that’s just nitpicking on my part. I don’t remember any use cases off the top of my head where such optimizations would be useful.

And that brings us to the main issue: EF Core doesn’t intercept calls to backing fields.

Let’s take an example. Let’s say that there’s an invariant in your domain model: no student can have more than 5 enrollments. How would you implement it? Well, with something like this:
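A sketch of such a precondition:

```csharp
public virtual void AddEnrollment(Course course, Grade grade)
{
    // Precondition protecting the invariant
    if (_enrollments.Count >= 5)
        throw new InvalidOperationException("Cannot have more than 5 enrollments");

    _enrollments.Add(new Enrollment(this, course, grade));
}
```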

This way, you introduce a precondition in AddEnrollment which protects the integrity of the domain model.

The problem here is that this check always passes, no matter how many enrollments there are already. That’s because the enrollments collection is not initialized until you explicitly call the Enrollments navigation property.

It’s actually not an issue with the backing fields themselves but rather with the approach to lazy loading the EF team has chosen early on. If you worked with NHibernate, you might remember that it requires you to mark all public members (including methods) as virtual, not only navigation properties like in EF Core. That is done for this exact reason – to intercept any calls to the backing fields and initialize the collections before you start using them.

In other words, NHibernate requires you to make AddEnrollment virtual to override its runtime behavior: it initializes _enrollments before passing the control to your code. EF Core doesn’t do that.

This problem comes from the very first version of Entity Framework, and it’s astonishing that the EF Core team hasn’t done anything about it since then given that Oren Eini wrote about it almost 10 years ago. Sometimes, it seems as though the EF team is forbidden from looking at competitors’ (read: NHibernate’s) code bases and reading their authors’ blogs.

Now. One way to avoid this problem within AddEnrollment is to call the Count property on the navigation property, not the underlying field:
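Something like this:

```csharp
public virtual void AddEnrollment(Course course, Grade grade)
{
    // Reading the navigation property triggers lazy loading first
    if (Enrollments.Count >= 5)
        throw new InvalidOperationException("Cannot have more than 5 enrollments");

    _enrollments.Add(new Enrollment(this, course, grade));
}
```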

Fair enough. Although, it’s unclear why we call the property in one case and the backing field in the other.

However, you won’t be able to work around this issue in this scenario:
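For instance, a mutation that has to target the backing field directly, because the navigation property exposes only a read-only view (a hypothetical sketch):

```csharp
public virtual void DeleteEnrollment(Enrollment enrollment)
{
    // _enrollments has not been initialized by lazy loading,
    // so there is nothing to remove from
    _enrollments.Remove(enrollment);
}
```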

The enrollments collection is always empty here. To fix the issue you need to introduce a dirty hack:
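A sketch of that hack:

```csharp
public virtual void DeleteEnrollment(Enrollment enrollment)
{
    // Dirty hack: touch the navigation property to force EF Core
    // to initialize the backing field via lazy loading
    var unused = Enrollments;

    _enrollments.Remove(enrollment);
}
```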

Needless to say, this is something that’s too easy to overlook. This code also violates the separation of concerns principle: it mixes domain concerns with an ORM one.

Because the issue here is with the way EF implements lazy loading, you can also overcome it by always loading all children collections eagerly. And it will work in simple cases. In more complex scenarios, however, you can have deep hierarchical aggregates with multiple collections. You can also have different update use cases, most of which would require only one of those collections. Loading the full aggregate graph eagerly all the time would entail drastic dips in performance and lazy loading is essential to avoid this.

This issue might be fixed when the EF team rolls out the new lazy loading implementation but who knows how much time it will take and whether it will fix it at all.

With NHibernate, mapping to backing fields is as simple as this (no additional hacks required):
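A FluentNHibernate-style sketch of such a mapping (an hbm.xml mapping works equally well):

```csharp
public class StudentMap : ClassMap<Student>
{
    public StudentMap()
    {
        Id(x => x.Id);

        // Map the collection through the _enrollments backing field;
        // NHibernate intercepts access and lazy-loads transparently
        HasMany(x => x.Enrollments)
            .Access.CamelCaseField(Prefix.Underscore)
            .Inverse();
    }
}
```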

I’ll give EF Core a couple points for the effort, though. The score of #3: Mapping backing fields is:
EF Core vs NHibernate – 0.2 : 1.

#4: Deleting orphaned entities

EF Core finally supports deletion of orphaned entities. This is useful when you’ve got an aggregate root that needs to have full control over its children’s lifetimes. So basically the situation we have with students and their enrollments. When deleting an enrollment from the student’s collection of enrollments, it should be deleted from the database too as they don’t make sense without the parent entity.

Here’s how to configure it in EF Core:
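A sketch of the configuration:

```csharp
modelBuilder.Entity<Student>()
    .HasMany(x => x.Enrollments)
    .WithOne(x => x.Student)
    .IsRequired()                      // an enrollment cannot exist without a student
    .OnDelete(DeleteBehavior.Cascade); // orphaned rows get deleted
```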

NHibernate’s version:
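A FluentNHibernate-style sketch:

```csharp
HasMany(x => x.Enrollments)
    .Access.CamelCaseField(Prefix.Underscore)
    .Inverse()
    .Cascade.AllDeleteOrphan(); // children removed from the collection
                                // are deleted from the database
```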

EF Core gets a full point here, good job. The score of #4: Deleting orphaned entities is:
EF Core vs NHibernate – 1 : 1.

#5: Single-valued Value Objects

Here they come, Value Objects. Value Object is a concept that assumes immutability, no identity, and inability to live outside of the host entity. Read more about them in this article: Entity vs Value Object: the ultimate list of differences.

It’s hard to overestimate how important they are for building a rich domain model. Read this article to learn why, I won’t rehash it here.

The whole topic of Value Objects can be split into single-valued (those that consist of a single value) and multi-valued Value Objects. That’s because these two types are handled differently by EF Core and NHibernate. We’ll talk about single-valued Value Objects in this section.

EF Core 2.1 has introduced a new feature called Value Conversions and it’s a perfect fit for Single-valued Value Objects.

Let’s take an example. Let’s say that each student has an email. Email is a domain concept and you don’t want to fall into the trap of primitive obsession here by representing emails as strings. And so you want the domain class to look like this:
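A sketch of the domain classes (the ValueObject base class comes from the article linked below; the validation rule is an illustration):

```csharp
public class Student : Entity
{
    public virtual Email Email { get; protected set; }
}

public class Email : ValueObject<Email>
{
    public string Value { get; }

    private Email(string value)
    {
        Value = value;
    }

    public static Email Create(string value)
    {
        // Validation lives in one place, next to the data it protects
        if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
            throw new ArgumentException("Invalid email", nameof(value));

        return new Email(value);
    }

    protected override bool EqualsCore(Email other) => Value == other.Value;
    protected override int GetHashCodeCore() => Value.GetHashCode();
}
```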

(BTW, I’m using the ValueObject base class from this article here.)

Thanks to the Value Conversions feature, this entity is very easy to map to the database now. You only need to do this:
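A sketch of the conversion setup:

```csharp
modelBuilder.Entity<Student>()
    .Property(x => x.Email)
    .HasConversion(
        email => email.Value,         // Email -> string (to the database)
        value => Email.Create(value)  // string -> Email (from the database)
    );
```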

where the first delegate tells EF how to convert the value from Email to string, and the second – how to do the backward conversion.

With NHibernate, you need to introduce a backing field for this purpose:
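A sketch (the mapping then targets the _email field instead of the property):

```csharp
public class Student : Entity
{
    private string _email;  // the field NHibernate actually maps

    public virtual Email Email
    {
        get => Email.Create(_email);
        set => _email = value.Value;
    }
}
```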

As you can see, the EF Core’s syntax is cleaner as it doesn’t require a backing field in the domain entity. Therefore, the score of #5: Single-valued Value Objects is:
EF Core vs NHibernate – 1 : 0.8.

#6: Multi-valued Value Objects

With multi-valued Value Objects, the situation for EF Core is not as great.

You can map multi-valued Value Objects using Owned Entity Types. And the EF Core team has made some strange design decisions with this feature.

Internally, owned types are implemented as regular entities which means:

  • They have an Id property (declared as a shadow property, i.e. it doesn’t appear in the domain class), and
  • EF Core tracks changes in them just like it does changes in regular entities.

This is not how Value Objects work. They shouldn’t have an Id, and EF’s Change Tracker should attribute all changes in them to their parent entities, not to the owned types themselves.

Alright, these are internal implementation details and you shouldn’t care about them per se. What you should care about is the limitations those details entail. So, what are they?

First of all, it’s nullability. Let me show that with an example. Let’s say that students in our domain model have an address, and this address is allowed to be null:
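A sketch of this setup (the ValueObject base class is the same one as before):

```csharp
public class Student : Entity
{
    public virtual Address Address { get; set; }  // may legitimately be null
}

public class Address : ValueObject<Address>
{
    public string City { get; }
    public string Street { get; }

    public Address(string city, string street)
    {
        City = city;
        Street = street;
    }

    protected override bool EqualsCore(Address other) =>
        City == other.City && Street == other.Street;

    protected override int GetHashCodeCore() =>
        (City?.GetHashCode() ?? 0) ^ (Street?.GetHashCode() ?? 0);
}

// Configuration: map Address as an owned type stored in the Student table
modelBuilder.Entity<Student>().OwnsOne(x => x.Address);
```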

If you try to create a student without this address:
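For example:

```csharp
var student = new Student { Name = "Alice", Address = null };
context.Students.Add(student);
context.SaveChanges();  // throws
```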

you will get an exception:

An unhandled exception of type ‘System.InvalidOperationException’ occurred in Microsoft.EntityFrameworkCore.dll
The entity of type ‘Student’ is sharing the table ‘Student’ with entities of type ‘Address’, but there is no entity of this type with the same key value ‘{Id: -2147482647}’ that has been marked as ‘Added’.

Why this exception? Because EF Core tries to support storing owned types in separate tables out of the box, and this imposes some limitations. In particular, it’s unclear how to differentiate between a null owned type stored in the parent entity’s table and an owned entity with its own table that has the Id of -2147482647 and all columns set to null. A quite artificial limitation that could be easily avoided had EF Core not tried to do so much magic, if you ask me.

The proposed workaround here is to use the Null Object pattern. In other words, if a student has no address, still create one but set all its fields to nulls:
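A sketch of that workaround:

```csharp
var student = new Student
{
    Name = "Alice",
    // Null Object: an "empty" address instead of a null reference
    Address = new Address(null, null)
};
```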

This just doesn’t work from the DDD perspective, though. The Address value object might have its own invariants and one of them could be that the city and the street must not be null. So, another violation of the domain model’s encapsulation here.

How does NHibernate deal with this, you might ask? For starters, it doesn’t treat value objects as entities with their own Ids. Second, if the Address property is null in the domain class, NHibernate will set all columns that relate to this value object to nulls when saving it to the database. And it will apply the same convention when materializing the student from the database.

Here’s the corresponding mapping for NHibernate:
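A FluentNHibernate-style sketch:

```csharp
public class StudentMap : ClassMap<Student>
{
    public StudentMap()
    {
        Id(x => x.Id);

        // Address is a component: no Id of its own,
        // its columns live in the Student table
        Component(x => x.Address, address =>
        {
            address.Map(x => x.City);
            address.Map(x => x.Street);
        });
    }
}
```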

But does it mean NHibernate’s components don’t support storing Value Objects in separate tables? That’s right. And that’s a good thing. A feature should do only one thing, and it should do it well. If you want to store value objects in separate tables, you can host them inside an entity, although I don’t recommend doing that for 1-to-1 relationships. It’s much easier to store them in the parent entity’s table instead.

This issue should be fixed in EF Core 3.0, though. The team is going to mimic NHibernate’s behavior with regards to nulls.

The next limitation is that if you have a hierarchy of domain entities, you won’t be able to use multi-valued Value Objects declared in the derived class. So, something like this won’t be possible either:

There were a lot more issues with owned types in EF Core 2.0, which are fixed in version 2.1. The problem is that the fixes seem superficial and don’t work well together.

For example. EF Core 2.1 now supports read-only owned types. And you don’t even have to declare a parameter-less constructor in them, EF Core will find the appropriate constructor given that you define one that sets up all of the properties. Here’s the Address value object once again and the corresponding mapping:

Note the use of UsePropertyAccessMode, this does the trick.

You can now also have your value objects contain a navigation property, like for example:

And that’s great. A long-awaited enhancement. But these two features don’t work together. In other words, you can’t define the value object like the following which is both immutable and contains a navigation property:

And I assume that’s just the tip of the iceberg. I haven’t worked with EF Core in production, just toyed with it in my spare time, and who knows how many more such pitfalls there are. All because the team tries to fit a square peg into a round hole (implementing owned types as regular entities). Just do it the way NHibernate implemented its Component feature already.

And yes, NHibernate supports read-only Value Objects, allows you to declare them in derived types, behaves well when you introduce a navigation property in them, and even allows for adding a collection of properties (not that you should ever do that, though).

So once again we have a feature that’s usable in simple scenarios only. The score of #6: Multi-valued Value Objects is:
EF Core vs NHibernate – 0.4 : 1.

#7: Domain Events

Domain events are an important part of your model in DDD. They represent domain-specific changes in your domain. I talked about them in my DDD in Practice Pluralsight course and also wrote about them here: Domain events: simple and reliable solution. But let me recap it real quick.

The best way to implement domain events is using the pattern that I call “commit before dispatching”. In other words, when you dispatch the events after the database transaction is complete. To do that, you need to introduce a collection of domain events into the base entity class:
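A sketch of such a base class (member names are illustrative):

```csharp
public abstract class Entity
{
    private readonly List<IDomainEvent> _domainEvents = new List<IDomainEvent>();

    public virtual IReadOnlyList<IDomainEvent> DomainEvents => _domainEvents;

    protected virtual void RaiseDomainEvent(IDomainEvent domainEvent)
    {
        // Events are only recorded here; they get dispatched
        // after the database transaction completes
        _domainEvents.Add(domainEvent);
    }

    public virtual void ClearEvents()
    {
        _domainEvents.Clear();
    }
}
```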

In order to dispatch those events, you need to be able to listen for changes in your domain objects. This is how you can do that with NHibernate:
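A compressed sketch of such a listener (NHibernate 5 also requires implementing the async counterparts, omitted here; the listeners are registered for the post-commit event types, and DomainEvents.Dispatch is a hypothetical static dispatcher):

```csharp
public class EventListener :
    IPostInsertEventListener, IPostUpdateEventListener,
    IPostDeleteEventListener, IPostCollectionUpdateEventListener
{
    public void OnPostInsert(PostInsertEvent ev) => Dispatch(ev.Entity);
    public void OnPostUpdate(PostUpdateEvent ev) => Dispatch(ev.Entity);
    public void OnPostDelete(PostDeleteEvent ev) => Dispatch(ev.Entity);

    // Fires when a child collection changes, even if the parent itself doesn't
    public void OnPostUpdateCollection(PostCollectionUpdateEvent ev) =>
        Dispatch(ev.AffectedOwnerOrNull);

    private void Dispatch(object obj)
    {
        if (obj is Entity entity)
        {
            foreach (IDomainEvent domainEvent in entity.DomainEvents)
                DomainEvents.Dispatch(domainEvent);

            entity.ClearEvents();
        }
    }
}
```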

Note two things here:

  • The dispatch is done after the transaction is committed.
  • You listen not only for changes in the objects themselves, but also for changes in their collections. It’s important because you might want to update a collection of children and not the parent entity itself, and this should still be considered a change from the DDD perspective.

In EF Core, you can do the same using the following code:
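A sketch using the change tracker (the DomainEvents.Dispatch helper is again hypothetical):

```csharp
public override int SaveChanges()
{
    // Capture all tracked domain objects before saving
    List<Entity> entities = ChangeTracker.Entries()
        .Select(x => x.Entity)
        .OfType<Entity>()
        .ToList();

    int result = base.SaveChanges();  // completes the transaction

    // Dispatch only after the changes are committed
    foreach (Entity entity in entities)
    {
        foreach (IDomainEvent domainEvent in entity.DomainEvents)
            DomainEvents.Dispatch(domainEvent);

        entity.ClearEvents();
    }

    return result;
}
```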

The score of #7: Domain Events is:
EF Core vs NHibernate – 1 : 1.

#8: Many-to-many relationships

EF Core still doesn’t support many-to-many relationships. I personally don’t use many-to-many relationships often but sometimes, they do pop up, and it’s nice to have this feature in the ORM.

Let’s take the following example:

EF Core vs NHibernate: Many-to-many


The reason why I don’t use many-to-many relationships that much is because it’s pretty rare that the intermediate table contains only the (composite) primary key. More often than not, this relationship contains some additional information, and so it makes sense to elevate it to its own entity.

For example here:

EF Core vs NHibernate: Many-to-many with additional column


you can see the additional column StudentSince. And because it’s generally not a good idea for fully-fledged entities to have composite keys, you can see it now has its own dedicated primary key.

Alright, so EF Core doesn’t support many-to-many relationships, and you always have to introduce an entity for the intermediate table. That actually wouldn’t be so bad (unless, of course, you’re porting a legacy project from EF6 with lots of existing many-to-many relationships, in which case it would) if you could encapsulate that additional entity and hide this technical inconvenience from the clients of your domain model.

But you can’t.

First of all, you must have explicit foreign key properties (StudentId and InstructorId) defined in that entity:
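A sketch of that intermediate entity:

```csharp
public class StudentInstructor : Entity
{
    // The explicit foreign key properties EF Core insists on
    public virtual long StudentId { get; set; }
    public virtual long InstructorId { get; set; }

    public virtual Student Student { get; set; }
    public virtual Instructor Instructor { get; set; }

    public virtual DateTime StudentSince { get; set; }
}
```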

But that’s a minor issue. The more important one is that this class must show up in the domain model. You can’t hide it there using a trick like this:

That’s because in EF Core, for some reason, the type of the property has to match the type of the underlying backing field, and there’s no way to tweak the mapping to make it work.

And so you have to come up with a much less appealing workaround:
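A sketch of that workaround:

```csharp
public class Student : Entity
{
    private readonly List<StudentInstructor> _student2Instructors =
        new List<StudentInstructor>();

    // The intermediate entity leaks into the domain model
    public virtual IReadOnlyList<StudentInstructor> Student2Instructors =>
        _student2Instructors.ToList();

    public virtual IReadOnlyList<Instructor> Instructors =>
        _student2Instructors.Select(x => x.Instructor).ToList();
}
```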

Clearly, the Student2Instructors collection is not something you’d like to see in your domain model, it’s another ORM concern leaking into it.

And that’s not all. You also get an N+1 problem here:

Instead of just 2 database roundtrips (assuming lazy loading is on) for the student itself and its collection of instructors, EF Core loads each instructor separately.

This one is not an issue with the many-to-many mapping itself but rather with the lack of proper fetching settings, which becomes especially painful when working with many-to-many relationships. In NHibernate, you can explicitly specify which relationships to load eagerly, lazily, or not at all. EF Core offers no such flexibility.

With NHibernate, you can map many-to-many relationships, and it just works exactly as you’d expect it to:
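A FluentNHibernate-style sketch (table and column names are assumptions):

```csharp
public class StudentMap : ClassMap<Student>
{
    public StudentMap()
    {
        Id(x => x.Id);

        // The intermediate table stays a persistence detail;
        // the domain model sees only the Instructors collection
        HasManyToMany(x => x.Instructors)
            .Access.CamelCaseField(Prefix.Underscore)
            .Table("StudentInstructor")
            .ParentKeyColumn("StudentId")
            .ChildKeyColumn("InstructorId");
    }
}
```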

And even if for some reason you want to introduce an explicit class for the intermediate table, you are still able to encapsulate the work with it. It doesn’t have to show up in the domain model.

The score of #8: Many-to-many relationships
EF Core vs NHibernate – 0 : 1.


The total score is:
EF Core vs NHibernate – 4.5 : 7.8.

The EF Core team is putting in a lot of effort, and EF Core is slowly approaching parity with NHibernate. But it’s still light years behind, and I haven’t even mentioned all the other, non-DDD-related features.

And frankly, I’m not sure it will ever reach parity. How hard can it be to look at NHibernate and copy its functionality, at least in the key areas? And still, with all the available resources, Microsoft’s flagship ORM is not even close to a competitor developed by only a handful of people.

And if you insist that there’s no one true way and that maybe the EF Core team has its own view on how ORMs should work, just look at EF’s progression. The more recent the release, the closer it is to NHibernate in terms of key design decisions. The team eventually comes to the same conclusions (see ## 2, 3, 4, 6), except that in NHibernate, they were implemented more than a decade ago.

Microsoft should have adopted NHibernate from the very beginning instead of trying to reinvent the wheel. That train is long gone, of course, and no one is going to abandon EF Core, so the next best thing would be to speed up the convergence and finally start looking at how others solve the same problems. Implement proper multi-valued Value Objects (read: copy NHibernate’s Component feature), fix the work with disconnected graphs of objects and lazy loading. That would be a good start.

Stop working on features that shouldn’t have made it to the ORM in the first place. Such as the soft deletion feature for example. Soft deletion should be the domain’s responsibility, not something you delegate to the ORM.

Developers, try out NHibernate. I’ve been using it for building highly encapsulated and clean domain models for many years. And now that it supports both async operations and .NET Standard, there’s no reason not to. The combination that often works best for me is: NHibernate for commands (write operations) and Dapper for queries (read operations).

If you’d like to look at examples of those domain models, check out these GitHub repositories of mine: one, two. I also recommend you watch these two of my Pluralsight courses: Domain-Driven Design in Practice and Refactoring from Anemic Domain Model Towards a Rich One.

Alright, the comparison turned out to be quite harsh, but someone had to say all this. Hopefully, one day there will be no real difference between the two ORMs in terms of DDD, or maybe EF Core would even do that much better than NHibernate.

UPDATE 6/23/2018

  • Updated #7 (EF Core actually supports this case).
  • Added #8: Many-to-many relationships.

  • Fábio Faria

    Thank you, quite explanatory as always.
    Another thing I’ve noticed is the continued awfulness around many-to-many relationships. How does NHibernate deal with this? I’ve never used NHibernate, hence the question.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      That’s something I’ll need to look at, haven’t tried many-to-many relationships with EF Core yet. And actually, don’t use them very often with NHibernate either. Could you elaborate on what you don’t like with them in EF?

      • Fábio Faria

        I don’t like the fact that we still need to create a 3rd entity in order to retrieve the data from the DB. I read somewhere that it’s on their roadmap to change this…but it didn’t happen yet :). Because of that and some of the limitations you mentioned in your article, I still have a separation from Data models and Domain models. It seems that Nhibernate is way more DDD friendly, so probably I wouldn’t need this separation, I will give it a try.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Added #8: Many-to-many relationships

          • Fábio Faria

            Thank you for the Nhibernate example.

  • Александр Смирнов

    You made a mistake in #2: you can work with disconnected entities in EF Core as you showed above; the issue is that you can’t use this with shadow properties. You include the navigation’s id and assign it, instead of creating or loading the entire navigation object.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      You mean do something like this?

      int courseCachedId;
      using (SchoolContext context = new SchoolContext(optionsBuilder.Options))
      {
          courseCachedId = context.Courses.Last().Id;
      }

      using (SchoolContext context = new SchoolContext(optionsBuilder.Options))
      {
          Student student = context.Students.Find(id);
          student.FavoriteCourseId = courseCachedId;
      }

      Well, that’s the whole point – being able to work with domain objects instead of their ids.

  • Fábio Faria

    Also, you mention that you use “Dapper” for queries and “NHibernate” for commands. What is the advantage of separating queries and commands over using entities specific repositories, if you don’t use event sourcing?

    • Johannes Norrbacka

      I would love a blog post about how Vladimir combine Dapper and Nhibernate smoothly. We use Nhibernate and are struggling with performance, and it would surely be interesting to read a post about how we can use Dapper to improve on this.

      • http://enterprisecraftsmanship.com/ Vladimir Khorikov

        The basic idea – not using NHibernate/EF when you need to just show some data from the database on the UI. This avoids unnecessary transformations (raw data -> domain objects -> dtos). And, as Anders pointed out above, you can also manually fine tune your queries in each separate case, which is often not possible when using a full ORM. I wrote on a similar topic here: https://enterprisecraftsmanship.com/2015/04/20/types-of-cqrs/

        • Piotr Bagrowski

          I didn’t want to write queries using Dapper and didn’t want to do data -> domain -> DTOs/VMs transformation (probably using Automapper). I decided to create another NHibernate’s ClassMaps for ViewModels/DTOs (outside domain’s scope) and set ReadOnly() flag for them to improve performance a bit.

    • Anders Baumann

      I also use NHibernate and Dapper. The main reason is that with Dapper you get full control of the SQL. It is really difficult to performance tune the NHibernate queries and the generated SQL is difficult to read.
      By the way: Don’t use repositories. NHibernate is already an abstraction over the database. No need to put an abstraction on top of an abstraction. https://ayende.com/blog/4784/architecting-in-the-pit-of-doom-the-evils-of-the-repository-abstraction-layer

      • Fábio Faria

        I have to agree with you that NHibernate, and EF for that matter, already provide an abstraction over the database. However, if you don’t want your services, in DDD, to be attached to a specific ORM, you need to add an abstraction that just encapsulates how to fetch data. Needless to say, nothing else is needed: as you pointed out very well, once this encapsulation is done, NHibernate and EF already deal with the rest of the abstractions.
        Thank you for the link, really good reading 🙂

      • Pete Weissbrod

        Agreed on all points, but the main feature of EF/NHibernate is not SQL generation; it is object lifecycle management.

        • Anders Baumann

          Sure. Use NHibernate for commands (update and create). Use Dapper for queries. Simple CQRS.

  • Mariusz Macheta

    #7: you can totally do this with Entity Framework by using the change tracker

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      When working with domain events, you want to dispatch them after the db transaction is completed, not before. You also need to track changes in collections, which is impossible to implement in EF Core as far as I know.

      • Mariusz Macheta

        So it’s a matter of calling DispatchEvents right after base.SaveChanges (which completes the transaction).

        Hmm, I’m not sure about that point on tracking changes in collections. In the proposed solution, there is no listening for anything. Instead, objects publish events into an in-memory collection, and then during/after SaveChanges you retrieve every object that was loaded from the db (including child collection objects) and, if it has published any events, you dispatch them.

        • John Bouma

          Why not just override SaveChanges to:

          IEnumerable<IDomainEvent> toPublish = this.ChangeTracker.Entries()
              .Select(e => e.Entity)
              .OfType<Entity>()
              .Where(t => t.DomainEvents.Any())
              .SelectMany(t => t.DomainEvents);

          await base.SaveChangesAsync();

          foreach (IDomainEvent e in toPublish)
              Dispatch(e);

          • http://enterprisecraftsmanship.com/ Vladimir Khorikov

            You are right, EF Core does support this use case. Updated the article.
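
            For reference, a complete version of that override might look like the following sketch (the Entity base class with its DomainEvents collection and the _dispatcher are assumed abstractions, not EF Core APIs):

            ```csharp
            public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default)
            {
                // Collect events from all tracked entities before saving
                List<IDomainEvent> events = ChangeTracker.Entries()
                    .Select(e => e.Entity)
                    .OfType<Entity>()                  // assumed base class exposing DomainEvents
                    .SelectMany(e => e.DomainEvents)
                    .ToList();

                // Complete the database transaction first...
                int result = await base.SaveChangesAsync(cancellationToken);

                // ...then dispatch, so handlers only see committed state
                foreach (IDomainEvent domainEvent in events)
                    _dispatcher.Dispatch(domainEvent); // assumed dispatcher abstraction

                return result;
            }
            ```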

      • Harry McIntyre

        This still gives me the heebie-jeebies. Surely you want the domain events to be in a transaction too…

  • Hector Carballo

    Still a lot of ORM concerns in domain models. Domain events can be used to decouple the domain and the persistence; then use EF or NHibernate in the data access layer (where they belong) for accessing the DB. The main issue with my approach is reconstructing domain objects (event sourcing is overkill most of the time). I use a mapper to map data access objects to domain objects, and then manually set the properties that can’t be mapped (collections of value objects, for example, which can be persisted in a separate table with its own Id, or as a JSON string)

  • John Smith

    > #1 EF Core vs NHibernate – 0.9 : 1.

    I know that the difference is minor, but could you explain in more detail why EF Core is ~10% worse than NHibernate in this aspect?

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      That’s because EF Core loads a related entity when you refer to that entity’s Id. For example, EF Core loads FavoriteCourse here (assuming lazy loading is on):

      Student student = context.Students.Find(id);
      int favoriteCourseId = student.FavoriteCourse.Id;
      This is unnecessary: the favorite course’s Id is already in memory after EF loaded the student itself:

      /* Result includes FavoriteCourseID */
      SELECT * FROM dbo.Student WHERE StudentID = @ID
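
      One way to sidestep the extra load (a sketch; it assumes FavoriteCourseId is mapped, e.g. as a shadow property) is to read the foreign key through the change tracker instead of the navigation property:

      ```csharp
      Student student = context.Students.Find(id);

      // Reads the FK column value that was loaded together with the student;
      // does not trigger lazy loading of FavoriteCourse.
      int favoriteCourseId = (int)context.Entry(student)
          .Property("FavoriteCourseId")
          .CurrentValue;
      ```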

  • Vahid N.

    NH will never get EF’s traction, because people don’t need a PhD to work with EF, and I’m not kidding!

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      If your point is that you need to be smart to use NHibernate but not EF Core, I would argue the opposite. A lot of things you need to work around in EF Core just work in NHibernate out of the box.

  • http://radblog.pl Radek Maziarka

    After your article, my question is: how could Entity Framework become so popular while NHibernate fell in popularity?

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      That’s a great question. I think the main reason is that when Microsoft comes up with an alternative to an open source library, it automatically becomes the default choice for many developers, especially those who are just starting out. This has happened many times: NUnit vs MSTest and Nancy vs ASP.NET MVC are examples off the top of my head.

      • http://radblog.pl Radek Maziarka

        I would say that NUnit won against MSTest, but I get your point 🙂

      • John Bouma

        Except they hired James Newton-King

      • Matt

        And that’s exactly what they’re going to do in .NET Core 3.0 🙂

  • Muhammad Haggag

    A nasty side effect of the way EF Core modeled owned types is its unintended interaction with lazy loading. Since owned types are their own “entities”, they must also conform to all the lazy loading rules; their properties must be virtual, default constructors must exist, and so on. It also generates messy SQL.

    See the discussion here: https://github.com/aspnet/EntityFrameworkCore/issues/10787

  • John Bouma

    RE: #6

    The not-null requirement applies only if you use the default behavior, which nests the owned entity in the same table as the owner.

    You can always do
    builder.OwnsOne(t => t.Prop).ToTable("table");

    This creates a new table for the owned entity with a primary key/foreign key to the owning entity. This also lets you enforce all your business constraints. You don’t have to put any Ids in the owned entity. Like all owned entities, it’s automatically loaded when the owner is loaded, and now it can be null.
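
    Spelled out in OnModelCreating, that configuration might look like this sketch (the Order/ShippingAddress names are illustrative, not from the article):

    ```csharp
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Move the owned type into its own table instead of nesting its
        // columns into the owner's table; the value can then be absent.
        modelBuilder.Entity<Order>()
            .OwnsOne(o => o.ShippingAddress)
            .ToTable("OrderShippingAddress");
    }
    ```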

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      only if you use the default behavior which nests the owned entity in the same table as the owner

      Well, that’s exactly the use case I want to implement for all of my Value Objects. Storing them in separate tables is not the best way to deal with VOs.

      • John Bouma

        What happened to persistence ignorance? If your repositories are returning fully formed aggregates, how they are stored should be of no consequence. If you want to customize your persistence and mapping behavior, then you should be using Dapper, not EF or NHibernate.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Persistence ignorance means you are handling the domain model and the database separately, not that you completely forget about the database. Not creating additional tables when working with Value Objects should be the default choice, and this choice shouldn’t affect the domain model.

          • Harry McIntyre

            Why? To reduce the number of tables being generated, and reduce joins? These seem like optimisations, rather than deal-breakers.

          • http://enterprisecraftsmanship.com/ Vladimir Khorikov

            Simplicity of dealing with the database. Reducing the number of joins helps too. This alone is not a deal-breaker of course.

  • ThomasDC

    Excellent write-up! By the way, why not publish a new NHibernate course on Pluralsight? The most recent dedicated one dates back to 2012 and doesn’t even cover things like fluent mapping. It would increase the visibility of this excellent, mature framework.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      Thanks 🙂 That’s actually a great suggestion, I’ll try to create one.

      • Niko Háquina

        Excellent idea! Vladimir’s previous courses have been my main resource for learning (Fluent) Nhibernate so far. A more in-depth course on it would be more than welcome.


      • Antonio Castro Jr

        Looking forward to the next NHibernate course.

      • Mikkel Riise Lund

        Still working on it, huh? 😉

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          It’s still on my list but with a lower priority. I’m currently working on another (bigger) project that I will hopefully announce soon.

  • heap

    “And because it’s generally not a good idea for fully-fledged entities to have composite keys” … Interesting, have you written more about that by any chance? 🙂

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      Not in detail, no. The general idea is that it’s hard to maintain the mapping between entities and tables with composite primary keys, and it’s harder to work with them at the database level too, in case you need to write some SQL.

  • Jer0enH

    I have a hard time understanding why lazy loading is so important. Can’t help it, but invoking a round-trip to the database on property access seems like an antipattern. An ORM works best in combination with DDD if you separate reads from writes anyway; only with “writes” do you load the aggregate, and write operations on an aggregate will typically require the full aggregate to be loaded anyway to validate invariants. So why not make that explicit?

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      I’m planning to write a separate article on this topic. But in short: this approach does work, but only in simple cases. Imagine a domain model with multiple one-to-many relationships (collections of child entities) in an aggregate, where there are multiple write operations working with that aggregate, and each of them requires one or two of the collections but not all of them. Would you still load the full aggregate into memory? What if some of the child entities also have a one-to-many relationship?

      Also, without lazy loading, it’s much harder to follow case #1 (referencing related aggregates by their class, not by Id). Hence the domain model ends up filled with Ids, which is not too DDD-friendly.

      • Jer0enH

        I’d say such a domain model is probably not well factored, or even that DDD is not the right fit for this problem

  • Michael Hodgson

    Great post that has definitely encouraged me to give NHibernate a look, but many of the limitations of EF Core presented here stem from an early decision in how to reference related entities. It is not unusual to have domains where the graph of related domain entities is very deep, possibly allowing navigation through the whole domain. There is little to prevent us from modifying the whole graph in a single transaction, which defeats one of the points of aggregates in the first place.

    I know you have written about this already, but it seems at least still up for debate. See Vaughn Vernon’s (author of the red DDD book) excellent discussion on the subject: Effective Aggregate Design Part II.

    If we instead choose to reference other aggregates by Id rather than by reference, in order to steer developers using our domain objects away from updating a large graph of entities in a single operation, instead favoring eventual consistency, then many of the weaknesses of EF Core presented here don’t really matter. This becomes particularly apparent if we start to distribute our domain (e.g. microservices), where aggregate boundaries are often consistent with service boundaries, and ‘domain events’ must be distributed, perhaps by a service bus.

    I find myself still on the fence on this one. I love the expressiveness of being able to work with objects rather than Ids, for example, but it’s all too easy to end up with spaghetti, especially if you’re working as part of a team with a range of experience.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      I’ve read Vernon’s article, here’s my response to (the part about Ids of) it: https://enterprisecraftsmanship.com/2016/03/08/link-to-an-aggregate-reference-or-id/

      I’d consider using Ids an optimization technique: something to avoid by default (at least when working with an ORM) within the boundaries of a bounded context. “Within the boundaries of a bounded context” is an important clarification here: you won’t be able to avoid Ids across multiple BCs/microservices, that’s for sure.

      I understand the concern, though. It does make it simpler to update the related entity. However, with proper encapsulation, such things are easy to catch. Here’s an example I gave earlier:

      var order = FindOrder(id);
      order.Customer.AddOrder(new Address(...), new Product(...), ...); // ???
      Here, you can modify the order’s customer while working with the order itself, but because Customer has a limited API surface (in other words, it’s not anemic), it isn’t that easy to do. You’d need to call one of those rich APIs to perform the modification, and it would be obvious that something wrong is going on when you review such code.

      • Michael Hodgson

        Yes, I see that within a BC it makes a lot more sense. A subject I must revisit, as I find that many times I end up with a single aggregate root per bounded context (2 or 3 at most). Perhaps I’m missing a trick here. Any tips on discovering natural BC boundaries?

        I have had good success with your refactoring-anemic-domain-model tips in reaching useful and powerful aggregate roots. I guess BCs are next on the list.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Finding proper BC boundaries is tricky. Trickier (and more important) than refactoring towards a rich domain model.

          One guideline I’d offer is to start with a large BC at first. It’s easier to separate BCs (given the code quality is good) than to merge or redesign the boundaries between them.

          This is akin to finding proper boundaries for microservices. And the guideline is also the same: if you don’t know for sure (like really sure, which only happens when you build version two of the application), start off with a monolith, because it’d be harder to modify the boundaries if the services are physically separated from each other. Extract microservices out of the monolith only when there’s too much complexity piled up.

  • Robert Giesecke

    Hi Vladimir, thanks for the post.
    Still in the middle of reading it, but I was triggered by seeing something really nasty in a place where people might look for solutions.
    Please, don’t ever do that! And more importantly: don’t show it to others as if they should copy it into their code.
    Not only does it create a new list + array every time someone accesses the property, it also violates what everybody using your classes would consider a perfectly valid assumption about a property:

    var student = new Student();
    Assert.Same(student.Enrollments, student.Enrollments);

    The idiomatic way of doing that in C# is not hard at all:

    public virtual IReadOnlyList<Enrollment> Enrollments { get; }

    public Student()
    {
        Enrollments = _enrollments.AsReadOnly();
    }

    I guess you just didn’t want to add the noise to your sample code. Couldn’t resist commenting.
    Back to reading… 🙂

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      Hi Robert, thanks for the comment.

      Your concern about performance is valid in scenarios with high performance requirements, but in most enterprise-level applications performance is not the main pain point, and when it is, it’s usually the database, not the C# part of the application.

      I don’t think that asserting reference equality of the collections is a valid assumption. Why would you ever need that? When it comes to collections, it’s their content that matters. We can say that they have Value Object semantics (equal as long as the elements inside are the same).

      • Robert Giesecke

        I still think that one should be as obvious and idiomatic as possible, and if not, there have to be good reasons. „Pit of success“ etc.
        If you expose a List property, people can expect to do a for-loop on it without getting bitten because it allocated a new list every time they accessed the property.

        And from a non-perf POV: without explicit docs, I would think I can get a reference to the enrollments which will also include ones added after I got the reference.

        Doing that just to save 1 or 2 lines might be okay to convey a point in a blog post, but it is ridiculous to put that in production.

    • Romain Deneau

      Good tips. Thanks 🙂

      We can even do it like the above to get back to a one-line statement:

      public virtual IReadOnlyList<Enrollment> Enrollments { get; } =
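
      For the one-liner to compile, a small tweak is needed: a property initializer can’t reference an instance field, but an expression-bodied property can. A sketch (the Student/Enrollment names follow the thread):

      ```csharp
      public class Student
      {
          private readonly List<Enrollment> _enrollments = new List<Enrollment>();

          // List<T> implements IReadOnlyList<T>: no wrapper allocation,
          // and enrollments added later remain visible through this property.
          public virtual IReadOnlyList<Enrollment> Enrollments => _enrollments;

          public virtual void Enroll(Enrollment enrollment)
          {
              _enrollments.Add(enrollment);
          }
      }
      ```

      The trade-off is that a caller could cast Enrollments back to List<Enrollment> and mutate it, which a ReadOnlyCollection wrapper prevents.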

  • Anonimous

    You speak of “separation of concerns” multiple times in your blog, but you put business logic in your Domain Model (entities). I mean, if you want to add or delete an OrderItem (from your example code), you should delete it directly from the database instead of loading the entire OrderItems collection and using the “AddOrderItem” or “DeleteOrderItem” methods…

    • Martín

      I don’t think I get this. Are you saying that Business Logic should be in the database?

  • Sviatoslav

    Do you have an article about any document oriented storage from DDD perspective?

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      I don’t, unfortunately. It could be a nice post, though.

      • Sviatoslav

        Looking forward to seeing it soon. I recently implemented a solution using a DDD approach with the actor model (MS Orleans) for concurrency/clustering and document storage (MongoDB) as the main persistence layer. It was quite a pleasant experience.

        • Diego Torres

          I agree. I did the same but with Akka actors

  • Mitsos

    I haven’t read the whole article, but I think MS has a different view on DDD.
    In this view, the model contains ONLY business logic and no persistence logic (which makes sense: it’s the single responsibility principle!).
    Persistence lives in the infrastructure layer, together with the repositories.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      I don’t think this view is different. The separation of the domain and persistence logic is key to maintainable software, I agree with that.

  • Lerry Mashaba

    Hi, I am on a quest to learn DDD in C# for ASP.NET Core 2. Are there any up-to-date resources I can use?

  • http://blogs.microsoft.co.il/blogs/shimmy/ Shimmy

    And you didn’t mention the TPT/TPH support that EF Core lacks.
    The only real downside of NH is that it’s destined for death sooner or later.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      I disagree on the death point. NHibernate is still being maintained, and its features that take advantage of the latest .NET developments are on par with EF’s.

  • Alberto Dallagiacoma

    Hello Vladimir, I enjoyed your Pluralsight DDD courses a lot.
    Given the case of a single-valued VO, using the ValueObject class to declare the Email property, can NHibernate execute the following QueryOver statement:

    string email = "[email protected]";
    var student = session.QueryOver<Student>().Where(s => s.Email == email).SingleOrDefault();

    without casting or type conversion issues?

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      That depends on the mapping. But yes, you can structure the mapping so that this query works. I recommend the combination of NHibernate for writes and Dapper for reads, though.

      • http://blog.albertodallagiacoma.it Alberto Dallagiacoma

        And can I kindly ask you how the mapping can be structured to support this?
        I tried, but encapsulating the property with the backing field and specifying the access mode works for writes but not for reads. No problems with HQL.
        Thanks again!

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          Ah, that was the solution I had in mind 🙂 If it doesn’t work – I don’t know what does, unfortunately.

          • http://blog.albertodallagiacoma.it Alberto Dallagiacoma

            I found a way to get this working: using QueryOver with the Criteria API:

            string email = "[email protected]";
            var student = session.QueryOver<Student>().Where(Restrictions.Eq("Email", email)).SingleOrDefault();

            Doing this, NHibernate correctly uses the encapsulated backing field and honors the mapping (even if using “magic strings” is not the best solution…).
            I’ll leave this here in case it helps someone. 😉

  • Tobiasz Tobialski

    Great article!

    I would suggest another category – mapping keys.

    1. What about mapping keys as a single VO (potentially a “nested” VO)?


    2. What about EF Core 2.2? Any important improvements?

  • Van Ly

    #5: Single-valued Value Objects
    NHibernate can do it with IUserType, and combined with a FluentNHibernate Convention, you don’t have to map it everywhere.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      I don’t quite like NHibernate’s IUserType. It introduces too much framework-related stuff into what are meant to be pure value objects.

      • Van Ly

        I thought it would promote pure value objects instead, wouldn’t it? IUserType is defined within the infrastructure layer, and the domain object doesn’t know anything about it. It’s just a kind of mapping. You don’t have to declare a separate private field just to map it to the database.

        • http://enterprisecraftsmanship.com/ Vladimir Khorikov

          I mean that in the Value Object itself, you’d need to implement members required by NHibernate, which would conflate persistence and domain concerns. To the clients of the VO, sure, it’d look quite clean. Mapping is best kept outside of domain classes.

          • Van Ly

            I’m not sure we’re on the same page yet. Why would a VO have to follow NHibernate’s requirements with IUserType? You don’t have to add `virtual` to the VO in this case because it’d never be tracked by NHibernate. Did you see my link to a sample?

          • http://enterprisecraftsmanship.com/ Vladimir Khorikov

            Looks like I had some incorrect assumptions regarding IUserType and need to revisit it.

  • Riz Panjwani

    What is the NHibernate licensing like for commercial projects? I would love to give it a try on my next project.

    • http://enterprisecraftsmanship.com/ Vladimir Khorikov

      It’s free for commercial (or any other) use.