Entity Framework 6 Is All About Scrum
April 10, 2017 - Experts' Insights
In Chapter 1 of the first module of my ASP.NET MVC course, I discuss with students strategies for rapid development and database building using Entity Framework. In my consulting work, it is quite common to find Entity Framework configured in a way that unnecessarily complicates the development team's workflow. This article discusses some of these configurations and guides teams toward the most appropriate setup.
For this article, I’m assuming a Scrum team setup like the “chapter” described in Spotify’s video about their culture: up to 8 people. I am also assuming the ASP.NET MVC 5 framework along with the latest version of Entity Framework 6 (at the time of writing, 6.1.3).
In every place where I have done consulting to date, the best configuration for evolving the database during development is the one that uses Automatic Migrations with data loss allowed. Why? Because, in development, it is common (and healthy) to build and destroy the sample database several times; defects and design problems usually surface at this stage. It is also commonplace for each developer to work on a different feature and introduce a whole new set of data, without having to depend on other developers’ database additions. Losing data in development is not really a problem: with a good data seeding procedure in place, the loss of test data during development becomes a detail.
internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext> // ApplicationDbContext: your application's DbContext
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
        AutomaticMigrationDataLossAllowed = true;
    }
}
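The seeding procedure mentioned above can live in this same class, via the Seed override that Entity Framework 6 runs after every Update-Database. A minimal sketch, assuming a hypothetical ApplicationDbContext with a Products set (the Product type and its Name property are illustrative placeholders):

```csharp
using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<ApplicationDbContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = true;
        AutomaticMigrationDataLossAllowed = true;
    }

    // Seed runs after every Update-Database.
    // AddOrUpdate keeps it idempotent, so re-running it never duplicates rows.
    protected override void Seed(ApplicationDbContext context)
    {
        // Product and Name are hypothetical; use your own entities here.
        context.Products.AddOrUpdate(
            p => p.Name,
            new Product { Name = "Sample A" },
            new Product { Name = "Sample B" });
    }
}
```

With this in place, destroying and rebuilding a local database costs a single Update-Database run.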
And what about the time to unify everyone’s code? The famous merge stage? What happens then? Well, with this setup, nothing special.
With automatic migrations enabled, running the Update-Database command makes Entity Framework calculate the difference between the current model and the database and automatically assemble a custom script for each database’s situation, that is, for each developer.
A common practice in small teams developing small systems is to set up manual migrations. At the end of the day (or of the week, or of the sprint), one programmer is responsible for discarding the colleagues’ migrations and generating a single migration that unifies the whole team’s work, and then publishing that version to a staging/production environment.
This approach is not incorrect, but it does not make sense while the system is exclusively in development. There is no need to keep an incremental history of databases that are not even in production yet. For these cases, the simple automatic configuration covers every situation that can arise before the system’s release.
But what if we are talking about a system already validated by the stakeholders and product owner(s)? In that case, we can use a mixed approach: keep the auto-migration configuration turned on, but adopt the following procedure:
- At the end of the sprint, one developer is responsible for synchronizing the automatically incremented database with the latest manual migration. First, that developer must roll back the local database to the latest manual migration version, which is easily done with the command:
PM> Update-Database -TargetMigration:MigrationName
- Immediately afterwards, the developer must generate the new sprint migration using the traditional command:
PM> Add-Migration Number123
- The other developers can roll back their local database differences using the same procedure described above, or simply delete and re-create the database; the effect is the same.
- Subsequent sprints continue to work normally with automatic migrations.
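Putting the steps above together, the end-of-sprint routine in the Package Manager Console looks roughly like this (the migration names are illustrative placeholders):

```powershell
# Roll the local database back to the last manual migration
PM> Update-Database -TargetMigration:Sprint12

# Capture everything the sprint's automatic migrations accumulated
PM> Add-Migration Sprint13

# Apply the new consolidated migration locally to verify it
PM> Update-Database
```

The generated Sprint13 migration then becomes the single artifact that travels to staging/production.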
Finally, for production, automatic migrations and automatic data loss must be turned off:
AutomaticMigrationsEnabled = false;
AutomaticMigrationDataLossAllowed = false;
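With automatic migrations off, deployments apply only the explicit, code-generated migrations. One common way to run them at application start is EF6’s MigrateDatabaseToLatestVersion initializer; a minimal sketch, assuming the same hypothetical ApplicationDbContext and the Configuration class shown earlier:

```csharp
using System.Data.Entity;

public static class DatabaseConfig
{
    // Call once at application startup (e.g., in Global.asax Application_Start).
    public static void Initialize()
    {
        // ApplicationDbContext and Configuration are the application's own types;
        // the names here are illustrative placeholders.
        Database.SetInitializer(
            new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());
    }
}
```

Alternatively, the migrate.exe tool or an Update-Database run against the production connection string achieves the same result as part of a deployment script.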