This is the second post in the series on Web API. The topic is TDD and code coverage: I am going to demonstrate how to unit test your core code as well as the API code, and finally how to measure the code coverage you achieve while testing your code base. First I will go through the changes that need to take place in the application architecture, and then I will move on to the tests, so this article is divided into two parts.
Previously I talked about setting up a self-hosted ASP.NET Web API application with OWIN. Now let's see how to test everything and get some reports on our code coverage.
Code for the application can be found here.
About previous blog post
I know. This is not the right approach. If you are going to do TDD you should write tests first and incrementally write your business logic. This is a hard discipline to follow, and occasionally you will find yourself drifting away from it. Resist. The API code was created beforehand, in the previous post, only for demonstration purposes. All code for this post was written purely TDD-style, tests first. So remember this rule of thumb: do not write your business code first; do not write a single line of code without an equivalent unit test.
Of course this is not dogmatic; some code can remain untested, like auto-generated code, third-party libraries, properties, fields or plain old C# objects. But you need to be sure that your own code works, and the most reliable way to achieve that is through TDD.
What is TDD and how is it going to help you
Test Driven Development is a professional discipline: you make sure your code is tested by writing the unit tests before the code itself.
These tests give you enough certainty to know that your code is error and bug free*. Saying "bug free" is an overstatement, but we can say with certainty that implementing TDD significantly reduces defects.
These tests give you the courage you need to refactor code. They tell you whether the code is doing what you expect or not, so you can tweak and fiddle with it freely: if a change breaks something, a test fails and you revert to the state before the change. The fear of changing code is eliminated! Code that is continuously refactored does not rot; code that is never refactored does, and you don't want to deal with that kind of code.
These tests are also there, after a long time away from the project, to let you know what is going on; they are the perfect kind of low-level documentation of your system. When you return to refactor or to implement a feature, they refresh your memory of the project and make your life easier. It's all green or red.
These tests assist you in the application design. They force you to think about good design, to write decoupled code, to build small bits of functionality.
*Bugs can of course appear in tested code, but it is highly likely that the most critical ones have been eliminated, at least at the low level. You need a more sophisticated suite of tests to be sure that the vast majority of the system has been thoroughly tested; see the test automation pyramid.
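The discipline can be illustrated with a deliberately tiny, hypothetical example in NUnit (the test framework used later in this series). The `Calculator` class and its names are invented for illustration; the point is the order of events: the test is written first and fails, and only then is the minimal production code written to make it pass.

```csharp
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    // Step 1: this test is written first. It does not even compile
    // until Calculator exists, which is the "red" phase.
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        var calculator = new Calculator();

        Assert.That(calculator.Add(2, 3), Is.EqualTo(5));
    }
}

// Step 2: written only after the failing test demanded it ("green").
// Step 3 would be refactoring, protected by the passing test.
public class Calculator
{
    public int Add(int a, int b) => a + b;
}
```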
Code is not testable. What should change and why
If you jump back to the previous post you will find some cumbersome code: poorly written, unmaintainable and impossible to test. And testing, as you saw before, does not mean attaching a debugger and manually running the application to discover errors. That will only slow you down.
See the problems in the previous codebase? First of all, everything is packed into a single project. What does this project do, anyway? Is it an API project? Does it represent the domain? What exactly does it represent?
This needs to be sorted out by layering the application, with each layer being responsible for its own domain.
Have you discovered anything else?
The previous code violates the SOLID principles, and without these principles you cannot expect a quality product in your hands.
Where is the violation though? I can find one very quickly, it is located at the PeopleController.cs:
See the highlighted code? Now stop for a moment and think how you are going to test this. How are you going to mock the ApplicationDbContext API? There is no way you can. The code is tightly coupled: PeopleController depends directly on the ApplicationDbContext object. If you want to break the PeopleController, just do some tweaks in your ApplicationDbContext. BOOM.
Wouldn't it be better if you could pass the ApplicationDbContext as a parameter? Still, the dependency would flow from PeopleController to ApplicationDbContext.
What if you introduced an interface? That would make the dependency point backwards! PeopleController now depends on an interface, not an implementation; you have won over coupling! (The ApplicationDbContext is a low-level policy, and you don't want to mix high-level abstractions with low-level ones like it. That's why it is pushed to the bottom of the abstraction stack, with a service, the PersonService, representing a higher-level module that works with the controller.)
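The inversion described above can be sketched as follows. The member names on the interface (`GetAll`, `FindById`) are illustrative assumptions; the actual service is shown later in the post.

```csharp
using System.Collections.Generic;
using System.Web.Http;

public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
}

// The abstraction the controller depends on. The concrete implementation
// (EF-backed, in-memory fake, Moq mock) is a detail the controller never sees.
public interface IPersonService
{
    IEnumerable<Person> GetAll();
    Person FindById(int id);
}

public class PeopleController : ApiController
{
    private readonly IPersonService _personService;

    // The dependency arrives from outside; nothing here news up a DbContext.
    public PeopleController(IPersonService personService)
    {
        _personService = personService;
    }

    public IHttpActionResult Get(int id)
    {
        var person = _personService.FindById(id);
        if (person == null) return NotFound();
        return Ok(person);
    }
}
```

Notice that the dependency arrow now points from the controller to the interface, and from the implementation to the same interface, instead of from the controller straight down into Entity Framework.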
There are a lot of DI containers out there, my personal preference is Autofac, and this will be used throughout this demo.
Autofac will make sure to feed all your entities with their required dependencies, you need only to configure it to do that.
The following tools are useful for achieving the goal of this demo. They cover DI (to break coupling and make code testable), unit testing (runners, frameworks), mocking, code coverage and reporting.
As a DI container I use Autofac. I find it a relatively easy yet powerful tool for solving the DI problem in your application. The documentation is quite good and there is a huge community behind it.
For code coverage I use the open-source OpenCover tool, and ReportGenerator to create HTML reports from OpenCover's XML output. These are two easy, interesting and free tools. You just create a .bat file with the commands required to locate the runners and tests, and it produces the code coverage report for you. You then view the report in the browser as a nicely formatted HTML page. More on this topic later.
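A .bat file of this kind might look roughly as follows. The package versions, paths, assembly names and coverage filter below are illustrative assumptions for a NuGet-based layout; adjust them to your own solution.

```shell
:: coverage.bat -- run tests under OpenCover, then turn the XML into HTML.
:: Paths and versions are examples only; match them to your packages folder.

"packages\OpenCover.4.6.519\tools\OpenCover.Console.exe" ^
  -register:user ^
  -target:"packages\NUnit.ConsoleRunner.3.6.1\tools\nunit3-console.exe" ^
  -targetargs:"Test.Suites\bin\Debug\Test.Suites.dll" ^
  -filter:"+[People.*]* -[*.Tests]*" ^
  -output:coverage.xml

"packages\ReportGenerator.2.5.8\tools\ReportGenerator.exe" ^
  -reports:coverage.xml ^
  -targetdir:coverage-report

:: Open the generated HTML report in the default browser.
start coverage-report\index.html
```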
Note that the architecture demonstrated is neither perfect nor the recommended way, but the different layers it consists of provide a level of abstraction which is quite good for this occasion.
First things first: I changed the application structure. The architecture consists of a database, several .NET libraries and a web application.
The code for this architecture was written with a TDD approach; only the API existed before.
The database is an SQL Server database, created by Entity Framework Code First (in previous post).
The entities are grouped into the Domain layer, along with a repository which holds all the basic, generic functionality for accessing the data.
Next is the service layer, which contains all the underlying services the API consumes in order to orchestrate work against the underlying stores. A store could be a database, an XML file, virtually anything. We don't care; this is a detail. Our application should be independent of the underlying store: we should be able to switch from SQL Server to MongoDB in seconds and have it behave as expected.
The services consume repositories, from the Domain layer.
The web application is just the Web API server and nothing else. It contains only the controllers and startup classes for OWIN.
Let’s visualize the application:
This is how it looks on Visual Studio 2015 Solution Explorer
In the Api folder is the Web API application. In the Core folder are the .NET libraries needed for accessing the data store as well as defining the business rules. Lastly, the Test.Suites folder contains all unit tests for the entire application.
High level structure:
The application layers are nicely separated, each responsible for its own business. The data access layer (People.Domain) contains all the logic behind the data store, and therefore all the business entities.
The service layer contains all the business rules of your application and has access to the business entities. This could be abstracted further by having business-related entities in this layer, which would then be mapped to the back-end entities that represent the database tables.
The API consumes these services and performs CRUD operations through them.
The OWIN layer is there to receive traffic from the consumer clients, which can be virtually anything that has access to the internet.
Again, this is not a perfect architecture, and I am not daydreaming of being a craftsman, but it gives you an idea of the abstraction levels into which the application has been divided.
I have added an AutofacConfig.cs file in App_Start folder. There, I am registering services into my DI container, in order to use them throughout the application.
You need the following packages to make it work:
This is a pretty standard DI configuration file: you register all dependencies and assign an AutofacWebApiDependencyResolver to Web API's DependencyResolver. The DependencyResolver, of type IDependencyResolver, is the interface that makes it possible to hook different DI frameworks into ASP.NET Web API; Autofac implements it for us in the AutofacWebApiDependencyResolver class.
I register the underlying store (ApplicationDbContext) twice, as different types: first as BaseDbContext and then as ApplicationDbContext itself, thanks to the AsSelf() method. I do this so it can be resolved differently by its different consumers. I also register the UserStore and other services, like the Repository and PersonService objects.
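A minimal sketch of this configuration is shown below. The method name `Configure`, the lifetime scopes and the exact set of registrations are assumptions; they follow the description above rather than the real project file.

```csharp
using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

public static class AutofacConfig
{
    public static IContainer Configure(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();

        // Let Autofac build the API controllers in this assembly.
        builder.RegisterApiControllers(Assembly.GetExecutingAssembly())
               .PropertiesAutowired(); // property injection for UserController

        // Register the store both as its base type and as itself (AsSelf),
        // so consumers can resolve whichever type they depend on.
        builder.RegisterType<ApplicationDbContext>()
               .As<BaseContext>()
               .AsSelf()
               .InstancePerRequest();

        // Generic repository and the service layer.
        builder.RegisterGeneric(typeof(Repository<>)).As(typeof(IRepository<>));
        builder.RegisterType<PersonService>().As<IPersonService>();

        var container = builder.Build();

        // Hook Autofac into Web API's dependency resolution.
        config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

        // Returned so the caller can register it in the OWIN pipeline.
        return container;
    }
}
```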
For the UserController and CustomAuthorizationServerProvider I use property injection to inject the ApplicationUserManager instance as required; that's the reason for using PropertiesAutowired: Autofac will resolve the public properties based on the types it knows.
Lastly, I return the container to the calling method in order to register it in the OWIN pipeline.
In order to have the Autofac DI container working with OWIN, I need to use the UseAutofacMiddleware extension, and UseAutofacWebApi to extend the Autofac lifetime scope added by the OWIN pipeline through to the Web API dependency scope.
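Sketched out, the OWIN startup wiring might look like this. It assumes a helper on the AutofacConfig class (mentioned earlier) that builds and returns the container; the helper's name and signature are illustrative.

```csharp
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes();

        // Illustrative helper: builds the container and assigns
        // the AutofacWebApiDependencyResolver (see AutofacConfig.cs).
        IContainer container = AutofacConfig.Configure(config);

        // Create an Autofac lifetime scope per request in the OWIN pipeline...
        app.UseAutofacMiddleware(container);
        // ...and extend that scope through to Web API's dependency scope.
        app.UseAutofacWebApi(config);

        app.UseWebApi(config);
    }
}
```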
Changes in controllers
Now controllers will have their dependencies injected.
PersonController will have IPersonService injected through its constructor, while UserController will have ApplicationUserManager injected through property injection.
These services are registered in the Autofac container, which is responsible for injecting them into the controllers based on the previous configuration.
They also give us the advantage of mocking: these dependencies can easily be mocked with the Moq framework and injected into the respective instances.
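For instance, a controller test with a mocked service might look like the sketch below. It assumes a PeopleController that takes IPersonService in its constructor, and a `FindById` method on the service; both names are illustrative.

```csharp
using System.Web.Http.Results;
using Moq;
using NUnit.Framework;

[TestFixture]
public class PeopleControllerTests
{
    [Test]
    public void Get_WhenPersonExists_ReturnsOkWithThatPerson()
    {
        // Arrange: a mock stands in for the whole service/repository/EF stack.
        var person = new Person { Id = 1, FirstName = "John" };
        var serviceMock = new Mock<IPersonService>();
        serviceMock.Setup(s => s.FindById(1)).Returns(person);

        var controller = new PeopleController(serviceMock.Object);

        // Act
        var result = controller.Get(1) as OkNegotiatedContentResult<Person>;

        // Assert: the controller returned 200 OK with the expected content,
        // and it actually asked the service for the person.
        Assert.That(result, Is.Not.Null);
        Assert.That(result.Content.Id, Is.EqualTo(1));
        serviceMock.Verify(s => s.FindById(1), Times.Once);
    }
}
```

No database, no OWIN host, no HTTP: the controller is exercised in complete isolation, which is exactly what the dependency inversion bought us.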
Changes in CustomAuthorizationServerProvider
The ApplicationUserManager property is injected here the same way it is injected into the UserController.
This is a proxy layer between the API and the repository infrastructure. Here we can handle business logic for the application, as controllers are not the ideal place for that: a controller should only glue the pieces together and be kept as thin as possible. This is the mission of the service layer, which talks to the underlying repository.
The PersonService is injected into the PeopleController as an IPersonService. This makes the PeopleController extremely easy to test, as the IPersonService can easily be mocked.
And the actual implementation
The PersonService here has a simple implementation: it just exposes the Repository's CRUD functionality, essentially calling through the IRepository API, but it could be more complex, containing various business rules.
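In sketch form, such a delegating service could look like this; the repository member names (`GetAll`, `FindById`, `Add`) are assumptions standing in for the real generic repository API.

```csharp
using System.Collections.Generic;

public class PersonService : IPersonService
{
    private readonly IRepository<Person> _repository;

    // The repository is injected, so the service never knows
    // whether the store is SQL Server, MongoDB or an in-memory fake.
    public PersonService(IRepository<Person> repository)
    {
        _repository = repository;
    }

    public IEnumerable<Person> GetAll() => _repository.GetAll();

    public Person FindById(int id) => _repository.FindById(id);

    public void Add(Person person)
    {
        // Business rules (validation, defaults, auditing) would live
        // here, before the entity ever reaches the store.
        _repository.Add(person);
    }
}
```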
For the data access layer I have created a generic repository. I chose this pattern because it makes my data access code clean and testable, with repetitive data access code encapsulated into specific actions.
The Repository does only data access; it uses the BaseContext methods to interact with the underlying store. The code above is pretty standard EF code.
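A generic repository of this shape could be sketched as follows. The interface members and the `GetDbSet<T>` call on BaseContext follow the description in this post; exact names in the real project may differ.

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    T FindById(int id);
    void Add(T entity);
    void Remove(T entity);
}

public class Repository<T> : IRepository<T> where T : class
{
    private readonly BaseContext _context;
    private readonly DbSet<T> _set;

    public Repository(BaseContext context)
    {
        _context = context;
        // GetDbSet<T> is the mockable replacement for the hidden Set<T>().
        _set = context.GetDbSet<T>();
    }

    public IEnumerable<T> GetAll() => _set.ToList();

    public T FindById(int id) => _set.Find(id);

    public void Add(T entity)
    {
        _set.Add(entity);
        _context.SaveChanges();
    }

    public void Remove(T entity)
    {
        _set.Remove(entity);
        _context.SaveChanges();
    }
}
```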
The BaseContext dependency is an abstract class which derives from IdentityDbContext. To make the code more testable, for example to mock the Set or Entry methods of DbContext, I needed to override them. Essentially, I hid the original methods (Entry<T>, Set<T>, Set), marking them as obsolete, and added two methods for getting the DbSet<T> and for using the base class Entry<T> method.
ApplicationDbContext is just an implementation of the BaseContext abstract class, which will be used for data access.
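The idea can be sketched roughly like this; the user type (`ApplicationUser`) and the replacement method names are assumptions based on the description above.

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using Microsoft.AspNet.Identity.EntityFramework;

public abstract class BaseContext : IdentityDbContext<ApplicationUser>
{
    protected BaseContext(string nameOrConnectionString)
        : base(nameOrConnectionString) { }

    // The original DbContext member is hidden and flagged, steering
    // callers toward the mockable replacement below.
    [Obsolete("Use GetDbSet<TEntity>() so the context can be mocked.")]
    public new DbSet<TEntity> Set<TEntity>() where TEntity : class
        => base.Set<TEntity>();

    // Virtual replacements: a mocking framework can override these,
    // which is impossible with the non-virtual DbContext originals.
    public virtual DbSet<TEntity> GetDbSet<TEntity>() where TEntity : class
        => base.Set<TEntity>();

    public virtual DbEntityEntry<TEntity> GetEntry<TEntity>(TEntity entity)
        where TEntity : class
        => base.Entry(entity);
}
```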
Have you noticed what happened? In order to make my application testable, I had to alter my architecture. I could have kept the previous, cumbersome one, but I wouldn't get far, since I couldn't be sure my code really works, or how it would behave under future changes. Isn't it mad not to know whether your code works? Isn't it frustrating to ship your code without being confident about its success?
Testing forced me to think about cleaner design. Testing forced me to make small, flexible, decoupled components. Testing made it possible to clean out the crap.
Again, tests should come first. That said, you first create all the necessary projects, each with its paired unit test project, and from that point on you write unit tests to drive your production code.
In this post I've covered what TDD is and how it will benefit you as a developer. I've also gone through a lot of changes in the application structure and design. These changes were necessary in order to be able to test the application in small logical parts.
After that, tooling necessary for testing was presented, as well as mocking and running the unit tests.
In the next post, I am going to explore the actual code a bit more and show how the unit tests for the Web API application are authored, essentially testing routes, controllers and action selection. I'll also do some data-driven tests with NUnit.
Finally, I will show how to install and configure the OpenCover and ReportGenerator tools and make them work together with the tests and code to produce code coverage reports.
If you liked this blog post, please like, share and subscribe! For more, follow me on Twitter @giorgosdyrra.