Monday, 27 February 2012

How do you test a project without affecting the database?

Today I sat with a colleague and went through how we could implement testing on existing code without affecting the data in the database. For anyone who answers that question with the words “Dependency Injection” – well done! Maybe this post isn’t for you!
But I wanted to give anyone asking themselves this question a short guide on how to get this working, as it is a common problem faced when adapting an existing project to include unit testing.

An example

So first of all, let’s get some basic code together to try this out:

namespace Model
{
     public class User
     {
          public User (int userId) { this.UserId = userId; }

          public int? UserId { get; private set;}
          public string UserName { get; set; }
          public string Password { get; set; }
     }
}

Next, let’s pretend we’ve got some existing DB code, possibly Entity Framework, to get this data out of the database:

namespace DataAccess
{
     public class SqlRepository : DbContext
     {
          public User CreateUser() { /*..*/ }
          public User ReadUser(int id) { /*..*/ }
          public void UpdateUser(User userToUpdate)  { /*..*/ }
          public void DeleteUser(int userId) { /*..*/ }
     }
}

You also have a Business Logic layer, which will interface with the SqlRepository for us. For simplicity, I will only deal with the UpdateUser() method:

namespace Logic
{
     public class UserService
     {
          private SqlRepository repository = null;

          public UserService() 
          {
               this.repository = new SqlRepository();
          } 

          public void UpdateUser(User updatedUser)
          {
               if ( updatedUser == null ) 
               {
                    throw new ArgumentNullException("updatedUser");
               }
               else if ( updatedUser.UserId == null )
               {
                    throw new ArgumentException("UserId must be set.");
               }
               else if ( String.IsNullOrEmpty(updatedUser.UserName) )
               {
                    throw new ArgumentException("Username is not valid.");
               }
          else if ( !PasswordIsComplexEnough( updatedUser.Password ) )
               {
                    throw new PasswordNotSecureException("Password must meet the minimum security requirements.");
               }
               else
               {
                    this.repository.UpdateUser (updatedUser);
               }
          }
     }
}

Now when we get to writing integration tests, we’ll find that we update users directly in the database, leaving the database in a possibly invalid state. A test might take the format of:

namespace Tests.Integration
{
     [TestClass]
     public class UserServiceTests
     {
          [TestMethod]
          public void WhenAServiceIsCreated_AndAnExistingUserIsUpdated_TheUserIsSavedSuccessfully()
          {
              // Arrange
              var userService = new Logic.UserService();
              var existingUser = userService.ReadUser(1);
              
              existingUser.Password = StringFunctions.BuildRandomString(50); // 50 chars of text
             
              // Act
              userService.UpdateUser ( existingUser );
              
              // Assert - assume we have implemented Equals() to compare their content
              var updatedUser = userService.ReadUser(1);
               Assert.AreEqual(existingUser, updatedUser);
          }
     }
}

The problem now is that you’ve wiped the password for that user. If it is hashed in the database, you have no way of restoring it without modifying it manually. I suppose you could re-apply the original user, but then you’re having to do a tidy-up exercise after every test. If another test depends on that user account being valid (e.g. a UI test to log that user in), then you’re going to fail more tests and the problem will only get worse.

Dependency Injection

Dependency Injection allows us to “inject” the data store we would like the code to work against. Some developers like to have their own database that can be freely modified whenever they need. But in the long term, the maintenance of tidying up this database, coupled with the cost of database connections for thousands of tests, becomes unmanageable. Ideally, you want these tests to run and pass as quickly as possible so you can get on with your work.

But as this code stands, we will always point at the database. So we’ll need to do some slight modifications to get this code more flexible, without breaking existing code.

Step 1 – Extract an interface

The easiest step is to use Visual Studio to extract an interface for you. You do this by right-clicking on the class name in the DataAccess layer > Refactor > Extract Interface.

[Screenshot: the Extract Interface dialog]

Once you’ve clicked “Select All” and “OK”, this gives you your current code, implementing a newly created interface, which I will rename IRepository (and make public).

namespace DataAccess
{
   public interface IRepository
   {
      User CreateUser();
      void DeleteUser(int userId);
      User ReadUser(int id);
      void UpdateUser(User userToUpdate);
   }
 
   public class SqlRepository : DbContext, IRepository
   {
      /* As before */
   }
}

So now we have the ability to create a TestRepository, based on IRepository, that can act like a database. Let’s quickly make one:

namespace Tests.Helper
{
   public class TestRepository : IRepository
   {
       /** Methods leaving the NotImplementedException code in place **/
   }
}



Step 2 - Adapt the service layer to accept an IRepository


Next, we adapt the UserService class with a second constructor that allows us to “inject” the repository into the class. This way, existing code still works and the test code can take advantage of the new constructor.

public class UserService
{
     private IRepository repository;

     public UserService() : 
         this ( new SqlRepository() )
     {
     }

     internal UserService (IRepository injectedRepository)
     {
          this.repository = injectedRepository;
     }

     /* As before */
}

Notice I’m using an internal constructor intentionally, as I don’t want to expose this to just anyone. What I can do is instruct the compiler that internals are visible to another assembly. This is done in the AssemblyInfo.cs file of the Logic layer (the assembly containing UserService) like this:

[assembly: InternalsVisibleTo("Tests")]

Note that the attribute (which lives in System.Runtime.CompilerServices) takes the assembly name of the assembly being granted access to the internal fields, properties, methods and constructors. If the Tests assembly were strong-named, you would also need to include its full public key here.


Step 3 - Inject the new repository

Now, we are able to modify our test to pass in the TestRepository class we created, so that when the UserService is created, it will access our implementation.

namespace Tests.Integration
{
     [TestClass]
     public class UserServiceTests
     {
          [TestMethod]
          public void WhenAServiceIsCreated_AndAnExistingUserIsUpdated_TheUserIsSavedSuccessfully()
          {
              // Arrange
              var userService = new Logic.UserService(new TestRepository());
              var existingUser = userService.ReadUser(1);
              
              existingUser.Password = StringFunctions.BuildRandomString(50); // 50 chars of text
             
              // Act
              userService.UpdateUser ( existingUser );
              
              // Assert - assume we have implemented Equals() to compare their content
              var updatedUser = userService.ReadUser(1);
               Assert.AreEqual(existingUser, updatedUser);
          }
     }
}

Now – okay – the application will throw an exception! Because we haven’t implemented the TestRepository class and have left it at its default implementation, the methods will throw NotImplementedException. But by writing some simple code, which does as much as we need to get going, we no longer rely on running our tests through the DB:

namespace Tests.Helper
{
     public class TestRepository : IRepository
     {
         private List<User> users = null;
  
          public TestRepository() { users = new List<User>(); }

         public void UpdateUser(User user)
         {
              var storedUser = users.Where(u => u.UserId == user.UserId).SingleOrDefault();

               // Throw if the user doesn't exist - this way it 'sort-of' acts like a database.
               if (storedUser == null)
               {
                    throw new InvalidOperationException("User does not exist in the TestRepository.");
               }

              storedUser.UserName = user.UserName;
              storedUser.Password = user.Password;
         }
     }
}

And now we have a testable repository.
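
For example, the validation permutations inside UpdateUser can now be exercised without ever touching a database. Here is a minimal sketch using MSTest (the namespace and test name below are my own, purely for illustration):

namespace Tests.Unit
{
     [TestClass]
     public class UserServiceValidationTests
     {
          [TestMethod]
          [ExpectedException(typeof(ArgumentNullException))]
          public void WhenANullUserIsUpdated_AnArgumentNullExceptionIsThrown()
          {
               // Arrange - inject the in-memory repository instead of the SQL one
               var userService = new Logic.UserService(new TestRepository());

               // Act - validation should throw before the repository is ever touched
               userService.UpdateUser(null);
          }
     }
}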

There are also mocking frameworks that can alleviate you of this burden entirely, but I’ve yet to explore them enough to cover them properly in this blog.
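
As a taster, here is roughly what that looks like with one of them. This is a sketch only, assuming the Moq NuGet package (and a using Moq; directive); the test below is illustrative and not part of the project above:

[TestMethod]
public void WhenAValidUserIsUpdated_TheRepositoryIsAskedToSaveItOnce()
{
     // Arrange - Moq builds a fake IRepository for us, no hand-written TestRepository needed
     var mockRepository = new Mock<IRepository>();
     var user = new User(1)
     {
          UserName = "TestUser",
          Password = "Som3$ecurePassword" // assumed to satisfy PasswordIsComplexEnough
     };

     var userService = new Logic.UserService(mockRepository.Object);

     // Act
     userService.UpdateUser(user);

     // Assert - Moq verifies the repository was asked to update exactly once
     mockRepository.Verify(r => r.UpdateUser(user), Times.Once());
}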

Summary


In this blog, we have looked at a common problem that many developers face. We have adapted the existing functionality, without breaking existing code, and extended it for use with a testing framework.
This adaptation allows you to concentrate on testing all of those permutations within the UserService, which is what we actually want to test.

I will put together a full tutorial based on this post, so that readers can try out the refactoring for themselves.

Sunday, 19 February 2012

How to implement best practices with the .NET Framework

One area that I know I’ve needed to improve is implementing some best practices within a team of developers.
The areas I am aiming to improve are:
  1. Developers being forced to use a code analysis tool, instead of ignoring the warnings
  2. Run unit tests or integration tests on a regular basis without developer intervention
  3. A way to build, package and deploy them without user intervention – including databases
  4. Reports generated from these automated processes
  5. (Optional) Developers following some kind of coding convention
My only interaction with any tool that pulls this together (with plenty of manual intervention, in my experience) was Team Foundation Server. Since this was the choice of most clients I had worked with, I was interested in improving my own skills so that when new or existing projects come along, I could implement some standards. Code quality and development would become my main priorities, with little or no manual intervention.
I have read Rapid Development by Steve McConnell. To me, this book was a bible of information on project management, team building and development practices. However, because of its generality it names no specific technologies or products. Rapid Development is broadly applicable to all development practices, but with this purchase I was specifically looking for something with products, processes and examples to implement them with.
I’ve started reading a book called Pro .NET Best Practices and this was exactly what I was after.
‘Best Practices’ is not a term the author chooses to use. Instead, he chose ‘Ruthlessly Helpful’ – the title of his blog site.
‘Best’ implies there is nothing better. However, a ‘better’ practice may suffice, as every practice is different: some work better with small teams, some with bigger teams. The author chose ‘Ruthless’ for something that requires thought and consideration, and which will be applicable to you and your team size.
Adding ‘Helpful’ implies that the practice will only serve as a benefit to you and your team. Whether you want to reduce bugs, improve product delivery or automate your deployment, any helpful practice is one worthy of consideration.
And so the term ‘Ruthlessly Helpful’ was born.
Pro .NET Best Practices covers a whole plethora of information on applying the development lifecycle properly, everything from the tools used to develop the product right through to the deliverable. This isn’t another book full of code examples, although many are provided for clarity. The book’s goal is to educate and motivate readers on better practices of software development.
I see it has all 5-star ratings on the Amazon.com site, and they are well deserved!

Wednesday, 1 February 2012

Restricting URL access without using the web.config

I’ve been lucky enough to work with 2 major clients that do not use the out-of-the-box user/role security model. If your site is not restricted by a user’s role or user name, how do you implement security? Good question!

So some examples of custom URL authorization are:

  • Access to a page is not determined solely on a role or username.
  • Allow admins to change web page access permissions on-the-fly from a maintenance page.
  • Allow pages to be restricted via a timeframe. Admin users may still be allowed access after working hours.

An example

Imagine a site that has 4 user roles:

  • Manager
  • Supervisor
  • Employee
  • Administrator

You might have a requirement that:

“An access control page needs to be created so that we (the administrators) can select the permissions of the pages through a UI. One week we might decide to extend a supervisor’s access to a subset of the manager pages. These changes may be permanent or temporary. Either way, we need an admin screen to selectively choose the permissions for each page and restrict access this way.”

Equipped with your vast ASP.NET knowledge, you could advise them to create an intermediate role of “Super-supervisor” and use web.config files to restrict user access. Responses are:

  • The existing system embeds the role so tightly that this would require too much work to implement across the business logic and reporting structure.
  • Changes to web.config to incorporate page access will require direct access to the Web box. Our application’s sys admins are not technical users and could bring down the site.
  • We have already decided to split the site into 3 sections – Public, Secure and Admin. However you choose to implement it, these are the only 3 categories we care about.
  • We also want the sitemap to update dynamically with these changes.

So let’s see what API we could use … hmmm … unfortunately:

  • Membership providers only help us identify who a logged in user is – no go.
  • Role providers only help us identify what role a specific user has – no go.

What we need is a way to check the access types allowed when a page is requested, and then restrict access from there.

How does ASP.NET do it?

It does so by using the UrlAuthorizationModule, which looks through the Web.config to locate the page or directory. Then, using the defined rules, it will either allow access or send a 401 (Unauthorized) response down the pipeline. Later on in the pipeline, the FormsAuthenticationModule sees the 401 and redirects the user to the login screen.

The good news is that it works in a very similar way to what we want. Sadly, it isn’t inheritable, so we have to roll our own. So let’s have a stab at it.

Rolling our own UrlAuthorizationModule

Modules work by plugging into the HTTP pipeline for all requests. So just implement IHttpModule, add code to the Init method, and hook into the AuthorizeRequest event.

public class CustomUrlAuthorizationModule : IHttpModule
{
     private const int Unauthorised = 401;

     #region IHttpModule Members

     public void Dispose()
     {
          // Do nothing
     }

     public void Init(HttpApplication context)
     {
          context.AuthorizeRequest += new EventHandler(context_AuthorizeRequest);
     }

     void context_AuthorizeRequest(object sender, EventArgs e)
     {
          // Work out access from the URL
     }

     #endregion
}
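
One step the snippet above doesn’t show: the module has to be registered in web.config before ASP.NET will invoke it. Here is a minimal sketch, assuming the IIS 7+ integrated pipeline (the “WebApplication” namespace and assembly name are placeholders for your own project):

<system.webServer>
  <modules>
    <add name="CustomUrlAuthorization"
         type="WebApplication.CustomUrlAuthorizationModule, WebApplication" />
  </modules>
</system.webServer>

(Under the classic pipeline, the equivalent <add> entry goes inside <system.web><httpModules> instead.)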

Nearly there! Okay, so the second-to-last thing you need to do is look at the page coming in and get the information about which users/roles can access that page. So here’s some sample code for it.

void context_AuthorizeRequest(object sender, EventArgs e)
{
     var context = HttpContext.Current;

     // Use custom logic to determine access criteria
     bool accessAllowed = IsUserAuthorizedToSeePage(
          context.User.Identity.IsAuthenticated,
          context.User.Identity.Name,
          context.Request.Path);

     if (accessAllowed)
     {
          return;
     }
     else
     {
          // Set status code to 'Unauthorized' and bypass all other components
          context.Response.StatusCode = 401;
          context.ApplicationInstance.CompleteRequest();
     }
}


And lastly, some sample logic to check if a user has access:


private bool IsUserAuthorizedToSeePage(bool isAuthenticated, string userName, string url)
{
     using (Data.DbEntities db = new Data.DbEntities())
     {
          var dbUrl = db.PageAccess.Where(row => row.PageUrl.Equals(url, StringComparison.OrdinalIgnoreCase)).FirstOrDefault();

          if (dbUrl == null)
          {
               logger.Error("Cannot find access rights for {0}", url);
               return false;
          }
          else if (dbUrl.Public)
          {
               return true;
          }
          else if (!isAuthenticated)
          {
               logger.Warn("Unauthenticated request for URL {0}", url);
               return false;
          }
          else if (dbUrl.Secure)
          {
               return true;
          }
          else if (dbUrl.Admin)
          {
               var userInfo = db.Users.Where(user => user.UserName == userName).FirstOrDefault();

               if (userInfo == null)
               {
                    logger.Error("Cannot find access rights for {0} to {1}", userName, url);
                    return false;
               }
               else if (userInfo.Role == "Admin")
               {
                    return true;
               }
               else
               {
                    logger.Warn("User {0} attempted to access {1}, but was disallowed due to access rights", userName, url);
                    return false;
               }
          }
          else
          {
               logger.Error("Unable to determine if user should access page. User: {0} -- URL: {1}", userName, url);
               return false;
          }
     }
}

And that’s all you need to do, really. I pull the role out of the database, but if you have decided to store the user’s role in Session, you can still pull it through from the HttpContext.Current.Session object.
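
If you do go down the Session route, the lookup might look something like this (a sketch; the “UserRole” key and the role value are assumptions, so substitute whatever your login code actually stores):

// Hypothetical session key - substitute the key your login code uses
var role = HttpContext.Current.Session["UserRole"] as string;
bool isAdmin = string.Equals(role, "Admin", StringComparison.OrdinalIgnoreCase);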


(BTW, the “logger” would be some implementation of a logger; I use NLog.)