Monday, 23 December 2013

Configuring IoC Services per area in MVC

In the last blog post, I described a solution that was broken down into different areas per location. The MVC project has this structure:
  • Areas
    • UK
    • US
    • HK
  • Controllers
  • Views
Each area had shared UI implementations for the most part, but required specialisation on the business logic side.
Problem
Specialising the services isn’t too much of a problem. You define an interface for the basic functions, and then specialise each implementation. What made our work difficult is that there were specific repositories for each area! This was because each area interacted with a database of differing schema AND differing platform.
Design
The design of the services is fairly straightforward. You create the interface and then specialise each implementation like this:
public interface IOrderService
{
   OrderSummaryViewModel GetOrderSummary(string orderNumber);

   OrderViewModel GetOrder(string orderNumber);
}

// US specific implementation
public class USOrderService : IOrderService
{
   protected readonly IOrderRepository orderRep;

   public USOrderService(IOrderRepository orderRep)
   {
      this.orderRep = orderRep;
   }

   public OrderSummaryViewModel GetOrderSummary(string orderNumber)
   {
      var order = orderRep.FirstOrDefault(o=>o.OrderReference == orderNumber);

      if(order==null)
         return OrderSummaryViewModel.NoOrderFound;

      var model = new OrderSummaryViewModel
      {
         OrderNumber = order.OrderReference,
         CustomerName = order.FirstName + " " + order.LastName,
         OrderDate = order.CreatedDate,
         OrderTotal = order.Lines.Sum(x=>x.Quantity * x.UnitPrice),
         TaxTotal = order.Lines.Sum(x=>x.Quantity * x.UnitTaxPrice)
      };

      return model;
   }
}
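For reference, the IOrderRepository consumed above could be as small as this sketch. This is an assumption on my part, since the post doesn't show the original interface:

```csharp
using System;
using System.Linq.Expressions;

// Hypothetical contract (the original interface isn't shown here).
// Each platform-specific repository - Sql, Oracle, Sybase - implements
// this over its own schema and returns the common Order entity.
public interface IOrderRepository
{
   Order FirstOrDefault(Expression<Func<Order, bool>> predicate);
}
```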

All very easy code. So let's say we have one service per area, resulting in:
  • HKOrderService - using the SqlRepository repository
  • USOrderService - using the OracleRepository repository
  • UKOrderService - using the SybaseRepository repository

Therefore, to summarise, we have these chains of dependencies:

Area   Controller (in the Area namespace)   Service          Repository
UK     OrderSummaryController               UKOrderService   SybaseRepository
US     OrderSummaryController               USOrderService   OracleRepository
HK     OrderSummaryController               HKOrderService   SqlRepository

Solution

We were using Autofac as our DI tool. Since MVC has DI support out of the box, this made it easy to get started. But breaking the dependencies down by area was a little more tricky.

Autofac works by inheriting from its Module class, and then connecting the dependencies together within the module. If you’ve used StructureMap before, this is the equivalent of its Registry class.

Here is an example of how it is implemented:

public class ServiceInitializationModule : Module
{
   private static readonly string HKArea = "HK";
   private static readonly string USArea = "US";
   private static readonly string UKArea = "UK";

   private static readonly string SqlConfigKey = "SQL";
   private static readonly string OracleConfigKey = "ORA";
   private static readonly string SybaseConfigKey = "SYB";

   protected override void Load(ContainerBuilder builder)
   {
      InitializeRepositories(builder);
      InitializeFormatters(builder);
      InitializeServices(builder);
   }

   // This registers a concrete implementation with an interface and assigns it a name for later retrieval.
   private void InitializeRepositories(ContainerBuilder builder)
   {
     builder.Register( r => new SqlRepository(ConfigurationManager.ConnectionStrings[SqlConfigKey].ConnectionString))
            .Named<IOrderRepository>(SqlConfigKey)
            .InstancePerHttpRequest();

     builder.Register( r => new OracleRepository(ConfigurationManager.ConnectionStrings[OracleConfigKey].ConnectionString))
            .Named<IOrderRepository>(OracleConfigKey)
            .InstancePerHttpRequest();

     builder.Register( r => new SybaseRepository(ConfigurationManager.ConnectionStrings[SybaseConfigKey].ConnectionString))
            .Named<IOrderRepository>(SybaseConfigKey)
            .InstancePerHttpRequest();
   }

   private void InitializeFormatters(ContainerBuilder builder)
   {
      builder.Register( r => new UKCurrencyFormatter() )
             .Named<ICurrencyFormatter>(UKArea)
             .SingleInstance();

      builder.Register( r => new USCurrencyFormatter() )
             .Named<ICurrencyFormatter>(USArea)
             .SingleInstance();

      builder.Register( r => new HKCurrencyFormatter() )
             .Named<ICurrencyFormatter>(HKArea)
             .SingleInstance();
   }

   private void InitializeServices(ContainerBuilder builder)
   {
      builder.Register( r => new UKOrderService(
                  r.ResolveNamed<IOrderRepository>(SybaseConfigKey)))
             .Named<IOrderService>(UKArea)
             .InstancePerHttpRequest();

      builder.Register( r => new USOrderService(
                  r.ResolveNamed<IOrderRepository>(OracleConfigKey)))
             .Named<IOrderService>(USArea)
             .InstancePerHttpRequest();

      builder.Register( r => new HKOrderService(
                  r.ResolveNamed<IOrderRepository>(SqlConfigKey)))
             .Named<IOrderService>(HKArea)
             .InstancePerHttpRequest();
    }
}

This is only half the story. The DI is set up, but the controllers were taking interfaces in as parameters. Here they are again:

// Base Controller
public abstract class OrderSummaryBaseController : Controller
{
   protected readonly IOrderService orderService;
   protected readonly ICurrencyFormatter formatter;

   protected OrderSummaryBaseController (IOrderService orderService, ICurrencyFormatter formatter) 
   {
      this.orderService = orderService;
      this.formatter = formatter;
   }
}

// UK/Controller
public class OrderSummaryController : OrderSummaryBaseController
{
   public OrderSummaryController () 
     : base(new UKOrderService(), new UKCurrencyFormatter()) {}

   public OrderSummaryController (IOrderService service, ICurrencyFormatter formatter) 
     : base(service, formatter) {}
}

So what is the problem? The problem is that there are multiple controllers called OrderSummaryController - one per area. They all have identical constructor signatures, which conflict with each other when the container tries to work out how to resolve them against the base class.
Quick-recap
I'm throwing a lot of code at you at the moment, but it is important to understand how this solution fits into the bigger scheme of things. We want a controller and its base to share functionality. We also want the constructor signatures to match across areas, because a) unit testing becomes much easier, and b) when a new area is created, only the constructors need copying into the new file.
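To make that testing benefit concrete, here is a minimal sketch of a unit test against the shared constructor signature. `StubOrderService` and the MSTest `[TestMethod]` attribute are my own illustrative assumptions, not part of the original solution:

```csharp
// Hypothetical stub - any IOrderService implementation will do, because
// every area's controller shares the same constructor signature.
public class StubOrderService : IOrderService
{
   public OrderSummaryViewModel GetOrderSummary(string orderNumber)
   {
      return new OrderSummaryViewModel { OrderNumber = orderNumber };
   }

   public OrderViewModel GetOrder(string orderNumber)
   {
      return new OrderViewModel();
   }
}

[TestMethod]
public void Controller_CanBeConstructedWithStubbedDependencies()
{
   // The same test compiles unchanged against the UK, US or HK controller.
   var controller = new OrderSummaryController(new StubOrderService(), new UKCurrencyFormatter());
   Assert.IsNotNull(controller);
}
```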

Ahh - you're back! Ok, so what are we trying to do? We want Autofac to differentiate between a class and its base. You'll find Autofac throws all sorts of resolution errors in this scenario. Luckily, if you don't mind taking the plunge and allowing a little Autofac code into your concrete controllers, all is not lost!

// UK/Controller
public class OrderSummaryController : OrderSummaryBaseController
{
   public OrderSummaryController () 
     : base(new UKOrderService(), new UKCurrencyFormatter()) {}

   public OrderSummaryController (IOrderService service, ICurrencyFormatter formatter) 
     : base(service, formatter) {}

   public OrderSummaryController (IComponentContext autoFacContext) 
     : base( 
          autoFacContext.ResolveNamed<IOrderService>(Areas.UK),      // <- Resolves the UKOrderService
          autoFacContext.ResolveNamed<ICurrencyFormatter>(Areas.UK)) // <- Resolves the UKCurrencyFormatter
   {
   }
}

This constructor overload gives Autofac an easy way in. The beauty of it is that the application is still unit testable via the IOrderService and ICurrencyFormatter, but the IComponentContext gives you an easier (i.e. time-saving) way in. Now, time for the hooking-up within MVC. We need to register that Module, so that MVC resolves items correctly. The easiest (and most common) way is to inherit from the DefaultControllerFactory - the built-in MVC controller factory - register the ServiceInitializationModule, and then point MVC at the Autofac resolver instead of its default one.

public class AutofacControllerFactory : DefaultControllerFactory
{
   private readonly ContainerBuilder builder;

   public AutofacControllerFactory()
   { 
      this.builder = new ContainerBuilder();
      this.AddBindings();
   }

   protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
   {
      if (controllerType == null)
      {
         return null;
      }

      var controller = DependencyResolver.Current.GetService(controllerType) as IController;

      return controller;
   }

   public virtual void AddBindings()
   {
      this.builder.RegisterModule(new ServiceInitializationModule());
      DependencyResolver.SetResolver(new AutofacDependencyResolver(this.builder.Build())); //<-- Tell MVC to use the Autofac resolver, instead of the default.
   } 
}

And now tell MVC to use this AutofacControllerFactory when creating controllers...

// Global.asax.cs
protected void Application_Start()
{
   AreaRegistration.RegisterAllAreas();

   // Use the MVC 4 file structure, as it's cleaner.
   FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
   RouteConfig.RegisterRoutes(RouteTable.Routes);
   BinderConfig.RegisterModelBinders(ModelBinders.Binders);

   // Resolve using my custom controller factory
   ControllerBuilder.Current.SetControllerFactory(new AutofacControllerFactory());
}

And we are done!
Summary

In this blog, I've described a scenario where an MVC application is separated into areas. Those areas all use different services, due to the business logic being different for them. In addition, the repositories themselves are also different.

Therefore, you should have come away with the following points:

  1. A base controller can have multiple implementations in different areas
  2. Areas can have different services being injected into the controllers
  3. The services can even have repositories of differing concrete types injected as well.
This analysis took several weeks to get working. I wasn't able to find a documented way of doing this with any other framework, but if you do find one, let me know!



Shared user interfaces, but still allowing for customisation per area

This is part 1 of a series of blog posts, where I aim to give an answer to a question I posted on StackOverflow a while ago. Although the answer was useful, the solution was already implemented, and I thought it would be best to describe the problem in more detail.

Problem

Our team was tasked with creating a consistent-looking website that would operate across several different locations. The locations were all owned by the parent company that was creating the site. At the project outset, an estimated 70-90% of the UI would look the same. There were caveats to be aware of when developing the site.
  1. Each location used a different order number format
  2. Each location had a custom currency format for the screen. For example:
    • United Kingdom (2 decimal places) = 1.00 GBP
    • Hong Kong (4 decimal places) = 1.0000 HKD
    • United States (3 decimal places) = 1.000 USD
  3. Each location had a different database schema – as these companies were buyouts.
  4. Each location may contain a different database platform.
  5. Each location may (or may not) require customised business rules. A base set of rules would be applicable and customised as such.

Design

The application was divided into 3 parallel sections / areas. The MVC structure of the application looked like this:
  • Areas
    • UK
      • Controllers
      • Views
    • US
    • HK
  • Controllers
  • Views
We also had some idea that most of the functionality would be the same, except for specific UI tweaks. Luckily, the idea of shared functionality became apparent very early on in the project, so we had a starting point to work from.

Solution

We decided to have a base controller to contain a usual-case scenario for a particular process in the shared ~/Controllers directory. Each area would then inherit from this base controller, so that functionality was always available for all areas by default. The only additional work required was when the area needed to specialise the behaviour.
Base Controller Implementation
Here is an example of this approach:
// ~/Controllers
public abstract class OrderSummaryBaseController : Controller
{
   protected readonly IOrderService orderService;
   protected readonly ICurrencyFormatter currencyFormatter;

   protected OrderSummaryBaseController(IOrderService orderService, ICurrencyFormatter currencyFormatter)
   {
      this.orderService = orderService;
      this.currencyFormatter = currencyFormatter;
   }

   public virtual ViewResult Index(string orderNumber)
   {
      var viewModel = this.orderService.GetOrderSummary(orderNumber);
      viewModel.Formatter = currencyFormatter;

      return this.View(viewModel);
   }
}

// ~/Areas/US/Controllers
public class OrderSummaryController : OrderSummaryBaseController
{
   // Poor man's IoC - to demonstrate the intent
   public OrderSummaryController()
        : base( new USOrderService(), new USCurrencyFormatter() )   {}
}
This was a good starting point for us, because we always had out-of-the-box functionality available to us. We only implemented additional functionality when required.
Overriding a view
We also had the additional benefit of having some inside knowledge of the ViewModel coming back, because we knew what service we were calling. This came in handy when the View had to be re-implemented. For example:

// ~/Areas/US/Controllers

public override ViewResult Index(string orderNumber)
{
   var viewModel = this.orderService.GetOrderSummary(orderNumber);

   var usViewModel = viewModel as USViewModel;

   if(usViewModel!=null)
   {
      usViewModel.HideTaxCalculations = true;
   }

   return this.View(viewModel);
}
I am abusing the inheritance structure here - it is almost as bad as putting new in the method declaration. To respect the base class's implementation, another approach is:
// ~/Areas/US/Controllers
public override ViewResult Index(string orderNumber)
{
   var viewResult = base.Index(orderNumber);
   var viewModel = viewResult.ViewData.Model;

   var usViewModel = viewModel as USViewModel;

   if(usViewModel!=null)
   {
      usViewModel.HideTaxCalculations = true;
   }

   return viewResult;
}

Quick re-cap - what have we actually achieved here?

What we have achieved is a single shared controller that allows customisation when required. By default, all areas will use the same view. If you wanted to add a significantly different view, you can add it to the area, so that there is a clean separation of concerns. So let's say the UK has a customised branding of the order summary screen - radically different from the default - you add it to that specific area:
  • Areas
    • UK 
      • Controllers 
        • OrderSummaryController
      • Views
        • OrderSummary
          • Index.cshtml
    • US / HK
      • Controllers
        • OrderSummaryController
  • Controllers
    • OrderSummaryBaseController
  • Views
    • OrderSummary
      • Index.cshtml

Summary

In this blog post, I demonstrated the use of a shared controller, whilst allowing customisation between areas when required. This allows a new area to be built *almost* out of the box.

There is a problem with this example: every controller is using poor man's Dependency Injection. That means when a new area is brought in, every controller needs to be re-implemented.

So in the next blog, I will show you how we tackled this problem, so that new areas need only a little configuration to get up and running.

Monday, 2 April 2012

Encrypting your settings in your App.config files

I’m currently brushing up on my WCF after my exposure to WSE 3.0 a few years ago. In anticipation of upcoming client work, and with no book-based material for the TS: WCF Application with the .NET Framework 4 exam (it is only course-based), I opted for a good solid book that covered the subject matter in a thorough way.

I opted for WCF 4 Step by Step by John Sharp, as I had purchased the previous WCF book he published a few years ago. I did like his thorough style because I needed to learn it from scratch.

In Chapter 4, “Protecting an Enterprise WCF Service”, he uses some examples where you enter your domain, username and password directly into the code(!). BUT – he does have a warning on every code sample:
Warning: This code is for illustrative purposes in this exercise only. In a production application, you should prompt the user for their name and password. You should never hard-code these details into an application.
Now, I do nearly all of my development on my work laptop. The thought of someone just searching my computer remotely for files with my well-known domain and username puts me off completely. So instead, I decided to apply encryption to it and looked for a way which did not require me to write another program.

Using existing tools to apply the encryption

In the TS: Web Applications with the .NET Framework 4 material, they discussed how to encrypt the <connectionStrings> section in your web.config. However, I want to leverage this to encrypt my <appSettings> section instead.

First things first: start up a Visual Studio Command Prompt and CD to your location. We want to rename the app.config (prior to the build process) to web.config, because the tool we are about to use only works against web.config files. For this example, we’ll assume the application in development is in the C:\Projects\EncryptConfig directory.
C:\> cd C:\Projects\EncryptConfig
C:\Projects\EncryptConfig> ren app.config web.config
Next is to leverage the encryption utilities in the aspnet_regiis utility. All we provide is the section to encrypt and the file to apply the encryption to:
aspnet_regiis –pef “appSettings” .
The –pef switch indicates that we want to encrypt a specific section, and the trailing . gives the physical path of the directory containing the web.config. (The related –pe switch takes a virtual path within IIS rather than a physical one.)

The next step is to rename the file back to an app.config:
ren web.config app.config
You will notice that the app.config has encrypted this section like this:
<appSettings configProtectionProvider="DataProtectionConfigurationProvider">
   <EncryptedData>
      <CipherData>
         <CipherValue>AQAAANCMnd8BFdERjHoAwE/Cl+sBAAAAaUZOOb4EQkGNPyy5tzAjBgQAAAACAAAAAAADZgAAwAAAABAAAADQc+YHWZxsOuA55uoTOnvLAAAAAASAAACgAAAAEAAAAPYMaWCQj/hrK3T9DwGH2rxQAQAAyGBicUMztQUL+3cm7QJa5Nxf0NIlHv8WcT7rog67OaMFFc09qVZjmoloAaSsSZMLJC+Xof42NQ1H+x90kOASWvyWibqXkczP7bIy5/9whKb9T0eoHgpnqKu+WmiQQCf7pnM5XIY25TJ1uxzSu+pWZfabLkzfFZah6PaT/fLNCR7DLraewvX7LMmQk2+YLhEot+RDrXAtum7qpCweFFLCS8g8L9tTpz/XzKFjaXJqlJAGru8f9+PgEDOBCVxic8cvzjKizyxSQlS55ht0bJUD1NO6LGOQwtek7SKX2DjOCqQoWGf1uVXePtft73eN+JY7wcCjftu6IWQqUYdj2DMCFn6vZhaNYF5TkHtKv4kpZtNer+s50Yc8E2uUPq99ZZ8vZQMiGdQ8xopIWwx5F/WFUxpeQ5/hG4A4IKhY2njSC3m/efH4M28MWET34HTXVx1gFAAAAGC4o4MxMGI73etkgTMojENDadwS
         </CipherValue>
      </CipherData>
   </EncryptedData>
</appSettings>
The thing to note here is that it is using the DataProtectionConfigurationProvider, which accesses the Data Protection API - a user-specific API. That is fine as long as you always log in with your user, on your domain. But if you tried to distribute this application, the section would never be able to be read on another computer.
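Nothing changes on the consuming side, either. As a quick sketch (the "Domain" key is a made-up example), reading the encrypted section is transparent:

```csharp
using System.Configuration;

// The runtime decrypts the protected section automatically,
// so existing code keeps working unchanged.
// "Domain" is a hypothetical key for illustration.
string domain = ConfigurationManager.AppSettings["Domain"];
```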

The RSAProtectedConfigurationProvider allows you to encrypt specific to the user or the machine. It also allows you to export the key so that it can be moved to another machine. This would be useful over a web farm (this is an IIS tool after all), where the <machineKey> can be shared across computers.
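For reference, here is a sketch of what that could look like with aspnet_regiis. The key container and file names are examples, not from my actual setup:

```
REM Encrypt using the RSA provider instead of DPAPI
aspnet_regiis -pef "appSettings" . -prov "RsaProtectedConfigurationProvider"

REM Export the RSA key container (including the private key) so it can
REM be imported on another machine in the farm
aspnet_regiis -px "NetFrameworkConfigurationKey" keys.xml -pri

REM On the target machine, import the same container
aspnet_regiis -pi "NetFrameworkConfigurationKey" keys.xml
```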

In any case, if you are looking to distribute this application and encrypt the contents of a configuration file, be sure you understand what encryption methods are available to you.

SyntaxHighlighter - a syntax highlighter for blog posts

I usually use Windows Live Writer to write my blog posts. I also have some add-ins to support syntax highlighting, which embed the CSS into the page.

Today, I viewed the source on my own page, and here is an example of what the "Insert Code Snippet" generated:

// Original formatting with no CSS
Console.WriteLine("Hello World!");
Console.WriteLine("Hello World!");
Console.WriteLine("Hello World!");
Gets rendered to ....


<div id="codeSnippetWrapper">
<br>
<div style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt;
background-color: #f4f4f4; border-left-style: none; padding-left: 0px; width: 100%;
padding-right: 0px; font-family: 'Courier New', courier, monospace; direction: ltr;
border-top-style: none; color: black; border-right-style: none; font-size: 8pt;
overflow: visible; padding-top: 0px" id="codeSnippet">
<br />
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt;
background-color: white; margin: 0em; border-left-style: none; padding-left: 0px;
width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace;
direction: ltr; border-top-style: none; color: black; border-right-style: none;
font-size: 8pt; overflow: visible; padding-top: 0px"><br />
<span style="color: #606060" id="lnum1">1:</span><br />
Console.WriteLine(<span style="color: #006080">"Hello World!"</span>);</pre>
<br />
<!--CRLF-->
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt;
background-color: #f4f4f4; margin: 0em; border-left-style: none; padding-left: 0px;
width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace;
direction: ltr; border-top-style: none; color: black; border-right-style: none;
font-size: 8pt; overflow: visible; padding-top: 0px"><span style="color: #606060"
id="lnum2">2:</span><br />
Console.WriteLine(<span style="color: #006080">"Hello World!"</span>);</pre>
<!--CRLF-->
<pre style="border-bottom-style: none; text-align: left; padding-bottom: 0px; line-height: 12pt;
background-color: white; margin: 0em; border-left-style: none; padding-left: 0px;
width: 100%; padding-right: 0px; font-family: 'Courier New', courier, monospace;
direction: ltr; border-top-style: none; color: black; border-right-style: none;
font-size: 8pt; overflow: visible; padding-top: 0px"><span style="color: #606060"
id="lnum3">3:</span> <br/>
Console.WriteLine(<span style="color: #006080">"Hello World!"</span>);</pre>
<!--CRLF-->
</div>
</div>
Okay, that is pretty horrific. It does the job, no doubt, but when it comes to editing this in the application, it is a complete nightmare. It selects single rows and changes their width accidentally. When you have a bit of luck actually selecting the outer div, the Code Snippet editor even warns you that it is going to attempt to read it. When viewing this on a mobile device, it looks even worse - much worse, trust me!

At the same time I was catching up with my Google Reader and was looking through posts I'd missed by Scott Hanselman. He was looking for a syntax highlighter for Windows Live Writer a few years ago. He also went to the trouble of writing a Windows Live Writer plug-in for it. But luckily, things have been made much easier.

How to add SyntaxHighlighter to your site

  1. Download the JavaScript and CSS libraries from the SyntaxHighlighter author's site. (Alternatively, reference them remotely as described there.)
  2. Add a few lines of script, referencing the brushes for the languages you wish to use, along with the core libraries:
    <script src="shCore.js" type="text/javascript"></script>
    <script src="shAutoloader.js" type="text/javascript"></script>
    <script type="text/javascript">
    SyntaxHighlighter.autoloader(
      'js jscript javascript /js/shBrushJScript.js',
      'csharp c-sharp /js/shBrushCSharp.js',
      'xml /js/shBrushXml.js'
    );
    SyntaxHighlighter.all();
    </script>
    The autoloader API is documented at http://alexgorbatchev.com/SyntaxHighlighter/manual/api/autoloader.html
  3. In the HTML of your blog post or site, all you do is add the nice and friendly <pre> tag that we all know and love, with a 'class' attribute naming your programming language - simples!
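For example, marking up a C# snippet looks like this (the brush name must match one registered with the autoloader):

```html
<pre class="brush: csharp">
Console.WriteLine("Hello World!");
</pre>
```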

So here is a code sample (from a previous post) and the newer way I will be blogging from now on:

Before


   1: # PowerShell script to modify the 'timeout' value in the specified web.config    
   2: # when no Sessions are in use.    
   3:  
   4: # Constants used throughout application    
   5: $webConfig = "d:\web.config"    
   6: $newTimeout = "20"    
   7: $sessionCount = 0    
   8:  
   9: ## BEGIN   
  10: write-host Getting performance counters ...   
  11:  
  12: $perfCounterString = "\asp.net applications(__total__)\sessions total"    
  13: $perfCounter = get-counter -counter $perfCounterString    
  14: $rawValue = $perfCounter.CounterSamples[0].CookedValue    
  15:  
  16: write-host Session Count is $rawValue   
  17:  
  18: if( $rawValue -gt $sessionCount)   
  19: {   
  20:    write-host Session Count = $rawValue - exiting   
  21:    exit   
  22: }   
  23:  
  24: write-host Stopping IIS   
  25: stop-service "IISAdmin"   
  26:  
  27: # Open file and change value   
  28: $doc = new-object System.Xml.XmlDocument   
  29: $doc.Load($webConfig)   
  30: $doc.SelectSingleNode("//sessionState").timeout = $newTimeout    
  31: $doc.Save($webConfig)   
  32:  
  33: write-host Starting IIS   
  34: start-service "IISAdmin"   
  35:  
  36: write-host Done!   
  37: ## END

After


# PowerShell script to modify the 'timeout' value in the specified web.config    
# when no Sessions are in use.

# Constants used throughout application
$webConfig = "d:\web.config"
$newTimeout = "20"
$sessionCount = 0

## BEGIN
write-host Getting performance counters ...
$perfCounterString = "\asp.net applications(__total__)\sessions total"
$perfCounter = get-counter -counter $perfCounterString
$rawValue = $perfCounter.CounterSamples[0].CookedValue

write-host Session Count is $rawValue

if( $rawValue -gt $sessionCount)
{
write-host Session Count = $rawValue - exiting
exit
}

write-host Stopping IIS
stop-service "IISAdmin"

# Open file and change value
$doc = new-object System.Xml.XmlDocument
$doc.Load($webConfig)
$doc.SelectSingleNode("//sessionState").timeout = $newTimeout
$doc.Save($webConfig)

write-host Starting IIS
start-service "IISAdmin"
write-host Done!
## END

Have a look at the source of this page to see how readable each section is.

Tuesday, 27 March 2012

Using PowerShell to modify configuration files in IIS when the Session Count is 0

(Apologies for re-posting this – I thought I was deleting a draft post and ended up deleting the actual post!)
I was looking at Stack Overflow today and I saw a question where a user wanted to change the Web.config when the number of Sessions reaches 0. He wasn’t aware of any way to find out the Session Count, or any automated way to do this. I thought I’d have a go using a C# .NET Console Application … but then I discovered PowerShell!

To summarise PowerShell, think of it like a command prompt on steroids! Not only can you do the usual process starting, killing and file system navigation, but it provides a much more streamlined command system. You also have hundreds, if not thousands of commands at your disposal.
So let me talk you through how I would have done this in .NET and then the equivalent PowerShell implementation I used.

Goals of the application

  1. Get the Session Count of IIS in total from a performance counter
  2. If Count > 0, quit
  3. Stop IIS
  4. Change web.config
  5. Start IIS

Retrieving performance counters in .NET for the Session Count

I’ve recently passed my TS: Data Applications in the .NET 4 Framework exam, which covered how to create and retrieve performance counter data into your applications. So there’s step one – wait until the Active Session Count is 0. An example of retrieving this is:

var counter = new PerformanceCounter(
  "ASP.NET Applications",    
  "Sessions Active",    
  "__Total__",    
  true);   

var sessionCount = counter.RawValue;   

if( sessionCount > 0 )  
{  
   Console.WriteLine("Session Count is {0} - exiting", sessionCount );  
   return;  
}

These values match up to those in the Performance Monitor (perfmon.exe).

In PowerShell, performance counters are easily retrieved as well. There is a specific command for retrieving them. PowerShell declares variables inline using a $ prefix, so here is how to retrieve the same counter in PowerShell. I saved these to a file called “ChangeIIS.ps1”:



   1: # Set up the string
   2: $perfCounterString = "\asp.net applications(__total__)\sessions active"
   3:  
   4: # Retrieve the counter
   5: $perfCounter = get-counter -counter $perfCounterString 
   6:  
   7: # Retrieve the raw value for the first (and only) counter
   8: $rawValue = $perfCounter.CounterSamples[0].CookedValue 
   9:  
  10: if( $rawValue -gt 0 )
  11: {
  12:     write-host Session Count is $rawValue - exiting
  13:     exit
  14: }
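As an aside, if the counter string is rejected, you can list the exact counter paths available in that set. A quick sketch using the standard cmdlet (the set name can vary with locale):

```powershell
# List every counter path in the "ASP.NET Applications" set
get-counter -ListSet "ASP.NET Applications" | select-object -expandproperty Counter
```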

Stopping & Starting IIS

This process is straightforward as well in both languages. First C#:



   1: var iisProcess = ServiceController.GetServices().First(s => s.ServiceName == "IISADMIN");
   2: iisProcess.Stop();
   3: 
   4: // Perform tasks
   5:  
   6: iisProcess.Start();

Now PowerShell:



   1: stop-service "IISADMIN"
   2:  
   3: # Perform tasks
   4:  
   5: start-service "IISADMIN"

Modifying the web.config

Unfortunately, I couldn’t find a way of using the in-built System.Configuration classes to open configuration files that are outside of your application. Instead, I just resorted to XmlDocument:



   1: var location = "d:\\web.config";
   2: var newTimeout = "20";
   3:  
   4: var xmlDoc = new XmlDocument();
   5: xmlDoc.Load(location);
   6: xmlDoc.SelectSingleNode("//sessionState").Attributes["timeout"].Value = newTimeout;
   7: xmlDoc.Save(location);

I know what you're thinking: how can we access the file in PowerShell? One of the fantastic things about PowerShell is that you have access to all of the .NET classes (including static methods and classes) at your disposal. So here is the same code in PowerShell:



   1: # Set values 
   2: $location = "d:\web.config"
   3: $newTimeout = "20"
   4:  
   5:  # Open file and change value
   6: $doc = new-object System.Xml.XmlDocument
   7: $doc.Load($location)
   8: $doc.SelectSingleNode("//sessionState").timeout = $newTimeout 
   9: $doc.Save($location)

Notice how I am using the “timeout” attribute directly. PowerShell has exposed the attribute as a property on the node returned by .SelectSingleNode(). How do I know that? Well, if you execute this in isolation, you’ll get a nice helpful listing of all the properties available to use directly:



PS C:\> $doc.SelectSingleNode("//sessionState")
 
timeout                                                     #text
-------                                                     -----
20                                                          20

Now we have a PowerShell script in its entirety:



   1: # PowerShell script to modify the 'timeout' value in the specified web.config
   2: # when no Sessions are in use.
   3:  
   4: # Constants used throughout application
   5: $webConfig = "d:\web.config"
   6: $newTimeout = "20"
   7: $sessionCount = 0
   8:  
   9: ## BEGIN
  10: write-host Getting performance counters ...
  11:  
  12: $perfCounterString = "\asp.net applications(__total__)\sessions total" 
  13: $perfCounter = get-counter -counter $perfCounterString 
  14: $rawValue = $perfCounter.CounterSamples[0].CookedValue 
  15:  
  16: write-host Session Count is $rawValue
  17:  
  18: if( $rawValue -gt $sessionCount)
  19: {
  20:     write-host Session Count = $rawValue - exiting
  21:     exit
  22: }
  23:  
  24: write-host Stopping IIS
  25: stop-service "IISAdmin"
  26:  
  27: # Open file and change value
  28: $doc = new-object System.Xml.XmlDocument
  29: $doc.Load($webConfig)
  30: $doc.SelectSingleNode("//sessionState").timeout = $newTimeout 
  31: $doc.Save($webConfig)
  32:  
  33: write-host Starting IIS
  34: start-service "IISAdmin"
  35:  
  36: write-host Done!
  37: ## END

Granting permissions to the Script

We’re not done yet. PowerShell has a security feature that prevents users from simply running scripts as soon as they are created; scripts have to come from a trusted source to run straight away.
In order to run this script, we need to tell PowerShell to bypass the security check for this specific process (or user). For security reasons, I will grant access only for the current process, as we only want to run this script once.


  1. Open a Command Prompt – Right Click – "Run As Administrator"
  2. Type powershell.exe
  3. Type Set-ExecutionPolicy -Scope Process Bypass
  4. Type sl <directory of file> (sl acts like cd in the Command Prompt)
  5. Type & '.\ChangeIIS.ps1'
And off we go!

Summary

Where opportunities arise to try out new technologies, it is always worth having a go. PowerShell isn’t difficult to learn; in fact, it’s very powerful and intuitive. It is also a great ‘Immediate Window’ style interface for trying out .NET code. I’m starting to use it quite a lot, even for simple calculations:


2^24 ==> [System.Math]::Pow(2,24)

And more importantly – I hope I win that bounty question!





Recent Edits:



  • 28/3/12 - Changed performance counter to use "Sessions Active" instead of "Sessions Total", as Sessions Total includes Abandoned (forcefully ended), Timed Out and Active.

Monday, 27 February 2012

How do you test a project without affecting the database?

Today I sat with a colleague and went through how we could implement testing on existing code, without affecting the data in the database. For anyone who answers that question with the words “Dependency Injection” – well done! Maybe this post isn’t for you!
But I wanted to give anyone asking themselves this question a short guide on how to get this working, as it is a common problem faced when adapting an existing project to include unit testing.

An example

So first of all, let’s get some basic code together to try this out:

namespace Model
{
     public class User
     {
          public User (int userId) { this.UserId = userId; }

          public int? UserId { get; private set;}
          public string UserName { get; set; }
          public string Password { get; set; }
     }
}

Next, let’s pretend we’ve got some existing DB code, possibly Entity Framework, to get this data out of the database:

namespace DataAccess
{
     public class SqlRepository : DbContext
     {
          public User CreateUser() { /*..*/ }
          public User ReadUser(int id) { /*..*/ }
          public void UpdateUser(User userToUpdate)  { /*..*/ }
          public void DeleteUser(int userId) { /*..*/ }
     }
}

You also have a Business Logic layer, which will interface with the SqlRepository for us. For simplicity, I will only deal with the UpdateUser() method:

namespace Logic
{
     public class UserService
     {
          private SqlRepository repository = null;

          public UserService() 
          {
               this.repository = new SqlRepository();
          } 

          public void UpdateUser(User updatedUser)
          {
               if ( updatedUser == null ) 
               {
                    throw new ArgumentNullException("updatedUser");
               }
               else if ( updatedUser.UserId == null )
               {
                    throw new ArgumentException("UserId must be set.");
               }
               else if ( String.IsNullOrEmpty(updatedUser.UserName) )
               {
                    throw new ArgumentException("Username is not valid.");
               }
           else if ( !PasswordIsComplexEnough( updatedUser.Password ) )
               {
                    throw new PasswordNotSecureException("Password must meet the minimum security requirements.");
               }
               else
               {
                    this.repository.UpdateUser (updatedUser);
               }
          }
     }
}

Now when we get to writing integration tests, we’ll find that we update users directly in the database, leaving the database in a possibly invalid state. A test might take the format of:

namespace Tests.Integration
{
     [TestClass]
     public class UserServiceTests
     {
          [TestMethod]
          public void WhenAServiceIsCreated_AndAnExistingUserIsUpdated_TheUserIsSavedSuccessfully()
          {
              // Arrange
              var userService = new Logic.UserService();
              var existingUser = userService.ReadUser(1);
              
              existingUser.Password = StringFunctions.BuildRandomString(50); // 50 chars of text
             
              // Act
              userService.UpdateUser ( existingUser );
              
              // Assert - assume we have implemented Equals() to compare their content
              var updatedUser = userService.ReadUser(1);
              Assert.AreEqual(existingUser, updatedUser);
          }
     }
}

The problem now is that you’ve wiped the password for that user. If it is hashed in the database, you have no way of restoring it without modifying it manually. I suppose you could re-apply the original user, but then you are having to do a tidy-up exercise after every test. If another test depends on that user account being valid (e.g. a UI test to log that user in), then you’re going to fail more tests and the problem only gets worse.

Dependency Injection

Dependency Injection allows us to “inject” the data source we would like the code to use. Some developers like to have their own test database that can be freely modified whenever. But in the long term, the maintenance of tidying this database, coupled with the speed of database connections across thousands of tests, becomes unmanageable. Ideally, you want these tests to pass as quickly as possible so you can get on with your work.

But as this code stands, we will always point at the database. So we’ll need to do some slight modifications to get this code more flexible, without breaking existing code.

Step 1 – Extract an interface

The easiest step is to use Visual Studio to extract an interface for you. You do this by right clicking on the class name of the DataAccess layer > Refactor > Extract Interface


Once you’ve clicked “Select All” and “OK”, this gives you your current code, implementing a newly created interface, which I will rename IRepository (and make public).

namespace DataAccess
{
   public interface IRepository
   {
      User CreateUser();
      void DeleteUser(int userId);
      User ReadUser(int id);
      void UpdateUser(User userToUpdate);
   }
 
   public class SqlRepository : DbContext, IRepository
   {
      /* As before */
   }
}

So now we have the ability to create a TestRepository, based on IRepository, that can act like a database. So let’s quickly make a TestRepository:

namespace Tests.Helper
{
   public class TestRepository : IRepository
   {
       /** Methods leaving the NotImplementedException code in place **/
   }
}



Step 2 - Adapt the service layer to accept an IRepository


Next, we adapt the UserService class to accept a new parameter, to allow us to “inject” the database into the class. This way, existing code still works and the test code can take advantage of the new constructor.

public class UserService
{
     private IRepository repository;

     public UserService() : 
         this ( new SqlRepository() )
     {
     }

     internal UserService (IRepository injectedRepository)
     {
          this.repository = injectedRepository;
     }

     /* As before */
}

Notice I’m using an internal constructor intentionally, as I don’t want to expose this to just anyone. What I can do is instruct the compiler that internals are visible to another assembly. This is done in the AssemblyInfo.cs file of the Logic project (where UserService lives) like this:

[assembly: InternalsVisibleTo("Tests")]

Note that I have used the assembly name of the test assembly that is being granted access to the internal fields, properties, methods and constructors.


Step 3 - Inject the new repository

Now, we are able to modify our test to pass in the TestRepository class we created, so that when the UserService is created, it will access our implementation.

namespace Tests.Integration
{
     [TestClass]
     public class UserServiceTests
     {
          [TestMethod]
          public void WhenAServiceIsCreated_AndAnExistingUserIsUpdated_TheUserIsSavedSuccessfully()
          {
              // Arrange
              var userService = new Logic.UserService(new TestRepository());
              var existingUser = userService.ReadUser(1);
              
              existingUser.Password = StringFunctions.BuildRandomString(50); // 50 chars of text
             
              // Act
              userService.UpdateUser ( existingUser );
              
              // Assert - assume we have implemented Equals() to compare their content
              var updatedUser = userService.ReadUser(1);
              Assert.AreEqual(existingUser, updatedUser);
          }
     }
}

Now – okay – the test will throw an exception! Because we haven’t implemented the TestRepository class and left it at its default implementation, the methods will throw NotImplementedException. But by writing some simple code, which does just as much as we need to get going, we no longer rely on running our tests through the DB:

namespace Tests.Helper
{
     public class TestRepository : IRepository
     {
         private List<User> users = null;
  
         public TestRepository() { users = new List<User>(); }

         public void UpdateUser(User user)
         {
              var storedUser = users.Where(u => u.UserId == user.UserId).SingleOrDefault();

              // Do some exception handling here, just to throw an error if it doesn't exist.
              // This way it 'sort-of' acts like a database.

              storedUser.UserName = user.UserName;
              storedUser.Password = user.Password;
         }
     }
}

And now we have a testable repository.
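One caveat: the test also calls ReadUser(1), so the TestRepository needs just enough of a ReadUser implementation to support it. Here is a minimal sketch of the completed class (the seeded user details are hypothetical – any valid user will do):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

namespace Tests.Helper
{
     public class TestRepository : IRepository
     {
         private List<User> users;

         // Seed one user so ReadUser(1) has something to return
         public TestRepository()
         {
              users = new List<User>
              {
                  new User(1) { UserName = "test", Password = "P@ssw0rd!" }
              };
         }

         public User ReadUser(int id)
         {
              // Single() throws if the user doesn't exist, so it
              // 'sort-of' acts like a database lookup failing
              return users.Single(u => u.UserId == id);
         }

         public void UpdateUser(User user)
         {
              var storedUser = users.Single(u => u.UserId == user.UserId);
              storedUser.UserName = user.UserName;
              storedUser.Password = user.Password;
         }

         // Not needed by this test - leave the default behaviour in place
         public User CreateUser() { throw new NotImplementedException(); }
         public void DeleteUser(int userId) { throw new NotImplementedException(); }
     }
}
```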

There are also mocking frameworks that can alleviate you of this burden entirely. But I’ve yet to explore them enough to include them in this blog.
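As a taste of what such a framework offers, here is a hedged sketch using Moq (I’m assuming its Setup/Returns/Verify API here) – the mock stands in for IRepository, so no hand-written TestRepository class is needed at all:

```csharp
using Moq;

// Build an IRepository stand-in at runtime
var mockRepository = new Mock<IRepository>();

// Return a known (hypothetical) user whenever ReadUser(1) is called
mockRepository
    .Setup(r => r.ReadUser(1))
    .Returns(new User(1) { UserName = "test", Password = "P@ssw0rd!" });

// Inject the mock via the internal constructor from Step 2
var userService = new Logic.UserService(mockRepository.Object);

var existingUser = userService.ReadUser(1);
userService.UpdateUser(existingUser);

// Verify the service passed the user through to the repository exactly once
mockRepository.Verify(r => r.UpdateUser(It.IsAny<User>()), Times.Once());
```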

Summary


In this blog, we have looked at a common example that many developers face. We have adapted the existing functionality, without breaking existing code, but extending it for use with a testing framework.
This adaptation allows you to concentrate on testing all of those permutations within the UserService, which is what we actually want to test.

I will put together a full tutorial based on this post, so that readers can try out the refactoring for themselves.

Sunday, 19 February 2012

How to implement best practices with the .NET Framework

One area that I know I’ve needed to improve is implementing some best practices within a team of developers.
The areas I am aiming to improve are:
  1. Developers being forced to use a code analysis tool, instead of ignoring the warnings
  2. Run unit tests or integration tests on a regular basis without developer intervention
  3. A way to build, package and deploy them without user intervention – including databases
  4. Reports generated by these automated processes
  5. (Optional) Developers following some kind of coding convention
The only tool I had used that pulls any of this together (and then only with manual intervention) was Team Foundation Server. Since this was the choice of most clients I had worked with, I was interested in improving my own skills so that when new or existing projects came along, I could implement some standards. Code quality and development therefore became my main priorities, with manual intervention kept to little or none.
I have read Rapid Development by Steve McConnell. That book, to me, was a bible of information on project management, team building and development practices. However, because RD is so broadly applicable to all development practices, it names few specific technologies. This time I was specifically looking for something with concrete products, processes and examples to implement them with.
I’ve started reading a book called Pro .NET Best Practices and this was exactly what I was after.
‘Best Practices’ is not a term the author chooses to use. Instead, he chose ‘Ruthlessly Helpful’ – the title of his blog site.
‘Best’ implies there is nothing better. However, a ‘better’ practice may suffice, as every practice differs from the next: some work better with small teams, some with bigger teams. The author chose ‘Ruthless’ – something that requires thought and consideration, and which will be applicable to you and your team size.
And adding ‘Helpful’ implies that the practice will only serve as a benefit to you and your team. Whether you want to reduce bugs, improve product delivery or automate your deployment, any helpful practice is one worthy of consideration.
And so the term ‘Ruthlessly Helpful’ was born.
Pro .NET Best Practices covers a whole plethora of information for applying the development lifecycle properly – everything from the tools used for development of the product, right through to the deliverable. This isn’t another book full of code examples, although many are provided for clarity. The book’s goal is to educate and motivate readers on better practices of software development.
I see it has all 5 star ratings on the Amazon.com site. And it is well deserved!