Dec 24, 2012

Unit Testing: Definition and Details

Description:
Unit testing is the automated testing of software components. The technique is used to build high-quality, reliable software by writing a suite of accompanying automated tests that validate assumptions and business requirements implemented by your software.

Over the last few years a movement has appeared in software development called ‘eXtreme Programming’ or XP. XP has many facets, but one of the most interesting is the idea of ‘agile’ methodologies. The primary idea behind agile programming is that software is delivered early and continuously. To achieve this, developers must ensure that their software is well tested. This has led to the idea of ‘Test Driven Development’ or TDD. In TDD, developers continually test their code to ensure that the code works, and also to ensure that the changes they have made do not break existing code. To do this effectively requires several things:
  • The tests have to be repeatable, which means that they can be re-run whenever necessary and so allow for regression testing.
  • The tests have to be runnable by somebody other than the test author. This allows for sharing of code and tests, and it also means that if the developer leaves the project, the tests are still there to be used and are meaningful.
  • The test results have to be concise, but errors must be very visible. There is little point in running tests if the errors are hidden in the output of successful tests.
The above requirements have led to several testing frameworks, collectively known as the xUnit frameworks, where the x is replaced by a letter or word that identifies the language or system being used, for example JUnit for Java testing and NUnit for .NET testing. One other thing to keep in mind is that proponents of TDD do exactly what TDD says, i.e. use the tests to drive the development. This means writing the tests first, then writing the code. Initially this is hard to do, because all developers want to get on and write code, but psychologically it makes sense: when you write code first there is a tendency to test only what you have written, whereas if you write the test first you write tests for the ideas that you are trying to convey.

Unit Testing tools:
NUnit:
NUnit is a unit-testing framework for all .NET languages. Initially ported from JUnit, the current production release, version 2.6, is the seventh major release of this xUnit-based unit testing tool for Microsoft .NET. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, for example custom attributes and other reflection-related capabilities.

NUnit is the unit testing framework with the largest market share, and it was one of the first unit testing frameworks for the .NET platform. It uses attributes to identify tests: the TestFixture attribute identifies a class that exposes test methods, and the Test attribute identifies a method that exercises a test subject. Let's get down to business and look at some code.
First we need something to test:

public class Subject { 
  public Int32 Add(Int32 x, Int32 y)
  { 
    return x + y; 
  } 
}
That Subject class has one method: Add. We will test the Subject class by exercising the Add method with different arguments.

[TestFixture]
public class tSubject
{
  [Test]
  public void tAdd()
  {
    Int32 Sum;
    Subject Subject = new Subject();
    Sum = Subject.Add(1,2);
    Assert.AreEqual(3, Sum);
  }
}
The class tSubject is decorated with the attribute TestFixture, and the method tAdd is decorated with the attribute Test. You can compile this and run it in the NUnit GUI application. It will produce a successful test run.
Those are the basics of what NUnit offers. There are attributes to help with setting up and tearing down your test environment: SetUp, TearDown, TestFixtureSetUp, and TestFixtureTearDown. TestFixtureSetUp is run once, when the fixture is first created; similarly, TestFixtureTearDown is run once, after all tests have completed. SetUp and TearDown are run before and after each test.
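As an illustration, a minimal fixture wiring up all four lifecycle methods might look like this. This is a sketch using the NUnit 2.x attribute names; the class and method names are invented for the example:

```csharp
using NUnit.Framework;

[TestFixture]
public class LifecycleFixture
{
    // Run once, before the first test in this fixture executes
    [TestFixtureSetUp]
    public void FixtureInit() { /* e.g. acquire an expensive shared resource */ }

    // Run before every individual test
    [SetUp]
    public void PerTestInit() { }

    [Test]
    public void SomeTest()
    {
        Assert.IsTrue(true);
    }

    // Run after every individual test
    [TearDown]
    public void PerTestCleanup() { }

    // Run once, after the last test in this fixture has completed
    [TestFixtureTearDown]
    public void FixtureCleanup() { /* e.g. release the shared resource */ }
}
```

For two tests Foo and Bar, the runner would call FixtureInit once, then PerTestInit/Foo/PerTestCleanup, then PerTestInit/Bar/PerTestCleanup, and finally FixtureCleanup.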
NUnit tests can be run in several different ways: from the GUI application, from the console application, and from a NAnt task. NUnit has also been integrated into CruiseControl.NET, and it can be integrated into the VS.NET IDE as well.
Figure 1. NUnit GUI Application

--------------------------------------------------------------------------------------------------

NUnit is a framework for unit testing .NET applications. It is an open source tool, and at the time of writing version 2.2 is available from www.nunit.org. To see NUnit in action we will write a class and a set of tests for that class. To run the tests for your own code you will need to download and install this version of NUnit. NUnit is installed in the GAC, making it available to all .NET processes.

Setting up Visual Studio

The easiest way to write test code in Visual Studio is to use two separate projects: one that contains the code under test (i.e. your normal code) and one that contains the testing code. Typically each of these projects is built into its own assembly, and as long as both assemblies are available to the NUnit framework this is not a problem, as NUnit will be able to load them. However, Visual Studio puts a roadblock in the way. If the code under test is built into a .EXE assembly rather than a .DLL, Visual Studio will not let you reference that assembly from another project, which you must do if you want your test assembly to compile. To get around this you can build a copy of the code under test into the same assembly as the testing code; that assembly can then be loaded by NUnit and all the tests run. To do this in Visual Studio, create a solution with two projects: the first is your code and the second is your testing code. In the test project, add references to the files from the ‘real’ project: right-click on the test project, select Add, then ‘Add Existing Item’. Browse to the directory containing the real code, select the files to add, and then use the drop-down on the Add button to select “Link File” (see Figure_1 for an example).
Figure 1

Once you link to the files you can then build your testing assembly.

The class to test

To show how NUnit works we need a class to test. I wanted something that is fairly easy to understand but also allows me to point out the features of NUnit. The class we will write is a utility class that counts the number of characters, words and paragraphs in a file (very useful if you are an author and get paid by the word!). Our class will look something like this:
public class FileCount
{
  private int _words;
  private int _characters;
  private int _paragraphs;

  public int Characters
  {
    get { return _characters; }
  }

  public int Words
  {
    get { return _words; }
  }

  public int Paragraphs
  {
    get { return _paragraphs; }
  }

  public FileCount(string fileName)
  {
    // read file into buffer
    // count chars, words and paras
  }

  private void CountChars(byte[] data)
  {
  }

  private void CountWords(byte[] data)
  {
  }

  private void CountParagraphs(byte[] data)
  {
  }
}
In this class the constructor is passed a filename. It has to open the file and determine the number of characters, words and paragraphs in the file. Following in the footsteps of the agile programmers we will write the tests before we write the code. Let’s start by writing a test and then running it in NUnit, to show how this all hangs together.

Writing and testing code

Our testing class looks like this:
using System;
using System.Text;
using NUnit.Framework;
using kevinj;

namespace FileDataTest
{
  [TestFixture]
  public class TestFileCount
  {
    FileCount fc;

    [SetUp]
    public void SetUp()
    {
      fc = new FileCount("");
    }

    [Test]
    public void TestCountChars()
    {
      fc.CountChars(new byte[0]);
    }
  }
}
A couple of points to note here: The kevinj namespace is the namespace containing the code under test, while the NUnit.Framework namespace references the NUnit code.
This test class currently contains two methods: SetUp() and TestCountChars(). Traditionally (i.e. in JUnit, the Java equivalent of NUnit, which is the grand-daddy of all unit testing tools) these names matter; in NUnit, however, it is the attributes that tell the story.
The [TestFixture] attribute marks this class as containing tests.
The [Test] attribute marks this as a test case, i.e. code that will be run by NUnit to execute one or more tests; you can mark as many methods as you need with this attribute.
The [SetUp] attribute, on the other hand, can only be applied to one method. This method is run before the start of each test and is used to initialise the test environment. There is also a corresponding [TearDown] attribute whose method is run at the end of each test case.
This means that if I had two tests called Foo and Bar, the order of execution would be:
SetUp(), Foo(), TearDown(), SetUp(), Bar(), TearDown()
The first test we write will test the CountChars method of the FileCount class – as this method hasn’t been written yet, the test should fail. The test code looks like this:
[Test]
public void TestCountCharsNoNewLines()
{
  // 1234567890123456789012345678901
  string stringData = "There are 31 chars in this line";
  byte[] byteData = Encoding.ASCII.GetBytes(stringData);
  fc.CountChars(byteData);
  Assert.AreEqual(byteData.Length, fc.Characters);
}
In the test we create the data used for the test, put it into a byte array, and call the method we would like to test. When the method returns, we check whether the test has succeeded or failed by calling a method of the Assert class. Assert provides many static methods to check the results of the tests that you run. There are three kinds of methods on Assert: comparisons, conditionals and utility methods. The comparison methods have two forms:
Assert.AreEqual( type expected, type actual );
…and:
Assert.AreEqual( type expected, type actual, string description );
…where type is a .NET type, for example:
Assert.AreEqual( int expected, int actual );
There are some variations on this; for example, the AreEqual methods for floating-point types also take a parameter that is the tolerance for the comparison. There are also AreSame methods that check for reference equality, and since NUnit 2.2 you can also compare arrays for equality.
The conditional tests are IsTrue, IsFalse, IsNull and IsNotNull; finally, the utility methods are Fail and Ignore.
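A few of these asserts in use; this is a sketch against the NUnit 2.2 Assert API described above (the fixture and test names are invented):

```csharp
using NUnit.Framework;

[TestFixture]
public class AssertExamples
{
    [Test]
    public void ComparisonVariations()
    {
        // The floating-point overload takes a third argument: the tolerance
        Assert.AreEqual(0.333, 1.0 / 3.0, 0.001);

        // AreSame checks reference equality, not value equality
        object a = new object();
        object b = a;
        Assert.AreSame(a, b);
    }

    [Test]
    public void Conditionals()
    {
        Assert.IsTrue("abc".StartsWith("a"));
        Assert.IsFalse(string.Empty.Length > 0);
        Assert.IsNotNull(new object());
        Assert.IsNull(null);
    }
}
```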

Running the tests

If you try to compile this code, it will fail, because the CountChars() method in the FileCount class is private. There is an ongoing debate as to whether ‘private’ methods should be tested (and by private here I mean anything that is not public). My personal view is that you should test anything that needs testing. If your class contains a single public method and many non-public methods, testing everything through that one public method becomes extremely difficult, and tests should be as simple and as easy to understand as possible. However, I do not recommend making everything non-private just to get testing code to compile.

For now we will take a short cut by marking the method as internal, allowing access from other code in the same assembly (another reason for putting all the code into one assembly); later I will present another solution to this problem. If you mark the method as internal and recompile, everything should now build and you can run the tests.

NUnit comes with two test ‘runners’ – the GUI runner and the console runner. To run the NUnit console runner, open a command prompt and execute the following command:
c:\>"c:\Program Files\NUnit 2.2\bin\nunit-console.exe"
This will show the help for the command. To run the console test runner change to the directory containing the assembly and run:
"c:\Program Files\NUnit 2.2\bin\nunit-console.exe" FileDataTest.dll
This will produce output something like:
.F
Tests run: 1, Failures: 1, Not run: 0, Time: 0.0400576 seconds

Failures:
1) FileDataTest.TestFileCount.TestCountCharsNoNewLines :
   expected:<31> but was:<0>
   at FileDataTest.TestFileCount.TestCountCharsNoNewLines() in
   c:\filedatatest\testfilecount.cs: line 30
The ‘F’ indicates a test failure. The console runner is quick and efficient; however, NUnit has another runner that you might want to use: the GUI runner. This is a Windows Forms application that we will use for the rest of the article.
To run the test fixture using the GUI runner you first need to start it. It should be available from the Start menu under the NUnit 2.2 program group (at the time of writing). Once started, select File > Open, then browse to the DLL containing your test cases, load the DLL and hit the Run button.
Figure_2 shows what the results look like.
Figure 2

We have a red bar! Red is bad! The idea is to make the bar go green. Green is good! It means all the tests have passed. To get to this state of Nirvana we have to go back and write an implementation of the method under test. A naïve implementation would look like this:
internal void CountChars(byte[] data)
{
 _characters += data.Length;
}
Re-compile, re-run the test, and the bar goes green (remember, this is good!). However, this implementation is far too simple: it counts carriage returns and line feeds as characters, which most tools do not do, so we have to define what we mean by a character. To do this we can use the .NET Char.IsControl(char c) method. Using it changes our implementation to:
internal void CountChars(byte[] data)
{
 foreach (byte b in data)
 {
  if (Char.IsControl((char)b) == false)
   _characters++;
 }
}
Not as efficient, but it does turn the bar green, so we move on for now. We can now write tests for the other methods, testing and refining as we go along.

Testing exceptions

One of the important points of unit testing is that you must test edge cases. For example, what happens when you pass a null buffer to the CountChars method? Let’s try it:
[Test]
public void TestCountCharsWithNullBuffer()
{
  byte[] byteData = null;
  fc.CountChars(byteData);
  Assert.AreEqual(0, fc.Characters);
}
If you run the tests now, this test fails with an exception:
FileDataTest.TestFileCount.TestCountCharsWithNullBuffer :
  System.NullReferenceException :
  Object reference not set to an instance of an object.
This is probably not what is wanted. In this case we have (at least) two choices: within the method we can check for the null case and leave the character count at 0, or we can throw an application-level exception. For pedagogical reasons we will throw the exception. The code now looks like this:
internal void CountChars(byte[] data)
{
  if (data == null)
    throw new ArgumentNullException("Data element cannot be null");
  foreach (byte b in data)
  {
    if (Char.IsControl((char)b) == false)
      _characters++;
  }
}
Running the test again still produces an exception:
FileDataTest.TestFileCount.TestCountCharsWithNullBuffer :
 System.ArgumentNullException :
 Data element cannot be null
However, this is now expected, and we must amend the test case so that it succeeds when this exception is thrown. The way to do this in NUnit is to add another attribute to the test case: the ExpectedException attribute. The test case now looks like this:
[Test]
[ExpectedException (typeof (ArgumentNullException))]
public void TestCountCharsWithNullBuffer()
{
 byte[] byteData = null;
 fc.CountChars(byteData);
}
Notice that the ExpectedException attribute takes a type as its parameter: the type of the exception that we expect to be thrown. If the exception is thrown the test succeeds, and if it is not thrown the test fails, which is exactly what we want. There are some other attributes we should mention, starting with the Ignore attribute. You add this to any test cases that should not be run. It takes a message as a parameter giving the reason for not running the test, something like this:
[Test]
[Ignore("Code not yet written")]
public void testSomeTest()
{
  ...
}
You can use this to include tests that you know need to be written but have not got around to yet; it acts as a reminder that the test has to be written and run at some point (in the NUnit GUI these tests show up in yellow). There is also the Explicit attribute. A test case with this attribute will not be run automatically; instead, you have to explicitly choose the test case in the runner.
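For example (a sketch; the test name is invented), an explicit test is declared like this:

```csharp
// This test is skipped in a normal run; it executes only when it is
// explicitly selected in the test runner. Useful for slow or
// destructive tests.
[Test, Explicit]
public void LongRunningIntegrationTest()
{
    // ...
}
```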
Testing private methods
I now want to address one of the issues we skipped over above: how to test private methods. Originally I said to mark the method as internal; however, this breaks one of the cardinal rules of object-oriented programming. We should keep our scopes as narrow as possible, and marking a method as internal when it should be private smells wrong. But if you mark the CountChars method private, the code simply fails to compile. To overcome this limitation we have to use another feature of .NET: reflection. Reflection allows us to reach into any .NET class and examine information about it, including the data members, properties and methods the class has available. Reflection also allows us to set the values of properties and data members, and to execute methods, including private methods (assuming we have the correct security permissions).
Let’s take the simplest test first, TestCountCharsNoNewLines(); it changes to the following:
[Test]
public void TestCountCharsNoNewLines()
{
  // 1234567890123456789012345678901
  string stringData = "There are 31 chars in this line";
  byte[] byteData = Encoding.ASCII.GetBytes(stringData);
  Type t = typeof(FileCount);
  MethodInfo mi = t.GetMethod("CountChars",
    BindingFlags.NonPublic | BindingFlags.Instance);
  mi.Invoke(fc, new object[] { byteData });
  Assert.AreEqual(stringData.Length, fc.Characters);
}
In this code we replace the call to fc.CountChars() with a set of Reflection APIs. To call the CountChars method using reflection we have to do several things:
  • We have to know the type of object we want to call the method on
  • We have to have an instance of that type
  • We have to have a reference to the method to call
  • We have to pass any data to the method
So in the above code the first thing we do is use the typeof operator to get a reference to the Type object for FileCount. We use the Type’s GetMethod member to get a reference to a MethodInfo instance that represents the CountChars method. Notice that we pass BindingFlags.NonPublic and BindingFlags.Instance to the GetMethod call; no prizes for guessing that these say we want a reference to a non-public, non-static member of FileCount. Once we have this reference we can call the method. This is done through the Invoke method of MethodInfo, which takes two arguments: the instance on which to call the method and the parameters to that method. Remember that the instance (fc in the above code) is created in the SetUp method. The parameters (in this case the byte array) have to be passed as an array of objects. The CLR then manages the calling of the method with the correct stack in place. Phew!
We can now convert the other two test cases, which look like this:
[Test]
public void TestCountCharsWithNewLines()
{
  // 1234567890123456789012345678901
  string stringData = "There are 31 chars in this line\r\n";
  byte[] byteData = Encoding.ASCII.GetBytes(stringData);
  Type t = typeof(FileCount);
  MethodInfo mi = t.GetMethod("CountChars",
    BindingFlags.NonPublic | BindingFlags.Instance);
  mi.Invoke(fc, new object[] { byteData });
  Assert.AreEqual(stringData.Length - 2, fc.Characters);
}

[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void TestCountCharsWithNullBuffer()
{
  byte[] byteData = null;
  Type t = typeof(FileCount);
  MethodInfo mi = t.GetMethod("CountChars",
    BindingFlags.NonPublic | BindingFlags.Instance);
  mi.Invoke(fc, new object[] { byteData });
}
Recompile and run the code. The bar goes red – oops! The failing test is TestCountCharsWithNullBuffer and the error is:
FileDataTest.TestFileCount.TestCountCharsWithNullBuffer :
  Expected: ArgumentNullException but was TargetInvocationException
What happens is that the exception is thrown by the method under test, but the Invoke call wraps it in a TargetInvocationException, which is not what we want. We need to unwrap the exception, which is easy to do. Amend the code to look like the following:
[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void TestCountCharsWithNullBuffer()
{
  byte[] byteData = null;
  Type t = typeof(FileCount);
  MethodInfo mi = t.GetMethod("CountChars",
    BindingFlags.NonPublic | BindingFlags.Instance);
  try
  {
    mi.Invoke(fc, new object[] { byteData });
  }
  catch (TargetInvocationException tie)
  {
    throw tie.InnerException;
  }
}
We wrap the call to Invoke in a try..catch block and re-throw the TargetInvocationException’s InnerException. Re-run the code and the bar turns green. Nirvana again.



Visual Studio Unit Tests:

Description (from Wikipedia):
The Visual Studio Unit Testing Framework describes Microsoft's suite of unit testing tools as integrated into some versions of Visual Studio 2005 and later. The unit testing framework is defined in Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll. Unit tests created with the unit testing framework can be executed in Visual Studio or, using MSTest.exe, from a command line.

Elements

Test class

Test classes are declared as such by decorating a class with the TestClass attribute. The attribute is used to identify classes that contain test methods. Best practices state that test classes should contain only unit test code.

Test method

Test methods are declared as such by decorating a unit test method with the TestMethod attribute. The attribute is used to identify methods that contain unit test code. Best practices state that unit test methods should contain only unit test code.

Assertions

An assertion is a piece of code that is run to test a condition or behavior against an expected result. Assertions in Visual Studio unit testing are executed by calling methods in the Assert class.

Initialization and cleanup methods

Initialization and cleanup methods are used to prepare unit tests before running and cleaning up after unit tests have been executed. Initialization methods are declared as such by decorating an initialization method with the TestInitialize attribute, while cleanup methods are declared as such by decorating a cleanup method with the TestCleanup attribute.
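As a sketch, the two attributes fit together like this (the CalculatorTests class and its contents are invented for illustration):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    private List<int> _scratch;

    // Runs before each test method in this class
    [TestInitialize]
    public void Init()
    {
        _scratch = new List<int>();
    }

    [TestMethod]
    public void AddStoresValue()
    {
        _scratch.Add(42);
        Assert.AreEqual(1, _scratch.Count);
    }

    // Runs after each test method in this class
    [TestCleanup]
    public void Cleanup()
    {
        _scratch = null;
    }
}
```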

Sample Test

Below is a very basic sample unit test:
using Microsoft.VisualStudio.TestTools.UnitTesting;
 
[TestClass]
public class TestClass
{
    [TestMethod]
    public void MyTest()
    {
        Assert.IsTrue(true);
    }
} 
 
 
 

More Tutorials and walkthroughs:

1. A Unit Testing Walkthrough with Visual Studio Team Test: http://msdn.microsoft.com/en-us/library/ms379625(v=vs.80).aspx

2. Visual Studio Unit testing intro: http://www.jeff.wilcox.name/2008/08/utbasics/

3. Unit Testing 401: http://www.learnvisualstudio.net/series/unit_testing_401/

4. Unit testing with Microsoft Visual Studio 2012: http://www.agile-code.com/blog/unit-testing-with-microsoft-visual-studio-2012/
 



xUnit.net:

xUnit.net is a unit testing tool for the .NET Framework. Written by the original inventor of NUnit, xUnit.net is the latest technology for unit testing C#, F#, VB.NET and other .NET languages. It works with ReSharper, CodeRush, and TestDriven.NET, and is currently one of the highest-rated .NET unit testing frameworks.

How do I use xUnit.net?

This page contains basic instructions on how to use xUnit.net. If you are an existing user of NUnit 2.x or MSTest (the Visual Studio unit testing framework), you should see our comparisons with existing frameworks page.

Writing and Running Your First Test

  • Create a Class Library project to hold your tests (we will assume it is called "MyTestLibrary").
  • Add a reference to the xunit.dll assembly.
  • Add a class to hold your first test class (here we call it "MyTests"). Here is an example test:
using Xunit;

public class MyTests
{
    [Fact]
    public void MyTest()
    {
        Assert.Equal(4, 2 + 2);
    }
}
  • Compile your project, and ensure it compiles correctly.
  • From the command line, run the following command: xunit.console MyTestLibrary.dll (Note: if xunit.console.exe is not in your path, you may need to provide a full path name to it in the command line above). You should see output like this:
C:\MyTests\bin\Debug> xunit.console MyTestLibrary.dll
xUnit.net console test runner (64-bit .NET 2.0.50727.0)
Copyright (C) 2007-11 Microsoft Corporation.

xunit.dll:     Version 1.9.1.0
Test assembly: C:\MyTests\bin\Debug\MyTestLibrary.dll

1 total, 0 failed, 0 skipped, took 0.302 seconds
  • Success!
The Assert class is provided by xUnit.net and contains various methods that can be used to ensure that your test data is valid.

When you run the console runner and pass your library DLL name, the runner loads the DLL and looks for all the methods decorated with the [Fact] attribute and runs them as unit tests. If you want to add more tests, simply add more methods to your test class, or even start new test classes!

When a Test Fails

If a test fails, the xUnit.net console runner will tell you which test failed and where the failure occurred. The failure might be because of a bad assertion:

[Fact]
public void BadMath()
{
    Assert.Equal(5, 2 + 2);
}
Which shows output like this:

MyTests.BadMath [FAIL]
   Assert.Equal() Failure
   Expected: 5
   Actual:   4
   Stack Trace:
      C:\MyTests\MyTests.cs(8,0): at MyTests.BadMath()

The message clearly shows what happened ("Assert.Equal() Failure"), the expected and actual values, and the stack trace of where the failure occurred.

Your test will also fail if an unexpected exception occurs, such as:

[Fact]
public void BadMethod()
{
    double result = DivideNumbers(5, 0);

    Assert.Equal(double.PositiveInfinity, result);
}

public int DivideNumbers(int theTop, int theBottom)
{
    return theTop / theBottom;
}
When run, you should see output like:

MyTests.BadMethod [FAIL]
   System.DivideByZeroException : Attempted to divide by zero.
   Stack Trace:
      C:\MyTests\MyTests.cs(15,0): at MyTests.DivideNumbers(Int32 theTop, Int32 theBottom)
      C:\MyTests\MyTests.cs(8,0): at MyTests.BadMethod()

1 total, 1 failed, 0 skipped, took 0.274 seconds

Obviously, we must've thought that DivideNumbers used doubles instead of ints! :)

What if I Expected an Exception?

In the example above, what if I wanted to write a test to show I was expecting an exception to be thrown? You can use the Assert.Throws method:

[Fact]
public void DivideByZeroThrowsException()
{
    Assert.Throws<DivideByZeroException>(
        delegate
        {
            DivideNumbers(5, 0);
        });
}

public int DivideNumbers(int theTop, int theBottom)
{
    return theTop / theBottom;
}
When this test runs, it passes. Note that Assert.Throws requires you to specify the exact exception you're expecting; if the code throws any other exception, even one derived from the one you're expecting, the test still fails. Additionally, if you want to inspect the values of the exception object, Assert.Throws returns the exception object as a return value for you to do further assertions on.
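For example, a sketch (reusing the DivideNumbers helper from above, with an invented test name) that captures the returned exception and makes a further assertion against it:

```csharp
[Fact]
public void DivideByZeroExceptionCanBeInspected()
{
    // Assert.Throws hands back the caught exception object...
    DivideByZeroException ex = Assert.Throws<DivideByZeroException>(
        delegate
        {
            DivideNumbers(5, 0);
        });

    // ...so further assertions can be made against it
    Assert.NotNull(ex.Message);
}
```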

Skipping a Test

Sometimes you will need to temporarily skip a test. The [Fact] attribute has a Skip parameter which can be used to skip the test and show the reason it's being skipped.

[Fact(Skip="Can't figure out where this is going wrong...")]
public void BadMath()
{
    Assert.Equal(5, 2 + 2);
}
When you run this test with the console runner, you should see output like:

MyTests.BadMath [SKIP]
   Can't figure out where this is going wrong...

1 total, 0 failed, 1 skipped, took 0.000 seconds

Ensuring a Test Does Not Run Too Long

The [Fact] attribute contains a parameter named Timeout, which can be used to specify that a test must finish completely within the given time (in milliseconds).

[Fact(Timeout=50)]
public void TestThatRunsTooLong()
{
    System.Threading.Thread.Sleep(250);
}
When you run this test, you should see output similar to this:

MyTests.TestThatRunsTooLong [FAIL]
   Test execution time exceeded: 50ms

1 total, 1 failed, 0 skipped, took 0.050 seconds




TestDriven.Net:


TestDriven.NET is a zero-friction unit testing add-in for Microsoft Visual Studio .NET. The current release supports multiple unit testing frameworks, including NUnit, MbUnit and MS Team System, and is fully compatible with all versions of the .NET Framework.
TestDriven.NET allows a developer to run (or debug!) their tests from within Visual Studio with a single click.

source: http://www.codeproject.com/Articles/16810/Unit-Testing-with-TestDriven-NET

Test Fixtures

Create a new project and copy the code below into a new class file.
Study the following code for a moment. The code implements a Test Fixture, which is a normal class decorated with the special attribute [TestFixture]. Test Fixtures contain Test Methods, which are decorated with the [Test] attribute. Other decorations, such as [TestFixtureSetUp] and [TearDown], are used to mark methods with special meanings that will be explained later.
SampleFixture.cs
using System;
using NUnit.Framework;

namespace UnitTest
{
    [TestFixture]
    public class SampleFixture
    {
        // Run once before any methods
        [TestFixtureSetUp]
        public void InitFixture()
        {
        }

        // Run once after all test methods
        [TestFixtureTearDown]
        public void TearDownFixture()
        {
        }

        // Run before each test method
        [SetUp]
        public void Init()
        {
        }

        // Run after each test method
        [TearDown]
        public void Teardown()
        {
        }

        // Example test method
        [Test]
        public void Add()
        {
            Assert.AreEqual(6, 5, "Expected Failure.");
        }

    }
}

Running a Test Fixture

You can right-click on any test fixture file and run it directly from Visual Studio .NET. This is the beauty of TestDriven.NET.

Notice in your Error or Output tabs that a failure message appears.

Double-clicking on the failure will take you to the precise line that failed. Correct this line so it will pass, then re-test the Fixture.

Running a Test Method

You may also right-click anywhere inside a method and run just that one method.


Setup/Teardown Methods

If you have setup code that should be run once before any method or once after all methods, use methods decorated with the [TestFixtureSetUp] and [TestFixtureTearDown] attributes, as shown in the fixture above.

If you have setup code that should run once before each method or once after each method in your fixture, use methods decorated with the [SetUp] and [TearDown] attributes.

Tips on Writing Good Unit Tests

A proper unit test has these features:
  • Automated: No human input should be required for the test to run and pass. Often this means making use of configuration files that loop through various sets of input values to test everything that you would normally test by running your program over and over.
  • Unordered: Unit tests may be run in any order, and often are. TestDriven.NET does not guarantee the order in which your fixtures or methods will execute, nor can you be sure that other programmers will know to run your tests in a certain order. If you have many methods sharing common setup or teardown code, use the setup/teardown methods shown above. Otherwise, everything should be contained in the method itself.
  • Self-sufficient: Unit tests should perform their own setup/teardown, and optionally may rely upon the setup/teardown methods described above. Under no circumstances should a unit test require external setup, such as priming a database with specific values. If setup like that is required, the test method or fixture should do it.
  • Implementation-agnostic: Unit tests should validate and enforce business rules, not specific implementations. There is a fine line between the end of a requirement and the beginning of an implementation, yet it is obvious when you are squarely in one territory or the other. Business requirements have a unique smell: there is talk of customers, orders, and workflows. Implementation, on the other hand, smells very different: DataTables, Factories, and foreach() loops. If you find yourself writing unit tests that validate the structure of a Dictionary or a List object, there is a good chance you are testing implementation.
    Unit tests are designed to enforce requirements. Therefore, implementation tests enforce implementation requirements, which is generally a Bad Idea. Implementation is the part you don't care to keep forever. Depending on your skill level, implementations may change and evolve over time to become more efficient, more stable, more secure, etc. The last thing you need are unit tests yelling at you because you found a better way to implement a business solution.
    This advice runs counter to what you may read in other unit-testing literature; most authors recommend testing all public methods of all classes. I find that while this is consistent with the goal of testing all code, it often forces tests that do more to enforce implementation than business requirements.
    Business requirements often follow a sequence or pattern, and my view is that the pattern is the real thing to be tested. Writing unit tests for every CustomerHelper class and OrderEntryReferralFactory class often indicates that classes and methods could be organized to better follow the business requirements, or at least wrapped in classes that reflect the requirements.
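To make the distinction concrete, here is a sketch using a hypothetical Order class (the class name and the 10% discount rule are invented for illustration). The test asserts the business rule itself, not the data structures used to implement it:

```csharp
using NUnit.Framework;

// Hypothetical class under test: orders over 100.00 get a 10% discount.
public class Order
{
    public decimal Subtotal { get; private set; }

    public Order(decimal subtotal) { Subtotal = subtotal; }

    public decimal Total
    {
        get { return Subtotal > 100m ? Subtotal * 0.9m : Subtotal; }
    }
}

[TestFixture]
public class OrderTests
{
    // Business-rule test: survives any internal rewrite of Order.
    [Test]
    public void OrdersOver100GetTenPercentDiscount()
    {
        Assert.AreEqual(180m, new Order(200m).Total);
    }

    // An implementation test would instead assert, say, that Order stores
    // its line items in a Dictionary -- brittle, because that detail is
    // free to change without the business rule changing.
}
```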




Running the Tests

Running and Debugging tests

ReSharper automatically detects unit tests of NUnit and MSTest frameworks in your .NET projects; for JavaScript, QUnit and Jasmine frameworks are supported. Other unit testing frameworks such as xUnit.net and MSpec are supported via ReSharper plug-ins.
Next to declarations of test classes and single tests, ReSharper adds special icons on the left gutter of the editor window. Click these icons to run or debug tests.
Tests can also be run from the context menu. In addition, an arbitrary set of unit tests can be run or debugged from Visual Studio's Solution Explorer. Just right-click the project or solution and select Run unit tests or Debug unit tests.

Unit Test Explorer


ReSharper provides the Unit Test Explorer, a structured list of all the unit tests in your solution. The window is available via the ReSharper | Windows menu and is quickly populated after you build your project. Using Unit Test Explorer, you can run any combination of tests in one or more unit test sessions.

Unit Test Sessions



ReSharper runs unit tests in the Unit Test Sessions window. It is designed to help you run any number of unit test sessions, independently of each other or simultaneously. Sessions can be composed of any combination of tests. In debugging mode, only one session can run at a time.
The unit test tree shows the structure of the tests belonging to a session, which you can filter to show only passed, failed, or ignored unit tests. You can navigate to the code of any unit test by double-clicking it.
The progress bar and status bar display the current progress. You can stop, run, or rebuild and re-run unit tests at any time.
The preview pane lets you analyze test results and navigate from a failed test's output to the code lines that originated the exception, all with a single click.

Profiling Unit Tests with dotTrace Performance


You can also quickly profile the performance of unit tests from Visual Studio via JetBrains dotTrace Performance, a powerful .NET profiling tool.
To profile tests, you will need to install dotTrace Performance. You will then be able to start profiling directly from the editor using the sidebar marks that ReSharper adds for test classes and individual tests.

Analyzing Code Coverage with dotCover


Another JetBrains tool that helps with unit testing can be integrated with Visual Studio and ReSharper. With JetBrains dotCover, you can easily discover the degree to which the code of your solution is covered by unit tests.
Once you install dotCover, you can analyze and visualize the code coverage of unit tests in a selected scope, and thus quickly spot uncovered areas of code. This data can be very helpful for prioritizing application development and testing activities.