Dec 31, 2012

Ways of Data Access in Silverlight



Question: It is recommended to use either WCF, ADO.NET Data Services, or the out-of-band .NET RIA Services.
I know that ADO.NET Data Services are an abstraction that lies on top of WCF, but where do RIA Services fit in? What do they solve that I don't have in WCF?
Question posted on Channel 9 (http://channel9.msdn.com/shows/The+knowledge+Chamber/Yavor-Georgiev-using-WCF-with-Silverlight-30)

 Answered by Yavor Georgiev (Program Manager at Microsoft):
You've nailed the big three ways to get data into your Silverlight app. Using WCF gives you access to the message exchange pattern and things like how the data is encoded (hence features like duplex and binary), so it's the most flexible and powerful way to do services. Also when you use WCF you are building loosely-coupled and standards based services that can be composed and used by many other clients, not just Silverlight (after all WCF services are standard SOAP 1.2 services).
ADO.NET Data Services is geared toward a scenario where you want to expose a database as a REST-style service. It is great for that scenario but you are constrained to the serialization formats and message patterns of REST.
.NET RIA Services is similarly scoped to an end-to-end data-driven solution: you expose data and bind it to a rich Silverlight control in very few steps. The "service" and "client" are very tightly coupled, but you get features such as validation, paging, conflict management, batching, offline support, etc. Again you get great value if you are implementing this kind of scenario, but you lose the flexibility WCF gives you.
So I think all three approaches have great use cases.

Dec 28, 2012

Knockout : a Javascript library

I was searching YouTube for a video on ASP.NET MVC 4 when I stumbled upon one on ASP.NET MVC 3 + Knockout….

The name naturally evoked curiosity, and soon I was going through sites to learn more about the library. The more I read, the more fascinated I got..

The features it offers are so desirable to us web developers, and yet it is all so simple..

Despite the fact that it also means we will again have to get used to another set of syntax (I still haven't completely recovered from the wave of learning for the jQuery libraries yet!!!)

I can definitely see a big push towards it in the near future…

 

Definition of the Knockout.js library as available on the official website (Knockoutjs.com)

Knockout is a JavaScript library that helps you to create rich, responsive display and editor user interfaces with a clean underlying data model.

Any time you have sections of UI that update dynamically (e.g., changing depending on the user’s actions or when an external data source changes), KO can help you implement it more simply and maintainably.

 

--------------------------------------------------------------------------------------------------------

Headline features:

  • Elegant dependency tracking - automatically updates the right parts of your UI whenever your data model changes.
  • Declarative bindings - a simple and obvious way to connect parts of your UI to your data model. You can construct complex dynamic UIs easily using arbitrarily nested binding contexts.
  • Trivially extensible - implement custom behaviours as new declarative bindings for easy reuse in just a few lines of code.

---------------------------------------------------------------------------------------------------------

Other things that I noticed…

  • Tight integration with ASP.NET MVC (a server-side sketch follows this list) –
    • In a controller action, use the JSON ActionResult as the return type
    • In a simple HTML page, call the controller action through a jQuery Ajax call
    • On success of the previous call, fill the Knockout variable
    • Bind the UI controls to the KO variable.
  • MVVM-pattern-based bidirectional binding.
    • Bind the UI controls to the KO variable
    • Create functions to add/remove items in the KO variables
    • All the dependent controls will automatically reflect the changes
  • Very small footprint.
    • Easy on mobile browsers too
    • Since it is backward compatible down to IE 6, it can be used without fear of unsupported browsers.
  • Main components:
    • KO observables and observableArrays (for binding list-type controls)
    • The code to fill these variables
    • // This is a simple *viewmodel* - JavaScript that defines the data and behavior of your UI
      function AppViewModel() {
          this.firstName = "Bert";
          this.lastName = "Bertington";
      }

    • The activation code
    • // Activates knockout.js
      ko.applyBindings(new AppViewModel());

    • Binding the observable variables to the ui controls

      <pre>
        <p>First name: <input data-bind="value: firstName" /></p>
        <p>Last name: <input data-bind="value: lastName" /></p>
      </pre>
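To tie the ASP.NET MVC integration steps above together, here is a minimal server-side sketch. It assumes ASP.NET MVC 3; the controller, action, and Person type are hypothetical names for illustration, not from the Knockout documentation:

// Hypothetical MVC controller whose JSON output feeds a Knockout view model.
using System.Collections.Generic;
using System.Web.Mvc;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PeopleController : Controller
{
    // A jQuery Ajax call to /People/GetPeople receives this list as JSON;
    // the success callback can then push it into a ko.observableArray.
    public JsonResult GetPeople()
    {
        var people = new List<Person>
        {
            new Person { FirstName = "Bert", LastName = "Bertington" }
        };
        // AllowGet permits the JSON to be served in response to a GET request.
        return Json(people, JsonRequestBehavior.AllowGet);
    }
}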

Some important links:

Tutorials: http://learn.knockoutjs.com/#/?tutorial=intro

Video tutorials:

By the creator:  http://www.youtube.com/watch?v=DnhGqcKEPiM

Very simple but important tutorial: http://www.youtube.com/watch?v=ZAZrUUv2Xuk

HTML encoder: for encoding your HTML code so it can be added to your blog!!!

http://www.opinionatedgeek.com/dotnet/tools/htmlencode/encode.aspx

Dec 26, 2012

ASP.Net : Masterpage and HTML Table height 100% issue

Recently, in a test project, I was facing a strange problem.
The webpage layout was supposed to be as follows :

 
The layout was supposed to cover the entire browser real estate.. and should be resizable... so no hard-coded heights or widths...
I wanted to render it using an HTML table approach instead of the standard Div approach.
The first hurdle was that the Height attribute of the HTML Table isn't strictly followed during rendering in the browsers, even though the Width attribute works just fine. This is because, for the height, the browsers also depend on the container's height. So the oft-repeated suggestion is to use a style to spread the height and width of the 2 outermost containers of a page, i.e., the HTML and the BODY.
So I added the following style to my stylesheet:

html, body
{
height:100%;
width: 100%;
}

Followed by this in the webpage:

<table style="height: 100%px; width: 100%px;">
<tbody>
<tr>
  <td colspan=2 height=80>Header</td>
</tr>

<tbody>
<tr>
  <td  width=25% height=*>Navigation Links</td>
  <td  width=75%>Body</td>
</tr>


<tbody>
<tr>
  <td colspan=2 height=50>Footer</td>
</tr>

</tbody>
</table>

<br />



I noticed that web pages, both .aspx and plain HTML, work fine with this code..
But the moment I modify it for use in a master page, all the row-height formatting goes for a toss..

<table style="height: 100%px; width: 100%px;">
<tbody>
<tr>
  <td colspan=2 height=80>Header</td>
</tr>

<tbody>
<tr>
  <td  width=25% height=*>Navigation Links</td>
  <td  width=75%>
   <asp:contentplaceholder height:100="height:100" id="ContentPlaceHolderBody" runat="server />
</td>
</tr>


<tbody>
<tr>
  <td colspan=2 height=50>Footer</td>
</tr>

</tbody>
</table>

<br />

I tried all the suggested help on this matter on the net, like:

1. Putting the table inside a div and increasing the height of the div.

2. Adding a div within the asp:contentplaceholder tags. Viz.,

<asp:contentplaceholder id="ContentPlaceHolderBody" runat="server">
<div style="height:100%">
....
</div>
</asp:contentplaceholder>

3. Adding min-height and min-width styles to the html, body and tables.
4. Setting the CellPadding and CellSpacing attributes of the table to zero.
5. Analysing the pages generated by the aspx/HTML pages vs. those using the master page..
..etc..etc...

During my efforts at removing this aberration I was trying multiple things, and during one such run, voila! I struck gold!!!

The page was rendering just as expected, even inside the master page. And what did I do...?
Just deleted the Doctype tag, which is generated by default by the IDE.
FYI, this is what the usual doctype tag looks like...
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

I just deleted it from the top.. and that's it!!!
(The likely reason this works: without a doctype the browser falls back to quirks mode, where percentage heights on tables are resolved the old, lenient way.)

Hope it helps!!!

HTML: Media Attribute in the Link tag and CSS Media Queries

source:w3 schools - http://www.w3schools.com/tags/att_link_media.asp

HTML media Attribute

Example

 <head>
      <link href="theme.css" rel="stylesheet" type="text/css">
      <link href="print.css" media="print" rel="stylesheet" type="text/css">
 </head>
 




Browser Support

The media attribute is supported in all major browsers: Internet Explorer, Firefox, Opera, Google Chrome, and Safari.

Definition and Usage

The media attribute specifies what media/device the target resource is optimized for.
This attribute is mostly used with CSS stylesheets to specify different styles for different media types.
The media attribute can accept several values.


Possible Operators

  • and - specifies an AND operator
  • not - specifies a NOT operator
  • , (comma) - specifies an OR operator

Devices

  • all - Default. Suitable for all devices
  • aural - Speech synthesizers
  • braille - Braille feedback devices
  • handheld - Handheld devices (small screen, limited bandwidth)
  • projection - Projectors
  • print - Print preview mode/printed pages
  • screen - Computer screens
  • tty - Teletypes and similar media using a fixed-pitch character grid
  • tv - Television-type devices (low resolution, limited scroll ability)

Values

  • width - the width of the targeted display area. "min-" and "max-" prefixes can be used. Example: media="screen and (min-width:500px)"
  • height - the height of the targeted display area. "min-" and "max-" prefixes can be used. Example: media="screen and (max-height:700px)"
  • device-width - the width of the target display/paper. "min-" and "max-" prefixes can be used. Example: media="screen and (device-width:500px)"
  • device-height - the height of the target display/paper. "min-" and "max-" prefixes can be used. Example: media="screen and (device-height:500px)"
  • orientation - the orientation of the target display/paper. Possible values: "portrait" or "landscape". Example: media="all and (orientation: landscape)"
  • aspect-ratio - the width/height ratio of the targeted display area. "min-" and "max-" prefixes can be used. Example: media="screen and (aspect-ratio:16/9)"
  • device-aspect-ratio - the device-width/device-height ratio of the target display/paper. "min-" and "max-" prefixes can be used. Example: media="screen and (device-aspect-ratio:16/9)"
  • color - the bits per color of the target display. "min-" and "max-" prefixes can be used. Example: media="screen and (color:3)"
  • color-index - the number of colors the target display can handle. "min-" and "max-" prefixes can be used. Example: media="screen and (min-color-index:256)"
  • monochrome - the bits per pixel in a monochrome frame buffer. "min-" and "max-" prefixes can be used. Example: media="screen and (monochrome:2)"
  • resolution - the pixel density (dpi or dpcm) of the target display/paper. "min-" and "max-" prefixes can be used. Example: media="print and (resolution:300dpi)"
  • scan - the scanning method of a TV display. Possible values: "progressive" and "interlace". Example: media="tv and (scan:interlace)"
  • grid - whether the output device is grid or bitmap. Possible values: "1" for grid, "0" otherwise. Example: media="handheld and (grid:1)"





source: w3.org

Media types

Introduction to media types

One of the most important features of style sheets is that they specify how a document is to be presented on different media: on the screen, on paper, with a speech synthesizer, with a braille device, etc.
Certain CSS properties are only designed for certain media (e.g., the 'page-break-before' property only applies to paged media). On occasion, however, style sheets for different media types may share a property, but require different values for that property. For example, the 'font-size' property is useful both for screen and print media. The two media types are different enough to require different values for the common property; a document will typically need a larger font on a computer screen than on paper. Therefore, it is necessary to express that a style sheet, or a section of a style sheet, applies to certain media types.

Specifying media-dependent style sheets

There are currently two ways to specify media dependencies for style sheets:
  • Specify the target medium from a style sheet with the @media or @import at-rules.
    @import url("fancyfonts.css") screen;
    @media print {
      /* style sheet for print goes here */
    }
    
  • Specify the target medium within the document language. For example, in HTML 4 ([HTML4]), the "media" attribute on the LINK element specifies the target media of an external style sheet:
       <head>
          <title>Link to a target medium</title>
          <link href="foo.css" media="print, handheld" rel="stylesheet" type="text/css"></link>
       </head>
       <body>
          The body...
       </body>
    </html>
    
The @import rule is defined in the chapter on the cascade.

The @media rule

An @media rule specifies the target media types (separated by commas) of a set of statements (delimited by curly braces). Invalid statements must be ignored per 4.1.7 "Rule sets, declaration blocks, and selectors" and 4.2 "Rules for handling parsing errors." The @media construct allows style sheet rules for various media in the same style sheet:
  @media print {
    body { font-size: 10pt }
  }
  @media screen {
    body { font-size: 13px }
  }
  @media screen, print {
    body { line-height: 1.2 }
  }
Style rules outside of @media rules apply to all media types that the style sheet applies to. At-rules inside @media are invalid in CSS2.1.

Recognized media types

The names chosen for CSS media types reflect target devices for which the relevant properties make sense. In the following list of CSS media types the names of media types are normative, but the descriptions are informative. Likewise, the "Media" field in the description of each property is informative.
all
Suitable for all devices.
braille
Intended for braille tactile feedback devices.
embossed
Intended for paged braille printers.
handheld
Intended for handheld devices (typically small screen, limited bandwidth).
print
Intended for paged material and for documents viewed on screen in print preview mode. Please consult the section on paged media for information about formatting issues that are specific to paged media.
projection
Intended for projected presentations, for example projectors. Please consult the section on paged media for information about formatting issues that are specific to paged media.
screen
Intended primarily for color computer screens.
speech
Intended for speech synthesizers. Note: CSS2 had a similar media type called 'aural' for this purpose. See the appendix on aural style sheets for details.
tty
Intended for media using a fixed-pitch character grid (such as teletypes, terminals, or portable devices with limited display capabilities). Authors should not use pixel units with the "tty" media type.
tv
Intended for television-type devices (low resolution, color, limited-scrollability screens, sound available).
Media type names are case-insensitive.
Media types are mutually exclusive in the sense that a user agent can only support one media type when rendering a document. However, user agents may use different media types on different canvases. For example, a document may (simultaneously) be shown in 'screen' mode on one canvas and 'print' mode on another canvas.
Note that a multimodal media type is still only one media type. The 'tv' media type, for example, is a multimodal media type that renders both visually and aurally to a single canvas.
@media and @import rules with unknown media types (that are nonetheless valid identifiers) are treated as if the unknown media types are not present. If an @media/@import rule contains a malformed media type (not an identifier) then the statement is invalid.
Note: Media Queries supersedes this error handling.
For example, in the following snippet, the rule on the P element applies in 'screen' mode (even though the '3D' media type is not known).
@media screen, 3D {
  P { color: green; }
}
Note. Future updates of CSS may extend the list of media types. Authors should not rely on media type names that are not yet defined by a CSS specification.

Media groups

This section is informative, not normative.
Each CSS property definition specifies which media types the property applies to. Since properties generally apply to several media types, the "Applies to media" section of each property definition lists media groups rather than individual media types. Each property applies to all media types in the media groups listed in its definition.
CSS 2.1 defines the following media groups: continuous/paged, visual/audio/speech/tactile, grid/bitmap, and interactive/static.
The following table shows the relationships between media groups and media types:
Relationship between media groups and media types

Media Type   continuous/paged   visual/audio/speech/tactile   grid/bitmap   interactive/static
braille      continuous         tactile                       grid          both
embossed     paged              tactile                       grid          static
handheld     both               visual, audio, speech         both          both
print        paged              visual                        bitmap        static
projection   paged              visual                        bitmap        interactive
screen       continuous         visual, audio                 bitmap        both
speech       continuous         speech                        N/A           both
tty          continuous         visual                        grid          both
tv           both               visual, audio                 bitmap        both



Some scenarios of usage of the media attribute

  •  A solution to print Webpages (http://ozinisle.blogspot.sg/2009/12/media-attribute-of-link-tag-solution-to.html)



 Problems associated with using media attribute

 source : http://friendlybit.com/css/media-attribute/

Problem number 1: Presentation and content are coupled

One of the big selling points of CSS is the separation of presentation from content. And while I believe that's still a good thing to have, not everyone agrees about its usefulness. Some time ago Jeff Croft wrote an article about how unusual it is to only change the CSS and not the HTML of a site. While he's mostly making a point about why we should use a certain CSS framework, there's a good point hidden there: right now it isn't possible to separate things completely.
Designers have of course known this for centuries. They will tell you: you need to adapt the design to the content you’re designing. If you’re building a site for a shampoo, you might use water and bubbles in your design. If you build a web development blog, you use an image of blueish sky… ehm… Well, you get the point. Good design adapts to the content. They are coupled.
You can't just switch out the content and expect the design to still work. Sure, you can make small adjustments, and make these available as alternate stylesheets, but larger changes just don't work. The problem with the media attribute is that it's made for big design changes (switching media), but with no changes in content. How often can you just restyle a Word document to get a PowerPoint slide? What about converting that slide to something nicely viewable on a mobile phone? That's what the media attribute is there for.
My point is: switching to another media needs much more than just a change in design. You need to change the content to fit that media too. And if you change the content, why not change what stylesheet you link to? This is why I rarely use the “handheld” or “projection” values.

Problem number 2: Load time

You could think that the browser only loads the one stylesheet that matches the media it’s currently showing. Not true. All stylesheets, no matter what media they are tied to, are loaded at startup.
This means that the more media types you account for, the longer the load time for any of them will be. Very annoying.

Problem number 3: User agent support

While being able to design for specific user agents might sound like a good idea, the media attribute still requires user agents to support it. If a large part of user agents refuse to apply your style using the media attribute, why not use another method directly? Just to get an idea of how messy support currently is, you can read the css-discuss summary of the handheld problems:
“Some current phones apply “screen” styles as well as “handheld” styles, others ignore both, and in some cases the phone carrier runs pages through a proxy that strips styles out even if the phone could recognize them, so it’s a crapshoot figuring out what will get applied”
As you see, the value of the media attribute isn’t entirely obvious. Sure, you might be able to use it for print with good results, but that’s not all it’s there for. Right?

 


Alternative to using the media attribute


 


CSS Media queries

source: w3.org

Example


<link rel="stylesheet" href="/stylesheets/homedark.css" type="text/css" 
    media="screen and (max-device-width: 480px)">

Inside your stylesheet, it would be something like this:
@media only screen and (max-device-width: 480px) 
{
    /* Your styles here */
}

 

 Description and documentation-

  •  http://cssmediaqueries.com/
  • A super easy video tutorial : www.youtube.com/watch?v=FNIe-Y2V0hg

Dec 24, 2012

Generating Dynamic HTML Tables with jQuery From an In-Memory DataTable

(sourced from: http://www.codecapers.com/post/generating-dynamic-html-tables-with.aspx)

By nature, web applications are stateless, so you have to work a little harder to make them produce a user experience equivalent to that of a traditional Windows application.
In this article, I will go over the process of loading an HTML table into a web page using Ajax and jQuery. The process will require you to execute a query in SQL Server, load the result into a DataTable and return it to the browser as JSON (JavaScript Object Notation). Finally, the JSON string can be consumed by jQuery and rendered as an HTML table in the browser.
Let's start out by looking at this traditional snippet of ADO.NET code:
System.Data.DataTable tbl = new System.Data.DataTable();

try {
    using (SqlConnection conn = new SqlConnection(connectString)) {
        SqlDataAdapter da = new SqlDataAdapter(sql, conn);                
        da.Fill(tbl);                                         
     }
}
catch {
    //do something
}
The code simply loads a query result into a DataTable. This is something that most .NET developers have probably done a million times before. Therefore, I will just move on to the next step, which is serializing the DataTable as JSON.
After a bit of experimenting, I determined that a collection of Dictionary objects would be a perfect candidate for serializing the DataTable as a JavaScript object. I originally tried just converting the DataTable by calling Json(tbl). However, the conversion was not sufficient for my needs. By using a collection of Dictionary objects, each row in the DataTable would map to a Dictionary object. Here is the code that I ended up with:
using System.Web.Script.Serialization;
...
public static string GetJson(DataTable table) {
    JavaScriptSerializer jss = new JavaScriptSerializer();
    List<Dictionary<string, object>> rows = new List<Dictionary<string, object>>();
    Dictionary<string, object> row;

    foreach (DataRow dr in table.Rows) {
        row = new Dictionary<string, object>();
        foreach (DataColumn col in table.Columns) {
            row.Add(col.ColumnName, dr[col]);
        }
        rows.Add(row);
    }
    return jss.Serialize(rows);
}

To clarify the code, let's consider the following table:
ID NAME
1 Foo
2 Bar
This table would produce a list with two Dictionary objects, one per row. Each dictionary's keys are the column names and its values are that row's cell values; so the first row becomes a dictionary mapping "ID" to 1 and "NAME" to "Foo". When converted to a JSON string, the entire table would look something like this:
[ { "ID": 1, "NAME": "Foo" }, { "ID": 2, "NAME": "Bar" } ]
For those of you who can interpret JSON, this is simply an array with two objects.
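For completeness, here is a sketch of how the serialized table might be exposed from the server. This is my own illustration, assuming ASP.NET MVC with the GetJson helper above accessible to the controller; the connection string, table and column names are made up:

using System.Data;
using System.Data.SqlClient;
using System.Web.Mvc;

public class UtilsController : Controller
{
    private const string connectString = "<your connection string>"; // placeholder

    // Matches the $.post("/Utils/GetData", { orderBy: ..., keyword: ... }) call below.
    public ContentResult GetData(string orderBy, string keyword)
    {
        DataTable tbl = new DataTable();
        using (SqlConnection conn = new SqlConnection(connectString))
        {
            // Parameterize user input; orderBy would need whitelisting in real code.
            SqlCommand cmd = new SqlCommand(
                "SELECT * FROM Orders WHERE Customer LIKE @kw", conn);
            cmd.Parameters.AddWithValue("@kw", "%" + keyword + "%");
            new SqlDataAdapter(cmd).Fill(tbl);
        }
        // Serialize with the GetJson helper and return it with a JSON content type.
        return Content(GetJson(tbl), "application/json");
    }
}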
So now I have two parts of the puzzle completed. First, I executed the query and populated a DataTable. Secondly, I serialized the DataTable into JSON. Now I need to add the code to my webpage to make the asynchronous JavaScript call and format the results as an HTML table. Luckily for us, jQuery makes AJAX calls simple by using the post method:
   1:  $.post("/Utils/GetData", 
   2:      { orderBy: ord, keyword: key },
   3:      function(data) {
   4:         buildTable(data);
   5:      }
   6:  );
The first parameter in the post method is the URL of the method we are invoking on the web server. The second parameter consists of the values you are passing to the method. In my case, the method took 2 strings, one named orderBy and another named keyword. The variables ord and key were initialized earlier in the code by reading the values of two textboxes on the page. On line 3, we have the callback function that is invoked when the results are returned from the web server. The variable named data is the JSON representation of the DataTable we created using the GetJson method. The only thing left to do now is render the JSON results as a table.
To render the dynamic table I started by adding a placeholder table to my web page which will hold the results:
<table id="grid"></table>
Since there are no rows initially in my table it does not appear on the page. However, if you wanted to hide it you could easily do so by calling $("#grid").hide() or setting the initial style to display:none. And finally, here is the javascript that converts the serialized DataTable into HTML:
function buildTable(tableData) {    
    var table = $("#grid");
    table.html("");  //clear out the table if it was previously populated
    eval("var data = " + tableData);  //load the data variable as an object array

    table.append('<thead><tr></tr></thead>');
    var thead = $('thead tr', table);

    //create the table headers
    for (var propertyName in $(data)[0]) {
        thead.append('<th>' + propertyName + '</th>');
    }

    //add the table rows
    $(data).each(function(key, val) {
        table.append('<tr></tr>');
        var tr = $('tr:last', table);
        for (var propertyName in val) {
            tr.append('<td>' + val[propertyName] + '</td>');
        }
    });
}

In my application, I used this code to create a web page that allows a user to constantly modify the parameters for a query and dynamically update the results of a table without refreshing the page. jQuery and Ajax really make the web application feel "state-ful". Not to mention, it drastically improves the user experience within the application.

Unit Testing: definition and details

Description:
Unit testing is the automated testing of software components. The technique is used to build high-quality, reliable software by writing a suite of accompanying automated tests that validate assumptions and business requirements implemented by your software.

Over the last few years a movement has appeared in software development called ‘eXtreme Programming’ or XP. XP has many facets, but one of the most interesting is the idea of ‘agile’ methodologies. The primary idea behind agile programming is that software is delivered early and continuously. To achieve this, developers must ensure that their software is well tested. This has led to the idea of ‘Test Driven Development’ or TDD. In TDD, developers continually test their code to ensure that the code works, and also to ensure that the changes they have made do not break existing code. To do this effectively requires several things:
  • The tests have to be repeatable, which means that they can be re-run whenever necessary and so allow for regression testing.
  • The tests have to be runnable by somebody other than the test author. This allows for sharing of code and tests, and it also means that if the developer leaves the project then the tests are still there to be used and are meaningful.
  • The test results have to be concise, but errors must be very visible. There is little point in running tests if the errors are hidden in the output of successful tests.
The above requirements have led to several testing frameworks, collectively known as the xUnit frameworks, where the x is replaced by a letter or word that identifies the language or system being used, for example JUnit for Java testing and NUnit for .NET testing. One other thing to keep in mind is that exponents of TDD do exactly what TDD says, i.e. use the tests to drive the development. This means writing the tests first, then writing the code. Initially this is hard to do because all developers want to get on and write code, but psychologically it makes sense. There is a tendency when writing code first to then test what you have written, whereas if you write the test first you should write tests to test the ideas that you are trying to convey.

Unit Testing tools:
NUnit:
NUnit is a unit-testing framework for all .NET languages. Initially ported from JUnit, the current production release, version 2.6, is the seventh major release of this xUnit-based unit testing tool for Microsoft .NET. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, for example custom attributes and other reflection-related capabilities. NUnit brings xUnit to all .NET languages.

NUnit is the unit testing framework that has the majority of the market share. It was one of the first unit testing frameworks for the .NET platform. It utilizes attributes to identify what a test is. The TestFixture attribute is used to identify a class that will expose test methods. The Test attribute is used to identify a method that will exercise a test subject. Let's get down to business and look at some code.
First we need something to test:

public class Subject { 
  public Int32 Add(Int32 x, Int32 y)
  { 
    return x  + y; 
  } 
}
That Subject class has one method: Add. We will test the Subject class by exercising the Add method with different arguments.

[TestFixture]
public class tSubject
{
  [Test]
  public void tAdd()
  {
    Int32 Sum;
    Subject Subject = new Subject();
    Sum = Subject.Add(1,2);
    Assert.AreEqual(3, Sum);
  }
}
The class tSubject is decorated with the attribute TestFixture, and the method tAdd is decorated with the attribute Test. You can compile this and run it in the NUnit GUI application. It will produce a successful test run.
That is the basics of what NUnit offers. There are attributes to help with setting up and tearing down your test environment: SetUp, TestFixtureSetUp, TearDown, and TestFixtureTearDown. TestFixtureSetUp is run once, when the fixture is first created; similarly, TestFixtureTearDown is run once after all tests have completed. SetUp and TearDown are run before and after each test.
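As a small sketch of my own (not from the article), here is how those setup/teardown attributes fit around the Subject class used above:

[TestFixture]
public class tSubjectLifecycle
{
    private Subject subject;

    [TestFixtureSetUp]
    public void InitFixture()   // runs once, before any test in this fixture
    {
    }

    [SetUp]
    public void Init()          // runs before each [Test]
    {
        subject = new Subject();
    }

    [Test]
    public void tAddNegative()
    {
        Assert.AreEqual(-1, subject.Add(1, -2));
    }

    [TearDown]
    public void Cleanup()       // runs after each [Test]
    {
        subject = null;
    }

    [TestFixtureTearDown]
    public void TearDownFixture()   // runs once, after all tests have finished
    {
    }
}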
NUnit tests can be run several different ways: from the GUI application, from the console application, and from a NAnt task. NUnit has been integrated into CruiseControl.NET as well. In the last product review, you will see how it has been integrated into the VS.NET IDE too.
Figure 1. NUnit GUI Application

--------------------------------------------------------------------------------------------------

NUnit is a framework for unit testing .NET applications. It is an open source tool, and at the time of writing version 2.2 is available from www.nunit.org. To see NUnit in action we will write a class and a set of tests for that class. To run the tests for your code you will need to download this version of NUnit and then install it. NUnit will be installed in the GAC making it available to all .NET processes.

Setting up Visual Studio

The easiest way to write test code if you are using Visual Studio is to use two separate projects: one that contains the code under test (i.e. your normal code), and one that contains the testing code. Typically each of these projects will be built into its own assembly, and so long as both assemblies are available to the NUnit framework, this is not a problem, as NUnit will be able to load them.

However, Visual Studio does put a roadblock in the way of this. If the code under test is to be built into a .EXE assembly rather than a .DLL, then Visual Studio will not let you reference this assembly from another project, something you have to do if you want your test assembly to compile. To get around this you can build a copy of the code to be tested into the same assembly as the testing code. This assembly can then be loaded by NUnit and all the tests run.

To do this under Visual Studio you create a solution with two projects; the first project is your code and the second project is your testing code. In the test project you then need to add references to the code from the "real" project. To do this you right-click on the test project, select Add, then "Add Existing Item". Now browse to the directory the real code is in, select the files to add, then on the button in the dialog make sure you select "Link File" (see Figure 1 for an example).
Figure 1

Once you link to the files you can then build your testing assembly.

The class to test

To show how NUnit works we need a class to test. I wanted something that was fairly easy to understand but would also allow me to point out the features of NUnit. The class we will write will be a utility class that will count the number of characters, words and paragraphs in a file (very useful if you are an author and get paid by the word!). Our class will look something like this
public class FileCount
{
 private int _words;
 private int _characters;
 private int _paragraphs;
 public int Characters
 {
  get{ return _characters; }
 }
 public int Words
 {
  get{ return _words; }
 }
 public int Paragraphs
 {
  get{ return _paragraphs; }
 }
 public FileCount(string fileName)
 {
  // read file into buffer
  // count chars, words and paras
 }
 private void CountChars(byte[] data)
 {
 }
 private void CountWords(byte[] data)
 {
 }
 private void CountParagraphs(byte[] data)
 {
 }
}
In this class the constructor is passed a filename. It has to open the file and determine the number of characters, words and paragraphs in the file. Following in the footsteps of the agile programmers we will write the tests before we write the code. Let’s start by writing a test and then running it in NUnit, to show how this all hangs together.

Writing and testing code

Our testing class looks like this:
using System;
using NUnit.Framework;
using kevinj;
namespace FileDataTest
{
 [TestFixture]
 public class TestFileCount
 {
  FileCount fc;
  [SetUp]
  public void SetUp()
  {
   fc = new FileCount("");
  }
  [Test]
  public void TestCountChars()
  {
   fc.CountChars();
  }
 }
}
A couple of points to note here: The kevinj namespace is the namespace containing the code under test, while the NUnit.Framework namespace references the NUnit code.
This test class currently contains two methods: SetUp() and TestCountChars(). Traditionally (i.e. in JUnit, the Java equivalent of NUnit, which is the grand-daddy of all unit testing tools) these names matter; however, in NUnit it is the attributes that tell the story.
The [TestFixture] attribute marks this class as containing tests.
The [Test] attribute marks this as a test case, i.e. code that will be run by NUnit to execute one or more tests; you can mark as many methods as you need with this attribute.
The [SetUp] attribute, on the other hand, can only be applied to one method. This method is run before the start of each test and is used to initialise the test environment. There is also a corresponding [TearDown] attribute for a method that is run at the end of each test case.
This means that if I had two tests called foo and bar, the order of execution would be:
SetUp(), Foo(), TearDown(), SetUp(), Bar(), TearDown().
The first test we write will test the CountChars method of the FileCount class – as this method hasn’t been written yet, the test should fail. The test code looks like this:
[Test]
public void TestCountCharsNoNewLines()
{
 //                   1234567890123456789012345678901
 string stringData = "There are 31 chars in this line";
 byte[] byteData = Encoding.ASCII.GetBytes(stringData);
 fc.CountChars(byteData);
 Assert.AreEqual(byteData.Length, fc.Characters);
}
In the test we create the data used for the test, put it into a byte array and call the method we would like to test. When the method returns we check to see if the test has succeeded or failed; we do this by calling a method of the Assert class. Assert is a class that provides many static methods to check the result of tests that you run. There are three kinds of methods on Assert: comparisons, conditionals and utility methods. The comparison methods have two forms:
Assert.AreEqual( type expected, type actual );
…and:
Assert.AreEqual( type expected, type actual, string description );
…where type is a .NET type, for example:
Assert.AreEqual( int expected, int actual );
Although there are some variations on this, for example the AreEqual methods for float also take a parameter that is the tolerance in the comparison. There are also AreSame methods that check for reference equality, and since NUnit 2.2 you can also compare arrays for equality.
The conditional tests are IsTrue, IsFalse, IsNull and IsNotNull and finally the utility methods are Fail and Ignore.
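As an illustration (a hypothetical test of my own), the variants described above could be exercised like this:

[Test]
public void AssertVarieties()
{
    int sum = 1 + 2;
    object a = sum;
    object b = a;
    Assert.AreEqual(3, sum, "sum should be 3");   // comparison with a description
    Assert.AreEqual(0.333f, 1f / 3f, 0.001f);     // float overload takes a tolerance
    Assert.AreSame(a, b);                         // reference equality
    Assert.IsTrue(sum > 0);
    Assert.IsNotNull(a);
}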

Running the tests

If you try to compile this code, it will fail. This is because the CountChars() method in the FileCount class is private. There is an ongoing debate as to whether 'private' methods should be tested, and by private here I mean anything that is not public. My personal view is that you should test anything that needs testing. If your class contains only a single method in its public interface and many non-public methods, simply testing that public method becomes extremely difficult. Tests should be as simple as possible and as easy to understand as possible. However, I do not recommend making everything non-private just to get testing code to compile. For now we will take a shortcut by marking the method as internal, allowing access to the method from other code in the same assembly (another reason for putting all the code into one assembly); later I will present another solution to this problem. So, if you mark the method as internal and recompile, everything should now build and you can run the tests. NUnit comes with two test 'runners': the GUI runner and the console runner. To run the NUnit console runner, open a command prompt and execute the following command:
c:\>"c:\Program Files\NUnit 2.2\bin\nunit-console.exe"
This will show the help for the command. To run the console test runner change to the directory containing the assembly and run:
"c:\Program Files\NUnit 2.2\bin\nunit-console.exe" FileDataTest.dll
This will produce output something like:
.F
Tests run: 1, Failures: 1, Not run: 0, Time: 0.0400576 seconds

Failures:
1) FileDataTest.TestFileCount.TestCountCharsNoNewLines :
   expected:<31> but was:<0>
   at FileDataTest.TestFileCount.TestCountCharsNoNewLines() in
   c:\filedatatest\testfilecount.cs: line 30
The ‘F’ indicates a test failure. The console runner is very quick and efficient, however NUnit has another runner that you might want to use, the GUI runner. This is a Windows Forms application that we will use for the rest of the article.
To run the test fixture using the GUI runner you first need to start it. It should be available from the Start menu under the NUnit 2.2 program group (at the time of writing). Once started, select File..Open File then browse to the DLL containing your test cases, load the DLL and hit the run button.
Figure 2 shows what the results look like.
Figure 2

We have a red bar! Red is bad! The idea is to make the bar go green. Green is good! It means all the tests have passed. To get to this state of Nirvana we have to go back and write an implementation of the method under test. A naïve implementation would look like this:
internal void CountChars(byte[] data)
{
 _characters += data.Length;
}
Re-compile and re-run the test, and the bar goes green (remember, this is good!). However, this implementation is far too simple: it counts carriage returns and line-feeds as characters, and most tools do not do this, so we have to define what we mean by a character. To do this we can use the .NET Char.IsControl(char c) method. Using this changes our implementation to:
internal void CountChars(byte[] data)
{
 foreach (byte b in data)
 {
  if (Char.IsControl((char)b) == false)
   _characters++;
 }
}
Not as efficient, but it does turn the bar green, so we move on for now. We can now write tests for the other methods, testing and refining as we go along.

Testing exceptions

One of the important points of unit testing is that you must test edge cases. For example, what happens when you pass a null buffer to the CountChars method? Let’s try it:
[Test]
public void TestCountCharsWithNullBuffer()
{
 byte[] byteData = null;
 fc.CountChars(byteData);
 Assert.AreEqual(0, fc.Characters);
}
If you run the tests now the tests fail with an exception:
FileDataTest.TestFileCount.TestCountCharsWithNullBuffer :
 System.NullReferenceException : Object reference not set to an instance of an object.
This is probably not what is wanted. In this case we have (at least) two choices: within the method we can check for the null case and leave the character count at 0, or we can throw an application-level exception. For pedagogical reasons we will throw the exception. The code now looks like this:
internal void CountChars(byte[] data)
{
 if (data == null)
  throw new ArgumentNullException(
   "Data element cannot be null");
 foreach (byte b in data)
 {
  if (Char.IsControl((char)b) == false)
   _characters++;
 }
}
Running the test again still produces an exception:
FileDataTest.TestFileCount.TestCountCharsWithNullBuffer :
 System.ArgumentNullException : Data element cannot be null
However, this is now expected, and we must amend the test case to succeed if this exception is thrown. The way we do this in NUnit is to add another attribute to the test case: the ExpectedException attribute. The test case now looks like this:
[Test]
[ExpectedException (typeof (ArgumentNullException))]
public void TestCountCharsWithNullBuffer()
{
 byte[] byteData = null;
 fc.CountChars(byteData);
}
Notice that the ExpectedException attribute takes a type as its parameter; this is the type of the exception that we expect to be thrown. If the exception is thrown the test will succeed, and if the exception is not thrown then the test will fail, which is exactly what we want. There are some other attributes we should mention, starting with the Ignore attribute. You add this to any test case that should not be run. It takes a message as a parameter, which is the reason for not running the test case, something like this:
[Test]
[Ignore ("Code not yet written")]
public void testSomeTest()
{
  ...
}
You can use this to include tests that you know need to be written but have not got around to yet; it will act as a reminder that the test has to be written and run at some point (in the NUnit GUI these tests show up in yellow). There is also the Explicit attribute. A test case with this attribute will not be run automatically; instead you have to explicitly choose the test case in the runner.
Testing private methods
I now want to address one of the issues we skipped over above, how to test private methods. Originally I said to mark the method as internal, however this breaks one of the cardinal rules of object-oriented programming. We should keep our scopes as narrow as possible, marking a method as internal when it should be private smells wrong. However if you mark the CountChars method private the code simply fails to compile. To overcome this limitation we have to use another feature of .NET, reflection. Reflection allows us to reach into any .NET class and examine the information about the class, including the data members, properties and methods the class has available. Reflection also allows us to set the values of the properties and data members, and to execute methods, including private methods (assuming we have the correct security permissions).
Let's take the simplest test first, TestCountCharsNoNewLines(); it changes to the following:
[Test]
public void TestCountCharsNoNewLines()
{
 //                   1234567890123456789012345678901
 string stringData = "There are 31 chars in this line";
 byte[] byteData = Encoding.ASCII.GetBytes(stringData);
 Type t = typeof(FileCount);
 MethodInfo mi = t.GetMethod("CountChars",
  BindingFlags.NonPublic | BindingFlags.Instance);
 mi.Invoke(fc, new object[]{byteData});
 Assert.AreEqual(stringData.Length, fc.Characters);
}
In this code we replace the call to fc.CountChars() with a set of Reflection APIs. To call the CountChars method using reflection we have to do several things:
  • We have to know the type of object we want to call the method on
  • We have to have an instance of that type
  • We have to have a reference to the method to call
  • We have to pass any data to the method
So in the above code the first thing we do is use the typeof operator to get a reference to the Type object for FileCount. We use the Type's GetMethod member to get a reference to a MethodInfo instance that represents the CountChars method. Notice that to the GetMethod call we pass BindingFlags.NonPublic and BindingFlags.Instance; no prizes for guessing that these say we want a reference to a non-public, non-static member of FileCount. Once we have this reference we can then call the method. This is done through the Invoke method of MethodInfo; this takes two arguments, the instance on which to call the method and the parameters to that method. Remember that the instance (fc in the above code) is created in the SetUp method. The parameters (in this case the byte array) have to be passed as an array of objects. The CLR then manages the calling of the method with the correct stack in place. Phew!
We can now convert the other two test cases, which look like this:
[Test]
public void TestCountCharsWithNewLines()
{
 //                   1234567890123456789012345678901
 string stringData = "There are 31 chars in this line\r\n";
 byte[] byteData = Encoding.ASCII.GetBytes(stringData);
 Type t = typeof(FileCount);
 MethodInfo mi = t.GetMethod("CountChars",
  BindingFlags.NonPublic | BindingFlags.Instance);
 mi.Invoke(fc, new object[]{byteData});
 Assert.AreEqual(stringData.Length - 2, fc.Characters);
}

[Test]
[ExpectedException (typeof (ArgumentNullException))]
public void TestCountCharsWithNullBuffer()
{
 byte[] byteData = null;
 Type t = typeof(FileCount);
 MethodInfo mi = t.GetMethod("CountChars",
  BindingFlags.NonPublic | BindingFlags.Instance);
 mi.Invoke(fc, new object[]{byteData});
}
Recompile and run the code. The bar goes red – oops! The failing test is TestCountCharsWithNullBuffer and the error is:
FileDataTest.TestFileCount.TestCountCharsWithNullBuffer :
 Expected: ArgumentNullException but was TargetInvocationException
What happens is that the exception is being thrown by the method under test, but the Invoke method is wrapping the application exception in a TargetInvocationException, which is not what we want. We need to unwrap the exception, which is easy to do. Amend the code to look like the following:
[Test]
[ExpectedException (typeof (ArgumentNullException))]
public void TestCountCharsWithNullBuffer()
{
 byte[] byteData = null;
 Type t = typeof(FileCount);
 MethodInfo mi = t.GetMethod("CountChars",
  BindingFlags.NonPublic | BindingFlags.Instance);
 try
 {
  mi.Invoke(fc, new object[]{byteData});
 }
 catch(TargetInvocationException tie)
 {
  throw tie.InnerException;
 }
}
We wrap the call to Invoke in a try..catch block and re-throw the TargetInvocationException's InnerException. Re-run the code and the bar turns green: Nirvana again.
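Since all three tests now repeat the same reflection boilerplate, it can be worth pulling it into a private helper on the fixture. This is my own refactoring suggestion, not part of the original article:

private void InvokeCountChars(byte[] data)
{
    MethodInfo mi = typeof(FileCount).GetMethod("CountChars",
        BindingFlags.NonPublic | BindingFlags.Instance);
    try
    {
        mi.Invoke(fc, new object[] { data });
    }
    catch (TargetInvocationException tie)
    {
        // Unwrap so [ExpectedException] sees the real exception.
        throw tie.InnerException;
    }
}

Each test then reduces to a call to InvokeCountChars(byteData) followed by its assertion.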



Visual Studio Unit Tests:

Description : (from Wikipedia:)
The Visual Studio Unit Testing Framework describes Microsoft's suite of unit testing tools as integrated into some versions of Visual Studio 2005 and later. The unit testing framework is defined in Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll. Unit tests created with the unit testing framework can be executed in Visual Studio or, using MSTest.exe, from a command line.
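For example, assuming a compiled test assembly named MyTests.dll, a command-line run would look like this (/testcontainer is the switch that points MSTest.exe at a test assembly):

MSTest.exe /testcontainer:MyTests.dll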

Elements

Test class

Test classes are declared as such by decorating a class with the TestClass attribute. The attribute is used to identify classes that contain test methods. Best practices state that test classes should contain only unit test code.

Test method

Test methods are declared as such by decorating a unit test method with the TestMethod attribute. The attribute is used to identify methods that contain unit test code. Best practices state that unit test methods should contain only unit test code.

Assertions

An assertion is a piece of code that is run to test a condition or behavior against an expected result. Assertions in Visual Studio unit testing are executed by calling methods in the Assert class.

Initialization and cleanup methods

Initialization and cleanup methods are used to prepare unit tests before running and cleaning up after unit tests have been executed. Initialization methods are declared as such by decorating an initialization method with the TestInitialize attribute, while cleanup methods are declared as such by decorating a cleanup method with the TestCleanup attribute.

Sample Test

Below is a very basic sample unit test:
using Microsoft.VisualStudio.TestTools.UnitTesting;
 
[TestClass]
public class TestClass
{
    [TestMethod]
    public void MyTest()
    {
        Assert.IsTrue(true);
    }
} 
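As a slightly fuller sketch (hypothetical names, my own illustration), here is how the initialization and cleanup attributes described above combine with a test method:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class CalculatorTests
{
    private List<int> numbers;

    [TestInitialize]
    public void Setup()        // runs before each test method
    {
        numbers = new List<int> { 1, 2 };
    }

    [TestCleanup]
    public void Cleanup()      // runs after each test method
    {
        numbers = null;
    }

    [TestMethod]
    public void SumIsThree()
    {
        Assert.AreEqual(3, numbers[0] + numbers[1]);
    }
}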
 
 
 

More Tutorials and walkthroughs:

1. A Unit Testing Walkthrough with Visual Studio Team Test : (http://msdn.microsoft.com/en-us/library/ms379625(v=vs.80).aspx)

2.  Visual Studio Unit testing intro :  (http://www.jeff.wilcox.name/2008/08/utbasics/)
 
3. Unit Testing 401 : (http://www.learnvisualstudio.net/series/unit_testing_401/) 

4. Unit testing with Microsoft Visual Studio 2012 : (http://www.agile-code.com/blog/unit-testing-with-microsoft-visual-studio-2012/)
 



xUnit.net:

xUnit.net is a unit testing tool for the .NET Framework. Written by the original inventor of NUnit, xUnit.net is the latest technology for unit testing C#, F#, VB.NET and other .NET languages. It works with ReSharper, CodeRush, and TestDriven.NET. xUnit.net is currently the highest rated .NET unit testing framework.

How do I use xUnit.net?

This page contains basic instructions on how to use xUnit.net. If you are an existing user of NUnit 2.x or MSTest (the Visual Studio unit testing framework), you should see our comparisons with existing frameworks page.

Writing and Running Your First Test

  • Create a Class Library project to hold your tests (we will assume it is called "MyTestLibrary").
  • Add a reference to the xunit.dll assembly.
  • Add a class to hold your first test class (here we call it "MyTests"). Here is an example test:
using Xunit;

public class MyTests
{
    [Fact]
    public void MyTest()
    {
        Assert.Equal(4, 2 + 2);
    }
}
  • Compile your project, and ensure it compiles correctly.
  • From the command line, run the following command: xunit.console MyTestLibrary.dll (Note: if xunit.console.exe is not in your path, you may need to provide a full path name to it in the command line above). You should see output like this:
C:\MyTests\bin\Debug> xunit.console MyTestLibrary.dll
xUnit.net console test runner (64-bit .NET 2.0.50727.0)
Copyright (C) 2007-11 Microsoft Corporation.

xunit.dll:     Version 1.9.1.0
Test assembly: C:\MyTests\bin\Debug\MyTestLibrary.dll

1 total, 0 failed, 0 skipped, took 0.302 seconds
  • Success!
The Assert class is provided by xUnit.net and contains various methods that can be used to ensure that your test data is valid.

When you run the console runner and pass your library DLL name, the runner loads the DLL and looks for all the methods decorated with the [Fact] attribute and runs them as unit tests. If you want to add more tests, simply add more methods to your test class, or even start new test classes!

When a Test Fails

If a test fails, the xUnit.net console runner will tell you which test failed and where the failure occurred. The failure might be because of a bad assertion:

[Fact]
public void BadMath()
{
    Assert.Equal(5, 2 + 2);
}
Which shows output like this:

MyTests.BadMath [FAIL]
   Assert.Equal() Failure
   Expected: 5
   Actual:   4
   Stack Trace:
      C:\MyTests\MyTests.cs(8,0): at MyTests.BadMath()

The message clearly shows what happened ("Assert.Equal() Failure"), the expected and actual values, and the stack trace of where the failure occurred.

Your test will also fail if an unexpected exception occurs, such as:

[Fact]
public void BadMethod()
{
    double result = DivideNumbers(5, 0);

    Assert.Equal(double.PositiveInfinity, result);
}

public int DivideNumbers(int theTop, int theBottom)
{
    return theTop / theBottom;
}
When run, you should see output like:

MyTests.BadMethod [FAIL]
   System.DivideByZeroException : Attempted to divide by zero.
   Stack Trace:
      C:\MyTests\MyTests.cs(15,0): at MyTests.DivideNumbers(Int32 theTop, Int32 theBottom)
      C:\MyTests\MyTests.cs(8,0): at MyTests.BadMethod()

1 total, 1 failed, 0 skipped, took 0.274 seconds

Obviously, we must've thought that DivideNumbers used doubles instead of ints! :)

What if I Expected an Exception?

In the example above, what if I wanted to write a test to show I was expecting an exception to be thrown? You can use the Assert.Throws method:

[Fact]
public void DivideByZeroThrowsException()
{
    Assert.Throws<DivideByZeroException>(
        delegate
        {
            DivideNumbers(5, 0);
        });
}

public int DivideNumbers(int theTop, int theBottom)
{
    return theTop / theBottom;
}
When this test runs, it passes. Note that Assert.Throws requires you to specify the exact exception you're expecting. If the code throws any other exception, even one that's derived from the one you're expecting, it's still a failure. Additionally, if you want to inspect the values of the exception object, Assert.Throws returns the exception object as a return value for you to do further assertions on.
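For example (a hypothetical test of my own), capturing the returned exception lets you assert on its message:

[Fact]
public void DivideByZeroExposesMessage()
{
    var ex = Assert.Throws<DivideByZeroException>(
        delegate
        {
            DivideNumbers(5, 0);
        });
    Assert.Contains("divide by zero", ex.Message);
}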

Skipping a Test

Sometimes you will need to temporarily skip a test. The [Fact] attribute has a Skip parameter which can be used to skip the test and show the reason it's being skipped.

[Fact(Skip="Can't figure out where this is going wrong...")]
public void BadMath()
{
    Assert.Equal(5, 2 + 2);
}
When you run this test with the console runner, you should see output like:

MyTests.BadMath [SKIP]
   Can't figure out where this is going wrong...

1 total, 0 failed, 1 skipped, took 0.000 seconds

Ensuring a Test Does Not Run Too Long

The [Fact] attribute contains a parameter named Timeout, which can be used to specify that a test must finish completely within the given time (in milliseconds).

[Fact(Timeout=50)]
public void TestThatRunsTooLong()
{
    System.Threading.Thread.Sleep(250);
}
When you run this test, you should see output similar to this:

MyTests.TestThatRunsTooLong [FAIL]
   Test execution time exceeded: 50ms

1 total, 1 failed, 0 skipped, took 0.050 seconds




TestDriven.Net:


TestDriven.NET is a zero-friction unit testing add-in for Microsoft Visual Studio .NET. The current release of TestDriven.NET supports multiple unit testing frameworks, including NUnit, MbUnit and MS Team System, and is fully compatible with all versions of the .NET Framework.
TestDriven.NET allows a developer to run (or debug!) their tests from within Visual Studio with a single-click.

source: http://www.codeproject.com/Articles/16810/Unit-Testing-with-TestDriven-NET

Test Fixtures

Create a new project and copy the code below into a new class file.
Study the following code for a moment. The code implements a Test Fixture, which is a normal class decorated with the special attribute [TestFixture]. Test Fixtures contain Test Methods. Test Methods are decorated with the [Test] attribute. Other decorations, such as [TestFixtureSetUp] and [TearDown], are used to decorate methods that have special meanings that will be explained later.
SampleFixture.cs
using System;
using NUnit.Framework;

namespace UnitTest
{
    [TestFixture]
    public class SampleFixture
    {
        // Run once before any methods
        [TestFixtureSetUp]
        public void InitFixture()
        {
        }

        // Run once after all test methods
        [TestFixtureTearDown]
        public void TearDownFixture()
        {
        }

        // Run before each test method
        [SetUp]
        public void Init()
        {
        }

        // Run after each test method
        [TearDown]
        public void Teardown()
        {
        }

        // Example test method
        [Test]
        public void Add()
        {
            Assert.AreEqual(6, 5, "Expected Failure.");
        }

    }
}

Running a Test Fixture

You can right-click on any test fixture file and run it directly from Visual Studio .NET. This is the beauty of TestDriven.NET.

Notice in your Error or Output tabs that a failure message appears.

Double-clicking on the failure will take you to the precise line that failed. Correct this line so it will pass, then re-test the Fixture.

Running a Test Method

You may also right-click anywhere inside a method and run just that one method.


Setup/Teardown Methods

If you have setup code that should run once before any method or once after all methods, use the [TestFixtureSetUp] and [TestFixtureTearDown] attributed methods (InitFixture and TearDownFixture in the sample fixture above).

If you have setup code that should run once before each method or once after each method in your fixture, use the [SetUp] and [TearDown] attributed methods (Init and Teardown above).

Tips on Writing Good Unit Tests

A proper unit test has these features:
  • Automated
  • No human input should be required for the test to run and pass. Often this means making use of configuration files that loop through various sets of input values to test everything that you would normally test by running your program over and over.
  • Unordered
  • Unit tests may be run in any order and often are. TestDriven.NET does not guarantee the order in which your fixtures or methods will execute, nor can you be sure that other programmers will know to run your tests in a certain order. If you have many methods sharing common setup or teardown code, use the setup/teardown methods shown above. Otherwise, everything should be contained in the method itself.
  • Self-sufficient
  • Unit tests should perform their own setup/teardown, and optionally may rely upon the setup/teardown methods described above. In no circumstances should a unit test require external setup, such as priming a database with specific values. If setup like that is required, the test method or fixture should do it.
  • Implementation-agnostic
  • Unit tests should validate and enforce business rules, not specific implementations. There is a fine line between the end of a requirement and the beginning of an implementation, yet it is obvious when you are squarely in one territory or the other. Business requirements have a unique smell: there is talk of customers, orders, and workflows. Implementation, on the other hand, smells very different: DataTables, Factories, and foreach() loops. If you find yourself writing unit tests that validate the structure of a Dictionary or a List object, there is a good chance you are testing implementation.
    Unit tests are designed to enforce requirements. Therefore, implementation tests enforce implementation requirements, which is generally a Bad Idea. Implementation is the part you don't care to keep forever. Depending on your skill level, implementations may change and evolve over time to become more efficient, more stable, more secure, etc. The last thing you need are unit tests yelling at you because you found a better way to implement a business solution.
    This advice runs counter to what you may read in other unit-testing literature; most authors recommend testing all public methods of all classes. I find that while this is consistent with the goals of testing all code, it often forces tests that do more to enforce implementation than business requirements.
    Business requirements often follow a sequence or pattern, and my view is that the pattern is the real thing to be tested. Writing unit tests for every CustomerHelper class and OrderEntryReferralFactory class often indicates that classes and methods could be organized to better follow the business requirements, or at least wrapped in classes that reflect the requirements.




Running the tests: 

Running and Debugging tests

ReSharper automatically detects unit tests of NUnit and MSTest frameworks in your .NET projects; for JavaScript, QUnit and Jasmine frameworks are supported. Other unit testing frameworks such as xUnit.net and MSpec are supported via ReSharper plug-ins.
Next to declarations of test classes and single tests, ReSharper adds special icons on the left gutter of the editor window. Click these icons to run or debug tests.
Tests can also be run from the context menu. In addition, an arbitrary set of unit tests can be run or debugged from the Visual Studio's Solution Explorer. Just right-click the project or solution and select Run unit tests or Debug unit tests.

Unit Test Explorer


ReSharper presents Unit Test Explorer — a structured list of unit tests for reviewing the structure of tests in your whole solution. The tree is available via the ReSharper | Windows menu and is quickly populated after you build your project. Using Unit Test Explorer, you can run any combination of tests in one or more unit test sessions.

Unit Test Sessions



ReSharper runs unit tests in the Unit Test Sessions window. It is designed to help you run any number of unit test sessions, independently of each other, as well as simultaneously. Sessions can be composed of any combination of tests. In the debugging mode, only one session can be run at a time.
The unit test tree shows the structure of tests belonging to a session, which you can filter to show only passed, failed or ignored unit tests. You can navigate to the code of any unit test by double-clicking it.
The progress bar and the status bar display the current progress. You can stop, run or re-build and re-run unit tests at any time.
The preview pane lets you analyze test results and navigate from a failed test's output to the code lines that originated the exception, all with a single click.

Profiling Unit Tests with dotTrace Performance


You can also quickly profile the performance of unit tests from Visual Studio via JetBrains dotTrace Performance, a powerful .NET profiling tool.
To profile tests, you will need to install dotTrace Performance. You will then be able to start profiling directly from the editor using the sidebar marks that ReSharper adds for test classes and individual tests.

Analyzing Code Coverage with DotCover


Another JetBrains tool that helps working with unit tests can be integrated with Visual Studio and ReSharper. With JetBrains dotCover, you can easily discover the degree to which the code of your solution is covered with unit tests.
When you install dotCover, you will be able to analyze and visualize code coverage of unit tests from the selected scope and thus quickly spot uncovered code areas. This data can be very helpful for prioritizing application development and testing activities.