
Nuclear testing done right

This is a post that I had hoped to write for almost a year and that has been burning under my fingernails. It’s about the latest release of my test platform Nuclear.Test, which I uploaded a few days ago. I’m talking about that magic version 2.0, the very first product-grade release. Version 1.0 was made available to the public last summer, but it wasn’t much more than a documented prototype. There were bugs and quirks, and it was a quick and dirty state that I just wanted to get out the door before my son was born. But the first public version of a piece of software is hardly ever any good. It’s there to give the project some traction and to get to know it a little better. I think it’s hard to find any project that didn’t receive a major rewrite between versions 1.0 and 2.0, so I’m quite happy with how the development went. In fact, this has been the third major refactoring since I created the very first proof of concept back in late 2012, which eventually evolved into the product that it is today. But since there is already a bunch of excellent test solutions out there, I should explain why the world needs yet another one.

Today we can choose from MSTest, developed by Microsoft, and NUnit, which is available in its 3rd major iteration so far. Then there is xUnit.net, the successor to NUnit and – according to the authors – the latest and greatest when it comes to unit testing in the world of .NET. Each one of them is stuffed with cool features and is being maintained and extended at regular intervals. But when I first came across unit testing and started to adopt test-driven development, I immediately felt that it was harder than it should be. That was back in 2012, when I was employed in my first job as a software developer with the energy sector of Siemens. I was tasked to work on an engineering software called PVplanet (photovoltaic plant engineering tool box) as one of the two coders at the company, in close collaboration with three other developers from Fraunhofer ITWM. Most of the features that I was responsible for had to do with all sorts of physics calculations and handling of values with given dimensions. Converting units back and forth and normalizing them is what my code did most of the time. Naturally, I had to use a lot of postfixes in order to keep track of all values and their context. This inspired me to create my own unit conversion library, where I used NUnit for automated testing, and life was good. But there was a lot of boilerplate code required for all the tests, and I abandoned NUnit two weeks later. Instead, I decided to find solutions to the problems that I saw in NUnit. The resulting proof of concept is what eventually became Nuclear.Test. This post will shed some light on the design decisions and the routes I took to create the platform the way it is.

Wording and definitions

Some definitions and common naming in the realm of automated testing in .NET have always made me wonder. Why do people refer to test methods as tests when they don’t actually test anything? The most basic test method is just an empty method decorated with an attribute. I fail to see how this can be a test by any definition, yet it will be handled as a successful test by today’s test platforms. This is because a test can only fail when it throws an exception during execution. A test method with an empty body will have a hard time failing as a result, so I think that the very definition of a test is flawed.

[TestMethod] 
public void JustAnotherMethod() {
    // nothing to see here... 
}

[TestMethod]
public void CalculatorAddWorks() {
    MyCalculator calc = new MyCalculator();

    Int32 sum = calc.Add(17, 25);

    Assert.AreEqual(42, sum);
}

[DataTestMethod]
[DataRow(0, 0, 0)]
[DataRow(2, 3, 5)]
[DataRow(2, -3, -1)]
public void CalculatorAddWorks(Int32 a, Int32 b, Int32 expected) {
    MyCalculator calc = new MyCalculator();

    Int32 sum = calc.Add(a, b);

    Assert.AreEqual(expected, sum);
}

The only place where any real testing is done is in assertions, so maybe we’d better count successful and failed assertions instead? Automated unit testing in Visual Studio has always suffered from OAPT (one assert per test) anyway, to the point that the number of assertions should match the number of test methods exactly. But I don’t get why it’s called Assert in the first place, since it doesn’t assure anything. It probes a value and compares it against another value. If the comparison matches expectations, the test will succeed. No assertions to be seen here in my understanding, so I’m just going to call them tests instead of assertions. Obviously, I’m not a native English speaker, so I’m probably missing something here.

[TestMethod]
public void CalculatorAddWorks() {
    MyCalculator calc = new MyCalculator();

    Int32 sum = calc.Add(17, 25);

    Test.That.AreEqual(sum, 42);
}

Redesigning assertions

Apart from the fact that I think they should be called tests, there is more to be done with assertions. At the moment, a failing assertion throws a specific exception that is caught by the test runner. This is the sole reason why we are all stuck in OAPT madness, but I want to have more than just one assertion in a test method. I want to make sure that setting a property to a new value doesn’t throw an exception. I may want to check if that also raises PropertyChanged with the correct value. Maybe there are even more relevant checks that should be done in order to get it right. But I cannot be arsed to write half a dozen test methods when, in reality, just one should be sufficient.

[TestMethod]
public void AssignNameToPerson() {
    Person person = new Person("Joe", "Doe");

    Test.That.RaisesPropertyChangedEvent(() => person.FirstName = "John", person, out EventData<PropertyChangedEventArgs> eventData);
    Test.That.AreEqual(eventData.Sender, person);
    Test.That.AreEqual(eventData.EventArgs.PropertyName, "FirstName");
    Test.That.AreEqual(person.FirstName, "John");
    Test.That.AreEqual(person.LastName, "Doe");
    Test.That.AreEqual(person.Name, "John Doe");
}

To achieve this, an assertion must not throw exceptions under any circumstances. If – for whatever reason – a failing assertion aborts the test execution, then it’s back to square one, back to OAPT. This is in fact the one thing that made me create Nuclear.Test and Nuclear.TestSite in the first place. Both started as a proof of concept demonstrating that it can be done in a good way and without relying on exceptions. And let’s face it, controlling program flow through exceptions is never a good idea. I see why it was done like that in the beginning, when the first unit testing frameworks surfaced. But .NET has changed a lot since then. We have many more sophisticated mechanics available today, but for some reason, we are stuck in the past.
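To illustrate the idea, here is a minimal sketch of an exception-free assertion. All names in it are made up for this post and do not reflect the actual Nuclear.TestSite internals: instead of throwing, the assertion records its outcome and lets the method continue, so a single test method can accumulate any number of results.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of exception-free assertions; the names are
// invented here and are not the actual Nuclear.TestSite implementation.
public static class TestRecorder {

    // Every assertion appends its outcome here instead of throwing.
    public static readonly List<(Boolean Success, String Message)> Results =
        new List<(Boolean, String)>();

    public static void AreEqual<T>(T actual, T expected) {
        Boolean success = EqualityComparer<T>.Default.Equals(actual, expected);
        Results.Add((success, success
            ? $"'{actual}' as expected"
            : $"expected '{expected}' but got '{actual}'"));
        // Execution continues here regardless of the outcome, so any
        // following assertions in the same test method still run.
    }
}
```

A runner built on this would report the recorded successes and failures per test method instead of deriving a single pass/fail from a thrown exception.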

Multi-layered testing

Today’s testing standards dictate that test assemblies are executed on the same runtime that they are targeting. Again, this doesn’t work well with my workflow – and if you think about it, it probably won’t with yours either. I need to know if my code works equally well on both x86 and x64, especially if it references libraries that are not architecture-agnostic. And I have no control over everyone else either, or over how they may use my libraries. Naturally, I want my tests to execute on all runtimes that my code could see usage on. Even more important, I don’t want to have to worry about those things. A test platform should be able to figure out where code can be run and where it can’t. If I’m forced to build my code for a dozen target frameworks to achieve that kind of coverage, then usability is going to need a revamp.

Nuclear.Test uses a three-layer test execution system, where each individual layer tackles just one of the issues above. The benefit is the ability to execute multiple test assemblies on all possible processor architectures and runtimes in one go, without the need for heavy assembly loading management. It also means that the risk of unexpected failure due to differing implementation details of .NET platforms is quite low. Moreover, test assemblies can target .NET Standard as well and will be executed on every runtime that implements the targeted standard version.
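Conceptually – and this is a strongly simplified, hypothetical sketch, not the real orchestration code – the top layer enumerates the (architecture, runtime) combinations a test assembly can run on and plans one worker invocation per combination, so the actual test execution always happens in a matching process:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: plan one worker process per target combination.
// Worker executable names and the planning API are made up for illustration.
public static class Orchestrator {

    public static List<String> PlanWorkers(String testAssembly,
        IEnumerable<(String Architecture, String Runtime)> targets) {

        List<String> commands = new List<String>();

        foreach((String arch, String runtime) in targets) {
            // e.g. "worker-x64-netcoreapp3.1.exe MyTests.dll" – each worker
            // is itself built for exactly that architecture and runtime.
            commands.Add($"worker-{arch}-{runtime}.exe {testAssembly}");
        }
        return commands;
    }
}
```

Spawning a dedicated process per combination is what makes the heavy assembly loading management unnecessary: every worker only ever loads assemblies it can actually execute.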

Generics support

A minor but important feature is the support for generic test methods. And I’m not talking about generic helper methods. I mean proper generic methods with generic type parameters and injectable parameters. Having that flexibility makes a lot of sense, and it isn’t that hard to get right. This is why Nuclear.Test has them, and I’m sure you’ll love them like I do.

[TestMethod]
[TestParameters(typeof(Person), "John", "John Doe")]
[TestParameters(typeof(Teacher), "Jane", "Jane Doe")]
[TestParameters(typeof(Student), "Bill", "Bill Doe")] 
public void AssignNameToPerson<T>(String firstName, String expectedFullName)
    where T : Person, new() {

    T person = new T();
    person.FirstName = "Joe";
    person.LastName = "Doe";

    Test.That.RaisesPropertyChangedEvent(() => person.FirstName = firstName, person, out EventData<PropertyChangedEventArgs> eventData);
    Test.That.AreEqual(eventData.EventArgs.PropertyName, "FirstName");
    Test.That.AreEqual(person.FirstName, firstName);
    Test.That.AreEqual(person.LastName, "Doe");
    Test.That.AreEqual(person.Name, expectedFullName);
}

Drawbacks

Luckily, there isn’t any fallout or nuclear winter to be expected, and no one is going to die from radiation poisoning. But there are real drawbacks that need to be considered. While every other test platform can resolve the exact namespace, class name and method name of a failed assertion, this is not true for Nuclear.Test. Things like method inlining and tail call optimization prevent any meaningful analysis of the stack trace. Nuclear.TestSite instead relies on the attributes CallerMemberName and CallerFilePath to figure out the call site during ordinary execution. In case of an exceptional test abort, the call site is resolved through the test method info that was invoked. This means that a test class name must match the containing file, but it also means that there is no way of retrieving the namespace. So unless searching forever is a hobby, it pays to keep test class names unique within a test assembly.
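The caller info attributes mentioned above are standard .NET, found in System.Runtime.CompilerServices. A minimal example of how a call site can be resolved without touching the stack trace – the Describe helper itself is made up for illustration:

```csharp
using System;
using System.IO;
using System.Runtime.CompilerServices;

// Illustrative helper (not part of Nuclear.TestSite) showing how the
// caller info attributes resolve a call site at compile time.
public static class CallSite {

    public static String Describe(
        [CallerMemberName] String member = null,
        [CallerFilePath] String file = null,
        [CallerLineNumber] Int32 line = 0) {

        // The compiler substitutes these optional parameters with the
        // calling member name, file path and line number, which is immune
        // to method inlining and tail call optimization. By convention the
        // class name matches the file name, but the namespace cannot be
        // recovered this way.
        return $"{Path.GetFileNameWithoutExtension(file)}.{member}:{line}";
    }
}
```

Calling `CallSite.Describe()` from a test method named `AssignNameToPerson` in a file `PersonTests.cs` would yield something like `PersonTests.AssignNameToPerson:42` – without ever inspecting the stack.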

Long story short

Automated testing is probably the best way to fight bugs these days, and all the existing test platforms are formidable weapons to help in the fight. Software keeps becoming more and more complex and so do bugs, but our tools don’t seem to grow along with them. Nuclear.Test isn’t going to revolutionize the way we test our software, but I have big hopes that some of its improvements will catch on. Support for generic test methods and testing on multiple runtimes can have a great impact on usability without expensive changes. Getting around OAPT is going to be harder without breaking test code for many people, but it’s not impossible. The future could be to support both approaches and let developers choose for themselves.
