
# Why use an IOC? (hint: testing)


[[_TOC_]]

## Introduction

Testing is any form of validation that verifies code. That includes not only structured validation using checklists, test plans, etc. but also informal testing, as when developers click their way through a UI or emit values in debugging output to a console.

_Automated testing_ refers to all regression-style tests that execute both locally and in CI. This includes unit, integration, and end-to-end tests.

Testing is primarily a mindset.

You should think of writing tests not as something you _have_ to do, but rather as something you _want_ to do.

- How else do you prove that what you wrote works?
- What does _“it works”_ mean?
- Which _use cases_ are covered?
- How do you answer these questions without tests?
- What do we mean by _writing_ tests?

## You’re already testing!

You’re almost certainly already testing.

You might be clicking through the UI or emitting statements in a command-line application, but you’re verifying your code _somehow_. I mean … you are, right? RIGHT?

I’m kidding. Of course you’re not just writing code, building it, and committing it. You’re validating it somehow.

That’s testing.

### A list of validations

If you’re really good, you might even keep a list of these validations. Once you have a list, then,

1. You don’t have to worry about forgetting to do them in the future
1. Even someone with no knowledge of the system can perform validation

This is fine, but it’s still a manual process. A manual process carries with it the following drawbacks:

1. It gets quite time-consuming, especially as the list of validations grows
1. You’re highly unlikely to perform the validations often enough
   - It’s much easier to fix a mistake if you learn about it relatively soon after you made it
1. You’re also unlikely to add _all_ of the validations you need
   - Generally, you won’t validate smaller “facts” and will focus on high-level stuff
1. A manual validation process can’t be run as part of CI or CD

### Automating the list

Automated testing means that you _codify_ those validations.

> 😒 Great! I have tests! How the heck do I _codify_ them?

Don’t panic. Almost any code can be tested. In fact, if you can’t get at it with a test, then you might have found an architectural problem.

See? Automating tests will even help you write better code!

> 🤨 How do I get started?

Just start somewhere. It doesn’t matter where. Don’t worry about coverage. Just get the feeling for writing a proof about a facet of your code. Any bit of logic can—and should—be tested.

What if you still don’t know where to begin? Ask someone for help! Don’t be shy. It’s in everyone’s best interest for a project to have good tests. You want everyone’s code to have tests so you know _right away_ when you’ve broken something in a completely unrelated area. This is a good thing!

## Goals

> 🤸‍♀️ Developers should be excited to use tests to prove that their code works.

### Tests should be quick and easy (maybe even fun) to write

A project should provide support for mocking devices and external APIs, or for using test-specific datasets.

### Tests should be reasonably fast

A reasonably fast test suite will tend to be run more often. We would like a developer to notice a broken test right after the change that broke it, preferably even before pushing it.

### Avoid debugging tests in CI

Tests a developer runs locally should almost always work in CI. Failing tests in CI should also fail locally.

## Guidelines

> 🤨 Don’t be pedantic.

For example,

- [Stop requiring only one assertion per unit test: Multiple assertions are fine](https://stackoverflow.blog/2022/11/03/multiple-assertions-per-test-are-fine/)
- Don’t forbid mocking in integration tests and don’t force mocking in unit tests.
  - In fact, stop worrying about whether it’s a unit or an integration test and just _write useful tests_ that _prove useful things_ about your code.
- Don’t get obsessed with automating _everything_.
  - Get the low-hanging fruit first, and leave the rest to manual testing.
  - See where you stand.
  - If you haven’t automated enough, iterate until done. 🔄

### Tests should be useful

We never want anyone in a team to get the impression that we’re writing tests just to write tests. We write tests because they help us write better code and because it feels good to be able to prove that something that was working continues to work. You should feel more efficient and productive and feel like you’re producing higher-quality code.

- Tests should confirm use cases.
- Tests should prove something about your code that you think is worth proving.
- Tests should confirm behavior: either how the code _currently_ works or how it _should_ work.
- Tests should help you write better code from the get-go.
- Every bug that you fix is de facto a use case that needs a test.

### Code Coverage & Reviews

How do you know when there are “enough” automated tests?

Don’t get distracted by trying to achieve a specific coverage percentage. The most important thing is that the major use cases are covered.

If software is stable and there is “only” 40% test coverage, then maybe there is a lot of code that rarely or never gets used? In that case, you might want to think about removing code that you don’t need rather than wasting time writing tests for code that never runs.

New code, though, should always have automated tests. A **code reviewer** should verify that new functionality is being tested.

## Types of tests

| Type | Definition | When to use them |
| --- | --- | --- |
| Unit | Cover a single unit, mocking away other dependencies where needed | Useful for verifying simple logic like calculated properties or verifying the results of service methods with given inputs |
| Integration | Cover multiple units, possibly mocking unwanted dependencies | Useful for verifying behavior of units in composition, as they will be used in the end product. The goal is to cover as much as possible without resorting to more costly end-to-end tests |
| End-to-End | Also called _UI Tests_, these tests verify the entire stack for actual customer use cases | Very useful, but generally require more maintenance as they tend to be more fragile. Essential for verifying UI behavior not reflected in a programmatic model. Can work with snapshots (e.g. error label is in red) |

## Approach

The article [Write tests. Not too many. Mostly integration.](https://kentcdodds.com/blog/write-tests) describes a pragmatic approach quite well. Instead of the classic “testing pyramid”, it suggests a “testing trophy”.

![image.png](/.attachments/image-6b9cafdf-0bac-4155-bb8f-363a92822bc3.png =300x)

This style of development has the following aims:

1. Verify as much as possible _statically_, with linting and analyzers
1. Make _integration tests_ cheaper because they prove more about your system than _unit tests_
1. Prove as much as possible outside of _end-to-end tests_ because they’re expensive and brittle

## Analysis

> Remember that everything you use has to work both locally and in CI.

### Static-checking

A project should include analyzers and techniques so that the compiler helps make many tests unnecessary. For example, if you know that a parameter or result can never be `null`, then you can avoid a whole slew of tests.

Developers should only spend time writing tests that verify semantic aspects that can’t be proven by the compiler.
#### Null-reference analysis in .NET

The .NET world provides many, many analyzers and tools to verify code quality. One of the most important things a project can do is to improve null-checking. The best way to do this is to upgrade to C# 8 or higher and enable [null-reference analysis](https://learn.microsoft.com/en-us/dotnet/csharp/nullable-references). The [default language for .NET Framework is going to stay C# 7.3](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/configure-language-version), but
you can [enable null-reference analysis for .NET Framework](https://www.infoq.com/articles/CSharp-8-Framework/) quite easily.
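
As a minimal sketch (the class below is hypothetical), enabling nullable reference types lets the compiler, rather than a test, guarantee that non-nullable parameters are never `null`:

```csharp
#nullable enable

public class ReportFormatter
{
  // The compiler enforces that callers never pass null here, so there is no
  // need for a null-check or for a test that passes null.
  public string FormatTitle(string title) => title.ToUpperInvariant();

  // A nullable parameter is declared explicitly; the compiler then forces
  // this method (and its callers) to handle the null case.
  public string FormatSubtitle(string? subtitle) => subtitle ?? "(untitled)";
}
```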

Another option is to use the [JetBrains Annotations NuGet package](https://www.nuget.org/packages/JetBrains.Annotations/), which provides attributes to indicate whether parameters or results are nullable.

The preferred way, though, is to use the by-now standard nullability-checking available in .NET.

Doing neither is not a good option, as it will be very difficult to avoid null-reference exceptions.

### Unit-testing

Unit tests are very useful for validating _requirements_ and _invariants_ about your code.

These are the easiest tests to write and will generally be the first ones that you will write.

A requirement or an invariant may be specified in the story itself, but it can be anything that you know about the code that’s important. It’s up to the developer and the reviewer(s) to determine which tests are necessary. It gets easier with experience—and it doesn’t take long to get enough experience so that it’s no longer so intimidating.

#### Unit-testing example

Just as a quick example in .NET, consider the following code,

```csharp
public bool IsDiagnosticModeRunning
{
  get => _isDiagnosticModeRunning;
  set
  {
    _isDiagnosticModeRunning = value;
    _statusManager.InstrumentState = value ? InstrumentState.DiagnosticMode : InstrumentState.Ready;
  }
}
```

Here we see a relatively simple property with a getter and a setter. However, we also see an invariant in the implementation: `_statusManager.InstrumentState` is kept in sync with the property.

Using many of the [techniques described below](#tools-and-techniques), we could write the following test:

```csharp
[DataRow(true, InstrumentState.DiagnosticMode)]
[DataRow(false, InstrumentState.Ready)]
[TestMethod]
public void TestIsDiagnosticModeRunning(bool running, InstrumentState expectedInstrumentState)
{
  var locator = CreateLocator();
  var instrumentControlService = locator.GetInstance<IInstrumentControlService>();
  var statusManager = locator.GetInstance<IStatusManager>();

  Assert.AreNotEqual(expectedInstrumentState, statusManager.InstrumentState);

  instrumentControlService.IsDiagnosticModeRunning = running;

  Assert.AreEqual(expectedInstrumentState, statusManager.InstrumentState);
}
```

Here, we’re using MSTest to create a parameterized test that,

- creates the IOC (see the sketch below)
- gets the two relevant services from it
- verifies that the state is not already the expected state (otherwise the test would succeed even if the tested code did nothing)
- sets the property to a given value
- verifies that the state is correct for that value
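
The `CreateLocator()` helper is project-specific and not shown here. Purely as a sketch, assuming Microsoft.Extensions.DependencyInjection (where `GetRequiredService<T>()` plays the role of `GetInstance<T>()`) and hypothetical implementation types, it might look something like this:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

private static IServiceProvider CreateLocator()
{
  var services = new ServiceCollection();

  // Register the real units under test (type names assumed).
  services.AddSingleton<IStatusManager, StatusManager>();
  services.AddSingleton<IInstrumentControlService, InstrumentControlService>();

  // External dependencies (hardware, REST endpoints, ...) would be
  // registered as fakes here instead of their production implementations.

  return services.BuildServiceProvider();
}
```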

We now have code that validates two _facts_ about the system. Should something change where these facts are no longer true, the tests will fail, giving the developer a chance to analyze the situation.

- Was the change inadvertent or deliberate?
- Are the facts still correct? Does the test need to be updated?

If you’re addressing a bug-fix, though, you might be able to _prove_ that you’ve fixed the bug with a unit test, but it’s also likely that you’ll have to write an integration test instead.

### Integration-testing

Unit tests have their place, but the classic testing pyramid overemphasizes them. The pyramid comes from a time when writing integration tests was much more difficult than it (theoretically) is today.

The “theoretically” above means that the ability to write integration tests as efficiently as unit tests is contingent on a project offering proper tools and support.

One common complaint about integration tests vis-à-vis unit tests is that they run more slowly. Another is that they take longer to develop. Ideally, a project provides support to counteract both of these tendencies.

To this end, then, a project should offer base and support classes that make common integration tests easy to set up and quick to execute:

- Interacting with a database
- Setting up a known database schema
- Getting to a clean dataset
- Mocking the database
- Mocking other external dependencies in a project (e.g. loading configuration from an endpoint, sending emails, sending modifications to endpoints)

There are many different ways to solve this problem, each with tradeoffs. For example, a project can load dependencies in Docker containers, either created and started manually (see [Testing your ASP.NET Core application − using a real database](https://josef.codes/testing-your-asp-net-core-application-using-a-real-database/)) or even dynamically with a tool like the [Testcontainers NuGet package](https://github.com/testcontainers/testcontainers-dotnet).
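
As a sketch of the dynamic approach (assuming the `Testcontainers.PostgreSql` module and a Docker daemon available both locally and in CI), a test setup might spin up a throwaway database like this:

```csharp
using Testcontainers.PostgreSql;

// Start a throwaway PostgreSQL instance in Docker for the test run.
var database = new PostgreSqlBuilder().Build();
await database.StartAsync();

// Point the code under test (e.g. an EF Core DbContext) at the container
// instead of a shared, long-lived database.
var connectionString = database.GetConnectionString();

// ... run integration tests against `connectionString` ...

await database.DisposeAsync();
```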

### Comparing Unit and Integration tests

A drawback to unit tests is that, while they can test an individual component well, it’s really the big picture that we want to test. We want to test scenarios that correspond to actual use cases rather than covering theoretical call stacks. It’s not that the latter _isn’t_ important, but that it’s not _as_ important.

Given limited time and resources, we would prefer integration tests that also cover many of the code paths we would have covered with unit tests, rather than having unit tests but few or no integration tests.

This, however, leads directly to…

The advantage of a unit test over an integration test is that when it fails, it’s obvious which code failed. An integration test, by its very nature, involves multiple components. When it fails, it might not be obvious which sub-component caused the error.

If you find that you have integration tests failing and it takes a while to figure out what went wrong, then that’s a sign that you should bolster your test suite with more unit tests.

Once an integration test fails _and_ one or more unit tests fail, then you have the best of both worlds: you’ve been made aware that you’ve broken a use case (integration test), but you also know which precise behavior is no longer working as before (unit test).

## Tools and Techniques

### Tests are Code

Test code is just as important as product code. Use all of the same techniques to improve code quality in test code as you would in product code. Clean coding, good variable names, avoid copy/paste coding—all of it applies just as much to tests.

There are two main differences:

- You don’t need to document tests
- You don’t have to write tests for tests. :-)

### Writing testable code

This is a big, big topic, of course. There are a few guidelines that make it easier to write tests—or to avoid having to write tests at all.

As noted above, code that can be validated by the compiler (static analysis) doesn’t need tests. E.g. you don’t have to write a test for how your code behaves when passed a `null` parameter if you just _forbid it_. Likewise, you don’t have to re-verify that types work as they should in statically typed languages. We can trust the compiler.

Here are a handful of tips.

- Prefer composition to inheritance
- A functional programming style is very testable
- An IOC Container is very helpful
- Avoid nullable properties, results, and parameters
- Avoid mutable data
- Interfaces are much easier to fake or mock; use them wherever you can (see the sketch after this list)
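
As a small illustration of that last point (all types here are hypothetical): depending on an `IClock` interface instead of calling `DateTime.Now` directly makes time-dependent logic trivial to test.

```csharp
using System;

// The service depends on an interface instead of the system clock.
public interface IClock
{
  DateTime Now { get; }
}

public class GreetingService
{
  private readonly IClock _clock;

  public GreetingService(IClock clock) => _clock = clock;

  public string Greet(string name)
    => _clock.Now.Hour < 12 ? $"Good morning, {name}" : $"Hello, {name}";
}

// A hand-written fake pins the time, so a test can cover both greetings
// deterministically.
public class FixedClock : IClock
{
  public FixedClock(DateTime now) => Now = now;

  public DateTime Now { get; }
}
```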

See the following articles for more ideas.

- [C# Handbook − Chapter 4: Design](https://github.com/mvonballmo/CSharpHandbook/blob/master/4_design.md) (2017)
- [Questions to consider when designing APIs: Part I](https://www.earthli.com/news/view_article.php?id=2996) (2014)
- [Questions to consider when designing APIs: Part II](https://www.earthli.com/news/view_article.php?id=2997) (2014)
- [Why use an IOC? (hint: testing)](https://www.earthli.com/news/view_article.php?id=3487) (2019)

### Parameterized Tests

Investigate your testing library to learn how to write multiple tests without having to write a lot of code. In the MSTest framework, you can use `DataRow` to parameterize a test. In NUnit, `TestCase` does the same thing, and `Values` lets you provide parameter values for a set of tests that form the Cartesian product of all values.
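
For example, a minimal NUnit sketch (the tested logic is trivial and purely illustrative): `TestCase` supplies explicit argument rows, while `Values` generates one test per combination of its arguments.

```csharp
using NUnit.Framework;

[TestFixture]
public class ArithmeticTests
{
  // One test per explicit row of arguments.
  [TestCase(2, 3, 5)]
  [TestCase(-1, 1, 0)]
  public void Add_ReturnsSum(int a, int b, int expected)
  {
    Assert.That(a + b, Is.EqualTo(expected));
  }

  // One test per combination: 2 x 2 = 4 test cases.
  [Test]
  public void Multiplication_IsCommutative([Values(1, 2)] int a, [Values(3, 4)] int b)
  {
    Assert.That(a * b, Is.EqualTo(b * a));
  }
}
```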

### Mocking/Faking

Use mocks or fakes to exclude a subsystem from a test. What would you want to exclude? While you will want to make some tests that include database access or REST API calls, there are a lot of tests where you’re proving a fact that doesn’t depend on these results.

#### Focus on what you’re testing

For example, suppose a component reads its configuration from the database by default. A test of that component may simply want to see how it reacts with a given input to a given method. Where the configuration came from is irrelevant to that particular test. In that case, you could mock away the component that loads the configuration from the database and instead use a fake object that just provides some standard values.

#### Test error conditions

Another possibility is to fake an external service to see how your code reacts when the service returns an error or an ambiguous response. Without a mock, how would you force a purely external endpoint to return a 503 or a 404? You really can’t. With a mock, though, you can replace the service and return a 404 response for a specific test. This is quite a powerful technique.
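
A minimal sketch of this technique, assuming the code under test accepts an `HttpClient`: a hand-rolled `HttpMessageHandler` can force the 503 path without any network access. The `WeatherService` in the usage comment is hypothetical.

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Always answers with 503, letting a test exercise the error-handling path.
public class AlwaysUnavailableHandler : HttpMessageHandler
{
  protected override Task<HttpResponseMessage> SendAsync(
    HttpRequestMessage request, CancellationToken cancellationToken)
    => Task.FromResult(new HttpResponseMessage(HttpStatusCode.ServiceUnavailable));
}

// Usage in a test (sketch):
//
//   var client = new HttpClient(new AlwaysUnavailableHandler());
//   var service = new WeatherService(client);
//   ... call the service and assert on its error handling ...
```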

#### How to fake?

As noted above, it’s much, much easier to use fake objects if you’ve consistently used interfaces. You can create your own implementation of the interface whose standard implementation you want to replace, give it a fake implementation (e.g. returning `false`, an empty string, or `null` from methods and properties), and then use that class as the implementation.
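
For example, a hand-written fake for a hypothetical single-purpose `IEmailSender` interface might look like the following; it also records calls so that a test can assert that an email _would_ have been sent.

```csharp
using System.Collections.Generic;

public interface IEmailSender
{
  bool Send(string recipient, string subject, string body);
}

// Hand-written fake: no emails are sent, but every call is recorded.
public class FakeEmailSender : IEmailSender
{
  public List<(string Recipient, string Subject)> Sent { get; } = new();

  public bool Send(string recipient, string subject, string body)
  {
    Sent.Add((recipient, subject));
    return true;
  }
}
```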

#### Faking/mocking libraries

If you have interfaces that perform a single task (single-responsibility principle), then it doesn’t take too much effort to write the fake object by hand. However, it’s much easier to use a library to create fake objects—and there are other benefits as well, like tracking which methods were called with which parameters. You can assert on this data collected by the fake object.

For .NET, a great library for faking objects is [FakeItEasy](https://fakeiteasy.github.io/).

With a fake object, you can indicate which values to return for a given set of parameters without too much effort. Similarly, you can use the same API to query how often these methods have been called. This allows you to verify, for example, that a call to a REST service _would have been made_. This is a powerful way of proving facts about your code without having to actually interact with external services.

#### An example

The following code configures a fake object for `ITestUnitConfigurationService` that returns default data for all properties, except for `Configuration` and `GetTestUnitParameterValues()`, which are configured to return specific data.

```csharp
private static ITestUnitConfigurationService CreateFakeTestUnitConfigurationService()
{
  var result = A.Fake<ITestUnitConfigurationService>();

  var testUnitParameters = CreateTestUnitParameters();
  var testUnitConfiguration = new TestUnitConfiguration(testUnitParameters);

  A.CallTo(() => result.Configuration).Returns(testUnitConfiguration);

  var testUnitParameterValues = CreateTestUnitParameterValues();

  A.CallTo(() => result.GetTestUnitParameterValues()).Returns(testUnitParameterValues);

  return result;
}
```

In the test, we could get this fake object back out of the IOC (for example) and then verify that certain methods have been called the expected number of times.

```csharp
var testUnitConfigurationService = locator.GetInstance<ITestUnitConfigurationService>();

A.CallTo(() => testUnitConfigurationService.Configuration).MustHaveHappenedOnceExactly();
A.CallTo(() => testUnitConfigurationService.GetTestUnitParameterValues()).MustHaveHappenedOnceExactly();
```

### Snapshot-testing

You can avoid writing a ton of assertions and a ton of tests with snapshot testing.

For example, imagine you have a test that generates a particular view model. You want to verify 30 different parts of this complex model.

You _could_ navigate the data structure, asserting the 30 values individually.

That would be pretty tedious, though, and lead to fragile and hard-to-maintain testing code.

Instead, you could emit that structure as text and save it as a _snapshot_ in the repository. If a future code change leads to a different snapshot, the test fails and the developer who caused the failure has to either approve the new snapshot (if the change is expected or innocuous) or fix the code (if the change was inadvertent and wrong).

The upside is that large swaths of assertions are reduced to a single snapshot assertion. The downside is that the test might break more often for spurious reasons. Generally, you can avoid these spurious failures by being judicious about how you format the snapshot:

- Avoid timestamps or data that changes over time
- Avoid using output methods that are too likely to change over time

See the documentation for the [Snapshooter NuGet package](https://swisslife-oss.github.io/snapshooter/).
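
A minimal sketch with Snapshooter and MSTest (the view-model factory is hypothetical):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Snapshooter.MSTest;

[TestClass]
public class OrderViewModelTests
{
  [TestMethod]
  public void CreateViewModel_MatchesSnapshot()
  {
    // Hypothetical factory that builds the complex view model under test.
    var viewModel = OrderViewModelFactory.Create(orderId: 42);

    // Serializes the object and compares it to the stored snapshot file; the
    // first run creates the snapshot, later runs fail on any difference.
    Snapshot.Match(viewModel);
  }
}
```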