Amazingly Simplified Automated Tests

David
8 min read · Aug 26, 2021

My hope is that once you read this article you will understand what testing is, the types of tests we currently use, and their benefits.

Testing can feel painful at the beginning, and one of my goals is for you to lose that feeling and start enjoying it.

This article is written for beginners, but also for people who want clarification; for that reason, it is intended to be tool-independent.

Content

  • What Is A Test?
  • Testing ≠ Debugging
  • Early Reasons For Testing
  • The Manual Test Way
  • Advantages of Automatic Tests
  • Anatomy of A Test
  • Types of Tests
  • Test-Driven Development (TDD)
  • Good Practices

What Is A Test?

To know what we are doing, we first need to define what a test is.

Writing tests is writing a requirement list for the things you want to build; for that reason, a test is a specific requirement we must fulfill.
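
For example, the requirement “adding 2 and 3 returns 5” can be written directly as a test. Here is a minimal sketch; since this article is tool-independent, I use Node’s built-in test runner only to have something concrete, and the add function is a hypothetical feature under test:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical feature under test.
    const add = (a: number, b: number) => a + b;

    // Requirement: adding 2 and 3 returns 5.
    test('add(2, 3) returns 5', () => {
      assert.equal(add(2, 3), 5);
    });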

Testing ≠ Debugging

Catch ≠ Kill

When we start testing, a common mistake is to confuse debugging with testing and not know where one ends and the other begins.

“Testing is weaving a spider web of tests to catch bugs, when a bug is detected we get notified and then it’s time to debug it.”

Testing

It can be seen as writing the list of requirements our feature must meet (remember, a test is a requirement), which means we could write our tests before writing a single line of code if we want. It can be done manually or automatically.

Debugging

It is fixing an error. Generally speaking, there is no error if there is no code, so debugging requires getting your hands dirty with the code, and it only makes sense to do it manually.

Early Reasons For Testing

From what we know so far, we can say:

  • We work towards objectives, not assumptions or features that don’t add value, which increases the quality of our code.
  • Testing helps us detect errors early, avoiding spending time on things that will ultimately be a disaster.
  • Acceptance criteria can be clearly defined through our tests, avoiding redundancies and misunderstandings among stakeholders.

The Manual Test Way

Now that we know what testing is and what it is not, it’s time to distinguish between manual and automatic testing and see their differences.

When we modify the code by adding, deleting, or updating a feature, we want to be sure we don’t introduce errors. Let’s take a look at how we handle this with manual tests and at some problems that emerge:

Case 1

Feature D breaks itself

Let’s say we add a new feature, feature D, and it is broken… as we work on it, it’s likely that we catch the error ourselves. No problems here.

Case 2

Feature D breaks Feature E in the same component

Let’s say we add feature D and unintentionally break feature E in the same component. In this case, there is a real chance we don’t notice the error. But since we are focused on that one component, we may still see that something is breaking. A little harder… but still possible to catch the error.

Case 3

Feature D breaks Feature G in another component

Let’s say we add feature D and unintentionally break feature G, which lives in another component; in this case, it is very unlikely that we catch the error. We say everything is fine, push the changes, and then, in production, we discover that feature G is failing.

With manual testing it is almost impossible to check every single component and see if it is working properly. Wouldn’t it be better to write tests for each feature and, once a change has been made, run all or part of them automatically and get notified if something is wrong?

Advantages of Automatic Tests

Why should we use automatic over manual tests? A few good reasons:

Saves time

Yup 😜, you read that right: we write a test once and run it automatically each time a change is made. Manual testing implies repeating the test each time you touch a feature, over and over again; it’s time-consuming and we might forget to evaluate something on an iteration.

Manual testing is error-prone

We might not always be aware when something fails, as we saw in the previous section.

Serves as documentation

The tests we leave behind serve as documentation (remember, each one is a requirement). If other developers want to understand your code, they will probably prefer to read your tests to understand its specifications.

Prevents dirty features

We can isolate tests in their own files without worrying about ‘cleaning up’ afterwards, avoiding unnecessary checks… like console.log calls scattered inside the code.

Makes debugging easier

If we use atomic tests (one test for one requirement)… when a specific test fails, we know exactly what went wrong.

Anatomy of A Test

No matter what script, library, or framework we use, or even the type of test, it always involves the same steps:

“Set a domain, run it, and check if the range meets our expectations.”
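
In code, this pattern is often called Arrange, Act, Assert. A minimal sketch, again using Node’s built-in test runner and a hypothetical applyDiscount feature:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical feature under test.
    const applyDiscount = (price: number, percent: number) =>
      price - (price * percent) / 100;

    test('applies a 10% discount to a price of 200', () => {
      // Domain: the inputs we control.
      const price = 200;
      const percent = 10;

      // Run it.
      const result = applyDiscount(price, percent);

      // Range: check that the output meets our expectation.
      assert.equal(result, 180);
    });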

Why Mocks

When we run our tests, we use objects that mimic the behavior of real objects in a controlled way (mocks), so we can establish a cause-effect relationship between our inputs and outputs, isolating our features from external forces that might produce unpredictable behaviors.

For example, if our feature made real HTTP requests, the tests could be slow or unpredictable (if the server goes down or the response changes over time), or even costly (we may not want to run our server every time a test runs).
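
As a sketch of the idea, suppose a hypothetical getUserName feature that receives its HTTP client as a dependency; in the test we inject a hand-written fake instead of making a real request:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // The feature receives its HTTP client as a dependency, so tests can swap it out.
    type HttpClient = (url: string) => Promise<{ id: number; name: string }>;

    const getUserName = async (id: number, http: HttpClient) => {
      const user = await http(`/users/${id}`);
      return user.name.toUpperCase();
    };

    test('returns the user name in upper case', async () => {
      // Mock: mimics the real HTTP client in a controlled, predictable way.
      const fakeHttp: HttpClient = async () => ({ id: 1, name: 'Ada' });

      assert.equal(await getUserName(1, fakeHttp), 'ADA');
    });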

Types of Tests

As we might expect, each part of our application leads to different types of tests. The difference between them lies in the domain, range, and method of comparison:

Implementation (or Unit) testing

Tests a fundamental unit of code, usually a function or a class. It’s highly attached to how the code was written.

Test algorithm

  1. Get a function or method.
  2. Call it with some inputs.
  3. Compare the response with the expected result.

Use case

  • Review code quality.
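
A sketch of a unit test over a single function (the slugify function is hypothetical; the runner is again Node’s built-in one):

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical unit under test: a small, isolated function.
    const slugify = (title: string) =>
      title.trim().toLowerCase().replace(/\s+/g, '-');

    test('slugify turns a title into a URL-friendly slug', () => {
      assert.equal(slugify('  Amazingly Simplified Tests '), 'amazingly-simplified-tests');
    });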

Functional or Behavioral Testing

Tests how users use our software rather than our internal implementation (as unit tests do). It includes all the relevant components involved in a certain user behavior.

Test algorithm

  1. Build a virtual representation of a UI element.
  2. Simulate a user interaction on the element: clicks, scrolls, mouse over, etc.
  3. Compare the behavior with the expected result.

Use case

  • Meet user stories: 1 user story ~ 1 functional test.
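
A sketch of the three steps, assuming a DOM environment is available (a browser or jsdom) and a hypothetical counter component built with plain DOM APIs:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical component: a button that counts its own clicks.
    const createCounter = (): HTMLButtonElement => {
      const button = document.createElement('button');
      let clicks = 0;
      button.textContent = 'Clicked 0 times';
      button.addEventListener('click', () => {
        clicks += 1;
        button.textContent = `Clicked ${clicks} times`;
      });
      return button;
    };

    test('shows how many times the user clicked', () => {
      const button = createCounter(); // 1. virtual representation of the UI element
      button.click();                 // 2. simulate user interactions
      button.click();
      assert.equal(button.textContent, 'Clicked 2 times'); // 3. compare with the expected result
    });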

Visual Testing

Evaluates the UI against the expected design; it works over the user interface, looking at what happens when clicking a button, scrolling a page, etc.

Test algorithm

  1. Build a virtual representation of a UI element.
  2. Simulate a user interaction on the element: clicks, scrolls, mouse over, etc.
  3. Compare the result visually with the original design.

Use case

  • Sync UI developers and designers.
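
Real visual testing usually compares rendered screenshots with a dedicated tool; as a very simplified stand-in, here is a sketch that compares the markup of a hypothetical component against a snapshot approved by the designer:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical component that renders to an HTML string.
    const renderBadge = (label: string) =>
      `<span class="badge badge--green">${label}</span>`;

    // Snapshot approved by the designer, stored alongside the test.
    const approvedSnapshot = '<span class="badge badge--green">NEW</span>';

    test('the badge still matches the approved design', () => {
      assert.equal(renderBadge('NEW'), approvedSnapshot);
    });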

Integration (or API) Testing

Focused on checking data communication between modules of code and API communication.

Test algorithm

  1. Get an API.
  2. Call it.
  3. Check structure and content.

Use case

  • Validate the structure and content of the data coming from our APIs.
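
A sketch of the three steps, assuming Node 18+ (for the built-in fetch) and a purely hypothetical endpoint and response shape:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    test('GET /users/1 returns a user with the expected structure', async () => {
      // 1-2. Get the API and call it (the URL is hypothetical).
      const response = await fetch('https://api.example.com/users/1');
      assert.equal(response.status, 200);

      const user = await response.json();

      // 3. Check the structure...
      assert.deepEqual(Object.keys(user).sort(), ['email', 'id', 'name']);

      // ...and the content.
      assert.equal(typeof user.id, 'number');
      assert.match(user.email, /@/);
    });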

Acceptance Testing

It is performed by users after the system has been tested and before it is made available for actual use on the market.

Test algorithm

  1. People outside the developer team use the app in production.
  2. Gather real metrics.
  3. Compare them with business metrics.

Use case

  • Validate whether business requirements are met.
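
The comparison in step 3 is usually done by people looking at dashboards, but it can also be expressed as a simple check. A toy sketch with hypothetical metrics and targets:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical metrics gathered from real users during the beta.
    const metrics = { signupConversion: 0.34, checkoutErrors: 2 };

    test('beta metrics meet the business requirements', () => {
      assert.ok(metrics.signupConversion >= 0.3, 'at least 30% of visitors sign up');
      assert.ok(metrics.checkoutErrors <= 5, 'no more than 5 checkout errors reported');
    });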

Types of acceptance tests

  • Internal Acceptance (or Alpha) Testing: performed by members of the organization that developed the software but who are not directly involved in the project.
  • Customer Acceptance Testing: performed by the customer of the organization that developed the software.
  • User Acceptance (or Beta) Testing: performed by the end-users of the software.

Test-Driven Development (TDD)

As we saw, testing doesn’t require the code implementation to exist. TDD is the practice of writing our tests first and the code later.

Steps:

  1. Write a “shell” component or feature.
  2. Write the tests and import your shell component into them.
  3. Run the tests and watch them fail.
  4. Write code until the tests pass.
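
A sketch of the cycle with a hypothetical isPalindrome feature (steps 1-3 are shown as comments, step 4 is the code that finally makes the test pass):

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // 1. "Shell" feature: it exists, but does nothing useful yet.
    // const isPalindrome = (word: string): boolean => { throw new Error('Not implemented'); };

    // 2-3. The test below is written against the shell and fails.
    // 4. We then write just enough code to make it pass:
    const isPalindrome = (word: string): boolean =>
      word.toLowerCase() === word.toLowerCase().split('').reverse().join('');

    test('detects palindromes', () => {
      assert.equal(isPalindrome('Level'), true);
      assert.equal(isPalindrome('Tests'), false);
    });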

Benefits:

Testing neglect is avoided and requirements are enforced, since your code has to conform to the tests and not vice versa. It also helps us write better code from the beginning, by making us think in requirements before we write a single line of code.

Good Practices

So far, doing automatic testing sounds like the best solution, and it is, but a bad implementation can turn it into a nightmare. I trust that by having clear concepts, following the suggestions below, and practicing a little (with the tool you want), you will avoid the most common mistakes.

For testing

  • Run tests in isolation from other tests and other components. One file per component, one or more tests per feature, and one test for one requirement.
  • Test edge cases and error paths, not just happy ones, because we want our tests to cover a wide range of scenarios (see the sketch after this list).
  • Establish a domain and range (inputs & outputs) for each feature we are testing; this way, when a certain input is received, we can be sure of the kind of result we can expect.
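
Here is the sketch mentioned above: happy, edge, and error paths for a hypothetical average function:

    import { test } from 'node:test';
    import assert from 'node:assert/strict';

    // Hypothetical feature under test.
    const average = (values: number[]): number => {
      if (values.length === 0) throw new Error('Cannot average an empty list');
      return values.reduce((sum, value) => sum + value, 0) / values.length;
    };

    test('happy path: averages a list of numbers', () => {
      assert.equal(average([2, 4, 6]), 4);
    });

    test('edge case: a single value is its own average', () => {
      assert.equal(average([7]), 7);
    });

    test('error path: an empty list is rejected', () => {
      assert.throws(() => average([]), /empty list/);
    });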

For Coding

Apply SOLID principles to write clean, modular, and descriptive software, avoiding:

  • Swamp of global variables.
  • Pointer soup.
  • Side effects (as much as possible).
