By Darrell Roberts
Posted: 29/07/2024

The Importance of Testing

The error screen received by users affected by the CrowdStrike crash. Source: https://www.thestack.technology/crowdstrike-outage-blue-screen-of-death/

Testing in general is considered a core pillar of best practices.


Whether it's stress-testing materials on a production line, screening movie previews to a test audience or running pre-production code, testing in every discipline serves the same purpose: to identify as many faults as possible before the product is finalised.


This minimises the chance of something going wrong once the product is live, prioritising prevention of the worst-case scenario. In doing so, it saves the company not only a great deal of money but, in some cases, potential lawsuits and the reputational damage that comes with them.


But does every company employ a testing philosophy? Unfortunately not. You need look no further than the Titan submersible implosion last year, which was the result of minimal testing and a refusal to listen to the experts. More recently, you could argue that the CrowdStrike disaster, which caused roughly 8.5 million Windows devices around the world to crash, was also the result of poor testing. So why do some companies avoid it?


Ultimately, testing takes more time and extends the production timeline, which means it costs more money. On top of that, the financial benefits of testing can seem non-existent, if not counterproductive, at first. Companies often chase short-term gains in profitability, and when they inspect their finances it can look as though cutting the testing process would save a tonne of money and send their returns skyrocketing. What they fail to realise is that an untested product can cause not just a drop in profits but, in some cases, bankruptcy.

Unfortunately, the people making these decisions are often not involved in production itself, so they can be ignorant of how valuable testing is. It is also unfortunate that, if a company does not test its product and something goes wrong once it is live, upper management will blame the production team, when the fault really lies with those who refused to finance the testing process. So, in the context of coding, how do you test your product?


There are quite a few tests you can implement, but the three main ones are generally considered to be: Unit Testing, Integration Testing and End-to-End Testing.


Unit Testing

This is the lowest-level testing you can implement. Its purpose is to test individual methods or functions in isolation. Of course you'll have different frameworks for different coding languages, but I like to use Jest for JavaScript unit testing. You can run it against a simple .js file, or install additional npm packages if you want to, say, run it within a React app.
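As a minimal sketch of what this looks like (the add function here is a hypothetical example rather than anything from a real project):

// add.test.js - a minimal Jest unit test, run with: npx jest

// The unit under test (it would normally live in its own module, e.g. add.js)
function add(a, b) {
  return a + b;
}

describe('add', () => {
  test('returns the sum of two numbers', () => {
    expect(add(2, 3)).toBe(5);
  });

  test('handles negative numbers', () => {
    expect(add(-1, 1)).toBe(0);
  });
});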


Integration Testing

This tests the interaction between different components and is considered the middle stage of testing, sitting between unit and end-to-end testing. Typically it checks that the frontend can communicate effectively with the backend, for tasks such as fetching data from the API into the UI, and it can also examine the response times of those calls. Its greatest value is confirming that features won't break when pushed to production. Jest can also be used in this regard for JavaScript apps.
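For example, here is a sketch of an integration-style test in Jest, where a hypothetical fetchUsers() helper is exercised against a stubbed API client so the test stays deterministic (the endpoint and data are placeholders):

// users.integration.test.js

// The helper under test: calls the API and hands the data to the caller
async function fetchUsers(apiClient) {
  const response = await apiClient('/api/users');
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
}

test('fetchUsers passes API data through to the caller', async () => {
  // Stand-in for the real fetch implementation
  const fakeApiClient = jest.fn().mockResolvedValue({
    ok: true,
    status: 200,
    json: async () => [{ id: 1, name: 'Ada' }],
  });

  const users = await fetchUsers(fakeApiClient);

  expect(fakeApiClient).toHaveBeenCalledWith('/api/users');
  expect(users).toEqual([{ id: 1, name: 'Ada' }]);
});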


End-to-End Testing

For me this is the most interesting type of testing and actually quite mind-blowing (then again, I'm easily impressed). Its purpose is to mimic the behaviour of a user by completing tasks such as logging into a webpage, filling out and submitting a form, or even making online payments. I'm a fan of the Cypress framework for end-to-end testing. Again, for JavaScript there are relevant npm packages to run it with your desired framework. It probably sounds sad, but it is quite fun just writing these tests and watching the tool do its work.
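To give a flavour, here is a sketch of a Cypress test for a hypothetical login flow (the URL, selectors and text are placeholders, not from a real app):

// login.cy.js - a Cypress end-to-end test for a login page

describe('login flow', () => {
  it('logs the user in and shows the dashboard', () => {
    cy.visit('https://example.com/login');

    cy.get('input[name="email"]').type('user@example.com');
    cy.get('input[name="password"]').type('correct-horse-battery-staple');
    cy.get('button[type="submit"]').click();

    // After a successful login we expect to land on the dashboard
    cy.url().should('include', '/dashboard');
    cy.contains('Welcome back').should('be.visible');
  });
});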


So what are the best approaches to implementing this type of testing?


According to the pillars of Test-Driven Development, you essentially start by writing a test that fails. For example, when unit testing a function that always returns a string value, you can first write a test that expects a number, and the test will fail. This sets the baseline and confirms that the testing framework is set up correctly.
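Using that same example, the "red" step might look something like this in Jest (getGreeting is just a hypothetical function):

// greeting.test.js - the "red" step: a deliberately failing test

function getGreeting() {
  return 'hello';
}

test('getGreeting returns a number (deliberately wrong expectation)', () => {
  // This fails because getGreeting() returns a string, not a number -
  // confirming that the test framework is wired up and actually running.
  expect(typeof getGreeting()).toBe('number');
});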


Then, once you have written enough failing tests, you can be more adventurous. This time you write just enough code to pass the test; in the same example, you can write a test for the function that expects a string value, and the test will pass. Obviously these are basic examples, but it's good to start small. Through this method, if your tests fail it is a lot easier to pinpoint where things went wrong, as opposed to using a much more convoluted test. The latter has more potential outcomes, so when it fails it is harder to know at first glance where it went wrong.
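Continuing the sketch above, the "green" step simply brings the expectation in line with what the function actually does:

// greeting.test.js - the "green" step: the expectation now matches the
// function's actual behaviour, so the test passes.

function getGreeting() {
  return 'hello';
}

test('getGreeting always returns a string', () => {
  expect(typeof getGreeting()).toBe('string');
});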


Past this stage, when you are satisfied you have written enough fail-safes, you can then refactor the code to make it as efficient as possible and to ensure it abides by the linting and design rules of the codebase. For instance, if your function runs to 10 lines of code, can it run in 5 lines instead and still pass the aforementioned tests?
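As a small illustrative sketch (the isAdult function is hypothetical), the refactor step keeps the behaviour and the tests identical while the implementation gets tidier:

// Before: a verbose implementation
function isAdultVerbose(age) {
  let result;
  if (age >= 18) {
    result = true;
  } else {
    result = false;
  }
  return result;
}

// After: the same behaviour as a single expression
const isAdult = (age) => age >= 18;

test('isAdult behaves the same before and after the refactor', () => {
  expect(isAdultVerbose(17)).toBe(false);
  expect(isAdult(17)).toBe(false);
  expect(isAdultVerbose(18)).toBe(true);
  expect(isAdult(18)).toBe(true);
});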


You then repeat this process for every feature.


From the outset this sounds like a long process, and it is more understandable why some companies might discourage such practices. Nevertheless, by working to this methodology you have potentially, and inadvertently, fixed numerous bugs that could have appeared after the code was pushed to production. In addition, testing should have a more catch-all focus so that it can be executed for future functions and features. For example, it's common to integrate it into your Continuous Integration/Continuous Delivery workflow, particularly for open-source projects, so that before a developer submits a Pull Request (PR) they are required to run the provided tests. This provides a further level of protection before the PR is inspected in a code review.
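As one possible sketch, assuming a standard Node project where "npm test" runs the suite, a GitHub Actions workflow along these lines would run the tests on every pull request (the file path and branch name are placeholders):

# .github/workflows/test.yml - run the test suite on every pull request
name: Tests

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test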


Testing in coding, like in other practices, is essential for producing a quality-assured product. Though it extends the production timeline, it also saves developers an enormous amount of time that would otherwise be spent fixing the bugs that slipped through the pre-production phase.
