As testers, our passion for quality and the pride we take in our work can sometimes seem to conflict with other project priorities, such as the need to deliver at pace and to adapt rapidly to change. As I alluded to in my last blog, technology is changing so quickly that there is no longer a one-size-fits-all testing approach; no two platforms are the same, and no two customers are the same. At Eclipse, while we have a defined test strategy and some core testing processes on which we base our approach, there is often a need to tailor the way we work.

I was recently discussing the importance of testing Magento Commerce, and how the approach differs from other platforms, when I was asked a question I was very familiar with: “If we’re taking a platform that is fully tested out of the box, and we’re plugging in already tested extensions, then why do we even need testing in Magento Commerce projects?”

And this question applies not only to this product, but to any product or platform sold as a fully tested core package. So why should we test something which is “out of the box”? Great question!


What does “Out of the Box” even mean?

Out of the box is a term “used to refer to the immediate usability or functionality of a newly purchased product, typically an electronic device or a piece of software.” By this definition, it’s understandable that people would question why such a product needs an additional level of testing, but let’s think about a real-life example.


Building from the ground up…

Let’s consider a scenario where you are investing in a new-build home. You have secured your plot, seen your house plans and visited the show home. You have specified all your extra requirements in terms of materials, fixtures and fittings, and you wait eagerly for the build to be completed. You are likely to visit throughout the build process to see progress and discuss any issues or changes as they arise, but you are dependent on your contractors to carry out all the build work to a high standard.

Let’s assume, for example, that you have selected a standard bathroom suite but opted for a different model or brand of electric shower from those usually recommended. Before you move in, you are relying on your appliances having been installed and confirmed as working. If, when you come to use your new shower, you find that it runs but only cold water comes out of it, you become the person triaging defects to get them resolved.

The point here is that installing a shower is, in theory, low risk because hey, it’s a shower, we install them all the time, right? It’s only when you come to use it and find that something doesn’t work, whether due to a fault with that specific product or with how it was installed in that particular setting, that you realise you have cold showers to look forward to when you get the keys!

If you’d had the option of some additional testing to confirm that everything had been installed properly, was working as expected “out of the box” and fully met your requirements, would you have taken it? You are likely to say yes, to gain some additional confidence in your investment and, more importantly, peace of mind on move-in day.


How does this apply to Digital Testing?

In the Magento Commerce example, we offer development services that help clients get set up with the core product, plus combinations of extensions from varying sources that are specific to the client and their use cases. This development also includes elements such as data feeds and additional configuration that take the standard Magento product and turn it into one that meets your requirements. Testing here plays a vital role in verifying that these requirements have been met, confirming that your now customised product does what you need it to.

The same applies as in the real-life example. If you just install an extension into Magento, you might assume it will work as expected, because of course the extension has been tested before being released to market. Potentially it will, and as before, the water will come out. But the extension vendor cannot check every possible scenario, and time or budget constraints often mean the integration is never verified in your specific setup. Unfortunately, in the world of testing, we see all too often that basic requirements are not met because things are left to assumption. Defining a good, solid test strategy means things don’t get left to chance, sparing you a shock when you go live. We build confidence in quality and reduce these risks throughout your build process.

Testing also allows you to get into the minds of your core users, who can be defined by identifying your user personas, as Lucy in our Experience team has described in her post.

So, before you cut down your testing scope, here are some points to think about if you are asked to reconsider your approach:

  • Any customisation or change to the standard configuration (no matter how big or small) is a change to what is tested “out of the box”.
  • Testing for base or core product packages focuses only on sample use cases – not the real-life scenarios specific to your business or your customers.
  • Data can impact the way a system behaves, especially in terms of presentation of data, data validation rules and performance.
  • Configuration always differs depending on the client, industry and use case; even within the same industry, no two clients will be the same.

If you’re struggling to define your test approach, why not talk to us? We’re here to advise and guide you on how you can give the right things the right level of testing focus to make sure quality is not compromised.