In this murky and unpredictable world, testing and delivering the right product to the customer has become a real challenge. Testers are always short on time, yet they need to find the “important” bugs quickly, within that same tight time frame. Requirements and specifications are all leaky abstractions, so test managers and leads have to produce realistic plans that define the right coverage while taking care of the cost and value of the project. The most important factor in testing, then, is the “human” tester, armed with the right skills and the right tools.

What Outtabox delivers in this pressed-for-time situation is the right results, through a custom-made testing methodology that suits your organization’s and your clients’ needs.

Heuristic Test Strategy Model

To show you how we go about it and what model or discipline we follow, let us introduce you to something considered a blessing for the testing profession: the “Heuristic Test Strategy Model”, commonly known as HTSM.

Why HTSM?

Well, as we said before, the scarcity of time, resources, and technology makes testing and delivering the right product to the customer a challenge – and as a wild bunch of testers and IT enthusiasts, we gladly accept that challenge.

What HTSM does is to provide a model of heuristics and oracles to define:

  1. The project environment and its challenges
  2. The coverage of the product itself
  3. The right quality criteria
  4. The right testing techniques for the job

All of this eventually shapes the “perceived” quality – the one formed at the very start of the project, or whenever you bought that product with that fancy picture in your mind.

So let’s cut to the chase and let us enlighten you on how we are going to test that problem of yours:

Functional Testing

We will test what your product can do.

We will apply the right skills and the right set of tools to determine the “various” things your product can do, and also the capability of those problematic functions – are they really capable of doing what they promise? Then comes the real testing of each of the identified functions.
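As a minimal sketch of what such a check looks like in practice (the apply_discount function and its rules are invented for illustration, not taken from any real product):

```python
# Functional-testing sketch (hypothetical example).
# apply_discount() is an invented function standing in for one
# identified capability of the product under test.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_does_what_it_claims():
    # Can the function really do what it is supposed to do?
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
```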

Domain Testing

We know it sounds a little aggressive, but to know the realities behind a working application we need to see what type of data it is processing. Now, the world we live in comprises several domains, and each domain serves a different type of data. Consider the differences between the data on “Twitter” and the data on “Facebook” – you will get what we are talking about here.

As the testers of your product, we will see what the product processes as data, and then decide which sets of data should be tested under which contexts and conditions – remember, the murky world!

There are several ways to go about it: boundary values, invalid values, the best representatives, and even the garbage!
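To make that concrete, here is a small sketch of boundary-value and invalid-value checks, built around a hypothetical age field that accepts whole numbers from 18 to 120 (the field and its rule are assumptions for illustration):

```python
import pytest

# Hypothetical validator standing in for the data rule under test:
# an age field that accepts whole numbers from 18 to 120.
def is_valid_age(value) -> bool:
    return isinstance(value, int) and 18 <= value <= 120

# Boundary values, invalid values, best representatives, and garbage.
@pytest.mark.parametrize("value,expected", [
    (17, False), (18, True), (120, True), (121, False),  # boundaries
    (45, True),                                          # best representative
    (-1, False), (0, False),                             # invalid values
    ("forty", False), (None, False), (3.5, False),       # garbage
])
def test_age_domain(value, expected):
    assert is_valid_age(value) is expected
```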

Flow Testing

Flow testing means creating a sequence of tests performed one after another, in order to check the integrity of the system’s processes and data relations.

These tests contain multiple activities carried out on dependent processes. To carry this out effectively, we create system state models and then perform the test runs against them.
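A tiny sketch of the idea, using an invented order workflow as the state model (the states and transitions are assumptions, not your product’s actual flow):

```python
import pytest

# Flow-testing sketch: one test run walks a chain of dependent
# activities through an invented order state model.
VALID_TRANSITIONS = {
    "created": {"paid"},
    "paid": {"shipped"},
    "shipped": {"delivered"},
    "delivered": set(),
}

class Order:
    def __init__(self):
        self.state = "created"

    def advance(self, new_state):
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

def test_order_flow_end_to_end():
    order = Order()
    for step in ["paid", "shipped", "delivered"]:  # dependent activities, in order
        order.advance(step)
    assert order.state == "delivered"

def test_flow_integrity_cannot_skip_payment():
    order = Order()
    with pytest.raises(ValueError):
        order.advance("shipped")  # payment step skipped on purpose
```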

Scenario Testing

Usually, when you encounter an interesting bug, the fault does not lie within the functions or the logic; instead, it lies within the user’s actions. So testing eventually becomes investigation. To capture these unexpected bugs, we work up several interesting stories and then play them out against the system.

Because we are shooting into a grey, foggy area, we cross-reference stories and then move along the functions just as a user would. It’s really interesting, but very detailed work.
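Purely as an illustration, here is how one such story might be scripted (the story, the Session object, and its methods are all invented stand-ins for a real product):

```python
# Scenario-testing sketch: a story, played out step by step,
# crossing several functions the way a real user would.
# Session and its methods are hypothetical stand-ins.

class Session:
    def __init__(self):
        self.cart, self.logged_in = [], False
    def login(self, user): self.logged_in = True
    def add_to_cart(self, item): self.cart.append(item)
    def logout(self): self.logged_in = False

def test_story_shopper_who_changes_their_mind():
    """Story: a user logs in, fills a cart, removes an item,
    then logs out -- does the system keep its promises along the way?"""
    s = Session()
    s.login("alia")
    s.add_to_cart("novel")
    s.add_to_cart("lamp")
    s.cart.remove("novel")     # the user changes their mind
    s.logout()
    assert s.cart == ["lamp"]  # only the wanted item remains
```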

Claims Testing

Each product comes with claims about what it does and how good it is at doing so. These claims are put forward by determined marketers, or sometimes by the developers themselves. That’s good, because it shows ownership; but for a tester, it becomes a kind of challenge to test and verify these claims.

So what do we do?

We identify any reference materials that include claims about the product – SLAs, EULAs, advertisements, specifications, help text, manuals, and so on – which we then analyze as individual claims, clarifying the vague ones along the way.
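A small sketch of what one such check can look like, assuming a manual that claims “search responds in under 2 seconds” (the claim, the search function, and the timing budget are all invented):

```python
import time

# Claims-testing sketch: turn one clarified claim from the manual
# into an explicit check. The claim ("search responds in under
# 2 seconds") and search() itself are hypothetical.

def search(query: str) -> list:
    return [hit for hit in ["apple", "apricot"] if query in hit]

def test_claim_search_responds_within_two_seconds():
    start = time.perf_counter()
    results = search("ap")
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0, "marketing claim violated"
    assert results, "the claim implies relevant results are returned"
```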

User Testing

Users are the biggest source of information we have, so why not involve them while testing?

What we do is identify the different user categories and their respective roles. Then we determine what each category of user will do (use cases), how they will do it, and what they value.

In order to do this, we may acquire real user data or bring real users in to test. Alternatively, we might systematically simulate a user.
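Simulating a user can be as simple as replaying, per user category, the path that category typically takes; here is a toy sketch (the personas and their actions are invented):

```python
# User-testing sketch: systematically simulate each user category
# by replaying the actions that category typically performs.
# The personas and the action log are invented for illustration.

PERSONAS = {
    "newcomer": ["sign_up", "browse", "read_help"],
    "power_user": ["log_in", "bulk_upload", "export", "log_out"],
}

def run_action(action: str, log: list):
    log.append(action)  # stand-in for driving the real product

def test_each_persona_completes_their_journey():
    for persona, actions in PERSONAS.items():
        log = []
        for action in actions:
            run_action(action, log)
        assert log == actions, f"{persona} journey broke midway"
```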

Risk Testing

Without wanting to sound “dangerous”: we imagine what problems can happen to a system, from normal conditions to the worst ones, and then start looking for those problems in the existing system.

Systems are like volcanoes: they sit quietly and even present the best of their looks, but sometimes the worst happens. So what we do is identify what kinds of problems the product could have, and which of those will matter most. We then make a list of interesting problems and design tests specifically to reveal them.
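One lightweight way to organize that list is to score each imagined problem by likelihood and impact, then test the highest scores first; a sketch (the risks and their scores are invented examples):

```python
# Risk-testing sketch: list interesting problems, score them,
# and let the score decide what gets tested first.
# Risks, likelihoods, and impacts here are invented examples.

risks = [
    {"problem": "data loss on crash",     "likelihood": 2, "impact": 5},
    {"problem": "slow search under load", "likelihood": 4, "impact": 3},
    {"problem": "typo in footer",         "likelihood": 5, "impact": 1},
]

# Highest likelihood x impact first: test where it hurts most.
test_order = sorted(risks, key=lambda r: r["likelihood"] * r["impact"],
                    reverse=True)

for r in test_order:
    score = r["likelihood"] * r["impact"]
    print(f"design tests for: {r['problem']} (score {score})")
```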

Automated Testing – Actually, Checking!

Automation does not work independently; it needs scripted guidance and human interaction to work. For this, one or a combination of the above testing techniques is required.

We then generate regression, smoke, and sanity testing scripts for your system, in order to perform those vital checks prior to launching a release or initiating a deployment cycle.
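As an example of the shape such a check can take, here is a minimal smoke-test sketch that pings a few vital endpoints before a release (the URLs are placeholders, and the requests library is assumed to be available):

```python
import requests

# Smoke-check sketch: a few vital pre-release checks, run before
# every deployment cycle. The URLs below are placeholders for your system.

SMOKE_ENDPOINTS = [
    "https://example.com/health",
    "https://example.com/login",
    "https://example.com/api/version",
]

def test_smoke_endpoints_respond():
    for url in SMOKE_ENDPOINTS:
        response = requests.get(url, timeout=5)
        assert response.status_code == 200, f"smoke check failed: {url}"
```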

We can use a combination of tools and techniques to perform automation; for example, we would use different tools for performance and security testing than for regression, smoke, or GUI-based testing.