bug

Everything works as a process – one event happening after another, information flowing from one end to the other, results taking shape, and then we do it all again. Quality gurus have expressed this in various forms; they have drawn well-defined diagrams and created heuristics and oracles to follow and adapt – all clear, no questions there!
Except that the perfection stated in the definitions does not hold so perfectly once the human comes along. Whatever the processes are, and no matter how perfectly they are formed, they are executed by humans, and that is the challenge.
The same applies to the development, testing and deployment paradigm. Developers create something from scratch, write the code, compile it and shape a viable solution; testers test that solution in its context and plan coverage to find important bugs in a finite amount of time; and then the solution is deployed to the client.
Bugs can appear anywhere; they are sewn into the code, the requirements and the process, and they stay hidden! We need to detect them, find them, report them and remove them – but there is never a guarantee of 100% removal, just as there is no guarantee of 100% coverage; the only reassurance we can have is to keep that cycle running with proper inputs and outputs from and for the teams involved.
Here is a story I would like to share with you:
We were working on our ERP system, with various clients lined up for the release at the end of each month. The developers put together a release covering our “bugs”, “changes” and other issues and give us the go-ahead for testing. While we test the release, another cycle of fixes takes place, and with that the release is sent to the client; so between development handing us the release (first cycle) and it being sent to the client there is a period of almost nine days.
So we sent a release to the client, and the very next day they sent us an error, with evidence, about data population in a grid on a Goods Receipt Note, where the data was not filtered as per the selected option.
I remember there was an email sent to the Project Manager, who came to us (the testers) with that observation, and we concluded that the client was right!
What to do?
Well, already in mission-critical mode, we took corrective action, and our Customer Support Department sent an email stating that we were sorry for the inconvenience and would be sending the corrected update by the end of the day!
The testers missed the bug because multiple clients were overlaid on one code base; due to time criticality, this particular client had to be addressed at the beginning of that nine-day cycle, so the release was sent to them on a priority basis (the testing team was not informed – check 1).
Then we asked the developers whether the error was in the code, and what was really wrong. This was to determine the root cause, and here came the SURPRISE!
The developer said, “Oh yeah, that one!” and smiled. “It did cross my table, and I fixed it about a week ago!”
That means the developer had fixed the bug while the release was still under testing, but he had done so only in the code and had not reported the bug in the tracking system. Hence, nobody knew what had happened at all!
The testers missed the bug due to lack of coverage and lack of time.
The client suffered!
Prevention: the least we can do!
Through a few (not several, and not just a couple of) coordination meetings with Implementation and Development stakeholders, we came to the conclusion that there should be a definitive series of smoke tests and a set of coverage tests performed on each release, so as to make it viable (and somewhat safe) to deploy at the client.
Anyone (and that means everyone) in the team who discovers a bug while using, testing or developing the application is obliged to record it in the tracking system – NO exceptions!
So, from then on, we dedicated a couple of people to smoke and sanity testing and distributed the modules among the rest of the team for basic business-case execution – for example, the sales cycle; inventory in, out and issuance; and the production cycle for each business sector – and we introduced a weekly coordination meeting to discuss new enhancements. That was six years ago! We were learning, and it paid off well – for us and for our customers.
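To make the idea of a “definitive series of smoke tests” a little more concrete, here is a minimal sketch in Python of how such a per-release checklist could be automated. The check names, the module details and the grid-filtering example are hypothetical stand-ins, not our actual ERP code; the point is only that every release must pass the same named checks before anyone is allowed to send it out.

    # Minimal sketch of a per-release smoke checklist (hypothetical names and checks).
    # Each check is a small, fast test of a basic business case; a release is only
    # cleared for deployment if every check passes.
    from typing import Callable, Dict

    def check_goods_receipt_note_grid_filter() -> bool:
        """The GRN grid must show only rows matching the selected filter option."""
        # Hypothetical stand-in for querying the application under test.
        rows = [{"status": "received"}, {"status": "received"}]
        selected_option = "received"
        return all(row["status"] == selected_option for row in rows)

    def check_sales_cycle_posts_invoice() -> bool:
        """A basic sales cycle (order -> delivery -> invoice) completes without error."""
        return True  # placeholder for a real end-to-end check

    SMOKE_CHECKS: Dict[str, Callable[[], bool]] = {
        "GRN grid filtering": check_goods_receipt_note_grid_filter,
        "Sales cycle": check_sales_cycle_posts_invoice,
    }

    def release_is_deployable() -> bool:
        failures = [name for name, check in SMOKE_CHECKS.items() if not check()]
        for name in failures:
            print(f"SMOKE FAILURE: {name}")
        return not failures

    if __name__ == "__main__":
        print("Release cleared for deployment" if release_is_deployable()
              else "Release blocked")

Run before every deployment, a checklist like this would have caught the unfiltered grid on the Goods Receipt Note before the release ever reached the client.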
This story sits within our development, testing and deployment framework and context. It may not apply to yours in its complete form, or you may have a different way of looking at it. But one thing is for sure:
When the shit hits the fan, do not hide; face it, investigate, make the correction, and then make sure to prevent it – no context comes with guarantees, but that is the only way to do it.

 

Arslan Ali is a Software Testing and Training professional; he serves his passion at OuttaBox (www.outtabox.co) as a Training Consultant for various software testing workshops, and also works as a Senior Consultant, Information Solutions, at Sidat Hyder Morshed Associates – a renowned software solution provider in Pakistan.

Arslan has been around the ICT industry for the past 15 years and has diverse experience in Software Development, Quality Assurance and Business Process implementation.

You can reach him on Twitter at @arslan0644.
