Friday, September 25, 2015

System Testing

More than two years ago I was invited to a System Tester job interview at Motorola. As usual, I wanted to prepare as well as possible, and through simple googling I made a surprising discovery. In a world where almost everything has a detailed description available for free, there is a serious lack of reliable resources about 'system testing'. For example, on Amazon there is only one sensible hit -> check it . Trust me, by implementing the processes described in this book you will end up broken. This means that you could become a bestselling author: just write a professional, concise book about system testing with lots of practical examples. 

According to the ISTQB glossary, system testing is the process of testing an integrated system to verify that it meets specified requirements. That's too general a description. In my opinion:

System testing is a time- and budget-limited process which ensures that:
  • the system meets specified requirements
  • all system elements integrate correctly, forming a unified whole
  • '*ility' levels (usability, reliability, scalability, etc.) satisfy all stakeholders 
  • customers can safely use the system

It's fashionable to analyse what Apple does, so let's check the scraps of information we can find online. Before launching the iPhone 6, Apple opened its testing centre in Cupertino to the press. I managed to find an absolutely fascinating, picture-rich article about iPhone testing (click). Unfortunately, those are only hardware tests. If you use Apple products, you probably agree that their usability levels are very high. It looks like they hire Human Factors Engineers (click). However, this looks the most interesting (click) - is it an iPhone System Tester? These are the job requirements: 

  • Testing of mobile devices and cellular technologies (GSM/GPRS/CDMA/LTE/etc) 
  • Scripting skills in any of the following: Python, Perl, Shell, or JavaScript 
  • Good understanding of SQA methodologies & practices 
  • Comfortable with working on hardware and in supporting engineering in debugging and reproducing user case scenario 
  • Demonstrated ability to own a complete functional area of an application or product 
  • Thrive in a collaborative environment and can clearly communicate while confidently driving multiple projects across many teams 
  • Obsessively passionate and inquisitive, and seek to solve everyday problems in innovative ways 
  • Laser-focused on the smallest details that are meaningful to our customers 

Hopefully one day a brilliant writer will answer the millions of system testing questions - how to test a pacemaker? How to test a space shuttle? When to start integration? When to start performance testing? Which test types to rely on? And so it goes!

Wednesday, September 2, 2015

Selenium maintenance hell

So often we find ourselves scratching our heads, thinking 'Why did it fail? It works perfectly well locally...'. Timeout or 'Session ID is null' (my favourite) errors can be especially annoying when we are working on the most famous testing task: make all tests pass. Let's analyse together how to fix this unhealthy situation.
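A large share of those timeout failures comes from fixed sleeps that are long enough locally but not on a slower CI machine. The usual remedy is an explicit wait that polls for a condition instead; Selenium's WebDriverWait works on exactly this principle. Here is a minimal stdlib sketch of the idea (the `element_present` callable below is a hypothetical stand-in for a real page-state check):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value, or raises TimeoutError - the same contract
    Selenium's WebDriverWait.until() follows.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage: instead of a fixed sleep followed by a lookup that may still fail,
# poll until the state actually appears (simulated here with a counter).
calls = {"n": 0}

def element_present():          # hypothetical stand-in for a DOM lookup
    calls["n"] += 1
    return calls["n"] >= 3      # "appears" on the third poll

wait_until(element_present, timeout=5.0, poll=0.01)
```

The point is that the test waits exactly as long as needed and no longer, on any machine, which removes one whole class of 'works locally, fails on CI' errors.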

First of all, we need to redefine our task with management. Why? Because having dummy test methods that exist only to pass is definitely not what we want to achieve (and not what managers actually expect from us). Our real task can be defined as:

Make your tests cover as many functionalities as possible, but avoid false results at ALL COSTS.

I emphasised 'all costs' because I want to make a point which you may find controversial: 

Even the most important tests should be excluded from the automated suite if they are unstable. After exclusion we need to: 

a) prioritise the test-fixing task (according to test importance/coverage) 
b) perform the test manually until the automated one is stable again 
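Exclusion should stay visible, not silent. In Python's stdlib unittest, for example, a skip decorator keeps the disabled test (and the reason) in every report instead of quietly deleting it; JUnit's @Ignore and pytest's skip markers work the same way. The test bodies and the ticket reference below are hypothetical:

```python
import unittest

class CheckoutTests(unittest.TestCase):

    @unittest.skip("unstable on CI: intermittent 'Session ID is null' - "
                   "fix tracked as TEST-123; covered manually until stable")
    def test_payment_flow(self):
        # Hypothetical flaky end-to-end scenario, excluded from the suite
        # but still listed in every test report as 'skipped', with a reason.
        self.fail("must never run while the skip decorator is in place")

    def test_cart_total(self):
        # Stable tests keep running as usual.
        self.assertEqual(2 * 21, 42)
```

Run with `python -m unittest`: the report shows one pass and one skip, so nobody forgets the excluded test exists.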

Certainly do not fall back to running the tests locally by hand (and do not force programmers to do that...). This approach leads nowhere. 

I know we shouldn't, but what if we depend on other services, databases, or external factors? Automated test dependencies should be verified before test execution, and if an unmet dependency is detected, the test should be skipped rather than failed.
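One cheap way to verify such dependencies is a quick TCP reachability check before the suite runs, combined with a conditional skip. A sketch using only the stdlib (the hosts, ports, and service names are hypothetical examples):

```python
import socket
import unittest

def service_reachable(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds quickly."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical backing services this suite depends on.
DB_UP = service_reachable("localhost", 5432)    # e.g. a PostgreSQL instance
API_UP = service_reachable("localhost", 8080)   # e.g. a REST backend

class OrderHistoryTests(unittest.TestCase):

    @unittest.skipUnless(DB_UP, "database unreachable - test skipped, not failed")
    def test_orders_are_listed(self):
        ...  # would query the database here

    @unittest.skipUnless(API_UP, "backend unreachable - test skipped, not failed")
    def test_order_details_endpoint(self):
        ...  # would call the REST API here
```

A down database then shows up in the report as a skip with a clear reason, instead of a red failure that wrongly points at the application under test.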

There is one thing which we often forget about: as QAs/testers we are responsible for building trust in automated tests throughout the team/company. False test results are the single most important factor that reduces (or even destroys) this trust. 

Even from the most selfish perspective there is an important argument against unstable tests. Writing and maintaining tests that produce false results will tie your work, in the eyes of others, to the word 'FAILURE'. Avoid it at all costs.