Friday, September 25, 2015

System Testing

More than two years ago I was invited to a System Tester job interview at Motorola. As usual, I wanted to prepare as well as possible, and by simple googling I made a surprising discovery. In a world where detailed descriptions of almost everything are available for free, there is a serious lack of reliable resources about 'system testing'. For example, on Amazon there is only one sensible hit -> check it. Trust me, by implementing the processes described in this book you will end up broken. This means that you can become a bestselling author: just write a professional, concise book about system testing with lots of practical examples. 

According to the ISTQB glossary, system testing is the process of testing an integrated system to verify that it meets specified requirements. That's too general a description. In my opinion: 

System testing is the time- and budget-limited process which ensures that: 
  • the system meets specified requirements 
  • all system elements integrate correctly into a coherent whole 
  • '*ility' levels satisfy all stakeholders 
  • customers can safely use the system 

It's fashionable to analyze what Apple does, so let's check the scraps of information we can find online. Before launching the iPhone 6, Apple opened its testing center in Cupertino to the press. I managed to find an absolutely fascinating, picture-rich article about iPhone testing (click). Unfortunately, those are only hardware tests. If you use Apple products, you probably agree that the usability level is very high. It looks like they hire Human Factors Engineers (click). However, this looks the most interesting (click) - is it an iPhone System Tester? These are the job requirements: 
  • Testing of mobile devices and cellular technologies (GSM/GPRS/CDMA/LTE/etc) 
  • Scripting skills in any of the following: Python, Perl, Shell, or JavaScript 
  • Good understanding of SQA methodologies & practices 
  • Comfortable working on hardware and supporting engineering in debugging and reproducing use-case scenarios 
  • Demonstrated ability to own a complete functional area of an application or product 
  • Thrive in a collaborative environment and can clearly communicate while confidently driving multiple projects across many teams 
  • Obsessively passionate and inquisitive, and seek to solve everyday problems in innovative ways 
  • Laser-focused on the smallest details that are meaningful to our customers 

Hopefully one day a brilliant writer will answer the millions of system testing questions: How to test a pacemaker? How to test a space shuttle? When to start integration? When to start performance testing? Which test types to rely on? And so it goes!

Wednesday, September 2, 2015

Selenium maintenance hell

So often we find ourselves scratching our heads, thinking 'Why did it fail? It works perfectly well locally...'. Timeout or 'Session ID is null' (my favorite) errors can be especially annoying when we are working on the most famous testing task: make all tests pass. Let's analyze together how to fix this unhealthy situation. 
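A common root cause of those timeouts is a fixed sleep racing against a page that loads at a different speed on the CI machine than it does locally. The usual remedy is to poll a condition until it holds, up to a deadline. Here is a minimal sketch of that idea (`wait_for` is a hypothetical helper name, not part of Selenium's API):

```python
import time


def wait_for(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Unlike a fixed time.sleep(), this returns as soon as the condition
    holds, and only fails after the full timeout has genuinely passed.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(poll_interval)
```

Selenium's own `WebDriverWait` with expected conditions works on the same principle; the point is that the wait adapts to the environment's speed instead of baking in an assumption that only holds on your machine.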

First of all, we need to redefine our task with management. Why? Because having dummy test methods is not what we want to achieve (and not what managers expect from us). Our real task can be defined as: 

Make your tests cover as much functionality as possible, but avoid false results at all costs. 

I emphasized 'all costs' because I want to make a point which you may find controversial: 

Even the most important tests should be excluded from the automated suite if they are unstable. After exclusion we need to: 

a) prioritize the test-fixing task (according to the test's importance/coverage) 
b) perform the test manually until the automated one is stable again 
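Exclusion doesn't have to mean deleting the test: marking it as skipped keeps it visible in every report until it is fixed. A minimal sketch with unittest's skip decorator (the test name and skip reason are invented for illustration):

```python
import unittest


class CheckoutTests(unittest.TestCase):
    @unittest.skip("quarantined: unstable on CI; covered manually until fixed")
    def test_payment_flow(self):
        # The real test body stays here untouched; it is simply
        # not executed while the test is quarantined.
        self.fail("must never run while quarantined")
```

The skip reason shows up in every test report, so the exclusion (and the reason for it) stays on everyone's radar instead of quietly disappearing from the suite.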

And certainly do not fall back to just running the tests locally (and do not force programmers to do that...). This approach leads nowhere. 

I know we shouldn't, but what if our tests depend on other services, databases, or external factors? Such dependencies should be verified before test execution, and if a dependency is unavailable, the test should be skipped. 

There is one thing we often forget about. As QAs/testers, we are responsible for building trust in automated tests throughout the team/company. False test results are the single most important factor that reduces (or even destroys) this trust. 

Even from the most selfish perspective, there is an important argument against unstable tests. Writing and maintaining tests that produce false results will associate your work, in the eyes of others, with the word 'FAILURE'. Avoid that at all costs.