Sunday, October 30, 2016

TestOps #3 - Continuous Testing

Part 3 of my TestOps series focuses on an extremely important subject that spans the full systems development life cycle (SDLC). Some may argue that, apart from the obvious TestOps benefits, it's the key to successful releases and effective development.

From what I've seen in various articles and books on the subject, no single clear definition of Continuous Testing (CT) exists. In such cases it's always best to fall back on Wikipedia:
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.
Every CT step except exploratory testing should be fully automated and run as often as possible. Simply put, if our infrastructure allows us to run every CT step after every commit on every branch, we should take full advantage of that. If not, CT can be performed nightly. CT should be integrated into the delivery pipeline using tools like Jenkins, TeamCity, Go, GitLab CI, etc.

I believe we can distinguish a few things that make CT complete:

1. Continuous Integration (CI) and unit tests

After every single commit to the main branch, the application should be compiled and built. Unit tests should also be executed at this point to give feedback as quickly as possible. In case of failure, some kind of notification (mail, Slack message) is advisable, sent either to the whole team or only to the culprits (the people whose changes caused the failure). It's very important that unit tests execute quickly and in parallel. Developers should see the failure notification before they begin their next task, to avoid distractions. Unit tests should be written by developers (ideally in TDD fashion), but the Google Testing Blog advises testers to have strong skills in this area too. In case of bad practices (tests that are too slow, no parallelism, too few tests, not following the test pyramid, poor test quality/readability/maintainability) we should initiate change and improvement. It's also sometimes advisable to move integration/E2E tests down to lower levels to speed up the whole pipeline.
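What "fast, isolated" means in practice can be shown with a minimal sketch. Real projects would use JUnit or TestNG; the class and method names below are made up for illustration, and the tiny assertion helper stands in for a proper framework:

```java
// Hypothetical production class under test
class PriceCalculator {
    // Pure logic, no I/O - which is exactly why the test can run in milliseconds
    static double applyDiscount(double price, int percent) {
        return price - price * percent / 100.0;
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        // Fast, isolated checks - no database, network or file system involved
        assertEquals(90.0, PriceCalculator.applyDiscount(100.0, 10));
        assertEquals(100.0, PriceCalculator.applyDiscount(100.0, 0));
        System.out.println("All unit tests passed");
    }

    // Stand-in for JUnit's assertEquals
    static void assertEquals(double expected, double actual) {
        if (Math.abs(expected - actual) > 1e-9) {
            throw new AssertionError("Expected " + expected + " but got " + actual);
        }
    }
}
```

Tests with this shape parallelize trivially, because they share no state - which is what makes the "run everything on every commit" goal realistic.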

A few authors distinguish mutation testing as a separate CT step. I disagree. Mutation testing is a way to check how good our unit tests are: we deliberately change (mutate) the production logic and verify that at least one test fails.
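In the Java world, tools like PIT generate the mutants automatically; the hand-rolled sketch below (with hypothetical names) just shows the idea - a boundary-value test "kills" a typical `>=` → `>` mutant:

```java
public class MutationDemo {
    // Production logic
    static boolean isAdult(int age) {
        return age >= 18;
    }

    // The kind of mutant a tool like PIT would generate: >= changed to >
    static boolean isAdultMutant(int age) {
        return age > 18;
    }

    public static void main(String[] args) {
        // A boundary-value test (age == 18) kills the mutant:
        boolean originalPasses = isAdult(18);        // true  - test passes on real code
        boolean mutantKilled   = !isAdultMutant(18); // mutant returns false, so the test fails
        System.out.println("original passes: " + originalPasses
                + ", mutant killed: " + mutantKilled);
        // A weaker suite that only checked age 30 would let this mutant survive,
        // revealing a gap in our tests
    }
}
```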

2. Code coverage and static analysis 

After a developer commits code to the main branch, they should be informed how their change affected the overall code coverage statistics. You may be surprised, but introducing a gamification element into your pipeline usually has a very positive effect on the number of unit tests written. No developer wants to be the laggard who worsens the statistics. Example code coverage tools: JaCoCo (Java) and Istanbul (JavaScript).

Static code analysis tools like SonarQube are also very useful. They're capable of doing impersonal code reviews and white-box testing in an automated fashion. They identify security issues, bugs, code smells, etc. You can not only modify or remove existing rules, but also add new ones.

Ideally you want the static code analysis tool integrated with your code review process. This allows reviewers to focus on the broad picture (architecture, maintainability) instead of debating whether a given variable should be final.
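As a taste of the defect category such tools catch automatically, here is a classic Java bug that analyzers routinely flag - comparing strings by reference instead of by value. The snippet is runnable, so you can see the wrong and right check side by side:

```java
public class StringEqualityDemo {
    public static void main(String[] args) {
        String expected = "admin";
        String input = new String("admin"); // a distinct object with equal contents

        // Reference comparison - the bug static analyzers warn about
        System.out.println("== comparison:     " + (expected == input));
        // Value comparison - the intended check
        System.out.println("equals comparison: " + expected.equals(input));
    }
}
```

Catching this class of mistake mechanically is exactly what frees human reviewers for the architectural discussion.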

3. Continuous Delivery / Automated deployment

When the application build finishes, you want to test your deployment process. Ideally it should be done in the same fashion as a production release. Performing lots of test environment deployments daily gives you confidence that your release pipeline works fine.

4. Integration / E2E / Visual testing

After the application has been successfully deployed to a testing environment, you can begin higher-level tests. I'm treating them as a whole because the actual distribution of test cases depends very much on your strategy. Generally speaking, you want to cover as many functionalities as possible at the integration/API level in order to shrink the number of expensive E2E/Selenium tests. This sometimes creates a gap in visual verification, which can be filled by tools like BackstopJS or the Galen Framework.
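The point of "covering functionality at the API level" is that you verify behavior without a browser at all. A minimal, self-contained sketch using only the JDK (the `/health` endpoint and its payload are made up; a real suite would hit your deployed test environment, typically with a library like REST-assured):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class ApiLevelTest {
    public static void main(String[] args) throws Exception {
        // In-process stub standing in for the deployed service under test
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // API-level check: status code + payload, no browser or Selenium involved
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/health").openConnection();
        int status = conn.getResponseCode();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String payload = in.readLine();
        server.stop(0);

        System.out.println(status + " " + payload);
    }
}
```

Tests like this run orders of magnitude faster than driving a UI, which is why the bulk of the pyramid belongs here.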

Important note here: testing at this level should be owned by the whole team, not only testers. This means your testing strategy should be discussed and agreed on by everyone on the project.

Examples of tests at this level can be found in my GitHub Awesome Testing project.

5. Performance testing

Along with functional testing, you want to check how your application behaves under heavy load. I'm planning a separate post on this subject, but at this point it's worth noting that:

Performance tests should have a separate environment which is as close to production as possible - if you can't achieve that, it's usually better to do performance testing on the production environment.

This sometimes means that you can't add performance testing to your pipeline.
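The core mechanics of a load test - concurrent virtual users, latency measurement, a percentile check - can be sketched with plain JDK concurrency. Dedicated tools (JMeter, Gatling) do this properly; here `operation()` is a stand-in for a call to the system under test, and all numbers are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

public class MiniLoadTest {
    // Stand-in for a request to the system under test
    static void operation() throws InterruptedException {
        Thread.sleep(ThreadLocalRandom.current().nextInt(1, 10));
    }

    public static void main(String[] args) throws Exception {
        int users = 20, requestsPerUser = 10;
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<List<Long>>> futures = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            futures.add(pool.submit(() -> {
                List<Long> latencies = new ArrayList<>();
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    operation();
                    latencies.add((System.nanoTime() - start) / 1_000_000);
                }
                return latencies;
            }));
        }
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : futures) all.addAll(f.get());
        pool.shutdown();

        Collections.sort(all);
        long p95 = all.get((int) (all.size() * 0.95) - 1); // 95th percentile latency
        System.out.println("requests=" + all.size() + " p95ms=" + p95);
        // A pipeline gate could fail the build when p95 exceeds a latency budget
    }
}
```

The percentile-based gate at the end is the part that makes performance testing "continuous" rather than an occasional manual exercise.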

6. Security testing (DevSecOps)

The latest trend with a terrible name: DevSecOps. Security is becoming more and more important, as hacking gets increasingly easier with powerful tools like Burp. Sometimes you can score quick wins by integrating existing tools into your pipeline; examples are OWASP Dependency-Check, OWASP Zed Attack Proxy (ZAP) and Gauntlt. If you don't do security testing already, it's highly recommended to start as soon as possible.
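To make the category of findings concrete: one pattern that security-oriented static analysis rules (SonarQube ships some) reliably flag is SQL built by string concatenation. The table and query below are hypothetical; the runnable snippet just prints what an attacker-controlled input turns the query into:

```java
public class SqlInjectionDemo {
    // Vulnerable pattern scanners flag: user input concatenated into SQL.
    // The safe alternative is a PreparedStatement with a bound parameter.
    static String naiveQuery(String userInput) {
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        String malicious = "x' OR '1'='1"; // classic injection payload
        // The WHERE clause now matches every row in the table
        System.out.println(naiveQuery(malicious));
    }
}
```

Wiring a rule like this into the pipeline means such code never survives past the commit that introduces it - one of the "quick wins" mentioned above.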

7. Exploratory testing

Time for the most controversial topic of all: manual exploratory testing. The word that triggers endless debate here is most likely 'manual'. Now, I'm definitely no fan of manual activities, but I also see value in skilled exploratory testing. I think it's worth running such sessions after completing big features and before releases.

Evaluate often whether there is value in this step, though - you don't want useless manual activities slowing down your process.

If you want to expand your knowledge of exploratory testing, I suggest reading Elisabeth Hendrickson's work.

8. Testing in Production (TiP)

A set of continuous activities that test the live application. I already described it fully in my previous post, TestOps #2 - Testing in Production.