DevOps – Developers.. check, Ops.. check, Testers.. err

http://blogs.thinksys.com/wp-content/uploads/2017/01/testing-in-devops-world.png

DevOps is neat. I like it; I find it a natural extension of the Agile movement and a timely progression of it. Of late, while consulting and coaching for clients, it seems to have taken hold of me, to the point where I see every aspect of software development from the vantage point of (through?) DevOps. That's all good, but it is also good to introspect your thoughts and observations, and I did that. I realised that for every team I worked with, the common recommendation was to bring the testing team up to speed, or it was me saying "DevOps is not working for you because testing lacks the edge", or something to that effect, each and every time. Did I focus too much on getting Developers and Ops to work in sync and collaborate, while leaving testing to focus on automation and making continuous testing work, hoping that this alone would help the testing team develop the chops to deliver in DevOps projects? I was wrong. Cutting it short, here is my conclusion: you can test effectively in DevOps only when the team has mastered the art of Agile testing. Just doing test automation, integrating testing tools with DevOps tools, and doing continuous testing will not help much if the Agile fundamentals are lacking in testing.

So I am taking a step back and putting together my thoughts (without being too prescriptive) on actions to take to become agile in testing. Mind you, I have kept it grounded within the domain of Agile QA only and not ventured into Continuous Testing or Testing-in-DevOps territory… back to the basics, team. (I am documenting my thoughts at random, so you may find the sequencing is not in ideal order.) Here it goes:

Agile is iterative, and so is the testing: In any Agile model, requirements gathering and documentation, design activities, and development activities are iterative in nature. So testing activities must be too. If the testing team does not adapt to this, it will lead to failure.

Testing activities start in the requirements phase: Requirements are ever evolving, and there will never be a point when a comprehensive requirements specification is made available, so testing strategies cannot rely on complete specifications. Instead, work with the rest of the team during the requirements gathering and grooming activities, and create test strategies on the go.

Team formation and "T"-shaped tester skills: This expectation has been around for a long time; however, the practical problem is that the team may not even know what additional skills are needed, or to what extent. This needs to be worked on, and skills need to be introduced to the testers gradually.

Embed testers into development teams for story testing, including the unit tests.

For releases and niche testing, it is a good idea to have independent testing teams (and even test-hardening sprints).

Test environment and test data setup: This needs to be handled upfront, as a requirement and a technical task, during requirements gathering.

Test automation and regression testing: The test automation strategy needs to be planned in sprint zero itself, with automation done in every sprint. Planning a big-bang test automation effort after several sprints does not work and carries a high risk of an ineffective regression pack.

Perform static analysis: Use static analysis to check quality, where an automated tool scans the code for defects, often looking for problems such as security flaws or coding-style issues.
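In practice this means tools like pylint, FindBugs or SonarQube wired into the pipeline, but the idea is simple enough to sketch. The snippet below (an illustrative toy, not any real tool's implementation) uses Python's `ast` module to flag one classic static-analysis finding, the bare `except:` clause, without ever running the code:

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Return line numbers of bare `except:` clauses, a classic static-analysis finding."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(code))  # [4] — the bare except on line 4 of the snippet
```

Real linters apply hundreds of such rules; the value is that they run on every commit, before any test executes.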

Lead the finalising of acceptance criteria for business user stories: Combine a business-requirement-driven approach to testing with Behaviour Driven Development (BDD) using a tool such as Cucumber. The greatest challenge with adopting BDD may be the lack of skills among existing requirements practitioners and testers. This is yet another reason to promote T-shaped skills within the organization over narrowly focused specialists.
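To make the BDD idea concrete, here is a minimal sketch of a Given/When/Then scenario driving a test. The user, password and in-memory "system" are invented for illustration; with Cucumber or behave the scenario would live in a `.feature` file and the steps would carry `@given`/`@when`/`@then` decorators:

```python
# Gherkin-style scenario (in Cucumber this lives in a .feature file):
#   Given a registered user "alice" with password "s3cret"
#   When she logs in with password "s3cret"
#   Then she sees the welcome screen

# Hypothetical in-memory stand-in for the real system under test.
USERS = {"alice": "s3cret"}

def login(user: str, password: str) -> str:
    """Return the screen the user lands on after a login attempt."""
    return "welcome" if USERS.get(user) == password else "error"

# Step definitions expressed as plain test functions.
def test_valid_login_shows_welcome():
    assert login("alice", "s3cret") == "welcome"

def test_invalid_login_shows_error():
    assert login("alice", "wrong") == "error"

test_valid_login_shows_welcome()
test_invalid_login_shows_error()
```

The point is that the acceptance criterion is the test: business people can read the scenario, and testers can execute it.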

Utilize code coverage tools to ensure that the automated tests offer sufficient coverage of the code. Not doing this leads to the pesticide paradox and "false-pass" tests.
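A "false pass" is easy to manufacture, so here is a contrived example (the discount function and its bug are invented for illustration). The regression test below is green, yet a branch-coverage report from a tool like coverage.py would immediately show that the member path never ran:

```python
# Hypothetical function with a bug hiding on the untested branch.
def apply_discount(price: float, is_member: bool) -> float:
    if is_member:
        return price * 0.8   # bug: members should get 10% off, not 20%
    return price

def regression_test():
    # Exercises only the non-member path, so it passes despite the bug.
    assert apply_discount(100, is_member=False) == 100

regression_test()
print("regression suite green")  # coverage would reveal the member branch was never executed
```

Green tests plus low branch coverage is exactly the combination that breeds the pesticide paradox: the same safe paths get sprayed every sprint while the bugs live elsewhere.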

Plan and stay with the development cycle: Do all that can be done, but do not fall one (or more) sprints behind development. Always plan for n-1, n and n+1, and it becomes essential that the testing team participates in sprint and release planning sessions.

Defect management: In Agile, defects are just another type of requirement. A defect can be documented as a requirement to fix it against Story X. In such situations the customer typically needs to pay for new requirements that weren't agreed at the beginning of the project, but should not pay for fixing defects. This drives commitment to zero-defect goals, and merging the requirements and defect management processes into a single, simple task board gives a wonderful opportunity for process improvement and defect prioritisation.

Another practice that works wonders for quality is the all-hands demo: Although several project stakeholders may work directly with the team, there could be several hundred (or thousand) actual users who don't know what's going on. That's why I advocate an "all-hands" demo/review early in the delivery life cycle, typically two to three sprints into the construction phase, to a much wider audience than just the few stakeholders we work with.

Going by Agile-Lean principles, testers should aim for defect prevention and not just focus on finding defects. From this, the definition of a defect from the Lean perspective is:

– Building something with low quality (having bugs, traditional testing perspective)

– Building something the customer did not ask for because a requirement was misunderstood (BDD can help via executable requirements)

– Building something the customer did ask for, but later realized it was not what they meant and they do not want it now (BDD can help, also frequent all-hands demo)

Metrics:

Go beyond the typical Agile metrics such as burn-up and burn-down charts, velocity and story-point tracking. Some metrics that can lead to high quality:

  • Running Tested Features
  • Business Value burn-up
  • Technical Debt
  • Code Metrics – Cyclomatic complexity, Coding standards violations, Code duplication, Code coverage, Dead code, Code dependencies, Abstractness
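Of the code metrics above, cyclomatic complexity is the easiest to picture: decision points plus one. Real tools (radon, SonarQube, lizard) handle far more node types, but a deliberately simplified sketch of the counting idea looks like this:

```python
import ast

# Branch-introducing constructs counted by this simplified metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                ast.ExceptHandler, ast.And, ast.Or)

def cyclomatic_complexity(source: str) -> int:
    """McCabe complexity, simplified: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

sample = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(sample))  # 3: two `if`s plus the default path
```

Tracked sprint over sprint, a rising trend in numbers like this is an early warning of accumulating technical debt, which is exactly why it belongs next to velocity on the team dashboard.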

Metrics are merely indicators and trends; they should not lead to blaming individuals in the team. They help us get the desired Agile "inspect and adapt" feedback loop, and we don't collect data just for the sake of collecting data.

Hope the above helps, and when in doubt, check the Agile Manifesto and the guiding Agile Principles.

Testing the system behavior which is not there

Yesterday, we gave a demo of our mobile test automation POC to a client for their Android app in the financial domain. It was a simple scenario: log in to the Android app, then test the market-watch feed for the logged-in user.

The screen as below:

The POC was based on an existing manual test scenario, with test data suggested by the client. Using the correct login ID and password takes the user to the welcome screen. If the user ID or the password is incorrect, the application throws an error message and asks you to try again with valid credentials.

After the demo of the successful login and the further steps in the POC, the client representative asked us to update the test to use an invalid password that was very long. The suggested data was "anand long name in the password 8hj", a 35-character string. When we reran the test, the login was unsuccessful because the password was invalid; however, the password field did not accept the full string and truncated it to 12 characters: "anand long n". So the conclusion was that the test passed, as it did not allow login with an invalid password, with the observation that the test did not do enough to catch the fact that the entire password string was not used, and that the test data for the password was actually truncated.

Now, should the above test trap this condition and not allow any characters beyond 12 to be typed into the password field? Should it throw an error?

What do you think?

Here is my $0.02…

I would look at the required behavior of the system in the same scenario. What happens when we run the test manually? Does it conform to the requirement? The manual test, and thus the test automation script, needs to validate this behavior of the system.

It would be wrong for a test case to validate something that is not part of the system's behavior, and even worse to trap it and override the system to give warnings and errors.

In the above test case, when I manually enter the password, after 12 characters the system does not let me type any more characters; but please note that it does not stop me from typing further characters into the field. If I were not looking at the screen, I would simply type away all the characters and proceed with the next steps. A test automation tool is like a person who does not look at the screen and only interacts with the system under test.

This seems innocuous at first, but consider this: if the valid password is "123456789012" (a 12-character string) and I use "123456789012345" (a 15-character string) as test data, the system would let me log in with invalid test data because it truncates the long string into the valid password.
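That risk, and the defence against it, can be sketched in a few lines. The field model and 12-character limit below mirror the observed app behaviour but are otherwise illustrative; in a real Appium script the "read back" step would read the field's value from the device rather than from a stub:

```python
MAX_LEN = 12  # field limit observed in the app during the demo

def field_accepts(typed: str) -> str:
    """Model of the password box silently truncating whatever is typed."""
    return typed[:MAX_LEN]

VALID_PASSWORD = "123456789012"

def login(password: str) -> bool:
    """The system checks only what the field kept, not what was typed."""
    return field_accepts(password) == VALID_PASSWORD

# Naive automation: types 15 characters and never checks what the field kept,
# so the "invalid" long password logs in successfully.
typed = "123456789012345"
print(login(typed))  # True — logged in with supposedly invalid data

# Defensive automation: read the field back and compare with what was sent.
kept = field_accepts(typed)
if kept != typed:
    print(f"WARNING: input truncated from {len(typed)} to {len(kept)} characters")
```

The read-back check is the automation equivalent of the human tester glancing at the screen: it turns silent truncation into an explicit, reportable observation.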

Type away!!

Now it is up to the users to decide what the system should do if I am happily typing away even though the password field allows a maximum of 12 characters:

  1. The system should let the user type more than 12 characters and not truncate (since enforcing a visible limit may pose a security risk by revealing the maximum length of the password)
  2. The system should pop up an error message when the user reaches the maximum number of characters allowed in the password
  3. Or keep the existing behavior, where users can type as many characters as they like but the system truncates to the maximum allowed, without the user's knowledge.

Once we settle on the expected system behavior from both the functional and the UX testing perspectives, we need to validate that behavior accordingly with manual and automated test cases.

I would also like to bring to your notice that the above scenario is an example of how testing can drive functional as well as user-experience requirements when it is done alongside, or prior to, system requirements gathering. This helps us study requirements from a behavior perspective, which can then evolve into better test cases and, eventually, higher quality.

Happy testing!!