Testers will not integrate with Team

Page history last edited by PBworks 15 years, 9 months ago

 

1. Smells

 

  • Long bug-fix cycles.
  • Testers are not colocated with the rest of the team.
  • Testing personnel are only available part-time on the project.
  • Frequent change of testing personnel involved with project.
  • Lack of respect for testers' work.
  • Developers routinely release highly buggy software to testers.
  • Developers are defensive about their code when testers find bugs.
  • Influential test manager fiercely protects testers' interests over the interests of the project team.
  • Testers insist on using tools in such a way that they minimise face-to-face interaction with developers.

 

 

2. Discussion

 

For organisations that are structured around specialist disciplines, a separate testing or Quality Control team may seem natural. Scrum, however, calls for multi-disciplinary teams that can self-organise to employ each discipline simultaneously and take concepts to quality, delivered software as rapidly as possible. Organisational boundaries such as those between development and testing hinder such goals and can be seen as a hangover from a waterfall-style approach.

 

Generally, organisational boundaries impede both communication and the visibility of the value stream. The latter dynamic greatly reduces the likelihood that the team will optimise its practices to improve end-to-end efficiency. In this example, having a separate QC team may obscure the cost of deferring thorough testing. The immediate goal of developers becomes releasing to the QC environment as soon as possible and letting QC pick up any holes, rather than proactively creating automated tests and working on practices such as Test Driven Development and Acceptance Test Driven Development that may prevent defects from being released outside the dev environment in the first place. One could see this as the Scrum Team using QC as a 'crutch' that reduces the incentive to "build quality in".

 

 

3. Causes

 

  • Testing personnel bonded more strongly as a testing team than as a cross-functional Scrum team.
  • Rigid organisational structure around testing/Quality Control as separate team.
  • Quality Control is predominantly non-technical and does not find developer discussions relevant or intelligible.
  • Scarce testing specialists shared across multiple projects.
  • Testing personnel/management sees it as necessary to be independent to maintain an objective/user-centric view.
  • Incentives reward work within department/discipline over project work.
  • The organisation lacks trust in developers to produce software of sufficient quality without outside audit. 
  • Interpretation of quality standards such as ISO 9001 as requiring a separate testing group.
  • Confusion over the difference between Quality Assurance and Quality Control, i.e. the testing group calls itself QA.
  • Part or whole of testing capability outsourced to a separate organisation and/or offshore.

 

 

4. Consequences

 

  • Big stabilisation efforts of indefinite duration impacting the release schedule
  • Customers end up testing the software and finding significant defects
  • Increased costs and lost revenue
  • The overhead of heavyweight tools used to manage communication between testers and the rest of the Scrum Team

 

5. Prevention

 

  • Insist on sufficient testing skills during team formation.
  • Build quality in and lessen the testing workload by investing in Test Driven Development, including Acceptance Test Driven Development with tests defined prior to implementation code.
  • Cross-skill existing team members to cover testing work.
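As a minimal sketch of the test-first discipline mentioned above (the discount rule, function name, and criteria are all hypothetical, invented for illustration), the acceptance tests below would be written from the Product Owner's criteria before any implementation code existed, and would fail until the code satisfies them:

```python
# Hypothetical example of Acceptance Test Driven Development: the
# acceptance criteria are captured as executable tests first; the
# implementation is then written to make them pass.

def apply_discount(total, member):
    """Members receive a 10% discount on orders of 100 or more."""
    if member and total >= 100:
        return round(total * 0.9, 2)
    return total

# Acceptance tests, defined before the implementation above was written.
assert apply_discount(100.0, member=True) == 90.0    # member at threshold
assert apply_discount(100.0, member=False) == 100.0  # non-member pays full price
assert apply_discount(99.0, member=True) == 99.0     # below threshold, no discount
print("all acceptance criteria pass")
```

The point is the ordering, not the rule itself: because the tests predate the code, defects in the discount logic would be detected inside the dev environment rather than by a downstream QC team.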

 

6. Example Remedies

 

A - Team takes responsibility for testing

The Scrum team may choose to build up its testing practices to reduce reliance on the separate testing team. This may involve assuming responsibility for system testing and/or acceptance testing.

 

If system/acceptance test automation has been limited due to limited technical skills within the testing team, shifting this activity to the Scrum Team may be an opportunity to increase the degree of test automation.

 

A danger of this approach is the potential to upset the testing team and/or test manager if the testing team feels disempowered as a result of shifting work to the Scrum Team. The risk of this should be assessed before making such a change and this may need to be carefully managed at a political level.

 

It may be valuable to recruit one or more members from the testing team to the Scrum Team to add testing skills and re-balance team resourcing appropriately.

 

B - At least one core team member from the testing team is dedicated to the project for the entire duration

The continual overlapping of activity types (analysis, design, coding, testing, etc.) is central to the Scrum model. This requires consistent availability of a testing capability adequate to provide feedback as early as possible and to ensure that PBIs can be pursued to 'DONE' within a single sprint. If the testing team's involvement is required to achieve this, then the minimum required is one core testing team member dedicated to the project and consistently available to work with the rest of the Scrum Team on a day-to-day basis.

 

In negotiating a dedicated testing resource, it may be useful to point out how this constant involvement can be expected to reduce resourcing variability, i.e. it will be less likely that multiple testers will be required at peak periods, as constant effort should lessen the likelihood of the rapid build-up of testing debt that occurs when testing personnel are unavailable.

 

 

C - Find and highlight pain points to both testers and rest of team

This involves consulting separately with both the testing team and the Scrum Team to identify their respective pain points before identifying common issues for resolution. It is best to have such a discussion as a combined group focused on real examples using retrospective activities to brainstorm potential solutions and have the group formulate actions.

 

It is likely that the two groups have pain points that correspond. For example, testers may be frustrated that the Scrum Team is often pushing to demote the priority of bugs in the defect tracking system. The Scrum Team may be frustrated that many of the descriptions on bugs assigned to them are vague and not readily reproducible, and as a result seem like low-priority issues. The underlying problem may be a communication issue that could be highlighted, and may result in both groups agreeing to communicate face-to-face about unclear defects, including providing constructive feedback on how to provide the necessary detail to make clear how to reproduce them and why resolution of the defect is valuable to the customer.

 

 

D - Raise the visibility of quality metrics

Metrics can be used to raise the awareness of both teams as to their performance in the area of concern. A key metric is time from defect creation to detection. If such data is captured over multiple sprints or releases and then analysed with both the Scrum Team and testers, it may reveal a correlation between defect detection speed and time to release readiness. This may motivate both groups to work together to detect defects earlier.
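A minimal sketch of how the creation-to-detection metric might be computed; the defect records, field names, and dates here are invented for illustration, and real data would come from the team's defect tracker:

```python
from datetime import date
from statistics import mean

# Hypothetical defect records: when each defect was introduced vs. found.
defects = [
    {"id": "D-101", "created": date(2024, 3, 1), "detected": date(2024, 3, 2)},
    {"id": "D-102", "created": date(2024, 3, 1), "detected": date(2024, 3, 20)},
    {"id": "D-103", "created": date(2024, 3, 5), "detected": date(2024, 3, 8)},
]

def detection_lag_days(defect):
    """Days from defect creation to detection."""
    return (defect["detected"] - defect["created"]).days

lags = [detection_lag_days(d) for d in defects]
print(f"mean lag: {mean(lags):.1f} days, worst: {max(lags)} days")
```

Even a crude chart of this mean per sprint, posted where both teams can see it, makes the trend discussable at retrospectives.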

 

 

E - Highlight the cost in time-to-market to management

As per the Discussion section above, the organisational barrier between the Scrum Team and Quality Control may well be costing the organisation in terms of time-to-market through significant delays between defect creation (in analysis, design, coding, etc.) and defect detection. This may be leading to bug-fix cycles of unpredictable length toward the end of sprints or release cycles.

 

One way of explaining this to management is that the longer that defects go undetected, the longer the organisation lacks information on work remaining to deliver a high quality product to market. Thus, time-to-market can be improved through practices that bring defect detection earlier. Such industry standard practices may include:

  • analysis workshops involving the client and/or domain experts,
  • design reviews,
  • regular PO review of features in progress,
  • Test Driven Development (TDD) including Acceptance Test Driven Development (ATDD) and
  • early User Acceptance Testing. 

 

F - Use an immersive simulation exercise

An immersive simulation exercise can be used to highlight inefficiencies in current practices. Such an exercise might simulate queues of work through the teams and the associated processing times. Times from defect creation to detection to resolution could be measured, and the group could be challenged to optimise the process to reduce these figures and thereby improve time-to-market.
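Such a simulation can also be sketched numerically. The toy model below (all parameters invented) treats handing work to a separate QC team as testing in large batches, versus testing continuously within the Scrum Team as a batch size of one, and measures the resulting delay between defect creation and detection:

```python
import random

random.seed(1)

def simulate(batch_size, n_items=100, defect_rate=0.3):
    """One work item is finished per day; defective items are only
    detected when their whole batch reaches testing, so larger batches
    mean longer waits between defect creation and detection."""
    delays = []
    for day in range(n_items):
        if random.random() < defect_rate:
            # Testing happens when the current batch completes.
            batch_end = ((day // batch_size) + 1) * batch_size
            delays.append(batch_end - day)
    return sum(delays) / len(delays)

# Handing work to a separate QC team in release-sized batches vs.
# testing each item within the Scrum Team as it is finished.
print("mean detection delay, batch of 20:", simulate(20))
print("mean detection delay, batch of 1: ", simulate(1))
```

With a batch size of one, every defect is found the day after it is created; with release-sized batches the average delay grows with the batch, which is the time-to-market argument in miniature.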

 

7. Case Studies

 

Case Study A

 

The organisation had a rigid structure that emphasised testing/Quality Control as a separate team. There were several reasons for this, including the following organisational impediments.

  1. Scarce testing specialists were shared across multiple projects.
  2. Testing personnel/management saw it as necessary to be independent of the coders to maintain an objective/user-centric view for the purposes of black box testing from a user-centric viewpoint.
  3. QA was predominantly non-technical and did not find developer discussions relevant or intelligible.
  4. The organisation prided itself (particularly at management and marketing levels) on its Quality Management System which included a Quality Assurance team marketed as a stand-alone service.

 

QA was responsible for Quality Control in addition to QA activities and in the minds of most others, QA was synonymous with testing.

 

The value of having testers involved in analysis was well understood but, due to understaffing, this exacerbated the problem of QA staff being tasked with multiple projects, including analysis-only projects.

The problem of the unintegrated testing team was made visible by two smells:

  1. target release dates were repeatedly missed due to protracted bug-fix cycles, and
  2. there were repeated delays resulting from the inability to find sufficient testing resources at short notice when needed.

 

The second problem resulted in substantial backlogs of testing work building up, often requiring more than one tester to work through in reasonable time, which put further pressure on resourcing. This issue was felt acutely by the testing team, and the test team manager in particular, and constituted a common pain point between the Scrum and Test teams.

 

In addition to surfacing this common pain point, a further remedy was to seek agreement that at least one tester would be dedicated to the project at all times. This remedy seemed like the most logical solution based on the second smell (above) and was the most readily implementable as overcoming the organisational impediments (above) was difficult and would take some time.

 

As could be expected, a release retrospective highlighted how the release was substantially delayed due to late defect detection and hold-ups due to lack of tester availability. The Scrum Team felt that the project would be better served by a dedicated tester being available at all times. Such a tester would be able to detect defects much earlier by being more involved in analysis and design as well as reviewing 'pre-QA' releases made every day.

 

A significant side-effect of the 'pre-QA' testing activity, one that directly addressed the team integration issue, was that it encouraged the tester to collaborate much more closely with the Scrum Team to understand the latest changes and discuss possible defects, without using a heavyweight process and/or tool that was clearly overkill for testing perceived as much less formal. The dedicated tester moved into the Scrum Team room for significant periods rather than walking to the other side of the building or making a phone call every time a question came up. This effectively made the tester a permanent Scrum Team member with the same level of involvement as other Scrum Team members.

 

This strategy proved very successful in reducing defect detection delays and cutting release stabilisation time to a fraction of what it had been.

 

 

 

Credit: this is based on material from A Playbook for Adopting the Scrum Method of Achieving Software Agility, 53 p. version, 2005.

 
