Monday, March 19, 2012

Findings from Sprint Retrospective

In this post I describe several common issues encountered during Sprint execution and actions that can be taken to avoid them during software development.

This is drawn from my experience working with SCRUM on a few projects in different organizations, and from brainstorming with different team members.

Knowledge of these issues helps address them right from the beginning. The following issues are covered in more detail in this post:

  1. Uncontrolled Changing Requirements
  2. Majority of team members are not aware of overall functionality
  3. Too many defects post release - Lack of Testing done by Developer pre-release
  4. Too many defects post release - Unavailability of Test cases to developers
  5. Lack of Reviews or Delayed reviews
  6. Lack of Analysis during implementation or Bug fixing
  7. Lack of Unit test cases
  8. Not planning performance and concurrent test scenarios
  9. Lack of separate Testing Environments
  10. Sprint Retrospectives done only as a formality
  11. Team not co-located
  12. Developer skill set not up-to-date
  13. Lack of knowledge on Newer unexplored technology

Issues and Prevention Mechanism: 

1.       Uncontrolled Changing Requirements:

  • Uncontrolled changing requirements lead to rework and missed implementations.
  • Keep Sprints short (say 2-3 weeks). This way we choose only a limited set of requirements to begin with, which gives Business Analysts / Product Owners time to prepare detailed requirements for subsequent sprints without pressure.
  • If a requirement within the current sprint changes, we must either take it as a new requirement which can then be taken in a subsequent sprint or if it is critical, we must re-estimate and then move some other requirement out of the sprint.
  • Requirements must be tracked in a SCRUM tool, e.g. TFS, Rally, JIRA. This makes it easy to identify and re-plan effort. In my opinion, Excel does not work in these situations.
  • Changing a requirement is costly: it involves re-analysis, re-design and re-implementation, and often cleanup of the previous implementation. This time must be estimated properly, otherwise overall quality will suffer.

Note: I have another blog post on this topic, Change is constant.

2.       Majority of team members are not aware of overall functionality
  • Sprint planning session must involve all team members
  • Sprint planning involves two parts: the first analyzes the user stories to be included in the Sprint, and the second covers estimation and task breakdown. Involving the entire SCRUM team helps everyone stay on the same page.
  • When high-level system design is done collectively by the team in one room, all members understand every part of the current sprint's implementation. In my opinion this is highly recommended, as it mitigates issues such as resource movement, resource unavailability, lack of analysis, and gaps in skill set.
  • Existing functionality should be demoed, and what the team wants to achieve must be clarified upfront to the entire team.
  • Any changes to the items agreed upon (requirements / design) must be communicated during stand-ups, in 1-2 lines, so that the stand-up meeting does not exceed its planned time.

3.       Too many defects post release - Lack of Testing done by Developer pre-release
  • Yes, developers must test all the positive scenarios they are responsible for before delivering code for testing. They need not perform tests like boundary conditions or check the impact on other developers' code, but they must verify that whatever they promised to deliver is in working condition when they hand it over for testing.
  • At times this is not possible, as a developer might be implementing just a part of the overall functionality, so their work might not be testable in isolation. For this, keep a bucket for developer testing once that functionality is complete (by all involved developers); any one developer can then complete a round of scenario testing before handover.
  • Sprint done criteria must include developer testing, and this must be conveyed and agreed upon in Sprint planning.
  • This eventually saves time: a defect identified during testing involves fixing, build, deployment and re-testing, so it is better to catch it at the first step. Moreover, the developer knows the possible internal problems better than the tester, who does black-box testing. Continuous builds and unit tests save some time, but nothing beats a guarantee by the developer.
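As a sketch of the idea above (the function and scenario names here are hypothetical, not from any real project), a developer's pre-handover check can be as small as a script that runs the promised positive scenarios end to end and refuses the handover if any fail:

```python
def create_order(items):
    # Stand-in for the real code under development.
    if not items:
        raise ValueError("order must contain at least one item")
    return {"status": "created", "count": len(items)}

# Positive scenarios the developer promised to deliver.
SMOKE_SCENARIOS = [
    ("single item order", lambda: create_order(["book"])["status"] == "created"),
    ("multi item order", lambda: create_order(["book", "pen"])["count"] == 2),
]

def run_smoke_tests():
    failures = [name for name, check in SMOKE_SCENARIOS if not check()]
    if failures:
        raise SystemExit(f"Not ready for handover, failing scenarios: {failures}")
    print("All positive scenarios pass; OK to hand over for testing.")

run_smoke_tests()
```

Even a tiny script like this makes "developer testing done" a concrete, repeatable check rather than a verbal assurance.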

4.       Too many defects post release - Unavailability of Test cases to developers
  • Test cases must be prepared by testers (as they have a different mindset towards software). These test cases must be provided to developers even before development begins, so that they have a fair idea of how their code fits in.
  • Test cases must be reviewed by developers: they know the white-box aspects of their code and can identify gaps at the testers' end. Test case review must be added to the done criteria for each User story.
  • For re-opened defect fixes, I have found it helpful for the tester to verify the fix on the developer's machine, since a second re-open can indicate a serious gap in analysis.

5.       Lack of Reviews or Delayed reviews
  • Reviews must be done for everything: requirements, design, analysis, code, test cases, approaches, process and methodology. Lack of review and delayed review lead to the same sort of losses.
  • Reviews must be given topmost priority
  • If a review needs to be done by someone outside the team, it must be planned in advance, and at least one backup reviewer must be identified.

6.       Lack of Analysis during implementation or Bug fixing
  • This is often the cause of defect induction and the re-opening of other defects. Proper time must be allocated to analysis.
  • Analysis must be reviewed by experienced developers before the fix is made to ensure that all related areas are getting covered.
  • A mapping of test cases to application components helps in running test scenarios against the component being developed or fixed.

7.       Lack of Unit test cases
  • Many managers view unit testing as an extra time investment; however, for code that is not throwaway, unit tests save time later during development and testing cycles.
  • Unit testing helps identify defects early in the cycle and provides a safety net. You won't benefit from a safety net with large holes, so unit testing must be planned from the beginning, or added incrementally to an existing application.
  • A code refactoring estimate must include the effort to fix failing unit tests. In fact, a refactoring exercise should not begin with coding, but with unit testing.
  • The continuous build must run the unit tests.
  • Developers must be made aware of the importance of unit testing.
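As a minimal illustration of the safety net idea (the business rule and names here are hypothetical), a unit test written with Python's built-in unittest module fails the continuous build the moment a refactoring breaks the behaviour:

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical business rule: discount must be between 0 and 100 percent.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest` in the continuous build; the tests document the intended behaviour for the next developer as well.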

8.       Not planning performance and concurrent test scenarios
  • Most applications today are used by hundreds or thousands of users. Test cases must cover concurrent usage for each such possible flow.
  • Time for performance testing must be estimated, since some functional issues that are not evident during normal testing are discovered during performance testing.
  • The sooner performance testing is done, the better: identifying and fixing performance issues takes time, and detecting such issues requires extensive logging in the system.
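A concurrent scenario can often be sketched with the standard library alone. In this hypothetical example (a toy seat-booking resource, not from any real project), many threads hit the same operation at once, and the assertions catch overselling that a single-user functional test would never reveal:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class SeatCounter:
    """Toy shared resource; the lock makes concurrent booking safe."""
    def __init__(self, seats):
        self._seats = seats
        self._lock = threading.Lock()

    def book(self):
        with self._lock:
            if self._seats > 0:
                self._seats -= 1
                return True
            return False

    @property
    def seats(self):
        return self._seats

def concurrent_booking_test(total_seats=50, users=200):
    counter = SeatCounter(total_seats)
    # 200 simulated users race to book 50 seats.
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(lambda _: counter.book(), range(users)))
    # Exactly total_seats bookings should succeed, never more.
    assert sum(results) == total_seats
    assert counter.seats == 0
    return sum(results)

print(concurrent_booking_test())  # prints 50
```

Removing the lock from `book()` is an instructive exercise: the test then fails intermittently, which is exactly the kind of defect that only concurrent scenarios expose.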

9.       Lack of separate Testing Environments
  • Developer machines are loaded with all the necessary (and many extraneous) components and are nowhere near the actual environment where the application will finally be deployed. Software that runs on a developer machine will not necessarily run on a fresh machine, so the testing environment must be separate.
  • Separating the development and testing environments gives testers control over which components they test and when, and offers the maximum available time to test.
  • Performance testing environment should be as close to the actual production environment as possible.

10.   Sprint Retrospectives done only as a formality
  • The Sprint retrospective is a feedback check on the SCRUM process. It must be taken seriously.
  • Action items identified must be assigned owners, and a timeline to resolve them must be agreed upon.
  • Action items in previous retrospective meetings must be discussed so that the findings are not lost.

11.   Team not co-located
  • Team co-location is of utmost importance for swift communication within the team. Testers, architects, developers and managers should all sit in the same area, or in the same room if that can be arranged.
  • If this cannot be achieved due to distributed teams, phone or video conferencing should be preferred over email.

12.   Developer skill set not up-to-date
  • Maintaining a competency matrix and training team members on newer technologies is a win-win situation
  • Knowledge of newer technologies equips team members to think in efficient and unique ways and to achieve faster, better-informed results.
  • Continuous, relevant training keeps staff motivated, improves job satisfaction and increases employee retention.

13.   Lack of knowledge on Newer unexplored technology
  • Hiring a consultant for the required period can help achieve mastery in unexplored areas, and staff can gain that knowledge by working alongside the consultant.

Over time, I will try to update this list and the possible resolutions. I would like to know of other issues you have identified during Sprint retrospectives and the steps you took to counter them. Please leave them as comments, and I will add them to the list above.