Are developers responsible for testing?
I strongly believe that the future of software development will involve more developer responsibility for quality. A little justification for this:
- In my experience, expectations of the level of quality control a QA team can provide are often unrealistic; QA teams are frequently used as scapegoats, given little supervision or direction - effectively set up to fail
- QA teams often lack the depth of exposure to the business requirements that developers have, so developers are better equipped to find bugs
- Anecdotally, developers tend to be more lax with their work if they know another team of people will be checking that their code actually works
- Catching bugs earlier is generally regarded as a cheap way of improving software productivity, so placing greater emphasis on developers writing code which is "right first time" serves this end
- Developers *know* where the weak points and edge cases are in their implementation, so are often the best people to spot flaws
- Developers are more inclined to be mindful of the business value of what they are producing if they have more of a stake in its success; in my opinion this leads to more useful developers!
- Developers are often strongly involved in helping QA teams understand the business requirements of a feature during the testing phase. In extreme cases - where the development team is essentially setting out how the QA operative should do their job - you might be tempted to question how much value the QA team is actually adding here
- The move towards more automated UI/integration testing necessitates a greater involvement of those with a developer skill-set in the testing process
That's not to say that dedicated QA personnel don't have a part to play in modern software development - it's just that I believe this resource needs to be used more sparingly, and in a more targeted fashion. At the very least, your team should know exactly *what* the QA team is doing, so you can establish which aspects are being missed and may therefore need to be covered by other processes or personnel.
There are a few tricks which I think development teams often fail to take advantage of, and which can increase quality (and, by implication, productivity). One of these is to ensure you are getting maximum value out of your code-level tests. Additionally, I think that well-thought-out peer review goes a long way towards improving the quality of software before it even appears in front of a QA team - however, I often see developers focussing on reviewing aspects of implementation which are *easy* to review, rather than things which *matter*! Being a bit pedantic, I also prefer to refer to this process as “peer review” rather than “code review” - as code may not be the only output of the development process which can go wrong!
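On getting maximum value from code-level tests: in my view, the highest-value tests are the ones that pin down a business rule at its boundaries, rather than restating implementation details. A minimal sketch of what that looks like (in Python for brevity - a Sitecore codebase would use C# - and with `bulk_discount` as a purely hypothetical pricing rule, not anything from a real project):

```python
def bulk_discount(quantity: int, unit_price: float) -> float:
    """Hypothetical business rule: 10% off orders of 10 or more units."""
    if quantity < 0:
        raise ValueError("quantity cannot be negative")
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total

# High-value tests exercise the rule's boundaries, where bugs
# (off-by-one threshold checks) are most likely to hide:
assert bulk_discount(9, 10.0) == 90.0    # just below threshold: no discount
assert bulk_discount(10, 10.0) == 90.0   # at threshold: discount applies
assert bulk_discount(0, 10.0) == 0.0     # empty order costs nothing
```

Note that the two boundary tests even surface a quirk worth flagging in review: ordering 10 units costs the same as ordering 9. A test suite that only checked "a discount is applied somewhere" would tell a reviewer far less.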
To this end, I've created a suggested checklist which a developer can use when reviewing another developer's work on a feature in a Sitecore application. The items are arranged in approximate “priority order”, with the “harder, more time-consuming, more important” stuff at the top and the “easier, less time-consuming, less important” stuff at the bottom.
Consider whether you are falling into the trap of doing the things which are quickest and easiest, rather than the things which have the most business value!
[IMPORTANT STUFF / HARDER STUFF TO REVIEW] ...
- Do you fully understand the requirements? Do the requirements make sense? Are there any obvious gaps in the requirements information?
- Given the above, are you confident that the implementation you are reviewing has a realistic chance of meeting the requirements? This is not necessarily a full "QA-ing" of the ticket - just a few basic checks (at least view the feature in a browser) to determine that the implementation is not fundamentally flawed with regard to the business requirements
- Does the information architecture (IA) make sense for the client’s requirements? Is it consistent with the existing IA? Is it going to be maintainable?
- Are code tests in place where appropriate?
- Do all tests pass (new and existing)?
- Are code components composed in a maintainable fashion?
- Is there a sensible approach for deploying and testing the feature on a QA environment? Especially concerning "messy" stuff like Sitecore content?
- Is there a sensible approach for deploying the feature to the production environment? Especially concerning "messy" stuff like Sitecore content?
- Are the code components themselves maintainable?
- Will the client need to populate any content on their environments before the feature is deployed? Is this documented?
- Has appropriate example dev content been created for future use, where relevant?
- Are code components free of whitespace issues, spelling errors, etc.? Attention to this detail is great, but if the points above it are not dealt with correctly, a spelling mistake is largely irrelevant!
... [LESS IMPORTANT STUFF / EASIER STUFF TO REVIEW]