So you outsourced your organisation’s Sitecore development …
… problem solved? Things aren’t usually that easy. If your supplier isn’t allowed access to your Production/UAT environments, then your organisation will have to handle the progression of each release once it’s handed over. If your Sitecore Application comes within the scope of PCI DSS, your organisation is ultimately responsible for ensuring each new release meets the requisite security guidelines. In addition, your organisation will need a defined process to assess whether each release from your supplier is accepted or rejected on other functional and “non-functional” criteria. And we’ve not even started to consider the day-to-day support which the supplier will need from your organisation to ensure they can complete their work. Below I’ve summarised what capabilities your organisation will need in place to be able to competently assess and deploy a release from your supplier - feel free to ignore the sections which don’t apply to your situation…
Build/deployment tooling and processes
Assuming your supplier “hands over” a release to your organisation prior to the UAT stage of the deployment lifecycle, your organisation is going to have to manage the storage/retrieval/build/deployment of the code, to ensure it is delivered to the UAT and Production environments in one piece. This potentially means ownership and support of a source control repository, build tooling and configuration, deployment tooling and configuration, and a coherent plan for how all of these things are to be used. I would suggest that, at minimum, you have a checklist of all activities which need to be performed from the “handover” point right up until the release has made it up to Production. It should be clear who owns each step on the checklist, and at any point it should be clear what the current status of a release is.
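To make the checklist idea concrete, here is a minimal sketch of how such a handover-to-Production checklist could be modelled, with a named owner per step and a way to answer "where is this release right now?" at a glance. The release name, step descriptions and owners are all hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistStep:
    """One step in the handover-to-Production checklist, with a named owner."""
    description: str
    owner: str
    done: bool = False

@dataclass
class ReleaseChecklist:
    release: str
    steps: list = field(default_factory=list)

    def current_status(self) -> str:
        """The first incomplete step tells you where the release currently sits."""
        for step in self.steps:
            if not step.done:
                return f"{self.release}: waiting on '{step.description}' (owner: {step.owner})"
        return f"{self.release}: deployed to Production"

# Hypothetical release part-way through the pipeline:
checklist = ReleaseChecklist("Release 2.4.1", [
    ChecklistStep("Retrieve package from source control", "Acceptance team", done=True),
    ChecklistStep("Build and deploy to UAT", "Acceptance team", done=True),
    ChecklistStep("UAT sign-off", "Product Owner"),
    ChecklistStep("Deploy to Production", "Ops team"),
])
print(checklist.current_status())
```

Even a spreadsheet achieves the same thing - the important part is that each step has exactly one owner and the current status is unambiguous.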
UAT and Production environments
Patching the OS on your application servers is probably taken care of by an internal team or hosting provider, but that still leaves lots of other application-level concerns to be maintained by your “Acceptance and Delivery” team. Either use an Infrastructure as Code (IaC) approach to script as much of your environment specification as possible, or at least document the manual setup steps (think IIS features, deployment software, file permissions, etc.) You never know when you’ll be asked to spin up another application server at short notice! Also, beware of snowflake servers emerging. Up-to-date infrastructure topology diagrams are also invaluable. Cloud storage, CDNs and other application dependencies will also likely be owned by your “Acceptance and Delivery” team. Keep your Sitecore license file up to date! And remember to follow Sitecore’s best practices by routinely performing optimisation tasks such as cache tuning.
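If full IaC isn’t on the cards, even a small drift-detection script helps catch snowflake servers: declare the expected state as data and diff it against what a server reports. The feature names, paths and expiry date below are invented for illustration:

```python
# Hypothetical environment specification: expected server state declared as data.
EXPECTED = {
    "iis_features": {"ASP.NET 4.8", "Static Content", "URL Rewrite"},
    "folders": {r"C:\inetpub\wwwroot\MySite", r"C:\Sitecore\Data"},
    "license_expiry": "2025-12-31",
}

def detect_drift(expected: dict, actual: dict) -> list:
    """Return a list of discrepancies between the spec and a server's actual state."""
    drift = []
    for key, want in expected.items():
        have = actual.get(key)
        if isinstance(want, set):
            missing = want - (have or set())
            if missing:
                drift.append(f"{key}: missing {sorted(missing)}")
        elif have != want:
            drift.append(f"{key}: expected {want!r}, found {have!r}")
    return drift

# A snowflake server that has quietly drifted from the spec:
actual = {
    "iis_features": {"ASP.NET 4.8", "Static Content"},
    "folders": {r"C:\inetpub\wwwroot\MySite", r"C:\Sitecore\Data"},
    "license_expiry": "2024-06-30",
}
for issue in detect_drift(EXPECTED, actual):
    print(issue)
```

In practice the `actual` dictionary would be gathered by querying each server (via PowerShell remoting or similar); the value is in having the expected state written down somewhere machine-checkable.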
Application monitoring strategy
No matter who is in the line of fire for outage alerts, it is likely that the “Acceptance and Delivery” team will be involved in deciding *what* constitutes an outage. For instance, if there’s an issue with Solr, your web application may be operating in a “degraded” state without its pages returning 500 error responses - so monitoring that the Solr servers are happy might add some value. Instrumentation tooling such as New Relic can help pinpoint the cause of outages and record various metrics of your application over the longer term.
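The up/degraded/down distinction can be captured in a tiny classification rule, sketched below. The check names are hypothetical stand-ins for whatever your monitoring tool actually probes:

```python
def overall_status(checks: dict) -> str:
    """Classify application health: the site being up but a dependency (e.g. Solr)
    being down is 'degraded' rather than a full outage."""
    if not checks.get("web_frontend", False):
        return "down"        # pages returning 500s: page the on-call team
    if all(checks.values()):
        return "healthy"
    return "degraded"        # e.g. Solr unhappy: search broken, pages still serve

# Hypothetical results gathered by your monitoring tool:
checks = {"web_frontend": True, "solr": False, "sql_server": True}
print(overall_status(checks))  # degraded
```

Agreeing rules like these up front - which dependency failures page someone at 3am, and which merely raise a ticket - is exactly the kind of decision the “Acceptance and Delivery” team ends up owning.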
Support for internal teams and your external supplier
Regardless of the fact that your development is outsourced, your “Acceptance and Delivery” team is likely to be the first port of call when UAT starts running slowly or a potential bug is discovered in Production. If an internal dependency of your application’s code is malfunctioning, you’ll probably need to resolve this on behalf of your external supplier. Your supplier may need performance metrics, information about infrastructure, feedback from the internal DBA team, log files, etc. - to help with both sprint development and troubleshooting.
Application security strategy
This is particularly important if you need an audit trail to show a PCI QSA. There are several complementary approaches to demonstrating that a release is sufficiently secure. Probably the least practical is to review the source code - there will likely be so much source code (assuming a sizeable outsourced team) that it’s impossible to meaningfully and thoroughly complete such an endeavour. However, it feels negligent to avoid this totally, so in practice a time-boxed eyeballing of key portions of the release’s code changes should be sufficient to notice anything *obviously* unusual or suspicious. Static Application Security Testing (SAST) tools can also be used to identify areas of the source code which are insecure or require further analysis. Finally, web application scanning tools can be used to verify the compiled code by subjecting it to various penetration tests. If you only use these tools on UAT, and not Production, make sure you are aware of any differences between the environments (and therefore any “gaps” in your security strategy.) Assessing application security can be very time-consuming, so there is a balancing act between coverage/effort and the need to keep pushing out regular releases.
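A cheap supplement to the time-boxed eyeball is a scripted scan of the release’s diff for patterns that deserve a second look. This is a sketch, not a SAST tool - the patterns, diff content and C# snippets below are invented examples, and you’d tune the list to your own threat model:

```python
import re

# Hypothetical patterns worth a second look during a time-boxed review:
SUSPICIOUS = [
    (re.compile(r"Process\.Start", re.I), "spawns an external process"),
    (re.compile(r"SqlCommand\(.*\+", re.I), "possible SQL built by string concatenation"),
    (re.compile(r"http://", re.I), "unencrypted outbound call"),
]

def flag_lines(diff_text: str) -> list:
    """Scan the added lines of a unified diff and flag anything matching a pattern."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        if not line.startswith("+"):
            continue  # only review lines the release adds
        for pattern, reason in SUSPICIOUS:
            if pattern.search(line):
                findings.append((lineno, reason, line.lstrip("+").strip()))
    return findings

diff = """\
+var cmd = new SqlCommand("SELECT * FROM Orders WHERE Id=" + id);
 var logger = GetLogger();
+client.GetAsync("http://internal-api/prices");
"""
for lineno, reason, code in flag_lines(diff):
    print(f"line {lineno}: {reason}: {code}")
```

A scan like this won’t catch a determined attacker, but it does focus the reviewer’s limited time-box on the lines most likely to matter - and its output is exactly the sort of artefact that feeds a QSA audit trail.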
Performance testing/NFR strategy
“Performance” and how to define/measure it is the cause of much pain, as there are so many variables which affect how an application performs, and so many perspectives to view performance from. It may be that your organisation has some clearly defined NFRs which your outsourced team are fully aware of and test against - in which case it is slightly easier to justify rejecting a release from your supplier, if it can be proven not to meet these requirements. Even if performance requirements are somewhat less defined, it seems a missed opportunity not to perform at least some rudimentary load testing, which can be used to see if there is any unexpected variance compared to the previous release. Free tools such as JMeter can quickly generate load and measure response times from a local machine, but for serious load and distributed testing you’ll want to consider cloud-based load testing services.
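The “unexpected variance” check can be as simple as comparing a candidate release’s measured response times against the previous release’s baseline. A minimal sketch, where the timings and the 20% tolerance are illustrative assumptions rather than a recommended threshold:

```python
import statistics

def compare_to_baseline(current_ms: list, baseline_ms: list, tolerance: float = 0.20) -> str:
    """Flag the release if median response time regressed more than `tolerance`
    (20% by default) against the previous release's measurements."""
    cur, base = statistics.median(current_ms), statistics.median(baseline_ms)
    change = (cur - base) / base
    if change > tolerance:
        return f"REGRESSION: median {base:.0f}ms -> {cur:.0f}ms (+{change:.0%})"
    return f"OK: median {base:.0f}ms -> {cur:.0f}ms ({change:+.0%})"

# Hypothetical timings (ms) gathered from JMeter runs against UAT:
previous_release = [180, 190, 200, 210, 220]
candidate_release = [240, 260, 250, 270, 255]
print(compare_to_baseline(candidate_release, previous_release))
```

A failed comparison doesn’t automatically mean rejecting the release, but it does give you a concrete, repeatable number to open the conversation with your supplier.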
Technical acceptance strategy
By technical acceptance, I mean things like:
- Is the release delivered such that your organisation is easily able to build it?
- Is the release supplied with adequate technical documentation to enable you to deploy it? With Sitecore this can be especially finicky (manual content changes anyone?)
- Is the release compatible with your deployment automation pipeline?
With build and deploy requirements, one option is to ensure you have matching or shared tooling (using the same TeamCity/Octopus instances, or sharing configuration data between separate instances, etc.) This may well not be possible or practical - one approach you could try is to give a “build specification” and “deployment specification” to your supplier based on your current tooling setup. This way, your supplier is free to organise their internal build and deploy processes independently of yours - as long as they meet the specification contract. For a “build specification” I would describe how a deployment package should look - .NET version, expected Sitecore data packages with folder locations, etc. The “deployment specification” extends this by specifying how your application file system and databases should be modified as a result of deploying your build package - this would include things such as the vanilla Sitecore files being in place, any Sitecore modules, any special file copy rules, etc. A true specification is “declarative” rather than “imperative” - i.e. it specifies the objectives and results rather than prescribing the method by which the results are achieved. Similarly, you should be able to give your supplier a template deployment plan which covers 99% of cases, with gaps for them to add deployment-specific information. The whole point of this section is to ensure that your supplier doesn’t spring a surprise on you without sufficient notice - you can constrain the number of ways they can cause you extra work!
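To illustrate the declarative idea, here is a sketch of a “build specification” expressed as data, plus a check of a supplier’s package manifest against it. The version, paths and file names are all hypothetical:

```python
# A hypothetical, declarative "build specification": it states what the handed-over
# package must contain, not how the supplier should produce it.
BUILD_SPEC = {
    "dotnet_version": "4.8",
    "required_paths": [
        "bin/MySite.dll",
        "App_Config/Include/MySite",
        "packages/content.update",   # Sitecore data package
    ],
}

def validate_package(manifest: dict, spec: dict) -> list:
    """Check a supplier's package manifest against the specification contract."""
    failures = []
    if manifest.get("dotnet_version") != spec["dotnet_version"]:
        failures.append(f"wrong .NET version: {manifest.get('dotnet_version')}")
    for path in spec["required_paths"]:
        if path not in manifest.get("paths", []):
            failures.append(f"missing: {path}")
    return failures

# A supplier package missing its Sitecore data package:
manifest = {
    "dotnet_version": "4.8",
    "paths": ["bin/MySite.dll", "App_Config/Include/MySite"],
}
print(validate_package(manifest, BUILD_SPEC))  # ['missing: packages/content.update']
```

Because the specification is data rather than a build script, both sides can validate against it independently - your supplier as a pre-handover gate, and your “Acceptance and Delivery” team as a technical acceptance check.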
Business acceptance strategy
Here I would describe “business acceptance” as the process of deciding whether the functionality being delivered meets the needs of the business - and doesn’t introduce any problems for the business. There may be a multitude of owners for the different aspects of this process - usually a combination of Product Owners, Content Editors, QA engineers and Project Managers - so the challenge is to ensure the right people have a chance to sign off their parts of the process, and to work out who mediates when there are clashes of priorities. These outcomes need to be aggregated into an overall go/no-go decision. Assuming that the business signs off the release to be pushed to Production, the deployment may then need to be represented at a Change Control meeting to be cross-examined and approved. Related to “business acceptance” is the process of actually coordinating/negotiating when releases are set to arrive at your organisation, ensuring that these schedules reflect business needs, and communicating the relevant information to the relevant stakeholders.
In summary, your organisation needs a solid “Acceptance and Delivery” team with proven and agreed processes in order to handle a supplier’s release being “thrown over the fence”.