Getting feedback from running code

With unit tests and integration tests we can get feedback on the changes we have just made to the code. The larger question is how long it takes, in an organization, to get feedback from real users interacting with that code.

  • With Software as a Service, it is feasible to set up continuous deployment: as long as the tests in the pipeline pass, each change is deployed straight out to production (a minimal sketch of such a gate follows this list)
  • With other delivery models, such as on-premise software or device firmware, it can take considerably longer to get feedback
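
As a rough illustration of the first point, a continuous-deployment gate can be as simple as "run the tests, and only deploy if they pass". The sketch below assumes a hypothetical pytest-based test suite and a placeholder deploy_to_production function; it is not any particular pipeline's API.

    import subprocess
    import sys

    def run_tests() -> bool:
        """Run the test suite; the pytest command is an assumption for illustration."""
        result = subprocess.run(["pytest", "--quiet"])
        return result.returncode == 0

    def deploy_to_production(build_id: str) -> None:
        """Placeholder for whatever deployment mechanism the pipeline actually uses."""
        print(f"Deploying build {build_id} to production")

    if __name__ == "__main__":
        build_id = sys.argv[1] if len(sys.argv) > 1 else "local"
        if run_tests():
            # Tests pass: push the change straight out to production.
            deploy_to_production(build_id)
        else:
            sys.exit("Tests failed; deployment stopped")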

With the more traditional approach of pushing out incremental releases, each release goes through a quality assurance cycle before being made available as a general release. Customers then choose when to upgrade to an available release, or have the release pushed out to their machines. In these cases, the time from the first code change in a release until that release is made available can be significant.

With a 10-week release cycle, code changed at the start of the cycle will not get any feedback until the first customer installs and uses it, at least 10 weeks later.

Obviously the way to get real feedback sooner is to have shorter release cycles, but that is not always possible. A workaround is to find customers who are willing to take a release as soon as it is available, effectively canary customers. These customers benefit from increased monitoring and scrutiny of the running system, as well as earlier access to the features in the release.
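
To make the idea concrete, here is a small sketch of how a release service might decide which version to offer a given customer: canary customers are offered a release as soon as it is available, everyone else only after its general-availability date. The types, names and dates are assumptions made up for illustration, not a real system.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Customer:
        name: str
        canary: bool   # has opted in to take releases as soon as they are available

    @dataclass
    class Release:
        version: str
        canary_availability: date    # offered to canary customers from this date
        general_availability: date   # offered to everyone else from this date

    def is_offered(customer: Customer, release: Release, today: date) -> bool:
        """Decide whether this customer should be offered the release today."""
        if customer.canary:
            return today >= release.canary_availability
        return today >= release.general_availability

    release = Release("4.2.0", canary_availability=date(2024, 3, 1),
                      general_availability=date(2024, 5, 10))
    print(is_offered(Customer("Acme", canary=True), release, date(2024, 3, 2)))     # True
    print(is_offered(Customer("Initech", canary=False), release, date(2024, 3, 2))) # False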