Digital Garden of Paul

There is no need to squeeze on software quality

Organisations are under pressure

Most organisations are in the midst of their digital transformation while coping with, or recovering from, the COVID-19 pandemic. Both require organisations to be flexible and agile. At the same time, new trends and practices are emerging. For example, Gartner sees the future of business in composability. This puts more pressure on IT leaders to transform their organisation and enterprise design so that they can drive both the digital transformation and composability.

In 2018, Vector Consulting Services conducted a survey among 2,000 decision makers about trends and challenges in software engineering, in recognition of the 50th anniversary of software engineering. The study revealed that organisations continue to struggle to achieve quality along with cost and efficiency (Ozkaya, 2021). Regarding quality, Gartner (2020) reckons that organisations have to focus less on whether applications fulfil a long list of requirements; instead, the focus should be on delivering a compelling customer or user experience, moving towards an outside-in, customer-driven perspective of quality.

On its own this displays an interesting paradox. On one side, organisations struggle to balance quality with cost and efficiency. On the other side, quality has become an increasingly important factor in the customer experience. The Facebook outage of October 2021 illustrates this: a single faulty configuration change took Facebook, Instagram and WhatsApp offline for hours, turning an internal quality issue directly into a worldwide customer experience problem.

So, how to cope with this paradox? In our view, automation is paramount for balancing this paradox as a high-performing delivery organisation. Before looking at the mitigating effects of automation, let's first get our basics right.

Are we talking about control or assurance?

Often the terms Quality Assurance and Quality Control are used interchangeably. From the perspective of quality management they are two different things: Quality Assurance looks at the process, whereas Quality Control is concerned with the output. Or, in simpler terms, it is process quality versus product quality. The distinction might not seem important, but the underlying activities and philosophies are different. It is not a question of doing one or the other, but of effectively applying both to increase the quality of the product.

To clarify the distinction between the two it is worthwhile to look at the activities. Quality Assurance is focused on preventing defects, whereas Quality Control is about identifying defects.

| Quality Assurance | Quality Control |
| --- | --- |
| A process that provides assurance that the quality requirements will be achieved | A process that verifies that the quality requirements are fulfilled |
| Aims to prevent defects | Aims to identify and fix defects |
| The process to create the deliverables | The process to verify the deliverables |
| Covers the full software development life cycle | Covers the software testing life cycle |
| Quality audits | Walkthroughs |
| Tool identification and selection | Inspections |
| Proactive measure | Reactive measure |
| Process focused | Product focused |

Distilled from "Quality Assurance QA (PQA & SQA) vs Quality Control (QC)" by Hoipt, TrueMoney Engineering, Medium.

What about Testability?

Software testability has many definitions. A recent literature study by Garousi et al. (2019) lists six definitions from various institutes, alongside 25 further definitions collected from the studies they reviewed. The study makes an interesting classification of these definitions: whether a definition is focused on the facilitation of testing, the facilitation of revealing faults, or both. The definitions from the institutes are primarily centred around the facilitation of testing. For example, ISO standard 12207:2008 defines software testability as:

“extent to which an objective and feasible test can be designed to determine whether a requirement is met”

ISO, "ISO Standard 12207:2008 – Systems and Software Engineering – Software Life Cycle Processes," 2008.

ISO 25010:2011 mixes both classifications in its definition:

“degree of effectiveness and efficiency with which test criteria can be established for a system, product or component and tests can be performed to determine whether those criteria have been met”

ISO, "ISO/IEC 25010:2011 – Systems and Software Engineering – Systems and Software Quality Requirements and Evaluation (SQuaRE) – System and Software Quality Models," 2011.

Given these definitions and the findings of Garousi et al., it is safe to say that testability is primarily focused on ensuring the product can be tested against the set criteria. This is in sharp contrast with Quality Assurance, whose primary goal is the continuous improvement of the quality process; Quality Control is the actual verification of the product. This supports the argument that a testable product leads to a faster and more efficient quality process, resulting in a higher-quality product.

The literature study also lists the techniques most often mentioned to improve testability. These are:

  • Testability transformation
  • Improving observability
  • Adding assertions, increasing the chances of revealing defects
  • Improving controllability
  • Architecture and test interfaces supporting testability
  • Manipulating dependencies (coupling, etc.)

Observability in this context is defined as determining how easy it is to observe the behaviour of a program in terms of its outputs, effects on the environment, and other hardware and software components. It focuses on the ease of observing outputs. Observability directly influences testability: if it is not easy to observe the behaviour of a program in terms of its outputs, testing will be more challenging.

Controllability is the degree to which it is possible to control the state of the component under test as required for testing. Another definition is that it determines how easy it is to provide a program with the needed inputs to exercise a certain condition or path, in terms of values, operations, and behaviours (Garousi et al., 2019).
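To make observability and controllability concrete, here is a minimal Python sketch (the discount rule and function names are invented for illustration, not taken from Garousi et al.). The first version hides its input behind the system clock and only prints its result; the second takes the date as a parameter and returns a value, so a test can control the input and observe the output directly:

```python
from datetime import date

# Hard to test: the outcome depends on hidden state (today's date) and the
# result only goes to stdout, so a test can neither control nor observe it.
def print_discount_hard_to_test(price: float) -> None:
    if date.today().month == 12:      # hidden dependency on the clock
        print(price * 0.9)            # output disappears into stdout
    else:
        print(price)

# More testable: the date is injected (controllability) and the result is
# returned (observability); the dependency on the real clock moves to the caller.
def discount(price: float, on: date) -> float:
    return price * 0.9 if on.month == 12 else price

# Both branches can now be exercised deterministically.
assert discount(100.0, date(2021, 12, 24)) == 90.0
assert discount(100.0, date(2021, 6, 1)) == 100.0
```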

The Iron Triangle

Given the problem statement at the start of this note, we recognise the concepts of time, cost and quality. These are also the three fundamental concepts of the Iron Triangle. The Iron Triangle is a fundamental aspect of how we understand success in a project (Pollack et al., 2018), being a representation of the most basic criteria by which project success is measured. Research by Pollack et al. in 2018 showed that the Iron Triangle still has merit in the modern day and age: the concepts of time, cost and quality have consistent and significant relations with each other. In other words, a strong focus on cost and time will affect the third concept of the Iron Triangle, quality.

Ozkaya (2021) questioned whether we can really achieve software quality. In an article for IEEE Software (volume 38, issue 3) she writes:

The project management triangle, also referred to as the iron triangle, suggests that the expected quality of any work is constrained by the project’s budget, schedule, and scope (features implemented). If we believe the quality triangle to be correct, achieving software quality is always incomplete. We accept that we always deliver below par as we always have to trade off one aspect of the cost, schedule, and scope triad. There are legitimate challenges that make it quite difficult to break the tight coupling among these elements and their influence on software quality.

Given the constraints of the Iron Triangle, one could argue that in order to achieve quality, the impact on time and cost should be kept as low as possible. Which basically means: how can we make quality assurance and quality control as efficient and cost-effective as possible? W. Edwards Deming already argued to "stop depending on inspections", as inspections are costly and unreliable; they merely find a lack of quality. It is imperative to not only find what you did wrong, but to eliminate the "wrongs" altogether. Automation can play a big role here.

Built-in quality

The Toyota Production System is based on two concepts: "jidoka", which can be translated as "automation with a human touch", and the just-in-time concept. Jidoka refers to tools and visual aids for closely monitoring quality during the production process (Fitzgerald & Stol, 2014). In software delivery we see this in the form of the build status, which turns 'red' in case of a broken build or another problem.

Poka Yoke is another element of the Toyota Production System. Its goal is to eliminate, correct or highlight human errors. Poka Yoke has been defined as consisting of checklists, test plans, quality matrices, standard architecture, shared components, and standardised manufacturing processes. Poka Yoke, or Baka-Yoke, are fool-proofing mechanisms that help eliminate mistakes and assist an engineer in identifying problems as soon as possible (Fitzgerald & Stol, 2014).

These elements clearly belong to the Quality Assurance process. As such, they aim to optimise the delivery process by improving the ability to detect faults sooner, whilst maintaining efficiency.
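As a small sketch of how a Poka Yoke style guard can look in code (the deployment scenario and names below are made up for illustration), the idea is to reject an invalid input the moment it appears, rather than letting the mistake travel further down the pipeline:

```python
from enum import Enum

# Poka Yoke in code: invalid states are rejected at the point of entry,
# instead of surfacing as a defect much later in the process.
class Environment(Enum):
    TEST = "test"
    ACCEPTANCE = "acceptance"
    PRODUCTION = "production"

def deploy(artifact: str, environment: str) -> str:
    env = Environment(environment)  # a typo like "prodcution" raises ValueError immediately
    if not artifact.endswith(".jar"):
        raise ValueError(f"expected a .jar artifact, got {artifact!r}")
    return f"deploying {artifact} to {env.value}"

print(deploy("shop-1.2.3.jar", "production"))   # ok
# deploy("shop-1.2.3.jar", "prodcution")        # fails fast with ValueError
```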

In modern software delivery these ideas return in continuous delivery: keeping the software in a releasable state at all times by automating the build, test and deployment process, so that faults are detected as early as possible.

Applying these elements to Quality Assurance

Software delivery holds various practices that are inspired by the concepts of the Toyota Production System. In Accelerate, lead scientist Dr. Nicole Forsgren describes the practices of high-performing technology organisations and their implications. The research found that high performers spent 49% of their time on new work, compared to 38% for low performers (Forsgren et al., 2018, p. 213). Regarding unplanned work, high performers spent 21% of their time, against 27% for low performers.

The difference between these performers is found in the adoption of, among others, technical practices. The name for this set of practices is continuous delivery. These practices are:

  • Version control
  • Deployment automation
  • Continuous integration
  • Trunk-based development
  • Test automation
  • Test data management
  • Shift left on security
  • Loosely coupled architecture
  • Empowered teams
  • Monitoring
  • Proactive notification

From the perspective of quality assurance and quality control, particular interest and emphasis lie on test automation and test data management. One of the fundamental foundations of continuous delivery is continuous testing. Accelerate describes continuous testing as "testing [..] should be done all the time as an integral part of the development process". It requires automated unit and acceptance tests to be run against every commit to version control. This allows for fast feedback to developers on their changes. Fitzgerald & Stol (2014) describe continuous testing as:

A process typically involving some automation of the testing process, or prioritisation of test cases, to help reduce the time between the introduction of errors and their detection, with the aim of eliminating root causes more effectively.

This does not mean organisations no longer need testers. Testers are still required for the hard problems and for exploratory testing. The repetitive checks, however, should be done by computers, allowing for fast feedback to developers and catching human errors as soon as possible.
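A minimal sketch of what such a commit-level check can look like (the VAT function and rates below are illustrative, not taken from Accelerate): a fast, deterministic unit test that a CI server can run with pytest on every push.

```python
# price.py — a small pure function that is cheap to test on every commit
def vat_inclusive(amount_ex_vat: float, vat_rate: float = 0.21) -> float:
    """Return the amount including VAT, rounded to cents."""
    return round(amount_ex_vat * (1 + vat_rate), 2)

# test_price.py — fast unit tests; a CI server runs `pytest` on each push
def test_vat_inclusive_applies_default_rate():
    assert vat_inclusive(100.00) == 121.00

def test_vat_inclusive_supports_reduced_rate():
    assert vat_inclusive(100.00, vat_rate=0.09) == 109.00
```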

Regarding test automation, Accelerate discusses important nuances. To gain the value of test automation, the tests have to be reliable: a passing test suite needs to give the team confidence that the software can be released. Flaky test suites kill this confidence, and the practice of test automation will then not yield the uplift found in high-performing organisations. Another important nuance is that developers should primarily create and maintain the acceptance tests, so that they can easily reproduce and fix them on their development workstations. The study found no correlation with IT performance when these test suites are created by QA or third parties. In other words, the value is only found when developers create the tests. This has a strong relation with the testability of the code: when developers write tests, the code becomes more testable. Secondly, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them (Forsgren et al., 2018, p. 54).
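One frequent cause of flaky suites is a test that talks to a real network or external service. A common remedy, sketched below with invented names, is to let the code depend on a small abstraction so the test can substitute a deterministic fake and stay fast and repeatable:

```python
from dataclasses import dataclass

@dataclass
class OrderStatus:
    order_id: str
    shipped: bool

# Production code depends on an abstract gateway instead of a concrete HTTP client.
class OrderGateway:
    def fetch_status(self, order_id: str) -> OrderStatus:
        raise NotImplementedError  # the real implementation would call an HTTP API

def shipping_message(gateway: OrderGateway, order_id: str) -> str:
    status = gateway.fetch_status(order_id)
    return "Your order is on its way" if status.shipped else "Your order is being prepared"

# In the test suite: a fake gateway with a canned answer removes the network,
# making the test deterministic and immune to flakiness.
class FakeOrderGateway(OrderGateway):
    def fetch_status(self, order_id: str) -> OrderStatus:
        return OrderStatus(order_id, shipped=True)

assert shipping_message(FakeOrderGateway(), "A-42") == "Your order is on its way"
```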

The concern that continuous testing is more time consuming has also been addressed experimentally. Fitzgerald & Stol (2014) report in their trend analysis on an experiment by Saff & Ernst, in which continuous testing resulted in a reduction of up to 15% in overall development time. This suggests that continuous testing can be an effective way of reducing waiting time.

The impact of continuous delivery on quality

Accelerate also dives into the question of whether continuous delivery increases quality. Quality is hard to measure, as it is very contextual and subjective. The research tested the following variables:

  • Change failure rate
  • Quality and performance of applications, as perceived by those working on them
  • Percentage of time spent on rework or unplanned work
  • Percentage of time spent working on defects identified by end users

All these measures have been found to correlate with software delivery performance. The strongest correlation is seen in the percentage of time spent on rework or unplanned work. Continuous delivery predicts lower levels of unplanned work and rework in a statistically significant way. In other words, investing in continuous delivery will bring you more time to work on new features.

Rework or unplanned work includes break/fix work, emergency software deployments and patches, responding to urgent audit documentation requests, and so forth (Forsgren et al., 2018, p. 51).

References

Fitzgerald, B., & Stol, K.-J. (2014). Continuous software engineering and beyond: Trends and challenges. In Proceedings of the 1st International Workshop on Rapid Continuous Software Engineering, Hyderabad, India. https://doi.org/10.1145/2593812.2593813

Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations (1st ed.). IT Revolution Press.

Garousi, V., Felderer, M., & Kılıçaslan, F. N. (2019). A survey on software testability. Information and Software Technology, 108, 35–64. https://doi.org/10.1016/j.infsof.2018.12.003

Ozkaya, I. (2021). Can We Really Achieve Software Quality? IEEE Software, 38(3), 3–6. https://doi.org/10.1109/ms.2021.3060552

Pollack, J., Helm, J., & Adler, D. (2018). What is the Iron Triangle, and how has it changed? International Journal of Managing Projects in Business, 11(2), 527–547. https://doi.org/10.1108/ijmpb-09-2017-0107
