Trade/Off
We are proud to welcome Dr Simon Field as a guest author who will be sharing his thoughts on a range of subjects at the heart of architecture, and some of the key principles that impact us all.
In this first article, Simon explores the idea of the Trade-Off. At its core, architecture is about making decisions that are a delicate balance of trade-offs.
“Everything in software architecture is a trade-off”. This is the First Law of Software Architecture according to Mark Richards and Neal Ford, in their book “Fundamentals of Software Architecture”. I agree – every architectural choice you make involves a trade-off where there is a price to be paid for the advantages gained. Their book examines many individual trade-off decisions, giving a good guide to architectural thinking.
But there is a danger in taking each architectural decision independently. By their nature, such decisions are interconnected, and if a solution is to acquire its own conceptual integrity, its architecture must be understood as a whole.
Looking at Quality
Achieving a high level of quality in one characteristic will typically cause a loss of quality in another. Alternative architectures will offer different trade-offs, and the architect’s job is to find a set of compromises that best suits the quality requirements of the solution. A high level of security, paid for with lower levels of usability, might suit a mobile banking solution, but not a bus timetable app. I’ve found that the adoption of a standard quality model helps an architecture team bring their stakeholders to a common understanding of how to think about architecture trade-offs. It becomes a common language that business owners, developers, users, project managers and, of course, architects, can share when considering change.
ISO 25010 contains one such quality model. It divides system and software quality into eight characteristics, such as Security, Reliability and Functional Suitability. These are further sub-divided into thirty-one sub-characteristics. It is a refinement of ISO 9126, the aim of which was to define six to eight characteristics that would “cover together all aspects of software quality…with a minimum of overlap”. So the model provides a common language that gives a 360-degree view of system quality, perfect for exploring the aspects of quality that will vary with different architectures. And if some of the words don’t suit the language of your business, you can adapt the model, keeping in mind that original goal.
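The model’s shape is simple enough to carry around as data. Here is a minimal Python sketch listing the eight ISO 25010 product quality characteristics, each with a sample of its sub-characteristics (abridged for brevity – consult the standard for the full thirty-one):

```python
# The eight product quality characteristics of ISO 25010, each with a
# sample of its sub-characteristics (abridged - the full model defines
# thirty-one sub-characteristics in total).
QUALITY_MODEL = {
    "Functional Suitability": ["Functional completeness", "Functional correctness"],
    "Performance Efficiency": ["Time behaviour", "Resource utilisation"],
    "Compatibility": ["Co-existence", "Interoperability"],
    "Usability": ["Learnability", "Operability"],
    "Reliability": ["Availability", "Fault tolerance"],
    "Security": ["Confidentiality", "Integrity"],
    "Maintainability": ["Modularity", "Testability"],
    "Portability": ["Adaptability", "Installability"],
}

print(len(QUALITY_MODEL))  # 8 characteristics
```

Renaming a characteristic to suit your business is then a one-line change, keeping the “minimum of overlap” goal intact.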
Quantifying Trade-offs
A standard quality model helps us to identify the significant aspects of quality that are in competition with each other in an architecture. But how do we express the trade-offs? One can list advantages and disadvantages, but as there may be many competing for our attention, this might not help us find a preferred solution. The economic concept of Utility is often used – scoring each characteristic out of 10 for each architecture. But I have seen business colleagues isolated by the abstract nature of utility – “why does this one score six, and that one seven?”
A language that I have found to be widely understood by all stakeholders is risk. Most businesses use a model that divides risk into Impact and Likelihood, with a simple 1-5 scale for each, Exposure being the product of the two. Of great value is the mapping of each level to a word or phrase, where an impact score of 1 might be described as “Negligible”, and 5 as “Catastrophic”. This language is usually well understood across a business and can be used to explore architectural trade-offs in an inclusive way. And when used to rank requirements, I’ve found it produces a much more honest appraisal from business stakeholders than the more commonly used MoSCoW model, where everything quickly becomes a Must.
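The arithmetic of the risk model is deliberately simple. A minimal sketch, assuming a 1-5 scale for each dimension (the level labels are illustrative – substitute the wording your business already uses):

```python
# A sketch of the risk model described above: a 1-5 scale for impact and
# likelihood, each level mapped to a word, with exposure as their product.
# The labels below are illustrative, not prescriptive.
IMPACT_LABELS = {1: "Negligible", 2: "Minor", 3: "Moderate",
                 4: "Major", 5: "Catastrophic"}
LIKELIHOOD_LABELS = {1: "Rare", 2: "Unlikely", 3: "Possible",
                     4: "Likely", 5: "Almost certain"}

def exposure(impact: int, likelihood: int) -> int:
    """Risk exposure is the product of impact and likelihood (1-25)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    return impact * likelihood

# A "Major" impact (4) that is "Possible" (3) gives an exposure of 12.
print(IMPACT_LABELS[4], "x", LIKELIHOOD_LABELS[3], "=", exposure(4, 3))
```

Because every score is anchored to a word, a stakeholder arguing for “Major” rather than “Moderate” is making a concrete, discussable claim – unlike the seven-versus-six utility debate above.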
Applying Trade-offs in practice
As an architect, you might be asked to lead a health-check of a key business system. You can use your quality and risk models to structure the review, starting with an assessment of the significance of each quality sub-characteristic. For each one, ask the question “what is the impact if the solution fails to satisfactorily address this sub-characteristic?”, and use the risk impact scale to answer the question. You quickly understand the relative importance of different aspects of quality for this system, and can now explore the solution architecture from the perspective of each sub-characteristic with an understanding of its significance, and knowing that by working through the model you have taken an all-round view of the system’s quality. Your report can highlight key strengths and weaknesses by quality characteristic, and recommend possible improvements or mitigations.
When leading the architecture development of a new business system, use the quality model to classify the architecturally significant requirements. The model helps you to cover quality from every angle. Assign a level of impact to each requirement and use the model to gain a more abstracted view of the requirements landscape. If you have more than one solution architecture in mind, you can assess the likelihood of each requirement being satisfied by each candidate architecture, using the likelihood scale of your risk model. With the shared languages of quality and risk, your stakeholders are engaged throughout, giving all a shared understanding of the architectural considerations. Each solution option will present a different risk profile against the quality model and the underlying detailed requirements. No solution will be a perfect fit, and the “right” one will depend on the value you’ve collectively placed on each quality characteristic for that system.
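The option-comparison step above can be sketched in a few lines: each architecturally significant requirement carries an impact score and belongs to a quality characteristic, each candidate architecture is scored for the likelihood that it fails to satisfy each requirement, and options are then summarised as average exposure per characteristic. All requirement names, options and scores below are invented purely for illustration:

```python
# A sketch of comparing candidate architectures by risk profile.
# Requirements, options and scores here are invented for illustration.
from collections import defaultdict
from statistics import mean

# (quality characteristic, requirement, impact 1-5)
requirements = [
    ("Security",    "Encrypt data at rest", 5),
    ("Security",    "Single sign-on",       3),
    ("Reliability", "99.9% availability",   4),
    ("Reliability", "Automated failover",   4),
]

# Likelihood (1-5) that each option fails to satisfy each requirement.
likelihood = {
    "Option 1": {"Encrypt data at rest": 1, "Single sign-on": 2,
                 "99.9% availability": 3, "Automated failover": 4},
    "Option 2": {"Encrypt data at rest": 2, "Single sign-on": 1,
                 "99.9% availability": 4, "Automated failover": 3},
}

def risk_profile(option: str) -> dict:
    """Average exposure (impact x likelihood) per quality characteristic."""
    scores = defaultdict(list)
    for characteristic, req, impact in requirements:
        scores[characteristic].append(impact * likelihood[option][req])
    return {c: round(mean(v), 1) for c, v in scores.items()}

for opt in likelihood:
    print(opt, risk_profile(opt))
```

The output is a risk profile per option, not a verdict: a low average in Security and a high one in Reliability is exactly the kind of trade-off the team must then weigh against the value placed on each characteristic.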
Putting it all together
Let’s finish with a realistic scenario. The architecture team of an insurance company have been asked to review the current, rather elderly, claims admin system, as there are concerns that it is no longer fit for purpose. Working with the business owner, they evaluate the significance of each quality sub-characteristic, summarised in the table below:
They examine the system from the perspective of each sub-characteristic, highlighting in their report concerns regarding the three most significant characteristics, Maintainability, Security and Reliability. As a result of their report, the company decides to explore options to replace the system.
Workshops with key stakeholders identify over sixty architecturally significant requirements, and the quality model is again used, this time to classify each of these high-level requirements. A procurement exercise identifies three candidate solutions, and the team work with the vendors to understand the architecture of each of the proposed solutions. These are considered alongside the option of continuing with the existing claims system. Having understood the solution options in some detail, the evaluation team comes back together to consider the ability of each option to address each architecturally significant requirement. They use a spreadsheet to record their considered views, and by combining impact and likelihood scores, they are able to produce the following summary of how the options compare with each other:
The numbers represent average risk scores, so a higher number (and darker red) indicates a higher level of risk. Whilst Option 4 (stay with the current claims system) is clearly a poor option, the other three all show competing strengths and weaknesses. Of particular concern is that none of the new options appears to offer a substantial improvement in the key characteristic of Reliability, though all offer a big gain in Security.
This analysis has not told the team which solution to choose, but it has indicated where the trade-offs are to be found in each option. If, for example, the functional strengths of Option 2 are favoured by the business stakeholders, the solution’s weaknesses in Reliability, Maintainability and Portability must be further explored, understood and mitigated or accepted.
In the next article, I’ll explore how a quality model can be moulded to your business and join the debate on whether it helps to call some requirements “non-functional”.
Dr Simon Field led the e-business research team at IBM’s Zurich Research Lab before taking up the role of CTO at the Office for National Statistics in 2004. He has since been responsible for architecture at Emirates Group in Dubai and Admiral Group in the UK and spent five years with Gartner advising CIOs across the Gulf region. His work on architecture analysis over the years has involved exploration of system and service architectures, resulting in the publication of the Solution Architecture Review Method.
+++
Enterprise Blueprints is a specialist IT Strategy and Architecture consultancy helping clients create business value by solving complex IT problems. If you would like to discuss how we can help you to advance your platform thinking, bolster your operational resilience, accelerate your cloud migration programme, drive out costs from your legacy estate, or accelerate your digital transformation then please contact [email protected]