By Uwe Friedrichsen
Currently we can observe three mega-drivers that force IT into a dramatic change:
- Economic darwinism
- Digitization
- Disruptive technology
All of them are extremely important, and companies that ignore these drivers will sooner or later fall behind their competitors (at least as long as they operate in an efficient market – non-efficient markets are a different story).
Let’s begin with the first driver and a quick explanation: “Darwinism” in short means “survival of the fittest” – and the “fittest” is not the strongest but whoever can adapt quickly enough to the changing demands of the surrounding environment.
This is also true for companies: Only the companies that adapt quickly enough to the changing needs and demands of their customers will survive in the long (or even short) run.
You might say: “Hey, what’s new about that? This has been true since the beginning of commerce.” And you are right.
The key point is that the markets most companies live in changed dramatically some years ago. For almost a century most markets were broad, almost unlimited and sluggish: demand always exceeded supply, and the biggest challenge for companies in those markets was to scale the production of their goods in a cost-efficient way.
This started to change back in the 1980s: slowly but surely, one market after the other became narrower. Globalization, market saturation and the internet led to highly competitive and dynamic markets (there were more drivers behind this market change, but they are outside the scope of this post). The markets became highly demand-driven, and the biggest challenge for companies shifted to adapting quickly enough to the changing demands of their customers. The key driver changed from “cost-efficient scaling” to “responsiveness”.
Changing role of IT
You might say: “Yeah, I know all this. But what has it got to do with IT?”
The point is that business and IT are tightly interwoven these days. You cannot consider them independently anymore even if it started quite differently:
If you think about the relation between business and IT, back in the 1950s and 1960s it started with IT supporting tiny fractions of business functions – a little report here, a small analysis there. It was very selective because the computing power of the mainframes back then was quite limited (every cell phone today has orders of magnitude more computing power than the computers of the 1960s).
Yet, it was so cool for the business departments that they did not have to create their reports and analyses manually anymore that they wanted more of that – which eventually led to the software crisis: Demand for software became a lot higher than supply.
The reaction to the software crisis was the invention of software engineering: an engineering practice to scale the production of software in a cost-efficient manner. Sounds familiar, doesn’t it? Industrialization faced a very similar problem at the beginning of the 20th century, and the solution approach back then was Taylorism and its variants. Thus, it is not surprising that software engineering picked up many tayloristic concepts and practices and adapted them to software development.
And that was perfectly fine back then. But over time the role of IT changed fundamentally. Sometime in the 1980s the PC started its triumphant march and networking became widespread. Moore’s law also did its job, and computers became ever more powerful. As a result, IT no longer supported only selective business functions; it started to support whole business processes. It moved closer and closer to the business with all its complexity.
This development continued, the internet started its success story in the 1990s, and eventually IT became the nervous system of every non-trivial company: the whole business domain is coded into IT, and IT outages of just a few hours are considered existence-threatening by many companies. This also implies that you cannot change your business anymore – no new products, features or process changes – without touching IT.
The need for speed
Combining the two observations described before, we can conclude that IT has become a critical success factor for any company trying to hold its ground amid economic darwinism. No matter how well you understand the needs and demands of your customers, if your IT cannot deliver fast enough you will go away empty-handed.
Yet, most companies still stick to the ideas of traditional software engineering, where the primary goal is scaling the production of software in a cost-efficient manner. The problem is that scaling software production and IT cost-efficiency are no longer the most crucial drivers (we actually write way too much software these days; it just takes too long and very often delivers only a fraction of the value expected).
The most crucial driver for IT today is cycle time: the time needed to bring an idea to the customer. This means the time needed to traverse the whole IT value chain, from picking up a requirement until it is live in production.
The shorter these cycle times are, the better we can adapt to the ever-changing demands of the highly dynamic markets. We need “speed”!
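To make “cycle time” concrete, here is a minimal sketch of how one might measure it from work-item timestamps. The field names `started` and `deployed` are illustrative assumptions, not part of any particular tool:

```python
from datetime import datetime

def cycle_times_in_days(work_items):
    """Cycle time per item: from picking up the requirement ('started')
    until it is live in production ('deployed'), in days."""
    times = []
    for item in work_items:
        started = datetime.fromisoformat(item["started"])
        deployed = datetime.fromisoformat(item["deployed"])
        times.append((deployed - started).total_seconds() / 86400.0)
    return times

items = [
    {"started": "2024-01-02", "deployed": "2024-01-09"},  # 7 days
    {"started": "2024-01-05", "deployed": "2024-01-08"},  # 3 days
]
print(cycle_times_in_days(items))  # -> [7.0, 3.0]
```

Tracking this number over time – rather than utilization or cost per feature – is what “optimizing for speed” means in practice.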
If we look at digitization, the second mega-driver, we get an additional twist: with digitization, IT no longer just supports the business, it becomes the business. We create new business products that consist partly or completely of IT. On the one hand this means a need for innovation, as we are only starting to develop vague ideas of what future business model trends like the Internet of Things (IoT) will enable. On the other hand it means that our IT needs to be even more robust than before, as it is continuously exposed to our customers.
The third mega-driver is disruptive technologies like cloud, mobile, big data and IoT. These technologies provide very different ways to implement business needs, up to a level where they enable completely new business models. As we are still just at the beginning of understanding what we can do with those technologies, they tighten the conditions described before:
We need to understand and learn, we need to innovate, we need to adapt to ever-changing markets, and we need to do it fast while incorporating the new possibilities provided by disruptive technologies. And we will not succeed if we try to accomplish all this while clinging to ideas developed many years ago in a very different business and IT era.
We need to re-think IT!
This was the short version of the much longer story of why cycle times became the most important driver for IT today – why we need to re-think IT.
But, if the old wisdom we still base most of our IT processes and operation on is not appropriate for re-thinking IT, what is appropriate instead?
This is a very valid question. A good starting point is usually organization and processes: organizational and systems theory have shown that decentralized organizations with mostly autonomous units can adapt much better and far more quickly to ever-changing demands than hierarchical organizations based on tayloristic principles. Decentralized organizations also tend to be a lot more robust against unexpected demands.
If we look around for a moment, we will quickly realize that DevOps is exactly about this: we create mostly autonomous teams that take end-to-end responsibility for dedicated business capabilities, and we let them respond to all change demands in their capability area. There is actually a lot more to DevOps, like holistic optimization of cycle times, amplified feedback loops that provide knowledge where it is needed, and a culture of continuous learning. These are also very important traits of DevOps, and they point exactly in the direction of the drivers we have seen before.
But DevOps alone is not enough. We need to make sure that team autonomy is not compromised by the software and systems the teams are working on. For example, if we have many teams all working on a big monolith (which unfortunately has the tendency to eventually degenerate into a big lump of spaghetti code), the teams will usually need to coordinate every single change, and the autonomy is gone. If we can release software only in big all-or-nothing releases – if we are confronted with a runtime mega-monolith – the teams need to coordinate all their changes and agree on a release date, and the autonomy is gone as well.
This means we need an architectural style that supports autonomy in terms of development and deployment of software. Microservices are such a style. Their self-containment and fits-in-one-brain properties support team autonomy very well, and they enable quick changes and short cycle times. Microservices are not the only style that supports these needs, and admittedly it is quite easy to screw them up and end up with a distributed ball of mud (a.k.a. “distributed hell”), but done right they support autonomy and quick adaptation very nicely.
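The self-containment property can be illustrated without any framework: each service owns its own data store, and the only way to reach another service is through its published API. The service and method names below are invented for the sketch:

```python
class CustomerService:
    """A 'service' that exclusively owns its customer data."""
    def __init__(self):
        self._customers = {}        # private to this service

    def register(self, customer_id, name):
        self._customers[customer_id] = {"name": name}

    def exists(self, customer_id):  # the published API surface
        return customer_id in self._customers

class OrderService:
    """Another 'service' with its own data; it never touches the
    customer store directly, only the agreed API."""
    def __init__(self, customer_api):
        self._orders = {}
        self._customer_api = customer_api

    def place_order(self, order_id, customer_id, amount):
        if not self._customer_api.exists(customer_id):
            raise ValueError("unknown customer")
        self._orders[order_id] = {"customer": customer_id, "amount": amount}
        return order_id

customers = CustomerService()
customers.register("c1", "Ada")
orders = OrderService(customer_api=customers)
print(orders.place_order("o1", "c1", 42.0))  # -> o1
```

Because neither service reaches into the other’s data, either team can change its internals – or redeploy – without coordinating with the other, as long as the small API contract holds.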
Autonomous DevOps teams that respond to ever-changing demands also need to release their software frequently, whenever it suits their needs. And if they need to release their software multiple times per day, that must not be a problem. Deploying software independently and on demand that often requires automation: a manual build, test and deployment process is way too expensive and error-prone to execute frequently. This is the domain of continuous delivery, which is exactly about rigorously automating build, test and deployment.
The last part of the picture is the infrastructure the software runs on, as it also needs to be very flexible. If it takes days, weeks or months to get the required infrastructure for development, build, test or deployment, any team autonomy and responsiveness is toast. The teams need self-service access to the required infrastructure whenever they need it, and they also need to be able to get rid of it as soon as they no longer need it. The cloud provisioning model exactly meets those demands. Additionally, the API-based approach enables elastic scaling at runtime as well as better automation of the build, test and deployment process.
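The self-service provisioning model boils down to two API calls: acquire a resource on demand (no ticket, no waiting) and release it when done. The class and method names below are an in-memory stand-in invented for illustration; real providers expose the same model through their SDKs:

```python
import itertools

class SelfServiceProvisioner:
    """In-memory stand-in for a cloud provisioning API."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._running = {}  # resource id -> owning team

    def provision(self, team, kind):
        """Create a resource immediately via an API call."""
        resource_id = f"{kind}-{next(self._ids)}"
        self._running[resource_id] = team
        return resource_id

    def release(self, resource_id):
        """Give the resource back as soon as it is no longer needed."""
        self._running.pop(resource_id, None)

    def in_use(self):
        return len(self._running)

cloud = SelfServiceProvisioner()
vm = cloud.provision("team-a", "test-vm")
print(vm, cloud.in_use())  # -> test-vm-1 1
cloud.release(vm)
print(cloud.in_use())      # -> 0
```

Because acquiring and releasing infrastructure is just a function call, the same calls can run inside the delivery pipeline itself – which is exactly what enables elastic scaling and fully automated test environments.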
Some more re-thoughts
We have seen that DevOps, microservices, continuous delivery and cloud computing are core building blocks of a new IT. But is that all we need for our new way of IT or are there missing pieces?
Actually, some supporting pieces are missing. The first one is Enterprise Architecture Management (EAM), but not the way we know it. There is no value in producing lots of diagrams up front, centrally trying to pre-plan and prescribe the overall IT landscape in detail from the ivory tower. This becomes a bottleneck by default and is quite the opposite of what DevOps is about.
Yet, there are some important tasks in a DevOps organization where EAM can create a lot of value: Even though the teams should be as independent and autonomous as possible they still need to work towards a common goal. EAM can help to spread the common goal and help the teams not to lose track of the big picture.
Secondly, total team autonomy is not possible, as all teams work towards a common goal and the software they produce needs to work together. This means coordination: the teams need to mutually agree on how their software interacts. This can become quite cumbersome and inefficient if every single interaction is agreed upon from scratch. Thus, it makes sense to agree on some interaction standards across all teams. Finding and maintaining this minimal set of standards is also a good task for EAM.
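A minimal example of such a cross-team interaction standard: instead of negotiating every integration individually, all teams agree on one message envelope and validate against it. The required fields here are an invented example standard, not an established convention:

```python
# The envelope every team agreed to use for cross-team messages
# (field names are an illustrative example of such a standard).
REQUIRED_FIELDS = {"sender", "event_type", "payload", "version"}

def conforms(message):
    """Check a message against the agreed cross-team envelope."""
    return isinstance(message, dict) and REQUIRED_FIELDS <= message.keys()

ok = {"sender": "billing", "event_type": "invoice.created",
      "payload": {"id": 7}, "version": "1.0"}
bad = {"sender": "billing", "payload": {}}

print(conforms(ok), conforms(bad))  # -> True False
```

A small, shared check like this replaces dozens of bilateral agreements: any two teams can integrate without talking to each other first, as long as both sides honor the envelope.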
A different way to put it is that EAM takes care of the big picture and the space between the responsibilities of the teams. One last remark about EAM: usually this is not a full-time job, and there should not be an ivory-tower EAM team in charge. Instead, the EAM team should consist of delegates from the DevOps teams who meet once in a while or whenever needed.
The second big supporting topic is the governance model. We still need to make sure that top-level management’s vision and the work of the teams are in sync. It is important to have transparency about what is going on in the teams, and it is equally important that top-level management can communicate changes in direction to the teams efficiently.
The traditional governance models of hierarchical organizations are not suitable for a DevOps organization that optimizes for cycle times and market responsiveness. This means we need a different governance model. The good news is that such governance models have been known for quite a while and have been successfully battle-tested by some big companies that adopted them. These models are known under the names “Beyond Budgeting” and “Beta Codex” (a further development of “Beyond Budgeting”).
The third big supporting topic is Craftsmanship. Yes, it is about people (unfortunately we forget that way too often)! We need to help everyone affected to develop a different understanding of the way they need to work.
On the one hand it is important to help people become continuously better at the way they do their work and collaborate with their peers. This is also known as “mastery”. An “everything will be all right” attitude is just not sufficient anymore. This is the core idea of Craftsmanship.
On the other hand, people no longer need a robust work model; they need a robust change model. People need orientation – rules and anchors they can hold on to. In the past, people were provided with a model of how to do their work, either via detailed process descriptions or less formal guidelines.
The problem with that approach is that the way work is done changes continuously, as companies continuously need to adapt to the ever-changing needs and demands of their customers. Therefore it is not possible to provide a working model that is robust over a longer period of time. Instead, it is necessary to provide people with a model that makes it easier for them to understand when change is needed and why. So we go one level up: from providing a robust work model to providing a robust change model. Even though this is not a core aspect of Craftsmanship, it definitely belongs in that area.
We need to re-think IT – the markets the companies live in have changed, IT itself has changed. The conditions in which our traditional IT models were developed do not exist anymore. Our main IT driver is not cost-efficient scaling of software production anymore, it is minimizing cycle times in order to maximize the company’s responsiveness.
The core building blocks for this new IT are DevOps, microservices, continuous delivery and cloud computing which need to be augmented by some additional topics like EAM (also re-thought!), Beyond Budgeting and Craftsmanship.
After all, this is just the “big picture”, a 30,000 ft point of view. There are many shades of grey I have left out for the sake of brevity – and yet the post became quite lengthy. Each building block I mentioned also consists of many, many more details at different levels of granularity. And there are quite a few details that need to be considered to avoid screwing things up badly. But those are different stories …