You don’t need new staff or tools to make DevOps work

Fin Goulding tells Computing that DevOps is just about increasing efficiency – not about new tools or job titles

Despite an apparent IT skills shortage in DevOps, Paddy Power CIO Fin Goulding believes that existing staff can be trained to work in a DevOps environment, but acknowledges that it may take some time for them to adapt to a new way of working.

Goulding told Computing at Splunk Live in London yesterday that Paddy Power was in the midst of shifting from being an agile company to a “super agile” company, adopting lean and kanban methodologies alongside scrum.

When asked by Computing if existing staff – both developers and engineers within the operations team – can be trusted to move to a DevOps environment without the need for further recruitment, Goulding said “yes, exactly”.

He said that changes within the organisation, including the move to DevOps, did, however, prove a stumbling block with some staff.

“Some people don’t like to let go of things they’ve built up skills in and be shifted to something else brand new,” he said.

Goulding said the problem can stem from a lack of soft skills among many staff.

“Often if you haven’t got those people skills you have to bring them in and have people help you – these will be occupational type psychologists or design experts, for example, and you have to go through a change and bring people on that journey and that’s really hard for us.

“But once you start saying to people ‘this is how it’s going to work, it’s going to make your job more enjoyable, we’re going to give you access to things you’ve never had before and break down some barriers and silos’, you start to hit a tipping point where people get really interested,” he said.

At Paddy Power, Goulding had a development team and an engineering team on different floors who communicated through tickets and emails for about two or three years.

He said that when the teams were put together, it was the first time they had ever met, despite working in the same building. The firm is now seeing big productivity increases because they don’t have to go through the laborious ticket system, and because engineers can show developers how to get things done on their own.

“They’re actually enjoying it for the first time, and we’re seeing a lot of changes in behaviour but it all comes down to a cultural change, you hear a lot about people talking about DevOps tools and job titles, but that’s all rubbish – it’s really about putting people together in a more efficient way,” Goulding explained.

Microservices — a DevOps architecture

by Feidhlim O’Neill

DevOps is all about breaking down the barriers between development and operations teams, thereby unlocking a wide range of benefits in how we design, build, deploy and support software. DevOps has been the key driver behind the adoption of a whole raft of technologies, architectures, processes and skills, and one of the best of these is continuous delivery: delivering business value in a continuous stream without creating chaos as you do so. There are a number of aspects to continuous delivery, but a key point, certainly from an Ops perspective, is that new features and functionality are deployed into production as part of a continuous process rather than being “thrown over the wall” to be installed (and supported) by a separate team.
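A minimal sketch of that idea in Python (the stage names and pass/fail checks are illustrative assumptions, not taken from any particular tool): every change flows through the same automated stages, and a failure at any stage stops it from ever reaching production.

```python
# Minimal sketch of a continuous-delivery pipeline: every change passes
# through the same automated stages, so nothing is "thrown over the wall"
# to a separate team. Stage names and checks are illustrative only.

def run_pipeline(change_id, stages):
    """Run each (name, check) stage in order; stop at the first failure."""
    for name, check in stages:
        if not check(change_id):
            print(f"{change_id}: {name} FAILED -- change does not ship")
            return False
        print(f"{change_id}: {name} ok")
    return True  # deployed to production as part of the continuous process

# Illustrative stages; real ones would invoke build/test/deploy tooling.
stages = [
    ("build",  lambda change: True),
    ("test",   lambda change: len(change) > 0),  # stand-in for a test suite
    ("deploy", lambda change: True),
]

run_pipeline("change-42", stages)
```

The point of the sketch is the control flow, not the stages themselves: a failed check short-circuits the release, which is what keeps a continuous stream of deployments from creating chaos.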

Complementary to continuous delivery has been the rise of the microservices architecture. Microservices, as the name suggests, break down your business logic into tens if not hundreds of small, independent services, and they are ideally suited to the low-risk changes that continuous deployment promises. To see why, let’s look back to when code was deployed as a single large application supported by a separate operations team.

Big Bang!

In the past, besides handing the code off to a separate Ops team, we had to go through a change process to assess the risks associated with the change, as well as things like compliance. Production had to be protected from developers who considered their job done once the change ticket was created. In larger companies there was a function created specifically to oversee this process. It was slow, and because getting approval for a deployment was onerous it was natural to pack as much into each release as possible. Ironically, then, the change process designed to manage risk was in some cases actually increasing it. Not good.

Continuous Delivery

So along came DevOps. Continuous delivery is about deploying small changes continually, so the risk of each change is lower. This certainly helped lower the risk of each deployment, but when dealing with monolithic systems the business impact of a bad change could still be large and difficult to diagnose, and the number of releases needed to achieve the same result as one big release meant more scope for human error. In response, DevOps engineers used their development skills to create tools that make deployment easier.

Automation tools allow us to better manage the change process and de-skill the mechanics of making the change itself, so engineers can focus on the impact and do more releases. With better tools both to orchestrate the change and to monitor its impact more widely in real time, we can react more quickly to issues. The automation even lets you automate the release process formalities too, if needed, to the satisfaction of compliance.
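As a rough illustration of automating those release formalities, the orchestration tooling might record an audit entry for every deployment as a by-product of deploying, rather than as a separate manual step. The field names and structure below are hypothetical:

```python
from datetime import datetime, timezone

# Sketch: the deployment tool records every release automatically, so
# the compliance paper trail falls out of the tooling rather than being
# a manual formality. All field names here are illustrative.

def deploy(service, version, deployer, audit_log):
    """Record the release, then hand off to the actual rollout logic."""
    audit_log.append({
        "service": service,
        "version": version,
        "deployer": deployer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    # ... the real change orchestration (rolling restart, etc.) goes here ...
    return True

audit_log = []
deploy("pricing-service", "1.4.2", "alice", audit_log)
```

Because every release passes through the same function, the audit trail is complete by construction, which is exactly the property a change-approval process was meant to guarantee.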

Microservices

Now I finally get to the point about microservices and change management. Microservices decompose the business logic into smaller, independent chunks of code that are loosely coupled and ideally suited to continuous deployment. Of course a misbehaving microservice can still cause significant business issues, but with the right tooling it should be easy to identify and revert, and, most importantly, this can be done without slowing down other independent changes. So it scales.

Because microservices decompose the business logic, having hundreds of microservices is to be expected, so good tooling and automation are essential both for releases and for debugging issues. If you invest upfront in good monitoring at both the macro (metrics, graphs) and the micro (traces, correlation) level, as well as in orchestration tools, then the actual process of releasing is low risk, with a limited scope of change. Related services in the architecture can help identify issues, and in more mature platforms automated rollback can be triggered when issues are detected via the extensive monitoring.
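A toy sketch of that last point, monitoring-triggered rollback. The error-rate threshold and the version bookkeeping are invented for illustration; a real platform would query its monitoring system rather than take the error rate as an argument:

```python
# Toy sketch of automated rollback: if monitoring reports an error rate
# above a threshold after a release, revert the service to the previous
# version. Threshold, metric and data shapes are illustrative only.

def check_and_rollback(versions, error_rate, threshold=0.05):
    """versions: deployed versions, oldest first. Returns the live version."""
    if error_rate > threshold and len(versions) > 1:
        bad = versions.pop()  # withdraw the misbehaving release
        print(f"error rate {error_rate:.0%} > {threshold:.0%}: "
              f"rolled back {bad} -> {versions[-1]}")
    return versions[-1]

live = check_and_rollback(["1.3.0", "1.4.0"], error_rate=0.12)
```

Because each service carries its own version history, a revert like this touches only the misbehaving service and leaves every other independent change flowing, which is why the approach scales.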

Of course these microservices will have dependencies on infrastructure services and platforms that need updating and may not lend themselves to decomposition in the same way, but they also need to change less often and may indeed, in the next round of architectural enhancement, follow a similar path.

In short, microservices combined with comprehensive monitoring and orchestration can remove much of the need for manual change management processes, even at scale.

Jenkins vs. TeamCity – The Better CI Tool

by Joseph Lust

Let’s dispel the myth that Jenkins is the gold standard of continuous integration tools. I’m sorry, but TeamCity is much better.

Dispelling the Jenkins CI Myth

I started using Jenkins when it was called Hudson, before the Oracle naming spat. Recently, I downloaded and installed it again and was shocked to see that little appears to have changed in so many years. What’s in a UI? Not much if you’re technical, but geez, Jenkins still has the aura of an app knocked together during an all-night hackathon in 1997.

Let’s knock the legs from under this myth.

1. Jenkins is Open Source

Many Jenkins fans are FOSS fans. If there is an open source solution, perhaps buggy or poorly maintained, they feel compelled to use it. Much like one can imagine RMS foregoing a life saving treatment if the medical apparatus didn’t run open source code he’d compiled himself.

Be careful, though: there are few absolute FOSS purists in practice. Inevitably, people use the best tool for the job at hand. Why does a company write code with 23 FOSS tools and languages on closed-source Windows desktops? Probably because it works for them, and because of that special accounting application or that antiquated but stable engineering software that’s core to the business. Just because other options are open source doesn’t make the whole tool chain better in practice.

2. Jenkins is FREE!, TeamCity is Expensive

The Jenkins fan will note that Jenkins is free, but TeamCity costs money. Hiss! Boo!

They’ll not mention that you can use the TeamCity CI server and three (3) build agents for FREE, and that you’re only out $100 per agent thereafter and $1,000 for the CI server. Anyone bought Visual Studio lately? Anyone use the many $5K-per-seat tools out there? Anyone… use Windows (Debian lover myself)? They all cost a ton more than Jenkins. Why do you use those rather than the FOSS solution? Perhaps it’s for the quality of the tool or the paid support behind it. Remember, many of us work for profit.

3. We’re an OSS Project, We Can’t Afford Paid Anything

I’m a huge fan of open source projects. I contribute to several. And I frequently spar over what CI tool to use: CloudBees BuildHive, Travis, or your own Jenkins instance? Fatuously, such groups write off TeamCity since it would cost cheddar they don’t have. But that completely ignores the fact that JetBrains gives everything away for FREE to open source projects.

4. But There’s a Plugin For That!

My first production encounter with Jenkins was a comedy of errors. The team I joined had a mature Jenkins install, but all of the quotidian tasks were either manual or cumbersome. For example, hand-written jobs that did nothing but free up space from other jobs. Hacks upon hacks and duct-tape scripts to make the build chains we used. And throw in a monthly inopportune crash for good measure.

I was aghast. Everything folks had wasted their time on via various scripts and manual efforts was a standard, default, out-of-the-box feature in TeamCity. But stand back if you ask a Jenkins fan about this. They will retort, “but there’s a plugin for that!” Perhaps there is. A non-code-reviewed plugin that does part of what you want and was last updated 19 months and a few major releases ago. Or there will be three plugins that do almost the same task, and most of it might work, but check the GitHub page and recompile if you want that functionality.

This is sad, given that the configuration TC has out of the box could have saved $10K in developer effort over the last two years. But, you know, TC isn’t FREE!

Other Bones to Pick

Some other things that Jenkins could correct to burnish their product:

Jenkins…

  • NO SECURITY by default? Why? TC is secure out of the box. Come on, man.
  • No pre-tested commit – a TC standard that’s integrated with IntelliJ/Eclipse – and Jenkins has no intention of adding it
  • Defaults to port 8080… far too common a port for devs; it will conflict for every Java dev
  • Startup logs go to .err.log? Why?
  • Lack of timestamps in two of the three logs
  • Plugin installs still trigger a server restart, even if no plugins were updated or installed
  • Coarseness of “Auto-Refresh” – it keeps reloading documentation pages! Is it 1998? XHR, anyone?

Conclusions and Disclaimers

Give TeamCity a try. I’ve been loving it for four years now and use it on every project. Do I work for JetBrains? Nope. Then why write this? Because everyone I talk to claims Jenkins is God’s gift to continuous integration. It makes me think I must be taking crazy pills, so I’ve written this so someone out there can make a more informed CI tooling decision.


Don’t Take My Word For It

For all you know, I’m a shill who screams at fire hydrants in the night. Read the top hits for “TeamCity vs Jenkins” and you’ll discover the same thesis.

NY Times IT capitalizes on continuous delivery to move faster


by Marc Frons

“I think we really need to slow down.”

That is one sentence you probably will never hear uttered by a typical CEO. In fact, almost all companies want to move faster. They want to develop more products more quickly and otherwise radically shorten the time it takes for an idea to become a product.

Yet many companies make the mistake of assuming moving faster is merely an act of will, like going on a diet or saving more of your paycheck. In fact, the reasons product development seems to drag on forever or technology projects take too long have more to do with underlying processes, technology and culture than a failure of will. That’s what I discovered after spending a few months analyzing our technology and product development processes at The New York Times. It was then that I had an uncomfortable conversation with my CEO. “You know,” I said, “I think we really need to slow down.”

The second part of that sentence, I suppose, is why I’m still CIO of The Times. “We need to slow down so we can speed up,” I said. And to move faster, we need to stop a lot of what we’re doing so we can implement ‘continuous delivery’ (CD). CD and its cousin, ‘lean product development,’ have been all the rage in Silicon Valley since 2011, shortly after Jez Humble and David Farley published a book by that name, and Eric Ries came out with “The Lean Startup.” But those concepts were still fairly new to most East Coast companies two or three years ago, when I first proposed that The Times adopt these methodologies.

Slowing down in order to speed up

The complicating factor, however, was that to implement CD, we would have to stop at least half of our new development projects. That was initially a tough sell to a business hungry for new products and enhancements to existing ones. But fortunately, The Times CEO and other executives agreed, persuaded by the promise of at least 30 percent faster development times.

We laid the groundwork during the summer of 2014. Then, in October, with the help of ThoughtWorks, a consultancy that specializes in CD, we began to retrain our teams. For the next three months we throttled back new development on many, but not all, teams as we standardized and automated as many aspects of our process as we could.  And while we are by no means finished (not that anyone is ever finished), the results have far exceeded our most optimistic expectations. Here’s a bit of what we’ve seen:

  • Dramatic improvements in the number of releases we put into production.
  • Significantly faster speed of release and delivery. One team reduced its release time from seven days to 35 minutes.
  • Higher quality of code in terms of lower errors in production. We cut the number of errors in production by more than half.

As the rest of our teams implement CD throughout 2015, we expect to see even greater speed in releasing new code. But CD is also supporting the next phase of our evolution as a lean product development organization.

Moving faster with lean methodologies

The dream of many large companies is to operate more like a startup. We want small, dedicated teams to build new products unburdened by legacy code and the myriad dependencies and roadmaps of other teams. Yet we also want them to take advantage of all of the technology services we already have in place and not reinvent the wheel.

It seldom works out that way. These teams either go off on their own and end up wasting time replicating existing technology with a few important differences or grow frustrated waiting for other groups to get their acts together. But the problem is almost never the people involved – it’s the underlying technology and processes, which haven’t been designed for that kind of structure.

As our technology teams began to implement CD, we had a parallel project whose goal was to train our product development staff on lean product development. Even though we had already implemented Agile, our product development process still wasn’t hypothesis driven, data driven or incremental enough. Product managers as well as our journalists, who are also deeply involved in product development, needed to learn how to work with our engineering teams in a much more fluid, iterative way than they had even in an agile environment. Adopting those best practices is an ongoing initiative.

CD and lean product development have helped us move faster. But we still had the problem of interdependencies among various teams. To solve that, we have started to implement a microservices architecture, something we couldn’t have done well without first rolling out CD. It’s true that we were probably the first major media company to develop APIs for our content. But you can’t have a well-defined API architecture for all of your major services when tools and processes aren’t standardized across teams. CD imposes its own levels of standardization, and has raised the overall level of maturity of our technology organization.

CD is a journey, not a destination

One of our initial fears in implementing CD was that the standardization and rigor it imposes would dampen the freewheeling spirit of technology innovation at The Times. We were worried that people would feel constrained, or worse, we might meet resistance or skepticism that increase our risk of failure. But in fact, our developers enthusiastically embraced CD. Part of that, I think, was because the positive feedback loop was so immediate. But there was another important factor: Far from stifling innovation, CD has helped unleash it. By allowing developers to test and release code faster than they ever could before, we are beginning to see more rapid advances in our products – and more time for experimentation.

CD is also helping us expand our measures for success. It’s not only about shipping code more frequently but also avoiding rework and time wasted correcting errors that should have been caught much earlier in the process. For example, our monitoring tools uncovered a few lines of bad code in a release that, left unchecked, would have degraded performance over time – code that would have been difficult to isolate if it was a small part of a much larger release. But we spotted and fixed it before it did any damage. So yes, when we talk about CD, it’s certainly the big advances in speed and performance that get all the attention. But sometimes it’s the little things that matter most.

What the Internet of Things means for DevOps


The Internet of Things (IoT) is expected to grow massively over the next few years, bringing online access to a huge variety of devices. According to Gartner, the number of connected “things” will reach 25 billion by 2020, and the trend is sure to have a disruptive effect on numerous industries.

With more smart devices and a huge increase in the amount of data that these devices can collect, DevOps teams must be prepared to manage any potential problems that emerge.


Currently, businesses are not prepared for the sheer scale of application development and backend integration required by the growth of IoT products, while fragmentation is another major pitfall that DevOps teams must guard against.

The Internet of Things could lead to a wide variety of devices, made by different manufacturers and with different architectures, so DevOps teams must be agile and flexible enough to manage this diversity. Major companies like AT&T are already beginning to offer IoT support to facilitate the effective creation and management of IoT products. These range from pre-packaged solutions to a suite of developer tools and more.

The goal of retrieving data from IoT devices also poses hurdles for DevOps teams, who may experience disagreements with IT management. For example, management may have a specific data set that they want to acquire and, indeed, may have already identified the hardware required to collect it. However, it will be the task of DevOps teams to see if this hardware is compatible with existing software infrastructure. The scale of new IoT hardware and its compatibility with existing systems has the potential to cause major headaches for DevOps teams.

One way to avoid compatibility issues is for DevOps teams to customise IoT devices as little as possible. While this may limit flexibility, it avoids potentially costly and time-consuming overhauls of existing hardware and software. Instead, businesses are advised to leave IoT products largely unmodified after receiving them from the supplier and to manipulate and analyse the data to meet their goals.

However, the most widespread problem expected to emerge from the growth of the Internet of Things concerns security. With so many smart devices being created, many of which will be constructed by relatively new companies, the potential for hacking and data theft is huge. Hardware suppliers will, of course, have their own role to play in preventing this, but DevOps teams must also ensure that security is built in to any IoT software solutions.

Moreover, businesses must assess which IoT products they allow to handle sensitive information and reject devices that could compromise their organisation. Similarly, if IoT products are collecting sensitive data, businesses should promote transparency, particularly if personal data is being collected and used by third parties.

Although the Internet of Things poses a great number of challenges to DevOps, particularly when it comes to security, businesses can ultimately overcome them and embrace this emerging technology.