Posted 3/24/2015 8:02:44 AM by STUART PARKERSON
Excerpts from a Rich Hein article, March 17th
Finding people with this almost magical skillset is difficult. Unfortunately, there isn’t yet a clear career path for talent to follow. “Many DevOps people come up through the infrastructure technology path because of the heavy reliance on scripting and configuration management in DevOps. But then QA analysts well-versed in automation may end up running DevOps as well,” says Angela Yochem, Global CIO at BDP International.
So where should you be looking for talent in the DevOps arena? “It [DevOps talent] typically doesn’t come from IT; my experience is that engineers, working in IT, are a better fit. If they have manufacturing experience where LEAN has been instantiated for some time, it’s even better,” says Michael Henry, senior vice president and CIO at Rovi. Talent acquisition in this market is fiercely competitive. “It’s been easier to grow my talent internally for two reasons: Competition is fierce, and everyone wants to wear the DevOps tag on their resume,” Henry says.
Experts are split on certifications. “DevOps, to me, is more about on-the-job training vs. certifications. Companies want to know that you have ‘been there, done that’,” says Cashman.
Yochem agrees for the most part. “Most of the certifications are still platform/discipline-specific, and many of these are part of the DevOps world. But, overall, no certifications are necessary here.”
There are some certifications out there worth looking into, according to Goli and Henry, although they aren’t the silver bullet you may be looking for: Lean certifications, or certifications that demonstrate a background in specific technical tools, such as configuration management tools or cloud platforms.
DevOps isn’t something you can just decide to do. Much like big data, it requires a culture change and a breaking down of the functioning silos within the IT organization. It needs to start at the top. The end-game is to have your development and operations team working in a collaborative fashion toward the collective goal of continuous delivery of better software.
Published by: Benjamin Wootton
When people ask about where to get started with DevOps, the common advice is to start with the fast-moving digital systems that face off to customers – ‘systems of engagement’ over ‘systems of record’. As the main driver behind DevOps is delivery speed, there is a lot of sense in this. However, I am increasingly finding that DevOps thinking has a very strong case as applied to older legacy systems. So much so that perhaps we should be looking there first.

These are the systems which are like oil tankers – old, heavily integrated, very hard to change, to test, and to deploy. The technologies will be last generation or worse, making them inherently harder to work with. When they do need to change, they will operate on slow release cycles with lots of regression testing and big, weighty release processes.
These are also the systems with significant cost attached. Think big teams of niche skills, many testers involved in getting releases out of the door, and those big teams of people working over weekends to get the code released into production. Perhaps there are license costs associated with old middleware or database platforms which the team would dearly love to move away from if only they could. Maybe they are running on last-generation infrastructure that is not well virtualised. I’m sure many of us have worked on platforms like this.
The business case for getting these systems operating more efficiently is huge. We might never need to get them into a state where we can release them multiple times per day, but we can reduce cycle times, dramatically reduce the effort it takes to keep the lights on, add a bit of agility into the legacy platforms and save significant cost.
We view legacy as an iceberg of opportunity for DevOps!
DevOps & Continuous Delivery Do Not Imply Risk!
At the same time, these systems of record are obviously the ones that are critical to the business. Outages have significant operational and reputational risk – think accounting, payment, or fulfilment systems. Why does it make sense to go there first with an IT transformation like DevOps which is most often driven through ‘the need for speed’?
We view DevOps as very applicable to legacy technology because we wholeheartedly reject the idea that teams practicing DevOps & Continuous Delivery are working in a way which implies more risk.
Closing the gap between developers and operations around these big complex systems reduces risk by removing key man dependencies.
Automating releases and management processes reduces risk and adds rigour. Codifying infrastructure and configuration as code reduces risk and improves consistency.
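The core idea behind configuration as code is that desired state is declared once as data and applied idempotently, so every run converges on the same result. A minimal sketch of that pattern (the config file path and its contents are hypothetical examples, not tied to any specific tool):

```python
import os
import tempfile

# Desired state declared as data; the path and settings are illustrative.
CONF_PATH = os.path.join(tempfile.gettempdir(), "demo-app.conf")
DESIRED_FILES = {CONF_PATH: "max_connections=100\nlog_level=info\n"}

def apply_state(desired):
    """Bring each file to its declared content; report which files changed."""
    changes = []
    for path, content in desired.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:
            with open(path, "w") as f:
                f.write(content)
            changes.append(path)
    return changes

# Start from a clean slate for the demonstration.
if os.path.exists(CONF_PATH):
    os.remove(CONF_PATH)

first_run = apply_state(DESIRED_FILES)   # applies the change
second_run = apply_state(DESIRED_FILES)  # no-op: already in desired state
print(first_run, second_run)
```

Tools like Puppet and Chef apply the same convergence principle to packages, services and whole environments, which is what makes repeated runs safe and drift visible.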
Moving a system into automated testing frameworks reduces risk and drives up quality too, as does enabling some form of canary releasing, better rollback, or any number of other initiatives associated with a DevOps approach.
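Canary releasing works by exposing a new version to a small, stable slice of traffic so that problems surface on a fraction of users. A toy sketch of the routing decision (the percentage and version names are illustrative):

```python
import hashlib

CANARY_PERCENT = 5  # hypothetical: route 5% of users to the canary build

def route_version(user_id: str) -> str:
    """Deterministically bucket a user into 'canary' or 'stable'.

    Hashing the user ID keeps each user on the same version across
    requests, so behaviour is consistent during the rollout.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"

versions = [route_version(f"user-{i}") for i in range(1000)]
canary_share = versions.count("canary") / len(versions)
print(f"canary share: {canary_share:.1%}")  # close to the 5% target
```

If the canary cohort's error rates or response times degrade, the rollout halts and the slice rolls back, which is precisely how the approach reduces rather than adds risk.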
If we accept that DevOps practices can reduce risk, drive up quality, drive efficiencies and take cost out of legacy platforms then we need to get serious about the scope of how DevOps can be applied to legacy applications.
Take A Business Case Driven Approach
What you need to do with these systems is take an extremely business-case-driven approach. Getting these systems under automation or changing long-established processes around legacy systems can require a fairly substantial investment, but the rewards for doing so can be huge, with payback in a short space of time, particularly if we find enough low-hanging fruit during the discovery phase. If we carefully quantify the investment required vs. the likely payback, you may find that it is worth starting DevOps initiatives in the most unlikely of places!
A question to take away: what percentage of IT spend goes on legacy or currently operational systems rather than more modern strategic initiatives, and why do we not look to apply modern best practices to them? If we were to apply some of these practices within the legacy estate, how could we redirect the resources that go into keeping the lights on back into more strategic IT initiatives?
I have problems with Automation
For me, the mantra of achieving speed via automation tools is nothing new. In fact, I was ‘automating’ Citrix MetaFrame builds using Windows scripting techniques back in 2004. The market, though, has become awash with different automation products, and it’s fair to say that many enterprises now suffer from ‘automation sprawl’. This results in a tactical rather than the strategic approach that automation requires, meaning that benefits are never fully realized. In fact, I have spoken to many companies for which the word ‘automation’ leaves a bitter taste in the mouth, as they have been burned by failed implementations.
Now, before everyone says, “John, what about solutions like Chef and Puppet? They have been very successful so far” – yes, they have, I agree. They have both captured the market when it comes to turning infrastructure into code, so that environments can be rapidly and automatically built. I have no doubt that their platforms are easy to use, but I would argue that the success of these solutions does not lie with the products alone but with their execution: they made it very easy to share automation scripts and modules, and actively encouraged that sharing.
Here are my problems with Automation in relation to DevOps.
Problem 1: Speed can be dangerous
Speed is the nirvana for many enterprises when it comes to DevOps initiatives. The issue is that for many, their processes around release and change are centered on rigor and control, to the detriment of speed. I know I have sat through countless, pointless Change Approval Board (CAB) meetings in my time – one of the symptoms of this approach.
But going in all guns blazing with release and change automation specifically can damage your business. A widely accepted stat, backed by formal research at Gartner, is that 80% of business service outages (read: application outages) are caused by release, change and configuration processes. The need for speed could easily amplify this stat. The solution is to make sure that you proactively monitor what matters in regards to business services: the end-user’s experience and the application itself, through to the infrastructure workload, including the database. Our Application Intelligence platform provides this, which is why it’s a perfect partner to any infrastructure or application release automation tool.
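One way to make “monitor what matters” concrete is to evaluate end-user response times against an explicit service-level threshold before and after a release. A toy sketch of that check (the threshold, error budget and sample timings are invented for illustration):

```python
RESPONSE_TIME_SLO_MS = 500   # hypothetical target for end-user requests
ERROR_BUDGET = 0.01          # tolerate 1% of requests over the threshold

def release_healthy(samples_ms):
    """Return True if the share of slow requests stays within budget."""
    slow = sum(1 for t in samples_ms if t > RESPONSE_TIME_SLO_MS)
    return slow / len(samples_ms) <= ERROR_BUDGET

# Made-up response-time samples (ms) before and after a deployment.
before = [120, 180, 210, 150, 300, 170, 190, 220, 160, 140]
after = [120, 180, 900, 150, 1200, 170, 190, 220, 160, 140]

print(release_healthy(before))  # True: every request is within the SLO
print(release_healthy(after))   # False: 20% of requests breach the SLO
```

Gating releases on a signal like this is what lets a team move fast without the 80%-of-outages stat moving against them.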
Problem 2: Automation success isn’t about the tools but the people and process
I have heard two great comments in regards to automation in the last couple of years. Firstly, “Don’t automate what you don’t understand” and secondly, “A bad process automated is still a bad process”. I think these comments summarize nicely the problems which enterprises face in any strategic automation or DevOps initiative. While the automation product may be simple to use, understanding what to automate and what effect this will have on the organization and its people is a different matter.
If you are going to be successful with automation and DevOps, it’s essential to address how an automation script, or a series of scripts/modules, will interact with or change your current processes – processes around which jobs, roles and emotions are centered. This becomes difficult in an enterprise which has defined processes for areas such as release, change and configuration, plus all the politics that come with them. Any strategic enterprise automation initiative has to overcome FUD (fear, uncertainty and doubt) amongst employees who worry that automation will change their job activities. Automation may well improve an employee’s role, but this does not mean that FUD will not be present initially. To overcome this barrier you need to think about the people and process first. Involving people in automation initiatives from the outset (I would even suggest shying away from using the word ‘automation’) and being as transparent as possible through each stage of adoption is vital.
Problem 3: Automation is more than just infrastructure and application automation
In my discussions with IT professionals in regard to DevOps, I hear a lot of excitement, quite rightly, in regards to infrastructure as code, application release and configuration release automation solutions. But this should not be your only focus for DevOps. It’s still essential to be able to respond rapidly to emerging application and associated infrastructure workload issues before they impact the end-user (customer or employee). This means that automated issue response or remediation is an essential capability that you should look for in any APM solution.
One of our driving principles at AppDynamics is to enable businesses to act fast. This means that we make it easy to run a response to an issue or event. Another key capability is that we help enable modern application architectures by providing cloud auto-scaling. Our platform understands application demand via real user monitoring, so it automatically knows when to scale application components up or down in a cloud-based environment. This ensures that customers and employees continue to see great performance even during heavy utilization periods. This feature is essential for organizations that have to deal with periodic events such as Black Friday or Cyber Monday.
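The scaling decision described above can be sketched as a simple function from observed demand to a desired instance count, clamped between a floor and a ceiling. All the numbers here are illustrative assumptions; a real platform would feed this from real-user monitoring data:

```python
import math

MIN_INSTANCES = 2          # keep a baseline even when traffic is quiet
MAX_INSTANCES = 20         # hypothetical cost/capacity ceiling
USERS_PER_INSTANCE = 250   # assumed capacity of one application node

def desired_instances(concurrent_users: int) -> int:
    """Translate observed demand into an instance count, within bounds."""
    needed = math.ceil(concurrent_users / USERS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

print(desired_instances(100))    # 2  (floor holds during quiet periods)
print(desired_instances(2600))   # 11 (scale up for a Black Friday spike)
print(desired_instances(10000))  # 20 (capped at the configured ceiling)
```

The floor avoids cold-start latency for the first users of a spike, while the ceiling bounds cost if demand estimates run away.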
By Andrew Phillips
Getting new features to users fast is the overriding imperative for many, many organizations right now. That’s why we make a deployment automation tool and a pipeline orchestrator to help teams implement Continuous Delivery and DevOps and realize the promise of Agile.
But there’s no point figuring out how to deliver software at warp speed if everything starts breaking. Nothing will kill your Continuous Delivery as effectively as a couple of high-profile production failures.
We’ve built XL Test to allow you to deliver at breakneck speeds without breaking your neck. XL Test aggregates the results from all your test tools so you can make sense of all your test data in one place and make efficient, reliable and, ultimately, automated go/no-go decisions as quickly and as often as you need.
Put simply: trying to do Continuous Delivery without a tool like XL Test is like skydiving without a parachute. Luckily, I have a copy to take with me.
If XL Test sounds interesting and you’d like to join our beta programme, please drop me (aphillips at xebialabs dot com) or any of us an email.
Start making sense of all your test data and make more rapid, reliable, efficient go/no-go decisions today!