Which Fate Will DevOps Choose?

Posted on

By Stephen Fishman


Comic books have a certain math: for every superhero, there is a villain.

But superheroes and super villains don’t only exist in the comics. They roam the halls of the business community. The heroes are called “movements” and, if you follow them, they promise to benefit not just the bottom line, but the people creating and managing their systems.

In comics, the heroes always win in the end (except for the characters killed temporarily only to be reborn in the sequel).

If only it were that easy in corporate America.
Movements Transform or Fade Away
In the current corporate world, heroic movements rise and fall — the villains, however, seem to have the staying power.

The core of the user experience (UX) movement has already left the building and embraced “service design” and/or “design thinking.” Agile is experiencing an identity crisis with critical books and articles popping up everywhere.

DevOps appears to be an unstoppable juggernaut inside IT shops of all sizes and shapes, but it seems unlikely that DevOps will be the exception to the rule. All movements meet one of two ends: either their demise or their transformation.

Modern movements in enterprise development, inclusive of UX, Agile and DevOps, are no exception to the rule.

The Flawed Plots and Schemes of Flawed Organizations
Once a movement starts to gain traction, the seeds of its eventual demise or transformation have already been planted.

All of the villains below are described in terms of how they are battling the hero of the day: DevOps. Notice how each one has been around in almost the same form forever.

If you are young enough to see DevOps as your first paradigm change in technology development and management methodology, don’t fret. Just wait a decade and the cycle will start all over again.

Villain 1: Planting the flag too early, aka ‘We got the DevOps’
What’s the easiest way to kill something in corporate America? Claim “Mission Accomplished!” during an in-process transformation.

Ian Andrews, VP of product for Pivotal, captured the concept best when he recalled overhearing an IT executive in the early phases of a transformation say “Oh yeah. We got the DevOps.”

When the powers that be think of transformation as something you buy in a fiscal year or in a few quarters, the teams on the front lines of development, operations and customer support will pay the price.

A leadership team that underestimates the amount of work and time that goes into completing an enterprise mindset shift will often move the conversation, along with the budget and focus, to the market-facing concern of the day. Once the enterprise gaze shifts, it puts the movement at its most vulnerable state because leadership expectations have not shifted along with the enterprise commitment.

Once a leadership team thinks a goal has been achieved, unwinding expectations about benefits and the future effort required to cement the shift becomes nearly, if not entirely, impossible in a system bereft of empathy and shared context.

Once management deems an effort to have overpromised on its value proposition, the machine moves on and the effort is discarded.

Villain 2: Vendor/Community Overplay, aka ‘We are the DevOps’
If I see one more vendor claim to be the DevOps company, I’m going to scream. New vendors. New products. New conferences. New titles. I know DevOps is popular and I also know it is meaningful but I don’t think people understand that nothing can hold its meaning or value if everyone claims it as their own.

This is the poseur paradox. As a movement finds its base and grows in popularity, it by definition attracts people seeking value in it. As more people join the movement and identify themselves within it, the more watered down the movement gets.

A movement cannot be both a) universally adopted, and b) universally consistent.

Given that some people who adopt the DevOps moniker as their title will unavoidably not measure up to the original spirit of the movement, many of the audiences engaged by these people will walk away with a poor impression of the movement.

As more watering down happens and more audiences are left unfulfilled, the most skilled practitioners are left with no choice but to start a brand-new movement.

The broken hearts and indefatigable souls of the original founders and acolytes team up to make the new movement both as inspiring as the first, and harder for the watering-down process to repeat. This goal often works against itself, because the abstraction and complexity added to prevent the watering down have the side effect of making the new movement harder for people to attach to.

Villain 3: Class warfare, aka ‘We are the Totem Pole’
The brilliance of Werner Vogels, CTO of Amazon, isn’t as much in the API and PaaS areas as it is in one simple directive — “you build it, you run it.”

This injunction has effectively become religion in DevOps shops everywhere — it's the organizing principle that allows DevOps to live up to its promise.

When a shop adopts this motto, automating CI/CD pipelines becomes simple, because you can only reliably achieve the "you run it" part of the equation with the fast feedback loops provided by automated integration and deployment.
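The feedback loop the motto depends on can be sketched in a few lines. This is a toy illustration, not any real CI/CD tool's API; the stage names and functions are hypothetical stand-ins for actual build, test and deploy automation.

```python
# Minimal sketch of the fast feedback loop behind "you build it, you run it".
# Every stage and function here is a hypothetical illustration.

def run_pipeline(stages):
    """Run each stage in order; stop at the first failure and report it."""
    for name, stage in stages:
        if not stage():
            # Fast feedback: the team that built it hears about the break first.
            return f"FAILED at {name}"
    return "DEPLOYED"

# Hypothetical stages standing in for real build/test/deploy automation.
stages = [
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("deploy", lambda: True),
]

print(run_pipeline(stages))
```

The point of the sketch is the short-circuit: the moment any stage fails, the pipeline reports exactly where, so the team that owns the code gets the signal in minutes rather than weeks.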

Shops not yet following “you build it, you run it,” still embrace the separation of duties concept known as “the Totem Pole.” If you’ve ever worked in a fully siloed shop, you understand the corporate caste system I’m referring to, but if not, allow me to explain.

There are two conceptual areas in totem pole land: upstream and downstream. The borders of these territories are defined by the location of the observer. Regardless of vantage point, the job model remains the same: take the partially thought-out work product handed to you by the teams upstream and make it ready for handoff to the poor souls downstream.

In Development groups, UX and Product shops are “upstream” and Operations shops sit “downstream.” Customer support teams fall “downstream” of Operations, and so on.

Teams view any silo that is not theirs as the source of what's wrong in the enterprise: the people upstream don't understand the complexities of your silo, and the people downstream aren't willing or able to hire capable, qualified people.

Once discipline-centric silos form in a shop, totem pole narratives (i.e., us vs. them) become the norm — and this is where the seeds of division bloom into the thousand flowers of disunity, stagnation, blame and "over the wall" delivery to the next group one step down on the totem pole.

Given that the aim of DevOps is to reduce or eliminate these symptoms, a DevOps team set aside in a separate organization (i.e., my group designs it, your group builds it, and their group runs it) has the same chances for thriving as West Berlin did when it was surrounded by iron curtain states: no chance without an airlift from the west.

Divide and conquer is the mantra of the totem pole, and this villain is always waiting for the right moment to reintroduce the corporate caste system that starts with "role x is more important than role y" and ends with more silos, each of which is necessary and none of which is ready to accept that they are mutually interdependent.

Given that DevOps and hardened organizational silos are mutually exclusive concepts, organizations that attempt both will harm the movement, because DevOps can never deliver on its promises without the elimination of the upstream/downstream culture.

What Comes Next for DevOps?
In superhero films, the villain gets vanquished, the credits roll and sometimes we get a post-credits preview of the next film for the heroic protagonist.

In the large corporate enterprise, it's the heroes who are forced to choose between adaptation (UX to service design, Agile to Lean, etc.) or a slow death.

One or the other is coming for DevOps.

The only things you can do to hold fate at bay are to manage expectations, ignore the hype cycle and not get too attached to titles. Most of all, don't let the powers that be divide and conquer. Remember that we are all the same: individual workers trying to get by and make the workplace a better place.

Docker, Red Hat & Linux: How containers can boost business and save time for developers

Posted on

image

Analysis: A lot of work is being put into making container technology vital to your business.

Container technology is in a period of explosive growth, with usage numbers nearly doubling in the last few months and vendors increasingly making it available in their portfolios.

Docker, which has popularised the technology, reported that it has now issued two billion ‘pulls’ of images, up from 1.2 billion in November 2015.

But containerisation technology isn't new: back in 1979, Unix V7 introduced the chroot system call, which provided a degree of process isolation. It took until 2000, when developer Derrick Woolworth introduced FreeBSD jails, for early container technology proper to arrive.

Google is perhaps the biggest-name user of container-style technology; it developed Process Containers in 2006. Google's creation of Borg, a large-scale cluster management software system, is credited as one of the key factors behind the company's rapid evolution.

Borg works by parcelling work across Google's fleet of servers, providing a central brain for controlling tasks across the company's data centres. As a result, the company hasn't had to build separate clusters of servers for each software system (e.g. Google Search, Gmail, Google Maps).

The work is divided into smaller tasks with Borg sending tasks when it finds free computing resources.
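The idea of a central brain handing tasks to whichever machines have free capacity can be sketched in miniature. This is a deliberately simplified illustration of the scheduling concept described above, not Borg's actual algorithm; the machine and task names are invented.

```python
# Toy sketch of centralised scheduling: parcel tasks out to whichever
# machines currently have free capacity. Names and numbers are invented.

def schedule(tasks, machines):
    """Assign each (name, cpu_needed) task to the first machine with room."""
    placement = {}
    for name, cpu in tasks:
        for machine, free in machines.items():
            if free >= cpu:
                machines[machine] = free - cpu  # claim the resources
                placement[name] = machine
                break
        else:
            placement[name] = None  # no free resources; the task waits
    return placement

machines = {"m1": 4, "m2": 2}
tasks = [("search-query", 2), ("mail-index", 2), ("maps-tiles", 3)]
print(schedule(tasks, machines))
```

Even this toy version shows the payoff: one shared pool absorbs work from many different systems, instead of each system needing its own dedicated cluster.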

At their most basic, this is essentially what containers do: they offer independently deployable bits of code from which applications can be built.

Unix and Linux have been using containers for a number of years but the technology’s recent growth can really be attributed to Docker, whose partners include the likes of IBM, Amazon Web Services and Microsoft.

This isn’t to say that Docker is the only company playing in the field, nor that it has perfected the technology.

As mentioned earlier, Google is a major player; it has contributed to numerous container-related open source projects such as Kubernetes. Microsoft is another big player, having added container support with Windows Server Containers and Azure Container Service for its cloud platform.

AWS has developed a cluster management and scheduling engine for Docker called EC2 Container Service, off the back of the popularity of container deployment on the EC2 platform.

Despite these deployments, there are areas of the technology that vendors are still working to improve.

Security is one of the areas.

Gunnar Hellekson, director of product management for Red Hat Enterprise Linux and Enterprise Virtualization, and Josh Bressers, senior product manager for security, wrote in a blog post about how containers are being patched in the wake of the glibc vulnerability.

The flaw means that attackers could remotely crash or force the execution of malicious code on machines without the knowledge of the end user.

The blog post points out that container scanners on their own are just a "paper tiger": they may help find a flaw but don't really help to fix it.

Red Hat says: “We provide a certified container registry, the tools for container scanning, and an enterprise-grade, supported platform with security features for container deployments.”

This has been a common message I have come across when reading about container deployments and about vendors integrating with Docker and other container technologies: typically there is a focus on improving security.

Although there are security improvements to be made, containers hold significant promise for the enterprise. Here are three examples of where containers can help.

 

Containers can help with the rise of bring your own device (BYOD)

As BYOD increases, it has become more important to separate work from play. This can be done through a secure container that provides an authenticated, encrypted area of a user's mobile device.

The point of doing this is so that the personal side of a mobile device can be insulated from sensitive corporate information.

More distributed apps

Docker can streamline the application development lifecycle by enabling distributed apps to be portable and more dynamic. Those distributed applications are composed from multiple functional, interoperable components.

Basically this is the next generation of application architectures and processes that are designed to support business innovation.

 

More innovation

Continuing on the innovation front: because the developer no longer has to focus on the labour-intensive work of tying the application to hardware and software variables, the developer is free to develop new things.

This is similar to one of the benefits of using a cloud vendor for Infrastructure-as-a-Service. Rather than dealing with the time consuming tasks that don’t add a great deal of value, the developer can do other things.

 

The top three Docker application areas have been test and quality assurance, web-facing apps and big data enterprise apps.

Its rising popularity can be linked to the rise of cloud, partly because web-scale players have used the containerisation technology to such good effect.

However, unlike cloud, Docker adoption has been driven primarily from the middle out; nearly 47% of Docker decisions have been made by middle management and 24% by a grassroots effort. Cloud meanwhile started primarily as a top down decision being pushed by CIOs.

Given its growing popularity and the benefits it can deliver to organisations and developers, the technology is likely to continue its rise to the top, particularly when you take into account the numerous big-name vendors that are working on perfecting it.

DevSecOps: How DevOps and Automation Bolster Security, Compliance

Posted on

DevOps bridges the gap between Development and Operations to accelerate software delivery and increase business agility and time-to-market. With its roots in the Agile movement, DevOps fosters collaboration between teams and streamlines processes, with the goal of breaking silos in order to “go fast”.

Information Security (InfoSec) and compliance are critical to businesses across the globe, especially given past examples of data breaches and looming cybersecurity threats. InfoSec has long been thought of as the group that “slows things down” – the wet towel to your DevOps efforts – often requiring a more conservative approach as a means of mitigating risk. Traditionally, DevOps was viewed as a risk to InfoSec, with the increased velocity of software releases seen as a threat to governance and security/regulatory controls (these, by the way, often require the separation of duties, rather than the breaking of silos.)

Despite some initial pushback, enterprises that have taken the "DevOps plunge" have shown – consistently – that DevOps practices actually mitigate potential security problems, discover issues faster and address threats more quickly. This has led to InfoSec increasingly embracing automation and DevOps practices as the "security blanket" that enables — and enforces — security, compliance and auditability requirements. This makes DevOps a resource for InfoSec, rather than a threat.

As a philosophy, DevOps focuses on creating a culture and an environment where Dev, QA, Ops, the Business and other stakeholders in the organization work in collaboration towards a shared goal. We now see DevOps evolving to DevSecOps – with InfoSec aligning with your DevOps initiative, and security requirements made a key tenet of your DevOps practices — and your DevOps benefits.

The Security Opportunity of DevOps

DevOps provides a huge opportunity for better security. Many of the practices that come with DevOps — such as automation, emphasis on testing, fast feedback loops, improved visibility, collaboration, consistent release practices, and more — are fertile ground for integrating security and auditability as a built-in component of your DevOps processes.

DevOps automation spans the entire pipeline, from code development and testing to infrastructure configuration and deployment. When done right, DevOps enables you to:

  1. Secure from the start: Security can be integrated from the early stages of your DevOps processes, and not as an 'afterthought' at the very end of the software delivery pipeline. It becomes a quality requirement – similar to other tests run as part of your software delivery process. In the same way that CI enables "shifting left", accelerating testing and feedback loops to discover bugs earlier in the process and improve software quality, DevOps processes can incorporate automated security testing and compliance checks.
  2. Secure, automatically: As more and more of your tests and processes are automated, you have less risk of introducing security flaws due to human error, your tests are more efficient and cover more ground, and your process is more consistent and predictable — so if something does break, it's easier to pinpoint and fix.
  3. Secure throughout: By using tools that are shared across the different functions, or an end-to-end DevOps Automation platform that spans Development, Testing, Ops and Security – organizations gain visibility and control over the entire SDLC, making the automated pipeline a "closed loop" process for testing, reporting and resolving security concerns.
  4. Get everyone on the same page/pipeline: By integrating security tools and tests as part of the pipeline used by Dev and Ops to deploy their updates, InfoSec becomes a key component of the delivery pipeline, and an enabler of the entire process (rather than pointing fingers at the very end!)
  5. Fix things quickly: Unfortunately, the occasional security breach or vulnerability might come up – requiring you to act quickly to resolve the issue (think Heartbleed, for example.) DevOps accelerates your lead time – so that you can develop, test and deploy your patch/update more quickly. In addition, the meticulous tracking provided by some DevOps platforms into the state of all your applications, environments and pipeline stages greatly simplifies and accelerates your response when you need to release your update. When you know exactly which version of the application, and all its components/stack, is deployed on which environment, you can quickly pinpoint the component of the application that requires the update, identify the instances that require attention and roll out your updates in a fast, consistent and repeatable deployment process by triggering the appropriate workflow.
  6. Enable developers, while ensuring governance: DevOps emphasizes the streamlining of processes across the pipeline to have consistent development, testing and release practices. Your DevOps tools and automation can be configured to enable developers to be self-sufficient and "get things done", while automatically ensuring access controls and compliance. For example, as a resolution to the growing "shadow IT" phenomenon, we see a lot of organizations establishing an internal DevOps service for a dev/test cloud – with shared repositories, workflows, deployment processes etc. This allows engineers on-demand access to infrastructure (including Production), while automatically enforcing access control, security measures, approval gates and configuration parameters – to avoid configuration drift or inconsistent processes. In addition, it ensures all instances across all environments – whether in Development, QA or Production – are identified, tracked, operating within pre-set guidelines, and can be monitored and managed by IT.
  7. Secure both the code, and the environments: By creating manageable systems that are consistent, traceable and repeatable you ensure that your environment is reproducible, traceable and that you know who accessed it and when.
  8. Enable 1-click compliance reporting: Automated processes come with the extra benefits of being consistent and repeatable, with predictable outcomes for similar actions/tests, and they can be automatically logged and documented. Since DevOps spans your entire pipeline, it can provide traceability from code change to release. If you have a DevOps system you can rely on, auditing becomes much easier. As you automate things – from your build, test cycles, integration cycles, deployment and release processes – your DevOps automation platform has access to a ton of information that is automatically logged in great detail. That, in effect, becomes your audit trail, your security log, and your compliance report – all produced automatically, with no manual intervention and without you having to spend hours backtracking your processes or actions in order to produce the report.
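The "secure from the start" idea in the list above can be sketched as a security gate that runs as just another pipeline stage. This is a minimal illustration under made-up assumptions: the vulnerability list and package versions are invented, and a real pipeline would call an actual scanner rather than a hard-coded set.

```python
# Sketch of an automated security gate run as an ordinary pipeline stage.
# The vulnerability feed and package names are hypothetical; a real
# pipeline would consult a genuine scanner or advisory database.

KNOWN_VULNERABLE = {("glibc", "2.17"), ("openssl", "1.0.1f")}  # invented feed

def security_gate(dependencies):
    """Fail the build if any dependency appears on the vulnerable list."""
    flagged = [dep for dep in dependencies if dep in KNOWN_VULNERABLE]
    return ("FAIL", flagged) if flagged else ("PASS", [])

# A hypothetical application's dependency manifest.
deps = [("glibc", "2.22"), ("openssl", "1.0.1f")]
status, flagged = security_gate(deps)
print(status, flagged)
```

Because the gate runs automatically on every change, a vulnerable dependency is caught at commit time rather than discovered in an audit months later — which is exactly the shift-left argument made above.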

DevSecOps enables organizations to achieve speed without risking stability and governance. Security and compliance controls should be baked in as an integral part of the DevOps processes that manage your code from development all the way through to Production. By implementing DevOps processes that incorporate security practices from the start, you create an effective and viable security layer for your applications and environments that will serve as a solid foundation to ensure security and compliance in the long run, in a more streamlined, efficient and proactive way.

Can We Stop Talking About DevOps?

Posted on

By Tom Hall

I first heard the term DevOps from a friend in the software business when I was working at Dell HQ in Round Rock, TX. He recommended that I read The Phoenix Project, by Gene Kim, Kevin Behr and George Spafford. I did read it and I found it a fascinating story, but as a long-time network administrator (to oversimplify as much as possible) and having never been involved in developing software, I wasn’t quite sure how it applied to me. It was enough to pique my interest though and drove me to volunteer at a DevOpsDays event in Austin. Within weeks I left Dell and joined a small software company, where I could have a better chance of getting my arms around this slippery pig called DevOps and more hands-on practice with the related tools and techniques.

So imagine my surprise when a year later, the same old friend who introduced me to DevOps suggested that the end-goal of a DevOps transformation should be that we stop talking about DevOps. Uh-oh, I thought, he’s lost the plot. But he made his case by pointing out that adding testers to dev teams–a typical step in any Agile transformation–doesn’t result in new devtest or agiledev teams, they remain simply “dev teams”. Once a business/group has fully internalized the Agile or DevOps values and behaviors, he argued, ‘working in an Agile or DevOps way’ becomes simply ‘working’. This is why many DevOps promoters, myself included, have a negative reaction to seeing ‘DevOps’ in job titles, job descriptions, or department names. While it simplifies the job of identifying which job candidates have a DevOps mindset or skillset, it effectively excuses everyone else in the organization from taking responsibility for change. “Doing DevOps” becomes the job of a particular individual, team or department instead of a philosophy internalized by the organization.

Of course the horse is already out of the barn on that one. Each year we see an increase in the number of businesses advertising for DevOps Engineers, forming DevOps departments or listing DevOps among the required skills for a position. It doesn’t matter whether we think this is a good thing or bad thing, evolution will take its course and there’s little we can do to stop it. What we can do is observe, acknowledge and adapt. Sure, the dangers are real. A DevOps department can easily become just a third silo or the DevOps Engineer might become the scapegoat for everything that goes wrong between development and operations. But if a DevOps Engineer, VP of DevOps or DevOps department can form, contribute to and/or nurture an environment of increased empathy, faster feedback and continual growth, that’s awesome; put the focus there. Don’t spin your wheels arguing that they just “don’t get it” or “they’re doing it wrong”.

I’ve spent the past couple years researching, promoting and attempting to practice this thing we call DevOps. While it’s widely believed that DevOps is impossible to define, I am firmly in the camp of those who believe that empathy is, to quote Jeff Sussna, “the essence of DevOps”. Just as the sales team needs to have empathy for its customers, everyone in the value chain needs to have empathy for each other. I’m aware that many people find this term too “touchy-feely”, but the business benefit of more harmonious interaction between one’s teams and between the organization and its customers is clear. The faster and more effective one is at converting solutions into real value for customers, the more successful one will be in the market. Thus being empathetic isn’t just being a good citizen, it’s being a good businessperson.

Taking it a step further, someone on Twitter recently suggested that "DevOps has nothing to do with technology". The statement seemed ridiculous on its face. After all, the term 'DevOps'– literally a portmanteau formed from the words 'developers' and 'operations'–was coined by Patrick Debois, a technologist intent on improving the process of building and deploying software. But as I thought about it I started to understand. As the DevOps movement has evolved, the most fundamental components its promoters have identified, from Gene Kim's "Three Ways" (Systems Thinking, Amplify Feedback Loops, Culture of Continual Experimentation and Learning), to John Willis and Damon Edwards' CAMS (Culture, Automation, Measurement, Sharing), to Dave Zwieback's ICE (Inclusivity, Complex Systems, Empathy), are primarily about how people work. It doesn't matter if one is building software or running a school, DevOps principles are sound.

I recently tweeted the observation that “anyone who attempts to define DevOps in a paragraph is trying to sell you something”, adding “not necessarily a bad thing, just buyer beware”. This wasn’t a criticism of tool providers; many tools, from build automation to log monitoring, are essential to an effective DevOps transformation. It was a reminder that the problems and solutions DevOps is concerned with are complex and nuanced. There is no prescribed toolset or checklist of actions one can take to be successful; an effective DevOps transformation requires thoughtful, committed action across the whole range of people, practices and products one uses in their work. It might take months or years of practice, but maybe one day we can all stop talking about DevOps.

DevOps Needs Great Managers

Posted on

by David Fredricks

There has been much discussion about DevOps and the benefits it offers to organizations. A lot of what I've read focuses on defining what DevOps is and how it will improve the bottom line. Much of the information is general in nature — broad-stroke, theoretical content. According to Wikipedia, it is about "Getting Developers and Operations talking to one another", "Tearing down the wall of confusion", "Collaboration between different departments", "Automating your systems", etc. This really makes it difficult for one to take specific actions.

From my experience, defining DevOps is personal. The direct challenges that one deals with on a day-to-day basis really shape the concepts of DevOps and the benefits one hopes to accomplish. This makes it difficult to plug and play the process. What works for one organization is not necessarily the right direction for another. The best-case scenario is starting from scratch and building your people, process and policies around core best practices. (This is why most of the success stories you read about come from greenfield startups.) Unfortunately, most do not have this luxury. The reality is that companies have layers upon layers of legacy systems, processes and people interconnected with technology running in production. Herein lies the challenge.

Achieving a successful DevOps transformation means complete transparency, disclosing all of the cultural bias, loyalties, motivations and scars that internally exist. This is a huge undertaking. Many of the failures in transformation happen because companies are not transparent with themselves.  Having a deep understanding of systems, process, people and customers is vital to being successful. What? Where? When? Why? These questions must be addressed, understood, and accepted before you can move to the “How”.

This is why managers are the key to success. No one understands the complexities of the business better than the managers. They know where the "skeletons" are buried, who is in bed with whom, which systems work and which do not, and why some processes are done one way and others another. Managers are the bridge between what is being said and what is actually getting done. Great managers define the realities for their team and communicate them back up to leadership. They understand that information sharing is vital for success in transformation and growth within the organization. Great managers provide work environments focused on continuous learning for their employees. They create clear and focused workflows for their teams, shielding them from naysayers, negative process and time sucks. They influence executive direction and gain acceptance. Great managers are able to keep their teams unified and committed. Great engineers stay with and work for great managers.

DevOps Needs More Great Managers!

Key Characteristics of Great Managers:

1. Emotional Intelligence/Empathy: Great managers understand their team. They have personal connections with each member. They know how and when to engage teammates. Great managers accept their role as coaches, mentors, therapists, and sometimes friends. They are able to build mutual trust and respect with their teams. Great managers lift team members up to achieve more than even they themselves believed they were capable of.

2. Teaching and Learning: Great managers create an environment of "Teaching and Learning". They understand that the distribution of information is a strength, not a weakness. Dustin Collins wrote a great blog post highlighting the risk when information is not shared in "From Zero to Hero, Or There and Back Again". Great managers believe in continuous learning — learning with the intent to teach others. This single methodology builds an expectation for everyone on the team to learn with the intent to teach. Every member of the team is always learning and teaching.

3. Strong Communication/Listening Skills: Great managers create environments of transparency and sharing simply by listening. They understand that all information is important. Being a great communicator also means understanding what questions to ask and why. QBQ by John Miller is a great reference. Uncovering the true motivations behind one's dialog is essential to being a great manager. Great managers deflect white noise and focus the team on the main message. They define clear goals and expectations. This is especially important when organizations are shifting culturally. Change is only scary when information, expectations and direction are not fully and properly communicated and understood. Leadership voids cause fear, and fear kills innovation.

4. Protectors/Blockers: Great managers understand their role to block and shield their teams from issues outside of their influence. They take accountability for the actions of their team. If something goes wrong, they take the blame. When things go well, the praise goes to the team. Great managers care more for their team's success and well-being than their own. They create a "No Fault Environment", allowing employees to try new things without the fear of failing.

Great managers are honest with their team, leadership, and especially themselves. They are constantly defining realities for the people around them. Great managers have the respect of executive leadership. They are able to influence real change at the very top of an organization. They create stable work environments for their teams. As the Puppet Labs State of DevOps report puts it, "Engineers do not leave companies, they leave managers." Great managers build and retain cohesive teams. Employee churn is one of the most disruptive elements to successful DevOps transformations. DevOps is hard! Having great leadership in place is essential for guidance and reassurance when something doesn't go as planned — and it WILL happen. Great managers know how to respond in these situations without sounding the alarms. Great managers keep the ship calm in rough waters to ensure smooth sailing and success for all.