
4 DevOps survival tips for security specialists

I’ve never been a big fan of reality TV, but I do find the idea behind the UK show “I’m a Celebrity…Get Me Out of Here!” kind of amusing.

For those unfamiliar, the show places up to 12 B-list celebrities into a jungle setting where they compete to be crowned king or queen. Bereft of creature comforts or the trappings of fame, they survive by enduring a series of viewer-voted trials – usually involving eating exotic food (think insects and spiders). OK, it’s not sophisticated, but it does raise money for charity.

In many ways, the role of the IT security professional is analogous to that of a contestant on this reality TV show. Think about it – they’re increasingly being asked to perform a series of unpleasant tasks while operating in extreme and unfamiliar territory. Like, for example, maintaining compliance and mitigating risk in a DevOps-style operation, where software is being built and deployed at dizzying rates.

At first glance, it appears that the goals of DevOps and security are at odds. Whereas DevOps calls for increasing the pace at which software is delivered, security and compliance seek careful and deliberate oversight to ensure the business isn’t opening itself up to vulnerabilities. And, with a mountain of rules and regulations to support, it’s not surprising that security could easily end up being regarded as the bottleneck in any release process – killing DevOps benefits faster than you can say “continuous deployment.”

So against this “jungle” backdrop, how can security pros adapt and automate their own processes to support DevOps without the business being eaten alive by non-compliance, hacks and exposures?

I see four essential practices for security professionals:

Engage Dev and Ops as fellow Survivalists

In DevOps, the aim is shared responsibility and accountability. Security pros should therefore seek to establish relationships with Dev and Ops teams and engage them as active stakeholders in security. This doesn’t mean continually enforcing rigid and inflexible security policies, but actually working collaboratively to assign security responsibilities to the teams best positioned to act on them. For example, during every application security incident, the developers responsible for the implicated code should be the first group called on to help address the problem. After all, they’re intimately familiar with the software’s workings, plus the lessons they learn will help harden future application security.

Show off how security enables DevOps and vice versa

As organizations increasingly embrace DevOps, there’ll be many new tools and processes introduced. But as with everything new, these elements could introduce new threats and risks. Rather than seeing this as a problem, highly collaborative teams work proactively to identify where additional advice and controls are needed and can be applied without causing friction. For example, during development of a new mobile or IoT application, security can provide critical guidance on new threat surfaces, API governance and vulnerability testing. Remember too, however, that new toolsets in areas such as configuration management and release automation provide an opportunity for teams to bake security into the continuous delivery pipeline. This could be as simple as automatically invoking static code analysis during every application build, or providing development teams with comprehensive and fully automated security testing services that can be used repeatedly.
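To make the “invoke static analysis during every build” idea concrete, here is a minimal sketch of a build-pipeline step, assuming a Python codebase scanned with the open-source Bandit security linter; the source directory, severity threshold and fail-the-build behaviour are illustrative assumptions, not a prescription for any particular toolchain.

    import subprocess
    import sys

    # Hypothetical pipeline step: run a static security scan on every build
    # and fail the build when high-severity findings appear. The tool choice
    # (Bandit, a Python security linter) and the "src" path are assumptions.
    def run_security_scan(source_dir: str = "src") -> int:
        result = subprocess.run(
            ["bandit", "-r", source_dir, "-lll"],  # -lll: report only high-severity findings
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print("High-severity findings detected; failing the build.")
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_security_scan())

Wired into a CI job, a non-zero exit code stops the pipeline before the artifact is ever published, which is exactly the kind of friction-free control a security team can hand to Dev and Ops.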

Shift Security processes left into the Dev undergrowth

As with the traditional development-to-operations code hand-off, the tendency has been to engage security very late in the development process – and nothing could be worse. Too often, security teams are seen as the bottleneck police, holding up deployment with snap code audits and “slap on the wrist” compliance checks. But it needn’t be this way. DevOps enables security to be baked right into the development mix, alongside parallel testing. So, as code is produced, automated tests can be invoked to continuously check compliance controls, like separation of duties and privileged user access – meaning security becomes established and seen as a key element driving positive outcomes, not a despised roadblock.
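As one illustration of what such an automated compliance check might look like, below is a minimal, hypothetical separation-of-duties gate a pipeline could run before promoting a release: it simply refuses to proceed if the person who approved a change also authored or deployed it. The release record and its field names are invented for this sketch and don’t correspond to any particular tool’s API.

    # Hypothetical separation-of-duties gate for a delivery pipeline.
    # The release record and its field names below are illustrative only.
    def check_separation_of_duties(release: dict) -> list:
        violations = []
        if release["author"] == release["approver"]:
            violations.append("the change was approved by its own author")
        if release["approver"] == release["deployer"]:
            violations.append("the approver also performed the deployment")
        return violations

    if __name__ == "__main__":
        release = {"author": "alice", "approver": "alice", "deployer": "bob"}
        problems = check_separation_of_duties(release)
        if problems:
            raise SystemExit("Compliance gate failed: " + "; ".join(problems))
        print("Separation-of-duties check passed.")

Because the check is just code, it can run on every build alongside the unit tests, which is one practical way of shifting a security control left.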

Recognize that Jungle survival demands special skills

As applications become more complex and the threat landscape more severe, highly skilled security specialists will become a prized and critical element of the well-oiled DevOps machine. Don’t make the mistake of assuming that developers with a smattering of web application security experience can take on the job (or will even want to), or conversely that existing security pros – more used to maintaining security in legacy applications that change infrequently – can suddenly think like agile developers. Most likely these skills will need to be developed over time by leveraging DevOps-style collaboration. For developers, this means, for example, inviting security to participate in user story development, stand-up meetings and retrospectives. For security, it means gaining credibility through a deeper understanding of modern coding practices, providing faster feedback and taking an active voice in all security-related discussions. So, when the proverbial crap hits the fan and a major security incident occurs, security teams will be listened to and their advice followed.

Unlike B-list celebrities craving attention on a pretty dumb TV show, security professionals operate in the real-world jungle of risk, compliance, threats and vulnerabilities. And, as the pace of software development increases, security pros will increasingly need to evolve and adapt their practices according to DevOps principles.

Microsoft’s Unwanted Win: Cloud Downtime

In the last year, Microsoft’s Azure cloud service trailed its two biggest competitors – Amazon Web Services and Google Compute Engine – in uptime. Or, put another way, it was the “winner” when it came to downtime.

According to cloud benchmarking company CloudHarmony, Amazon took the uptime crown in 2014. Its EC2 compute service provided 99.9974 percent uptime, meaning it was down for a total of 2.01 hours during the year. Google’s Compute Engine, meanwhile, offered 99.9814 percent uptime for the year, down for a total of 3.46 hours.

If you look at it in terms of service levels, Amazon provided four-nines uptime, while Google provided three-nines uptime. Microsoft also provided three-nines uptime, with its compute service available for 99.9388 percent of the time, but this translates to downtime of 42.94 hours.

In terms of number of outages, Amazon had 12, Google had 88, and Microsoft had 103.

Looking at the data from a storage perspective, Google did better than Amazon, but Microsoft still came in third of the three. Google Cloud Storage offered 99.9996 percent uptime, with eight outages and 14.23 minutes downtime; Amazon’s S3 service provided 99.9952 percent availability (with 23 outages and 2.69 total hours of downtime); and Microsoft Azure Object Storage provided 99.9853 percent uptime (138 outages and 10.89 hours of downtime).
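For a rough sense of what those “nines” mean in practice, the sketch below converts an availability percentage into annual downtime for a single service instance over a 365-day year. (The CloudHarmony totals quoted above aggregate outages across multiple monitored regions and instances, so they won’t line up exactly with this back-of-the-envelope calculation.)

    # Back-of-the-envelope conversion from an availability percentage to
    # annual downtime, assuming a single service instance and a 365-day year.
    HOURS_PER_YEAR = 365 * 24

    def annual_downtime_hours(uptime_percent: float) -> float:
        return (1 - uptime_percent / 100) * HOURS_PER_YEAR

    for label, uptime in [("four nines", 99.99), ("three nines", 99.9), ("two nines", 99.0)]:
        hours = annual_downtime_hours(uptime)
        print(f"{label}: {uptime}% uptime is roughly {hours:.1f} hours of downtime per year")

Run as-is, this prints roughly 0.9 hours for four nines, 8.8 hours for three nines and 87.6 hours for two nines, which is why the gap between Amazon and Microsoft in the figures above is so striking.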

Health Care Industry Puts a Price Tag on Unpatched Software

Last week it was reported that federal regulators have issued a sanction against an Alaskan mental health service provider due to, of all things, not being up to date on software patches. Fined $150,000 for HIPAA violations, Anchorage Community Mental Health Services failed to apply available software patches and was subsequently infected with malware that led to the theft of personal information belonging to 2,700 individuals.

The HIPAA fine against the Alaskan health provider is just the first for the health industry, but more and more, companies are being held financially responsible for security breaches. And it’s foreseeable that more fines will be levied in the near future.

This is an unexpected angle on software patching: a run of bad vendor patches could ultimately lead to lost data, regulatory scrutiny and a fine.

In the report, Susan A. Miller, a HIPAA and healthcare attorney, says:

The lesson here is that when a software patch or update is sent by a vendor, they should be applied immediately. That includes operating systems, electronic health records, practice management – and any electronic tool containing PHI.

This comes on the heels of a year of patching woes for most Microsoft customers. Many customers have altered their patching policies so that critical updates are delayed by weeks and sometimes months because Microsoft hasn’t been able to deliver an error-free month. So, what happens when a company delays a critical update because of fear of botched updates? Should IT deploy anyway and expect to just deal with the fallout of lost revenue due to downtime and crashing applications? Is Microsoft at all to blame? And, who should be responsible for paying the fine if an attack was successful due to a patch that didn’t work? There have been several instances this year where a critical, zero-day patch was flawed, had to be recalled, fixed, and rereleased.

It’s one thing to be flat-out irresponsible with security, as in the recent Sony hack and other reported cases like Target, but quite another to get caught waiting for a workable patch. In the case of the Alaskan health provider, the organization was negligent, but let’s hope it doesn’t set a precedent where it’s difficult to determine who is really at fault. This is a slippery slope, particularly if Microsoft can’t figure out how to fix its QA processes.

 

Rocket vs Docker and The Myth of the “Simple, Lightweight Enterprise Platform”


With seemingly everyone who’s ever written an app or booted a VM jumping on the cargo ship at the moment, it’s hardly surprising to see the launch this week of Rocket, a credible challenger to Docker in the container space.

Reading through the launch announcement, I found two of the motivations given by the CoreOS team especially interesting. They set me wondering: perhaps looking for “simple, lightweight enterprise solutions” is actually a pipe dream, and we’d be better served by looking for “enterprise features” from Day 1.

But back to the Rocket announcement. First, the following:

“From a security and composability perspective, the Docker process model – where everything runs through a central daemon – is fundamentally flawed. To ‘fix’ Docker would essentially mean a rewrite of the project, while inheriting all the baggage of the existing implementation.”

This flaw has been known and discussed pretty openly for a while, but it hasn’t been called out quite this bluntly until now. To me, the main takeaway here is a reaffirmation of a fact every experienced operations engineer already knows: building a reliable, secure runtime is hard! So hard, in fact, that even the very smart team at Docker have yet to get it right.

So why are so many companies with a fraction of the experience of the Docker engineers trying to build their own container-based PaaS platform right now? With the dazzle of the shiny new toy starting to wear off, we can hopefully dare to accept that trying to build your own runtime environments by stringing together a combination of brand-new technologies is Just Not A Good Idea. Not unless you have:

  1. People in your organization who understand the complexities of building and maintaining these systems
  2. A lot of cycles on your hands
  3. A timeframe with room for a couple of painful iterations while the technologies mature, anyway.

And then this one:

“Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned.”

My first thought was: so, in a simple, composable, non-platformy Rocket world, will all this complexity

  • …be handled by CoreOS (and how is that not simply a different choice of platform)?
  • …be handled by the App Container Runtime, but then somehow in a “less heavyweight” manner?
  • …simply go away?

To be clear: I’m not trying to be critical of Rocket or the Rocket team in any way here. I think the added diversity in the container space comes not a day too soon, and I’m really looking forward to seeing what Rocket will bring to the table. What does interest me, however, is the notion of simplicity as an achievable trait. Appealing as it may be, is it realistic in an operational context?

As a developer, I totally appreciate the desire to have “simple” and “lightweight” solutions for everything. I get that “urgh” feeling when working with clunky, slow, antiquated tools and processes just as everyone else does. But working day-in, day-out with teams introducing enterprise tools for Continuous Delivery and DevOps, I’ve come to accept that some of the “weight” of enterprise tools is actually necessary. It was a painful truth to me: those “enterprise requirements,” the ones that cause much of that “heavyweight” feeling, the ones that we all want to banish…they’re real. They’re never going away. Delivering a business service at a certain scale (not just running the underlying tech, which is hard enough in itself!) just comes with some intrinsic complexity that doesn’t simply evaporate “if only you use the right tools”, much as we would like to believe that.

A colleague, a highly experienced architect and consultant, put it this way:

I do understand the path that Docker is taking: just having a Docker container is almost as exciting as having a Java EAR file. The major difference is that most operations departments know what to do with an EAR file (sort of). To run Docker-based applications 24×7 in production requires a platform, and any container is just a building block.

Sure, if you’re experimenting in a lab environment, or if you’re a startup with a handful of people and a couple of VMs in the cloud, the “simple”, “lightweight” and “latest and greatest” tech will work for you. But once you try to move out of the lab and into production, or once your startup grows from 5 to 50 and then 500 people, all those boring enterprise problems – scalability, auditability, reporting, managing multiple versions across many release pipelines, ease of use, etc. – will be right there, just when you have absolutely no time to get to grips with them.

Of course, there’s always the option of staying in the lab and throwing your app over the wall, or of leaving for a new startup once you hit 50 or 500 people, but that’s just shifting the problem to someone else. Let’s call a spade a spade: the “Simple, Lightweight Enterprise Platform” for consistently delivering your applications doesn’t exist.

Don’t get me wrong: I’m tremendously excited by the potential of microservices as an application paradigm and containers as an implementation technology for them. I firmly believe we need new ideas and, yes, new tooling to make proper use of the potential of microservices and containers and to address the new challenges (such as runtime dependency management) that they introduce. And of course we should avoid and eliminate unnecessary complexity and weight wherever possible.

But let’s stop this mad rush to operationalize a bunch of “we-called-it-1.0-because-0.2-sounds-scary” tooling. Let’s stop pretending that we can grow our projects and companies to any kind of meaningful size with a bag of operational tools, many of which are “simple” because they are simplistic and “lightweight” because they lack essential features we’ll need down the road. This time, let’s try to ensure that security, scalability, reporting, usability and the rest are in place before we’re scrambling to hook up hundreds of new servers a day, or getting hammered with audit requirements, and need to divert precious time to get our shiny, “simple”, “lightweight” tooling to cope.

Enterprise features may be boring and “heavyweight”, but for your next round of DevOps, Continuous Delivery, microservice and container tooling, look for them from Day 1. Because you will need them.