Let serverless solve the technology problems you don’t have

Posted on

Stop with the undifferentiated heavy lifting already

Luckily for all of us a lot of really bright people have been hard at work figuring out how to spend more time solving our business’ problems, and they’ve been sharing this information publicly for years. Entire conferences are dedicated to it. These practices and cultural changes are commonly referred to as DevOps. Practitioners and evangelists don’t always frame DevOps as a way to spend less time on technology problems and more time solving your business’ problems, but it’s all there. What is faster time to market if not the ability to try and solve your business problems more quickly than before?

The thing is you probably know all of this. Most of the people in your organization probably know all of this. So why aren’t you spending more of your time solving your business’ problems?

Everybody wants to spend less time performing undifferentiated heavy lifting

Serverless is the most compelling catalyst out there to help your organization make these changes. As with all buzzwords, serverless can mean different things to different people. By serverless I mean ephemeral functions which encapsulate your business logic and expose organizational capabilities. These functions integrate with and consume managed utility services, and all of this is supported by tooling that structures your solutions and takes care of boilerplate, undifferentiated heavy lifting for you.
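To make "ephemeral functions which encapsulate your business logic" concrete, here is a minimal sketch in the style of an AWS Lambda handler. The event shape, function name, and business logic are hypothetical illustrations, not part of any real API.

```python
import json

# A minimal, stateless function in the style of an AWS Lambda handler.
# The event shape and business logic here are hypothetical illustrations.
def create_order(event, context=None):
    """Encapsulates one organizational capability: creating an order."""
    body = json.loads(event["body"])
    order = {
        "customer_id": body["customer_id"],
        "items": body["items"],
        "status": "PENDING",
    }
    # In a real deployment this would call a managed utility service
    # (a hosted database, a queue, etc.) rather than keep any local state.
    return {"statusCode": 201, "body": json.dumps(order)}
```

The function holds no state between invocations; everything durable lives in managed services, which is what makes it ephemeral and trivially scalable.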

The Serverless Framework is the secret sauce that was missing before

Guard rails for the yellow brick road

If DevOps Oz is the destination, getting there entails following the yellow brick road. As many organizations have discovered, this road has signposts but no guard rails. Tools like Docker have helped some companies along their journey, but the experience of many has been that doing the wrong thing is still easier than making the right choice. These kinds of tools and patterns work great and expedite the process for organizations that were going to make the change with or without the tool’s help, but do little for the organizations that need a tool to enable the change.

Serverless, and especially the Serverless Framework, makes doing the right thing easier. Trying to put too much functionality into a single service? While nothing will technically stop you, you’ll feel the pain right away and get nudged into decomposing into multiple, smaller services. Having a hard time restructuring into embedded product teams? The Serverless Framework combines infrastructure and code into a single package, making it easier to deploy in complete, self-contained units.
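As a sketch of what one of those self-contained units might look like, here is a hypothetical Serverless Framework configuration; the service, handler, and path names are made up for illustration.

```yaml
# serverless.yml — one deployable unit: function code plus the
# infrastructure (HTTP endpoint, runtime, permissions) it needs.
# Service, handler, and path names are hypothetical.
service: orders

provider:
  name: aws
  runtime: python3.9

functions:
  createOrder:
    handler: handler.create_order
    events:
      - http:
          path: /orders
          method: post
```

A single `serverless deploy` ships the code and provisions the endpoint together, which is what makes complete, self-contained deployments the path of least resistance.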

There are still technology problems to solve with serverless — it’s a catalyst, not a panacea. The important thing is that even with these new problems you’re miles ahead of where you were before, and the self-correcting nature of the framework and tooling helps you stay on the right track for your entire journey.

No more death stars

Friends don’t let friends build death stars

Have you ever been part of a project that was planned to last over a year? How well did that go?

These types of projects happen to organizations caught in a painfully reinforcing cycle of shortsightedness. Common ingredients include monolithic systems with overlapping capabilities, integration as an afterthought, and organizational silos within which decision makers only maximize against ultra-local consequences — tragedy of the commons be damned. In this type of environment a boiling-the-ocean approach appears to be the only option for solving Today’s Most Pressing Problem. The problem is a hard one to address because death stars breed more of the same — each ocean boiling leaves the enterprise less able to deal with Tomorrow’s Most Pressing Problem and the cycle is perpetuated.

By helping you decompose your teams and systems into smaller ones, serverless helps you escape this death spiral. API design is a first-class citizen of serverless architectures, and the tools make it dead simple to create clean, simple, and friendly APIs for others to consume. This clean decoupling helps you break down the monoliths and frees up your teams to operate and iterate more independently of each other. This type of decomposition has always been the right way to do things, and by making the right thing the easy thing serverless saves you from yourself.

Breaking the technology diversity & governance dichotomy

Running and maintaining your software is critical to the ongoing success of your company and it would be irresponsible to not have some form of governance in place to support it. At many organizations software governance takes the form of limiting developers’ choices by standardizing — becoming a Java shop, for example. Too often this results in a dry and barren technology landscape, even while the outside technology world grows in complexity and diversity. The key to falsifying the governance-diversity dichotomy is to decouple the cost of governance from the process of exploring new and innovative technologies.

Through built-in support for automation, extensibility, functional decomposition, and code-driven access control, serverless can help keep governance costs constant even while the overall system complexity grows. Code-driven governance is a paradigm shift for what is often the most conservative group within an organization, but it’s one that serverless makes easier.
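As an illustration of what code-driven governance can look like, here is a small sketch of a policy check that could run automatically against each service's configuration at deploy time. The specific rules (allowed runtimes, required tags) and the config shape are hypothetical examples, not any particular tool's API.

```python
# A sketch of code-driven governance: policy checks that run automatically
# against each service's configuration at deploy time. The specific rules
# (allowed runtimes, required tags) are hypothetical examples.
ALLOWED_RUNTIMES = {"python3.9", "nodejs18.x"}
REQUIRED_TAGS = {"team", "cost-center"}

def check_service(config):
    """Return a list of governance violations for one service config."""
    violations = []
    if config.get("runtime") not in ALLOWED_RUNTIMES:
        violations.append(f"runtime {config.get('runtime')!r} not approved")
    missing = REQUIRED_TAGS - set(config.get("tags", {}))
    if missing:
        violations.append(f"missing required tags: {sorted(missing)}")
    return violations
```

Because checks like this run on every deploy, the cost of governance stays roughly constant no matter how many services exist, which is the decoupling the section describes.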

Don’t let command and control governance keep you in a technology wasteland

Data-driven decision making & ongoing experimentation

Many organizations have a two-tiered model of decision making, where complex analyses and the tools to support them are available for customer-facing applications while backend systems languish in a decision-making stone age, relying on HiPPO (the highest paid person’s opinion) or anecdotes and stories about what people want. And some poor souls are working in places where all decisions get made in these haphazard ways. These decision-making processes can make sense in a simple or even complicated domain (see the Cynefin framework), but they don’t pass muster for complex problems, where patterns and causality are not clear and must ultimately be disentangled through probing and experimentation. If you don’t see the problems your teams face as complex, that generally means you haven’t solved the simple or complicated ones yet, which can itself be a result of stagnating in a state of undifferentiated heavy lifting — stay there too long and you’ll find that your competitors have solved the complex problems for you.

By breaking systems down into small, decoupled services bounded by APIs, serverless builds a strong foundation for collecting usage information and metadata on a per-service or even per-function basis. By tying this information back to SLAs, KPIs, or other measures of business outcomes, teams have the information they need to iterate and experiment — and the loose coupling of the underlying architecture gives them the space and freedom to do so.
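As a rough sketch of per-function usage collection, here is a decorator that records call counts and cumulative latency per function. In a managed serverless platform this data would typically come from the provider's metrics service rather than hand-rolled code; the decorator and the `lookup_price` function are hypothetical.

```python
import time
from collections import defaultdict

# Per-function usage metrics: call counts and cumulative latency,
# which could then be tied back to SLAs or KPIs. Illustrative only;
# real platforms expose this via their managed metrics services.
metrics = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

def measured(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            m = metrics[fn.__name__]
            m["calls"] += 1
            m["total_seconds"] += time.perf_counter() - start
    return wrapper

@measured
def lookup_price(sku):
    # Hypothetical business logic standing in for a real service call.
    return {"sku": sku, "price": 9.99}
```

Because each function is small and independently deployed, metrics like these map cleanly onto individual capabilities instead of being buried inside a monolith.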


Embracing Automation While Letting Go of Tradition

Posted on

While I am all for traditions like Thanksgiving turkey and Sunday afternoon football, holding onto traditions in your professional life can be career limiting. The awesome thing about careers in technology is that you constantly have to be on your front foot. Because when you’re not, someone, somewhere, will be, and when you meet them, they’ll win.

One tradition that has a limited lifespan at this moment is waterfall-native development and the security practices that go along with it. While the beginning of the end might first have been witnessed when Gene Kim and Josh Corman presented Security is Dead at RSA in 2012, we now have more quantifiable evidence from the 2017 DevSecOps Community Survey. When asked about the maturity of DevOps practices in their organizations, 40% stated that maturity was improving, while 25% said that it was very mature across the organization or in specific pockets.
How mature is your adoption of DevOps practices - DevSecOps

In a waterfall-native world, traditional application security approaches are bolted-on late in the lifecycle, performed manually, and can take hours to days to receive feedback. In DevOps-native worlds where SDLC stages shrink to absurdly short windows, old world technologies won’t be able to cross the chasm into this high-velocity realm.

For those of us in the security profession, there is an awesome opportunity in front of us. Our brothers and sisters in the Dev and Ops realms are calling on us to innovate.
Security is an inhibitor to DevOps Agility - DevSecOps

When faced with the chance to build security into a new and exciting development model, security practitioners must not miss the opportunity to make positive change. Interestingly, 65% of security respondents are in agreement that security is seen as an inhibitor to DevOps agility (Q31). It feels as if we may be letting the opportunity to drastically fix our application security woes pass us by.

Moving from an inhibitor to an enabler of best practices requires a shift in mindset. The solution to these difficulties is security automation at the speed of DevOps. Successful application security has been defined as increased automation that doesn’t slow down the development and operations process. Imagine a scenario where developers embrace security rather than find ways to work around it.

When the cycle times shrink, it’s time to rethink how we continue to refine and improve application security. As enterprises adopt and enhance DevOps, application security teams should focus on decreasing the amount of time it takes to detect an attack in progress and respond to an identified issue. In a DevOps-native world, automation of attack, anomaly, and application security protection at runtime is paramount. Hanging on to traditions is non-essential.

One example of where DevOps and security are sprinting at the same pace is with runtime application self-protection (RASP) and next-generation web application firewall (NGWAF) technologies. RASP and NGWAF give enterprises visibility into application security attacks and data at runtime, giving security, operations, and development teams a chance to improve application security results beyond just increased speed of assessment. By feeding the results of runtime security visibility and protection back into all stages of the development cycle, we can increase velocity while simultaneously increasing the security of our entire development effort.

DevOps practitioners will lead the charge to implement new application security technologies that meet these requirements, moving beyond traditional WAF deployments to modern application security technologies that embed into the heart of the application itself. The closer the protection gets to the core of the application, the stronger and more accurate the results. Automation is one of the fundamental keys to DevOps success, and security can’t be overlooked. Automation of application security will democratize security data, breaking down silos between groups and helping the entire organization operate more efficiently.

We can always just stick to tradition. Stick to what we have held to be absolute truths in application security for the last decade. Or we can choose to innovate our application security practices to incorporate learnings from the changes that are occurring around us. I think it’s pretty clear that innovation is required if we are to properly secure the modern application environment and that innovation will come in the form of application security automation.


DevOps still lacking definition

Posted on

by James Bourne

DevOps still lacking definition despite big usage figures – but the cultural element is key

The way in which DevOps is being tackled by different organisations can be defined in one of two ways. You’ve either got the ‘teenage sex’ approach – everyone’s talking about it but hardly anybody’s doing it – or you’ve got the Eric Morecambe method; ‘all the right notes but not necessarily in the right order.’

In other words, there is no one unified guide; companies and departments are taking their own strands and doing it their own way. This has been backed up by a study from B2B research firm Clutch released earlier this week, which finds that while 95% of organisations polled either already use or plan to use DevOps methodologies, nobody can agree on a proper definition.

The most popular definition at the time of the survey, according to the almost 250 respondents, came from Wikipedia, which states that DevOps “is a culture, movement or practice that emphasises the collaboration and communication of both software developers and other [IT] professionals while automating the process of software delivery and infrastructure changes.”

35% of those polled plumped for that, ahead of definitions taken from Rackspace (24%) – “uniting development and operations teams” – Hewlett Packard Enterprise (21%) – “grounded in a combination of agile software development plus Kaizen, Lean Manufacturing, and Six Sigma methodologies” – and Amazon Web Services (20%) – “combination of cultural philosophies, practices, and tools that increases an organisation’s ability to deliver applications and services at a high velocity” – respectively.

Riley Panko, the author of the report, said she wouldn’t have been surprised by a 25% split across all four from respondents. “We did hypothesise that there would be a lack of consensus among the definitions,” Panko told CloudTech. “We were a little surprised that Wikipedia’s definition secured even an 11% edge over the second-place definition.”

Organisations need to take time with DevOps to align the processes with their own goals – but not too much, for fear of being left behind

When asked whether employing DevOps has improved organisations’ development process, the average score out of 10 was 8.7. 87% of respondents put their score between eight and 10, with only one respondent ranking it four or below. According to survey respondents, Docker was the most useful tool for employing DevOps, with Microsoft Azure the most effective of the ‘big three’ cloud providers.

Panko noted that, in conducting the research, one clear element stood out. “I believe that culture is a key element to DevOps,” she said. “Successful DevOps implementation seems to require more than just tools and new processes – it involves encouraging a culture of greater communication and collaboration. Using those tools and processes with an antiquated mindset is counterintuitive – however, it’s all a matter of personal opinion and how you define DevOps for your own organisation.”

This is a view backed up by Brian Dearman, solutions architect at IT infrastructure consulting provider Mindsight. “It being a cultural movement is true,” he said. “15 to 20 years ago, DevOps consisted of two separate things, with operations and development consistently complaining about each other. The culture is being changed, removing the animosity between each side.” David Hickman, VP of global delivery at IT services firm Menlo Technologies, said that it was ‘more of a methodology trend than a culture’, adding it “pertains more to social elements of organisations and how people relate to each other from a professional and business perspective.”

So for the vast majority who are partaking, what should be done from here? As the various definitions overlap and consensus is less than universal, Clutch recommends organisations should take time to align the processes with their own goals, but not too much, for fear of being left behind. “Leaving a team without those guidelines means that they might develop conflicting ideas about DevOps, since many differing ideas about the philosophy already exist,” said Panko.

A study released earlier this month by F5 came to a slightly different conclusion; while the poll, of almost 2,200 IT executives and industry professionals, said usage was increasing, only one in five said DevOps had a strategic impact on their organisation.


Which Fate Will DevOps Choose?

Posted on

By Stephen Fishman

Comic books have a certain math: for every superhero, there is a villain.

But superheroes and super villains don’t only exist in the comics. They roam the halls of the business community. The heroes are called “movements” and, if you follow them, they promise to benefit not just the bottom line, but the people creating and managing their systems.

In comics, the heroes always win in the end (except for the characters killed temporarily only to be reborn in the sequel).

If only it were that easy in corporate America.
Movements Transform or Fade Away
In the current corporate world, heroic movements rise and fall; the villains, however, seem to have the staying power.

The core of the user experience (UX) movement has already left the building and embraced “service design” and/or “design thinking.” Agile is experiencing an identity crisis with critical books and articles popping up everywhere.

DevOps appears to be an unstoppable juggernaut inside of IT shops of all sizes and shapes, but it seems unlikely that DevOps will be the exception to the rule. All movements meet one of two ends: either their demise or their transformation.

Modern movements in enterprise development, inclusive of UX, Agile and DevOps, are no exception to the rule.

The Flawed Plots and Schemes of Flawed Organizations
Once a movement starts to gain traction, the seeds of its eventual demise or transformation have already been planted.

All of the villains below are described in terms of how they are battling the hero of the day: DevOps. Notice how each one has been around in almost the same form forever.

If you are young enough to see DevOps as your first paradigm change in technology development and management methodology, don’t fret. Just wait a decade and the cycle will start all over again.

Villain 1: Planting the flag too early, aka ‘We got the DevOps’
What’s the easiest way to kill something in corporate America? Claim “Mission Accomplished!” during an in-process transformation.

Ian Andrews, VP of product for Pivotal, captured the concept best when he recalled overhearing an IT executive in the early phases of a transformation say “Oh yeah. We got the DevOps.”

When the powers that be think of transformation as something you buy in a fiscal year or in a few quarters, the teams on the front lines of development, operations and customer support will pay the price.

A leadership team that underestimates the amount of work and time that goes into completing an enterprise mindset shift will often move the conversation, along with the budget and focus, to the market-facing concern of the day. Once the enterprise gaze shifts, it puts the movement at its most vulnerable state because leadership expectations have not shifted along with the enterprise commitment.

Once a leadership team thinks a goal has been achieved, unwinding expectations about benefits and the future effort required to cement the shift becomes nearly, if not entirely, impossible in a system bereft of empathy and shared context.

Once management deems an effort to have overpromised on its value proposition, the machine moves on and the effort is discarded.

Villain 2: Vendor/Community Overplay, aka ‘We are the DevOps’
If I see one more vendor claim to be the DevOps company, I’m going to scream. New vendors. New products. New conferences. New titles. I know DevOps is popular and I also know it is meaningful but I don’t think people understand that nothing can hold its meaning or value if everyone claims it as their own.

This is the poseur paradox. As a movement finds its base and grows in popularity, it by definition attracts people seeking value in it. As more people join the movement and identify themselves within it, the more watered down the movement gets.

A movement cannot be both a) universally adopted, and b) universally consistent.

Given that some people who adopt the DevOps moniker as their title will unavoidably not measure up to the original spirit of the movement, many of the audiences engaged by these people will walk away with a poor impression of the movement.

As more watering down happens and more audiences are left unfulfilled, it leaves the most skilled practitioners with no choice but to start a brand new movement.

The broken hearts and indefatigable souls of the original founders and acolytes team up to make the new movement both as inspiring as the first and harder for the watering-down process to repeat. This goal often works against itself, because the abstraction and complexity added to prevent the watering down just so happen to have the side effect of making the movement harder for people to attach to.

Villain 3: Class warfare, aka ‘We are the Totem Pole’
The brilliance of Werner Vogels, CTO of Amazon, isn’t as much in the API and PaaS areas as it is in one simple directive — “you build it, you run it.”

This injunction has effectively become religion in DevOps shops everywhere — it’s the organizing principle that allows DevOps to live up to its promise.

When a shop adopts this motto, automating CI/CD pipelines and stages becomes simple, because you can only reliably achieve the “you run it” part of the equation with the fast feedback loops provided by automated integration and deployment.

Shops not yet following “you build it, you run it” still embrace the separation-of-duties concept known as “the Totem Pole.” If you’ve ever worked in a fully siloed shop, you understand the corporate caste system I’m referring to, but if not, allow me to explain.

There are two conceptual areas in totem pole land, upstream and downstream, and the borders of these territories are defined by the location of the observer. Regardless of vantage point, the job model remains the same: take the partially thought-out work product given to you by the teams upstream and make it ready for handoff to the poor souls downstream.

In Development groups, UX and Product shops are “upstream” and Operations shops sit “downstream.” Customer support teams fall “downstream” of Operations, and so on.

Teams view any silo that is not theirs as the source of what’s wrong in the enterprise, either because the other silo doesn’t understand the complexities of yours (i.e., the people upstream of you) or because it isn’t willing or able to hire capable, qualified people (i.e., the people downstream of you).

Once discipline-centric silos form in a shop, totem pole narratives (i.e., us vs. them) become the norm — and this is where the seeds of division bloom into the thousand flowers of disunity, stagnation, blame and “over the wall” delivery to the next group one step down on the totem pole.

Given that the aim of DevOps is to reduce or eliminate these symptoms, a DevOps team set aside in a separate organization (i.e., my group designs it, your group builds it, and their group runs it) has the same chances for thriving as West Berlin did when it was surrounded by iron curtain states: no chance without an airlift from the west.

Divide and conquer is the mantra of the totem pole, and this villain is always waiting for the right moment to reintroduce the corporate caste system that starts with “role x is more important than role y” and ends with more silos, each of which is necessary and none of which is ready to accept that they are mutually interdependent.

Given that DevOps and hardened organizational silos are mutually exclusive concepts, organizations that attempt to do both will harm the movement, because it can never deliver on its promises without the elimination of the upstream/downstream culture.

What Comes Next for DevOps?
In superhero films, the villain gets vanquished, the credits roll and sometimes we get a post-credits preview of the next film for the heroic protagonist.

In the large corporate enterprise, it’s the heroes who are forced to choose between adaptation (UX to service design, agile to lean, etc) or slow death.

One or the other is coming for DevOps.

The only things you can do to hold fate at bay are to manage expectations, ignore the hype cycle and not get too attached to titles. Most of all, don’t let the powers that be divide and conquer. Remember that we are all the same: individual workers trying to get by and make the workplace a better place.


Docker, Red Hat & Linux: How containers can boost business and save time for developers

Posted on


Analysis: A lot of work is being put into making container technology vital to your business.

Container technology is in a period of explosive growth, with usage numbers nearly doubling in the last few months and vendors increasingly making it available in their portfolios.

Docker, which has popularised the technology, reported that it has now issued two billion ‘pulls’ of images, up from 1.2 billion in November 2015.

But containerisation technology isn’t new. Back in 1979, Unix V7 introduced the chroot system call, which provided a degree of process isolation, but it took until 2000 for early container technology proper to be developed, when FreeBSD developer Derrick Woolworth introduced jails, an early container technology.

Google is perhaps the biggest-name user of container-style technology; it developed Process Containers in 2006. Google’s creation of Borg, a large-scale cluster management software system, is credited as one of the key factors behind the company’s rapid evolution.

Borg works by parcelling work across Google’s fleet of computer servers, providing a central brain for controlling tasks across the company’s data centres. As a result, the company hasn’t had to build separate clusters of servers for each software system (Google Search, Gmail, Google Maps, etc.).

The work is divided into smaller tasks, with Borg dispatching tasks wherever it finds free computing resources.
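The scheduling idea described above can be sketched as a toy placement loop: a central "brain" that assigns tasks wherever it finds free capacity across a shared fleet, instead of dedicating a separate cluster to each system. Borg itself is far more sophisticated; the data structures and the greedy strategy here are illustrative only.

```python
# Toy sketch of centralized scheduling over a shared fleet.
# Borg itself is far more sophisticated; this is illustrative only.
def schedule(tasks, servers):
    """Greedily assign each task to the first server with enough free CPU.

    tasks:   list of (task_name, cpu_needed)
    servers: dict of server_name -> free_cpu (mutated as tasks are placed)
    Returns a dict of task_name -> server_name (unplaceable tasks omitted).
    """
    placement = {}
    for name, cpu in tasks:
        for server, free in servers.items():
            if free >= cpu:
                servers[server] = free - cpu  # consume the capacity
                placement[name] = server
                break
    return placement
```

Because every system draws from the same pool, idle capacity left by one workload is immediately available to another, which is the efficiency argument for shared clusters.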

At their most basic, this is essentially what containers do: they offer independent, deployable units of code from which applications can be built.

Unix and Linux have been using containers for a number of years, but the technology’s recent growth can really be attributed to Docker, whose partners include the likes of IBM, Amazon Web Services and Microsoft.

This isn’t to say that Docker is the only company playing in the field, nor that it has perfected the technology.

As mentioned earlier, Google is a major player, having contributed to numerous container-related open source projects such as Kubernetes. Microsoft is another big player, having added container support with Windows Server Containers and the Azure Container Service for its cloud platform.

AWS has developed a cluster management and scheduling engine for Docker called EC2 Container Service, off the back of the popularity of container deployment on the EC2 platform.

Despite these deployments, there are areas of the technology that vendors have been looking to improve.

Security is one of those areas.

Gunnar Hellekson, director of product management for Red Hat Enterprise Linux and Enterprise Virtualization, and Josh Bressers, senior product manager for security, wrote in a blog post about how containers are being fixed in the wake of the glibc vulnerability.

The flaw means that attackers could remotely crash or force the execution of malicious code on machines without the knowledge of the end user.

The blog post points out that the development of container scanners is just a “paper tiger” that may help find a flaw but doesn’t really help to fix it.

Red Hat says: “We provide a certified container registry, the tools for container scanning, and an enterprise-grade, supported platform with security features for container deployments.”

This has been a common message that I have come across when reading about container deployments: vendors integrating with Docker and other container technologies typically focus on improving security.

Although there are security improvements still to be made, containers hold significant promise for the enterprise. Here are three examples of where containers can help.


Containers can help with the rise of bring your own devices

The problem is that as BYOD increases, it has become more important to separate work from play. This can be done through a secure container that provides authenticated, encrypted areas of a user’s mobile device.

The point of doing this is so that the personal side of a mobile device can be insulated from sensitive corporate information.

More distributed apps

Docker can streamline the application development lifecycle by enabling distributed apps that are portable and more dynamic. These distributed applications are composed of multiple functional, interoperable components.

Basically this is the next generation of application architectures and processes that are designed to support business innovation.


More innovation

Continuing on the innovation front: because the developer no longer has to focus on the labour-intensive work of tying the application to hardware and software variables, the developer is free to develop new things.

This is similar to one of the benefits of using a cloud vendor for Infrastructure-as-a-Service. Rather than dealing with the time consuming tasks that don’t add a great deal of value, the developer can do other things.


The top three Docker application areas have been seen in test and quality assurance, web facing apps and big data enterprise apps.

Its rising popularity can be linked to the rise of cloud, partly because web-scale players have used the containerisation technology to such good effect.

However, unlike cloud, Docker adoption has been driven primarily from the middle out: 47% of Docker decisions have been made by middle management and 24% by grassroots efforts. Cloud, meanwhile, started primarily as a top-down decision pushed by CIOs.

Given its growing popularity and the benefits it can deliver to organisations and developers, the technology is likely to continue its rise to the top, particularly when you take into account the numerous big-name vendors working on perfecting it.