DevSecOps Will Ensure That Time-to-Market and Security Don’t Clash
Container security for DevSecOps
By Rani Osnat
According to a recent survey by Veracode, 52% of developers worry that application security will delay development and threaten deadlines. That is a huge percentage, especially considering how crucial finding, fixing, and preventing security vulnerabilities is to any development effort.
Any way you look at it, ensuring that quality code is also secure is complex. Traditionally, security vulnerabilities surfacing during design, development, deployment, upgrade, or maintenance were mitigated via:
Design review – involves creating a threat model of the application, usually together with a spec or design document, even before the code is created.
Tooling – using automated tools can lower human overhead, but you need to beware of false positives.
Blackbox audit – security testing of the running application from the outside, typically via an automated testing tool, without access to source code
Whitebox review – manual review of source code by a qualified engineer
Each of these techniques has its advantages, and each involves varying levels of time, effort, and cost – especially time. It’s exactly these types of before-the-fact or after-the-fact security reviews that developers fear – especially when time to market can make or break a project.
That’s Why They Invented DevSecOps
When it became clear that a streamlined yet secure build-and-ship process was a must-have, AppSec and/or InfoSec teams began taking a hard look at how and when security enters the development process. The idea is to integrate security measures at numerous points in the DevOps workflow, but to do so in a way that is as transparent as possible to developers. This keeps DevOps teamwork, agility, and speed intact, while still ensuring application security throughout the entire lifecycle, from development to production.
Transforming to a DevSecOps model isn’t simple, though. First off, it requires a change in the way your organization works and thinks. Namely, DevSecOps requires:
Increased focus on customers – app security experts aren’t necessarily used to thinking about customers. Rather, they’re rightfully used to seeking vulnerabilities and mitigating threats. With the advent of DevSecOps, AppSec needs to adapt security programs and practices to client needs and business demands.
Scaling toward innovation – In the context of DevSecOps, application security isn’t a gatekeeper. It’s an innovator – a partner that needs to keep pace with business demands and DevOps methods. To scale accordingly, application security may have to streamline processes and adopt automated tools to lower overhead.
Creating objective criteria – To ensure the fast security decision-making that facilitates rapid time to market, application security needs to create objective security criteria, then adopt the tools to measure them.
Working proactively – With the goal of identifying potential attack targets before they become actual targets, application security should proactively hunt, surface, test, and remediate. This not only ensures that the business impact of weaknesses discovered is minimal, it also helps inject security into core business processes.
Continuously detecting and responding – In the DevSecOps model, application security needs to constantly detect, compare, correlate, and respond to threats. Moreover, detection models need to channel information to internal teams for more effective trans-enterprise responses based on real-world business goals.
Different Approaches to DevSecOps
The best approach to implement DevSecOps into your organization depends on a mix of organizational culture, tools, and goals. Some of the primary avenues organizations are taking include:
Creating a bilateral task force – Even as you’re working to facilitate better cultural and professional integration, you can still get down to actual work. A joint DevOps/AppSec task force can start addressing pressing matters right away, or tackle more basic DevSecOps issues like defining a joint set of measurements that facilitate continuous collaboration.
Training DevOps in security – Understanding comes from both sides. To help DevOps team members better understand their new security colleagues, encourage them to learn more about security. From practices to jargon – there is no shortage of online resources or conferences that can promote DevSecOps coexistence.
Adding security to your DevOps mix – Security thinking is very different from DevOps thinking. To jump-start what will hopefully become a smooth-running team effort, add security staff to your DevOps team. By breaking down the internal cultural barriers, you can smooth and expedite the integration of these two teams into a cohesive DevSecOps group.
The Bottom Line
By adopting the DevSecOps model, those 52% of worried developers can rest easy. Finding, fixing, and preventing security vulnerabilities – ensuring application security throughout the entire lifecycle, from development to production – is possible without degrading productivity or time to market.
Serverless Computing Explained - Comparing Features and Pricing to SaaS, IaaS, PaaS
When you’re thinking about hosting your app, you want it to be as hassle-free as possible.
After all, you’re on your way to create software that will transform your organization, your community – possibly the world. (No need for modesty here!)
On that path to greatness, there’s no room for hosting frustrations.
Everyone wishes for a hosting solution that makes it easy to deploy features rapidly. It should also be cost-effective, keeping your bottom line happy and freeing up resources to put towards development.
First, let’s answer the most pressing question…
Why should you care about Serverless?
Because it’s coming. The hype is mounting.
first the hype grows…
then it peaks…
then people start seeing issues and criticizing…
until finally they find ways to make it work. The tech matures.
For Serverless computing, the hype is just starting, meaning you can get in on it early.
To be fair, the Serverless computing model is not an entirely new idea – it’s 3-4 years old at least. But up until recently, it was only discussed between technical experts, developers and DevOps professionals.
Now, Serverless is starting to enter the broader IT conversation. How should you react?
At the very least, you should understand what’s going to be on everyone’s minds very soon. But more importantly, you might want to implement Serverless architecture in your project.
So to stay up to date, you should know about Serverless.
What is the Serverless model?
The road to Serverless
The name Serverless might be a little misleading. When we talk about Serverless, we’re not talking only about servers, but about the entire cloud ecosystem.
The easiest way to explain Serverless is to take a historical view.
Serverless Architecture Comparison
A long time ago in the yonder days, you dealt mostly with dedicated servers. To host your app, you had to buy a whole server which would be physically situated in a server room. The whole server was yours and you were responsible for making it function properly.
As you can imagine, that was a bit of a bother to do, especially when all you really wanted was to build your app, not spend time updating and maintaining your server(s).
As a response to that, IaaS – or Infrastructure as a Service – was born. In IaaS, the server is no longer yours – it’s the provider’s. All you have to worry about is setting up the OS, the app itself, its functions and the service. An example of an IaaS solution is AWS EC2.
But if you’re like me, that still sounds like too much ‘Ops’ in your DevOps.
So the next step is Platform as a Service – PaaS. Here, the OS falls on the side of the provider. All you have to do is create the app, while the provider worries about updating the OS and keeping it secure.
Now we get to Serverless, the next logical step.
When you use Serverless architecture for your software, you don’t have to create the whole app. Instead, you only create single functions of the app, while the app layer, the part that manages the functions, is on the side of the provider.
That means the provider handles scaling and ensures proper exchange of information between different parts of the app – so you don’t have to worry about that. In Serverless, you and your devs only care about creating functionality. And isn’t that what development should be all about?
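To make that concrete, here is a minimal sketch of what a single Serverless function looks like in Python, using the handler signature AWS Lambda expects. The function name, event fields, and response shape are illustrative, not a real deployment.

```python
import json

def handler(event, context):
    """A single, self-contained function the provider runs on demand.

    `event` carries the request data (for example, from an API gateway);
    `context` holds runtime information supplied by the provider.
    """
    # Read input from the event; default when the field is absent.
    name = event.get("name", "world")

    # Return an HTTP-style response the provider can hand back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You upload just this function; the provider decides when it runs, where it runs, and how many copies to run in parallel.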
Serverless vs Software as a Service (SaaS)
Finally, the last model on the image is SaaS, or Software as a Service. Here the whole software is on the provider’s side. As the buyer, you get the service, i.e. what the software actually does.
SaaS apps are very popular these days, and you’re probably using some of them. Think about Dropbox, Salesforce, Netflix, Google Apps, and so on – if you pay for them, you only get the service they offer.
However, we need to make a key distinction here between using an app and building an app.
From a user’s perspective, Netflix might fall under SaaS – after all, you just want to watch Stranger Things.
But when you’re building a service like Netflix, you need at least a Serverless model to build out and extend the app’s functionality. If you want more control over how the app is built and hosted, you could use PaaS or IaaS instead.
Scaling in Serverless vs other models
Now let’s consider how your costs will scale in each model.
Consider Foodpanda, a food-delivery app with a range of functionalities: you list restaurants, filter to your taste, select your dish, choose additional ingredients, and finally process your payment.
With PaaS/IaaS, you would build one app that has it all: listing, menu, and ordering.
With Serverless, you would break that up into several functionalities (or Lambdas, in AWS Lambda terms). You don’t combine them into one app, but send them separately to the provider, and the provider assembles the app.
The provider also handles scaling. If the menu function is used very often, but ordering doesn’t see that many requests, the provider can scale each function individually. So the popular menu function would get more processing power, but ordering would still have the same level.
In PaaS/IaaS, by contrast, you are responsible for configuring the app to handle the load and scale. The practical difference: to ensure proper scaling, you need DevOps personnel on your side, while in Serverless a provider such as Amazon handles all of that.
TL;DR: Serverless architecture allows you to focus on the application’s code, and not on how the code will perform on the server.
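As a sketch of that decomposition, each Foodpanda-style capability could be its own Python function, deployed and scaled independently. The function names and payloads here are hypothetical, chosen only to mirror the example above.

```python
import json

# Each function below would be deployed as a separate Lambda.
# The provider scales each one on its own: a traffic spike on the
# menu function adds instances there without touching ordering.

def list_restaurants(event, context):
    # Heavily used: the provider runs many copies in parallel.
    return {"statusCode": 200,
            "body": json.dumps(["Pasta Place", "Sushi Spot"])}

def get_menu(event, context):
    # Scaled independently of the listing function.
    restaurant = event.get("restaurant", "unknown")
    return {"statusCode": 200,
            "body": json.dumps({"restaurant": restaurant,
                                "dishes": ["pad thai", "green curry"]})}

def place_order(event, context):
    # Rarely used: stays at a low instance count, costing nothing while idle.
    return {"statusCode": 202,
            "body": json.dumps({"status": "order accepted"})}
```

The point of the split is operational, not stylistic: because the provider sees three separate functions rather than one app, it can allocate capacity to each one according to its own traffic.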
Pricing: How Serverless can save you money
Comparison of Serverless Architecture Pricing
The final reason to consider Serverless is its flexible pricing model.
In IaaS/PaaS, you pay for the time when your app is working and available to users. If you own Foodpanda and want it to be available 24/7, then you pay for each hour when it’s online and waiting for connections from users. Crucially, you keep paying regardless of whether the server/app is in use or not. To scale, you have to add new virtual machines (IaaS) or create new app instances (PaaS).
For Foodpanda, that is fine; the site is probably used by someone every minute of every day.
But what if your app isn’t Chairman of the Popularity Club yet?
In Serverless, if it ever happens that Foodpanda is used by no one for half an hour – you don’t pay for that. More realistically, you could have an office app that employees mostly use during business hours. That app would ‘sit around bored’ all night long, but it should still be available for that one employee who desperately needs to check something at 2 AM. For such cases, Serverless is ideal because you only pay for how much your app is actually used.
What do I mean by ‘how much the app is used’? With Serverless, you pay for the number of requests the app receives and for the milliseconds of CPU and RAM work it performs.
Let’s use Amazon’s AWS Lambda as an example for pricing.
Lambda is currently the most popular Serverless solution. Among other runtimes, it supports Python 2.7 and 3.6.
So what’s the pricing for AWS Lambda? Here’s an overview straight from the official AWS Lambda page:
AWS Lambda Pricing
Duration is rounded up to the nearest 100 ms
The first 1 000 000 requests each month are free
After that, you pay $0.0000006 per request
The first 400 000 GB-seconds are free
After that, you pay $0.00005001 per GB-second
Pay special attention to the ‘free tier’. Using Lambda, your first 1 000 000 (that’s one million) requests and the first 400 000 GB-seconds are completely free. After that, each request and every GB-second used by your app is counted – and you pay only for that.
This limit is reset every month. Quite generous, isn’t it?
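The billing rules above are simple enough to express in a few lines of code. Here is a small Python sketch of the free tier plus per-unit charges, using the rates quoted above; actual AWS pricing varies by region and changes over time, so treat this as an illustration of the model, not a billing tool.

```python
def lambda_monthly_cost(requests, gb_seconds):
    """Estimate a monthly Lambda bill from the rates quoted above."""
    FREE_REQUESTS = 1_000_000
    FREE_GB_SECONDS = 400_000
    PRICE_PER_REQUEST = 0.0000006
    PRICE_PER_GB_SECOND = 0.00005001

    # Only usage beyond the free tier is billed.
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * PRICE_PER_REQUEST
            + billable_gb_seconds * PRICE_PER_GB_SECOND)

# Example: 5 million requests, each taking ~100 ms at 128 MB (0.125 GB).
gb_seconds = 5_000_000 * 0.1 * 0.125   # 62,500 GB-seconds: inside the free tier
print(round(lambda_monthly_cost(5_000_000, gb_seconds), 2))  # 2.4
```

In this example the compute time stays inside the free tier entirely, so the whole bill is the 4 million requests beyond the free million, at $0.0000006 each.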
Comparing costs between Lambda and EC2 (IaaS)
Of course, Serverless isn’t a solution for every situation. In some cases, an IaaS solution like EC2 could serve you better. That depends on the amount of attention your app is getting.
What’s the breakpoint for Serverless vs IaaS? Take a look at this cost analysis: https://www.trek10.com/blog/lambda-cost/
The most important part is the far right: if your app gets over 81.9 requests per second (24/7), then IaaS becomes the preferred solution. At anything less than that, Lambda is more cost-effective.
Let’s do the math on that. Take the top row, in which each request takes 100 ms and 128 MB RAM to process. Every day, you need an average of 81.9 requests per second, times 60 seconds in a minute, times 60 minutes in an hour, times 24 hours…
81.9 * 60 * 60 * 24 = 7 076 160 daily requests
For those assumptions, your app needs over 7 million daily requests for Serverless to be more expensive than IaaS.
In other words, your app needs to be really, REALLY popular for Lambda to be a bad choice. Even if the average user usually issues multiple requests each visit, you’d still need hundreds of thousands of users to hit that number.
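You can sanity-check those numbers with a few lines of Python. The 20-requests-per-visit figure below is a made-up assumption purely for illustration; the break-even rate comes from the analysis linked above.

```python
# Break-even rate from the cost analysis: 81.9 requests/second, 24/7.
daily_requests = 81.9 * 60 * 60 * 24
print(int(daily_requests))        # 7076160 daily requests

# Hypothetically assuming ~20 requests per user visit:
print(int(daily_requests / 20))   # 353808 daily visits
```

Even under that generous per-visit assumption, you need well over a third of a million visits every single day before IaaS starts winning on price.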
Final thoughts on Serverless
So what’s the takeaway? Here’s why Serverless is worth paying attention to:
The hype is growing. Serverless will most likely gain popularity in the coming years. It’s worth considering this option sooner than the competition.
Focus only on functionalities. With Serverless, you can build individual functions of the app and let the provider do the work of combining and hosting them.
Smoother scaling. Instead of creating additional virtual machines or app instances, Serverless allows you to scale on a function-by-function basis.
‘Pay as you go’. Instead of paying for idle servers, with Serverless you only spend as much as your users actually use the app.
I hope this helps you understand the opportunities that Serverless offers.
Serverless May Kill Containers
Kubernetes, the darling of the container world, seems set to dominate the next decade of container orchestration. That is, if containers last that long.
While it seems obvious that containers, the heir apparent to virtual machines, should have a long shelf life, the serverless boom may actually serve to cut it short. Though serverless offerings from AWS and Microsoft are built on the backs of containers, they eliminate the server metaphor entirely (and, hence, the need to containerize that server).
According to some, including Expedia’s vice president of cloud, Subbu Allamaraju, serverless frameworks like AWS Lambda are improving at such a torrid pace that they may soon displace container wunderkind like Kubernetes.
Killing Your Servers, and Not So Softly
As hawtness goes, it’s hard to find anything bigger than Kubernetes. Ranked in the top .01% of all GitHub projects, and pulling over 1,500 contributors, Kubernetes is on fire. Given its Google pedigree and the promise of helping run containers at scale, it’s not hard to see why.
And yet…the serverless phenomenon is already putting containers under fire, and just a few short years after Docker popularized them for mainstream enterprises.
Why? Well, according to Simon Wardley, industry pundit and advisor to the Leading Edge Forum, it’s because serverless changes…everything: it’s an “entirely new set of emerging practices that’ll change the way we build business.”
Oh, that’s all?
This wouldn’t trouble containers much if serverless were some distant possibility for enterprise infrastructure, but it’s happening now, and fast. Indeed, serverless’ potential comes with casualties, as Allamaraju posits: “Serverless patterns are pulling the rug from underneath container cluster managers faster than the latter becoming industrial grade.” If this seems a bit unbelievable, that’s because it is – if you’re thinking of IT circa 2010 or earlier.
Cloud, however, has dramatically accelerated things. Responding to Allamaraju’s claim, Amplify Partners’ Mike Dauber commented, “It’s incredible how fast we’re collectively moving here. Container management is NOT legacy tech…”. No. By most enterprise standards, it’s still the cutting edge. This must make serverless the diamond blade cutting the cutting edge. Yet this pace of application development innovation is only going to increase.
Will enterprises be able to keep up?
Can’t Get Here From Here?
Serverless frameworks like AWS Lambda may be the future, but it’s unclear whether enterprises are ready to embrace them yet. Google’s Alan Ho, for example, believes that “From a programming model and a cost model, AWS Lambda is the future – despite some of the tooling limitations.” Even so, “Docker…is an evolutionary step of ‘virtualization’ that we’ve been seeing for the last 10 years,” while “AWS Lambda is a step-function.” Not everyone is ready to break out of the evolutionary IT track.
When I talked with Server Density CEO David Mytton, he confirmed this supposition:
“The migration path for VMs to containers is much easier than VMs to serverless. Serverless is basically starting from scratch and that’s a huge barrier for existing workloads. The question is whether serverless becomes the starting point for new applications. The lack of proper tools around development, builds, monitoring and testing is a real barrier to that right now.”
Not only is serverless a more difficult migration path, but it also requires a fundamental paradigm shift in how we think about infrastructure, Begin founder and CEO Brian Leroux advised me. You have to get beyond the server metaphor, he said: “As soon as you take that metaphorical leap, you get a huge degree of isolation and in that isolation you get more durability.”
As much as the learning curve for serverless can be steep, Leroux stressed, Kubernetes and containers aren’t easy, either. The payoff for making that serverless shift, however, is huge: “In Kubernetes you can compose a microservices architecture but you have to take care of the plumbing yourself. Lambda just takes care of all of that for you. With Lambda you don’t think about how your application is going to scale.” AWS takes care of all that bother for the developer.
When I asked how long it took Leroux’s development team to get comfortable with AWS Lambda, he suggested that it took a year for the team to really get comfortable as it figured out “Amazon-isms.” Microsoft Azure, however, second to the serverless party, watched AWS’ successes and failures, he indicated, and has made it much easier and faster to get up and running with serverless. AWS has since caught back up because, he told me, “The pace of innovation is stunning for Azure and AWS.”
Google, perhaps because of its Kubernetes heritage, has been slower to establish itself as a credible serverless player. This doesn’t bode well for Google’s Cloud, though its Kubernetes-to-Google-Cloud play has been a fantastic stroke. One reason that AWS Lambda is so good, Mytton told me, is that it’s likely the heart of the Amazon Echo. In other words, “AWS is productizing their own usage of it, which is why it’s already pretty good.” This is also why Google Cloud functions remains far behind, he reasons: “I’m unsure what Google themselves might use it for, as Kubernetes is heavily used inside Google as Borg.”
The more serverless bypasses containers entirely, however, the more Google’s cloud will start to look retro.
Enterprises aren’t going to dump their new container initiatives overnight, of course. Not all applications are an easy match for serverless, for example. Mytton told me that event-based apps, e.g., Internet of Things-type apps, are particularly well-suited to serverless, though not exclusively so.
It’s also the case that the shift to serverless will be an easier decision for new, greenfield applications. For enterprises simply hoping to modernize their monolithic, old-school VM-based applications, containers and Kubernetes will play a key role for some time.
At least, until something even newer/better/cheaper/faster comes along. At the current pace of enterprise infrastructure innovation, set your alarms for…next year.
I’m joking, of course, but a quick look at how the enterprise cloud market has changed is revealing. As Allamaraju pointed out to me, “Platforms like OpenStack had 6-7 years for raise, plateau and slow down.” But “Container cluster managers may not have those many years,” he goes on.
David Linthicum preaches truth when he opines, “The feature gap between public and private clouds has grown so wide that the private cloud demos that I attend are laughable considering the subsystems that enterprises need, such as security, governance, databases, IoT, and management, versus what private clouds actually deliver.” Just a few years ago, however, all the capabilities were in private datacenters. Public clouds were cheap and convenient but feature-light.
Today, the opposite is true.
As Allamaraju says, innovation in serverless is outpacing the maturing of Kubernetes and other container management tools. This bodes well for serverless, and not so well for containers. Containers may end up being the hottest trend in enterprise computing, and yet not be able to sustain that heat for very long. Not when developers are ultimately driven by which tools will deliver the most productivity for the most convenience. Serverless, again, provides a step function in that equation. Containers are merely evolutionary.
So, yes, Kubernetes, important and cool as it is today, is very much at risk. A year ago that would have been heresy. A year from now that might be received wisdom.