Uncategorized

DevSecOps Will Ensure That Time-to-Market and Security Don’t Clash

By Rani Osnat
According to a recent survey by Veracode, 52% of developers worry that application security will delay development and threaten deadlines. That’s a huge percentage, especially considering how crucial finding, fixing, and preventing security vulnerabilities is to any development effort.
Any way you look at it, ensuring that quality code is also secure code is complex. Traditionally, security vulnerabilities surfaced during design, development, deployment, upgrade, or maintenance were mitigated via:

Design review – creating a threat model of the application, usually together with a spec or design document, even before the code is written.
Tooling – automated tools can lower human overhead, but you need to beware of false positives.
Blackbox audit – security testing of the running application from the outside, without access to its source code.
Whitebox review – manual review of the source code by a qualified engineer.

Each of these techniques has its advantages, and each involves varying levels of time, effort, and cost – especially time. It’s exactly these types of before-the-fact or after-the-fact security reviews that developers fear – especially when time to market can make or break a project.

That’s Why They Invented DevSecOps
When it became clear that a streamlined yet secure build-and-ship process was a must-have, AppSec and/or InfoSec teams began taking a hard look at how and when security enters the development process. The idea is to integrate security measures at numerous points in the DevOps workflow, but to do so in a way that is as transparent as possible to developers. This keeps DevOps teamwork, agility, and speed intact – while still ensuring application security throughout the entire lifecycle, from development to production.

Transforming to a DevSecOps model isn’t simple, though. First off, it requires a change in the way your organization works and thinks. Namely, DevSecOps requires:

Increased focus on customers – app security experts aren’t necessarily used to thinking about customers. Rather, they’re rightfully used to seeking vulnerabilities and mitigating threats. With the advent of DevSecOps, AppSec needs to adapt security programs and practices to client needs and business demands.
Scaling toward innovation – In the context of DevSecOps, application security isn’t a gatekeeper. It’s an innovator – a partner that needs to keep pace with business demands and DevOps methods. To scale accordingly, application security may have to streamline processes and adopt automated tools to lower overhead.
Creating objective criteria – To ensure the fast security decision-making that facilitates rapid time to market, application security needs to create objective security criteria, then adopt the tools to measure them.
Working proactively – With the goal of identifying potential attack targets before they become actual targets, application security should proactively hunt, surface, test, and remediate. This not only ensures that the business impact of weaknesses discovered is minimal, it also helps inject security into core business processes.
Detecting and responding continuously – In the DevSecOps model, application security needs to constantly detect, compare, correlate, and respond to threats. Moreover, detection models need to channel information to internal teams for more effective trans-enterprise responses based on real-world business goals.
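What an "objective security criterion" enforced by automated tooling can look like in practice: a small CI gate that fails the build when a scan report exceeds an agreed threshold. This is a minimal sketch; the report format and threshold below are hypothetical, since every real scanner has its own output schema.

```python
import json

# The agreed-upon, measurable criterion: no high-severity findings ship.
MAX_HIGH_SEVERITY = 0

def passes_security_gate(report_json, max_high=MAX_HIGH_SEVERITY):
    """Return True if the scan report meets the objective criterion."""
    findings = json.loads(report_json)["findings"]
    high = sum(1 for f in findings if f["severity"] == "high")
    return high <= max_high

clean = '{"findings": [{"id": "X-1", "severity": "medium"}]}'
dirty = '{"findings": [{"id": "X-2", "severity": "high"}]}'
print(passes_security_gate(clean), passes_security_gate(dirty))  # True False
```

Because the criterion is objective and machine-checkable, the decision to pass or block a build needs no meeting and adds no delay – which is exactly the point.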

Different Approaches to DevSecOps
The best approach to implement DevSecOps into your organization depends on a mix of organizational culture, tools, and goals. Some of the primary avenues organizations are taking include:

Creating a bilateral task force – Even as you’re working to facilitate better cultural and professional integration, you can still get down to actual work. A joint DevOps/AppSec task force can start addressing pressing matters right away, or tackle more basic DevSecOps issues like defining a joint set of measurements that facilitate continuous collaboration.
Training DevOps in security – Understanding comes from both sides. To help DevOps team members better understand their new security colleagues, encourage them to learn more about security. From practices to jargon – there is no shortage of online resources or conferences that can promote DevSecOps coexistence.
Adding security to your DevOps mix – Security thinking is very different from DevOps thinking. To jump-start what will hopefully become a smooth-running team effort, add security staff to your DevOps team. By breaking down the internal cultural barriers, you can smooth and expedite the integration of these two teams into a cohesive DevSecOps group.

The Bottom Line
By adopting the DevSecOps model, those 52% of worried developers can rest easy. Finding, fixing, and preventing security vulnerabilities – ensuring application security throughout the entire lifecycle, from development to production – is possible without degrading productivity or time to market.

 


Serverless Computing Explained – Comparing Features and Pricing to SaaS, IaaS, PaaS


From stxnext.com

When you’re thinking about hosting your app, you want it to be as hassle-free as possible.

After all, you’re on your way to create software that will transform your organization, your community – possibly the world. (No need for modesty here!)

On that path to greatness, there’s no room for hosting frustrations.

Everyone wishes for a hosting solution that makes it easy to deploy features rapidly. It should also be cost-effective, keeping your bottom line happy and freeing up resources to put towards development.

First, let’s answer the most pressing question…

Why should you care about Serverless?
Because it’s coming. The hype is mounting, and Serverless is following the familiar hype cycle of every new technology:

First the hype grows…
then it peaks…
then people start seeing issues and criticizing…
until finally they find ways to make it work. The tech matures.

For Serverless computing, the hype is just starting, meaning you can get in on it early.

To be fair, the Serverless computing model is not an entirely new idea – it’s at least 3-4 years old. But until recently, it was discussed only among technical experts, developers, and DevOps professionals.

Now, Serverless is starting to enter the broader IT conversation. How should you react?

At the very least, you should understand what’s going to be on everyone’s minds very soon. But more importantly, you might want to implement Serverless architecture in your project.

So to stay up to date, you should know about Serverless.

What is the Serverless model?

The road to Serverless
The name Serverless might be a little misleading. When we talk about Serverless, we’re not talking only about servers, but about the entire cloud ecosystem.

The easiest way to explain Serverless is to take a historical view.

Serverless Architecture Comparison
A long time ago in the yonder days, you dealt mostly with dedicated servers. To host your app, you had to buy a whole server which would be physically situated in a server room. The whole server was yours and you were responsible for making it function properly.

As you can imagine, that was a bit of a bother to do, especially when all you really wanted was to build your app, not spend time updating and maintaining your server(s).

As a response to that, IaaS – or Infrastructure as a Service – was born. In IaaS, the server is no longer yours – it’s the provider’s. All you have to worry about is setting up the OS, the app itself, its functions and the service. An example of an IaaS solution is AWS EC2.

But if you’re like me, that still sounds like too much ‘Ops’ in your DevOps.

So the next step is Platform as a Service – PaaS. Here, the OS falls on the side of the provider. All you have to do is create the app, while the provider worries about updating the OS and keeping it secure.

Now we get to Serverless, the next logical step.

When you use Serverless architecture for your software, you don’t have to create the whole app. Instead, you only create single functions of the app, while the app layer, the part that manages the functions, is on the side of the provider.

That means the provider handles scaling and ensures proper exchange of information between different parts of the app – so you don’t have to worry about that. In Serverless, you and your devs only care about creating functionality. And isn’t that what development should be all about?
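To make that concrete, here is a minimal sketch of what "creating a single function" looks like. The `handler(event, context)` signature follows AWS Lambda's Python convention; the menu data and event shape are purely illustrative.

```python
# One self-contained functionality, shipped on its own: look up a
# restaurant's menu. The provider invokes `handler` per request; there
# is no app server for you to run or manage.
MENU = {"pizzeria-42": ["margherita", "quattro stagioni"]}

def handler(event, context):
    dishes = MENU.get(event.get("restaurant"), [])
    return {"statusCode": 200 if dishes else 404, "body": dishes}

print(handler({"restaurant": "pizzeria-42"}, None))
```

The ordering and payment functionalities would be separate functions just like this one, and the provider's app layer wires them together.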

Serverless vs Software as a Service (SaaS)
Finally, the last model in the comparison is SaaS, or Software as a Service. Here the whole software is on the provider’s side. As the buyer, you get the service, i.e. what the software actually does.

SaaS apps are very popular these days, and you’re probably using some of them. Think about Dropbox, Salesforce, Netflix, Google Apps, and so on – if you pay for them, you only get the service they offer.

However, we need to make a key distinction here between using an app and building an app.

From a user’s perspective, Netflix might fall under SaaS – after all, you just want to watch Stranger Things.

But when you’re building a service like Netflix, you need to use at least a Serverless model to add more functionalities to the app. If you want more control over how the app is built and hosted, you could use PaaS or IaaS instead.

Scaling in Serverless vs other models
Now let’s consider how your costs will scale in each model.

Take a food-ordering app like Foodpanda. It has a range of functionalities: you list restaurants, filter to your taste, select your dish, choose additional ingredients, and finally process your payment.

With PaaS/IaaS, you would build one app that has it all: listing, menu, and ordering.

With Serverless, you would break that up into several functionalities (‘Lambdas’, in AWS Lambda terms). You don’t combine them into one app, but send them separately to the provider, and the provider builds the app.

The provider also handles scaling. If the menu function is used very often, but ordering doesn’t see that many requests, the provider can scale each function individually. So the popular menu function would get more processing power, but ordering would still have the same level.

Whereas in PaaS/IaaS, you are responsible for configuring the app to handle the load and be scalable. The difference is that to ensure proper scaling, you need DevOps personnel on your side while in Serverless, a provider such as Amazon handles all of that.

TL;DR: Serverless architecture allows you to focus on the application’s code, and not on how the code will perform on the server.

Pricing: How Serverless can save you money
Comparison of Serverless Architecture Pricing
The final reason to consider Serverless is its flexible pricing model.

In IaaS/PaaS, you pay for the time when your app is working and available to users. If you own Foodpanda and want it to be available 24/7, then you pay for each hour it’s online and waiting for connections from users. Crucially, you keep paying regardless of whether the server/app is in use or not. To scale, you have to add new virtual machines (IaaS) or create new app instances (PaaS).

For Foodpanda, that is fine; the site is probably used by someone every minute of every day.

But what if your app isn’t Chairman of the Popularity Club yet?

In Serverless, if it ever happens that Foodpanda is used by no one for half an hour – you don’t pay for that. More realistically, you could have an office app that employees mostly use during business hours. It would ‘sit around bored’ all night long, but it should still be available for that one employee who desperately needs to check something at 2 AM. For such cases, Serverless is ideal, because you only pay for how much your app is actually used.

What do I mean by ‘how much the app is used’? With Serverless, you pay for the number of requests the app gets and for the compute your functions consume, metered in GB-seconds (memory allocated multiplied by execution time).

AWS Lambda
Let’s use Amazon’s AWS Lambda as an example for pricing.

Lambda is currently the most popular Serverless solution, and supports Python 2.7 and 3.6 among its runtimes.

So what’s the pricing for AWS Lambda? Here’s an overview straight from the official AWS Lambda page:

AWS Lambda Pricing
Duration is rounded up to the nearest 100 ms

The first 1 000 000 requests each month are free
After that, you pay $0.0000002 per request ($0.20 per million)
The first 400 000 GB-seconds are free
After that, you pay $0.0000166667 per GB-second

Pay special attention to the ‘free tier’. Using Lambda, your first 1 000 000 (that’s one million) requests and the first 400 000 GB-seconds are completely free. After that, each request and every GB-second used by your app is counted – and you pay only for that.

This limit is reset every month. Quite generous, isn’t it?
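To see how the free tier and metering interact, you can compute a monthly bill. The sketch below uses AWS’s published rates at the time ($0.20 per million requests, $0.0000166667 per GB-second); the traffic figures in the example are made up.

```python
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_REQUEST = 0.0000002        # $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667

def lambda_monthly_cost(requests, avg_duration_ms, memory_mb):
    """Estimate a monthly AWS Lambda bill after the free tier."""
    billed_ms = -(-avg_duration_ms // 100) * 100   # round up to 100 ms
    gb_seconds = requests * (billed_ms / 1000.0) * (memory_mb / 1024.0)
    request_cost = max(requests - FREE_REQUESTS, 0) * PRICE_PER_REQUEST
    duration_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND
    return request_cost + duration_cost

# 3 million requests at 120 ms / 128 MB: duration stays inside the free
# tier, so you only pay for the 2 million requests past the free million.
print(round(lambda_monthly_cost(3_000_000, 120, 128), 2))  # 0.4
```

Forty cents a month for three million requests gives a feel for how far the free tier stretches for a small app.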

Comparing costs between Lambda and EC2 (IaaS)
Of course, Serverless isn’t a solution for every situation. In some cases, an IaaS solution like EC2 could serve you better. That depends on the amount of traffic your app is getting.

What’s the breakpoint for Serverless vs IaaS? Take a look at this cost analysis: https://www.trek10.com/blog/lambda-cost/

The most important part is the far right: if your app gets over 81.9 requests per second (24/7) then IaaS becomes the preferred solution. If it’s anything less than that, Lambda is more cost-effective.

Let’s do the math on that. Take the top row, in which each request takes 100 ms and 128 MB RAM to process. Every day, you need an average of 81.9 requests per second, times 60 seconds in a minute, times 60 minutes in an hour, times 24 hours…

81.9 * 60 * 60 * 24 = 7 076 160 daily requests

For those assumptions, your app needs over 7 million daily requests for Serverless to be more expensive than IaaS.

In other words, your app needs to be really, REALLY popular for Lambda to be a bad choice. Even if the average user usually issues multiple requests each visit, you’d still need hundreds of thousands of users to hit that number.

Final thoughts on Serverless
So what’s the takeaway? Here’s why Serverless is worth paying attention to:

The hype is growing. Serverless will most likely gain popularity in the coming years. It’s worth considering this option sooner than the competition.

Focus only on functionalities. With Serverless, you can build individual functions of the app and let the provider do the work of combining and hosting them.

Smoother scaling. Instead of creating additional virtual machines or app instances, Serverless allows you to scale on a function-by-function basis.

‘Pay as you go’. Instead of paying for idle servers, with Serverless you only spend as much as your users actually use the app.

Hope this helps you get an understanding of the opportunities that Serverless grants you.

 


Serverless May Kill Containers


Kubernetes, the darling of the container world, seems set to dominate the next decade of container orchestration. That is, if containers last that long.

While it seems obvious that containers, the heir apparent to virtual machines, should have a long shelf life, the serverless boom may actually serve to cut it short. Though serverless offerings from AWS and Microsoft are built on the backs of containers, they eliminate the server metaphor entirely (and, hence, the need to containerize that server).

According to some, including Expedia’s vice president of cloud, Subbu Allamaraju, serverless frameworks like AWS Lambda are improving at such a torrid pace that they may soon displace container wunderkinds like Kubernetes.


Killing Your Servers, and Not So Softly

As hawtness goes, it’s hard to find anything bigger than Kubernetes. Ranked in the top .01% of all GitHub projects, and pulling over 1,500 contributors, Kubernetes is on fire. Given its Google pedigree and the promise of helping run containers at scale, it’s not hard to see why.


And yet…the serverless phenomenon is already putting containers under fire, and just a few short years after Docker popularized them for mainstream enterprises.

Why? Well, according to Simon Wardley, industry pundit and advisor to the Leading Edge Forum, it’s because serverless changes…everything: it’s an “entirely new set of emerging practices that’ll change the way we build business.”

Oh, that’s all?

This wouldn’t trouble containers much if serverless were some distant possibility for enterprise infrastructure, but it’s happening now, and fast. Indeed, serverless’ potential comes with casualties, as Allamaraju posits: “Serverless patterns are pulling the rug from underneath container cluster managers faster than the latter [are] becoming industrial grade.” If this seems a bit unbelievable, that’s because it is – if you’re thinking of IT circa 2010 or earlier.

Cloud, however, has dramatically accelerated things. Responding to Allamaraju’s claim, Amplify Partners’ Mike Dauber commented, “It’s incredible how fast we’re collectively moving here. Container management is NOT legacy tech…”. No. By most enterprise standards, it’s still the cutting edge. This must make serverless the diamond blade cutting the cutting edge. Yet this pace of application development innovation is only going to increase.

Will enterprises be able to keep up?

Can’t Get Here From Here?

Serverless frameworks like AWS Lambda may be the future, but it’s unclear whether enterprises are ready to embrace them yet. Google’s Alan Ho, for example, believes that “From a programming model and a cost model, AWS Lambda is the future – despite some of the tooling limitations.” Even so, “Docker…is an evolutionary step of ‘virtualization’ that we’ve been seeing for the last 10 years,” while “AWS Lambda is a step-function.” Not everyone is ready to break out of the evolutionary IT track.

Talking with Server Density CEO David Mytton, he confirmed this supposition:

“The migration path for VMs to containers is much easier than VMs to serverless. Serverless is basically starting from scratch and that’s a huge barrier for existing workloads. The question is whether serverless becomes the starting point for new applications. The lack of proper tools around development, builds, monitoring and testing is a real barrier to that right now.”

Not only is serverless a more difficult migration path, but it also requires a fundamental paradigm shift in how we think about infrastructure, Begin founder and CEO Brian Leroux advised me. You have to get beyond the server metaphor, he said “As soon as you take that metaphorical leap, you get a huge degree of isolation and in that isolation you get more durability.”

As much as the learning curve for serverless can be steep, Leroux stressed, Kubernetes and containers aren’t easy, either. The payoff for making that serverless shift, however, is huge: “In Kubernetes you can compose a microservices architecture but you have to take care of the plumbing yourself. Lambda just takes care of all of that for you. With Lambda you don’t think about how your application is going to scale.” AWS takes care of all that bother for the developer.

When I asked how long it took Leroux’s development team to get comfortable with AWS Lambda, he suggested that it took a year for the team to really settle in as they figured out “Amazon-isms.” Microsoft Azure, however, second to the serverless party, watched AWS’ successes and failures, he indicated, and has made it much easier and faster to get up and running with serverless. AWS has since caught back up because, he told me, “The pace of innovation is stunning for Azure and AWS.”

Google, perhaps because of its Kubernetes heritage, has been slower to establish itself as a credible serverless player. This doesn’t bode well for Google’s Cloud, though its Kubernetes-to-Google-Cloud play has been a fantastic stroke. One reason that AWS Lambda is so good, Mytton told me, is that it’s likely the heart of the Amazon Echo. In other words, “AWS is productizing their own usage of it, which is why it’s already pretty good.” This is also why Google Cloud Functions remains far behind, he reasons: “I’m unsure what Google themselves might use it for, as Kubernetes is heavily used inside Google as Borg.”

As serverless increasingly bypasses containers entirely, however, Google’s cloud will start to look retro.

In-Between Days

Enterprises aren’t going to dump their new container initiatives overnight, of course. Not all applications are an easy match for serverless, for example. Mytton told me that event-based apps, e.g., Internet of Things-type apps, are particularly well-suited to serverless, though not exclusively so.

It’s also the case that the shift to serverless will be an easier decision for new, greenfield applications. For enterprises simply hoping to modernize their monolithic, old-school VM-based applications, containers and Kubernetes will play a key role for some time.

At least, until something even newer/better/cheaper/faster comes along. At the current pace of enterprise infrastructure innovation, set your alarms for…next year.

I’m joking, of course, but a quick look at how the enterprise cloud market has changed is revealing. As Allamaraju pointed out to me, “Platforms like OpenStack had 6-7 years for raise, plateau and slow down.” But “Container cluster managers may not have those many years,” he goes on.

David Linthicum preaches truth when he opines, “The feature gap between public and private clouds has grown so wide that the private cloud demos that I attend are laughable considering the subsystems that enterprises need, such as security, governance, databases, IoT, and management, versus what private clouds actually deliver.” Just a few years ago, however, all the capabilities were in private datacenters. Public clouds were cheap and convenient but feature-light.

Today, the opposite is true.

As Allamaraju says, innovation in serverless is outpacing the maturing of Kubernetes and other container management tools. This bodes well for serverless, and not so well for containers. Containers may end up being the hottest trend in enterprise computing, and yet not be able to sustain that heat for very long. Not when developers are ultimately driven by which tools will deliver the most productivity for the most convenience. Serverless, again, provides a step function in that equation. Containers are merely evolutionary.

So, yes, Kubernetes, important and cool as it is today, is very much at risk. A year ago that would have been heresy. A year from now that might be received wisdom.


Understanding Serverless Cloud and Clear


Serverless is considered the successor to containers. And while it’s heavily promoted as the next great thing, it’s not the best fit for every use case. Understanding the pitfalls and disadvantages of serverless will make it much easier to identify use cases that are a good fit. This post offers some technology perspectives on the maturity of serverless today.

First, note how we use the word serverless here. Serverless is a combination of “Function as a Service” (FaaS) and “Platform as a Service” (PaaS). Namely, those categories of cloud services where you don’t know what the servers look like. For example, RDS or Beanstalk are “servers managed by AWS,” where you still see the context of server(s). DynamoDB and S3 are just some kind of NoSQL and storage solution with an API, where you do not see the servers. Not seeing the servers means there’s no provisioning, hardening or maintenance involved, hence they are server “less.”

A serverless platform works with “events.” Examples of events are the action behind a button on a website, backend processing of a mobile app, or the moment a picture is being uploaded to the cloud and the storage service triggers the function.
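The picture-upload example can be sketched as an event-driven function in the AWS Lambda Python style. The event layout below is a simplified version of the shape of an S3 notification, and the thumbnail job is purely illustrative.

```python
def on_picture_upload(event, context):
    """Triggered by the storage service when a picture lands in a bucket."""
    jobs = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        jobs.append(f"thumbnail queued for {bucket}/{key}")
    return jobs

event = {"Records": [{"s3": {"bucket": {"name": "photos"},
                             "object": {"key": "cat.jpg"}}}]}
print(on_picture_upload(event, None))  # ['thumbnail queued for photos/cat.jpg']
```

Note that the function never touches a server: the storage service produces the event, the platform routes it, and your code only expresses what should happen in response.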

Performance

All services involved in a serverless architecture can scale virtually infinitely. When something triggers a function, let’s say, 1000 times in one second, all 1000 executions run in parallel, so they all finish about as quickly as a single one would. In the old container world, you have to provision and tune enough container instances to handle that burst of instant requests. Sounds like serverless is going to win this performance challenge, right? Not always. Sometimes the serverless container holding your function is not running and needs to start first. This adds overhead to the total execution time of these “cold” functions, which is undesirable if you want to ensure that your users (or “things”) always get a fast response. To get predictable response times, you have to provision a container platform, leaving you to wonder if it’s worth the cost – not just for running the containers, but also the related investments in time, complexity, and risk.
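One common way to soften the cold-start penalty is to do expensive initialization at module scope, so it runs once per container rather than on every invocation. A minimal sketch, where `expensive_setup` is a stand-in for real SDK clients, configuration loading, or connection pools:

```python
import time

def expensive_setup():
    """Stand-in for slow initialization (SDK clients, config, connections)."""
    time.sleep(0.05)
    return {"connected": True}

# Module-scope work runs once, during the cold start of the container.
CLIENT = expensive_setup()

def handler(event, context):
    # Warm invocations reuse CLIENT and skip the setup cost entirely.
    return {"ready": CLIENT["connected"], "echo": event}

print(handler({"ping": 1}, None))  # {'ready': True, 'echo': {'ping': 1}}
```

This doesn't eliminate cold starts, but it confines their cost to the first invocation of each container instead of every request.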

Cost Predictability

With container platforms or servers, you’re billed per running hour or, in exceptional cases, per minute or second. Even with a very predictable and steady workload, you might utilize only around 70% of what you pay for, which is still a lot of waste. At the same time, you always need to over-provision because of the possibility of sudden spikes in traffic. One option would be to increase utilization, which would lower costs but also raise risk. With serverless, in contrast, you pay per code execution, metered to the nearest 100 milliseconds, which is much more granular and close to 100% utilization. This makes serverless a great choice for traffic that is unpredictable and very spiky, because you pay only for what you use.

Security

You would expect cloud services to be fully secure. Unfortunately, this isn’t yet the case for functions. With most cloud services, the “attack surface” is limited and therefore possible to fully protect. With serverless, however, the surface is broad, and functions run on shared servers with less isolation than, for instance, EC2 or DynamoDB. For that reason, information such as credit card details is not permitted in functions. That doesn’t mean serverless is insecure, but it does mean it can’t pass a strict and required audit…yet. Given the high expectations for serverless, security will likely improve, so it’s good to get some experience with it now so you’re ready for the future.

Start with backend systems that hold less sensitive data, like gaming progress, shopping lists, or analytics. Or process grocery orders, but outsource the payment to a provider. The point: if data in memory leaks to other users of the same underlying server, an exposed credit card number can be exploited, while an opaque record like “id: 3h7L8r bought tomatoes” cannot.

Reliability

Closely related to security is the availability of services. A relatively “slow” service that can’t go down is generally better than a service that is fast but unavailable. Often in a Disaster Recovery setup, all on-premises servers are replicated to the cloud, which adds a lot of complexity. In most cases, it’s better to turn off your on-premises servers and go all-in on cloud. If you’re not ready for that step, you can also use serverless as a failover platform to keep particular functionalities highly available – not all functionalities, of course, but those that are mission-critical, or that can store data temporarily and process it in a batch after recovery. It’s less costly and very reliable.

Cloud and Clear

Until recently, it was quite tricky to launch and update a live function. More and more frameworks, like Serverless.com and SAM, are solving the main issues. Combined with automated CI/CD, it’s easy to deploy and test your serverless platform in a secured environment, ensuring that deployment to production succeeds every time and without downtime. With CloudFormation or Terraform you “develop” the cloud-native services and configure functions. With programming languages like Node.js, Python, Java, or C#, you develop the functions themselves. Even logging and monitoring have become really mature over the last few months. Together, the source gives you a “cloud and clear” overview of what’s under the hood of your serverless application: how it’s provisioned, built, deployed, tested, and monitored, and how it runs.
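For a sense of what “developing” that configuration looks like, here is a minimal service definition in the style of the Serverless framework. The service name, function name, and handler path are all illustrative, not taken from a real project.

```yaml
service: orders-api            # illustrative service name

provider:
  name: aws
  runtime: python3.6

functions:
  createOrder:                 # one function, deployed and wired by the framework
    handler: handler.create_order
    events:
      - http:                  # exposes the function behind an HTTP endpoint
          path: orders
          method: post
```

From a file like this, the framework provisions the underlying cloud resources and deploys the function source in one command, which is what makes repeatable CI/CD for functions practical.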

Conclusion

AWS started in 2014 with the launch of Lambda, and although this post is mainly about AWS, Google and Microsoft are investing heavily in their functions, and in the serverless approach as well. Over the last couple of months, they’ve shown very promising offerings and demos. The world is not ready to go all-in on serverless, but we’re already seeing increasing interest from developers and startups, who are building secure, reliable, high-performing and cost-effective solutions, and easily mitigating the issues mentioned earlier. You may well wake up one day to find that serverless is fully secured, provides reliable (pre-warmed) performance, and has been adopted by many of your competitors. So be prepared and start investing in this technology today.


5 Common Misconceptions of Serverless Technology


The rapid growth in serverless technology affords companies opportunities to save money on server costs and allows developers to save time and focus on coding rather than back-end operations. A challenge with such swift adoption, though, is maintaining a shared understanding of what serverless actually is – which makes misconceptions common.

Myth #1: Serverless is a Revolutionary New Direction for Software

“A common misconception is that it’s a revolutionary new direction for software,” says Nick Martin, co-founder and CTO at Meteor. “Really, it’s just the next step in the evolution of making software development faster and easier. Like the compiler, the database and cloud computing did in previous eras, serverless further abstracts away the complexities of modern app development and is part of an ongoing trend to free developers from entire classes of concerns.”

As for the benefits for developers, Martin notes that developers can now “focus on their application logic and avoid undifferentiated work like provisioning, server management, or load-balancing.” Serverless ultimately “promises to let developers ship apps faster and at a lower cost,” he says.

Myth #2: Serverless is a Gadget Technology for Hobbyists

Nick Gottlieb, head of Growth at Serverless Inc. believes that one of the biggest myths is that it’s a ‘gadget’ technology that’s not mature or only for a hobbyist. “While serverless compute is still a very early technology, it’s built on the same core infrastructure that providers like AWS, Google, and Microsoft have been investing in and selling to enterprises for years,” says Gottlieb. Furthermore, “because the underlying infrastructure is battle tested, and the value it provides around cost savings and faster time to market are so great, there is already huge amounts of mission-critical enterprise workloads being done on serverless compute.”

Myth #3: Serverless Will Hurt the Movement Toward Containers 

“Containers will continue to be front and center in terms of the underlying infrastructure, but that doesn’t mean they will be the primary unit of deployment for developers,” says Lawrence Hecht, author at The New Stack. “Take, for example, a cloud provider that may build its FaaS (function as a service) on top of containers and use Kubernetes to manage that deployment,” he says. “Individual developers would then deploy application components to functions instead of to container images.”

Hecht notes that this will not happen right away and in the meantime, “We’re seeing emerging companies build dashboards that allow developers to choose which VM, container or function they want to deploy to. Those dashboards are becoming the gateway to the CI/CD pipeline.”

Myth #4: Serverless is Free of Security Vulnerabilities

“The biggest security misconception is thinking you no longer need to worry about known vulnerabilities,” says Guy Podjarny, co-founder and CEO at Snyk. While serverless addresses the risk of known vulnerabilities in OS dependencies, such as OpenSSL’s Heartbleed vulnerabilities, “These ‘unpatched servers’ account for the vast majority of successful attacks today. Serverless applications still contain a large and growing number of application dependencies, pulled from npm, Maven, PyPI and more. These components often carry known vulnerabilities, and require diligent monitoring and preventative tools.”

Myth #5: Serverless Means No More DevOps

“A common misperception is that it completely absolves development teams from the harsh realities of operating software,” says Joe Ruscio, partner at Heavybit. “While it does promise to almost completely do away with the ‘undifferentiated heavy lifting’ of provisioning and managing infrastructure, understanding how your application code performs in production remains of paramount importance.”