How to build a microservice?

By Sonu Meena

Microservice architecture is an advancement over the existing architecture most commonly found today, which is based around the monolith.

In a monolithic architecture, all functionality and features are offered through a single piece of software. Generally, this software is built around three tiers: a data tier, an application (business logic) tier, and a view tier.

This has been the de facto standard for starting any software development, at least in startups. It has benefits that make it the most sought-after architecture to begin development with.

  • Faster to build
    It shortens time to market, and hence almost every new tech company starts with this architecture.
  • Works best in a constrained environment
    With a limited number of developers and a limited budget, the monolith has proven to work best for both marketing and developers.
    Business leaders want features out very fast, and with a few hands working on one piece of software, the team can meet those deadlines pretty well.

So, starting with a boring monolith is good.


That being said, these benefits don’t last forever. With a surge in traffic come scalability issues. The team puts in more resources: more CPUs, more RAM, and more disk to cope with the growth. This is called vertical scaling, where existing servers are upgraded to support a growing number of users.
Vertical scaling has its limits, though. Beyond a point you cannot scale up any further, and your application’s response time starts increasing again, affecting the user experience.

Before things get worse, and to scale further, we look into the horizontal scalability option. This involves running multiple instances of the application behind a fault-tolerant, highly available load balancer. To answer spikes in traffic, you simply add more instances, as in the toy sketch below. Easy, right? But wait…
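A minimal illustration of that idea (the instance addresses here are made up): a round-robin load balancer cycles each request across identical copies of the same application, and absorbing a spike just means adding entries to the pool:

    from itertools import cycle

    # Identical instances of the same monolith sitting behind the
    # load balancer; to handle a traffic spike you add more entries
    # to this pool instead of buying a bigger machine.
    INSTANCES = cycle([
        "http://app-1:8080",
        "http://app-2:8080",
        "http://app-3:8080",
    ])

    def next_backend():
        """Round-robin: each request goes to the next instance in the pool."""
        return next(INSTANCES)

    for _ in range(4):
        print(next_backend())  # app-1, app-2, app-3, app-1, ...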

The scalability issue isn’t confined to internet traffic, which you could answer every time just by increasing numbers. Its effect is more profound internally, on the development side. With more and more features being added, you end up knitting a giant ball that gets unwieldy to carry around, and issues like the following pop up:

  1. The codebase gets bigger and bigger

    Developers keep pushing more features, thereby adding more dependencies and more lines of code. Now here comes the problem: a bigger codebase takes more space and bandwidth to pull or push, which means it takes more time to deploy. And with lots of dependencies, releasing new features may produce downtime. Downtime is bad for any business; it literally means lost revenue. Read: Amazon Goes Down, Loses $66,240 Per Minute

  2. More time to build and test
    An individual developer working on a particular feature now has to download the entire codebase, with all its dependencies, to make it work on their workstation. To test even a small change, they have to run the tests against the entire application to ensure the change doesn’t break anything.
  3. Onboarding gets slower

    Now here comes another challenge. Your company is growing and has hired more developers. There is a single codebase, and teaching a new developer how it works takes a considerable amount of time. Making each new hire familiar with every feature slows the onboarding process down even further.

  4. With more features come more bugs

    Since many developers now work together on the same codebase, they all push features into the same repository. This creates a favorable opportunity for bugs to propagate unnoticed and create havoc in production.
    No matter how hard you try, some will be missed anyhow. When the codebase was small it was easy to track bugs down and kill them. Now, with many releases going out per day, they are getting harder and harder to trace. Unless you have a team of perfectionists, bugs are unavoidable.

And the list continues to grow.

Solution: Componentization


So, horizontal scalability answers your scalability problem only partially. The rest you have to answer by splitting along the y-axis, as per the Scale Cube model.
The y-axis split of the Scale Cube model says: split your big suite into multiple smaller components, i.e., componentize it, as sketched below.
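A minimal sketch of what a y-axis split looks like in practice (the service names and ports below are hypothetical, not from the article): a thin routing layer sends each functional area to its own independently deployable, independently scalable component:

    # Hypothetical mapping from functional area to the component that
    # owns it; each backend is deployed, upgraded and scaled on its own.
    SERVICES = {
        "/users":    "http://users-service:8001",
        "/orders":   "http://orders-service:8002",
        "/payments": "http://payments-service:8003",
    }

    def route(path):
        """Return the full backend URL for the component owning this path."""
        for prefix, backend in SERVICES.items():
            if path.startswith(prefix):
                return backend + path
        raise LookupError("no component owns %s" % path)

    print(route("/orders/42"))  # -> http://orders-service:8002/orders/42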

Componentization should be done in such a way that each component is individually upgradable and replaceable. Components should talk to each other over published interfaces built with well-known, industry-standard technologies like REST and JSON, so that they don’t sound esoteric to end users. They can communicate either through a synchronous protocol like HTTP or, more preferably, an asynchronous protocol like AMQP.
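To make that concrete, here is a minimal sketch (the service, route and queue names are illustrative assumptions): a component exposes a synchronous REST/JSON interface with Flask and, on each request, publishes an event over AMQP with pika so other components can react asynchronously:

    import json

    import pika                       # AMQP client (e.g. for RabbitMQ)
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/orders", methods=["POST"])
    def create_order():
        order = request.get_json()

        # Asynchronous path: publish an "order created" event over AMQP
        # so downstream components can react without being called directly.
        connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
        channel = connection.channel()
        channel.queue_declare(queue="order.created")
        channel.basic_publish(exchange="",
                              routing_key="order.created",
                              body=json.dumps(order))
        connection.close()

        # Synchronous path: answer the caller over HTTP with plain JSON.
        return jsonify({"status": "accepted"}), 202

    if __name__ == "__main__":
        app.run(port=5000)

Because the published interface is just HTTP and JSON, the component behind it can later be rewritten in a different language without its consumers noticing.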

Componentization brings the following benefits to the team:

  • Freedom to choose technology

Teams are divided along cross-functional domains. Each team can choose the right tool for the right job, as long as it stays within constraints.
These constraints are:

  1. Tools should not be esoteric to the team and should not incur extra cost when scaling linearly.
  2. Technology should be standard, so that even end users don’t have to take the pain of learning it explicitly.


  • C.A.L.M.S.

It brings DevOps culture into practice almost inadvertently. There is more communication among teams now, as teams are cross-functional. Using the DevOps life cycle, they get the best out of it.


DevOps Growing in Popularity Despite Unclear Terminology

By David Ramel

DevOps in the enterprise is growing, but interpretations of what the term actually means vary widely, a recent study found.

Delphix Corp., a Data-as-a-Service (DaaS) company, partnered with Gleanster Research to survey more than 2,000 DevOps leaders and practitioners, concluding that “DevOps continues to gain momentum, but data issues — including data security — are major challenges, while the definition of DevOps lacks consistency and success metrics among its leaders and practitioners within IT organizations.”

That lack of consistency is shown by the various definitions of the term DevOps proffered by respondents, such as:

  • Developers and system administrators collaborating to ease the transition between development and production (listed by 84 percent of respondents).
  • Using infrastructure automation to facilitate self-service provisioning of infrastructure by development teams (69 percent).
  • Evolving operations to meet the demands of agile software development teams (60 percent).
  • Developers taking full responsibility for all operations tasks (42 percent).
  • Increasing frequency of deployments to uncover defects earlier in the development lifecycle (35 percent).

For another take, Wikipedia defines DevOps as “a software development method that stresses communication, collaboration, integration, automation and measurement of cooperation between software developers and other [IT] professionals.”

Top Four Pressures Causing Organizations To Invest in DevOps (source: Delphix Corp.)

The survey, titled “2015 Annual State of DevOps,” said that while DevOps is one of the hottest trends in the industry, it’s also one of the most ill-defined. “For some people, embracing DevOps is about managing IT resources with Chef, Puppet or CFEngine,” the survey said, “and for others it is about using tools like Jenkins to automate deployments to cloud-based infrastructure. For several of the organizations we surveyed, DevOps was simply about making sure that developers and operations professionals were communicating efficiently.”

Whatever it is, DevOps is gaining traction, according to the report, which stated that nearly every responding organization was practicing the technique or planned to do so within 24 months. The top three reasons for that were: to deliver software faster (66 percent); identify bugs earlier (44 percent); and deliver software more frequently (43 percent).

As for impediments to successful DevOps implementations, a question asking respondents to identify the top two challenges to organizational DevOps initiatives yielded these responses: application teams move faster, the rest of IT struggles to keep up (92 percent); testing environments are limited due to data management challenges (90 percent); several DevOps groups compete for limited resources and budgets (82 percent).

If successful, the primary reported benefits of DevOps were divided into key performance indicators (KPIs) related to software releases and software quality. “It’s abundantly clear that the economic impact of time is a theme that ripples through all facets of DevOps,” the survey stated. “This comes as no surprise given the fact that survey respondents indicated that on average 40 percent of their day is spent re-coding due to bugs. Respondents also indicated it takes an average of 2 hours to reset an environment after a test cycle; reducing this time could have a real and substantial impact on efficiency and effectiveness.”

And yes, the report did propose its own global definition of DevOps:

DevOps is more than just the close collaboration of two departments (development and operations) within IT, it is more than just managing infrastructure with Chef or Puppet, and DevOps is much more than a specific collection of tools and techniques used to automate deployments and manage infrastructure. The term “DevOps” refers to the transformation IT experiences when cross-functional teams develop and deliver software across the full spectrum of IT systems. From software architecture and design to system administration and production support, the term “DevOps” refers to a style of IT management and implementation that places an emphasis on automation and iterative delivery of software, while also empowering developers to manage portions of the software delivery process that were previously inaccessible due to specialization within IT.

DevOps tools and practices have one thing in common: they focus on reducing time to market and making it possible to extend the frequent iterations of Agile into infrastructure and data environments. Overall DevOps is inseparable from both agile software development and cloud computing. As a term, “DevOps” stands for “our infrastructure moves as quickly as our developers need it to.”


Top 10 important DevOps controls

Posted on

by James Henderson

DevOps, the integration of development and operations teams to eliminate conflicts and barriers, often leads to more features in business applications, developed in a faster time and with greater efficiencies.

But the very features that make DevOps attractive to organisations can cause concern for assurance, security and governance practitioners.

A new guide from global IT association ISACA outlines 10 key controls companies need to consider as they embrace DevOps to achieve reduced costs and increased agility.

“DevOps can introduce new risk but, done right, it also mitigates other risk,” says Bhavesh Bhagat, CEO EnCrisp.

According to DevOps Practitioner Considerations, those controls are:

1. Automated software scanning

2. Automated vulnerability scanning

3. Web application firewall

4. Developer application security training

5. Software dependency management

6. Access and activity logging

7. Documented policies and procedures

8. Application performance management

9. Asset management and inventorying

10. Continuous auditing and/or monitoring

“Because DevOps adoption changes the environment and often impacts a company’s carefully crafted control environment and accepted level of risk, governance, security and assurance professionals need to play a key role,” Bhagat adds.

For Bhagat, the governance decisions relating to risk, including decisions made in the past, may require rethinking, and performance metrics on which business decisions are based may need to be adjusted.

“Furthermore, many security controls that are intertwined with the development process may be compromised,” Bhagat adds.

Is there such a thing as a DevOps Hierarchy of Needs?

In 1943 the psychologist Abraham Maslow proposed the concept of a ‘hierarchy of needs’ to describe human motivation. Most often portrayed as a pyramid, with the more fundamental needs occupying the largest space in the bottom layers, his theory states that only in the fulfilment of the lower-level needs can one hope to progress to the next strata of the pyramid. The bottom-most need is of course physiological (i.e. food, shelter, beer etc); once this is achieved we can start to think about safety (i.e. let’s make sure nobody takes our beer), then we start looking for Love and Self Esteem before ending up cross-legged in an ashram searching for Self-Actualization and Transcendence.


Is this a Devops blog or what? Yes, yes it is. The suggestion is not that we should all be striving for Devops Transcendence or anything, but that perhaps the general gist of Maslow’s theory could be applied to coin a DevOps Hierarchy of Needs, and we could use the brief history of our own Devops team at Spaceape to bolster this idea.

In the beginning there was one man. This man was tasked with building the infrastructure to run our first game, Samurai Siege; not just the game-serving tier but also a Graphite installation, an ELK stack, a large Redis farm, an OpenVPN server, a Jenkins server, et cetera et cetera. At this juncture we could not even be certain that Samurai Siege would be a success. The remit was to get something that worked, to run our game to the standards expected by our players.

Some sound technological choices were made at this point, chief of which was to build our game within AWS.

With very few exceptions, we run everything in AWS. We’re exceedingly happy with AWS, and it suits our purposes. You may choose a different cloud provider; you may forego a cloud provider altogether and run your infrastructure on-premise. Whichever it is, this is the service that provides the first layer of our DevOps Hierarchy of Needs (DHoN). You need some sort of request-driven IaaS to equate to Maslow’s physiological layer. Ideally this would include not only VMs but also your virtual network and storage. Without this service, whatever it might be (and it might be as simple as, say, a set of scripts to build KVM instances), you can’t hope to build toward the upper reaches of the pyramid.

Samurai Siege was launched. It was a runaway success. Even under load the game remained up, functional and performant. The one-man Devops machine left the company and Phase 2 in our short history commenced. We now had an in-house team of two and one remote contractor and we set about improving our lot, striving unawares for that next level of needs. It quickly became apparent, however, that we might face some difficulty…

If AWS provided the rock on which we built our proverbial church, we found that the church itself needed some repairs, someone had stolen the lead from its roof.

Another sound technology choice that was made early was to use Chef as the configuration management tool. Unfortunately – and unsurprisingly given the mitigating circumstances – the implementation was less than perfect. It appeared that Chef had only been used in the initial building of the infrastructure, attempts to run it on any sort of interval led inevitably to what we had started to call ‘facepalm moments’. We had a number of worrying 3rd party dependencies and if Chef was problematic, Cloudformation was outright dangerous. We had accrued what is commonly known as technical debt.

Clearly we had a lot of work to do. We set about wresting back control of our infrastructure. Chef was the first victim: we took a knife to our community cookbooks, we introduced unit tests and cookbook versioning, we separated configuration from code, we even co-opted Consul to help us. Once we had Chef back on-side we had little choice but to rebuild our infrastructure in its entirety, underneath a running game. With the backing of our CTO we undertook a policy of outsourcing components that we considered non-core (this was particularly efficacious with Graphite, more on this one day soon). This enabled us to concentrate our efforts and to deliver a comprehensive game-serving platform, of which we were able to stamp out a new iteration for our now well-under-development second game, Rival Kingdoms.
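As an illustrative sketch of one of those changes, separating configuration from code (this is not Spaceape’s actual code; the Consul address and key names are assumptions): settings are resolved at runtime from Consul’s key/value HTTP API instead of being baked into the codebase or cookbooks:

    import urllib.error
    import urllib.request

    CONSUL = "http://localhost:8500"  # assumes a local Consul agent

    def get_config(key, default=None):
        """Fetch a raw value from Consul's KV store (GET /v1/kv/<key>?raw)."""
        try:
            with urllib.request.urlopen("%s/v1/kv/%s?raw" % (CONSUL, key)) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError:
            return default  # key not set: fall back instead of failing

    # Code stays generic; the environment-specific value lives in Consul.
    redis_host = get_config("config/redis/host", default="localhost")
    print(redis_host)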

It would be easy at this point to draw parallels with Maslow’s second tier, Safety. Our systems were resilient and monitored, we could safely scale them up and down or rebuild them. But actually what we had reached at this point was Repeatability. Our entire estate – from the network, load-balancers, security policies and autoscaling groups through to the configuration of Redis and Elasticsearch or the specifics of our deployment process – was represented as code. In the event of a disaster we could repeat our entire infrastructure.

Now, you might think this is a lazy observation. Of course you should build things in a repeatable fashion, especially in this age of transient hosts, build for failure, chaos monkeys, and all the rest of it. The fact is though, that whilst this should be a foremost concern of a Devops team, quite often it is not. Furthermore, there may be genuine reasons (normally business related) why this is so. The argument here is that you can’t successfully attain the higher layers of our hypothetical DHoN until you’ve reached this stage. You might even believe that you can but be assured that as your business grows, the cracks will appear.

At Spaceape we were entering Phase 3 of our Devops journey, the team had by now gained some and lost some staff, and gained a manager. The company itself was blossoming, the release date of Rival Kingdoms had been set, and we were rapidly employing the best game developers and QA engineers in London.

With our now sturdy IaaS and Repeatability layers in place, we were able to start construction of the next layer of our hierarchy – Tooling. Of course we had built some tools in our journey thus far (they could perhaps be thought of as tiny little ladders resting on the side of our pyramid) but it’s only once things are standardised and repeatable that you can really start building effective tooling for the consumption of others. Any software that tries to encompass a non-standard, cavalier infrastructure will result in a patchwork of ugly if..then..else clauses and eventually a re-write when your estate grows to a point where this is unsustainable. At Spaceape, we developed ApeEye (a hilarious play on the acronym API) which is a RESTful Rails application that just happens to have a nice UI in front of it. Perennially under development, eventually it will provide control over all aspects of our estate but for now it facilitates the deployment of game code to our multifarious environments (we have a lot of environments – thanks to the benefits of standardisation we are able to very quickly spin up entirely new environments contained on a single virtual host).

And so the launch of Rival Kingdoms came and went. It was an unmitigated success, the infrastructure behaved – and continues to behave – impeccably. We have a third game under development, for which building the infrastructure is now down to a fine art. Perched as we are atop our IaaS, Repeatability and Tooling layers, we can start to think about the next layer.

But what is the next layer? It probably has something to do with having the time to reflect, write blog posts, contribute to OSS and speak at Devops events, perhaps in some way analogous to Maslow’s Esteem level. But in all honesty we don’t know, we’ve but scarcely laid the foundations for our Tooling level. More likely is that there is no next level, just a continuous re-hashing of the foundations beneath us as new technologies and challenges enter the fray.

The real point here is a simple truth: only once you have a solid, stable, repeatable and predictable base can you start to build on it to become as creative and as, well, awesome as you’d like to be. Try to avoid the temptation to take shortcuts in the beginning and you’ll reap the benefits in the long term. Incorporate the practices and behaviours that you know you should, as soon as you can. Be kind to your future self.