How Service Mesh Can Help Your Organization

Many organisations today are looking for new ways to innovate with microservices architecture and multi-cloud infrastructure, which can deliver flexibility, speed, and choice. Modern application and infrastructure architectures enable continuous integration/continuous deployment (CI/CD) for faster deployments, bug fixes, and updates. However, deploying these applications in production environments creates new issues around security, scalability, and management: as applications grow in scale and complexity, they become harder to manage and troubleshoot.
The solution for deploying production-grade microservices at scale is a service mesh: a framework that connects necessary application services, such as load balancing, monitoring, and security, to microservices. A service mesh deploys an array of network proxies alongside the containers, and each proxy serves as a gateway for every interaction between containers and servers. In effect, it is a dedicated infrastructure layer for handling service-to-service communication.
A matter of scale
Scalability of services is necessary to address the problems posed by microservices architecture, which works by breaking applications down into many small independent services (microservices), each wrapped in its own lightweight and highly portable virtual environment (container). So while a conventional web application might span a handful of virtual machines, a microservices application can comprise a collection of hundreds or even thousands of microservices, each in its own container, running anywhere across a hybrid cloud infrastructure.
These containers can be turned on and off, updated, patched, and moved around very easily without affecting the availability of the application as a whole. Each of these containers also needs to find and communicate with its companions, and to gain access to critical application services. That is far from straightforward given the sheer number of containers involved and their potentially high turnover rates.
Managing this communication for cloud-native apps by traditional means is impossible. A service mesh, as a dedicated infrastructure layer for handling service-to-service requests, effectively connects the dots: it provides a centrally managed service ecosystem ready for containers to plug into and do their work.
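To make the sidecar-proxy idea concrete, here is a minimal sketch in plain Python (all names are hypothetical, not any real mesh's API): each service instance talks only to its local proxy, which asks a central registry where other services live and forwards the request on its behalf.

```python
# Hypothetical sketch of the sidecar-proxy pattern behind a service mesh.
# A real mesh (e.g. Istio with Envoy proxies) would also handle mTLS,
# retries, load balancing, and telemetry; this only shows discovery + routing.

class ServiceRegistry:
    """Control plane: maps service names to live instance addresses."""

    def __init__(self):
        self._instances = {}

    def register(self, name, address):
        # Containers register themselves as they come online.
        self._instances.setdefault(name, []).append(address)

    def resolve(self, name):
        # Return the currently known instances of a service.
        return self._instances.get(name, [])


class SidecarProxy:
    """Data plane: a proxy deployed alongside one container."""

    def __init__(self, registry):
        self.registry = registry

    def call(self, service_name, request):
        # Discovery: ask the control plane for healthy instances.
        instances = self.registry.resolve(service_name)
        if not instances:
            raise LookupError(f"no instances of {service_name!r}")
        # Trivial routing: pick the first instance; a real mesh would
        # load-balance across instances and retry on failure.
        target = instances[0]
        return f"{target} handled {request!r}"


registry = ServiceRegistry()
registry.register("payments", "10.0.0.7:8080")

proxy = SidecarProxy(registry)
print(proxy.call("payments", "charge $5"))
```

The point of the pattern is that the application container never needs discovery, routing, or security logic of its own; all of that lives in the proxy layer, which the mesh's control plane configures centrally.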
Service please!
Despite the relative immaturity of this market, a lot is going on to put the theory into practice, by vendors in the services space (particularly application delivery, traffic, and security management solutions) as well as the big-name cloud providers. This has led to a number of proprietary service mesh products. Of greater interest, though, is Istio, an open source initiative that provides the fundamentals you need to successfully run a distributed microservices architecture. Originally led by Google, IBM, and Lyft, it now has an ever-growing list of well-known names contributing to and supporting its development, including Cisco, Pivotal, Red Hat, and VMware. Istio reduces the complexity of managing microservice deployments by providing a uniform way to secure, connect, and monitor microservices.
Istio is now almost synonymous with “service mesh”, just as Kubernetes is with “container orchestration”. Istio’s initial implementations are bound to Kubernetes and cloud native application architecture. The promise of service mesh is alive today within a Kubernetes cluster, but the true value will grow tremendously when a service mesh can be applied to all applications across clusters and clouds.
Where next for service mesh
Today many companies are joining the service mesh conversation under the Istio banner. This is the type of support that helped Kubernetes evolve from a “project” to the de facto container orchestration solution in a very short span of time.
The rich combination of established technology giants and innovative start-ups will continue to drive the development of the Istio service mesh, adding more features and supporting more applications. By extending Istio with other proven technologies, one can apply its value to traditional architectures, including those of existing applications in the datacentre.
Delivering granular application services in this way is an idea that is readily applicable to traditional applications running on virtual machines or bare-metal infrastructure. With the disaggregation of application components made possible by containerized microservices architecture, this mode of service delivery is essential and will eventually become ubiquitous across all application types, beyond Kubernetes clusters.
Companies looking to adopt microservices will need a service mesh of some description, and Istio is the best solution for it. Whatever form it takes, the chosen solution needs to deliver value that fits both new and existing applications.
Ranga Rajagopalan
CTO & Co-founder, Avi Networks