Recent discussions about service mesh have been dominated by vendors, each trying to frame service mesh as a new technology that provides security, reliability, and observability for east-west traffic. However, just as microservices are an architectural pattern and not a specific technology, a service mesh is a new way to deploy features that in the past fell under the category of API management.
In a service mesh, a proxy is deployed locally with each service in an application. Each service only communicates directly with the proxy on its host, and proxies communicate with each other to pass traffic between services over the network.
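That topology can be sketched in a few lines. The following is a minimal in-process illustration of the pattern, not any particular mesh implementation; the service names (`orders`, `billing`) and the `Proxy` class are hypothetical, and real proxies would of course talk over the network rather than through a shared dictionary.

```python
class Proxy:
    """Sits beside one service; all of that service's traffic goes through it."""

    def __init__(self, service_name, handler, mesh):
        self.service_name = service_name  # service this proxy is co-located with
        self.handler = handler            # the local service's request handler
        self.mesh = mesh                  # registry mapping service name -> proxy

    def call(self, target_service, request):
        # Outbound: the local service hands the request to its own proxy,
        # which forwards it to the proxy in front of the target service.
        return self.mesh[target_service].receive(request)

    def receive(self, request):
        # Inbound: deliver the request to the co-located service.
        return self.handler(request)


mesh = {}

def orders_handler(req):
    return {"order": req["id"], "status": "created"}

def billing_handler(req):
    return {"invoice": f"inv-{req['id']}"}

mesh["orders"] = Proxy("orders", orders_handler, mesh)
mesh["billing"] = Proxy("billing", billing_handler, mesh)

# The orders service never addresses billing directly; only its proxy does.
response = mesh["orders"].call("billing", {"id": 42})
```

The point of the indirection is that anything the proxies do on the wire (encryption, metrics, retries) is invisible to the services themselves.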
Marco Palladino (Kong) explores the service mesh pattern and the problems it is designed to solve: security (proxies can encrypt all network traffic without the services being aware of it), observability (proxies collect metrics, logs, and tracing data from network traffic), reliability (proxies can enforce rate limiting, perform retries, and handle network drops), composability (services can be swapped or reused with nothing but a proxy configuration change), standardization (all east-west traffic can be secured in the same way), and efficient development (service developers can focus on business logic instead of interservice communication).

Finally, he explains the requirements for any technology that supports this pattern: services can be any size, written in any language, and run on any infrastructure or a mix; proxies must be lightweight, since an instance is deployed with each service; proxies should be flexible and composable to provide the security, reliability, and observability benefits; proxies should be simple to deploy and replace in containerized environments; and proxies should be self-reliant and resilient to network slowdowns and failures.
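To make the reliability and observability claims concrete, here is a small sketch of a proxy that adds retries and request metrics around forwarding, so the service code stays pure business logic. The `ResilientProxy` name, the metrics fields, and the flaky upstream are all illustrative assumptions, not part of any real mesh's API.

```python
class ResilientProxy:
    """Sketch: a proxy layers retries and metrics onto forwarding,
    without the service behind it being aware of either."""

    def __init__(self, upstream, max_retries=3):
        self.upstream = upstream          # callable standing in for the remote proxy
        self.max_retries = max_retries
        self.metrics = {"requests": 0, "retries": 0, "failures": 0}

    def call(self, request):
        self.metrics["requests"] += 1
        for _ in range(self.max_retries):
            try:
                return self.upstream(request)
            except ConnectionError:
                # Transient network drop: count it and try again.
                self.metrics["retries"] += 1
        self.metrics["failures"] += 1
        raise ConnectionError("upstream unavailable after retries")


# Simulated flaky upstream that drops the first two attempts, then succeeds.
attempts = {"n": 0}

def flaky_upstream(request):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient drop")
    return {"ok": True}

proxy = ResilientProxy(flaky_upstream)
result = proxy.call({"id": 1})
```

After the call, `proxy.metrics` records one request and two retries: exactly the kind of telemetry a mesh exports without any change to the service itself.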
This session was recorded at the 2019 O'Reilly Software Architecture Conference in San Jose.