
06 Mar The Rise of the Full Stack Enterprise CPaaS

Even though all code is shifting to the public cloud, there is no doubt that the enterprise infrastructure running there is anything but public. Most enterprises set up their cloud infrastructure in closed, secure environments, where they deploy isolated microservices that communicate primarily over HTTP REST APIs.

This is why any communications platform as a service (CPaaS) in this environment needs to come with an on-premises deployment option, not just be available as a public API, and it needs to work behind the corporate firewall.

In practice, most code running on cloud machines talks only to other private APIs, not public ones. For example, Netflix's API traffic is 99% internal, as is Evernote's. Even more traditional businesses like The Guardian report up to 70% internal API traffic.

Since only about 1% of the code running within these companies' software infrastructure is accessed publicly or accesses public API services, the rest runs inside a protected environment and connects to APIs hosted in the same trusted environment. This is true whether the code runs on a hardware box within a company's own data center or on a cloud offering like AWS. In a hybrid deployment, where some services run on-premises and some reside on AWS, the two are connected by a VPN.
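To make this concrete, here is a minimal sketch of how application code might resolve service endpoints so that traffic stays on the private network whenever a service is available in-VPC. The hostnames and the routing rule are hypothetical, chosen purely for illustration; in practice these would come from your service discovery or VPC-private DNS.

```python
# Sketch (hypothetical hostnames): prefer private in-VPC addresses,
# fall back to a public gateway only for services not hosted internally.

INTERNAL_SERVICES = {
    "billing": "http://billing.internal.example:8080",
    "messaging": "http://messaging.internal.example:8080",
}

PUBLIC_GATEWAY = "https://api.example.com"

def endpoint_for(service: str) -> str:
    """Return the base URL for a service: the private in-VPC address
    when one exists, otherwise the public gateway."""
    return INTERNAL_SERVICES.get(service, PUBLIC_GATEWAY)

print(endpoint_for("billing"))   # resolves privately; traffic never leaves the VPC
print(endpoint_for("payments"))  # not hosted internally; routed via the public gateway
```

With a lookup like this, the roughly 99% of internal calls never touch the public Internet, while the rare external dependency is still reachable.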

A recent blog post by Yorgos Saslis highlights how many levels of protection and isolation it takes to run microservices safely on AWS. Even within an environment of high mutual trust, where team members in the same company work together daily towards a common goal, there are good reasons to isolate access between microservices and minimize exposure to planned and unplanned service disruptions.

Most new enterprise and service provider development projects are designing and implementing microservices, or dismantling current monolithic applications and services and replacing them with microservices. There are very good reasons for this, including ease of development and debugging, improved resilience for individual features or services, better and easier scaling of the application or service, and less need for coordination and management in the ongoing development and support of the individual applications.

The following simplified diagram from the AWS VPC documentation illustrates the hierarchical nature of software resource isolation used to safeguard ingress and egress to the public Internet. In reality, the topologies are more complex, graph-like structures with VPC peering and direct VPN access to enterprise data centers and on-premises infrastructure.

[Diagram: AWS VPC resource isolation hierarchy]

In a more common scenario, the deployment model has some services in the cloud and some on the customer's premises, with the two environments connected by a VPN. This deployment model is commonly known as hybrid.

Also, to illustrate how microservices may be deployed on AWS in different instances or regions, and how they can be connected to one another across regions, the diagram shows these links as VPN connections as well.

[Diagram: microservices inside AWS VPC boundaries, with few exposed to external public IPs]

What the diagram illustrates is how and why so few of the deployed microservices in a real-world environment are exposed as external public APIs.

When building application microservices, good developers borrow the smallest set of required functionality from in-house and third-party software to do the job, and nothing else.

Being able to pull one or more building blocks, in the form of containerized microservices, from a third-party stack becomes a key selection criterion when evaluating build-versus-buy options.

We have experienced this trend at Telestax for the past couple of years. Customers who used to run large monolithic applications on dedicated hardware clusters are moving towards containerized, orchestrated cloud environments, and they ask us to deliver Restcomm as byte-sized microservices that can fit efficiently and safely into their new apps.
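As a sketch of what consuming such a component looks like from the application side, the snippet below builds a request to a hypothetical messaging microservice reachable at an in-cluster address. The host, port, resource path, and JSON payload shape are all assumptions for illustration, not the actual Restcomm API; consult the vendor's REST API reference for the real contract.

```python
# Sketch (hypothetical endpoint and payload): calling a CPaaS messaging
# microservice deployed inside the same trusted environment.
import json
import urllib.request

CPAAS_BASE = "http://cpaas-messaging.internal:8080"  # assumed in-cluster address

def build_sms_request(sender: str, recipient: str, body: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to the local messaging service."""
    payload = json.dumps({"from": sender, "to": recipient, "body": body}).encode()
    return urllib.request.Request(
        url=f"{CPAAS_BASE}/api/sms",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sms_request("+15551230000", "+15559870000", "Order shipped")
# The connection is opened only on urlopen(); because the host resolves
# to an internal address, the call never crosses the public Internet.
```

The point is that the application talks to the CPaaS component exactly as it talks to any other in-house microservice: plain HTTP inside the trusted network, with no public API key or Internet egress involved.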

The first generation of CPaaS vendors focused on public APIs, which offered great value to a limited number of applications. However, public-only APIs cannot address a number of critical enterprise use cases, such as business process workflows within protected environments.

With an increasing focus on accommodating enterprise customers, CPaaS vendors began to experience the limitations of public-only APIs. Some cloud providers introduced enterprise interconnect products, which partially addressed the issue by allowing a more secure connection between an enterprise's private cloud environment and the provider's, easing concerns about routing all traffic via the public Internet.

Exceedingly few CPaaS vendors are beyond the public API stage, and thus most do not address the emerging real-world enterprise API requirements.

Private network interconnects are a step in the right direction, but they still leave a lot of room for improvement. One major problem with network-level private interconnects is that they need to be configured for access to every VPC in an organization that runs application code. Alternatively, API gateways or proxies need to be designed and implemented at each required application VPC boundary for the private interconnect to be usable. This is resource-intensive and complex to design and maintain, not to mention that VPNs don't play well with container orchestration environments such as Kubernetes or Docker Swarm.

The technology complexity factor for private interconnects is one challenge. The bigger challenge is the human factor. Application developers have to coordinate with their IT and Network teams to make this happen. Business cases have to be built and presented at a director or executive committee level. IT budget has to be set aside for establishing the interconnects and managing the ongoing risks and maintaining required quality of service (QoS). Without these steps the application developers cannot build new rich communications features into their new application workflows. And all this takes a lot of time to hash out.

Opening a private connection to a third-party vendor's VPC carries still more risks. It leaves the customer's infrastructure exposed to potential vulnerabilities should the vendor side of the interconnect be compromised. This requires the application-side APIs to be built with security and fault-tolerance overhead, which takes extra time, more security expertise, and additional CPU resources.

For most scenarios, it is more practical and more feasible for enterprise apps to consume the CPaaS as safely containerized microservices deployed in a trusted application environment. Some of the advantages include:

 

  • Low overhead. The application code can focus on its business logic and use the internal CPaaS microservices for its core functions, while benefiting from the inherent safety and low latency of a local operating environment. This of course requires that the CPaaS itself is designed as containerized autonomous components that can be consumed as independent microservices and still function transparently to the application as one coherent CPaaS stack.
  • More Control Over DevOps Risk. Deploying a component locally reduces many of the external API operational risks. When depending on external APIs, there are not only technical risks but also people and process risks. For a public API to operate flawlessly, all of its network, security, load balancing, fault tolerance, and high availability layers have to work, and the people responsible for its operational availability and software quality have to be coordinated 24×7. With your own internal CPaaS, you are in charge, and you make the risk assessments that you are willing to live with.
  • Reduced exposure to public Internet. By deploying the full CPaaS stack or smaller components into an application environment as needed, the risk of public Internet access is reduced to that of securing telecom protocols. All enterprises already have their best practices for PSTN and VoIP connectivity in place. These telecom connections have well-established external and internal access points that can be leveraged by the new enterprise CPaaS.
  • CI/CD integration. Another advantage of CPaaS components consumed as local containerized microservices is that they can be embedded in the application CI/CD suite. The CPaaS components can be continuously tested for regression, integration, performance, and functional integrity as the application evolves. With a public API you are left with little choice but to stub the external service or mock the invocations to its API. Both options, however, bring their own maintenance burden and require updates as the public API evolves. With a containerized microservice, you can run your tests in a staging or pre-production environment against the real thing!
  • Alignment on Updates. When working with locally deployed microservices, the application DevOps team can choose to align CPaaS software updates with its own scheduled maintenance windows to minimize disruption in service. Furthermore, new patches can be reviewed, tested carefully in the application CI/CD environment before applying to production code.
  • Quick Fixes. When the CPaaS source code is available to enterprise customers, as is the case with Restcomm, app developers can jump directly into debug mode when in a time crunch. With the commercial version of Restcomm, you have access to the code, containers, and deployment mechanisms as they become available, so your team can inspect the code and come up with a workaround or even a quick patch.
  • Economies of scale. Larger enterprises have negotiated significant discounts with AWS or an alternative IaaS provider. They have also negotiated significant discounts with their telecom service providers. Being able to run a CPaaS on their own cloud accounts and telco plans, may offer additional cost benefits.
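The CI/CD point above can be sketched in code. In the snippet below, the same application function is exercised with a mock transport (standing in for a stubbed public API in unit tests) or with a real HTTP client pointed at a locally deployed CPaaS container in staging. The endpoint, environment variable, and payload are hypothetical.

```python
# Sketch (hypothetical endpoint names): one test path, two transports.
import os
from unittest import mock

def send_sms(http_post, to: str, body: str) -> int:
    """Send an SMS via whichever HTTP POST callable is supplied;
    returns the HTTP status code reported by the transport."""
    base = os.environ.get("CPAAS_BASE", "http://cpaas.internal:8080")
    return http_post(f"{base}/api/sms", {"to": to, "body": body})

# Unit-test run: the transport is a mock standing in for the public API.
stub = mock.Mock(return_value=201)
assert send_sms(stub, "+15550001111", "hi") == 201
stub.assert_called_once()

# Staging run: http_post would be a real HTTP client, and CPAAS_BASE would
# point at the containerized CPaaS instance, exercising "the real thing"
# without any change to the application code under test.
```

The stub path keeps unit tests fast and hermetic, while the containerized path gives the integration coverage that a public-only API cannot.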

 

Conclusion

CPaaS is a fast-growing market and a quickly advancing technology, with more competitors entering weekly. Most first-generation CPaaS providers took an all-public-API approach. This is a great, frictionless way to start building new apps with rich communication features.

But as communication API adoption moves from startups to mainstream enterprises, CPaaS providers have to take into account the requirements of more sizeable customers. In addition, the success of enterprise CPaaS will be in large part aided by the rise of containerized microservices architectures. Public-only API providers will not be able to meet these needs. At Telestax, we are already starting to see more real-world enterprise apps taking advantage of CPaaS components running in their own CI/CD and DevOps environments.

CPaaS Deployment Model Comparison

                                 Public APIs    Local Microservices
Effort to get started            Low            Moderate
API latency                      Good           Excellent
API uptime                       99.99%         100%
CI/CD integration                No             Yes
Control over update timing       No             Yes
Self-fix capability              No             Yes
Leverage own cloud account       No             Yes
Leverage own telco plan          No             Yes
Exposure to public Internet      Significant    Minimal
External DevOps dependency       High           Low

 
