A Secret Weapon for Empowering Automation
Unlocking Cloud-Native Potential for MEC and RAN
Network operators across the world are feeling the pressure from unprecedented demand and user expectations. The number of global mobile Internet subscribers surged to over 4.3 billion last year,1 exposing the need for networks to be thoroughly adaptable, delivering a high Quality of Experience (QoE) alongside a strong return on investment and significant growth.
Open-source solutions like Kubernetes (see Figure 1) hold the key to this transformation by enabling automated lifecycle management at scale for containerized applications and services. (See Sidebar: What in the World is Kubernetes?)
Choosing the right platform can help pave the way for a connected, digitalized world through optimized service delivery. In this transition, the big lesson to learn is that how you automate is just as crucial as what you automate.
It’s no surprise that industry leaders are leveraging the benefits of these systems, with Rakuten recently becoming the first telecom operator to deliver a 100% cloud-native architecture.
Cutting “Edge” Connectivity
Internet of Things (IoT) devices are now broadly embedded across a wide variety of industries. From energy to agriculture, companies are looking to harness the abundant potential of connected operations and data.
The cloud-computing capabilities offered at the edge of the network through Multi-access Edge Computing (MEC) help operators deliver services that require real-time functionality, by hosting virtual environments close to the devices that need them. Applications such as Autonomous X, Virtual Reality (VR), Augmented Reality (AR), Industry 4.0 and Ultra-High-Definition (UHD) video, all of which depend on a real-time connection, can thrive on this kind of connectivity. With MEC, capability is moved closer to the user to produce a low-latency, high-bandwidth environment. Instead of backhauling all data to a central site to be analyzed and processed, operators can run the service locally, gaining high throughput alongside minimal latency. But how can an operator make this process fully automated? The answer lies in picking the optimal distribution.
More and more professionals continue to move their applications, such as big data workloads, to containers, in line with the rising demand for machine learning (ML) applications and IoT technology. With the container management market forecast2 to grow to around US$944 million within the next two years, it is clear that vendors are becoming aware of the benefits of cloud automation.
When containerized, applications are streamlined and broken down into their constituent parts and functions, called micro-services. This empowers operators to scale out only the container responsible for a specific function or task, maximizing scalability and reliability when running applications. It also shortens life cycle management timelines: auto-healing resolves failures without manual intervention, and micro-services can auto-scale in response to any number of required KPIs. With the “right kind” of Kubernetes automation, life cycle processes have been reduced from days to minutes and from minutes to sub-seconds.
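To make the auto-healing idea concrete, here is a minimal sketch of the reconcile-loop pattern that underpins it: the orchestrator continuously compares the declared number of replicas for a micro-service with the number actually healthy and closes any gap automatically. The ReplicaState structure and the simulated cluster are illustrative placeholders, not a real Kubernetes or Robin.io API.

```python
from dataclasses import dataclass

@dataclass
class ReplicaState:
    """Hypothetical view of one micro-service: declared vs. observed replicas."""
    desired_replicas: int
    healthy_replicas: int

def replicas_to_start(state: ReplicaState) -> int:
    """Auto-healing reduces to closing the gap between desired and observed."""
    return max(state.desired_replicas - state.healthy_replicas, 0)

# Simulated cluster state: the "session-db" micro-service has lost two of its five pods.
cluster = {"session-db": ReplicaState(desired_replicas=5, healthy_replicas=3)}

for name, state in cluster.items():
    missing = replicas_to_start(state)
    # A real controller would now schedule the replacement containers automatically,
    # restoring the declared state with no operator involvement.
    print(f"{name}: starting {missing} replacement replica(s)")
```

The same loop, run continuously and driven by KPIs instead of a fixed replica count, is what turns a manual recovery task into sub-second automation.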
When it comes to building these accessible, highly scalable containerized applications, Kubernetes can automatically scale pods either horizontally or vertically based on data reported directly from the system or other linked sources. The vast majority of Kubernetes platforms, including those offered by Robin.io, support the Horizontal Pod Autoscaler (HPA), which accommodates dynamic changes in an application’s load over time by altering the number of pods. That keeps average pod utilization, or average application response units, roughly constant, supporting fluctuations in demand while offering unprecedented ease of use. The number of organizations auto-scaling their Kubernetes pods is growing, with approximately 40% now using HPAs.3
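For readers who want to see what “altering the number of pods” means in practice, the sketch below reproduces the proportional rule described in the Kubernetes HPA documentation, desired replicas = ceil(current replicas × current metric ÷ target metric), clamped to configured bounds. The function and its parameters are illustrative; in a live cluster this behavior is configured declaratively on an HPA object rather than coded by hand.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Proportional HPA rule: move average utilization back toward the target
    by scaling the pod count, clamped to the configured bounds."""
    proposed = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(proposed, max_replicas))

# Example: 4 pods averaging 80% CPU against a 50% target scale out to 7 pods.
print(desired_replicas(current_replicas=4, current_metric=80, target_metric=50))
```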
A Cloud Cure?
One pitfall in embracing Kubernetes is assuming that it alone is a simple cure-all for repetitive or scale-out tasks. While it is supporting the en masse migration to the cloud, variations between platforms and orchestration solutions mean there can be large disparities in time to outcome, resource utilization, solution costs, and opportunities. How you automate is just as important as what you automate, so a system’s ease of use should play a vital role in an operator’s decision-making in order to generate success throughout the life cycle of a service.
Platform selection must therefore be made with the utmost care: it is not just about features, but about how they are implemented for performance, flexibility and scale, and how they reduce the time to outcome of your service integration and production life cycles.
This is even more apparent when one distributes applications and services to the edge. Simply taking a clumsy, legacy Kubernetes or VM solution to the edge won’t cut it. Additional concerns that need to be addressed include cloud platform footprint, data retention strategies and footprints, and reliable functionality and throughput as one scales down to the edge and smooths integration back into head-end services or control points.
Handling workloads such as edge applications and subscriber information has also become a key consideration in Kubernetes deployments. Agility and efficiency can be expected if these are skillfully handled, but because Kubernetes micro-services add a level of complexity, snapshotting and cloning storage volumes is no longer enough. Zero-touch automation requires snapshotting the other constructs as well, such as configuration, application metadata and SLA policies, and the benefits of doing so are tenfold.
Teams can very quickly roll back an entire application to a previous state, or clone it so that a fully functioning, running database comes up from a previously taken snapshot, with no hardcoding, hunting, or restarting for the user. A non-Kubernetes-aware, storage-only, or simple Container Storage Interface (CSI) approach to operations is counterproductive in Kubernetes environments: it works against the agility and efficiency expected and will only hamstring the capabilities of your chosen solution.
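As an illustration of the difference, the sketch below contrasts what a storage-only snapshot captures with the application-aware snapshot described above, which also carries configuration, application metadata and SLA policies so the whole application can be rolled back or cloned in one step. The data structures and field names are hypothetical and are not Robin.io’s actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VolumeSnapshot:
    """What a storage-only / plain CSI snapshot captures: bytes on disk."""
    volume_id: str
    snapshot_id: str

@dataclass
class AppSnapshot:
    """An application-aware snapshot: volumes plus everything needed to
    roll back or clone the whole application in one step."""
    app_name: str
    volumes: List[VolumeSnapshot]
    config: Dict[str, str] = field(default_factory=dict)        # runtime configuration
    metadata: Dict[str, str] = field(default_factory=dict)      # application metadata
    sla_policies: Dict[str, str] = field(default_factory=dict)  # SLA / placement policies

def clone(snapshot: AppSnapshot, new_name: str) -> AppSnapshot:
    """Cloning reuses every captured construct, so the copy comes up as a
    fully functioning application with no hardcoding or manual restarts."""
    return AppSnapshot(app_name=new_name,
                       volumes=list(snapshot.volumes),
                       config=dict(snapshot.config),
                       metadata=dict(snapshot.metadata),
                       sla_policies=dict(snapshot.sla_policies))

prod = AppSnapshot("billing-db",
                   volumes=[VolumeSnapshot("pvc-data-0", "snap-2024-01")],
                   config={"replicas": "3"},
                   sla_policies={"tier": "gold"})
test_copy = clone(prod, "billing-db-test")
print(test_copy.app_name, len(test_copy.volumes))
```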
By carefully choosing the right infrastructure, operators can achieve a smooth transition to the edge. Life cycle automation, workflows, and the overall operations stack need to be unified, even when deployed over multiple locations and heterogeneous VM and container environments. An operator’s platform selection will dictate how far operations and resource silos are reduced, which saves costs and quickens time to outcome. In fact, the appropriate cloud-native platform has the potential to reduce the time needed for scale-out tasks from weeks to minutes.
Fully automated deployments are no longer just a goal, but the expectation. Telecom operators looking to rapidly deploy 5G services must consider new ways to remain agile and to eliminate both downtime and resource silos. Open-source platforms will soon become the norm for unlocking cloud-native potential and empowering MEC and RAN environments to become fully automated, streamlined, and cost-effective.
REFERENCES AND NOTES
1. Statista: Mobile Internet Usage Worldwide - Statistics & Facts report, Feb 2022, https://www.statista.com/topics/779/mobile-internet/
2. Statista: Container Management Software and Services Revenue Forecast Worldwide for 2020 and 2024 report, https://www.statista.com/statistics/792217/container-scale-management-worldwide/
3. Datadog: 10 Trends in Real-World Container Use report, October 2021, https://www.datadoghq.com/container-report/
ABOUT THE AUTHOR
Brooke Frischemeier is Sr. Director of Product Management at Robin.io, part of Rakuten Symphony. His most recent interests include edge computing, container orchestration, mobile networks, and service providers.
Brooke brings close to three decades of extensive experience in cloud, telecom and networking domains and has held leadership positions in organizations such as Cisco, World Wide Technology, NetNumber, FORE Systems, and Bell Laboratories. His experience spans product management, business development, partner management, system engineering and professional services.
For more information, visit https://www.robin.io/. Follow Robin.io on Twitter @Robin4K8S and LinkedIn: company/robin4k8s.
What in the World is Kubernetes?
Kubernetes, initially developed by Google and later donated to the open-source community under the Cloud Native Computing Foundation (CNCF), is among the most successful and fastest-growing open-source projects in history, with one of the largest communities of engineers from many companies actively contributing to the project.
Kubernetes is the de facto standard for container management. It delivers a second generation of virtualization by dividing applications into smaller components (containers) that abstract application code from the underlying infrastructure, simplifying version management and enabling portability across various deployment environments.