Telecom is an externally driven industry today. A wave of new applications is emerging: IPX, eHealth, Smart Cities, and the Internet of Things. And each of these requires expanding and flexing the ecosystem to manage a wide variety of partnerships with OTTs and other service providers.
But serving this larger ecosystem requires enormous changes to OSS infrastructure and the way networks are managed. So joining us now to shed light on these many challenges is Ken Dilbeck, VP of Collaboration at TM Forum.
Dan Baker: Ken, what do your service provider colleagues say about all the changes required to evolve the network and meet the needs of the digital ecosystem?
Ken Dilbeck: Dan, service providers are very much in the mode of “let’s prepare for change and learn as we go”. They’d love to run with a winning game plan today, but unfortunately, the business models that will drive this extended ecosystem are still emerging.
Plus the business models will vary quite a bit by region. For example, the model for delivering eHealth in India may be government-centric, but if you go to the US or UK, eHealth services may be financed quite differently. Lots of uncertainty out there.
But to prepare for the future, one big thing service providers are pushing these days is to get all their compute and storage assets available in a single IT pool.
Think about all the IT assets telecoms have at their Points of Presence. Imagine if you could unlock those storage and compute capabilities and turn them into virtualized resources. Well, it opens up some intriguing possibilities. For instance, during times of heavy compute load at a specific locale, under-utilized resources from other regions of the service provider’s network can be brought into play.
This enables telecoms to deliver something like the content delivery networks that companies such as Google and Akamai employ, caching content close to the network access point based on expected usage patterns. That reduces latency, especially when you need to serve a high volume of users in a certain geographic area.
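To make the idea concrete, here is a minimal Python sketch of that kind of placement decision. The `Region` class, the `pick_host_region` helper, and all the numbers are hypothetical illustrations, not any operator’s actual scheduler: a burst of demand lands on whichever under-utilized pool sits closest to the users.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """One pool of virtualized compute at a Point of Presence (illustrative)."""
    name: str
    capacity: int      # total compute units in the pool
    load: int          # compute units currently in use
    latency_ms: float  # expected latency to the users being served

    @property
    def headroom(self) -> int:
        return self.capacity - self.load

def pick_host_region(regions: list[Region], demand: int) -> Region | None:
    """Place a burst of demand on the under-utilized region closest to the users."""
    candidates = [r for r in regions if r.headroom >= demand]
    if not candidates:
        return None  # no single pool can absorb the burst
    return min(candidates, key=lambda r: r.latency_ms)

# Example: the local PoP is saturated, so the burst spills to a nearby region.
pools = [
    Region("local-pop", capacity=100, load=95, latency_ms=5),
    Region("regional-dc", capacity=500, load=200, latency_ms=20),
    Region("remote-dc", capacity=800, load=100, latency_ms=60),
]
print(pick_host_region(pools, demand=50).name)  # -> regional-dc
```

A real scheduler would weigh many more factors, but the core trade-off Ken describes, latency versus available headroom across pooled sites, is already visible in this toy version.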
Hardware is one thing. But operating their IT resources the way an Amazon or a Microsoft does requires a big mindset shift in business processes too.
Think of it as the transition from NetOps to DevOps. Today’s NetOps environment constrains you a great deal because of all the specialized hardware.
Whenever you need to make a new network capability go live, there’s a two- to three-month test cycle required because of the specialized telecom hardware. But in a DevOps, cloud-oriented world, adding new network functions is a matter of merely sliding cards into racks or deploying new software. That greatly lowers the cost of scaling the network.
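As a purely illustrative sketch of that DevOps flow, the Python below rolls a new network function out as a software image through an automated deploy, verify, and rollback loop. The `Orchestrator` class and its methods are assumptions for illustration, not a real NFV API.

```python
class Orchestrator:
    """A stand-in for whatever automation drives the virtualized pool."""

    def deploy(self, image: str, replicas: int) -> str:
        print(f"deploying {image} x{replicas} onto the virtualized pool")
        return f"{image}-deployment"

    def smoke_test(self, deployment: str) -> bool:
        print(f"running automated checks against {deployment}")
        return True

    def rollback(self, deployment: str) -> None:
        print(f"rolling back {deployment}")

def release_network_function(orc: Orchestrator, image: str) -> bool:
    """Deploy, verify, and keep or roll back a virtual network function."""
    deployment = orc.deploy(image, replicas=3)
    if orc.smoke_test(deployment):
        return True           # live in minutes, not months
    orc.rollback(deployment)  # automated recovery replaces the lab cycle
    return False

release_network_function(Orchestrator(), "vfirewall:2.1")
```

The point is the shape of the process: an automated pipeline with fast rollback stands in for the months-long hardware certification cycle.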
Now this shift to a DevOps way of thinking will require some major cultural change in telecom organizations. In fact, the cultural changes may be tougher to deal with than the technical stuff.
For instance, the services that an operator creates will be built from commodity components — they won’t be as specialized to the service provider’s own requirements as they are today.
But this new service environment requires you to very rapidly compose, deploy, and decommission services. So the boundaries between telecom, content, and OTT players will blur, because you need to combine components from many different players to provide a unique service.
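Here is a toy Python sketch of that composition idea: a service assembled from components owned by different players, activated, and then released just as quickly. All of the class and component names are hypothetical examples, not products or APIs from the interview.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provider: str  # telecom operator, content owner, OTT partner, ...

@dataclass
class ComposedService:
    name: str
    components: list[Component] = field(default_factory=list)

    def deploy(self) -> None:
        for c in self.components:
            print(f"activating {c.name} from {c.provider}")

    def decommission(self) -> None:
        for c in reversed(self.components):
            print(f"releasing {c.name} back to {c.provider}")

# A hypothetical eHealth service stitched together across three players.
service = ComposedService("eHealth-video-consult", [
    Component("edge-transcoder", "telecom operator"),
    Component("patient-portal", "OTT partner"),
    Component("records-api", "health content provider"),
])
service.deploy()
service.decommission()  # retired as rapidly as it was composed
```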
Operators know quite well they need new processes, and they recognize that if you’re operating from the cloud, you may have far less visibility into the physical location of the network than you have today.
Let’s talk about Network Function Virtualization, or NFV. I think there are strong parallels here to developments in the computing world that gave us the cloud, big data, and other commodity computing advances.
It’s true. And something interesting happened: as computing hardware got more capable, it was accompanied by a greater level of software complexity. Take Windows 8 for example: it’s a couple of orders of magnitude more complex to manage than DOS ever was.
Likewise, the management of NFV will be critical and we’re seeing a preview of that as computing scales to the cloud. A lot of layering is required to make that happen.
We think NFV is going to hit the industry much quicker than people expect, so service providers need to begin adapting their infrastructure today to accommodate future virtualization.
So yes, similar to what happened in computing, the drive for more open network systems will push NFV along. And when I say “open”, it’s not necessarily open source, but open systems in the sense of greater plug-and-play: an architecture flexible enough to allow easy upgrade and replacement of components.
Now while we truly believe that NFV will take off soon, the way the network hardware will shake out across Layer 2 and Layer 3 is a complete toss-up. There’s no consensus today and it’s a safe bet that some fresh and compelling solutions will pop up in the next few years from players who are not even in the market today.
One thing’s clear: NFV will continue the trend of dramatically lowering the cost of entry into the telecom business. You won’t need to make the tremendous investments that incumbents like AT&T and Verizon did to get to the same point.
Companies like Google or Amazon can say: I’ll develop my own optical switch, put in my own dark fiber, and in that way, bypass the need to conform to service providers’ business rules, hardware, and ways of doing things.
So the largest OTT players will increasingly want to take care of their own delivery thread top to bottom. Google is putting research dollars into creating virtual data centers based on swarm computing. It sounds a bit like science fiction, but it may be closer than we think.
And a trend related to NFV is Software Defined Networking, or SDN.
Yes, SDN is an enabler of NFV and there is a lot of synergy between the two. All indications are that SDN will become a major part of NFV deployments in the future.
Perhaps the biggest contribution on the SDN side will be improved network efficiency and cost savings, particularly the ability of the network to follow the sun, that is, to turn down servers when the load lightens, or even turn them off at night. In a wireless environment, for example, if you could turn off all the excess network capability not in use from 10pm to 6am, you would see a significant decrease in the cost of electricity and cooling.
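A toy Python sketch of that follow-the-sun logic, under made-up thresholds and a hypothetical controller: offered load is compared to active capacity, and servers are powered up or down to match.

```python
import math

def required_servers(load_mbps: float, per_server_mbps: float = 500.0,
                     minimum: int = 2) -> int:
    """How many servers the current load justifies, with a safety floor.

    The 500 Mbps-per-server figure and the floor of 2 are illustrative.
    """
    return max(minimum, math.ceil(load_mbps / per_server_mbps))

def rebalance(active: int, load_mbps: float) -> int:
    """Power servers up or down so capacity tracks the offered load."""
    target = required_servers(load_mbps)
    if target < active:
        print(f"load is light: powering down {active - target} servers")
    elif target > active:
        print(f"load is rising: powering up {target - active} servers")
    return target

# Daytime peak versus the 10pm-6am trough Ken mentions:
active = rebalance(active=2, load_mbps=4200)      # scale up for the day
active = rebalance(active=active, load_mbps=600)  # overnight: shed capacity
```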
Ken, what sort of programs is TM Forum running to support this transition to the digital service economy?
Well, certainly our Catalyst programs are the backbone of what we do at TM Forum. In Nice we showed examples of how operators are leveraging best practices in virtual and hybrid networks.
We are also very focused on end-to-end management. And we think TM Forum is uniquely positioned among the industry standards bodies to look across all these technologies, from OpenFlow to traditional telecom.
Then there’s educating service providers on the organizational challenges they face.
For instance, the digital transformation will have a big impact on procurement.
Take the way highly-specialized network hardware is procured today. You issue an RFP to Huawei and Alcatel-Lucent, and you get back 800 pages of compliance information. Compare that to taking your credit card and visiting the local electronics store and saying, “I’ll have two of those off the shelf.”
That’s a dramatic change, but it’s closer to the way your IT department buys generic hardware. So putting training and best practice programs in place in areas like procurement is another way TM Forum reaches out.
Thanks for the insights, Ken.
Copyright 2014 Black Swan Telecom Journal