
September 2014

Paradigm Shift in OSS Software: Network Topology Views via Enterprise-Search


“Throw a stone into the stream, and the circles that propagate themselves are a beautiful example of all influence.  Likewise, man is placed in the center of beings and a ray of relation passes from every other being to him.”   — Ralph Waldo Emerson, Nature 1836

Enterprise-search is the most frugal of big data’s creatures: it lives off the land and doesn’t need to be fed.

It acquires its data from any number of sources: machine logs, tables, emails, twitter feeds, Excel, call center conversations — you name it.  It even bypasses APIs and the data trolls who force you to pay a king’s ransom to cross their bridges.

Search also discovers the relations in data without the heavy lifting and expense of architecting data schemas or constantly modifying the business logic inside the labyrinth of enterprise data gardens.

LinkedIn is a perfect example of search’s power.  With a market cap of $28 billion, LinkedIn makes its money by searching and analyzing the ever-changing relations (or links) between people and the subject matter that unites them.

And yet, as successful as enterprise-search has become on the web, its influence has not yet rippled into the IT mainstream.  It’s still considered “experimental” technology: good for report generation and ad hoc analytics, but not for serious operational applications.

So this is precisely why Ontology’s recent announcement of the first successful “shrink-wrapped” enterprise-search OSS application for a Tier 1 operator is so significant.  It’s a clear sign that enterprise-search is now on the maturity fast track — and it can be at the core of genuine software, not just consulting services.

Here to discuss his company’s breakthrough and delve into a discussion on network inventory and the future of enterprise-search and topology visualization is Benedict Enweani, Ontology’s CEO.

Dan Baker: Benedict, first of all, congratulations.  The delivery of a large-scale enterprise-search application is exciting news for all of us big data enthusiasts.  Please tell us about it.

Benedict Enweani: Thanks, Dan.  About a year ago, we deployed the solution to a big Middle Eastern operator who has about 40 million subs.  We were initially hired to come in and create reports for them.  And they saw how we could generate multi-technology and multi-vendor views — see a full topology from the antenna and RAN all the way back through the various layers of microwave, backhaul, Ethernet and T1/E1 connections riding under that.

So they came back to us and said: “Why should we pay you all this money to get just reports from you?  Can’t we just pay you to provide an operational system we can use for network operations and troubleshooting?”

Needless to say we were thrilled to get that offer.  So that was the impetus to package up our search technology and sell it as a solution.  In turn, it’s generating interest from other telecoms and managed service providers.

What did the operator consider most valuable in the solution?

In a word, the ability to see “multi-dimensional topologies”: multi-vendor, multi-technology, and multi-layer.  And when you get that view, it becomes much easier to do network change management, troubleshooting, and service impact analysis.

This Middle Eastern operator had 8,000 different configurations it needed to bring in.  And they didn’t want to go into five different systems to find their answers.  So we delivered a tool intelligent enough to recognize the context described in different configuration sources and build a structure that explains them and fits them into a data model.

Another value we brought in was an ability to resolve tough data integrity problems that require comparing multiple sources.

In the past we would simply extract our data into reports and special interfaces, but we realized that everyone needs to visualize network topology, so that’s our primary output today.  And while topology sounds like a very network-specific concern, it’s actually a data integration problem.  The information you need to understand a topology can’t actually be discovered using SNMP; in fact, understanding only a single system is not enough.  You need to know how things cross many different systems.  You need to study how everything interacts with everything else.

So the data sources we pulled in were many, for instance: GE Smallworld, Siebel CRM, Twitter feeds, the mobile network, Cisco configurations of the fixed network, and addressable parts of the network from IBM.  Search semantically recognizes nodes, switches, cards, interfaces, and connections — in this case across 10 different network domains.

In the topology you see a link between the radio antenna and base station controller.  You notice the base station serves six different cells, and there’s a microwave backhaul for the first couple of miles and that terminates on an optical network and runs into the BSC.  So all that detail is there.  And if you want to drill down and see how a particular switch is set up and what cards are in the chassis, you can do that.

Now a program like this is usually a candidate for a network inventory solution.  But that’s a very difficult road to take.  I have personally been involved in several network inventory projects, and often the network inventory system never gets fully functional.  What we accomplished for the Middle East operator took us 6 months.  But if it had been a network inventory project, it would have taken three years.

So we figure our technology is lower cost and a better choice than a network inventory solution in many cases.

“Better than network inventory” is a bold statement, Benedict, and it goes against the conventional wisdom of the past 15 years which says that network inventory is the Holy Grail of what you need to do in OSS.

Yes, I realize it’s a bold statement, so let me step back and explain my view of the OSS world: its evolution and issues from an inventory point of view.

The first wave of inventory solutions was born in the TDM world where you simply documented facilities.  Then as IP emerged and greatly complicated things, folks like Cramer Systems and Granite came along with the idea that network inventory should be elevated to a kind of golden database and business process engine for provisioning the network.

However, history has shown there are big problems in keeping network inventory up to date.  And the complexity of accounting for many different types of services and equipment requires you to constantly create new SQL logic.  Then there’s the huge cost of manually or semi-automatically importing the data into inventory.  We’ve seen cases where it took 8 years to actually stand an inventory system up.  Plus, once you’re there, you need to ensure the inventory system is the exclusive way you provision, else you can quickly lose the bubble.

I figure the third inventory wave was Intelliden’s idea of reconciling against the actual network itself instead of trying to meticulously replicate it in an off-line database.  Intelliden’s concept was an important breakthrough except that you could only see active elements in the network: it gave you no visibility over all the passive elements out there.  Plus there are issues keeping up with the changes to all the vendors’ network equipment.

Network Data Alignment

So how does Ontology improve on the network inventory state-of-the-art?

Well we begin by recognizing that while getting detailed network views is key, going the extra step of automating provisioning may not be worth the effort, especially in large networks.

I worked for a company called Orchestream in the ’90s, where we built IP network activation software. [Orchestream was bought by MetaSolv, then later acquired by Oracle.]

Now the task we set out for ourselves at Orchestream was a tough one because we needed to tightly integrate with all the NMSs and EMSs.  We found this effort can succeed in constrained domains, for instance a network with only Cisco and Juniper network gear.  But if you strayed off and bought something from another provider, then the cost of integrating to that equipment was just huge.

So this is why Ontology limits itself to getting the network view for service assurance and network planning, but we stop short of provisioning or writing anything to the network.  In fact, our Middle Eastern operator was very satisfied that we went halfway.  They use our solution for service assurance and generating the topological views, then make manual or semi-automated changes on their own using various NMSs and EMSs.

One of the biggest drawbacks of inventory solutions is that every time you run into a new source of data or change the infrastructure, you need to insert new logic in the inventory software’s model, which is an expensive and error-prone task.

But with enterprise-search, if you add a new data source, you essentially create a new class and tell the system how objects of that class link to other things.  The search routine then automatically discovers and maps those objects into the topology.

For instance, to add a new type of antenna into your network you simply define how that antenna interacts with, say, your microwave links.  Then the next topology search would discover all those antenna nodes in your system.  Network nodes are constantly announcing how they relate to things they connect with.  For instance, there may be IP addresses in the antenna’s configuration, or maybe time slots associated with a SONET network within that equipment, so the system deduces that there’s a connection to an optical network.
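The class-and-link pattern described above can be sketched in a few lines.  This is an illustrative reconstruction only: the data, field names, and join rule are invented, not Ontology’s actual model.  The key idea is that a new source (antennas) is mapped into the topology by declaring how its objects relate to an existing source (microwave links) — here, by sharing an IP address.

```python
# Illustrative sketch of "define a class, declare how it links."
# All names and records are hypothetical.

ANTENNAS = [
    {"id": "ant-1", "ip": "10.0.0.5"},
    {"id": "ant-2", "ip": "10.0.0.9"},
]

MICROWAVE_LINKS = [
    {"id": "mw-1", "endpoint_ips": {"10.0.0.5", "10.0.0.6"}},
    {"id": "mw-2", "endpoint_ips": {"10.0.1.1", "10.0.1.2"}},
]

def discover_links(antennas, microwave_links):
    """Join the two sources on shared IP addresses to infer topology edges."""
    edges = []
    for ant in antennas:
        for link in microwave_links:
            if ant["ip"] in link["endpoint_ips"]:
                edges.append((ant["id"], link["id"]))
    return edges

print(discover_links(ANTENNAS, MICROWAVE_LINKS))  # [('ant-1', 'mw-1')]
```

Once the link rule exists, every subsequent topology search picks up new antenna records automatically — no change to the rest of the model.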

Each data source updates on its own clock.  Faults are coming in all the time.  Network discovery engines are usually programmed to discover the network 1 to 4 times a day.  CRMs can update on each transaction; however, that normally creates a heavy load, so operators tend to batch updates once an hour.  Physical inventory updates don’t usually happen faster than once a week.  And Twitter tweets are loading constantly.  Then, using a technique we call “finger-printing,” we detect abnormal behavior: if we normally receive an update from a source every hour and we haven’t seen one in the last 3 hours, that raises an alert.
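The finger-printing idea reduces to comparing each source’s silence against its expected update cadence.  A minimal sketch, with invented source names and an assumed tolerance of three missed intervals (matching the every-hour / three-hours example above):

```python
# Staleness check: alert when a source has been silent longer than
# `tolerance` times its expected update interval.  Cadences and the
# tolerance factor are illustrative assumptions.

from datetime import datetime, timedelta

EXPECTED_CADENCE = {
    "crm": timedelta(hours=1),            # batched once an hour
    "discovery": timedelta(hours=6),      # 1-4 network discoveries a day
    "physical_inventory": timedelta(weeks=1),
}

def stale_sources(last_seen, now, tolerance=3.0):
    """Return sources whose silence exceeds tolerance x their cadence."""
    alerts = []
    for source, cadence in EXPECTED_CADENCE.items():
        gap = now - last_seen[source]
        if gap > tolerance * cadence:
            alerts.append(source)
    return alerts

now = datetime(2014, 9, 1, 12, 0)
last_seen = {
    "crm": now - timedelta(hours=4),           # 4h silent on a 1h cadence
    "discovery": now - timedelta(hours=2),
    "physical_inventory": now - timedelta(days=2),
}
print(stale_sources(last_seen, now))  # ['crm']
```

Only the CRM feed trips the alert: four hours of silence against an hourly cadence exceeds the three-interval tolerance, while the other sources are well within theirs.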

In what network areas are you applying your enterprise-search technology?

Actually there are four or five very specific areas that customers find most useful for this technology.  Let me walk you through them:

  • Change management — You are going to make some network changes and you need to pick the most opportune time to make the change, and to know how the change will impact your SLAs with customers.  And will changes that others are making at the same time possibly create a catastrophe?
  • Network troubleshooting — If fifty 3G cells go down at one time, you ask yourself: what’s common for those 50 cells?  And the system comes back and tells you they all use the same transmission node.  So such root cause analysis can be done across a multi-layer system view.  Theoretically you can do this in an inventory system, but in the seven years that my partner Leo and I have been working on this project, we have actually never seen an inventory system that spans more than 2 technologies.
  • Service Impact Analysis — If a key DWDM switch goes down, which antennas are affected?  Which service centers are affected?  And which of the actual enterprise customers are affected?
  • Enterprise Customer Topology — This is a very interesting one and a very large operator in the UK needed this capability.  Take a financial firm like Lloyds Bank.  They have thousands of network delivery points and it’s very useful to search the carrier’s infrastructure and pull together a topology showing where service is running poorly and where there are issues.  For each enterprise client, we ended up saving the carrier many days of work trying to pull this same information from multiple sources.
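The multi-layer root-cause step in the troubleshooting use case above is, at its core, a set intersection over each failed cell’s upstream path.  A minimal sketch with invented node names (the real system spans many more layers and domains):

```python
# Root-cause sketch for "fifty 3G cells go down at once": intersect
# the upstream elements each failed cell depends on; whatever survives
# the intersection is a common-cause candidate.  All names invented.

DEPENDS_ON = {
    "cell-1": {"bsc-1", "mw-7", "tx-node-3"},
    "cell-2": {"bsc-2", "mw-9", "tx-node-3"},
    "cell-3": {"bsc-1", "mw-8", "tx-node-3"},
}

def common_cause(failed_cells, depends_on):
    """Return elements shared by the upstream paths of every failed cell."""
    paths = [depends_on[cell] for cell in failed_cells]
    return set.intersection(*paths)

print(common_cause(["cell-1", "cell-2", "cell-3"], DEPENDS_ON))
# {'tx-node-3'}
```

The hard part in practice is not the intersection but building the multi-layer `DEPENDS_ON` map itself — which is exactly what the cross-source topology discovery provides.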

So these four use cases are where the lion’s share of our ROI comes from.  So you are saving people time digging up vital information or decreasing their SLA violations.  Or you are decreasing the minutes of downtime, or maybe increasing service resilience.

And the beauty here is that an operator can achieve all these things without the expense of building out a new inventory system.

Benedict, thanks for this highly interesting briefing.  Good luck as you roll this solution out — and please keep us informed of your progress.

Copyright 2014 Black Swan Telecom Journal

Benedict Enweani


As CEO and co-founder of Ontology Systems, Benedict aims to make his company the leading innovator of semantic solutions in the data center and networking marketplace.  Before founding Ontology in 2005, Benedict was CTO at Corvil Networks and CTO at Orchestream (now Oracle).   Contact Benedict via

Black Swan Solution Guides & Papers

Swans of a Feather

  • Delivering Service Assurance Excellence at a Reduced Operating Cost interview with Gregg Hara — The great diversity and complexity of today’s networks make service assurance a big challenge.  But advances in off-the-shelf software now permit the configuring and visualizing of services across multiple technologies on a modest operating budget.
  • Paradigm Shift in OSS Software: Network Topology Views via Enterprise-Search interview with Benedict Enweani — Enterprise-search is a wildly successful technology on the web, yet its influence has not yet rippled to the IT main stream.  But now a large Middle Eastern operator has deployed a major service assurance application using enterprise-search.  The interview discusses this multi-dimensional topology solution and compares it to traditional network inventory.
  • The Multi-Vendor MPLS: Enabling Tier 2 and 3 Telecoms to Offer World-Class Networks to SMBs interview with Prabhu Ramachandran — MPLS is a networking technology that has caught fire in the last decade.  Yet the complexity of MPLS has relegated it to being mostly a large carrier solution.  Now a developer of multi-vendor MPLS solutions explains why the next wave of MPLS adoption will come from tier 2/3 carriers supporting SMB customers.
  • Big Data: Is it Ready for Prime Time in Customer Experience Management? interview with Thomas Sutter — Customer experience management is one of the most challenging of OSS domains and some suppliers are touting “big data” solutions as the silver bullet for CEM upgrades and consolidation.  This interview challenges the readiness of big data solutions to tackle OSS issues and deliver the cost savings.  The article also provides advice on managing technology risks, software vendor partnering, and the strategies of different OSS suppliers.
  • Calculated Risk: The Race to Deliver the Next Generation of LTE Service Management interview with Edoardo Rizzi — LTE and the emerging heterogeneous networks are likely to shake up the service management and customer experience management worlds.  Learn about the many new network management challenges LTE presents, and how a small OSS software firm aims to beat the big established players to market with a bold new technology and strategy.
  • Mobile Broadband: The Customer Service Assurance Challenge interview with Michele Campriani — iPhone and Android traffic is surging but operators struggle with network congestion and dropping ARPUs.  The answer?  Direct resources and service quality measures to ensure VIPs are indeed getting the quality they expect.  Using real-life examples that cut to the chase of technical complexities, this article explains the chief causes of service quality degradation and describes efficient ways to deal with the problem.
