
August 2013

The Shrink-Wrapped Search Engine: Why It’s Becoming a Vital Tool in Telecom Analytics

In its early days, Google faced a computing challenge of epic proportions: how to catalog the internet so it could profitably grow its web search and Google Ads business.

The mission required massive disk space and lightning-fast response, and building a solution around standard commercial hardware and databases would have been far too costly.  So what did Google do?  It invented a globally distributed search engine that lives in many mammoth data centers, each populated with thousands of commodity CPUs and disks.

Fifteen years later, the same technology that drives Google and other search engines has been productized into software, basically shrink-wrapped search engines.  They are quickly becoming a standard tool in business analytics.

Splunk is a fast-growing success story in this new software category.  The firm IPOed in 2012 and now has 5,600 customers and $198 million in annual revenue (2012).

Here to explain the Splunk business and how its product plays in the telecom industry is Tapan Bhatt, the firm’s senior director of solutions marketing.

Dan Baker: Tapan, it would be great if you could explain what your software solution is about.

Tapan Bhatt: Dan, when people say “big data”, they often think of Twitter or Facebook data, but at Splunk we consider social media as just one example of the broader category of machine data.  Machine data is everything coming off mobile devices, emails, web pages, security logs, and much more.  IDC estimates that 90% of the data an organization owns is machine data that lives outside relational databases.

So Splunk allows companies to have full access to that information for many purposes: analytics, compliance, troubleshooting, and security.  Splunk software is especially useful where the data is either too costly to integrate in relational databases or the data needs to be accessed in real time.

Splunk is completely agnostic to data source and data type, so even with widely varying sources and formats you can still make sense of the data.  Let’s take the example of a customer escalation issue at a telecom company:

You start off with multiple data sources, say, order processing, middleware, care/IVR data, and Twitter feeds.  At first, the data may not make sense — some of the data is simply blocks of text.  But by picking up a customer ID here and a product ID there, you can correlate and trace the entire process of the telecom service.

Let’s say the customer tried to place an order, but an error in the middleware made the order fail.  The customer then calls customer service, is put on hold for 10 minutes, gets frustrated, and complains on Twitter.  By analyzing the machine data, you can correlate the customer ID from the call all the way back to the original order and identify the middleware error that ultimately resulted in the tweet, so you can address the issue.

This is an example of what’s possible with machine data.
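
The correlation described above can be sketched roughly in Python (rather than Splunk’s own search language; the records, field names, and IDs below are all invented for illustration): the customer’s trail is stitched together by joining on a customer ID where it exists and falling back to the order ID for sources, like middleware logs, that lack one.

```python
# Hypothetical, simplified event records from four machine data sources.
orders = [{"ts": "2013-08-01T09:00", "customer_id": "C42", "order_id": "O7", "status": "submitted"}]
middleware = [{"ts": "2013-08-01T09:01", "order_id": "O7", "error": "TIMEOUT"}]
care_calls = [{"ts": "2013-08-01T09:20", "customer_id": "C42", "hold_minutes": 10}]
tweets = [{"ts": "2013-08-01T09:35", "customer_id": "C42", "text": "10 min on hold, order failed!"}]

def trace_customer(customer_id):
    """Stitch events from all sources into one time-ordered trail."""
    events = []
    for source, records in [("order", orders), ("care", care_calls), ("twitter", tweets)]:
        events += [dict(r, source=source) for r in records if r.get("customer_id") == customer_id]
    # Middleware logs carry no customer ID; join them in via the order ID instead.
    order_ids = {e["order_id"] for e in events if "order_id" in e}
    events += [dict(r, source="middleware") for r in middleware if r["order_id"] in order_ids]
    return sorted(events, key=lambda e: e["ts"])

trail = trace_customer("C42")
print([e["source"] for e in trail])  # order, middleware, care, twitter, in time order
```

The point of the sketch is the join logic, not the data volumes: each source contributes a fragment of the story, and the shared IDs let the full sequence, from failed order to angry tweet, be reconstructed.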

What’s the technology behind Splunk?

Basically, the technology comes in three parts: data collection, data indexing, and an analytics query language called the Splunk Search Processing Language.  The collection function pulls in data in any format, then the indexing organizes the data for fast analysis using the Search Processing Language.  What’s really unique about analytics in Splunk is that it happens on demand, with the schema applied on the fly, versus the traditional slow and costly practice of normalizing data and loading it into a warehouse before analysis.

The core technology is the ability to search and correlate data across many discrete datasets.  You can do reporting or alerting; you can create custom dashboards in the interface.  Then there are software development kits and APIs that allow third party developers to build things on top of Splunk.

One of the hardest things to wrap my mind around is the fact that Splunk’s machine data is not stored in a database: the fields and rows of an Excel spreadsheet form a paradigm that is hard to shake.

Yes, underneath Splunk, the data is not stored in a relational database.  Think of Splunk as a highly efficient file system that lives across distributed computers and allows you to do real-time search and analytics.

Machine data is relatively unstructured.  Each item has a time stamp, but most everything else is unstructured.  Data warehouses and relational databases rely on a schema, which means you need to understand the metadata structure up front.  With Splunk you extract the schema at search time, after the data is indexed, so a new data source is available for analysis immediately.
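
The schema-on-read idea can be illustrated with a toy Python sketch (the log lines and field names are invented, and this is not Splunk’s implementation): fields are extracted from raw text at query time, so nothing needs to be modeled in advance.

```python
import re

# Raw, unstructured machine data: each event is just a timestamp plus text.
raw_events = [
    '2013-08-01T09:00:01 user=alice action=login status=200',
    '2013-08-01T09:00:05 user=bob action=download song_id=S9 status=500',
]

def search(raw, **criteria):
    """Extract key=value fields at search time (schema on read), then filter."""
    results = []
    for line in raw:
        fields = dict(re.findall(r'(\w+)=(\S+)', line))  # schema built per event
        if all(fields.get(k) == v for k, v in criteria.items()):
            results.append(fields)
    return results

print(search(raw_events, status='500'))  # bob's failed download
```

Notice that a brand-new field like `song_id` becomes searchable the moment it appears in the data, with no table alteration or reload step.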

In a data warehouse you have connectors within an ETL (Extract, Transform, Load) process that periodically capture new records and add them to the system.  Splunk has no such connectors.  Instead it uses Forwarders, which listen for data and append it to the file system automatically.
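
A minimal Python sketch of what a forwarder-style collector does (the class and method names are invented for illustration, not Splunk’s actual API): rather than running a periodic batch load, it watches a file and appends anything new to the index as soon as it arrives.

```python
import os
import tempfile

# A toy "forwarder": watch a log file and append new lines to an
# in-memory index, the way a forwarder streams data continuously.
class Forwarder:
    def __init__(self, path):
        self.path = path
        self.offset = 0          # how far into the file we have read
        self.index = []          # stands in for the indexed file system

    def poll(self):
        """Read and index anything written since the last poll."""
        with open(self.path) as f:
            f.seek(self.offset)
            new_lines = f.readlines()
            self.offset = f.tell()
        self.index.extend(line.rstrip("\n") for line in new_lines)
        return len(new_lines)

# Demo: events show up in the index as soon as the source file grows.
path = tempfile.mktemp()
with open(path, "w") as f:
    f.write("event 1\n")
fwd = Forwarder(path)
fwd.poll()
with open(path, "a") as f:
    f.write("event 2\n")
fwd.poll()
print(fwd.index)
os.remove(path)
```

The contrast with ETL is the absence of any transform step: data lands in the index in its raw form, and structure is applied later at search time.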

Now the other beauty of this approach is it saves money: you avoid the cost of database licenses and you can scale to terabyte size deployments linearly using commodity hardware.  Multiple machines are processing the data in parallel as the data volume increases.  This significantly reduces your hardware costs.

In a great many use cases, the Splunk approach compares very favorably with relational databases.  And when it comes to search and ad hoc correlations, Splunk delivers greater power than a database: you can ask richer questions of the data, and you’re not constrained to a certain type of search.

And yet I understand you now allow users to pull data directly from relational databases.

Yes, four months ago we launched DBConnect and it’s been one of the hottest downloads off our website.

DBConnect enables Splunk users to enrich machine data with structured data from multiple databases.  So if you pull in data such as customer IDs, product IDs, or the regions customers live in, those fields are inserted into the machine data.  These lookups give users access to richer profile information.
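
A hypothetical sketch of that enrichment pattern in Python (the table, field names, and `enrich` helper are invented; a real deployment would fetch the rows over a database connection): columns looked up by customer ID are merged straight into each machine-data event.

```python
# Machine-data events carry only IDs; a relational "customer" table
# (a stand-in for rows fetched from a database) holds the profile fields.
events = [
    {"ts": "09:00", "customer_id": "C42", "action": "order_failed"},
    {"ts": "09:35", "customer_id": "C7", "action": "login"},
]
customer_table = {
    "C42": {"name": "Alice", "region": "Northeast", "plan": "unlimited"},
    "C7":  {"name": "Bob",   "region": "Midwest",   "plan": "prepaid"},
}

def enrich(events, table, key="customer_id"):
    """Insert the looked-up columns into each matching event."""
    return [dict(e, **table.get(e[key], {})) for e in events]

enriched = enrich(events, customer_table)
print(enriched[0]["region"])  # Northeast
```

Once enriched this way, a report can be broken out by region or plan even though the raw machine data never contained those fields.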

What about customers and users in the telecom market?

Dan, we serve a broad range of industries in finance, government, media, telecom and others.  One of our largest installations is at one of North America’s biggest mercantile exchanges, where Splunk monitors 25 million trades a day for security.

Our telecom customers include firms like Verizon, China Mobile, Telstra, CenturyLink, Taiwan Mobile, KDT, and NTT.  And those firms are using Splunk in a wide number of use cases — in security, quality of service monitoring, network capacity planning, and fraud analysis to name a few.

A good way to explain the possibilities in telecom is to run through a quick use case in the mobile world, so why don’t we do that.

Mobile Service Profitability & Optimization Case

Consider a hypothetical mobile operator offering an unlimited song-download service; it is eager to analyze usage of that service on its mobile devices to optimize the service and its profitability.  The operator connected three machine data sources to Splunk:

  • Radius authentication data is used to track log-in data to verify customers are accessing the right resources and services.
  • Web data is brought in to find out exactly what songs are being accessed.
  • Business Process Management (BPM) logs are a collection of logs and other transactional data pulled from the middleware stack, generated by order management and billing applications.

Now correlating the data from these three very different machine data types is not a trivial exercise, yet doing so is a powerful feature of the Splunk engine.

For example, Radius authenticates and tracks the identity of a particular user in a particular IP session.  But what happens when the user logs in an hour later and starts a new session?  How do you group the webpages an individual user visited on a particular day?

Well, what you can do is ask Splunk to merge the Radius logs and the web logs to create a “transaction” and use that transaction to track user activity sequentially no matter how many sessions were started or websites were visited.
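
The merge can be sketched in Python (again with invented field names and records rather than real Radius or web log formats): each web hit is attributed to whichever user most recently authenticated on that IP, so one user’s activity is grouped across sessions.

```python
# Radius logs map an IP session to a user; web logs record page hits by IP.
radius = [
    {"ts": 100, "user": "alice", "ip": "10.0.0.5"},   # first session
    {"ts": 500, "user": "alice", "ip": "10.0.0.9"},   # later session, new IP
    {"ts": 300, "user": "bob",   "ip": "10.0.0.5"},   # same IP, reassigned
]
web = [
    {"ts": 120, "ip": "10.0.0.5", "url": "/songs/1"},
    {"ts": 320, "ip": "10.0.0.5", "url": "/songs/2"},
    {"ts": 510, "ip": "10.0.0.9", "url": "/songs/3"},
]

def transactions(radius, web):
    """Attribute each web hit to the latest user holding that IP at the time."""
    grants = sorted(radius, key=lambda r: r["ts"])
    txn = {}
    for hit in sorted(web, key=lambda h: h["ts"]):
        owner = None
        for g in grants:
            if g["ip"] == hit["ip"] and g["ts"] <= hit["ts"]:
                owner = g["user"]          # most recent grant before the hit wins
        if owner:
            txn.setdefault(owner, []).append(hit["url"])
    return txn

print(transactions(radius, web))
# alice's hits are grouped across two sessions on two different IPs
```

Note how alice’s activity is tracked even though her second session came from a different IP, and how the reassigned IP 10.0.0.5 is correctly credited to bob after his log-in.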

And once the transactions are created, you can search for specific events.  You can geographically map the BPM data for a particular time period to, say, track purchases of iPhones from a particular zip code and show the results on a Google map.

You can create a CEO-level dashboard in Splunk that tracks average revenue per user or, say, the number of iPhone orders.  You can also do external lookups, say against a reverse DNS file, allowing you to dramatically enrich the data in Splunk.  And with DBConnect, you can bring in data directly from a relational database.

Finally you can distribute reports.  If you want, you can schedule a PDF file of a particular report to be distributed to the marketing group at 9:00 am every Monday.  Likewise, you can have the results dumped to CSV files for viewing in Excel.  Plus APIs in Splunk enable report access in third party analytics solutions.

Sounds like the solution is very flexible.  Tell me, how do you price Splunk?

We charge based on the volume of data indexed per day.  Our pricing is tiered and starts at 500 megabytes to 1 gigabyte per day, then 1 to 5 gigabytes, 5 to 10 gigabytes, and so on.

At the low end, the cost of one use case is in the $5,000 to $10,000 range.  Of course, the biggest companies are paying more than $1 million a year.

Another advantage is that your time-to-value is fairly quick, with only a small investment.  With a traditional database provider, you could easily spend $1 million on software and another $4 million on services — and that might take you a year to deploy.  At Splunk, however, our revenue from services is basically negligible.

By the way, the Splunk engine scales all the way from desktop to enterprise.  The same product that processes only 500 megabytes of data also serves a customer processing 100 terabytes a day, with analytics capabilities on approximately 10 petabytes of historical data.

How quickly can users get up to speed on the product?

What gets people started with Splunk is downloading the product for free from our website and trying it out in their area of interest.

We have a one-hour tutorial and in that time you can learn how to index the files.

There’s a straightforward way through APIs to use SQL commands, but you don’t need to know SQL.  The vast majority of users employ our Search Processing Language (SPL), which features over 100 statistical commands.

Using SPL, the user can extract particular fields and even utilize some predictive features in the language.
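
To make that concrete, here is a rough Python analogue of an SPL-style aggregation along the lines of `stats avg(revenue) by region` (the SPL fragment is illustrative, and the data and function names below are invented): extract a field, group by another, and compute a statistic.

```python
from statistics import mean

# Toy events with already-extracted fields.
events = [
    {"region": "east", "revenue": 10.0},
    {"region": "east", "revenue": 30.0},
    {"region": "west", "revenue": 20.0},
]

def stats_avg_by(events, value_field, by):
    """Group events by one field and average another, stats-command style."""
    groups = {}
    for e in events:
        groups.setdefault(e[by], []).append(e[value_field])
    return {k: mean(v) for k, v in groups.items()}

print(stats_avg_by(events, "revenue", "region"))  # {'east': 20.0, 'west': 20.0}
```

In SPL such aggregations chain together as a pipeline, so the output of one command (a search, a field extraction, a transaction) feeds directly into the next.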

The learning curve is quite fast.  We have a guy on staff who was an Oracle DBA in a previous life.  He says that in a day or two he was able to get 90% of the functionality he had with a sophisticated database.

Thank you, Tapan.

Copyright 2013 Black Swan Telecom Journal

Tapan Bhatt

Tapan Bhatt is Senior Director of Product Marketing at Splunk.  He has broad experience in corporate marketing, product management, and business development.  Prior to joining Splunk in 2011, he held key marketing roles at Vendavo and Siebel Systems.

He has a BS degree in Chemical Engineering from the Birla Institute of Technology and Science and an MBA from the Graduate School of Business, University of Chicago.

Black Swan Solution Guides & Papers

Swans of a Feather

  • A Big Data Starter Kit in the Cloud: How Smaller Operators Can Get Rolling with Advanced Analytics interview with Ryan Guthrie — Medium to small operators know “big data” is out there alright, but technical staffing and cost issues have held them back from implementing it.  This interview discusses the advantages of moving advanced analytics to the cloud where operators can get up and running faster and at lower cost.
  • Telecoms Failing Badly in CAPEX: The Desperate Need for Asset Management & Financial Visibility interview with Ashwin Chalapathy — A 2012 PwC report put the telecom industry on the operating table, opened the patient up, and discovered a malignant cancer: poor network CAPEX management, a problem that puts telecoms in grave financial risk.  In this interview, a supplier of network analytics solutions provides greater detail on the problem and lays out its prescription for deeper asset management, capacity planning and data integrity checks.
  • History Repeats: The Big Data Revolution in Telecom Service Assurance interview with Olav Tjelflaat — The lessons of telecom software history teach that new networks and unforeseen industry developments have an uncanny knack for disrupting business plans.  A service assurance incumbent reveals its strategy for becoming a leader in the emerging network analytics and assurance market.
  • From Alarms to Analytics: The Paradigm Shift in Service Assurance interview with Kelvin Hall — In a telecom world with millions of smart devices, the service assurance solutions of yesteryear are not getting the job done.  So alarm-heavy assurance is now shifting to big data solutions that deliver visual, multi-layered, and fine-grained views of network issues.  A data architect who works at large carriers provides an inside view of the key service provider problems driving this analytics shift.
  • The Shrink-Wrapped Search Engine: Why It’s Becoming a Vital Tool in Telecom Analytics interview with Tapan Bhatt — Google invented low cost, big data computing with its distributed search engine that lives in mammoth data centers populated with thousands of commodity CPUs and disks.  Now search engine technology is available as “shrink wrapped” enterprise software.  This article explains how this new technology is solving telecom analytics problems large and small.
  • Harvesting Big Data Riches in Retailer Partnering, Actionable CEM & Network Optimization interview with Oded Ringer — In the analytics market there’s plenty of room for small solution firms to add value through a turnkey service or cloud/licensed solution.  But what about large services firms: where do they play?  In this article you’ll learn how a global services giant leverages data of different types to help telcos: monetize retail partnerships, optimize networks, and make CEM actionable.
  • Raising a Telco’s Value in the Digital Ecosystem: One Use Case at a Time interview with Jonathon Gordon — The speed of telecom innovation is forcing software vendors to radically adapt and transform their business models.  This article shows how a deep packet inspection company has expanded into revenue generation, particularly for mobile operators.  It offers a broad palette of value-adding use cases from video caching and parental controls to application charging and DDoS security protection.
  • Radio Access Network Data: Why It’s Become An Immensely Useful Analytics Source interview with Neil Coleman — It’s hard to overstate the importance of Radio Access Network (RAN) analytics to a mobile operator’s business these days.  This article explains why RAN data, which lives in the air interface between the base station and the handset, can be used for business benefit in network optimization and customer experience.
  • Analytics Biology: The Power of Evolving to New Data Sources and Intelligence Gathering Methods interview with Paul Morrissey — Data warehouses create great value, yet it’s now time to let loose non-traditional big data platforms that create value in countless pockets of operational efficiency that have yet to be fully explored.  This article explains why telecoms must expand their analytics horizons and bring on all sorts of new data sources and novel intelligence gathering techniques.
  • Connecting B/OSS Silos and Linking Revenue Analytics with the Customer Experience by Anssi Tauriainen — Customer experience analytics is a complex task that flexes B/OSS data to link the customer’s network experience to actions that improve it and drive greater revenue.  In this article, you’ll gain an understanding of how analytics data needs to be managed across various customer life cycle stages and why it’s tailored for six specific user groups at the operator.
  • Meeting the OTT Video Challenge: Real-Time, Fine-Grain Bandwidth Monitoring for Cable Operators interview with Mark Trudeau — Cable operators in North America are being overwhelmed by the surge in video and audio traffic.  In this article you’ll learn how Multi Service Operators (MSOs) are now monitoring their traffic to make critical decisions to protect QoS service and monetize bandwidth.  Also featured is expert perspective on trends in: network policy; bandwidth caps; and  customer care issues.
  • LTE Analytics:  Learning New Rules in Real-Time Network Intelligence, Roaming and Customer Assurance interview with Martin Guilfoyle — LTE is telecom’s latest technology darling, and this article goes beyond the network jargon, to explain the momentous changes LTE brings.  The interview delves into the marriage of IMS, high QoS service delivery via IPX, real-time intelligence and roaming services, plus the new customer assurance hooks that LTE enables.

Related Articles

  • Tokopedia, Indonesia’s E-Commerce King, Partners with 11 Million Merchants; Adopts Multi-Cloud to Drive Innovation interview with Warren Aw & Ryan de Melo — Indonesia’s Tokopedia, founded in 2009, has grown to become one of world’s leading e-commerce players.  Read about its success, technology direction, and multi-cloud connectivity adoption.
  • Bridge Alliance: Knocking Down Regional & Mobile Connectivity Barriers so Connected Car Markets Get Rolling in Asia interview with Kwee Kchwee — The CEO of an Asian consortium of mobile operators explains how they help simplify and harmonize their members’ operations in support of multi-national corporations.  This integration is enabling two huge industries to come together in Asia: auto manufacturing and telco.
  • Epsilon’s Infiny NaaS Platform Brings Global Connection, Agility & Fast Provision for IoT, Clouds & Enterprises in Southeast Asia, China & Beyond interview with Warren Aw — Network as a Service, powered by Software Defined Networks, is a faster, more agile, and more partner-friendly way of making global data connections.  A leading NaaS provider explains the benefits for cloud apps, enterprise IT, and IoT.
  • PCCW Global: On Leveraging Global IoT Connectivity to Create Mission Critical Use Cases for Enterprises interview with Craig Price — A leading wholesale executive explains the business challenges of the current global IoT scene as it spans many spheres: technical, political, marketing, and enterprise customer value creation.
  • Senet’s Cloud & Shared Gateways Drive LoRaWAN IoT Adoption for Enterprise Businesses, Smart Cities & Telecoms interview with Bruce Chatterley — An IoT network pioneer explains how LoRaWAN tech fits in the larger IoT ecosystem.  He gives use case examples, describes deployment restraints/costs, and shows how partnering, gateway sharing, and flexible deployment options are stimulating growth.
  • ARM Data Center Software’s Cloud-Based Network Inventory Links Network, Operations, Billing, Sales & CRM to One Database interview with Joe McDermott & Frank McDermott — A firm offering a cloud-based network inventory system explains the virtues of: a single underlying database, flexible conversions, task-checking workflow, new software business models, views that identify stranded assets, and connecting to Microsoft’s cloud platform.
  • Pure Play NFV: Lessons Learned from Masergy’s Virtual Deployment for a Global Enterprise interview with Prayson Pate — NFV is just getting off the ground, but one cloud provider to enterprises making a stir in virtual technology waters is Masergy.  Here are lessons learned from Masergy’s recent global deployment using a NFV pure play software approach.
  • The Digital Enabler: A Charging, Self-Care & Marketing Platform at the Core of the Mobile Business interview with Jennifer Kyriakakis — The digital enabler is a central platform that ties together charging, self-care, and marketing.  The article explains why leading operators consider digital enablers pivotal to their digital strategies.
  • Delivering Service Assurance Excellence at a Reduced Operating Cost interview with Gregg Hara — The great diversity and complexity of today’s networks make service assurance a big challenge.  But advances in off-the-shelf software now permit the configuring and visualizing of services across multiple technologies on a modest operating budget.
  • Are Cloud-Based Call Centers the Next Hot Product for the SMB Market? interview with Doron Dovrat — Quality customer service can improve a company’s corporate identity and drive business growth.  But many SMBs are priced out of acquiring modern call center technology.  This article explains the benefits of affordable and flexible cloud-based call centers.
  • Flexing the OSS & Network to Support the Digital Ecosystem interview with Ken Dilbeck — The need for telecoms to support a broader digital ecosystem requires an enormous change to OSS infrastructures and the way networks are being managed.  This interview sheds light on these challenges.
  • Crossing the Rubicon: Is it Time for Tier Ones to Move to a Real-Time Analytics BSS? interview with Andy Tiller — Will tier one operators continue to maintain their quilt work of legacy and adjunct platforms — or will they radically transform their BSS architecture into a new system designed to address the new telecom era?  An advocate for radical transformation discusses: real-time analytics, billing for enterprises, partnering mashups, and on-going transformation work at Telenor.
  • Paradigm Shift in OSS Software: Network Topology Views via Enterprise-Search interview with Benedict Enweani — Enterprise-search is a wildly successful technology on the web, yet its influence has not yet rippled to the IT main stream.  But now a large Middle Eastern operator has deployed a major service assurance application using enterprise-search.  The interview discusses this multi-dimensional topology solution and compares it to traditional network inventory.
  • The Multi-Vendor MPLS: Enabling Tier 2 and 3 Telecoms to Offer World-Class Networks to SMBs interview with Prabhu Ramachandran — MPLS is a networking technology that has caught fire in the last decade.  Yet the complexity of MPLS has relegated it to being mostly a large-carrier solution.  Now a developer of multi-vendor MPLS solutions explains why the next wave of MPLS adoption will come from tier 2/3 carriers supporting SMB customers.
  • Enabling Telecoms & Utilities to Adapt to the Winds of Business Change interview with Kirill Rechter — Billing is in the midst of momentous change.  Its value is no longer just around delivering multi-play services or sophisticated rating.  In this article you’ll learn how a billing/CRM supplier has adapted to the times by offering deeper value around the larger business issues of its telecom and utility clients.
  • Driving Customer Care Results & Cost Savings from Big Data Facts interview with Brian Jurutka — Mobile broadband and today’s dizzying array of app and network technology present a big challenge to customer care.  In fact, care agents have a hard time staying one step ahead of customers who call to report problems.  But network analytics comes to the rescue with advanced mobile handset troubleshooting and an ability to put greater intelligence at the fingertips of highly trained reps.
  • Hadoop and M2M Meet Device and Network Management Systems interview with Eric Wegner — Telecom big-data in networks is more than customer experience management: it’s also about M2M plus network and element management systems.  This interview discusses the explosion in machine-to-machine devices, the virtues and drawbacks of Hadoop, and the network impact of shrink-wrapped search.
  • The Data Center & Cloud Infrastructure Boom: Is Your Sales/Engineering Team Equipped to Win? by Dan Baker — The build-out of enterprise clouds and data centers is a golden opportunity for systems integrators, carriers, and cloud providers.  But the firms who win this business will have sales and engineering teams who can drive an effective and streamlined requirements-to-design-to-order process.  This white paper points to a solution — a collaborative solution designs system — and explains 8 key capabilities of an ideal platform.
  • Big Data: Is it Ready for Prime Time in Customer Experience Management? interview with Thomas Sutter — Customer experience management is one of the most challenging of OSS domains and some suppliers are touting “big data” solutions as the silver bullet for CEM upgrades and consolidation.  This interview challenges the readiness of big data solutions to tackle OSS issues and deliver the cost savings.  The article also provides advice on managing technology risks, software vendor partnering, and the strategies of different OSS suppliers.
  • Calculated Risk: The Race to Deliver the Next Generation of LTE Service Management interview with Edoardo Rizzi — LTE and the emerging heterogeneous networks are likely to shake up the service management and customer experience management worlds.  Learn about the many new network management challenges LTE presents, and how a small OSS software firm aims to beat the big established players to market with a bold new technology and strategy.
  • Decom Dilemma: Why Tearing Down Networks is Often Harder than Deploying Them interview with Dan Hays — For every new 4G LTE and IP-based infrastructure deployed, there is typically a legacy network that’s been rendered obsolete and needs to be decommissioned.  This article takes you through the many complexities of network decom, such as facilities planning, site lease terminations, green-safe equipment disposal, and tax relief programs.
  • Migration Success or Migraine Headache: Why Upfront Planning is Key to Network Decom interview with Ron Angner — Shutting down old networks and migrating customers to new ones is among the most challenging activities a network operator does today.  This article provides advice on the many network issues surrounding migration and decommissioning.  Topics discussed include inventory reconciliation, LEC/CLEC coordination, and protection of customers in the midst of projects that require great program management skills.
  • Navigating the Telecom Solutions Wilderness: Advice from Some Veteran Mountaineers interview with Al Brisard — Telecom solutions vendors struggle mightily to position their solutions and figure out what to offer next in a market where there’s considerable product and service crossover.  In this article, a veteran order management specialist firm lays out its strategy for mixing deep-bench functional expertise with process consulting, analytics, and custom API development.
  • Will Telecoms Sink Under the Weight of their Bloated and Out-of-Control Product Stacks? interview with Simon Muderack — Telecoms pay daily for their lack of product integration as they constantly reinvent product wheels, lose customer intelligence, and waste time/money.  This article makes the case of an enterprise product catalog.  Drawing on central catalog cases at a few Tier 1 operators, the article explains the benefits: reducing billing and provisioning costs, promoting product reuse, and smoothing operations.
  • Virtual Operator Life: Enabling Multi-Level Resellers Through an Active Product Catalog interview with Rob Hill — The value of product distribution via virtual operators is immense.  They enable a carrier to sell to markets it cannot profitably serve directly.  Yet the need for greater reseller flexibility in the bundling and pricing of increasingly complex IP and cloud services is now a major channel barrier.  This article explains what’s behind an innovative product catalog solution that doubles as a service creation environment for resellers in multiple tiers.
  • Telecom Blocking & Tackling: Executing the Fundamentals of the Order-to-Bill Process interview with Ron Angner — Just as football teams need to be good at the basics of blocking and tackling, telecoms need to excel at their own fundamental skillset: the order-to-cash process.  In this article, a leading consulting firm explains its methodology for taking operators on the path towards order-to-cash excellence.  Issues discussed include: provisioning intervals; standardization and simplicity; the transition from legacy to improved process; and the major role that industry metrics play.
  • Wireline Act IV, Scene II: Packaging Network & SaaS Services Together to Serve SMBs by John Frame — As revenue from telephony services has steadily declined, fixed network operators have scrambled to support VoIP, enhanced IP services, and now cloud applications.  This shift has also brought challenges to the provisioning software vendors who support the operators.  In this interview, a leading supplier explains how it’s transforming from plain ol’ OSS software provider to packager of on-net and SaaS solutions from an array of third party cloud providers.
  • Telecom Merger Juggling Act: How to Convert the Back Office and Keep Customers and Investors Happy at the Same Time interview with Curtis Mills — Billing and OSS conversions as the result of a merger are a risky activity as evidenced by famous cases at Fairpoint and Hawaiian Telcom.  This article offers advice on how to head off problems by monitoring key operations checkpoints, asking the right questions, and leading with a proven conversion methodology.
  • Is Order Management a Provisioning System or Your Best Salesperson? by John Konczal — Order management as a differentiator is a very new concept to many CSP people, but it’s become a very real sales booster in many industries.  Using electronics retailer BestBuy as an example, the article points to several innovations that can — and are — being applied by CSPs today.  The article concludes with 8 key questions an operator should ask to measure advanced order management progress.
  • NEC Takes the Telecom Cloud from PowerPoint to Live Customers interview with Shinya Kukita — In the cloud computing world, it’s a long road from technology success to telecom business opportunity.  But this story about how NEC and Telefonica are partnering to offer cloud services to small and medium enterprises shows the experience of early cloud adoption.  Issues discussed in the article include: customer types, cloud application varieties, geographic region acceptance, and selling challenges.
  • Billing As Enabler for the Next Killer Business Model interview with Scott Swartz — Facebook, cloud services, and Google Ads are examples of innovative business models that demand unique or non-standard billing techniques.  The article shows how flexible, change-on-the-fly, and metadata-driven billing architectures are enabling CSPs to offer truly ground breaking services.
  • Real-Time Provisioning of SIM Cards: A Boon to GSM Operators interview with Simo Isomaki — Software-controlled SIM card configuration is revolutionizing the activation of GSM phones.  The article explains how dynamic SIM management decouples the selection of numbers/services and delivers new opportunities to market during the customer acquisition and initial provisioning phase.
  • A Cynic Converted: IN/Prepaid Platforms Are Now Pretty Cool interview with Grant Lenahan — Service delivery platforms born in the IN era are often painted as inflexible and expensive to maintain.  Learn how modern SDPs with protocol mediation, high availability, and flexible Service Creation Environments are delivering value for operators such as Brazil’s Oi.
  • Achieving Revenue Maximization in the Telecom Contact Center interview with Robert Lamb — Optimizing the contact center offers one of the greatest returns on investment for a CSP.  The director of AT&T’s contact center services business explains how telecoms can strike an “artful balance” between contact center investment and cost savings.  The discussion draws from AT&T’s consulting with world class customers like Ford, Dell, Discover Financial, DISH Network, and General Motors.
  • Mobile Broadband: The Customer Service Assurance Challenge interview with Michele Campriani — iPhone and Android traffic is surging but operators struggle with network congestion and dropping ARPUs.  The answer?  Direct  resources and service quality measures to ensure VIPs are indeed getting the quality they expect.  Using real-life examples that cut to the chase of technical complexities, this article explains the chief causes of service quality degradation and describes efficient ways to deal with the problem.
  • Telco-in-a-Box: Are Telecoms Back in the B/OSS Business? interview with Jim Dunlap — Most telecoms have long since folded their merchant B/OSS software/services businesses.  But now Cycle30, a subsidiary of Alaskan operator GCI, is offering a order-to-cash managed service for other operators and utilities.  The article discusses the company’s unique business model and contrasts it with billing service bureau and licensed software approaches.
  • Bricks, Mortar & Well-Trained Reps Make a Comeback in Customer Management interview with Scott Kohlman — Greater industry competition, service complexity, and employee turnover have raised the bar in the customer support.  Indeed, complex services are putting an emphasis on quality care interactions in the store, on the web, and through the call center.  In this article you’ll learn about innovations in CRM, multi-tabbed agent portals,  call center agent training, customer treatment philosophies, and the impact of  self-service.
  • 21st Century Order Management: The Cross-Channel Sales Conversation by John Konczal — Selling a mobile service is generally not a one-and-done transaction.  It often involves several interactions — across the web, call center, store, and even kiosks.  This article explains the power of a “cross-channel hub” which sits above all sales channels, interacts with them all, and allows a CSP to keep the sales conversation moving forward seamlessly.
  • Building a B/OSS Business Through Common Sense Customer Service by David West — Delivering customer service excellence doesn’t require mastering some secret technique.  The premise of this article is that plain dealing with customers and employees is all that’s needed for a winning formula.  The argument is spelled out in a simple 4-step methodology along with some practical examples.