Features

With next-generation networks in place and 3G devices becoming more widely available, European telecom operators are facing new challenges with respect to All-over-IP, service orchestration, changing paradigms, business transition, Push-to-XYZ – and whatever else comes to mind as the most important thing to tackle next. Regardless of the viewpoint you take, it will be rigorous customer orientation that will determine the future prosperity of telecoms providers. And the overall improvement of customer relationships and direct customer interaction will be key, says Mathias Liebe

Before we explore the realm of real-time, I would like to quote something I found in the prologue to Tom Peters’ book A Passion for Excellence, a book that truly inspired me, and that I recommend to anyone who is seeking answers to complex questions. It can also be understood as a guiding theme through this article about real-time capabilities and the sometimes hidden ‘obvious’.

“In the following pages I offer nothing more than simple facts, plain arguments and common sense...” – Thomas Paine, Common Sense, 1776
Many industry experts are seeking success in strategies aimed at transforming telcos into retail-like companies which, instead of selling communications services, focus on content and entertainment. Almost no telecommunications conference goes by without someone envying the big retailers of this world, who supposedly have decoded the genetic code of consumer buying behaviour. Retailers who have identified literally thousands of different segments that should be individually targeted, doubling weekly sales based on optimised statistical usage cluster analysis! Is that so? And are retailers really a relevant benchmark? Are they really doing better? The fear of becoming a pure bit pipe seems to be omnipresent, and the latest hype, called IMS, will spur the old discussion about value-chain unbundling once more.
So why do telcos not just copy retail strategies? Because their CRM and billing systems cannot cope? No, certainly not. So what is important and what might be the right thing to do in times like these? Times in which objective product differentiation becomes more difficult, and a compelling service offering is often seen as a mere prerequisite to compete. Remember this: it’s all about customers and ways to improve real-time customer interaction on all levels. Be that at the point of sale – wherever that might be for a mobile operator – after sales, or when services are billed, either online in real-time or when the invoice arrives.
At this point marketing strategists usually call for customer loyalty and retention strategies based on scientific approaches, including so-called ‘perceived value’ concepts, methods to ‘shape an image’ perfectly matching the targeted audience thus creating ‘sustainable brand awareness’! Hey, don’t we all like, and sometimes even admire, those advocates of change and success? But what does that all mean to an existing customer of a mobile operator in real life? And are there no instant – let alone pragmatic – alternatives to improving customer relationships? Please don’t get me wrong: I am a whole-hearted marketer myself, and I believe in targeted customer and segmentation strategies. But I also believe in common sense, excellent service and the often neglected alternatives which somehow seem to be hidden behind the curtain of the obvious.

Obvious opportunities and courtesy 
Here are two examples. I have just changed to another mobile service provider. Bluntly speaking: Yes, I have churned, and I declare myself guilty on all charges. But do I see a difference between two operators? To be honest: No. Both seem to seek their salvation in sending information leaflet after leaflet whenever I get a bill. So does a brochure really make a difference? And to make things even worse: Did I ever bother to read the brochures?
Here is another thought-provoking story that someone shared with us recently. A businessman spent literally hundreds of euros per month due to extensive usage of international roaming services. Did his operator ever respond, recognising him as a valuable customer, let alone say: “Thank you for letting us connect you with your business partners while travelling”?
At home over the weekend, the same businessman downloaded a music file from an Internet service provider. After the successful music download, worth only 1.99 euros, the businessman received an instant thank-you note and more useful information about the price he paid, related offerings, as well as details on how to contact the company in case of complaints. In other words, a simple but perfect service with ‘a passion for excellence’!
The key question for telecommunications companies is how to get there and where to start. Well, a retailer would answer this question in a split second and formulate the answer around the PoS (Point of Sale) where customers choose goods and pay for them. For mobile operators this is fundamentally different, because service usage and payment can occur virtually anywhere. So it more or less boils down to an MoS (Moment of Sale) rather than a physical location. It’s not the place but the moment in time that is relevant as the starting point to address your customers. And this is exactly where real-time capabilities come in as a most promising enabler: the key to a positive user experience lies in enhanced customer interaction based on behaviour- and payment-related issues at the exact moment they happen. And that is regardless of whether you are a prepaid or a postpaid customer. Today’s consumers demand an instant reaction or dialogue. They will not maintain interest or be willing to accept delays measured in hours or even days.
So where does this magic real-time capability come from, and what can operators do with it? Real-time functionalities have their origin in the IN prepaid payment arena, in which available credits and quotas are managed in real time to ensure that prepaid customers have sufficient credit to pay. So until about 24 months ago, when speaking about real-time, people associated it only with a payment mechanism for a special clientele. Today this has changed, and the potential benefits of doing business in real time have outgrown aspects of pure revenue control.
Nowadays real-time is seen as an enabling tool for creating better customer service and instant dialogue, and as a major gain for operators in terms of flexibility. Here are some examples in which added real-time capabilities make a decisive difference. For closed user groups such as companies or families – a natural fit for ‘lifestyle’ or hierarchical billing – real-time features introduce several levels of spending and service-usage control. In addition, real-time bonus and discount schemes can be implemented easily and – best of all – communicated instantly at the Moment of Sale.
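To make the idea concrete, the sketch below shows – in simplified Python rather than any vendor’s actual charging API – how a real-time authorisation step might enforce a per-member spending limit for a closed user group and send an instant confirmation at the Moment of Sale. The account structure, limits and notification channel are illustrative assumptions only.

```python
# Hypothetical sketch of a real-time spending-control check for a closed user
# group (e.g. a family account). Names and thresholds are illustrative only,
# not any operator's or vendor's actual charging API.

from dataclasses import dataclass, field


@dataclass
class GroupAccount:
    balance: float                       # shared credit in euros
    member_limits: dict                  # per-member spending caps
    member_spend: dict = field(default_factory=dict)

    def authorise(self, member: str, amount: float) -> bool:
        """Authorise a charge in real time, before the service is delivered."""
        spent = self.member_spend.get(member, 0.0)
        limit = self.member_limits.get(member, float("inf"))
        if amount > self.balance or spent + amount > limit:
            notify(member, "Charge declined: spending limit or credit reached.")
            return False
        self.balance -= amount
        self.member_spend[member] = spent + amount
        # Instant dialogue at the Moment of Sale, e.g. via SMS or USSD
        notify(member, f"Charged {amount:.2f} EUR. Remaining credit: {self.balance:.2f} EUR.")
        return True


def notify(member: str, message: str) -> None:
    # Placeholder for an SMS/USSD/push notification gateway
    print(f"[{member}] {message}")


if __name__ == "__main__":
    family = GroupAccount(balance=50.0, member_limits={"teenager": 10.0})
    family.authorise("teenager", 1.99)   # accepted, instant confirmation
    family.authorise("teenager", 9.50)   # declined, limit would be exceeded
```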

Real-time to the rescue
The need for enhanced service management together with greater payment and pricing flexibility on one side, and the ongoing initiatives to streamline the existing IT/BCC landscape on the other, are the main drivers for deploying sophisticated real-time solutions for convergent customer interaction. Many operators are starting to invest in real-time capabilities, focusing on real-time processing to open up and protect revenue streams while gaining more flexibility regardless of the payment method. With respect to third party content business, real-time features might also help overcome a dilemma that the industry is currently facing. Something over 70 per cent of all content is sold via premium SMS outside the operators’ service delivery domain. Many content offerings are linked to hidden subscriptions, leading to several recent lawsuits and disappointed customers.
More sophisticated DRM and related billing concepts have not really emerged yet, so pSMS will stay around for some time. Given that pSMS is inherently not a trustworthy and reliable payment method, added real-time communication and charging capabilities could be an answer.
Combined with ‘opt-in/opt-out’ best practices guidelines published by the MMA (Mobile Marketing Assoc.) and the MEF (Mobile Entertainment Forum), this could be a perfect way to regain the consumer’s confidence, to increase legal certainty and to prevent revenue leakage.
In the past, real-time and IN-based implementations have been mostly proprietary, with CAMEL being the only exception. Now more standardised AAA procedures and protocols are emerging, with DIAMETER taking the lead as the most promising real-time enabler. But instead of arguing whether the IT- or IN-based school of thought is better positioned to support operator needs in improving customer relations, one should focus on key capabilities – regardless of where they originate.
Managing money and improving customer interaction is neither black nor white: it is always unique to each operator and must be truly customer-centric. The one thing that really matters is leveraging the right enablers, and one of them is the real-time capabilities that have already emerged and are definitely here to stay.

Commodities and common sense
But to get back to those who are in search of excellence and do not really believe that voice is ‘just a commodity’ and that only data services will make an impact (let alone that they will sell themselves), why not try to come to grips with real customer expectations when they occur (MoS)? Why not practise some ‘common sense’ to find the obvious opportunities to make the user experience perfect?

Mathias Liebe is Product Marketing Manager Billing, ORGA Systems, and can be contacted via e-mail: mliebe@orga-systems.com

It’s time to rethink the role and scope of the product catalog, say John Wilmes and William F. Wilbert, and a unified product catalog could give service providers a more efficient all-round approach

If you listen to telecommunication service providers’ marketing departments, we’re on the verge of a brave new world of service. Customers will be able to get just about anything they want by ordering from centralised product catalogs. What’s more, customers will be able to access the same products and services from any device and over any technology. The services will be wide-ranging and cheap. And customers will be able to order using multiple devices.

Sounds pretty good on the face of it. But pulling it off will require a unified product catalog containing products that are mapped to a wide variety of services, some from the service provider itself, and an increasing number from third-party vendors. Service providers also want to synchronise product catalogs with corresponding data in billing systems. Because so many players are involved in the delivery of telecom products, today's catalog cannot be a simple pick list supported by a single back-end system. In telecommunications few things are that simple. And in the age of FMC (fixed-mobile convergence) and IMS (IP multimedia subsystems), the margin for error is increasingly thin.
The term 'product catalog' may have slightly different meanings for people in different parts of the communications industry, so a few definitions are in order. The TeleManagement Forum (TMF), a non-profit telecommunications standards body, defines a product catalog as a collection of product offerings, intended for a specific distribution channel, enhanced with additional information such as Service Level Agreement (SLA) parameters, invoicing and shipping details. Each product offering combines pricing and availability information with product specification data. Product specifications describe the relationships between products, the services they provide, and the resources they require.
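As a rough illustration of those distinctions – and only an illustration, since the class names below are simplified stand-ins rather than the SID's actual entity definitions – a catalog built along TMF lines might be modelled something like this:

```python
# Illustrative data model following the product / service / resource
# distinctions described above. Class and field names are simplified
# approximations, not the SID's actual type definitions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Resource:
    name: str                 # e.g. a DSLAM port or an IMS application server


@dataclass
class Service:
    name: str
    resources: List[Resource] = field(default_factory=list)


@dataclass
class ProductSpecification:
    name: str
    services: List[Service] = field(default_factory=list)   # what the product provides


@dataclass
class ProductOffering:
    spec: ProductSpecification
    price_per_month: float
    available: bool
    sla_parameters: dict = field(default_factory=dict)       # e.g. {"availability": "99.9%"}


@dataclass
class ProductCatalog:
    channel: str                                             # the distribution channel it targets
    offerings: List[ProductOffering] = field(default_factory=list)
```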

Burdened by complexity
Traditional service provider product catalogs have been monolithic, with customised point-to-point integrations with other systems. These legacy catalogs can be highly valuable, but they require huge resources to build and maintain, and they are becoming increasingly complicated and problematic.
“The product catalog of most medium- to large-sized carriers is horrendously complex – and getting even more so,” says Dan Baker of Dittberner Associates, an international market research and consulting firm. “Today, the legacy billing and provisioning systems of traditional carriers are laden with thousands of products and calling plans developed over the years. And don't forget, these products are spread across multiple stove-piped back-office systems. Then, when it comes time to order off the catalog, there's the 'too many cooks spoil the broth' factor. Each time a new person touches the order you introduce errors, delays, and extra costs.”
Seeking a lower cost alternative, service providers are turning to product catalogs that leverage standard integration technologies to provide coherent access across multiple, disparate, internal, and external systems containing product data. The trick is to find new solutions that provide the fundamental benefits of legacy product catalogs while adding flexibility and lowering costs.
We recently spoke with several carriers about their service and product catalogs. All agreed that this is an area critical to revenue generation, but also frustratingly complex. The case of Venezuela's national service provider, Compañía Anónima Nacional Telefonos de Venezuela (CANTV), provides a representative example of the basic challenges many carriers are facing when they try to implement product catalogs.
CANTV offers local, long distance, wireless, and data transmissions, as well as paging, public telephones, private networks, and directory services. This adds up to the need for a pretty hefty product and services catalog. Making that catalog work is a complicated process that requires integrating multiple systems: two CRM systems, three customer care applications, three order management systems, and two billing systems. 
CANTV found that complex product catalog issues, as well as several other domain areas, could be effectively harnessed by adhering to standards.
 “We are building a centralised services/products catalog using the TeleManagement Forum's NGOSS (New Generation Operations Systems and Software) integration framework and SID (Shared Information and Data model),” says Cesar Obach Renner, CANTV's Corporate Manager of IT Architecture. “Our goal is to expose our Product and Service Catalog through our SOA platform. The fundamental reason for exposing the catalog as an OSS/J catalog management API is because the standard nature of the implementation leads to a lower total cost of ownership.”
OSS/J APIs leverage Java Platform, Enterprise Edition (Java EE), XML and Web Services to create a multi-tier architecture based on re-usable components and container technology. OSS/J APIs are mapped to the TMF's widely adopted enhanced Telecom Operations Map (eTOM). CANTV's standardised approach to its product catalog points to a better way of managing complexity.
“Our implementation is intrinsically distributed and the service encapsulates the federated behaviour and complexities for the management of other applications, such as CRM and self-care applications,” Obach Renner says. “The catalog is formed by components from different applications across the application landscape, with each application originating a catalog component that holds the data relevant for the process it's supporting. For example, billing holds rating and billing components, CRM holds marketing dependencies, and so on.”
How would he manage this level of complexity without the SID and OSS/J technologies? Obach Renner says: “We'd have to create the model home-grown from scratch, and maintain and update all the interfaces going forward. The standards approach is easier for us. We don't have to reinvent each time we want to add something.”
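In spirit, such a federated catalog can be pictured as a thin facade that asks each owning application for its component and assembles one coherent view. The adapter interfaces and method names in the sketch below are hypothetical illustrations of that idea, not the OSS/J catalog management API itself.

```python
# Sketch of a federated catalog facade: each back-end application contributes
# the catalog component it owns, and a single service assembles the overall
# view. Adapter names and return values are hypothetical, not the OSS/J API.

from typing import Dict, Protocol


class CatalogComponentSource(Protocol):
    def get_component(self, product_id: str) -> Dict:
        ...


class BillingAdapter:
    def get_component(self, product_id: str) -> Dict:
        # Billing owns rating and billing components
        return {"rating_plan": "per-event", "billing_cycle": "monthly"}


class CrmAdapter:
    def get_component(self, product_id: str) -> Dict:
        # CRM owns marketing dependencies
        return {"marketing_segment": "SME", "dependencies": ["broadband-basic"]}


class FederatedCatalogService:
    def __init__(self, sources: Dict[str, CatalogComponentSource]):
        self.sources = sources

    def get_product_view(self, product_id: str) -> Dict:
        """Assemble one coherent product view from the owning applications."""
        view = {"product_id": product_id}
        for name, source in self.sources.items():
            view[name] = source.get_component(product_id)
        return view


catalog = FederatedCatalogService({"billing": BillingAdapter(), "crm": CrmAdapter()})
print(catalog.get_product_view("triple-play-200"))
```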
Most service providers still rely on home-grown catalogs that work in fragmented fashion. Traditionally, they are vertically organised around departments or lines of business. Each of these business and technology silos typically has its own product catalog(s), with the content of each catalog relatively static and relatively independent of the other silos. 
While this approach has been adequate in the past, the trend now is for service providers to reorganise horizontally, across the old lines of business. These new organisations sell complex, dynamic bundles of products that break the old boundaries by combining services from multiple categories like broadband, mobility, and entertainment. 
This new generation of products is more likely to be customised for and, in the near future, even designed by, much smaller target markets than the past generation of products. In some cases, products will be customised to the level of the individual subscriber.  Some service providers may even provide a product creation environment analogous to the service creation environment concept seen at lower levels of the value chain, for use by business or even residential customers. 
Service bundling, frequent change, and customisation put severe pressure on legacy product catalogs.
Service bundling often requires co-ordination between product catalogs that were never intended to be synchronised. Frequent change and customisation challenge the scalability of legacy catalogs, and may reveal shortcomings in legacy information models.

Converged services increase pressure
Service providers of all types are migrating to the IP Multimedia Subsystem (IMS) as a means of integrating mobility with an IP infrastructure, separating services from underlying networks, ideally allowing them to offer IP-based services that subscribers can get anywhere from any device. As IMS implementations become the norm, these new services will become the building blocks for products that not only bundle services, but blend them – common examples include caller ID on a TV screen, click-to-dial, and 'Follow Me TV'.
Blended services will increase the pressure on legacy product catalogs, because they can add to the already complex relationship between products and services.  The concept of a product is stretched, as are product metrics like usage and performance.
A relatively new factor in the OSS mix is the service delivery platform (SDP), a concept espoused by BEA, HP, IBM, Microsoft, Sun Microsystems, and other integrators. An SDP is intended to simplify service management by giving a service provider a single system into which service orders can be directed, and from which service information can be retrieved. A typical SDP supplies some level of pre-integration, incorporating inventory, provisioning, activation, mediation, and assurance functions. It supports interfaces that accept inbound service orders, escalate problems, and provide views required for service management. Many service providers are moving toward some level of SDP implementation.
The advent of the SDP is another motivation for rethinking product catalogs. Traditional product catalogs generally have a number of point-to-point interfaces, built up over the years, to the specific systems they support or use. These interfaces do not easily translate to the standards-based interfaces typically required, or at least encouraged, by an SDP. Another area of incompatibility can occur at a more fundamental level, in the information model. An SDP is likely to be strongly influenced by current standards and models, such as the TMF's SID model. 
Dr Lorien Pratt, Program Manager for OSS Competitive Strategies for analyst firm Stratecast Partners, notes that improved product catalogs are now a priority for many carriers.
“Since rapid product deployment is top-of-mind for many service providers, a unified product catalog is a natural early priority as part of an overall OSS information management redesign,” she says. “Stratecast Partners has found that the cost of information management is, in many situations, the largest part of an OSS integration. In response, service providers are taking a new look at their information management practices in general – including product catalogs – and in many cases redesigning existing information stores for greater efficiency and lower costs.”
There are several alternatives open to service providers looking for a single product view – a unified product catalog (UPC) – that spans the boundaries of current product groups and future service implementations. One approach would be simply to co-ordinate legacy product catalogs, but this pure synchronisation approach is becoming increasingly difficult. Although this alternative may address the immediate need for legacy product co-ordination, it fails to meet the longer-term need for a catalog not aligned with a specific legacy silo. And it can lead to a 'lowest common denominator' syndrome, in which the least functional catalogs determine the form and content of the global view.

Single source of truth 
An alternative to synchronisation is aggregation. It is possible, at least in theory, for a service provider to replace all legacy catalogs with a new, single catalog that encompasses all current and future lines of business and all possible products and services – a 'single source of truth' or 'single database of record' approach.  In practice, however, this option presents major challenges. One disadvantage is the massive migration and cutover required for non-greenfield service providers. Another challenge is that it creates a new bottleneck through which all product-related information must pass, raising serious issues of scalability and performance.
A more realistic alternative is a UPC that leverages a combination of aggregation, synchronisation, and federation technologies – precisely the approach taken by CANTV. A UPC can span legacy catalogs to the extent needed for a coherent overall view, but without forcing global import or full global synchronisation. It allows the development of new product offerings, based on any mix of new and old services, with minimal tie-in to legacy systems. And it facilitates data enrichment and semantic integration, which are important factors in supporting the 'highest common denominator' and in providing standards-compliant interfaces that can reduce integration costs.

Overcoming obstacles to integration
Integration will always be a major component of any UPC effort, and traditional obstacles to integration – disparate interfaces, divergent information models, etc.  – still exist. But open, widely supported standards with multiple implementations offer new solutions. 
The TMF's NGOSS provides architectural guidelines, information and data models, and business process models that are widely supported by service providers and are increasingly used by integrators. The TMF has recognised the interface specifications and design guidelines published by OSS/J as NGOSS implementations. TMF process groupings and OSS/J interfaces are used in the UPC example below.
The SID is, in effect, the language of NGOSS, according to Martin Creaner, TMF's Chief Technical Officer and Vice President, Technical Programs. “It provides a comprehensive information architecture, which means that it can be very effectively used in a number of applications, including product catalogs,” he says.
The SID provides a highly developed framework that maps products to services and services to resources.  Legacy catalogs, which may not make the same distinctions between products and services, may have difficulty mapping to these modern standards.
CANTV's Obach Renner also notes that tight working relationships with the TMF and OSS/J can pay off for carriers. “CANTV's team has had extensive experience developing models for converged services,” he says. “We have extended SID in a way that our objects are being proposed to the SID team to become part of the next SID releases.”
It's time to rethink the role and scope of the product catalog. As forward thinking carriers like CANTV have shown, legacy catalogs will not survive the pressure from the coming wave of bundled and blended services.  A unified product catalog, with open, standard interfaces that use a common data model, can enable a service provider to roll out new products more easily, integrate new and legacy components more effectively, and target markets and customers more profitably.

John Wilmes, CTO of Ceon Corporation, can be contacted via e-mail: JWilmes@Ceon.com
William F. Wilbert, Communications Manager of the OSS through Java Initiative, can be contacted via e-mail: wilbertz1@comcast.net

External Links

OSS through Java Initiative
Ceon

Geoff Ibbett argues that revenue-assurance issues are still being overlooked as operators launch next-generation products and services, but believes the industry has an opportunity to learn from the mistakes of the past to help maximise the revenues of the future

After all the hype surrounding next-generation telecoms services, operators are finally starting to roll out new services and products, as they focus on increasing market share, and on generating new revenues. There is still, however, a big question mark over whether operators will ever actually make a profit from new services. Unless many of them start taking an end-to-end approach to revenue assurance, they will be unable to make these services truly profitable.

New research from telecoms analysts Analysys estimates average revenue losses in the telecoms industry globally at 11.6 per cent of turnover in 2005, up from 10.7 per cent in 2004. However, the report, Operator Attitudes to Revenue Assurance 2005, did reveal that the importance of revenue assurance is continuing to grow, with over 60 per cent of respondents believing it to be more important than in previous years. This raises the question: how many operators are approaching revenue assurance correctly?
There are certainly some fundamental flaws that still need to be addressed. One of the major mistakes that telcos have made when launching new products and services is that little consideration has been given to how to bill for them effectively or protect the revenues generated from them. This widening ‘planning gap’ has left many revenue assurance issues to be treated as an afterthought, with operators taking a tactical and reactive approach: responding to issues as they arise, on a case-by-case basis, often in response to customer complaints. What remains then is a series of ‘elastoplast’ fixes, rather than an overall end-to-end solution. Such an approach can be very damaging, particularly if certain revenue-affecting issues remain undetected for long periods of time.

Strategic and pro-active approach
Operators will need to take a much more strategic and pro-active approach towards revenue assurance in the next-generation environment, where services will carry higher values and more complex revenue flows; otherwise they will experience even greater losses and unclaimed revenue.
In some respects, the task should be made easier for a lot of telcos, as much of the strategic planning should technically already be in place. Many of the main causes of loss on existing networks will continue to occur on next-generation networks, namely: billing being out of step with new service activations, incorrect or incomplete network information being recorded, the misidentification of calls, out-of-date reference information, stranded assets, unauthorised usage and the mis-selling of products.
None of this is to say that the move to IP-based services will make revenue assurance issues disappear. In fact, there will be a whole series of additional revenue assurance issues that will need to be accounted for. Undoubtedly one of the greatest challenges facing operators will be the daunting task of monitoring and settling the vast amount of content that will be offered by an increasing number of providers. The emergence of converged, next-generation IP networks will bring new business models and fierce competition, where incumbent operators, cable providers, data-services firms and entertainment companies will interact with each other to deliver services to customers and maximise revenues.
Most operators will want to maximise value from next-generation services due to their higher investments.  Some operators will see themselves as primary network operators, whose purpose will be to provide technical infrastructure and facilitate the movement of data across the network. Alternatively, other operators will see themselves as service providers too, meaning that they will have very active partnerships with content providers, taking a share of the content services provided.
NTL’s recent £817 million bid for Virgin Mobile is also evidence that so-called triple- and quadruple-play offerings are finally starting to become a reality, providing an attractive distribution model for content. In either case, it will be essential for operators to take an end-to-end view of the whole process as, for example, a single data error at one point in the value chain could impact the revenues of partners further along the chain.
In order to do this effectively, operators need to know how their network is being used. Current networks have been built on a traditional telecoms backbone, with the monitoring and exploitation of the network being isolated to those with a specific knowledge of the network. However, with the move to IP-based services, revenue assurance becomes not only a telecoms issue but also an IT one. Next-generation networks will have an IT backbone, meaning that there will be significantly more events to monitor from different sources, whether from a PC, mobile phone, PDA or other device. Additionally, with the network’s backbone being IT-based, it is more susceptible to exploitation from the IT world, so better fraud detection and security measures will be required to prevent attacks from hackers and viruses.

Fraud a major issue
Fraud will certainly be a major issue facing many next-generation services. For example, in the VoIP environment, fraudsters can set up illegal origination or termination gateways to avoid international call costs or the VoIP costs of official providers. Content is also very susceptible, as subscription fraud can provide fraudsters with easy access to the network to download content, then resell it without authorisation.
Additionally, partner and subscriber repudiation is a potential source of fraud: unless there is a mechanism confirming that content has been successfully downloaded, operators could be liable for claims for refunds or payments, both genuine and fraudulent. It will therefore be increasingly important to use content monitoring to distinguish between a person accessing e-mail and one downloading a ring tone, in order to identify potentially costly content redistribution.
Next-generation services are potentially a great new revenue source for operators, so it is imperative that revenue-affecting issues are kept to a minimum. This means taking an end-to-end approach and developing a culture that looks for potential revenue-assurance issues early in the product-planning stage. The relative infancy of new services means that the industry still has a real opportunity to learn from the mistakes of the past. However, if the status quo is maintained, it will be operators’ headaches that increase, rather than their revenues.

Geoff Ibbett, Product Portfolio Manager at Azure Solutions, can be contacted via tel: +44 (0)20 7826 5420;
e-mail: info@azuresolutions.com

In this all-embracing media age, where multiple channels of communication churn out facts, opinions and comment, it’s so easy for a new idea to get immersed in hype. Such a fate befell MPLS, or Multiprotocol Label Switching. Several years ago it was being talked about as the next big thing that would take the telecom world by storm and revolutionise how networks were run. Yet only now does that prophecy look like coming true, as the technology’s benefits are being slowly recognised. Andi Willmott reviews where MPLS stands today

MPLS was originally designed as a tool to aid data and voice traffic engineering, irrespective of whether the traffic is circuit-switched or packet-based. Over time, however, its role has changed because it makes all-IP networks possible – largely by giving operators comfort around the key concerns of security, reliability and efficiency.

According to its proponents, MPLS is so efficient it can re-route traffic and restore a failed connection within 50 milliseconds – faster than the blink of an eye. So, as service providers take a tighter grip on their finances, MPLS, with its promises of a substantial return on investment, is back in the limelight.
MPLS evolved from a series of standards and initiatives formulated in the late 1990s by companies such as Ipsilon Networks (later acquired by Nokia), IBM and, most importantly, Cisco Systems, with its 'Tag Switching' – or 'Label Switching', as it became when handed to the IETF for open standardisation. The aim was to bring silicon-speed performance to IP-based routing by reducing the required number of 'fully connected' routers and to create simple, high-speed switches.
MPLS found its first practical use in high-end routers and in the network backbones of large international corporations. Now it offers network operators the chance to provide subscribers with some much-needed guarantees concerning reliability and overall performance, possibly contained within an SLA. In fact, as VoIP (a central driver for MPLS) becomes more common, existing SLAs may have to be amended, as they do not offer the type of guarantees required.

How it works
The basis on which MPLS works is straightforward enough. Data packets are given a label containing information on, amongst other things, destination and required bandwidth, and can therefore be allocated an appropriate Label Switched Path (LSP). Short, fixed-length labels are much quicker to look up and act on than a 'normal' IP address. LSPs are, in effect, virtual circuits across IP networks that allow operators to re-route data onto an appropriate channel, giving them the flexibility to divert traffic around congestion points.
And therein lies the crux of the matter: if an operator or carrier can not only see where the congestion points are, but is also able to move traffic away from them at certain times, and for certain periods, speed of data delivery is massively increased.
In theory, serious bouts of congestion should be avoided (if not eliminated) and the carriers can guarantee a certain level of efficiency. The main benefits are that backbone architecture can be simplified following the integration of multiple technology types, and events such as video conferencing are much cheaper than with more traditional means.
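For readers who like to see the mechanism, the toy sketch below illustrates the principle: a label switch router forwards on a single short-label lookup, and an operator diverts a path around congestion simply by updating the label mapping. It is a teaching illustration, not a real MPLS implementation, and all labels and interface names are invented.

```python
# Toy illustration of label switching: a router forwards on a short label
# lookup rather than a full address, and an operator can steer a path around
# congestion by updating label mappings. Not a real MPLS implementation.

# Label Forwarding Information Base for one label switch router:
# incoming label -> (outgoing interface, outgoing label)
lfib = {
    100: ("if-to-core-A", 200),   # normal path for this LSP
    101: ("if-to-core-B", 300),
}


def forward(packet: dict) -> dict:
    """Swap the label and choose the outgoing interface in one table lookup."""
    out_if, out_label = lfib[packet["label"]]
    return {**packet, "label": out_label, "egress": out_if}


def reroute(label: int, new_interface: str, new_label: int) -> None:
    """Divert an LSP around a congested or failed link by updating the LFIB."""
    lfib[label] = (new_interface, new_label)


voice_packet = {"label": 100, "payload": "VoIP frame"}
print(forward(voice_packet))            # follows the normal path via core A

reroute(100, "if-to-core-B", 300)       # congestion detected on core A
print(forward(voice_packet))            # same label now steered via core B
```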

Delays and concerns
When MPLS first appeared, it looked destined to become the technology that would tie everything together in a nice neat package, and it seemed better aligned with the technologies that carriers and operators were using at the time.
But with multiple technologies having been deployed over previous decades, and still in full operation, MPLS was in danger of becoming a technology that had obvious benefits, but would just be too expensive or inconvenient to implement. With the downturn in the telecoms sector that appeared towards the end of 2001, companies (many of whom were more concerned with remaining solvent) began to put many of their MPLS ideas on the backburner.
With the billions of dollars operators have spent on network building it wasn't surprising that they didn't want existing functions lost or unduly affected, and as such the movement toward MPLS (encompassing large infrastructure alterations) was not as quick as one might have expected.
It was still a technology that had a future, but few people were sure when that future would begin. The hype that surrounded the technology was partly down to its main proponents (led by Cisco) ploughing money into marketing campaigns proclaiming that this was the next big thing, while directly promoting it to end users with the claim that it would work better for them.
As for the ISPs, at around the time MPLS appeared on the scene many had just finished upgrading their backbones to support new QoS and converged offerings, and so were under pressure to get more from the fibre optics they already had in the ground.

MPLS and its scope for service providers
Thankfully for service providers, there are no size limits on MPLS networks. They can be anything from city- to country-wide, with peripheral networks – often operated by local network operators or private companies – attached to the MPLS network at certain points.
The technology integrates Layer 2 information (bandwidth, latency, utilisation) into Layer 3 (IP) within an autonomous system. This means that the high-value customer who has paid more for speed and reliability can be guaranteed just that. The main safety aspect is that a carrier can establish multiple paths between two end points to guard against line failure.
The main objective of MPLS has to be to keep the technology invisible. No one needs to know whether MPLS is passing traffic or not – it should just work. Provided all parties can see the issue to be resolved, and can agree on the course of action, then the partnership is working and a more proactive approach can be adopted.
Where MPLS really comes into play is with service providers that are looking to cut costs and operational expenditures – which, let's face it, is everyone in today's competitive markets. MPLS serves two main purposes here. Firstly, it allows for the more efficient bundling of services for the carrier, and this in turn leads to the easier maintenance of customers' loyalty and better revenue per user. Secondly, services can be designed to fit a particular market, thus allowing carriers to go after new sectors of society.

Where does VoIP come in?
Without doubt, over the past two years MPLS has acquired a growing momentum, not least because it can be run as a backdrop to several other disparate transmission systems – ATM and IP being the two most recognisable. At the same time it offers cost savings to the carrier and more control over what passes over the network.
It's IP that people are talking about right now, and it's IP that operators and carriers want to sell to their customers as part of the communications industry's brave new world. VoIP will be the biggest driver.
Quite simply, MPLS is crucial to the success of VoIP. Users have an implicit trust in telephone calls, which naturally leads them to have no lesser expectation of any new voice solution.
The last year has seen a huge marketing drive targeted at service providers to offer new QoS-based solutions to their customers that allow them to lessen their call costs by up to 40 per cent.
Adding voice calls to a data network requires that the converged data network perform to the same standard as the previously split voice network. Any lessening in quality, consistency, or anything else means bad press, which causes decision makers to hesitate about authorising their IT groups to install these new solutions.
MPLS provides the best way to guarantee network performance for these voice calls. It helps ensure that no one is aware that VoIP is being used at all. The only people who should notice are the CFO, who sees the bill go down, and the service provider, which sees its MPLS investment make money.
According to Nemertes Research, an independent research firm, it is in fact QoS that makes IT executives turn to MPLS. Over 60 per cent of the 3,000 organisations surveyed said they were already using, or soon planned to deploy, MPLS in some form. And the main reason for doing so was QoS.
But the adoption of MPLS needs to be done with care, and the term 'horses for courses' is appropriate. Its power can really be seen within organisations that might be said to have extensive 'any-to-any' traffic patterns as opposed to 'hub-to-spoke', or those experiencing some form of convergence. Indeed, convergence is seen as an area where MPLS can contribute to real cost savings. Moving video transmissions (with an any-to-any transmission pattern) onto a data network means ISDN lines don't have to be used, and so don't have to be paid for.

MPLS fills the ATM gap
ATM – designed to deliver more bandwidth via better transmission, switching and signalling for the increasing volume of traditional data, while providing a common format for services with different bandwidth requirements – is now looking a little weary. Yet ATM has easily defined charging parameters, whereas charging for an MPLS service is slightly more problematic: does one charge on the basis of speed, differentiated services, number of switched paths, distance, or any combination of these factors?
Security is another factor to be considered, with telecoms carriers needing to work out how security and encryption techniques and procedures are supported within MPLS. Will the end user, the carrier or the service provider handle the security process?

Issue of standardisation
Then there's the issue of standardisation, particularly in relation to Layer 2 VPNs and their geographic coverage requirements as carriers need to interconnect their associated VPNs – and offer some form of joint services – to make cost savings. For example, it's known that Cable & Wireless and German company Arcor already have an MPLS interconnection agreement.
Also, grid computing – the creation of pools of distributed computer resources available on demand, as used in well-known projects such as SETI@home – is seen as a driving force behind MPLS.
But with all this going on it should be remembered that the reason companies wish to converge and adopt MPLS is not because they think the technology is sexy or the next big thing, but because it makes good financial sense in the long term, and will aid efficiency.
Admittedly, MPLS is still a work in progress, with a single MPLS standard still slowly evolving and the interconnection of larger networks still problematic. Yet MPLS is fast becoming a mainstream technology with (to name but a few) BT, MCI, NTT Communications, Alcatel, Nortel, and Cisco in support.
The Japanese MPLS market is now worth well over $1 billion annually and, according to telecoms giant NTT Communications, its Arcstar service (now one of the largest MPLS networks in the world, with 65,000 ports, and available across Asia and the US) has a reported customer satisfaction rate of almost 99 per cent.
Its IP-VPN network spans more than 50 countries worldwide, and is one of the most expansive, wholly owned data-communications networks in the world, highlighting that this is a technology that has successfully met a business need, and looks to be on the agenda of carriers and operators the world over.

Andi Willmott is Business Development Director for NetEvidence, and can be contacted via tel: +44 1483 209970; e-mail: info@net-evidence.com
www.net-evidence.com

External Links

NetEvidence

Despite clear and substantial business and technology benefits, the migration of telecom businesses through IMS deployments can carry substantial risks for operators, says Kristofer Kimbler

The telecom industry has a new hype. It is called IMS, or IP Multimedia Subsystem. At the most basic level of definition, the IMS is a next generation all-IP service architecture that enables the delivery of a whole range of new multimedia and voice services over 3G, next generation and even legacy networks. By providing service infrastructures based on IP technology and Session Initiation Protocol (SIP), this 3GPP-originated standard also has the potential to drive network and service operation costs (OPEX) down. But, despite these promised benefits, the upfront investment (CAPEX) costs are substantial, the technology is still unproven, and some operators are still hesitant about when to make the necessary investment.

3G mobile operators view IMS primarily from the perspective of enabling new customer experiences through multimedia content delivery and the possibility of providing truly personalised and context-aware services that invoke user presence and location information. By contrast, fixed network operators – themselves under continuous pressure from mobile operators and, more recently, from peer-to-peer VoIP providers such as Skype – view IMS as a way to substantially cut their operational costs and to make their service offerings richer and more attractive.
In turn, those large incumbent operators who already run fixed, mobile and broadband networks regard IMS as a vehicle capable of enabling true fixed-mobile convergence on network, service and terminal levels, making their entire portfolio very attractive across a wide range of consumer and enterprise customers.
Powering part of this transformation are the newer wireless technologies such as Bluetooth, WiFi or WiMax. These can be readily integrated with existing mobile devices and deliver smooth roaming between both mobile and wireless access networks, allowing fixed operators to leverage their existing investments in broadband infrastructures by supporting wireless hotspots in homes, offices and public spaces through DSL links.

The challenges of migrating to IMS
Despite the clear and substantial business and technology benefits, the transformation of telecom businesses through IMS deployments can carry substantial risks for operators. If traditional basic and value-added services that customers have used throughout their lives are no longer available – or have to be accessed and controlled in radically different ways – it will be difficult to get them to migrate to all-IP networks, irrespective of the wider cost, performance and service benefits.
It’s vital therefore that the migration of existing services to IMS and NGN be seamless, if customer dissatisfaction and churn are to be avoided. The scale of this problem is huge, with some fixed operators already having announced plans to move one million customers per month to NGN-based services.
Whatever happens, the quality of the basic voice services currently offered cannot be compromised. Any substantial deterioration in the quality of voice connections or slowing of system response times may cause irritation amongst subscribers and result in scepticism about the potential benefits and performance of the new and hopefully attractive multimedia services.
Customers may question why they should pay for poor-quality voice services when peer-to-peer VoIP telephony is available free of charge, or on cheap flat-rate tariffs, from alternative providers such as Skype and Vonage. These pressures will only be accentuated when WiMax wireless access becomes widely available, as this will make such services even more attractive.
The quality of today’s peer-to-peer VoIP services, delivered on a best-effort basis, may not be great, but it is certainly acceptable enough for millions of users to adopt them enthusiastically. If this business model wins, the role of the existing operators can only become even more marginalised.
In response, the only way that operators can compete with independent VoIP providers is through quality of service. The core task for operators and vendors migrating to VoIP services, therefore, is to provide an IMS infrastructure that can ensure high quality and reliable voice services. In the long run, this is the only way to guarantee that users will be happy to pay for the entire set of NGN services.

Service migration through service convergence
The full deployment of IMS will take years, if not decades. Until then, IMS and next generation networks must co-exist peacefully with legacy networks. It is in the interest of operators that those revenue-generating value-added services that are being deployed on fixed and mobile networks today can also be seamlessly provisioned in an IMS environment.
Over the years we’ve seen the continued convergence between the Internet and the PSTN, between fixed and mobile networks, and between different types of mobile terminals such as PDAs and smart phones. In particular, new business opportunities created by fixed-mobile convergence have led several incumbent operators to re-integrate their fixed and broadband operations by co-operating with or re-acquiring mobile units that they had previously spun off.
IMS’s all-IP infrastructure will play a key role in supporting this convergence, enabling a common transport and switching infrastructure to replace the current disparate networks. However, IMS has an even more dominant role to play in convergence on the service layer, enabling the same services to be delivered to different terminals across different access networks. The fundamental paradigm behind IMS service migration involves enabling existing and future services simultaneously on both IMS and legacy networks.
This convergent approach to service migration requires a new service delivery infrastructure capable of spanning multiple networks simultaneously. Here, the concept of a ‘horizontal’ service delivery platform (SDP) can do exactly the same for the service layer as IMS is planned to do for the core networks. It enables operational cost savings and speeds up return on investment through re-use of service logic, service provisioning and even business processes – but for multiple networks.

Central concept
The central concept behind a convergent SDP lies in the idea of a homogenous Network Abstraction Layer, based on open telecom and IT standards such as OSA/Parlay and/or Web Services, and enabling the creation of a broad range of truly network-independent voice and data applications. Service creation becomes greatly simplified, with applications using high-level concepts of call management, messaging, charging or user location as opposed to low-level, protocol-specific features. This also allows for the simultaneous provisioning of the same application logic to different networks using both legacy SS7 and the newer SIP-based signaling.
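A highly simplified sketch of that abstraction idea follows: the same piece of service logic runs unchanged over either a legacy IN/SS7 adapter or a SIP/IMS adapter. The interfaces and method names shown are hypothetical illustrations, not the actual OSA/Parlay or Web Services APIs.

```python
# Sketch of a network abstraction layer: one application runs over either a
# legacy IN/SS7 network or a SIP/IMS network via protocol-specific adapters.
# Interfaces are hypothetical illustrations, not the OSA/Parlay APIs.

from abc import ABC, abstractmethod


class CallControl(ABC):
    """High-level, network-independent call management interface."""

    @abstractmethod
    def route_call(self, caller: str, callee: str) -> str:
        ...


class InSs7Adapter(CallControl):
    def route_call(self, caller: str, callee: str) -> str:
        # Would translate to INAP/CAMEL operations towards a legacy switch
        return f"IN/SS7: CONNECT {caller} -> {callee}"


class SipImsAdapter(CallControl):
    def route_call(self, caller: str, callee: str) -> str:
        # Would translate to SIP signalling towards an IMS core
        return f"SIP: INVITE sip:{callee} (from sip:{caller})"


def virtual_pbx_service(network: CallControl, caller: str, callee: str) -> str:
    """The same service logic, provisioned simultaneously on both networks."""
    return network.route_call(caller, callee)


for adapter in (InSs7Adapter(), SipImsAdapter()):
    print(virtual_pbx_service(adapter, "alice@office", "bob@mobile"))
```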
Here, however, we face another serious threat. Network operators and their equipment vendors might attempt to replicate existing IN service delivery models and create new ‘IMS-silos’. As a result, instead of a broad range of operator-hosted and third party services, only a limited number of IMS vendor-specific services will be commercially deployed, severely limiting potential revenues, flexibility, market share and competitive agility.
To avoid the creation of these handicapping ‘IMS silos’, operators have already started to create a new type of service delivery infrastructure able to support fixed-mobile convergence or global service provisioning across many mobile networks within the operator’s overall national or international group. Extending such convergent, horizontal Service Delivery Platforms to IMS will be a natural next step.
Network-dedicated SIP/IMS application servers will certainly be used for delivering IMS-specific multimedia and telephony services, in much the same way as IN service control points are used in legacy networks today. However, during the years of transition to IMS, such vertical service solutions will not reduce service logic and management redundancy but will actually increase it, limiting the OPEX savings facilitated by IMS.
IMS architectures open great new opportunities not only to service providers and network operators, but also to their customers and application developer communities. One of the major challenges on this road to all-IP networks powered by IMS architecture will be ensuring that the migration of services from the legacy to new infrastructure happens seamlessly and, to the end user, almost invisibly. 
A convergent, horizontal SDP architecture capable of spanning the IMS and legacy networks will play a crucial role in achieving this seamless migration of services to the IMS environment. Only by using a truly convergent SDP architecture can an operator simultaneously provide the same services on both legacy and IMS-enabled networks. This will not only reduce the risk of customers churning away while the operator migrates to IMS, but will also deliver potentially massive OPEX and CAPEX reductions for the operator community – which in turn will result in lower bills for the end customer.

Kristofer Kimbler is President and Founder of Appium AB: www.appium.com

Honey, I Pimped* the Network...

Firstly, can I apologise for using a word not traditionally occurring in telecoms. It did however seem admirably suited. A year or two ago, a few chums and myself spotted a creeping trend that we dubbed the ‘tabloidisation of telecoms’. For the non-British here, a ‘tabloid’ newspaper never overestimates the intelligence – or indeed the greed and venality – of its readers.

The thing that kicked off this bit of soul-searching was an announcement from Virgin Mobile that, at a sponsored superbike rally, a – ahem – helmet polishing service would be on offer: “All superbike fans like having their helmets polished, and the free service is available to all bikers, although we suspect the sexy Virgin Mobile Pit Police might prove to be a bigger hit with the men!”
Phanaar, phanaar...and other background sniggersounds of the adolescent male...
While I couldn’t agree more that it’s a great piece of marketing, aimed perfectly at their defined demographic, it did get me wondering if our once great industry was now marching resolutely downmarket for ever more.
Content is obviously the future but, for an industry that once prized itself on its public service ethos and had almost Reithian values when compared to the ravening wolves of the IT sector, this transformation is faintly embarrassing to observe at close range. Maybe a bit like watching your grandmother become a lapdancer?
More seriously though, we should start reassessing our relationships with the communities that we serve as our own offerings start to change dramatically. As we build our business plans on customer generated content or links with TV reality shows, are we going to set any limits for ourselves – or will we leave it to the, at times, literally naked market forces?
Chatting recently to Francis Schmeer of messaging and data services company, Empower Interactive, he used the interesting term ‘identity mediation’, describing a new potential role for the service provider as a multi-directional content filter, helping customers manage their own content streams as well as providing a wider watchdog role, at times, to deal with spamming and spoofing.
He felt that while some service providers would continue to recognise their public service role, others would happily follow their wallets wherever they led.
Taking a wider perspective, this is one issue that I see as crucial for the future of our industry. As our identities become increasingly diffused across an ever more pervasive digital landscape, we’re going to require support in managing them.
The transparency of communications and media these days means that personal can instantly become public – and vice versa – and boundaries will need to be set and controlled. That could be good business for some – and a valuable re-use of some old telecoms brand values of trust, reliability and integrity.
Now, where’s my cycling helmet?

*For those of you unfamiliar** with the youthspeak of the MTV generation, ‘pimp’ in this context is a verb, meaning to accessorise a car or house with inappropriate amounts of ‘bling’***, so that it resembles an idealised version of an American pimp’s car.
**Better learn to speak it – the youth demographic is the hot market for communications services and, if you’re going to keep your job, you’d better get down and dirty, daddy-oh...
***’Bling’ – do please pay attention at the back – noun describing the large amounts of gold and other jewellery worn by said pimps.

Alun Lewis is a telecommunications writer and consultant (and all-round ‘banging dude’). He can be contacted via: alunlewis@compuserve.com

IPTV is being viewed by the industry as the ‘killer app’ in triple-play that will save telcos from becoming commodity dealers. Jennifer Kyriakakis looks at the potential of the much-talked about technology

The telecom and media sectors believe that they are at the beginning stages of a long and fruitful romance. This romance will eventually lead to a permanent union termed the ‘ICE’ market: information, communications, and entertainment. The current ‘darling’ of this new merged marketplace is IPTV.

IPTV has been put on a pedestal and is touted as the ‘killer app’ of triple-play that will save telcos from becoming commodity dealers. Analysts are predicting that IPTV will grow from a $10 billion to a $40 billion global market by 2009, skyrocketing to 53.7 million subscribers. Given that five of the ten largest IPTV deployments currently have fewer than 50,000 subscribers, those numbers seem astounding. Whether IPTV growth will live up to those predictions depends on the value of the service, geographical conditions, how well telcos understand and meet subscriber expectations, and having the back-office infrastructure in place to support service deployments.
One thing is certain about IPTV: it is a ‘must have’ for telcos to compete effectively in the consumer ICE market. The telcos know this; capital expenditures for IPTV are expected to increase an astounding 1,377 per cent over the next four years (Infonetics). But there has been little focus so far on the consumer experience with IPTV.
What makes IPTV so compelling to the subscriber is not the ‘IP’ side, but rather the interactive piece, the control, and the convenience of use. IPTV can completely alter the way that information, communications, and media services are consumed at home. IPTV moves the TV experience from a passive model to an active, two-way communications model – where every subscriber experience can be unique. A subscriber may choose to receive only the channels and programmes that she wants, and will (hopefully) only pay for the content that she consumes. She can also consume other services over the TV, such as instant messaging a friend about a favourite show’s latest developments while both are watching it from their homes. This unique, interactive experience definitely qualifies as disruptive technology: but is it enough by itself to fuel massive subscriber adoption?
Telcos think the answer is a definitive ‘Yes’ – with the right content. For IPTV to succeed, the telco must match content already available through traditional pay TV providers, while offering much richer local and personalised content, information, and shopping experiences, as well as more targeted advertising. 

How can telcos manage IPTV?
Giving consumers micro-control over what is broadcast into their homes makes IPTV a more transactional type of service. Current pay TV is essentially subscription-based with a handful of VOD or PPV transactions per subscriber per month. But with IPTV, the subscriber requests the items they want to watch, a channel opens up, and the programme is ‘delivered’ to them. Multiply that by the number of TVs and the amount of available content, and suddenly TV is an event-based business with millions of transactions occurring each day. These transactions must be tracked at some level so that usage can be understood and properly charged for, and royalties distributed to the content owners (or delivery partners).
Perhaps telcos are better equipped to handle IPTV than ISPs or MSOs because their business was built on the usage- and event-based service of making voice calls. But, to effectively compete with traditional pay TV, telcos must simplify the tracking, charging, and billing of IPTV services. Finding the right way to bundle programmes and content as well as charge for live events versus static content will be a major differentiator for those telcos who successfully and profitably launch IPTV. 
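To make that tracking concrete, here is a minimal sketch – purely illustrative, with invented field names, tariffs and a hypothetical 60 per cent royalty split – of how a back office might rate a stream of IPTV usage events and apportion revenue to content owners.

from dataclasses import dataclass

@dataclass
class UsageEvent:
    account_id: str
    content_id: str
    event_type: str   # e.g. "ppv" or "vod" (assumed categories)
    minutes: int

PRICE_PER_EVENT = {"ppv": 5.00, "vod": 3.50}   # assumed flat retail tariffs
ROYALTY_SHARE = 0.60                           # assumed share owed to the rights holder

def rate_event(event):
    """Return (retail charge, royalty owed) for a single usage event."""
    charge = PRICE_PER_EVENT.get(event.event_type, 0.0)
    return charge, round(charge * ROYALTY_SHARE, 2)

def settle(events):
    """Aggregate charges per account and royalties per piece of content."""
    charges, royalties = {}, {}
    for e in events:
        charge, royalty = rate_event(e)
        charges[e.account_id] = charges.get(e.account_id, 0.0) + charge
        royalties[e.content_id] = royalties.get(e.content_id, 0.0) + royalty
    return {"charges": charges, "royalties": royalties}

day = [
    UsageEvent("acct-1", "movie-42", "ppv", 110),
    UsageEvent("acct-1", "series-7", "vod", 45),
    UsageEvent("acct-2", "movie-42", "ppv", 110),
]
print(settle(day))

Even this toy model hints at the scale problem: three viewings already generate separate rating and settlement records, and a real deployment would produce millions per day.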

How to price it?
A subscription-only pricing model won’t work. First, it erodes the service’s value to the consumer, putting it on par with existing TV. Second, service revenues will not be high enough, given the cost of providing the content that attracts subscribers.
While telcos are well versed in event-based services, they have limited experience delivering entertainment or TV, and even less experience dealing with media companies and content aggregators that own the distribution rights. Telcos’ primary experience in the content world is from the mobile or Internet perspective. Content partnerships in the IPTV world will be very different and most telcos don’t have relationships with the media companies that serve the TV market. Telcos must realise that most of these relationships will require prepaid guarantees and minimum subscriber levels, and that the revenue share is typically 60 per cent to the content rights holder.
Unlike traditional telecom and broadband services – where once the network build out is complete, additional service and subscriber costs are close to nothing – IPTV has the ongoing cost of procuring premium content and surrendering much of the service revenues over to media and content companies. This means IPTV margins will be much lower than telecom and broadband services margins.
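A back-of-the-envelope calculation – the figures below are entirely hypothetical – shows how quickly the content revenue share eats into IPTV margins compared with a pure connectivity service.

# Hypothetical figures for illustration only
arpu = 40.00                  # monthly IPTV revenue per subscriber
content_share = 0.60          # portion surrendered to rights holders
network_and_opex = 10.00      # ongoing delivery cost per subscriber

content_cost = arpu * content_share               # 24.00
margin = arpu - content_cost - network_and_opex   # 6.00
print(f"margin per subscriber: {margin:.2f} ({margin / arpu:.0%} of revenue)")

On these assumed numbers only 15 per cent of the revenue is left as margin, which is precisely why pricing, bundling and settlement need such careful design.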
The challenge
To be successful, IPTV services must: 
• Get the right content;
• Price, bundle, and promote the services with the margin mix that will be palatable to the consumer and profitable for the telco; and
• Have the ability to track large volumes of events as subscriber numbers grow, so that telcos can:
- Assure revenue and margins
- Settle properly with media and content owners
- Analyze customer usage for segmentation and price modelling
- Exploit the opportunity for cross sell and up sell
- Attract advertisers
Success will be especially challenging given the need to provide more localized content and the unique, discrete subscriber experience that this enables. 

The key
In the current reality of IPTV, the focus remains on network build out and capacity upgrades. But once the delivery mechanism is in place, the ability to quickly roll out new services, add content and content partners, and segment and target customers with bundling and promotions in a flexible manner will be key to IPTV service penetration. The telco back-office system for billing and customer management must be able to support this. As with any new service, if the back office is not constructed to handle new types of network usage easily, major development work will be needed to enable processing, charging, and guiding IPTV usage to the proper customer account.
Considering the myriad of new network elements that IPTV requires, some telcos will have to take the approach of either building out new back office applications to handle IPTV usage and billing, or adding on IPTV customisations to their existing billing systems. This can become extremely complex, especially if the back office is still operating in silos: processing existing customer service usage for broadband, VoIP, voice, and IPTV in separate applications and then cobbling all of that together at the end of the month to create a single customer bill.
Spending time and resources developing IPTV-specific back office functionality – or even worse, ripping out and replacing entire billing and customer care systems so that IPTV can be properly marketed, bundled, and charged for – will only delay potential IPTV revenues and add further CAPEX onto the already staggering network investments required. Any IPTV-specific billing and subscriber management functionality must be correlated to broadband, VoIP, and value-added service usage to create a meaningful and accurate picture of the consumer and broadband consumption.
Once telcos have their networks in place, the agile service provider won’t need a major overhaul to subscriber management and back office billing systems to fully realise the opportunity. Telcos that have already updated their back office to be NGN friendly will have a key advantage in service experimentation, business model development, and managing the new consumer experience of IPTV. Rather than have to build out specific IPTV back office functions, these service providers will be able to layer IPTV on top of existing broadband services and easily manage subscriptions, bundling, PPV, and service usage. Telcos that have a convergent IP billing platform in place will have the ability to quickly roll out IPTV services, bundling and promoting them on a single application together with VoIP and broadband, and to manage a single set of back office business processes for all IP services and revenues.
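As a sketch of what ‘guiding’ usage to the right account on a convergent platform might look like, the fragment below maps raw events from VoIP, broadband and IPTV onto a single billing account. The identifiers and event layouts are invented for illustration, not taken from any real product.

# Illustrative sketch: one guiding step shared by all IP services.
SUBSCRIPTIONS = {
    # network-level identifier -> billing account (assumed mapping)
    "dsl-line-0044":  "acct-1001",
    "sip-uri:alice":  "acct-1001",
    "stb-serial-9f3": "acct-1001",   # IPTV set-top box on the same account
}

def guide(event):
    """Attach the billing account to a raw usage event from any IP service."""
    account = SUBSCRIPTIONS.get(event["source_id"])
    if account is None:
        raise LookupError(f"unguided event from {event['source_id']}")
    return {**event, "account_id": account}

raw_events = [
    {"service": "voip",      "source_id": "sip-uri:alice",  "units": 12},
    {"service": "broadband", "source_id": "dsl-line-0044",  "units": 850},
    {"service": "iptv",      "source_id": "stb-serial-9f3", "units": 1},
]
print([guide(e) for e in raw_events])   # all land on acct-1001, one bill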

Already up and running
Several first-to-market IPTV service providers in North America and Europe have achieved these core competencies. When originally moving from dial-up to broadband, these service providers were able to put a back office in place that would support the ‘unknown’ service. By building business processes based on converged IP service delivery to the consumer, they were able to launch IPTV services quickly once the networks were in place – one in North America as early as 2003.
A European service provider deployed an IP-based back office solution in 2001 that has enabled them to layer broadband, WiFi, IP content, and now IPTV onto their subscriber offerings. They manage subscribers and revenues centrally, allowing them to be first-to-market and providing the ability to continue to innovate. Another leading IPTV provider in Europe delivers converged services, including voice, Internet, and television, over a single broadband connection to the home – utilising a fully IP-based architecture, including both DSL and fibre optics. Since 2000, they have grown from a local ISP into a full service triple-play provider utilising a single billing application to manage all services and revenues. As a pioneer in the IPTV market, they have been very successful, not only in subscriber growth, but also in achieving the highest ARPU in double and triple play in their market. Furthermore, because these companies were already using next generation business systems to manage multiple services, they could more easily develop and offer unique service bundles of different IPTV offerings, including channel bundles, PPV, and VOD. The flexibility of the back office has enabled IPTV pioneers to keep existing customers, attract new customers, and maintain their advantage in the increasingly hot IPTV market.

Capitalizing on IPTV
While the focus has been on IPTV CAPEX with equipment vendors and network upgrades, in the end it will be business models and content offerings that will prevail and determine whether or not IPTV will live up to subscriber and market expectations. Network infrastructure makes IPTV a possibility, but to deploy IPTV successfully and profitably, it must be an integrated part of the telco’s service portfolio and one that is tailored to – and charges for – the services that the customer wants and perceives as valuable. In the end, networks don’t generate revenues – consumers do. To fully capitalise on the IPTV opportunity, a convergent billing and back office business systems infrastructure will enable telcos to achieve maximum flexibility in determining how they bundle, promote, track, and charge for IPTV services. Proper back office planning will ultimately determine how quickly telcos can adjust business models and service offerings to meet consumer needs and demands. 

Jennifer Kyriakakis is responsible for IPTV and IP-based Billing and Revenue Management strategy for Portal Software; email: jkakis@portal.com   www.portal.com

Can IMS provide operators with the gateway they need to successfully deploy a truly converged network, asks Lars Johan Larsson

In market research there are different ways to measure trends in business, and to judge whether something is a temporary derailment of the discussion, or whether there is enough substance to have an effect on the future.

One way is by 'word dropping'. In discussions with TEMs (Telecom Equipment Manufacturers) and other vendors of similar gear, you drop a question like: “What is your strategy for this or that?” Dropping the IMS word in these situations always triggers a lengthy conversation filled with questions about what I see, what other people think, and so on. The interpretation is that the players in the market are very interested in, and concerned about, developments in this area. They feel that a major change is about to happen, but they are not clear about the detailed development and they do not have a firm strategy on how to approach the situation.
This article will take a look at the underlying trends and how they will affect the future of IMS and – I can almost dare to say it – the future scenario of the telecom industry.
The first and most important trend to look at is IP – IP as in Internet Protocol, which is worth spelling out since each abbreviation has numerous meanings these days. The telecom industry has renewed itself many times in the past and has come up with new communication protocols to satisfy the changing demands of the traffic. Consistently, the new technologies such as TDM and ATM were designed by the telecom industry for telecom requirements. With the adoption of IP as the carrier for payload traffic, the industry has embraced a protocol that was designed for Datacom by the IT industry. The result is that the centre of excellence in the chosen technology is located outside the telecom industry.
The next trend is COTS. The time has come for the major telecom companies to realise that it does not make sense that each vendor designs their unique and proprietary platforms any longer. It hasn't made sense for quite a long time but it took a major market collapse a few years ago for the industry to realise this. Coming out of the recession it was clear that the TEMs no longer possessed the resources needed to design the complete network from the bottom up. AdvancedTCA is the standardised system that the majority of the TEMs have selected for the payload servers. ATCA is specified by PICMG and looks to be a standard that has arrived in a timely fashion to satisfy market need. Some of the TEMs have proprietary servers that still have some life left in them, but the trend towards COTS will continue and the next generation will not be 'in-house specials'. So far the COTS trend has mostly been visible in the payload layer since the strategies for the control functions are still under development.
The final trend to mention is location – in this respect, the physical room where the servers will operate. Traditionally the network has been divided into the Central Office (CO) telecom environment and the IT-controlled Data Centre (DC) site. The physical requirements have been quite different between the two locations. The CO has required NEBS-compliant servers operating at up to 40 deg C with five nines availability. Other regulated factors include noise level, power consumption and the requirement that each system has an air filter for the inlet air. The DC is normally held at 25 deg C maximum and is dust controlled, so the requirements for air filtering do not exist; nor are noise or power consumption restricted. This rigid separation does not make sense any longer. Operators will have fewer and bigger switching centres and the environmental conditions will probably be a compromise between the two existing ones. The term Network Data Centre is starting to be used for the new environment. I have not seen any organisation that has started to standardise the systems requirements for this new type of site, but this will probably happen soon.
After having listed the most important trends it is time to sit down and think what this will all mean in the end. You can stop reading this now and do that. Or you can continue since I happen to have done this already.

Conventional wisdom
The IMS server is a computer according to all conventional wisdom. The actual traffic in the network is isolated to the payload layer and above that there are only control and database functions. The requirements on these computers are quite traditional in terms of performance, but more emphasis is put on scalability and availability. In addition to this there are the environmental specifications, dependent on the location, as discussed above.
IP has been adopted in the telecom environment, which means that IMS will happen. This might seem to be a very quick and drastic conclusion, but it is based on the fact that there is nowhere else to go. The operators must become service providers and compete with revenue generating services. The simple fact is that building separate silos of applications connected in a spaghetti network makes it virtually impossible to charge for these services. Each new service does not add to the complexity; it multiplies it. IMS can be seen as the billing gateway for the converged network, and the fact that it enables a simplified service delivery mechanism can be seen as a bonus.
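A quick illustration of why complexity multiplies rather than adds: in a spaghetti of silos, every application ends up wired to every other, whereas with a shared core each service needs only one connection. The figures and functions below are purely illustrative.

# Toy comparison of link counts; illustrative only.
def spaghetti_links(n_services):
    """Every silo wired directly to every other silo."""
    return n_services * (n_services - 1) // 2

def shared_core_links(n_services):
    """Every service plugs into one common control and charging core."""
    return n_services

for n in (4, 8, 16):
    print(n, "services:", spaghetti_links(n), "point-to-point links vs",
          shared_core_links(n), "via a shared core")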
We are also agreed that IP is an IT technology and the question is now who will 'own' this new environment. In the world of open solutions, ownership is becoming an important factor. It appears in quotation marks since it is a virtual ownership. The TEMs are investing more and more into their services business and the value offering to the operators is to design, install and maintain a network. This lifts the focus from details in the system platforms and locks out competition. The noble art of doing business in open-door environments is to close the doors smoothly. The same happens in the IT world. Outsourcing of whole IT departments is a trend in the industry. Even if this is not yet happening in the world of telecom operators, there is always a dominating IT vendor that is very helpful with the strategy development process.
On one side we have the TEMs who have some clear advantages. They know the requirements of high availability systems and they are driving the standardisation of the new protocols. They will compete with COTS equipment such as AdvancedTCA or commercial servers that are OEMed from the IT vendors. Some might try with proprietary servers that are adapted for the new environment, but this has quite a small chance of succeeding. The market wants open systems and this evolution cannot be stopped.
The IT vendors will, from their side, claim that this is an IT environment and that 'this is what we do best'. They will support the CIO in including IMS in the IT strategy, and this can be a strong argument in the end. An IT solution is also traditionally more cost-effective.
In the end the user decides, as always. This is by no means a simple decision since it is not an investment where they can calculate a cost-to-projected-revenue ratio and base the strategy on this. It is an enabling technology that the user will have to live with for a long time. The ultimate scenario where you can buy bits and pieces on the market and they will all play together as a big happy family is still far away. The question is whether it will ever happen.
The most probable scenario is that we will face a new spree of acquisitions where a few vendors will try to build a dominating position. This will predominantly be the telecom companies since they have rebuilt their balance sheets after the market crash and the money burns holes in their pockets when they see the potential in this market. The operators will not enter this market with a test-and-see philosophy, in the way they could with the introduction of individual service offerings. It will be safer to go with a vendor that has a value proposal that is strong and well backed by products.

Technology not good enough
The risk seen from the telecom view is a pc-fication of the environment. I don't know if the word already existed in the English language, but it does now. The meaning is that the PC has entered many areas of the IT world where the actual PC technology was not good enough. The temptation to try to do the job for half the price was big, however, and some people tried the low-cost way. It failed in the beginning, of course, but a new level of pricing was established. The fact that it didn't work was not so visible and the incentive was there to make it work.
The TEMs have the chance to dominate the new environment, but they will face a tough challenge. No doubt that they can build a functional solution, but will they be able to adapt to the new cost scenario? Or to understand that this equipment will not have to be built to run for 25 years come earthquake, flood or famine?
Just a final note regarding the pc-fication. The existing IT industry saw the PC coming at the time, and tried to adapt to the changes, but no company succeeded. The winners were all businesses that were born with the new industry and were designed to live with these requirements.

Lars Johan Larsson is a researcher in the field of COTS technology in telecom servers, and can be contacted via e-mail: lars@modt.se    www.modt.se

Rick Marshall describes how an Intelligent Infrastructure Management architecture can provide a robust and efficient method of provisioning resources to a network

When the document company Xerox implemented its ‘Change for Growth’ programme – a company-wide initiative to prepare for the opportunities and challenges posed by the digital revolution – a key part of the scheme was to align its products with its various vertical markets. The programme succeeded, but with needless casualties along the way. CEO Rick Thoman resigned due to criticism over the mounting costs and the time it was taking to implement.

What was true for Xerox in the late 90s is true for businesses today – they have to change and keep up with today’s ever evolving commercial landscape. This is why enterprises today structure themselves around their business processes as opposed to the past when most would have been regimented along departmental lines.
Managers have raised concerns about this cultural change. Chief among them is how hard it is to maintain a dialogue between individuals in different departments while still keeping the business running smoothly.
This inevitably leads to a large backlog of work orders for the IT department, which cannot handle the volume of requests for network provisioning. Many users are compelled to break policy and configure the network for their own ad hoc use. The integrity of the network is therefore jeopardised.
Basically, users require the tools to ensure that communication across existing structures is smooth and without interruption. The implementation of an Intelligent Infrastructure Management architecture could encourage this cultural change and bring about a more stable culture within organisations.
IIM benefits
The most obvious benefits of Intelligent Infrastructure Management (IIM) are the savings in time, resources and manpower in provisioning services. Normally, a network systems engineer would require an intimate knowledge of a building’s networking infrastructure to work out which switch port relates to which panel outlet. With IIM, that mapping is held in the system and kept up to date automatically.
Another important impact of IIM is that it will make it virtually impossible for users to reconfigure their own system and, when they try, the IT department will automatically know. Thus IIM will allow you to bring in enforceable policies that make your system more robust.
The intelligence gathered from the IIM architecture can also reap benefits for business. For example, the regulations and legislation that businesses have to adhere to, such as The European Data Protection Directive, Basel II and Sarbanes Oxley, emphasise the importance of data control and integrity.
This cannot be achieved without securing and understanding the underlying physical architecture. Organisations need to practise what lawyers refer to as due diligence – confidence that they are complying, to the best of their ability, with requirements for data control, security and reliability. Thus IIM becomes a trusted source of information.
Additionally, the intelligence from the network can simplify the auditing and tracking of an enterprise’s IT assets by detecting the type of hardware and software the users on its network have. Ordinarily, this would be difficult as users move around an organisation, but because IIM dynamically detects where users are, this becomes one less headache to deal with.
Top-down business changes often fail because the advantages are not evident to the rank-and-file employees who are left to deal with the chaos that changes in business practice can create. But when a user can raise a ticket and have an IT work order expedited quickly, that advantage will persuade staff to trust the IT department and help avoid the turmoil.
Businesses will then have the confidence to adapt to change if their staff are provided with the tools to do the job. Intelligent Infrastructure Management provides these tools and could avoid the communication glitches which organisational upheavals can generate.
Example 1 – Implementing a new IT project
Speed is very important when implementing any new IT project. The faster it can be brought on stream, the greater the return on the investment.
This is certainly true for enterprise wide projects, such as customer relationship management or disaster recovery. Provisioning the network can often take several weeks for these cumbersome projects. With IIM, the process can be cut down to literally hours.
Additionally, because IIM can look at the hardware itself, issues such as hardware compatibility are quickly pinpointed and taken care of.
Example 2 – Exhibitions & Conferences
Provisioning new users and providing new services for existing users can be a real chore. However, imagine if you had to commission thousands of workgroups that churn over on a weekly basis at a rate that’s off the scale – for example, a large conference or exhibition venue like the NEC in Birmingham, or Hannover Messe in Germany, which is host to CeBit.
All the ports and the panel outlets would have to be decommissioned on the Friday and the process would have to begin again on the following Monday. With IIM, it can all be achieved from the desk – dynamically, simply and without error.
Example 3 – Mergers and Acquisitions
Businesses occasionally merge or acquire other businesses; or spend a great deal of time and resources fending off being acquired by a competitor.
Often these campaigns are planned and fought behind closed doors, bringing together a team of outside consultants, internal business strategists and other specialists.
Very fast commissioning of new telephone lines, the speedy transfer of existing lines and the provisioning of other networking services (also known as adds and moves) are therefore essential.
With IIM, bringing these diverse professionals together into a single work group becomes a simple process. The team members may not even need to be based in the same location if IIM is deployed company-wide.

How IIM works
Relating an organisation’s network topology to the actual building plan is usually a manual process. The difficulty with these manual processes is that they are time consuming, complex and unreliable. For example, the network engineer may have to physically locate the panel outlet floor-by-floor – and office-by-office.
A manual record is then made and this is where errors usually creep in. With the number of new network services being commissioned on the increase, this system can break down.
With IIM, user provisioning is all done at the desktop. The switch ports are labelled, as are the voice ports, and these labels are applied to the sources.
When a ticket is raised with the IT department to provision a voice or LAN service to a particular desk, IIM works in the background to calculate all the patching required to deliver the service down to the desk. However complex, it is done in seconds. A work order is then distributed to the technicians who will do the patching.
In the cabinet, LED indicators flash, guiding the technician to the correct ports. Once the technician has completed the job, the work order is automatically closed.
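A heavily simplified sketch of that workflow appears below. The data model and patching logic are invented for illustration; they are not a description of any vendor’s IIM product.

# Illustrative sketch of IIM-style provisioning: map a desk outlet to a free
# switch port and emit a work order. All names and structures are assumptions.
FREE_SWITCH_PORTS = ["SW1/0/7", "SW1/0/9", "SW2/0/3"]
OUTLET_TO_PATCH_PANEL = {
    "floor3-room12-outlet4": "PPANEL-3B/24",
    "floor3-room12-outlet5": "PPANEL-3B/25",
}

def provision(outlet, service):
    """Compute the patching needed to deliver a service to a desk outlet."""
    panel_port = OUTLET_TO_PATCH_PANEL[outlet]   # where the desk terminates
    switch_port = FREE_SWITCH_PORTS.pop(0)       # next available switch port
    return {
        "service": service,
        "patch": (switch_port, panel_port),
        "instruction": f"Patch {switch_port} to {panel_port}; LEDs will guide you.",
        "status": "work order issued",
    }

ticket = provision("floor3-room12-outlet4", "LAN")
print(ticket["instruction"])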

Rick Marshall is Managing Director of Comunica Limited   www.comunicaplc.co.uk

Lynd Morley examines the latest players in the mergers and acquisitions arena and what the proposed deals signify about the telecoms industry in general

Already dubbed the battle of the brands – Virgin versus Sky – the recent announcement of a proposed merger between Virgin Mobile and NTL has been accompanied by some undisguised glee – in the UK press, at least – at the prospect of someone finally taking on Rupert Murdoch’s empire, and giving it a run for its money.

The deal would create a communications and entertainment giant valued at around £4.9bn, reaching some 5m mobile customers, around 2.5m broadband Internet customers, and 4.3m fixed-line accounts, and would be the first company in the UK to have a quadruple-play offering of mobile, broadband services, fixed line and TV. 
Effectively a reverse take-over, NTL would be re-branded Virgin, and there can be little doubt that Virgin’s considerable experience – and success – in marketing its brand will add much-needed pizzazz to the somewhat duller cable group.
Good branding, of course, does not, on its own, guarantee success. As Sue Richardson, Principal Analyst at Gartner, comments: “The Virgin brand will not be a panacea for NTL’s poor customer service reputation.  If the service is bad, re-branding will not help.”
She goes on to comment, however, that, overall, there are undoubtedly opportunities to be gained from the deal. “Bundling and cross selling will be the initial push, but a more aggressive and focused content strategy will be necessary to take full advantage of the opportunities,” she comments, noting that NTL has always been ambivalent about content, historically describing itself as an infrastructure company.
“When the merger with Telewest was announced, it was uncertain whether Flextech would be retained or sold off. The TV part of NTL’s triple-play represents its lowest margin business, and is regarded as a poor relation by the Internet and telephony businesses. This deal, though, could signify a change of approach to focus and develop a more aggressive content strategy.”
Gartner, of course, were not alone in being quick to comment on the merger news. Taking a more pragmatic view than the press’s general celebration of a giant-killer – the giant being Rupert Murdoch, the would-be killer Richard Branson – Capgemini Telecom Managing Consultant Jerome Buvat believes the merger is mostly a defensive move by two players facing increasingly competitive market conditions.
“In the TV market,” he comments, “NTL is facing growing competition from Digital Terrestrial TV which now captures more than 70 per cent of net digital TV additions. Competition from BT launching TV over DSL broadband and the likely similar moves from local loop unbundlers will also limit NTL’s growth in TV.”
Things don’t look a lot rosier for the cable company in the broadband market, either.
“DSL represented more than 80 per cent of new broadband connections in 2005 and this proportion is likely to grow as local loop unbundlers increasingly enter the market,” Buvat comments, adding that in the voice market, NTL is facing pressure from both VoIP players like Vonage, and from resellers like Carphone Warehouse which have been very successful.
He goes on to point out that Virgin Mobile is also facing tough competition in the mobile space, from both new entrants and incumbents. Tesco Mobile, for example, added more customers than Virgin in the first nine months of the year: 185K customers compared to 122K for Virgin. Virgin also saw its churn increase from 22.6 per cent in 2004 to 24 per cent in 2005.
Buvat concludes: “The merger will certainly help both NTL and Virgin, but is unlikely to resolve NTL’s competitive issues in the TV and broadband markets. Virgin will bring a strong brand to NTL and will undoubtedly benefit from NTL’s strong presence in the family segment.” 

Lynd Morley is editor of European Communications

Data storage is providing the means for organisations to deal with the dramatic growth in business information, as well as complying with legal requirements. Richard Cramer looks at the options

The last three decades have seen the relational database grow to prominence and reign as the king of data management solutions through a combination of good technology and strong marketing. However, it should come as no surprise that the needs of the business, and the characteristics of the challenges that must be met, have changed dramatically over this same time period, and a sea-change is underway at this very moment.

The reason for the emergence of storage as the premier data management solution is the growing need for companies to retain increasing volumes of historical data, and to keep it longer than they have in the past. The volume is growing both in terms of business transaction data, such as phone call records and retail point-of-sale data, as well as unstructured data such as text documents and multimedia files. There are two key drivers for keeping the data longer: business needs for improving operations and customer service, and mandates for industry and government compliance.

Simply storing data is not enough
The rampant growth in the volume of business data being generated is frequently cited as a reason for the enthusiasm surrounding the storage market. However, this enthusiasm must be tempered with the realisation that simply storing data is not enough: it must be accessible and useable to justify storing it in the first place. And this is where there is genuine reason for excitement, with vendors attacking the remaining obstacles to quickly retrieving specific information of interest even when it is stored within terabytes of data. Although the dramatic growth in business data volume is driving the need for storage, it is the ability to locate quickly and make use of the stored data that will elevate storage to prominence as a strategic data management solution.
Enterprise search vendors represent the most visible new segment – helping to make unstructured data more accessible by quickly allowing end-users to find documents of potential interest located among thousands or millions of similar documents stored on disk throughout the enterprise. Examples of unstructured data are files such as spreadsheets, e-mail and word processing documents that have proliferated at a rapid pace along with the move to electronic business documents. This is essentially a Google-like search within the enterprise.
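The mechanism underneath that kind of search is, at its simplest, an inverted index mapping words to the documents that contain them. The toy sketch below shows the idea only; it is not how any enterprise search product is actually built, and the documents are invented.

from collections import defaultdict

# Toy inverted index: each word points to the documents containing it.
documents = {
    "doc1.txt": "quarterly revenue forecast for broadband services",
    "doc2.txt": "minutes of the IPTV content licensing meeting",
    "doc3.txt": "broadband churn analysis and revenue impact",
}

index = defaultdict(set)
for name, text in documents.items():
    for word in text.lower().split():
        index[word].add(name)

def search(query):
    """Return documents containing every word in the query."""
    results = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*results) if results else set()

print(search("broadband revenue"))   # {'doc1.txt', 'doc3.txt'}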

Databases do not solve all problems
But unstructured data is not the only area of the business data landscape growing rapidly. Another category is structured data that can never change once it is created – what is increasingly being referred to as ‘business event data’. Examples of this type of data include banking transactions, credit card transactions, phone call records and point-of-sale transactions. This is information that has traditionally been kept in relational database systems. However, the sheer volume of data being generated makes this approach increasingly impractical due to the cost, complexity and constant performance tuning required as databases grow in size.
The opportunity for storage as a standalone solution is created by the simple fact that business event data needs very few of the capabilities of the relational database: since the data does not change, there is no need for all of the transactional integrity features of the database that impose a costly overhead on performance. What’s really needed for large volumes of business event data is nothing more than an enterprise search capability that works with structured business transaction data rather than just web pages and documents. In this manner, the enormous volumes of historical transaction data can be stored in low-cost file systems located anywhere on the network, rather than in an expensive relational database.
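As a minimal illustration of that idea – immutable event records appended to a plain file, with a lightweight index used for retrieval instead of a full relational engine – consider the sketch below. The file format, field names and indexing scheme are invented for illustration.

import csv, os
from collections import defaultdict

STORE = "call_events.csv"
if os.path.exists(STORE):
    os.remove(STORE)                       # start the demo from a clean file
index_by_subscriber = defaultdict(list)    # subscriber -> row numbers in the file
row_count = 0

def append_event(subscriber, called, seconds):
    """Append one immutable call record and note its row number in the index."""
    global row_count
    with open(STORE, "a", newline="") as f:
        csv.writer(f).writerow([subscriber, called, seconds])
    index_by_subscriber[subscriber].append(row_count)
    row_count += 1

def events_for(subscriber):
    """Return this subscriber's records, using the index to filter the file."""
    wanted = set(index_by_subscriber[subscriber])
    with open(STORE, newline="") as f:
        return [row for i, row in enumerate(csv.reader(f)) if i in wanted]

append_event("4479000001", "4420700002", 182)
append_event("4479000003", "4420700004", 31)
append_event("4479000001", "4420700005", 640)
print(events_for("4479000001"))
os.remove(STORE)                           # tidy up the demonstration file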

The database is not dead
This is not to say that the relational database is going to be replaced by storage-only solutions. The relational database will remain the solution of choice as the foundation for both applications and for the storage of structured, changeable data for the foreseeable future. What is likely to change dramatically, however, is the presumption that a database is the only place to store vast quantities of read-only data that require none of the sophisticated capabilities of the database management system. It is far more likely that storage will assert its cost advantages for this type of data, and when paired with innovative indexing and search technologies will end up providing equal or better data retrieval performance far more cost-effectively than a traditional database.
A key difference between a storage-centric solution and a database-centric solution is where the majority of the effort gets applied. With a storage approach, minimal work is involved in simply capturing information on disk in whatever format it may be, which is both quick and cost effective. Only when a piece of information is retrieved is the work applied to reformat it and structure it for presentation. With a database, most of the effort is up-front in defining the database schema, transforming the data to fit into the schema, and loading the data into the database in such a way that it can be retrieved in the time available and in a format appropriate for the intended use. This approach makes it easier to pull information from the database when needed, but also means a great deal of effort is wasted on data that will never get used. As a result, most enterprise architectures will need a combination of both storage and database solutions that are matched to the unique needs of the problems they solve.
What this all means to technology professionals, particularly in the fast-moving communications market, is that the information architectures of the future will likely look very different from current database-centric architectures. Neil Macehiter and Neil Ward-Dutton of Macehiter Ward-Dutton address this very issue in their white paper The challenges of business event data management. In a world where the fast-moving business, regulatory and technology landscape is creating new problems at a rapid pace, they offer a perspective that encourages technology professionals to carefully examine the characteristics of the problem at hand before assuming that a convenient and familiar technology is the best solution. To illustrate their perspective, they characterise the requirements of online transaction processing systems (OLTP), such as customer service or billing, against the needs of online analytical processing systems (OLAP), such as a data warehouse. This comparison very clearly shows why two solution categories have evolved to support these very different needs, with each solution exhibiting strengths specific to the problem at hand.
Perhaps most interesting, however, is the point made when they overlay their OLTP and OLAP analysis with the solution requirements for the retention of large volumes of business event data. This is exactly the problem domain presented by the proposed communications data retention legislation in both the UK and European Union, and provides a clear illustration that neither the OLTP nor the OLAP approaches for managing and retrieving data are well suited to the unique requirements of data retention.

It’s time again for innovation
In the end, the Macehiter/Ward-Dutton analysis is thought provoking rather than prescriptive of a specific solution. What it does quite well is clearly show how solutions evolve over time to meet the needs of specific business challenges, and it suggests that business event data retention merits a broader investigation. Numerous advanced technologies have been developed and tested during the last five years. And although technologists and business people alike may just now be recovering from the hangover brought on by the excesses of the Internet boom, it is time to evaluate these innovations, which are now mature and ready to solve large-scale commercial problems.

Richard Cramer is VP Marketing, CopperEye, and can be contacted via tel: +44 1225 745500; e-mail: info@coppereye.com

Deploying a Software Powered Services Network introduces numerous possibilities for operators to generate new revenues, as Priscilla Awde explains

There is a bit of a buzz about the communications industry these days. It centres on the explosive potential of using Web services technology to aggregate disparate Internet applications into innovative, complex new composite service bundles. Telcos and enterprises can combine content from the IT and Internet worlds to design and distribute new applications to target audiences fast and within tight budgets.

One of the drivers in the digital revolution, Microsoft, has devised a new concept in service development and delivery. Sitting above traditional network control layers is a new software powered Service Network layer which links into existing legacy OSS (Operational Support Systems) and BSS (Business Support Systems) infrastructure.
However, it is not enough simply to add an IP Service Network. To bring it alive, Microsoft has brought together its expertise in developing software for voice, data and video solutions into a new delivery and control platform based on Web services architecture.
The Connected Services Framework (CSF) is an integrated software platform which marries a Service Oriented Architecture (SOA) with Web service interfaces, providing the tools to build and manage new applications. In an SOA environment, applications are broken down into processes and functions called 'services' which are able to 'communicate' with each other over the open standard Web services interface. Developers can therefore use standard Web services technologies to create complex composite services which operators can deliver to any end-user device including televisions, telephones, computers or game consoles.
Until now most Web services have not been well co-ordinated, behaving instead as distinct, separate applications. However, Web service technologies are rapidly maturing into the open standards capable of connecting applications within and across IP domains, so becoming the building blocks for the composite Web applications of tomorrow.
Developers are already using IP based technologies to aggregate content into new composite services delivered automatically over broadband networks. The CSF platform makes it possible to combine those composite services into more sophisticated new applications.
The implications for network operators and large enterprises are interesting. Using an SOA as their application development infrastructure, it is easy to formalise core software which can quickly be assembled into applications designed to meet new opportunities and market demand. The possibilities are exciting, as existing applications can be dynamically augmented and aggregated into sophisticated composite services faster and at lower cost than previously possible.
Camera phones will scan barcodes, automatically send signals to interrogate databases, and accept text messages with the resulting stock information, so customers can compare prices instantly and use their mobile to purchase goods. Such complex applications marry SMS, barcode recognition and online retail services, plus shipping and billing.
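Plumbing like that amounts to chaining independent service calls together in an orchestration layer. The sketch below uses stand-in functions in place of the real barcode, pricing, billing and shipping services – it illustrates the composition pattern in general, not CSF itself, and every name, payload and price in it is invented.

# Illustrative composite-service orchestration; each function stands in for an
# independent Web service.
def decode_barcode(image_bytes):
    return "5012345678900"                           # stand-in barcode service

def lookup_offers(ean):
    return [{"retailer": "ShopA", "price": 19.99},   # stand-in pricing service
            {"retailer": "ShopB", "price": 17.49}]

def charge(msisdn, amount):
    return f"charged {amount:.2f} to {msisdn}"       # stand-in billing service

def ship(ean, address):
    return f"shipping {ean} to {address}"            # stand-in logistics service

def purchase_by_camera_phone(image, msisdn, address):
    """Compose the independent services into one end-to-end transaction."""
    ean = decode_barcode(image)
    best = min(lookup_offers(ean), key=lambda o: o["price"])
    receipt = charge(msisdn, best["price"])
    dispatch = ship(ean, address)
    return {"product": ean, "offer": best, "billing": receipt, "fulfilment": dispatch}

print(purchase_by_camera_phone(b"...jpeg bytes...", "+447700900123", "1 High St, London"))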
Small businesses will be able to exploit services such as e-mail, instant messaging and Voice over Internet Protocol (VOIP), over broadband connections to centralise their contact databases and route voice calls over fixed and wireless connections, thereby giving them more ways of communicating with their mobile employees.
Such flexibility is difficult in existing architectures because Enterprise Application Integration (EAI) and Enterprise Service Bus (ESB) systems result in tightly coupled service environments which are difficult to scale, manage and maintain. In contrast, the less structured, more dynamic CSF environment allows several different types of services to be rapidly provisioned, built and combined in new and exciting ways to suit particular demand.
In traditional networks, deploying value added services is closely linked to the infrastructure since software must be integrated with proprietary protocols controlling network equipment. The new Software Powered Services Network undoes this link by divorcing the underlying network infrastructure from the applications and services carried over it.
A Software Powered Services network is organised in similar ways to traditional infrastructure in which the transport layer is managed by a control layer. In this new architecture, the Service Network Control layer not only organises service creation, enablement and aggregation but also provides the interfaces to existing BSS/OSS environments.
Designed as a service enablement and aggregation platform, Microsoft's Connected Services Framework is at the heart of the Service Network Control Layer and creates links into legacy OSS/BSS components. By using service oriented architecture and XML Web services, it makes the development and delivery of open, secure, scalable and cost effective complex composite solutions fast and easy.
Operators can use the management layer to combine applications dynamically, exploit the web services currently available in the Internet domain and thereby significantly reduce the costs of bringing new products to market.

New services delivered
Since the Service Network Control Layer embraces the IP Multimedia Subsystem (IMS), which brings together wireline and wireless architectures with SIP, new services can be delivered independently of the type of network, be it DSL, WiMAX or 3G.
By using industry standard interfaces, telcos can expose their application environment to selected third parties, thereby allowing them to create their own services. In opening up their telecoms environment, operators can provide access to millions of developers rather than the few hundred IT developers who currently have access. In Japan, DoCoMo has successfully implemented such an application development and revenue sharing model.
Learning from the success of the 'pay-as-you-go' mobile phone payment system, software vendors and enterprises are looking to exploit market opportunities for delivering Software as a Service (SaaS). This emerging trend allows customers to buy software on a 'pay-as-you-use' basis thereby avoiding significant up front capital expenditure and in-house operational costs. The value of this market is expected to grow to $9 billion by 2009.
Deploying a Software Powered Services Network introduces numerous possibilities for operators to generate new revenues and many are already using it to provide hosted e-mail, instant messaging and VOIP to small businesses. A recent IDC study estimated that spend on Web services software will grow from $1.1 billion in 2003 to $11 billion in 2008.
In taking the new model to market, the combined strength of an individual telco's and Microsoft's brand names is opening doors into the business and consumer communities. Operators such as BT, Verizon, SBC, i-City and Bell Canada are already using CSF to reduce their development and deployment costs, speed time to market and launch innovative new services. Making it even easier, Microsoft has created a range of services including Hosted Exchange, Live Communications Server, VOIP and Microsoft-TV (IPTV) which telcos can take to market today, and the company is developing hosted CRM and ERP services.
Exploiting emerging Web service technologies in a dedicated Software Powered Services Network has the potential to do for the world of digital multimedia applications what browsers did for the Web and is likely to have a similar impact on work and leisure time.
Realising the power of Web services depends on creating an environment in which data services flourish and devices become more sophisticated. It needs a platform capable of exploiting, aggregating and co-ordinating the vast complex of existing Web applications easily and in new ways to the benefit of end users. As core Web technologies mature and converge, CSF may be one of the catalysts needed to realise the dream of the digital revolution.                           

Priscilla Awde is a freelance communications writer
