Features

Internet access for all has been the rallying cry for governments and the UN. But even in Europe the division between the haves and have-nots has yet to be fully addressed. Andrew Davies looks at the issues

The Digital Divide is a major issue at a national, European and world-wide level. In simple terms, it is the divide between those who have access to computers and the Internet and those who do not. The divide can be economic, demographic or geographic. It is the geographic dimension that is an issue for telecommunication service providers and the routes to its resolution could be a significant opportunity.
In modern economies, access to computers, the Internet and, increasingly, broadband communications is seen as essential for future prosperity. Consequently, lack of access could be a significant brake on economic growth. On a national basis, in countries such as the UK where penetration of computers and the Internet is high, the provision of access to broadband connection is important enough to be embodied in government policy. This view is held by most Western European countries, the European Union and the United Nations. With the enlargement of the EU, the need to provide equitable access to what is referred to as the Information Society is seen as essential.
New member states
The state of play in Europe varies, particularly in some of the new member states. The penetration of fixed line access is low, which means that significant proportions of the population do not have access to voice telephony, let alone the Internet. In some of these countries, as much as 25 per cent of the population have never even heard of the Internet. With a poor fixed line infrastructure, penetration of broadband is also low. This means that there is much work to do if the geographic digital divide is to be closed.
It is interesting to compare the situation of the new member states with the established members where the expectation is that the geographic digital divide should already be solved. In the UK, BT recently announced that all households would be in reach of a broadband connection by 2005. Their most recent estimate is that the latest technology will enable them to connect all but 100,000 individual users.
By contrast, a recent workshop in France defined the minimum broadband connection as 2 Mbps by 2007 and estimated that as much as 20 per cent of the French population would be out of reach of terrestrial connection at that rate. Spain and Italy have similar concerns and their national governments are developing programmes to address the issue.
National government and European Union concern about the digital divide in Europe potentially creates an opportunity for providers of telecommunication services and equipment. However, it is useful to answer a few pertinent questions to determine the extent of the geographic digital divide:
1. How real is the demand for Internet and broadband connection?
2. How great is the demand and how much is unmet by current means?
3. How can government institutions help and should it be left to market forces?
A quick look at the situation in the UK answers the first question. From Ofcom figures, personal computer ownership is tending towards 70 per cent of the population by 2007 and Internet connection is tending towards the same figure. If 70 per cent of the population is connected to the Internet by 2007, it is likely that all of them would be potential users of broadband. In the business community, particularly in the Small to Medium Enterprise (SME) sector, penetration of Internet connection is even higher and broadband take-up is already at 34 per cent and rising. As SMEs are the lifeblood of any economy, these figures indicate that Internet access, and increasingly broadband connection, are important to a thriving economy. This is especially true in rural areas where SMEs are often the main employer.
For the new member states, additional factors come into play. A good example is the impact of the Common Agricultural Policy (CAP). This places significant demands on record keeping and form filling for farmers in the new member states. This means that the farmers will need computers and, ideally, Internet connection for e-mail and information services. Additionally, village based communities are more numerous in the new member states and the importance of the rural SME is even greater.
This indicates that there is undoubted demand and economic need for Internet access and, increasingly, broadband connection. This leads to the second, more complex issue: how much of this is unmet by current connection means? BT has indicated that, by 2005, all but 100,000 users will be broadband enabled if they want it. With 70 per cent PC ownership, this implies around 70,000 users who will want broadband but will be out of reach. If this figure is increased pro rata across Europe, then the unmet demand becomes 600,000. However, the French figures indicate that as much as 20 per cent of their population will not have access to broadband by 2007, an unmet demand based on PC take-up of around 8.5 million which, scaled up to the enlarged Europe, gives an unmet demand of over 60 million.
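As a rough sanity check, this arithmetic can be reproduced in a few lines of Python. The population figures used here are round assumptions, and the article itself rounds, so the results only approximate the quoted 600,000 and 60 million:

# Back-of-envelope reproduction of the unmet-demand arithmetic above.
# Population figures are round assumptions, not sourced data.

pc_ownership = 0.70                           # Ofcom trend figure for 2007

# UK, optimistic case: BT says all but 100,000 users are reachable
uk_unmet = 100_000 * pc_ownership             # ~70,000 users

# Scale pro rata to the enlarged EU (~450m people vs ~60m in the UK);
# the article rounds this to 600,000
eu_unmet_low = uk_unmet * (450 / 60)          # ~525,000 users

# France, pessimistic case: 20% of ~60m people out of terrestrial reach
fr_unmet = 60_000_000 * 0.20 * pc_ownership   # ~8.4 million users
eu_unmet_high = fr_unmet * (450 / 60)         # ~63 million users

print(f"UK: {uk_unmet:,.0f}  EU low: {eu_unmet_low:,.0f}  EU high: {eu_unmet_high:,.0f}")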
This indicates that there could be a large number of households and small businesses that want Internet access and broadband connection but are unable to get it. In some of the new member states there is the additional issue of the ability to pay for such services and this plays into the answer to the last of the three questions, on the role of government.
A number of EU and national initiatives are in progress but there are limits to what government can achieve. The two leading issues are the need to maintain competition and the need for 'technology neutrality'.
Contravening regulations
A national government is unable to simply give money to a service provider to connect remote users. This is seen as a subsidy and contravenes European and world trade regulations. Similarly, as technologies such as satellite already claim to cover most of Europe, albeit at higher cost, providing incentives to satellite service providers to bridge the digital divide is not 'technologically neutral'. However, funding can be made available that falls within the rules but enables rural users to have broadband access. In the UK, Regional Development Agencies provide grants to businesses that may be disadvantaged by the lack of broadband. Because the grant is made on an individual basis, the end user can choose how the connection is delivered, maintaining technology neutrality and competition. If, for example, satellite access is the only means of delivery to the user, he or she may still have a choice of competing service provider, thus staying within the rules.
For new member states, the EU funding available for disadvantaged areas, known as structural funding, can be used if Internet and broadband connectivity can be shown to be necessary for economic development. In Poland, grants will be made available to 100,000 farmers to enable them to comply with the requirements of the CAP. If this money is used to buy computers and Internet connections, it is still within the rules.
In the Nordic countries, tax breaks for users to encourage demand and for suppliers to encourage build out have enabled the fixed line infrastructure to effectively reach out further.
So, the three questions can be answered. We have confirmed that there is a growing demand for Internet and broadband connection across Europe and that a significant proportion of that demand cannot be met economically by fixed line connection in the current market. There are ways institutional funding can be used to underwrite some of the cost of connection. So how should the service provider community respond?
The first point to make is that these 'digitally divided' users in Western Europe are outside the economic reach of fixed line service providers. In many of the new member states, the fixed line infrastructure is very limited outside urban areas. Leaving it to market forces could result in a growing gap between those with Internet and broadband access and those without. However, if institutional intervention takes place to stimulate the market, someone still has to deliver to places where it is not currently economically feasible to do so. This situation is exacerbated in some of the new member states for two reasons:
• Average incomes are between a half and a quarter of those in Western Europe, so user subscription levels need to be lower.
• The successful roll out of GSM in some of the countries has meant many users are bypassing the fixed network altogether, discouraging build out of the fixed line network by the incumbents.
This implies that there is an opportunity for wireless connection for broadband, embracing both fixed wireless and satellite. There may also be an opportunity for newer technologies such as powerline delivery through the electricity infrastructure.
Satellite service providers such as Eutelsat and SES Astra are already looking at satellite/WiFi combinations to provide community broadband. The lower cost WiFi for local access combined with the more expensive satellite for trunking looks promising provided issues of scaling are successfully addressed. If so, the model could work anywhere in the satellite footprint. Low cost satellite access models are also being addressed, as the unmet market may be of sufficient scale to provide the volumes needed to drive down user terminal costs. Powerline – the delivery of communications via electric power cabling – is beginning to look more attractive because of the scale of the potential demand. It may even be argued that 3G networks could cover some of the unmet demand, based on the success of 2G networks in new member states.
What is emerging is that the issue of the digital divide could be addressed by the technologies that promised much in the late 1990s but never quite delivered. The government imperative may be the key to unlocking the potential of these technologies, provided service providers are able to integrate them in their service offer. There are many value chain issues to be resolved, particularly in new member states where credit cards and bank accounts are not the norm and where consumer scale distribution chains are relatively primitive. Innovation in service delivery is required in addition to innovation in technology.
One issue is clear. Western European governments believe that the Internet is an essential tool for the citizen and that broadband connection will form part of the path to future growth and prosperity. If this is true, those outside the reach of these technologies will be seriously disadvantaged, with resultant shifts in the patterns of business away from rural areas and less well connected countries. This is contrary to the aims of the enlarged European Union. However, it is not enough for government institutions to provide state support to service providers to solve the problem. The service providers themselves need to be able to offer innovative solutions that combine with institutional support to address the digital divide.  This could be a great opportunity for some niche technologies to come to the fore over the next five years.

Andrew Davies is Business Development Director at strategic technology consultants ESYS, and can be contacted via: adavies@esys.co.uk

Wireless service providers are now appreciating the vital role that OSS can play in a successful operation, says Kieran Moynihan 

With the exception of areas such as billing, OSS has never enjoyed a high profile. In the eyes of many a CFO, it was just a cost centre, strewn around the back office, and senior management in general didn't really understand what exactly it did day-to-day. During the severe opex and capex cuts at wireless service providers over the last two years, senior management have appreciated for the first time the importance of OSS in streamlining the operations of the network and acting as the overall foundation for the business processes employed in the network. As network complexity continues to spiral and there are fewer people to manage the network, OSS has now entered a new phase of its evolution as the cornerstone of operating a wireless business.
Wireless service providers are under unrelenting pressure to improve their earnings performance, leverage their existing network infrastructure, improve customer satisfaction and bring new services to market more quickly. They are competing in an environment of intense competition with rapidly evolving and diverse technology challenges. One of the most expensive and painful problems facing wireless service providers today is that their multi-vendor OSS systems do not offer the level of interoperability and flexibility required to efficiently achieve their business goals, while reducing both the cost of ownership of the OSS and the effort needed by the service provider to maintain the OSS infrastructure.
The importance of interoperability
Both the wireless service provider and the OSS community have long understood the business benefits of interoperability. However, in practice, the level of open standards and interoperability achieved has been very disappointing. The OSS vendor community, the system integrator community and the wireless carriers need to share the blame for the lack of progress here. On the vendor side, there has always been a reluctance to work on standards with competitors, coupled with the financial pressure to maximise deployment and integration revenue from deploying 'complex' solutions. System Integrators did what came naturally to them and assumed the responsibility of knitting the disparate OSS systems together in expensive integration projects. Wireless service providers discouraged OSS products companies from conforming to standards by frequently requesting custom solutions to meet their special requirements, thereby creating a spiral of legacy OSS integration which is still in existence today.
The OSS landscape is now undergoing a fundamental change, with a genuine sea-change in attitude to the importance of OSS standards and interoperability. In the current climate of microscopic focus on opex, service providers cannot keep sustaining the 'OSS integration tax' and unnecessarily high cost of ownership. Wireless service providers have accordingly increased the emphasis on OSS vendors conforming to the TeleManagement Forum (TMF) and emerging standards such as OSS/J. Wireless OSS vendors are reacting to this pressure from the service providers. Interestingly, they are also reacting to increasing pressure from their shareholders, who have recognised that in most cases the most successful OSS products companies, in terms of shareholder value, had high-volume OSS products with particular strengths in ease-of-deployment and interoperability. Finally, the system integrator community is beginning to shift its revenue focus away from costly integration projects towards high-value business transformation projects, such as the business process transformation that accompanies the introduction of a Service Quality and Service Level Agreement management system.
While the basic concepts of service provisioning, activation and service assurance are well understood, the process of integrating these systems together so that they 'talk' to each other has been difficult, requiring service providers to undergo lengthy and costly software customisation projects or to build entirely new applications. The challenge of integrating diverse OSS components is a deep-rooted problem in the evolution of OSS technology. Wireless service providers are realising that it's no longer feasible to develop and maintain a costly customised OSS solution environment, particularly with the increasingly complex infrastructure environments of 2.5 and 3G networks.
Service providers try to use a best-of-breed approach, identifying the OSS vendors that offer the best product or service. However, at some point, these disparate OSS solutions must be integrated into a unified OSS system to manage all aspects of their business operations. In order to achieve interoperability, wireless service providers are faced with cost-prohibitive integration costs and a lengthy data migration process that can often stretch into years. Wireless service providers who want to profitably deliver next-generation services must rely on OSS vendor support to meet the changing demands of consumers in an evolving digital economy.
The need for industry standardisation
A major challenge for wireless service providers is how to manage multi-technology, multi-vendor networks more efficiently. To meet the market demand for standardisation, bodies such as the TeleManagement Forum and the OSS/J initiative were created to define standards for OSS application development.
Wireless service providers have been hard pressed to find a standard set of applications from multiple vendors that will work together, and have often chosen all of their products from a single vendor, resulting in the deployment of proprietary solutions rather than solutions built on industry standards.
To meet the growing demand from wireless service providers requesting interoperable OSS solutions and industry standard applications, four leading independent providers of OSS solutions joined together to form the Service Management Alliance (SMA). The four companies, Argogroup, Casabyte, NetTest and WatchMark-Comnitel, are working together to promote the advancement, awareness and industry collaboration of service management solutions for the benefit of wireless service providers. The SMA is focused on developing a solution to the traditional problem of interoperability between multi-vendor OSS applications in the service assurance space, which are often complex, proprietary or dependent on custom application development. Several other vendors in the wireless service assurance space have recognised the overall benefits of this alliance and have formally submitted requests to join the SMA. Early feedback from the wireless service provider community has been extremely positive, acknowledging the proactive approach taken by several of the leading vendors in the space to promote standards and interoperability and to reduce the cost of ownership.
OSS' changing role
With the tremendous pressure on service providers to improve their service quality levels, reduce customer churn, improve customer satisfaction and increase their profitability, wireless service providers are now taking a different view of the OSS vendor community and recognising the significant role that OSS plays in their business operations.
It's time for wireless service providers to differentiate themselves in a commoditised business environment and they must now rely on value-added services, quality of service, speed to market for new services and a quality customer experience as market differentiators.
There's mounting competition for high-value customers and service providers must be able to address network, service and customer issues and opportunities simultaneously to ensure profitable operations. To achieve acceptable customer retention, service providers must develop new ways of viewing service quality through the eyes of the customer. This is a view that experience tells us is often out of sync with the traditional network-centric view. This transition to a customer-centric approach to managing networks, in a profitable manner, represents a fundamental change for wireless service providers as they roll out 2.5G and 3G services.
Wireless service providers have now recognised that the introduction of service management in wireless networks acts as a catalyst, bringing together network operations, customer care, corporate account management and sales and marketing. It extends their current levels of cooperation into a powerful new paradigm in which all teams are integrated around a common core objective: delivering consistent service quality to customers in a profitable manner.
By implementing OSS solutions that truly offer interoperability, service providers will benefit by improved earnings performance, a reduction of OSS integration and maintenance costs, faster deployment of new services, improved customer satisfaction and loyalty and an overall increase in revenue per customer.

Kieran Moynihan, CTO, WatchMark-Comnitel can be contacted via tel: +353 21 730 6002; e-mail: kieran.moynihan@watchmark-comnitel.com

Margin management is a frequently overlooked element of revenue assurance. David Halliday explains why it should have a more prominent role

The downturn in the telecoms industry has seen several operators go out of business, and for many that have survived the emphasis is now on protecting and growing existing revenues. Revenue losses in the telecoms industry globally have been estimated at 13.7 per cent of turnover (Analysys/Azure, 2003), which equates to billions of Euros. Consequently, operators are slowly but surely facing up to these losses and are starting to combat them by implementing revenue assurance programmes.
Revenue assurance is a term that operators have traditionally associated with causes of loss, such as fraud, end-to-end revenue leakages and the over-payment and undercharging for interconnect capacity. However, there is an equally important part of revenue assurance that has tended to be overlooked: margin management. Managing your margins makes perfect business sense in any industry, but it is particularly relevant in telecoms, which is still navigating its way out of a very difficult time. At a time when managing expenditure is seen as key, an effective margin management strategy offers a significant addressable revenue opportunity.
Margin management is complementary to all other aspects of revenue assurance. Taking a simplified view, margin management can be defined as managing the business opportunity – interconnect accounting makes sure payments are managed effectively, whilst fraud management ensures that none of the money is going astray. All these aspects, managed together, will help run a much more complete end-to-end revenue assurance operation.
Optimising revenues is by no means a new concept, as the established practice of least-cost routing shows. The international telecommunications industry has experienced dramatic changes over recent years with the proliferation of competitive international carriers and capacity resellers, who increasingly offer cheaper rates to deliver traffic to an ever-growing variety of destinations. In order to remain competitive, carriers have needed to be able to negotiate and take advantage of the lowest market rates, whilst minimising disruption from poor quality routings. Failure to make quick decisions on how routing is organised can significantly damage profits.
A business-driven approach can be implemented across a carrier's entire processes and technology base. As with other aspects of revenue assurance, margin management isn't solely a finance or operational function. An organisation's COO will want to ensure that operational efficiency is being achieved, whilst its CFO will want to see the impact on the bottom line. To this end it is important to have the correct processes and technology in place.
Understanding the health of any organisation is critical. Regular third-party health checks of existing systems and procedures are recommended, as even a basic human input error can have a significant impact on margins. Carriers have finite resources, so it is essential that they ensure margins are being fully optimised. Yet until existing procedures are actually reviewed, it is very difficult to plan how to structure a business to achieve maximum revenues.
Once business objectives have been identified, solutions need to be implemented in order to help bring margin management to fruition. Whether a PTT or a wholesale carrier, systems with a slow response time or insufficient reporting capability will risk profitability. Many carriers will claim to have effective systems in place, yet a great deal of them are making do with 'best-effort' in-house solutions. Too many carriers simply try to adapt their existing tools, with limited success. This can also often result in information being replicated, which wastes significant time and money and makes the whole process of margin management clumsier.
Carriers need to bite the bullet and move away from their existing legacy systems in order to make it possible for them to maximise margins and improve profitability. Systems need to focus on automating tasks, improving efficiency and productivity by reducing the need for large groups to manage core data. Systems are now available that will enable carriers to:
• highlight arbitrage opportunities at the lowest dial-code level
• substantially improve overall quality
• validate carrier quality
• automatically generate and apply MML routing changes to a switch, in a fully auditable way
• provide custom alarms and alerts based on a wide range of threshold data
• provide user definable custom reporting with drill-down capabilities
All of this will help carriers maximise margins and improve profitability in the complex world of routing choices, and help guarantee quality of service.
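To make those capabilities concrete, here is a minimal sketch, with invented rate tables and thresholds, of how such a system might flag arbitrage opportunities and raise margin alarms at dial-code level:

# Illustrative sketch only: flagging arbitrage opportunities and raising
# margin alarms at dial-code level. Rates and thresholds are invented.

buy_rates = {                      # termination cost per carrier, EUR/min
    "4420": {"CarrierA": 0.010, "CarrierB": 0.012},
    "4871": {"CarrierA": 0.055, "CarrierC": 0.048},
}
sell_rates = {"4420": 0.014, "4871": 0.050}    # retail rate, EUR/min
ALARM_THRESHOLD = 0.10                          # alarm below 10% margin

for code, carriers in buy_rates.items():
    carrier, cost = min(carriers.items(), key=lambda kv: kv[1])
    margin = (sell_rates[code] - cost) / sell_rates[code]
    print(f"{code}: route via {carrier} at {cost:.3f}, margin {margin:.0%}")
    if margin < ALARM_THRESHOLD:
        print(f"  ALARM: margin on {code} has fallen below {ALARM_THRESHOLD:.0%}")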
Having the correct processes and technology in place will become increasingly important with the emergence of next-generation technology and services. Multimedia and its content, for example, will bring increasingly complicated supply and value chains, meaning that settlement with partners will need to be much more efficient. Moving forward, margin management will evolve into what can be termed trading management. This environment would be much more opportunity driven and proactive, with carriers able to settle with each other in real time, which helps cash flow and cuts down on manual intervention. In the future, C-level executives will have visualisation tools as part of a revenue assurance dashboard so that they can see such transactions first hand.
This, of course, will be achieved much more effectively in an open trading environment, which is not a common characteristic of today's telecoms market. Despite the vast increase in the number of operators, many of the PTTs remain highly influential; consequently the market still has many cliques, with certain carriers only dealing with certain partners. This can be detrimental to customers, who are not necessarily provided with the best service and whose quality of service cannot be guaranteed.
This however may not be an issue in the future as regulators such as OFCOM are looking at ways to develop and encourage a true open telecoms market.  Therefore, it is essential that carriers adopt margin management now, so that they are in the best position to take advantage of new services in the future.

David Halliday, Director of Margin Management at Azure, can be contacted via e-mail: info@azuresolutions.com or via www.azuresolutions.com

Robert Winters looks at the drivers behind Ethernet Service testing and the ways in which QoS can be maximised

Ethernet services are on the increase, with carriers deploying cost effective, high bandwidth services on a worldwide basis. Some Asian countries, such as Japan and Korea, are well ahead of the curve, with low price 100Mbps fibre-to-the-home Ethernet 'best effort' services already on offer for about €30 per month.
However, there is also great emphasis on premium Ethernet products that offer differentiated quality of service (QoS) business packages that command higher prices in return for guaranteed performance and reliability. Europe and North America are catching up with greater focus, initially, on high value, high quality business-oriented Ethernet services.
Despite the ubiquity, high bandwidth and inherent cost advantages of Ethernet, there is still a requirement to effectively guarantee QoS. The usual network-based Service Level Agreement (SLA), offering throughput and latency guarantees, is increasingly being supplemented with IP QoS guarantees as an array of delay sensitive value added applications, such as video, is offered.
For Ethernet service providers and the equipment vendors supplying them, competitive advantage can be much improved by offering a combination of traditional network service quality and IP QoS guarantees. However, this requires a more structured approach to testing in order to increase confidence levels in offering such guarantees.
Drivers for Ethernet Services testing
1. The move from best effort to 'Business Class' Broadband Ethernet Services
Fundamental to the whole issue of QoS testing is the underlying movement from the best effort broadband services model to 'Business Class' premium service offerings. To distinguish between SoHo/SME consumers, who will generally accept a best effort service, and the more demanding market segments covering large enterprise, financial services, healthcare, government and so on, guaranteed service quality parameters are offered, including fixed bandwidth, latency and high priority throughput of traffic.
Coupled with these differentiated services, new revenue models are being derived that include value added business applications such as multicast video, Voice over IP (VoIP), time sensitive web services guarantees, storage area networks and so on. At the end of the day, it's all about the end customer's QoS experience, and many of these applications are very sensitive to delays. It doesn't matter what the service level claims are: if a customer CEO has a multicast video session to fifty branch offices on a Friday afternoon and it is not up to scratch, there'll be trouble.
Therefore, along with traditional service level guarantees like latency and throughput, Ethernet service providers need to give themselves the best chance possible by determining any potential delay sensitive application IP QoS issues.
2. It is not just about packet blast testing anymore
During the late '90s telecom boom, building capacity into the network was key, and test approaches tended to focus on packet blast methods for testing layer 2 services with stateless unidirectional packets. The approach was to fill the service pipe with varying size packets and measure throughput capabilities, latency and so on, according to test standards such as RFC 2544.
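A hedged sketch of that RFC 2544 style of test, assuming a send_frames() hook into real test hardware: for each standard frame size, binary-search for the highest offered load that incurs zero loss.

# RFC 2544-style throughput search (sketch). send_frames() is an assumed
# stand-in for real test equipment and must be supplied.

FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]    # bytes, per RFC 2544

def send_frames(size, load_pct):
    """Offer 'load_pct' per cent of line rate with 'size'-byte frames for a
    trial period; return the observed loss ratio. Placeholder only."""
    raise NotImplementedError

def throughput(size, lo=0.0, hi=100.0, steps=10):
    for _ in range(steps):
        mid = (lo + hi) / 2
        if send_frames(size, mid) == 0.0:
            lo = mid          # no loss: push the load higher
        else:
            hi = mid          # loss seen: back off
    return lo                 # highest zero-loss load found

# for size in FRAME_SIZES: print(size, throughput(size))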
However, these days, pure layer 2 test methods need to be supplemented with end-to-end IP QoS testing that reflects what the real end user is expected to experience. Rather than taking a sum-of-the-parts test view, it is more practical to emulate and analyse the performance of individual layer 2 services with their associated bandwidth and mixed priority settings. The IP flows that run over those services also require verification, using fully stateful real applications that represent the most common Internet mix: web, e-mail, multicast, streaming and so on.
For example, the knock-on effects of dropped packets at layer 2 can result in a large decrease in effective bandwidth caused by TCP retransmits. However, badly specified application servers can also cause TCP retransmits. The issue is how to test and effectively isolate the problem source.
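The scale of that knock-on effect can be illustrated with the well-known Mathis et al. approximation for steady-state TCP throughput, roughly (MSS/RTT) x 1.22/sqrt(loss); the MSS and RTT values here are arbitrary assumptions:

import math

# Mathis et al. approximation: TCP goodput ~ (MSS/RTT) * 1.22/sqrt(loss).
def tcp_goodput_bps(mss_bytes=1460, rtt_s=0.05, loss=0.0001):
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss))

print(f"{tcp_goodput_bps(loss=0.0001) / 1e6:.1f} Mbps at 0.01% loss")   # ~28.5
print(f"{tcp_goodput_bps(loss=0.01) / 1e6:.1f} Mbps at 1% loss")        # ~2.8

A hundred-fold increase in packet loss costs roughly a ten-fold drop in effective bandwidth, which is why a small layer 2 fault can look like a major service failure.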
3. Understanding service limitations – testing for QoS boundaries
Strictly speaking, Ethernet inherently offers Class of Service (CoS)-based guarantees through 802.1Q VLAN services with bandwidth and priority (802.1p) traffic settings, as opposed to the specific QoS settings that are more prevalent, for example, in the traditional (and expensive) ATM world. This is an important distinction that necessitates a view of what defines 'carrier grade Ethernet QoS'.
In order to guarantee carrier grade Ethernet QoS, providers need to be confident that each service, each user and each IP application flow using that service are thoroughly tested for quality. Therefore, a pragmatic approach to testing is required whereby corporate Ethernet service and application flow models can be quickly built, then emulated and analysed for quality issues throughout the network, under test with varying load and network status conditions. Using this test method, QoS boundaries can be realistically determined for both network services and application layers.
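One way to realise that method, sketched here with an assumed measure() hook into a flow-emulation tester and invented delay targets, is to step the offered load and record where each flow class first breaches its target:

# QoS boundary search (sketch). measure() is an assumed hook into a
# flow-emulation tester; the delay targets are invented.

TARGETS = {"voip": 0.150, "video": 0.300, "web": 1.000}   # max delay, seconds

def measure(flow_class, load_pct):
    """Emulate the service/flow mix at 'load_pct' of capacity and return
    the observed delay for 'flow_class'. Placeholder only."""
    raise NotImplementedError

def qos_boundary(flow_class, step=5):
    for load in range(step, 101, step):
        if measure(flow_class, load) > TARGETS[flow_class]:
            return load - step        # last load that still met the target
    return 100                        # target met even at full load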
4. Convergence – Network or Application quality issues?
The overall trend for convergence of telecom and IT departments in large enterprise is blurring the distinction between pure network layer testing requirements for Ethernet services and the quality issues of applications utilising those services. 
Ethernet service providers need a structured test environment, not only for pre-turn up and provisioning test but also for post-deployment capacity planning and trouble shooting test purposes.
For example, if a trouble ticket is generated from a large enterprise customer that necessitates an on site visit, then problems need to be narrowed down very quickly as the time and money clock is ticking for both parties.
Despite the opportunity for Ethernet service providers to offer sophisticated value added applications, the downside is that poorly configured application servers can have serious network layer side effects. Most of these servers are outside the control of the service provider, who can end up being unfairly blamed for problems such as decreased bandwidth.
An integrated and pragmatic test approach is required that first determines whether the network is the issue and, if not, can then effectively prove whether the applications feeding into the network are at fault.
Test systems need to provide assessment capabilities as to whether dropped bandwidth is caused by packet loss on a layer 2 Ethernet Switch or caused by a high level of TCP retransmits due to a poorly configured web applications server.
In this instance, it is important to be able to decouple the application servers and instead emulate IP flows using a test approach that analyses whether the network is at fault. If not, the test system should then be turned on the actual application servers themselves and potential quality issues identified.
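The decision logic of that two-step approach is simple enough to state directly; the two probe functions here are assumed test-harness hooks, not real APIs:

# Fault isolation sketch: test the network with emulated flows first,
# then turn the tester on the real application servers.

def isolate_fault(emulated_flows_ok, app_servers_ok):
    """emulated_flows_ok(): True if emulated IP flows meet their targets.
    app_servers_ok(): True if the real servers meet the same targets."""
    if not emulated_flows_ok():
        return "network layer at fault"
    if not app_servers_ok():
        return "application servers at fault"
    return "no fault reproduced"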
Customer expectations
High speed Ethernet services testing demands are intensifying as quality of service (QoS) guarantees reach a greater level of sophistication. End customer expectations are heightened, with the introduction of an array of delay sensitive value added applications such as video and VoIP. In order to improve competitive advantage, the traditional SLA is being supplemented with IP QoS guarantees. Therefore, Ethernet service providers also need to understand their overall QoS boundaries in a converged network and application environment within which premium level business class services can be guaranteed. Post deployment, trouble-shooting needs to quickly determine whether an Ethernet service issue is causing a problem or whether the applications using the service are at fault. Having an integrated network and application test approach provides the necessary test environment to meet these requirements.                 

Robert Winters, Chief Marketing Officer and Co-founder, Shenick Network Systems Limited, can be contacted via tel: +353 12367002; e-mail: robert.winters@shenick.com; www.shenick.com

Lucent Technologies INS President Janet Davidson tells Priscilla Awde why OSSs are crucial both to network transformation and to providing the best service to customers

Hidden behind the economic downturn of the past several years, a quiet revolution has been going on in the communications industry. It is one driven less by telcos and operators and more by consumers: end users are demanding more independence, flexibility, speed and functionality. Consumers are flexing their muscles and, driven by greater choice and mobility, they are putting pressure on operators to deliver.
People want access to multimedia services in near real time regardless of where they are, what kind of device they are using and whether they are using it for business or pleasure. Indeed – whether terminating on fixed or mobile devices, at home or in offices – convergent, multimedia content and applications are forcing changes both in the type and organisation of service providers, and the communications networks themselves.
Lucent Technologies is committed to helping operators deliver convergent services seamlessly and efficiently over fully integrated IT platforms. Janet Davidson, Lucent's President of Integrated Network Solutions and their OSS software unit, explains: "To support new business processes, service providers need to completely link their business processes and their operations software environments.
"In the old world, dominated by POTs and ISDN, the number and variety of services were dictated and constrained by network technologies and limited supply. In today's brave new communications world, Davidson suggests, it is the networks and operators which must adapt to deliver high quality service levels to an even more demanding, better informed and more dynamic needs of the customer base.
Telcos are restructuring and becoming more streamlined, both out of economic necessity, and to serve rapidly changing markets. Lured by the benefits of co-ordinating front and back office functions, integrating disparate systems under a seamless IT umbrella, and using automation to save money and become more efficient, companies are turning to enterprise-wide OSSs. Davidson's goal is to convert telcos into organisations in which customer needs are a priority.
Working together, vendors, operators and content owners are creating new networks, software, systems and solutions designed to give customers what they want and when they want it. Old arguments about whether operators' core business is bandwidth or value added services have largely been won in favour of the latter.
Needing to keep customers and increase their lifetime value, telcos are marrying state-of-the-art networks with a portfolio of appropriately priced applications governed by service level agreements (SLAs). They are reducing churn by bundling voice/data services and creating new convergent applications and differentiated service levels. Managed services, hosted, bundled multimedia applications, value added and premium services are replacing basic connectivity, flat fees and fixed internet access.
Network convergence
"The challenges for service providers are to tailor their business processes to support the new business models and link them to their OSS," says Davidson. "Convergence is, in the end, the dynamic interaction of end users, networks and service providers, enabled by technology in the service of personal empowerment at work, at home and on the move. This dynamic is driving not just the transformation of networks but service provider business models."
Convergent networks, technologies and services create opportunities for new business models that exploit a telco's resident expertise in network provisioning and customer relationship management (CRM). In the world of converged services, the value lies in making things easy for customers. It lies in convenience, simplicity, location, connectivity and the transparent integration and delivery of information services.
"The challenge for service providers is to tune and/or develop new business processes to support these new ventures.  To do that profitably, service providers need to completely link their business processes and operations software environments," Davidson explains.
The new generation of OSS and business solutions software is designed precisely to support the necessary convergence and internal re-organisation. Davidson suggests there is a shift in the enabling technologies to control and measure how networks perform. The emphasis is less on simple connectivity and more on service level agreements: a move from complex backbone network technologies to high-speed, broadband fibre access systems.
Service providers' business models are moving towards measuring how customers are served; towards a customer-oriented view of the network. It is a change that Davidson believes will not happen piecemeal. "Service providers need to adopt a systematic approach to network builds, service layering and service creation and assurance," she says. "In our view, the foundation is an MPLS core/optical transport network capable of delivering all end-user applications and servicing all end-user devices. The MPLS/optical core unifies all current network infrastructures: wireless/wireline, voice and data, metro optical, and cable. On that foundation, it is relatively easy for service providers to deliver mobile voice and high-speed data, VoIP, and legacy voice service and/or even cable service. Our Service Intelligent Architecture gives service providers a systematic way to capitalise on opportunities at each layer of the network. It also includes Next-Gen OSS to support both legacy and new network services and elements as they are deployed.
Barrier free collaboration
"The ultimate goals are simplicity in network architecture and OSS, and flexibility in creating, delivering and managing services."
Davidson defines such 'barrier free' communications as the transparent, seamless exchange of information between service providers, network technology suppliers, content and application suppliers, and end users. The aims are to support consumer choice in the types of applications, delivery methods and billing, independent of time or place, while simplifying users' experience and gaining maximum service awareness to improve take-up rates.
Success demands seamless, near real time data feedback shared amongst all partners. Increasingly complex relationships between operators, their suppliers and technology partners must be managed in new ways that support diversity and flexibility. The end result for service providers, Davidson suggests, will be new, differentiated revenue streams, richer consumer relationships and better management of capital and operational expenditure.
Telcos realise the benefits of highly automated barrier free communications systems in better flow- through, improved quality procedures, lower costs, shorter product lifecycles and faster times to market. In the process, they develop more flexible, integrated and responsive business systems designed to improve their ability to respond to fast moving competitive markets.
"The operational task is to govern priorities, organisational processes and resources to make sure the company is working on the things that drive business value," Davidson explains.
"Flow through rates must continue to ascend and also support session-based services. Self-service and selection must be measured and improved with end-to-end process ownerships.
Customer and service centric view
"Exploiting converged service opportunities requires that operators assure the quality of these services. Increasingly, providers will need to raise the service level bar to maintain differentiation and offer tiered services to extract the maximum value per segment. The value of managing service quality is immense."
The new service-centric view of the business makes it faster and easier to segment customers and offer differentiated service levels. Yet segmentation depends on operators knowing and understanding user patterns and behaviour. Real time network and session visibility are essential if telcos are to model, poll and predict service quality from customers' perspectives.
"Software needs to help support the free exchange of service quality and business optimisation information, and automatically distribute and monitor actionable events to all critical stakeholders," says Davidson.
Switching a business to a more holistic and service centric model and, in the process making it more agile and proactive, is enabled by tightly integrated IT platforms. Yet measuring and managing operations from a service perspective in complex multi-vendor networks is complicated, and made more so by the move to IP transmission. Telcos are simultaneously introducing new services and access options to an explosive number of feature-rich end user devices. For most operators this is an on-going, expensive and sometimes frustrating exercise, which, in the short term, may slow down new product launches.
Ultimately, believes Davidson: "The goals of agile integration are to reduce this integration tax and speed software service delivery without breaking anything."
Agile integration enhances profitability
It is the task of OSS software vendors to make the integration process as painless and as productive as possible. It is up to vendors to design the enterprise-wide solutions that will help operators manage their enormous and ever changing product portfolio and tariffs.
Davidson's recipe for vendors includes: standards involvement and advancement; service-oriented architecture (SOA) software design; improved software delivery and development lifecycles; and effective content and technology partnerships. All of these are essential in designing the requisite software to allow operators to move on from their current situation, in which the pervading 'spaghetti junction' of bolted-on legacy systems inhibits fast reaction to either market changes or customer demand. Integrated IT platforms are essential for operators wanting to succeed in the new communications market. Soon presence-enabled conferencing, personalised ring-back tones, speech activated services, multimedia entertainment and productivity applications will infiltrate all aspects of work and lifestyles.
"Users will be able to find, order and experience content quickly and simply. Users will hear, see, talk, purchase, do and create from a wide range of access means and applications with the aid of converged services," explains Davidson. "In the new network environment, services revolve around users, not users around the services.
"Networks will be fluid, responsive, dynamic. They will adapt to the changing location and preferences of an end user. In this environment, providers need to ensure robust, flexible services by means of progressive security, QOS-enforced IP routes, flexible pricing and pricing options and policy based end user service selections.
Getting to this point depends on operators evolving from the bureaucratic entities many are, into modern, flexible businesses in which units seamlessly interact. Wholesale and retail billing systems must be linked with marketing and revenue assurance amongst others, and be fast enough to allow operators to change tariffs quickly. Information held in disparate databases should be accessible to authorised staff who can interrogate it to discover patterns of customer behaviour and/or the success of different services and change parameters as necessary.
Automating business processes
Successful implementations of OSS depend on close working relationships between vendor and service provider. Working with one incumbent operator, Lucent deployed OSS software that increased provisioning efficiencies by more than 50 per cent in the provider's large multi-vendor, multi-platform network with more than a dozen regional management centres delivering a variety of services to numerous customers.
Using Lucent's software to automate their subscriber management and activation processes to rapidly meet demand, another mobile operator now activates upwards of 600,000 orders (adds and changes) daily with 95 per cent flow-through.
Network traffic patterning products have helped operators improve network operations and revenue leakage analysis as well as reduce disputes with other carriers/partners and significantly increase call completion rates.
"We are helping service providers pioneer dual and triple mode access authorisation for 3G data and WiFi," says Davidson. "We have recently begun field-testing IP configuration software that can automatically analyse the capacity of and re-route the IP paths based on operations needs."
Ultimately, suggests Davidson, operators who understand the market dynamics and offer high quality services for a range of access devices will keep customers and increase ARPU.
In the triple play of home entertainment services (voice, video and data), operators must know and have more control over their networks and their subscribers if they are to increase performance, customer loyalty and take-up rates. Bandwidth hungry business applications depend on fast networks and high application throughput.
Whichever markets they sell into, guaranteeing service levels is as important for all operators as good customer relations and the quantity and quality of applications. 
"Ultimately service providers must anticipate the needs of customers and position themselves as the supplier of 'first convenience'," concludes Davidson. "The value of communications services will be measured by how well they satisfy and empower end users at home, at work and on the move."

Priscilla Awde is a freelance communications writer

www.lucent.com/OSS

Operators must learn how to exploit the technologies that will drive the WLAN phenomenon, suggests Kevin Dorton

Mobile providers worldwide are evaluating public wireless LAN (WLAN) networks as the newest way to provide communications services to consumers on the go. With so many consumers around the world eager to download content services, delivering content through mobile devices has proven to be a viable means of generating profits.
The number of hotspots in Western Europe is expected to grow from 1,500 at the end of 2002 to 32,500 in 2007, generating total revenue of $1.4 billion over a period of five years (figures from analyst firm, IDC).  Whether it is business users working remotely or individual consumers looking to arrange a night out with friends or seek information about the latest films before visiting the cinema, demand is pushing the deployment of hotspots across this region.
WLAN service providers are in an excellent position to exploit this positive environment. However, in order for wireless service providers to understand how to bill and collect the revenue for WLAN services, they must first know how to exploit the technologies that will power a successful WLAN programme. When to invest in new or upgraded systems and when to leverage existing solutions are key issues for service providers.
The myth that operators need to replace their existing billing systems to support billing for WLAN is dispelled by the simple fact that WLAN billing is handled in essentially the same way as other mobile data services are rated and billed. Simply put, WLAN is just another way to access data. Operators' systems that can handle billing for mobile data can handle billing for WLAN.
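A minimal sketch of that point, with invented field names and tariffs: the access bearer is just another attribute on the usage record, so the same rating path serves GPRS, UMTS or WLAN traffic alike.

# Rating sketch: WLAN is just another value of the bearer field.
# Field names, numbers and tariffs are invented for illustration.

TARIFF_PER_MB = {"default": 0.02}     # EUR per MB, whatever the bearer

def rate(record):
    price = TARIFF_PER_MB.get(record["bearer"], TARIFF_PER_MB["default"])
    return record["megabytes"] * price

record = {"msisdn": "447700900123", "bearer": "wlan", "megabytes": 12.5}
print(f"Charge: EUR {rate(record):.2f}")   # same code path as gprs/umts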
As the core of the billing system should handle billing for basic WLAN services, operators have the choice to invest in complementary software to handle more complex WLAN offerings. Roaming and revenue sharing solutions not only enhance the WLAN experience for consumers, but can also be used as an opportunity to grow service providers' revenues and preserve their brand as users roam from hotspot to hotspot. 
Mobile users are growing more familiar with accessing customer service when and where they choose – including from small PDA devices that are WLAN connected. Extending online self-care applications to the travelling PDA user will enable wireless providers to take advantage of reduced customer service costs and allow them to deliver a timely self-care experience that increases customer satisfaction.
Reducing customer care costs by offering self-care applications has proven to deliver a significant return on investment for service providers. Gartner estimates that it costs an operator on average $5.50 each time a customer contacts a call centre, compared with just 24 cents for electronic customer self-care. In addition, giving consumers the opportunity to update their account information or change services at their own convenience also fuels loyalty to the provider, thus driving customer retention.
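A worked example of those Gartner figures, with an invented contact volume and migration rate, shows how quickly the saving adds up:

# Self-care savings, using the Gartner per-contact costs quoted above.
# Contact volume and migration rate are invented assumptions.

contacts_per_month = 500_000
migrated_to_self_care = 0.40      # assume 40% of contacts move online

CALL_CENTRE_COST = 5.50           # USD per contact (Gartner)
SELF_CARE_COST = 0.24             # USD per contact (Gartner)

saving = contacts_per_month * migrated_to_self_care * (CALL_CENTRE_COST - SELF_CARE_COST)
print(f"Monthly saving: ${saving:,.0f}")   # $1,052,000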
Investing for the future
Another way to profit from offering WLAN services is to partner with venues that can offer hotspots and with third parties that have the content consumers want. These types of relationships require revenue sharing agreements so that the parties involved are able to set up clear revenue sharing parameters and split profits among the players. If the service provider uses an automated revenue sharing solution, they control the management of the partnerships and the reconciliation of the revenues, putting the service provider in charge of the revenue flow.
Hot spot venues, aggregators and network operators/WISPs will all need to be able to share revenues. They also need to be able to make settlements with content, application and merchant partners. The complexities grow as partnerships become more diverse and the number of partners increases, so it pays to be in charge of the revenue flow.
Organic growth
As WLAN services grow in popularity, organic growth will permeate all business models and payment methods. New and casual users will opt to prepay for WLAN services if the choice is presented to them, as the prepay method carries a much lower perceived risk. Since WLAN is a relatively new service, many new users will want to try the service through several different providers before signing up for a subscription. In addition, the individual WLAN provider typically has limited network coverage, leading consumers to find it easier to pay as they go rather than sign up with multiple vendors in order to ensure they always have WLAN coverage whenever they need it.
However, many legacy prepaid billing systems can't handle service authorisation and session management for multiple users accessing data, content and voice services. Existing billing systems can usually handle the simple 'pay in advance' single session authorisation used in most hotspots. For WLAN services that complement 3G data and content services, however, real-time multi-session authorisation systems must track simultaneous user sessions and determine when to stop or re-authorise a user session.
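A minimal sketch of what such multi-session tracking might look like, with an invented quota model and in-memory state standing in for a real prepaid platform:

# Multi-session prepaid authorisation (sketch). The quota model and
# data structures are assumptions, not a real platform's API.

QUOTA = 0.50                   # value deducted per session per interval
balances = {"447700900123": 5.00}
sessions = {}                  # user -> list of open session ids

def authorise(user, session_id):
    """Admit a new session only if the balance covers all open sessions."""
    needed = QUOTA * (len(sessions.get(user, [])) + 1)
    if balances.get(user, 0.0) >= needed:
        sessions.setdefault(user, []).append(session_id)
        return True
    return False

def reauthorise(user):
    """Run each quota interval: charge every open session, and stop any
    that the remaining balance can no longer cover."""
    for sid in list(sessions.get(user, [])):
        if balances[user] >= QUOTA:
            balances[user] -= QUOTA
        else:
            sessions[user].remove(sid)     # out of credit: stop session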
In a world where staying connected no matter where you go is important – especially for the business traveller – a roaming solution is essential. The user should be able to access the service in a familiar way, and service charges should be added to the regular bill or debited from a pre-pay account. Therefore, as customers use different hotspots linked to different wireless providers, transaction information must be sent between the providers so they can appropriately bill the end user.
The WLAN environment 
Roaming is particularly important in the WLAN environment, as coverage may extend only a few hundred metres. UK consultancy BWCS recently stated that wireless service providers risk losing up to 30 per cent of potential hot spot revenues if there is no roaming agreement in place. So, while the industry awaits a universal standard for WLAN roaming records to be established, accepted and implemented, providers must have WLAN roaming solutions that allow them to export and import usage records for multiple roaming interfaces, some of which are proprietary. The result should be a seamless service and a single bill for consumers, regardless of whether the hotspots used are in a coffee shop, airport or hotel at different ends of the region.
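Pending that universal standard, import code has to normalise each partner's proprietary record format into one internal shape. Both partner formats in this sketch are invented:

# Normalising proprietary roaming records (sketch). Both partner
# formats, and the internal shape, are invented for illustration.

def from_partner_a(rec):       # e.g. {"user": ..., "mins": ..., "site": ...}
    return {"subscriber": rec["user"], "minutes": rec["mins"],
            "hotspot": rec["site"]}

def from_partner_b(rec):       # e.g. {"msisdn": ..., "secs": ..., "ap": ...}
    return {"subscriber": rec["msisdn"], "minutes": rec["secs"] / 60,
            "hotspot": rec["ap"]}

IMPORTERS = {"partnerA": from_partner_a, "partnerB": from_partner_b}

def import_usage(source, records):
    """Convert a batch of partner records into the internal format."""
    return [IMPORTERS[source](r) for r in records]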
The keys to capturing revenue from WLAN services and cutting customer care costs are in the effective use of billing and customer care systems. As hotspots continue to populate airports, coffee shops and other public arenas, service providers can reap the benefits by re-evaluating current systems for maximum return on recent and future investments.     

Kevin Dorton, CSG Systems, can be contacted via e-mail: Kevin_dorton@csgsystems.com; www.csgsystems.com

While operators focus on multimedia and IP-based content as future revenue generators, it is important that they do not forget the quality of their core product. Alun Lewis explains

The last decade has been a hectic time for the telecommunications industry. A combination of new technologies and radical regulatory change – combined with the financial quagmire of recent years – has challenged the ability of operators to stay focused and on track. It's now becoming clear that these distractions have impacted heavily on the core application that our industry has grown rich on: traditional voice services.
Even the familiar acronym POTS – Plain Old Telephone Service – illustrates the relatively low regard shown for voice services when compared with the much-hyped potential of multimedia content and IP-based applications. Despite this perception, the global market for voice – according to international research consultancy Ovum – is expected to continue growing from $784 billion last year to around $1,000 billion by 2007, so there's still considerable market share to fight for.
One of the key problems facing operators, irrespective of whether they're running fixed, mobile or IP networks, is making sure that the quality of the voice services that they offer is acceptable to the markets that they're targeting. While voice generally used to be seen as a 'one size fits all' service – sensible enough when everything was being carried pretty much by one operator over one network – the current cat's cradle of complexity involving interconnections between different carrier technologies and service providers creates major headaches for all involved.
A business, for example, may be happy to pay a lower charge for lower quality voice services for internal company use – but more than happy to pay a premium for external calls to customers. Similarly, the youth market for mobile services may be attracted away from SMS communication to speech interaction at slightly higher tariffs – and be happy for a lower grade of service than the majority of other mobile users.
In particular, the impact of VoIP and the steady growth of supporting IP-based access technologies such as DSL and WLAN are already forcing service providers of all sizes and types into a re-examination of the whole voice services market. Additionally, for operators with more traditional networks, it's also essential that they understand exactly where in the infrastructure to make the right investments that will generate ARPU or market differentiators through enhanced or managed voice quality.
But how exactly do you measure voice quality? Unlike the hard technical parameters used to measure the efficiency of data networks – packet loss, latency and so on – perceived voice quality also depends on the hardware and software of that most complex of mechanisms, the human brain and nervous system. The telecoms value chain is complex enough when it's just based on copper and silicon – add in the human mouth and ear and that complexity increases by an order of magnitude.
The good news is that part of the ITU has been working on this problem for a number of years, coordinating research to develop standardised methods that can be applied across multiple networks and different technologies. Their approach is based on a concept known as PESQ – Perceptual Evaluation of Speech Quality – which measures end-to-end voice quality based on a database of subjective listeners' experiences of call quality defined by Mean Opinion Scores (MOS):

Listening quality MOS scale
Score    Quality of the speech
5        Excellent
4        Good
3        Fair
2        Poor
1        Bad
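The raw score produced by a PESQ algorithm does not sit directly on this scale; ITU-T P.862.1 defines a logistic mapping from raw PESQ output onto a 'listening quality' MOS (MOS-LQO). The short Python sketch below applies that mapping – the constants are quoted from memory here, so verify them against the Recommendation before relying on the output:

import math

# Map a raw PESQ score onto the listening-quality MOS scale above,
# using the logistic function standardised in ITU-T P.862.1.
# Constants quoted from memory -- verify against the Recommendation.
def pesq_to_mos_lqo(raw_pesq: float) -> float:
    return 0.999 + 4.0 / (1.0 + math.exp(-1.4945 * raw_pesq + 4.6607))

for raw in (1.0, 2.5, 4.0, 4.5):
    print(f"raw PESQ {raw:.1f} -> MOS-LQO {pesq_to_mos_lqo(raw):.2f}")

On this mapping the maximum raw score of 4.5 yields a MOS-LQO of roughly 4.5, close to the practical ceiling of a toll-quality call.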
While it's obviously possible for an operator to carry out regular customer surveys of voice quality, the optimum solution is clearly to automate the whole process through software algorithms, so that network performance can be monitored continuously. In effect, these algorithms 'listen in' to calls and then report back to other management systems.

Recommended standards
At the end of 2003, the ITU announced one of the first of these recommended standards, based on a non-intrusive algorithm for use in PSTN and mobile networks, following intensive evaluation of different solutions from a number of different companies.
Iain Wood of BT spin-off Psytechnics, one of the winning solution providers and a specialist in voice quality metrics, takes up the story: "The mobile market in particular is going to have to start addressing the issue of voice quality very seriously. If operators are to keep pushing ARPU, it's vital that they have systems in place to monitor the customer's end experience of voice services. Was a call ended prematurely because of voice quality problems, or is a customer churning because of poor reception? The kind of data you get from the usual QoS systems doesn't take account of this human-centric issue, even though it's vital to understanding the overall customer experience.
"Psytechnics has recently been looking at the performance of a number of mobile networks both in the UK and abroad and has found some interesting results. One of the most significant was that there was no discernable difference in speech quality between the most expensive and the cheapest handsets - despite a price differential of hundreds of pounds. Given that for many adult users voice easily remains their primary mobile service, does this mean that operators and handset manufacturers are putting too much effort and investment into unused – and ultimately unprofitable – features and capabilities?
"Similar issues apply when you consider the attempts that some mobile operators around the world are making to compete directly with the fixed line  market through homezone tariffing. With a typical UK PSTN line providing a MOS of 4.3, UK GSM network typically operate at between 2.9 and 4.1, and this issue of perceived voice quality will make it unlikely that high spending business users will migrate unless service levels are improved.
"Even if the customer's getting a full signal strength bar, there are a host of other factors that degrade speech quality and an understanding of how voice quality is performing is an invaluable tool in how to best engineer the value chain and all its underlying components.
"This technology also has an equally valuable role to play for the end customer, especially the large corporate thats unsure whether it's receiving service value for its money. While it will certainly have strict Service Level Agreements in place to monitor its data networks – along with stringent penalties if these aren't met – both businesses are effectively flying in the dark when it comes to measuring speech quality. Because of this, we're currently working on an application that can be downloaded onto the mobile device itself. This can then use SMS to report back to the operator – or another party – to provide MOS feedback, as well as displaying voice quality metrics to the actual user."
Important role to play
While voice quality metrics have an important role to play with mobile service providers, they have an absolutely vital one when it comes to supporting the wide-scale roll out of VoIP services in both public and enterprise networks. The price and performance benefits of moving to an all-IP environment might be well understood, but customers are often rightly sceptical of its suitability for carrying mission critical voice services. The whole industry is full of anecdotes about disastrous VoIP implementations that have failed to deliver and caused nightmares for customer and vendor alike.
For companies like Psytechnics, this presents another market opportunity, as Iain Wood explains.
"IP was designed originally for computer-to-computer interaction and unfortunately humans are a lot more critical than machines when it comes to carrying on conversations. In trying to replicate the behaviour of the PSTN in a packet environment, there are all sorts of subtle perceptual cues that have to be taken into account if we are to provide a truly satisfying speech experience.
"Unfortunately, the legacy management tools that came with the IP world are incapable of supporting voice services – as the industry found to its cost in the past. Fortunately, we now have a range of techniques – similar to those now being deployed in the circuit switched environment – that can be deployed to ensure voice quality remains appropriate for the context that it's used in.
"What's important to remember, however, is that delivering voice quality in an IP environment must be an iterative process. You might start by testing a LAN for its suitability even before a VoIP solution is deployed. That can be done by running simulated VoIP traffic over it and monitoring performance at strategic locations. Then, during commissioning, it's useful to carry out end-to-end testing to fine tune the configurations and get it ready for live use. After thats done, regular monitoring remains essential, particularly given the constantly changing topology of most IT networks and the natural variations in traffic flow that affect any real world environment. In some of these scenarios, it's appropriate to use testing tools; in others, constant monitoring is best.
"We've found two ways into this rapidly growing market. Firstly through supporting system vendors by allowing them to incorporate our technology into their management systems, and secondly through providing a consultancy service for service providers, system integrators and end customers."
With the ITU about to announce standardised approaches for VoIP similar to those it has already defined for mobile and PSTN services, voice quality finally looks like becoming definable – rather than infinitely debatable. Maybe it's time to update the old saying of the billing industry, 'if you can't bill for it, don't offer it,' to 'if you can't measure it, don't try to sell it'...

Alun Lewis is a telecommunications writer and  consultant. alunlewis@compuserve.com

http://www.psytechnics.com/

If providers want broadband customer loyalty, they must prove they can be trusted right from the start, says Kieran Moynihan

The transformation of the world's wireline networks is already well underway in many countries, and the ability to deliver broadband content and entertainment services to a wide variety of non-telephony devices within the domestic environment is now centre stage within the strategic planning departments of most fixed line operators.
Even straightforward voice services are about to be reborn in fresh guises, using new protocols like SIP – along with simple presence and location information – to widen the depth and breadth of traditional service portfolios.
However, in this headlong rush towards new revenue streams, it's vitally important that service providers don't lose sight of the actual customer sitting at the end of an ever-lengthening value chain. It's no good just pumping more and more bits at customers in the hope that they'll pay on the basis of sheer volume alone. For the customer – already far more discerning about the range of possibilities on tap – other criteria matter too, particularly the overall quality of the total service experience they get from their provider. The raw truth is that if you want them to come on the broadband journey with you, you're going to have to prove to them – right from the start – that you're a trustworthy partner.
For some fixed service providers, negotiating the service quality management route in the broadband environment could turn out to be a problematic journey. Managing speech quality in the circuit-switched world, while not simple, is at least a well-understood and standardised discipline that has been practised for decades. By contrast, the move into the broadband IP world necessarily involves far higher-value transactions for content services from the customer's perspective – along with increased exposure to uncertainty and risk for the provider. Before asking the customer to transfer all their communications services – including broadcast – onto the shoulders of one single provider, we'd better make sure that we're up to the job.
Fortunately, one part of our industry – the mobile sector – has already been here before, and there are valuable lessons for the fixed operator to learn from that experience.
Consider the essentials of the wired broadband mix:
*  A variety of different access technologies, each with their own particular behaviours and physical limits
*  A need to support a multiplicity of services in a consistent manner
*  Great variation in the actual terminal devices being used by the customer
*  A potential separation between ownership of the actual physical network infrastructure and the providers of the services that run on it
*  Complex and often highly distributed value chains, involving third parties and intricate settlement procedures
*  A fickle and often fragmented customer base, prepared to churn at a moment's notice
*  Customers who will use the network for both business and personal purposes – with appropriate levels of security and reliability
Sounds a bit like the mobile environment – but with one important difference. Customer expectations in the mobile space have largely kept pace with the network's actual ability to deliver on them. In comparison, consumers of broadband services will usually approach the relationship with their fixed line service provider with already high expectations, based on years of consistent performance. To them it's irrelevant that something called VoIP is letting them talk cheaply to friends on the other side of the country – they just expect it to work.
In this context, the main lesson that can be transferred from the wireless community to their fixed line cousins is the importance of appreciating the totality of the customer experience – and that means taking a far more holistic approach to the systems that support service quality. In the steady-state world of circuit-switched services, the data needed to monitor service quality was pretty basic, as only a handful of performance parameters actually impacted on the user's experience: did the call connect, did it complete, was it successfully billed for, and so on?
By contrast, there are a host of different issues that can impact on the broadband experience and, just like the mobile environment, not all of these will be under the direct control of either the service provider or the network owner. In taking a leaf from the mobile book, fixed line operators are going to have to start seeing things from the customers' – not the network's – perspective, and this involves a shift in focus to what in the mobile arena is called customer-centric service management.
That means pulling together all the currently separate and discrete functions of network operations, customer care, billing, and enterprise account and product management, and integrating them in ways that add value to the customer and to the whole business, not just to narrow departmental responsibilities. With the right kind of service quality management system in place, faults that have an immediate impact on customer service can be quickly identified and resolved, while triage can readily be performed to prioritise less urgent problems. Internal Service Level Agreements (SLAs) can also be defined and monitored across the organisation, leading to greater coordination between all the different departments involved in creating, deploying, marketing and billing new services.
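As a toy illustration of that triage idea, the Python fragment below ranks open faults by customer impact weighed against the time remaining before an internal SLA is breached, rather than by the order in which the network alarms happened to arrive. All field names and figures are invented:

# Hypothetical triage sketch: prioritise faults by customer impact
# per minute of SLA headroom remaining. All data here is invented.
faults = [
    {"id": "F1", "service": "VoIP",  "customers_hit": 1200, "sla_mins_left": 30},
    {"id": "F2", "service": "email", "customers_hit": 40,   "sla_mins_left": 240},
    {"id": "F3", "service": "IPTV",  "customers_hit": 5000, "sla_mins_left": 15},
]

def urgency(fault: dict) -> float:
    # Impact rises with affected customers, falls with SLA headroom.
    return fault["customers_hit"] / fault["sla_mins_left"]

for fault in sorted(faults, key=urgency, reverse=True):
    print(fault["id"], fault["service"], f"urgency={urgency(fault):.1f}")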
Given the critical point we're at in the rollout of broadband services, there's a strong argument for adapting that old adage 'If you can't bill for it – don't offer it' into 'If you can't guarantee its quality – don't offer it'. Fortunately, help – and real-world experience – is at hand.

Kieran Moynihan is General Manager, Service Management Division, Vallent Corporation
www.vallent.co.uk

Just as was predicted long ago, much of the technology underpinning today's telecoms networks has become a commodity. As a result, service providers now need to concentrate instead on finding the most efficient ways to set up, manage and control the myriad processes and events that take place on top of those enabling technologies.

Kim Perdikou is the Executive Vice President and General Manager of Juniper Networks' Infrastructure Product Group. She tells European Communications how her company can help carriers and service providers meet the challenges that they face today

EC: What challenges are your customers facing in the evolution of their business models?

NEC's investment in NetCracker as the global brand for all its telecoms software and services assets positions the company as a multi-billion dollar entity operating across the entire range of OSS/BSS activity. Andrew Feinberg, President and CEO of NetCracker, talks to Keith Dyer, Editor of European Communications, about how the company's recent physical and strategic expansion is positioned to meet the transformational needs of today's Tier 1 global operators

    
