Features

ARQIVA/EXCEL
Mobile phone use plays a key role in most business activity – and never more so than when people are away at an all-day event. At the ExCeL London International Conference and Exhibition Centre, everyone on site – staff, exhibitors and business visitors – is working remotely, using a variety of equipment including mobile phones, laptops and PDAs.
Timo Bayford, Head of IT at ExCeL London, explains: “When the first part of the centre opened in 2000, it was evident that mobile network coverage was a problem: there was none. And in a situation where you have lots of exhibitors needing to connect computers or wireless devices, as well as visitors who need to keep in touch while they are at an event, that’s a big problem.”
The lack of signal was caused by the building’s strengthened, column-free structure, which blocked signals from external macro cells. This was not only potentially problematic for visitors and exhibitors, it was also causing operational difficulties for ExCeL London staff.
ExCeL London invited Arqiva to provide it with an InBuilding system offering high-quality, stable coverage.
Arqiva supplied the centre with an active Distributed Antenna System (DAS) comprising 40 antenna points. These were distributed evenly throughout the centre to cater for occasions when the venue hosts multiple events and the exhibition halls are split into a number of different spaces. The antennas were then connected to base equipment located in a centralised control room. Since the centre was still unfinished at the time, Arqiva had to tailor the InBuilding implementation process both to avoid any impact on events taking place at ExCeL London and to accommodate the ongoing building work. The system was implemented in stages as each part of the centre became available, and is now being upgraded to 3G without any further disruption.
Details: www.arqiva.com

INTEC/ORANGE
TA Orange has grown to become one of the top three players in Thailand’s mobile communications market, with innovation a cornerstone of its success. TA Orange was the first company in the country to handle SMS messages in the Thai script. With subscriber numbers growing exponentially and new technologies, such as 3G and IP, and additional packages being rolled out nationally, fast and flawless service activation across a wide variety of network resources and protocols is commercially vital.
Bunjert Thungvarathum, Head of Billing and Customer Care at TA Orange, remembers: “Our existing systems were regularly reaching capacity in their existing HLRs.  Additionally, the costs associated with implementing changes to existing systems were high and the time to market too long. This did not put us in a good position, and risked creating customer care issues.”
In order to streamline its business processes and gain greater operational efficiency, TA Orange sought to consolidate its multiple activation systems onto a single platform. In addition, standard changes to the core system required costly consultancy time from the vendor. This discouraged proactive enhancements and hindered the launch of new product offerings.
TA Orange’s mediation capabilities needed a boost as well. “We were predicting a volume rise from 25 to 35 million CDRs a day,” Thungvarathum recalls.
Installation of Inter-activatE and Inter-mediatE from Intec was completed, without any hindrance to existing customer services, within six weeks.
A combination of several benefits has proven to Thungvarathum that the right decision was made: the time to market has improved noticeably; the graphical user interface (GUI) format of Inter-activatE and Inter-mediatE makes the solutions significantly more user friendly than the previous systems; the system can be adapted in-house in real time by the comprehensively trained TA Orange team; TA Orange has found the Intec team to be very responsive; and the software is extremely scalable.
“The rapid installation phase, improved time to market for new services and the opportunity to alter the core system have all contributed to enhancements for our customers,” concluded Thungvarathum.
For full version go to: http://www.intec-telecom-systems.com/its/pressroom/casestudies/

Marketing hype or must-have technology, ADSL2+ promises big things in the broadband market. Sean Stephenson examines how it will develop and who will really benefit from its emergence

The emergence of broadband in the UK around seven years ago saw ISPs begin to roll out ADSL (Asymmetric Digital Subscriber Line) services that would provide consumers with speeds of up to 2Mbps – up to 50 times faster than a 56k modem. The ability to provide such speeds came as a result of ADSL allowing more data to be sent over existing copper telephone lines.

Following the success of broadband in America, and much encouragement from the UK Government for Britain to enter the ‘digital age’, providers began competing to offer new unmetered, faster services. Compared with the situation today, the only real difference is the speeds that providers want to offer. By doubling the bandwidth – and therefore increasing data transfer rates – ISPs are offering users a much faster Internet service and opening the doors to other digital services.

Evolution of ADSL2+
Evolving from the original ADSL broadband standard, ADSL2+ is intended to improve and enhance the broadband experience through increased download speeds of up to 24Mbps, compared with the up to 2Mbps that standard broadband usually offers. The new generation of Internet access addresses many of the problems and issues that affected the original standard. ADSL2+ transceivers are now equipped with refined diagnostic capabilities, allowing problems to be identified and fixed automatically both during and after installation, with minimum disruption to the user. For example, the modem is able to measure noise levels on the line, fix any problems and also prevent future service disruption or failure.
Numerous other advances have also been implemented to provide users with a better service than the old ADSL standard. These include greater download speeds, modem start-up times reduced by around seven seconds, and power management modes that can help cut overall power consumption whilst still retaining the ‘always-on’ capability that characterises ADSL. In addition, ADSL2+ is able to overcome narrowband interference on long lines through improved modulation efficiency, so that signal processing is enhanced. This allows the effects of crosstalk to be mitigated by adapting data rates to changing line conditions without any disturbance to the user.
The evolution of ADSL2+ will also pave the way for Internet Protocol Television (IPTV), which combines the capabilities of the Internet with the technologies of a television set, enabling subscribers to watch high-quality television and films over a single high-speed network. IPTV is set to revolutionise the telecoms market: research from Screen Digest predicts that the number of IPTV subscribers will soar to 8.7 million in 2009. TV delivered over broadband will allow providers to offer a ‘triple-play’ of voice, video and data services on a single bill. Cable and satellite providers are already looking to adopt IPTV technologies in order to retain a competitive advantage, as the new technology is predicted to give ISPs a share of the pay-TV market. With the advent of IPTV, users will eventually be able to select programmes they want to watch from thousands of channels around the world, in much the same way that radio services are currently streamed.

Availability
ISPs are beginning to roll out Local Loop Unbundling (LLU) networks, installing their own equipment in BT exchanges. As these exchanges are unbundled, the new speeds will become more widely available.
Although some operators are attempting to trial the service regionally to make it accessible to those living in more rural areas, ADSL2+ will initially only be available to subscribers in the London area. Even then, there is no guarantee that the faster speeds will be delivered: additional factors, such as the distance of a customer from the telephone exchange, will affect the accessibility of ADSL2+. Those living more than seven kilometres away are unlikely to get the higher speeds, and only those living within 300 metres will be able to get speeds of over 20Mbps. Figures from broadband analysts Point Topic suggest that 45 per cent of the population will have access to speeds of 8Mbps, with just five per cent getting the very high speeds of between 18 and 24Mbps.
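By way of illustration only, the short Python sketch below turns the figures quoted above into an indicative outcome-by-distance lookup. The thresholds (over 20Mbps within roughly 300 metres, little prospect of the higher speeds beyond seven kilometres) come from the paragraph above; the function name and wording are assumptions for illustration, not a real DSL line model.

# Illustrative sketch only: maps a subscriber's distance from the exchange to the
# indicative ADSL2+ outcome quoted above. Thresholds come from the text; the rest
# is assumed for illustration and is not a real DSL line model.

def indicative_adsl2plus_outcome(distance_km: float) -> str:
    if distance_km <= 0.3:
        return "over 20Mbps achievable"            # within ~300 metres of the exchange
    if distance_km <= 7.0:
        return "some uplift over standard ADSL, short of the 24Mbps headline"
    return "unlikely to see the higher speeds"     # beyond ~7km

if __name__ == "__main__":
    for km in (0.2, 1.5, 5.0, 8.0):
        print(f"{km:>4.1f} km from exchange: {indicative_adsl2plus_outcome(km)}")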

So is ADSL2+ worth the hype?
The answer will depend on the type of user. Whilst there are obvious advantages to having ADSL2+, it will lend itself to businesses more than consumers. Consumers are not likely to notice a faster download speed, as only so much speed is needed for everyday use. That said, with ISPs cutting prices and download speeds increasing, consumers are still likely to see the attraction of ADSL2+. For businesses, ADSL2+ may have significant benefits. The diagnostic capabilities, which enable problems to be resolved quickly and efficiently, will be particularly advantageous, as downtime is likely to be reduced – essential for mission-critical applications in a business environment. Companies that download very large documents or images – architects or design agencies, for example – are most likely to see the value of ADSL2+, as the extra speed will greatly improve download times across a company network.
So, whilst there is no denying ADSL2+ is an improvement on the original ADSL standard, it is unlikely to have a tremendous impact on the average broadband user. Also, considering the likelihood that many people will be unable to take advantage of the high speeds that ADSL2+ can offer, it seems unnecessary for residential consumers, who are generally more concerned with price than with having the fastest possible speeds.
Despite the benefits that ADSL2+ may bring, there are few applications that really demand the extra speed. The buzz surrounding ADSL2+ does appear to have been hyped by many ISPs and some of the media and, at the moment, users would essentially be paying for go-faster stripes. Browsing and downloading may be faster, but it doesn’t represent a leap forward in terms of user experience.
Getting ADSL2+ available across the whole country will take considerable time and investment. Until then, we may see two-tier availability: traditional broadband speeds in rural areas and towns, and super-fast speeds in big cities. At the moment, the market generally provides services to the lowest common denominator (designing websites for dial-up speeds, for example), so content that takes advantage of these faster speeds won’t come overnight. This time will be put to good use as ISPs, content providers and broadcasters discuss other thorny issues such as billing and revenue splits.

Sean Stephenson is head of products at PIPEX
www.pipex.net

Global Innovation: Building the ICT Future
27-30 March 2006 Conference   28-29 March 2006 Exhibition
Business Design Centre, London, England

The 21st Century Communications World Forum conference and exhibition provides a global venue in which service providers, enterprise end-users, and industry analysts will come together to discuss issues, challenges, and opportunities raised by the emergence of next-generation ICT services and applications – as well as IP-based network architectures and technologies.

Produced by the International Engineering Consortium (IEC), this global assembly of telecom leaders will examine the impact of emerging information and communications technology in shaping the evolving digital networked economy.
Special emphasis will be placed on bringing together the three key constituencies shaping the future of information communication technology:
• Network operators, service providers, and application developers
• Chief technology officers, enterprise chief information officers, and end-user professionals
• Corporate strategists, industry analysts, system integrators, and technology thought leaders
As the host-sponsor, BT will open the conference with a keynote address presented by Matt Bross, group CTO, that will outline BT’s visionary plan to create its 21st Century network. This multi-billion dollar project is designed to provide the UK with an IP-based multi-service network that will deliver a wide range of fully converged services to businesses and consumers, including broadband voice, video, data and mobility.
Additional featured keynote addresses include Thomas Ganswindt, member of the corporate executive committee, Siemens; and Terry Matthews, chairman of Mitel.
As part of the executive speaker line-up, a special Plenary Panel on ‘Next-Generation Network Rollout Plans’ will convey the ‘Carrier’s View.’ Speakers include: Mick Reeve (Chairperson) Group Chief Technology Officer, BT; J. Trevor Anderson, Senior VP, Technology, Bell Canada; Massimo Coronaro, Technology Officer, Telecom Italia; Tadanobu Okada, Executive Director, Information Sharing Laboratory Group, NTT; and Berit Svendsen, Executive Vice President, Technology, and Chief Technology Officer, Telenor ASA.
Complementing the educational program is the technology exhibition, featuring more than 70 exhibiting companies offering a wide range of service delivery platforms and innovative solutions on the expansive show floor. Attendees can spend their time walking the exhibit hall, or maximise it by attending sessions within the educational program.
A new element introduced at this year’s program is the IMS Global ComForum, revealing the latest IMS research and developments, applications and implementations by leading industry experts. Delegates will examine the promise of IMS – myth vs. reality – through a series of dedicated sessions over the four-day program.
The inaugural 21st Century Communications World Forum 2005 drew more than 2,000 registrants to London, eager to discuss the latest advances behind the move toward customer-centric information and communications technologies.                             

For more information on the 21st Century Communications World Forum 2006, visit: www.iec.org

Billing is now widely considered to be strategic -- a key element in the struggle for better customer service and cash flow management. But it has also split into two parts, says Alex Leslie

The cynics amongst us -- and by that I mean those of us who spend too much time at telecoms conferences -- often become philosophical in bars at airports. 'The problem,' we say, after a sip of alleged Chardonnay, 'is that nothing changes.' We then nod, and take another sip, and wonder when the plane will arrive to take us home.
I believe we think this because we are suffering from PowerPoint poisoning, and simply do not recognise the symptoms. We have come to believe that there is nothing new because the slides look the same as they did several years ago. I have been guilty of this. I used to show a slide at billing conferences. It said that the next generation of billing system needs to be scalable and flexible and have far better reporting capabilities and be properly integrated with customer care. True, but possibly boring -- until I flipped to the next slide, which said that this list of 'requirements' was presented at a conference in 1994.
It is when you rewind to 1994 itself that you see the awesome changes that have actually occurred. A plethora of new services, true competition, almost universal mobile phones, 'free' voice -- all of which would have been greeted, in fact was greeted, by conference audiences back in 1994 as pie in the sky fantasy.
The fact that the slides have not changed much actually means that we got the fundamental 'to do' list right. We will always need greater flexibility and scalability and, to an extent, speed, and certainly greater integration. Those truths are, as Mr Jefferson said, self-evident. It also means that, depending on where you are in the world, and where you are in the development of the telecoms market, you will be somewhere on a 'line' of maturity, where the focus changes depending on whether there is huge subscriber growth, or a more sophisticated, customer-centric, mature marketplace.

Consider billing

Here in Europe, I think billing is coming of age. Billing managers now have regular meetings with CFOs, which is a major breakthrough. Billing, by which I mean the whole revenue management process, not just the system in the middle of the process, is widely considered as strategic -- a key element in the struggle for better customer service, and cash flow management. It is now a process that is regularly measured, whereas before it was not. There are now teams -- that, as often as not, spring from billing -- that roam the corridors looking for revenue leakage. Revenue assurance is becoming a way of life, not an audit.
Billing has changed in other ways. It has split into two parts. One part is responsible for the process, the whole process. This job has, as its primary goal, to make the process completely independent of people, who are generally the things that change and mess things up and make the midnight pizza delivery guys rich people. These process people do not care, except in a high brow intellectual way, about 3G and VoIP and Triple Play, and all the things that we go to conferences to watch slides about. When a new service is launched they want the CDRs, or whatever event record is used, to go through the process smoothly, produce a bill and thus produce money.
In this part of billing, this new maturity was hard won. The Telecoms Troubles gave them no capital, fewer people and more responsibility. The days of buying a new system to launch a new service disappeared. The process became king. Many vendors reinvented themselves as revenue assurance specialists. Many operators at this point joined the ranks of the cynical.
The other part of billing is responsible for figuring out whether the process is capable of supporting the new services that marketing wants to launch on the world. In some operators, this role is actually now part of product management or strategy. In one or two operators, this role has the right of veto on a new product that they cannot bill for. This is the person that vendors take out to dinner. The process person would probably join them, but he is too busy shouting at the network people who, again, upgraded the switches without telling him. More midnight pizzas were delivered.
Whilst this maturity was hard won, it was, at least, won. Looking at the billing industry now, it is a mature industry; 10 years ago it was certainly not.

Can I talk about convergence, please?

I hate to add another favourite from slide packs, but the evolution of the billing process is, to a great extent, being driven by convergence. I cannot remember when I first saw a slide with convergence written on it, but it was certainly back in the days when conference speakers had to take their word processed text to a graphic design shop, wait a week, and then collect a box of 35mm slides. I miss those days -- there was less writing on the slides.
The funny thing is that we are now realising that convergence is about the customer. The headline grabbing projects involving hundreds of millions of euros, converting businesses to IP, are about offering the customers a range of services, cheaper, faster and better than the competition.
For the billing process, this actually means less emphasis on the billing system itself and more of a focus on order processing (being automated at a telco near you), service provisioning and CRM, and integrating these into the whole process, better than before. The goal is to provide a single view of the customer.
The mature billing world, in Europe, is becoming a world where the process is stable and independent of people, and integrated. And because of all this, the customer experience is becoming better.
It also means that if a new system is needed, then mature and tough negotiations take place, and this has ramifications which I am not too happy about, but which I fully understand. The downward pressure on the cost of billing systems means that the resources being ploughed into R&D and new functionality are under pressure. It also means that some vendors decided to provide their own professional consultancy and integration services, something the operator community now takes for granted. This has left the systems integration community exposed -- and it was already under threat from the fashion for offshoring.

Around the world

I have to confess that once I had decided to provide a round up of the major regions and where they were in the 'maturity matrix' of billing processes, I found that my knowledge was not as up to date as it should be, and so, faced with the £64,000 question, I decided to use a lifeline and phone a friend -- well several, actually.
First I phoned Andreas at Orga Systems, to help me out with what is going on in Latin America. In the mobile market, which completely dominates, massive growth is the theme. In Brazil, net subscriber additions for 2004 came to just under 20 million. In Argentina the growth rate is 75 per cent; in Colombia just under 60 per cent. The vast majority of the market is prepaid. Competition is fierce, and thus pressure on ARPU is intense.
The result is that the underlying issues in the region are not too different from the ones we know from the recent past in Europe, although they are happening faster, and all at once. In Europe we had the luxury of seeing the fastest growth period happen during a mainly 'voice' era; in Latin America it is all happening at the same time. Billing processes are therefore still relatively unstable, as one might expect, only the first attempts at measuring and controlling them are emerging, and the keys to success or survival are real-time systems and scalability.
Then I phoned Mike at Portal Software in Cupertino, and asked about the state of the market in North America. He was upbeat. The telecoms market in the US, generally speaking, is improving and is about the three 'C's. Consolidation is ongoing, on a scale that is awesome, and, as we thought several years ago, is shaking out into the dominance of a very few players. The challenges that consolidation brings in terms of the billing process and the systems that support it are huge, and generally take time to sort out. Consolidation, as many of you will know, is a real enemy of a stable process!
The second 'C' driving the market in North America is our friend Convergence. It is happening, IPTV is in the wings, and is not only a huge opportunity but a huge challenge. In fact, it is a completely new business. Everything over IP is, as I have said, about the customer, and delivering services better, faster and cheaper than the competition. It also enables innovation in pricing, which brings with it sophistication and the potential for differentiation.
The third 'C' is content. Content is becoming king in North America, driven as much by the fabulous popularity of iPods and games as by anything else. The US, in particular, is now turning to prepaid. It was, as we know, slow to take off, but is now forecast to be the biggest growth area in mobile, helped along by the emergence of MVNOs.
The market in Africa would require a separate article to do it justice; it is simply too complex. However, it would be fair to say that, again, mobile is not only the driving force, but in some cases is driving the economy. The constraints are the lack of capital and the challenges of supporting billing implementations.
I phoned the GBA's Asia team to get an up to date view of that huge and varied market. They surprised me by being less upbeat. New contracts for billing vendors are few and far between, and those few are hard won. Content is one area where there is light, but it is not providing many opportunities for innovative and sophisticated pricing and billing. Indeed the emphasis seems to be on the content provider providing priced records to the billing system, whether the content provider is the dominant partner, or not. In fact we are now seeing content providers providing value chain pricing as part of their offerings. Perhaps Asia is once again breaking the mould and bending other people's innovations around their own processes.

In summary

The trends in billing must be broken into two parts. The trends in the process part will be towards more and more stable processes, independent of people and reorganisations. It will also be towards quicker processes -- in mature markets there is now an emphasis on shortening the 'time to cash' as a constant goal. Part of this is integrating the 'front end' of the process better. In emerging markets, stable processes may seem a dream at the moment, but the journey is already starting, and will follow a well-trodden and rocky path.
In terms of billing development, the focus is on convergence, and now, not just on slides, but in the real world. IP will deliver services better, faster and cheaper, and if the process is mature, then the customer experience will genuinely be enhanced.
There is also an opportunity in the mid-market section of our community. The larger billing vendors used to concentrate on the very top end of the market, and the smaller ones provided niche players with billing systems during the past few years. There is an opportunity in the middle, both here in Europe and in North America, where the tried and trusted drivers -- time to market and flexibility -- will create these opportunities.
The winds of change are blowing, and blowing at different speeds around the world. In Latin America they are blowing hard and fast; in more mature markets such as Europe they may be easing off, but they still seem to have some surprising eddies in them.

Alex Leslie is CEO, Global Billing Association, and can be contacted via: alex@globalbilling.org

Does your organisation really know everything it needs to know about its assets in general and its fixed assets in particular? Equally to the point, is it accounting for them properly? Nicola Byers investigates

The Sarbanes-Oxley Act of 2002 has ushered in a major new legislative climate in the United States and revolutionised attitudes to corporate governance there. It has immediate consequences for UK organisations that have US parent corporations, but it also has pressing implications even for organisations with no particular US connection. As the Financial Times stated in April 2005:
 "It seems that the more obvious demands imposed by Sarbanes-Oxley in financial accounting -- the expense, the time investment, the extra audits -- are just the tip of the iceberg. The required mix of 'proper' business controls and personal liability is causing a chain reaction that affects Boards, organisational structures, professional advisers and the daily efficiencies of all public companies -- and many private ones, even though technically they are not covered by the Act."
Beyond question, Sarbanes-Oxley places a major new focus on corporate procedures. There are important consequences here for all aspects of an organisation's accounting procedures, and especially for how it accounts for its assets, which are such a major part of any organisation's fundamental structure.
The days when major organisations could confidently expect to handle the accounting of their assets using simplistic in-house systems or databases appear to be coming to an end. In their place, a climate is developing in which organisations must be able to bring a new, highly flexible, interactive approach to asset accounting, based around a specialist asset management solution. The good news is that organisations that adopt this approach will gain a handle on their assets, and a knowledge of them, that will allow them to work those assets even harder in the future.
Assets are a significant part of an organisation's accumulated wealth and a fundamental resource used to generate its profit. The careful monitoring of assets is both a commercial and a statutory requirement. But how should an organisation best monitor and account for its assets? If you are involved with the management of your organisation's assets, do you know what assets your business holds, where they are located, and what they are currently worth? Do you have the facility to furnish this information for any particular moment in the past that might come under scrutiny? Are you able quickly and efficiently to know what your asset status was a year ago? Does the information you have about your assets come complete with a detailed audit trail? Can you relate the physical asset to a financial record?
Historically, the term 'fixed assets' has been used when discussing asset registers. This was because a 'fixed asset' referred to a purchased item that would be of benefit to an organisation for a fixed term of more than one year. Before the advent of information technology in the accounting arena, the fixed asset ledger contained a schedule of all major capital purchases made by an organisation. As these would have to be manually depreciated, they generally consisted of large non-portable items such as vehicles, engineering equipment, land and buildings. The management and depreciation of these fixed assets was obviously a resource-intensive process and, as such, the ledger entries were often in summary form and sacrificed detail.
The impact of IT improved this situation. New accounting software products, in replicating the old ledgers, were able to automate some of the processes and reduce the resource issue, but even they had their limitations. But the way assets are accounted for has moved on. Organisations now need to account for (and manage) assets that no longer fit the old 'fixed asset' definition. Today's fixed assets can be small, portable and also intangible. There is a requirement to manage the asset register in order to record all purchase details and accounting history, track movements, audit all events and actions relating to the individual asset and, finally, relate multiple physical assets to a single fixed asset entry in the accounts. This detailed, micro-management requirement was often not possible with old accounting software products and alternatives have had to be found.
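As a rough illustration of the kind of register record this implies -- one fixed asset ledger entry linked to several physical items, each with purchase details, a movement history and an audit trail -- the Python sketch below models the relationships described above. The class and field names are hypothetical and do not reflect the schema of any particular asset management product.

# Minimal sketch of the asset register record described above: one fixed-asset
# ledger entry linked to several physical items, each with a movement history
# and an audit trail. Names are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AuditEvent:
    when: date
    action: str          # e.g. "purchased", "moved", "disposed"
    detail: str

@dataclass
class PhysicalAsset:
    tag: str             # barcode / RFID tag used for physical tracking
    serial_number: str
    location: str
    custodian: str       # the user or manager responsible for the item
    history: List[AuditEvent] = field(default_factory=list)

    def move(self, new_location: str, when: date) -> None:
        self.history.append(AuditEvent(when, "moved", f"{self.location} -> {new_location}"))
        self.location = new_location

@dataclass
class FixedAssetEntry:
    ledger_code: str
    description: str
    purchase_date: date
    purchase_cost: float
    items: List[PhysicalAsset] = field(default_factory=list)   # many physical items, one ledger entry

# Usage: three laptops recorded against a single fixed-asset ledger entry.
entry = FixedAssetEntry("FA-2005-0042", "Field laptops", date(2005, 4, 1), 3600.0,
                        items=[PhysicalAsset(f"TAG{i}", f"SN{i:04d}", "Head office", "IT dept")
                               for i in range(3)])
entry.items[0].move("Leeds branch", date(2005, 9, 1))
print(len(entry.items), entry.items[0].history)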
For small businesses with a relatively low number of fixed assets or non-depreciating assets, the accounting requirements can be managed by in-house systems or databases. For practical purposes, an organisation turning over less than £5 million annually, or one with fewer than 500 assets, will probably be able to make do with such a system. The proviso should, however, be made that if the organisation is in a strong growth cycle and likely to accumulate a good number of new assets over, say, the next six months, it should think hard about whether a simple accounting record will really meet its asset accounting needs.
Any asset may go missing or be lost. We live in an age when relatively small assets -- laptops, for example -- are extremely valuable. The physical tracking of such assets is a crucial aspect of the quality of an organisation's management information, and can be overlooked due to time constraints or poor monitoring systems. If these requirements for asset monitoring and tracking are not adequately met, the consequences can be extremely serious. The issue of corporate governance is certainly putting pressure on many organisations that have previously put the issue of asset management on the back burner.
Research conducted by Real Asset Management suggests that around 40 per cent of senior managers in major UK organisations are not confident that their registers of fixed assets are up to date. Similarly, a rather alarming 35 per cent believe they have assets missing from their register, while 30 per cent have no facility to track their assets using any form of electronic identification. In practice, such senior managers will typically have a simple accounting record of the assets. But the trouble is, large organisations cannot expect to 'make do' with simple systems for complex asset situations, any more than you can run a major business today using technology no more sophisticated than an abacus.
Ideally, any organisation with more than £5 million annual turnover or an asset register with more than 500 records, needs to have an asset management resource in place that provides rapid and effective affirmative answers to the following key questions:
*  Is the accounting resource linked to the actual asset or assets by means of a tracking system?
*  If the asset is disposed of, how does the finance team know?
*  If the asset is moved, is the finance department told where its new location is and who manages this resource?
*  Does the accounting resource facilitate a physical audit trail of its assets?
In practice, many large UK organisations could probably not answer all these questions with a resounding affirmative. But UK organisations are far from alone in rarely being able to do this: fixed asset measurement, management and overall control are often deficient in the US, too. Indeed, the whole matter of the monitoring and management of assets -- whether fixed assets or non-depreciating assets -- is one important area that is being improved in the United States following the passage into law of the new legislation mentioned above, the Sarbanes-Oxley Act.
The Act -- popularly known as SOX -- is one of the most sweeping and influential pieces of legislation ever brought to law in the US. In a legislative environment where many passionate initiatives to change the law often wind up diluted and compromised, SOX received widespread consensual backing, doubtless because of the scars that the Enron and WorldCom scandals left on the US corporate mindset. Among the many vigorous provisions of SOX is a requirement for corporations to monitor much more closely how they record purchases of capital assets. Many US corporations that previously did not have a dedicated system for recording such purchases are now working to address the situation.
Under SOX, every transaction related to capital expenditure must be available for analysis and reporting. This provision has implications for all types of transactions in which an organisation engages and it also has especially important implications for fixed assets, which tend to have a high value.
Fortunately for financial directors, senior managers and anybody else who has a professional requirement for information about their organisation's assets in general, the power of specialist asset management solutions gives financial departments an important -- and in many cases essential -- tool for measuring, managing and monitoring their assets. Without such systems, they would be obliged to try to monitor and account for all their assets very much by the seat of their pants, or by using rules of thumb that do not, by definition, have very much that is scientific about them.
The benefit of a specialist asset management solution is that it offers tracking and inventory control, so that financial records can be related to physical items. Specialist systems also provide an audit trail facility and a centralised asset register so that when items are disposed of or moved, the accounting records are updated. Without such a resource it is difficult to see how a financial department will know when an asset is disposed of or moved. This is rarely information that other departments automatically pass on. It is easy to see how, under these circumstances, records can very easily get out of date.
Overall, organisations simply have to engineer themselves into a position where they can be confident that they do not belong to the 30 per cent or so of UK organisations whose financial directors, according to our research, do not have a good knowledge of their assets. In order to prevent all the problems that go hand-in-hand with inadequate asset registers, organisations ideally need to know all of the following about their assets:
*  Physical location
*  Location history
*  Details of the actual user or manager responsible for the asset or assets
*  The serial number of assets
*  The actual value of the asset or assets
SOX requires companies to prove that the formulas in their in-house systems and databases comply with US GAAP (Generally Accepted Accounting Principles) rules. This can be a difficult matter for in-house systems and databases, when the GAAP rules have been known to change on a daily basis. In-house systems and databases for asset accounting also bring some of the following problems:
*  They can only operate as single-user systems, whereas a multiple-user system is likely to be required
*  They provide no cost effective way of building an audit history
*  In most cases they have no data security facility or, if they do, it can be very complex to configure
*  They are, generally, prone to errors
*  In-house systems rarely have any external support
*  They are difficult to maintain and development can be time-consuming
*  It may be difficult to manage large quantities of data using this approach
*  They often demand considerable maintenance time from senior members of the finance department who have other responsibilities.
*  They offer no facility to relate financial records to physical items; that is, they offer no asset tracking.
An additional problem with in-house systems/databases is that the author or controller needs to be available if the system is to be used properly. This obviously causes difficulties if he or she has left the organisation or moved to another role outside the department.
For any organisation, the ideal situation is that it has a detailed, comprehensive and powerful knowledge of its assets. Essentially, an organisation needs a specialist asset management solution that allows it to keep precise records of every significant fact about an asset or assets.
The precise solution you choose for ensuring that your organisation has a top-quality tool in place to measure and manage your fixed assets is obviously up to you. However, ideally it should be able to do all the following:
*  Provide for both the physical and financial control of assets
*  Be able to hold movement history
*  Offer a full audit trail facility
*  Guarantee data security
*  Be multi-company, multi-currency or multi-book if needed
*  Be capable of calculating depreciation using different methods: straight line, reducing balance, etc (illustrated in the sketch after this list)
*  Provide high-quality reports and have a facility to produce user reports
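The Python sketch below works through the two depreciation methods named in the list above -- straight line and reducing balance -- using hypothetical figures. It is a minimal worked example, not the calculation engine of any particular product.

# A minimal sketch of the two depreciation methods named above, assuming simple
# annual periods. Figures and parameter names are hypothetical.

def straight_line(cost: float, residual: float, useful_life_years: int) -> list:
    """Equal annual charge: (cost - residual value) spread over the useful life."""
    annual = (cost - residual) / useful_life_years
    return [round(annual, 2)] * useful_life_years

def reducing_balance(cost: float, rate: float, years: int) -> list:
    """A fixed percentage of the remaining book value is charged each year."""
    charges, book_value = [], cost
    for _ in range(years):
        charge = book_value * rate
        charges.append(round(charge, 2))
        book_value -= charge
    return charges

# Example: a £10,000 asset with a £1,000 residual value over five years / at 25%.
print(straight_line(10_000, 1_000, 5))    # [1800.0, 1800.0, 1800.0, 1800.0, 1800.0]
print(reducing_balance(10_000, 0.25, 5))  # [2500.0, 1875.0, 1406.25, 1054.69, 791.02]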
Conversely, if organisations don't relate accounting records to physical assets, they can:
*  Wind up with inaccurate data
*  Lose substantial sums of money by inefficiently managing assets
*  Fail crucial audits
*  Fail to meet external requirements such as UK Government guidelines or the provisions of SOX
*  Fail to qualify for additional funding
*  Lose shareholder value
*  Fail to identify redundant assets
The asset climate is changing, and organisations need asset management resources that meet the more stringent requirements of this new climate. The bad news is that some investment is likely to be required to introduce a new accounting system that meets the varied and complex needs of your asset register. The good news is that the new accounting system will indeed prove a truly powerful new resource that deals with a potentially thorny area of an organisation's activities and which will also help catapult your business to greater success.
 
Nicola Byers is Marketing Manager of Real Asset Management, and can be contacted via tel: +44  1689 892 100; e-mail: nbyers@ramplc.com

Louise Penson describes how the TDC Group in Denmark put together a plan to control customer exposure

How many telecommunication operators -- or any organisation for that matter -- can tell you at any point in time what their customer exposure is? A pipe dream perhaps? Not according to TDC, Denmark's leading supplier of telecommunication services, who are coming to the end of an ambitious project designed to give them a comprehensive view of the financial risk posed by their customers.
Established in 1990, the  TDC (formerly Tele Danmark) Group comprises a range of business lines including landline telephony, data communications, Internet services, mobile and cable, as well as interests in a number of other European telcos. Towards the end of 2003, TDC recognised the need to improve their credit and fraud management processes, which at the time were based upon traditional monitoring of individual customer entities, and limited to selected parts of the group's major business areas.   
TDC established a project team with the participation of key employees from TDC Solutions, TDC Mobile, IT and shared service functions. As the project evolved to be cross-organisational it was anchored in TDC's headquarters.
The scope of the project was extensive, comprising:
*  Network surveillance and fraud control
*  Bad debt
*  Procedures and governance rules
The main aim of the project team was to attain a complete and comprehensive view of TDC's customer exposure at any point in time -- as viewed from the following standpoints:
*  From individual telephone numbers, to the customer as a whole
*  From individual customers, to the customer base as a whole
*  From each individual business unit, to the TDC Group as a whole¹
The project team realised early on that efficient monitoring of customer exposure demanded organisational restructuring of the fraud and credit management functions. Previously, fraud management was handled by the network unit, and credit management was handled by the business units. The former decentralised handling of customer exposure was deemed to be incongruous. 
The project team suggested the following as the ideal organisational structure:

[img="http://www.hhcmailer.com/downloads/CustExp2.gif"]

It was believed that this structure would facilitate fast and efficient interaction between the various departments, supporting efficient monitoring of customer exposure. The next step comprised the consolidation and streamlining of the credit and fraud functions, standardisation of processes and procedures within those areas, education of key employees and the identification of requisite IT-support.
TDC put together specification documents that were initially aimed at finding two separate IT solutions, one for credit and one for fraud management. Unable to find a credit management solution that met their requirements, they commissioned risk management solution provider Neural Technologies to work with them to transform the requirement specification used for the RFQ process into an operational requirement specification for a combined fraud and credit management system -- or, as TDC prefer to call it, a 'Customer Exposure System' -- to support the revised organisational structure.

The Customer Exposure System

TDC's specification for the Customer Exposure System was extensive, comprising:
*  The ability to detect new types of fraud (for which they stipulated a neural network system)
*  Improved credit rating on new activations
*  Advanced warning of bad debt and prioritisation of alerts/cases. 
Crucially, TDC wanted to ensure that the acceptance of any large exposure was based on approved business procedures. The functionality of the new system will put an end to traditional credit monitoring, which is based on fixed credit lines for each type of customer and, in turn, often on external parameters. Traditional credit monitoring is often performed on alerts triggered by high usage, ignoring information about the customer's previous usage and payment behaviour.
The system allows TDC to set an individual exposure limit for each customer based on that customer's behaviour. The exposure limits ensure that alerts are raised when customers change their usage patterns or payment behaviour. Furthermore, the functionality affects the prioritisation of alerts, and the number of alerts that have to be investigated -- which will be determined on the basis of the actual customer exposure. Finally, neural functionality is expected to support the identification of customers' usual usage and payment patterns.
These exposure limits will be calculated on a daily basis for each customer (person or legal entity) and automatically adjusted over time based on the customer's usage, provided that the customer meets certain criteria (e.g. no adverse credit history with TDC, invoices paid on time, etc.). Adjusting exposure limits based upon usage and payment behaviour will enable TDC to achieve more efficient monitoring of customer exposure, which may even increase customer satisfaction.
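The Python sketch below illustrates, in highly simplified form, the kind of daily exposure-limit review described above. It is not TDC's or Neural Technologies' actual algorithm: the customer fields, thresholds and the five per cent uplift are assumptions made purely for illustration.

# Simplified sketch of a daily exposure-limit review -- illustrative only, not
# TDC's or Neural Technologies' algorithm. Fields and factors are assumptions.
from dataclasses import dataclass

@dataclass
class CustomerExposure:
    exposure_limit: float        # current individual limit for this customer
    unbilled_usage: float        # usage not yet billed
    outstanding_invoices: float  # billed but unpaid amounts
    pays_on_time: bool
    adverse_credit_history: bool

    @property
    def total_exposure(self) -> float:
        # Total exposure includes unbilled usage as well as billed amounts.
        return self.unbilled_usage + self.outstanding_invoices

def daily_review(c: CustomerExposure) -> list:
    """Run once per day per customer: adjust the limit and raise alerts."""
    alerts = []
    if c.total_exposure > c.exposure_limit:
        alerts.append("exposure limit exceeded - prioritise for investigation")
    elif c.pays_on_time and not c.adverse_credit_history:
        c.exposure_limit *= 1.05   # well-behaved customers earn gradual headroom (assumed 5% uplift)
    else:
        alerts.append("payment behaviour changed - review exposure limit")
    return alerts

# Example: a customer whose unbilled usage has pushed total exposure past the limit.
cust = CustomerExposure(exposure_limit=500.0, unbilled_usage=420.0,
                        outstanding_invoices=150.0, pays_on_time=True,
                        adverse_credit_history=False)
print(daily_review(cust))   # ['exposure limit exceeded - prioritise for investigation']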
TDC also set out to use this prior knowledge of customer exposure and behaviour to protect customers from unintended increases in usage -- which often result in hefty invoices and possible default in payment. TDC stipulated that the solution should provide a full and complete segregation of customer data between the different business lines in order to comply with the aforementioned Danish law. Neural Technologies were able to create individual user profiles and segment these profiles along with their associated customer data to meet this challenge.
TDC Solutions and TDC Mobile have a large number of systems containing customer information needed for preparing customer exposure. Often, telecommunication companies monitor customer exposure on the basis of billed traffic: TDC includes unbilled usage etc. to get the total overview of the customer exposure.  The new customer exposure system aims at bringing all that data together from the disparate systems, allowing TDC a clear view of the exposure posed by any customer.
TDC expect the project to be an immense success. One of the major factors behind this is the resounding support from TDC's executive management and the strong leadership that was able to get 'buy-in' and consensus from the various business units.
TDC's Senior Audit Manager, Marianne Holmbjerg, who was a key proponent and major driving force behind the project, notes: "At the start of the project the different business units had disparate working methods due to decentralisation. Now the business units are working with one common goal. We are expecting to reduce duplication of effort and to optimise and standardise our processes for customer response and monitoring of customer exposure. We now have the basis for an increased awareness of risk and have improved the sharing of knowledge between various functions -- Revenue Assurance, Fraud Management and Credit Management -- in each legal entity.
"Our customer exposure monitoring will cover all categories of customers. In addition, we will be able to monitor and assess fraud and credit risk on new services and price plans. Overall we have enhanced data visibility leading to a vast improvement in decision making".
Paul Bowler, Deployment Manager for Neural Technologies, adds: "The fact that fraud and credit risk is now managed within a single system gives TDC a holistic view of any issues, and means that potential losses are more likely to be identified. Particularly given that there is often crossover between the two functions. For example, what starts as a fraud can end up as a credit alert if the expected exposure is exceeded due to high usage. Furthermore, the solution ensures TDC fulfil their accounting principles by providing clear fraud and bad debt definitions."
The new system is supported by TDC's existing processes, which comprise, for example, credit vetting on new activations, monitoring of accounts receivables, dunning processes and monitoring of network performance. At the moment the system is in a testing phase on live data and is expected to be fully implemented by the end of the year. The user acceptance tests of the "Customer Exposure System" on synthetic data gave rise to almost no corrections. 
The Customer Exposure System initially covers the business units of TDC Solutions and TDC Mobile. It is envisaged that TDC's other business units will be incorporated over time.

¹ Ensuring that the organisation works within the constraints of current Danish law, which prohibits the exchange of customer information between TDC's separate legal entities.

Louise Penson, Neural Technologies, can be contacted via tel: +44 1730 260256;
e-mail: louise.penson@neuralt.com

Organisations are coming under increasing regulatory pressure to ensure stricter policy towards corporate governance. Michael Burling looks at what companies can expect

In the wake of the Enron and WorldCom accounting scandals, the regulations an enterprise implements to ensure its integrity are open to increasing scrutiny. This has given rise to a growing number of initiatives such as Basel II, the Sarbanes-Oxley Act and the new Companies Act, all designed to ensure that high standards of corporate governance become part of day-to-day business culture.
Basel II, the forthcoming protocol for the financial sector, is designed to replace the 1988 Capital Accord. It recognises that managing and controlling financial risk and operational risk, such as IT, is an integral part of corporate governance and, as such, obligates companies to assess their vulnerability and make it public.
Basel II is based on three pillars that allow the risks financial institutions face to be evaluated effectively: minimum capital requirements; supervisory review of an institution's capital adequacy and internal assessment processes; and market discipline through effective disclosure to encourage safe and sound banking practices.
Financial organisations that do not provide appropriate details must set aside 20 per cent of their revenue in order to cover losses or risk being prevented from trading. The first phase of Basel II will come into effect at the end of 2006, with the more advanced elements planned for implementation at the end of 2007.
The furthest reaching of these regulations is the Sarbanes-Oxley Act, which requires companies to comply with challenging new standards for the accuracy, completeness and timeliness of financial reporting, while increasing penalties for misleading investors. The Act, which applies to all companies (and their subsidiaries) on the US public markets, protects the interests of investors and serves the wider public interest by outlawing practices that have proved damaging, such as overly close relationships between auditors and managers. The law includes stiff penalties for executives of non-compliant companies, including fines of up to $5 million and up to 20 years in prison per violation.
The forthcoming Companies (Audit, Investigations and Community Enterprise) Act is designed to help UK firms avoid the much-publicised accounting and auditing problems experienced by companies such as Enron, WorldCom and Parmalat. The Bill, which was mentioned in this year's Queen's Speech and will be debated in this session of Parliament in order to come into force early next year, will impose new measures to ensure that data relating to trades, transactions and accounting throughout an organisation is fully auditable.
With reference to the Companies Act, Department of Trade and Industry minister Jacqui Smith has said: "We want the UK to have the best system of corporate governance in the world. There is no denying that financial markets around the world have been badly shaken by the corporate failures of the last few years.
 "This Bill completes a comprehensive package of measures aimed at restoring investor confidence in corporate governance, company accounting and auditing practices here in Britain. Its aim is to raise corporate performance across the board and beyond.
 "The Bill tightens the independent regulation of the audit profession and strengthens the enforcement of company accounting, both concerns highlighted by the Enron and Worldcom scandals. It gives auditors greater powers to get the information they need to do a proper job, and increases company investigators' powers to uncover misconduct." 

Network security
Basel II, the Sarbanes-Oxley Act and the Companies Bill all highlight the fact that board directors and executive management have a duty to protect the information resources of their organisations. As such, network security -- preventing unauthorised access to information and data -- is of the utmost importance, and the most effective way of achieving this is to deploy a provisioning solution that allows the enterprise to determine who has access to which applications, and when.
However, implementing an identity and access management programme that ensures the correct level of security and internal controls over key information and data can be a difficult task for many large organisations.
Often, systems and access policies in use today were developed many years ago when security was not necessarily the highest priority. Not only are these legacy systems now unsuitable for use, but, since being implemented, many of the policies associated with them have not been reviewed, and access is granted either manually or by way of 'home grown' development.
Furthermore, many of the systems were not developed to cater for temporary changes, such as the provisioning and de-provisioning of contract workers, or to account for a member of staff on leave. Adding to the problem is the fact that companies often have myriad systems and access policies, which have been merged with another organisation's policies, systems and architectures.
These issues are now major problems that need to be addressed urgently. As well as the need to comply with corporate governance regulations, the situation has also given rise to an increased security threat; a fact highlighted by the Financial Services Authority's Financial Crime Sector Report: Countering Financial Crime Risks in Information Security. 

Secure enterprise provisioning

The latest enterprise provisioning technology allows organisations to alleviate these problems through centralised management of IT systems and applications, and the users who access them. Enterprise provisioning solutions, which automate the granting, managing and revoking of user-access rights and privileges, solve the problems created by complex user bases and IT infrastructures by enforcing policies that govern what users are allowed to access and then creating access for those users on the appropriate systems and applications.
The solution can execute provisioning transactions dynamically, based on the nature of the request, and then initiate the approval workflows defined by the relevant policy. It will also provide robust reporting that enables the IT department to better manage user access rights from a global view. For example, systems administrators can view who has access to particular systems, or the status of any individual access request (add, move, change, delete), in real time.
The best of the new breed of provisioning systems enforce organisational policies designed to ensure that financial enterprises comply with regulatory requirements by governing who can access particular systems and the information they contain. Reporting and auditing capabilities enable the organisation to demonstrate compliance by listing who has access to protected systems, showing how that access was granted and confirming that the appropriate approvals were obtained -- evidence that the policies designed to comply with regulations are being followed. The software can also demonstrate that users who have left the organisation have had access revoked from all the systems to which they were previously authorised.
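As a simple illustration of policy-driven provisioning with an audit trail -- not Thor Technologies' product or API -- the Python sketch below grants access only where a role-based policy allows it, records every decision, and revokes all access when a user leaves. The roles, systems and field names are assumed examples.

# Minimal sketch of policy-driven provisioning with an audit trail. Illustrative
# only; roles, systems and structures are assumptions, not a vendor's API.
from datetime import datetime

# Policy: which roles may access which managed systems (assumed example data).
POLICY = {
    "trader":     {"order-entry", "market-data"},
    "accountant": {"general-ledger", "fixed-assets"},
    "contractor": {"intranet"},
}

audit_log = []        # who requested what, when, and the outcome
access_grants = {}    # user -> set of systems currently provisioned

def request_access(user: str, role: str, system: str) -> bool:
    """Grant access only where policy allows it, and record the decision."""
    allowed = system in POLICY.get(role, set())
    audit_log.append((datetime.now(), user, role, system,
                      "granted" if allowed else "denied"))
    if allowed:
        access_grants.setdefault(user, set()).add(system)
    return allowed

def deprovision(user: str) -> None:
    """Revoke every grant when a user leaves, and record it for compliance."""
    for system in access_grants.pop(user, set()):
        audit_log.append((datetime.now(), user, "-", system, "revoked"))

# Example: a contractor is granted intranet access, refused the ledger, then leaves.
request_access("jdoe", "contractor", "intranet")        # True
request_access("jdoe", "contractor", "general-ledger")  # False
deprovision("jdoe")
for event in audit_log:
    print(event)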
These capabilities not only make regulatory compliance straightforward and easy to manage, but ensure increased productivity. Users can be connected to the resources they need to be productive in a fraction of the time, cost and effort previously required. Enterprises can compress the user set-up process from weeks to minutes and application integration from months to just days. In addition, the IT department's own productivity will increase dramatically as resources are freed up from the time-consuming tasks of managing user access and building integrations to managed systems and applications.
By ensuring regulatory compliance while reducing IT costs, secure enterprise provisioning solutions are set to evolve from the great opportunity they currently represent into a critical element of the IT infrastructure of successful businesses.

Michael Burling is EMEA managing director of Thor Technologies and can be contacted via tel: +44 1932 268 456; e-mail: michael.burling@thortech.com
www.thortech.com

The telecoms industry has been working to develop an architecture that could bring together the increasingly complex elements within the network. And finally, with IMS, it may well be succeeding, claims Grant Lenahan

For the last twenty years or so, the creation of an infrastructure that could support a 'network of networks' has been the long-term vision of many industry bodies, service providers and vendors. The aim has long been to create an agnostic environment that allows users to interact with services, whenever and wherever they are. Initiatives such as the International Telecommunication Union's (ITU's) IMT-2000 framework set the way, although the 2G and ISDN technologies it was designed around now look rather dated in the face of the overwhelming success of IP.
While technologies, regulatory conditions, and operators' business models have changed dramatically, overall industry objectives have not. Confronted by continually increasing complexity in devices, protocols and applications -- and by the need to inter-work across multiple network boundaries -- the telecoms community has been working hard to develop an architecture that could bring these together in the simplest way possible. And finally, with the IP Multimedia Sub-system (IMS), it may well be succeeding.

Mobile first

The mobile community has got there first with the IP Multimedia Sub-system, mainly as a result of the economic downturn that limited fixed network investment at the start of this decade. 3GPP, the main global co-ordinating body for 3G network development, initiated the work on IMS that would act as the standard for the converged core network of the future.
With many fixed network operators now starting to confront similar issues, IMS-based solutions are also becoming attractive to them as they face a future based on offering the 'triple play' of integrated voice, data and content services. In this context, IMS is expected to play a major role in driving the continued convergence between the fixed, mobile and wireless sectors, and greatly simplifying usability from the end user's perspective.
IMS is rapidly gathering pace, with some systems planned to go live in late 2005. One of the significant drivers for IMS adoption is its potential for rationalising the heterogeneous network and service infrastructures that have been inherited by multinational mobile operators as a result of the industry consolidation of recent years.
Particularly acute is the issue of continued inter-working with legacy PSTN and cellular infrastructures. Around the world, investment in both access and switching equipment has been enormous over the last century, and it will be impossible to replace it completely for many decades. In fact, significant growth is still underway in circuit-switched cellular networks. For this reason, inter-working between the two domains will remain an important issue for the foreseeable future.
If truly global brands are to be established, they must offer consistent services across networks and global boundaries. In reality, this consistency must extend to the methods by which services are created, delivered and managed. The use of a single and coherent -- yet highly distributed -- architecture is desirable if systems integration and legacy support costs are not to become unmanageable.
The previously 'flat' structure of traditional telecoms networks is being replaced by an open, layered model that allows the delivery of richer multimedia services to a variety of devices from a variety of sources. Future services are being built around IP as the transport protocol, supported by the Session Initiation Protocol (SIP) to control VoIP and multimedia sessions, and Diameter to handle customer authentication and billing procedures. Building on these, IMS has been designed to support other relevant protocols such as HTTP, Web services and Parlay, while also incorporating the work being done by the Open Mobile Alliance (OMA) in the applications layer.
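For readers unfamiliar with the signalling involved, the sketch below simply prints the kind of SIP INVITE that opens a multimedia session. The addresses, tag and branch values are made up, and a real IMS INVITE would carry additional headers and an SDP body describing the media.

# Illustrative only: the shape of a SIP INVITE that would open a
# multimedia session in an IMS core. Addresses and identifiers are invented.
def sip_invite(caller: str, callee: str, call_id: str) -> str:
    headers = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP ue.example.net:5060;branch=z9hG4bK-0001",
        "Max-Forwards: 70",
        f"From: <sip:{caller}>;tag=4711",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Type: application/sdp",   # the SDP body (omitted here) would describe the media
    ]
    return "\r\n".join(headers) + "\r\n\r\n"

print(sip_invite("alice@home.example.org", "bob@visited.example.com", "a84b4c76e667"))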

All-packet core

At the heart of the IMS is an all-packet core that fully supports the ever-growing diversity of access technologies including 2G, 3G, WiFi and WiMax -- as well as the still largely open concept of '4G'. Supporting this, a number of other standards bodies are also developing appropriate extensions to allow IMS to inter-work with other access technologies such as xDSL, PacketCable (DOCSIS) and fibre in the local loop.
The primary purposes behind IMS are to enable a richer set of services, as well as facilitate the seamless convergence of all the communications services that we presently use -- but which are currently partitioned by the nature of the networks that they run on. While we've become used to using the fixed Internet for some transactions, our mobile handsets for others and so on, this silo concept is increasingly inefficient and expensive for both user and service provider.

Moving up the value chain

With traditional voice revenues under constant erosion, it's essential that service providers of all types are able to move up the value chain, away from basic connectivity and towards more advanced communications services that include multimedia, messaging, business and lifestyle applications. IMS has been specifically designed to allow this type of rich interaction between services, allowing users to set up voice or multimedia sessions on the fly, exchange content and messages in highly flexible ways, direct fixed line voicemails to mobile in-boxes, or use presence and availability information to direct calls to the most appropriate person within an enterprise.
IMS also supports the core next generation network objective of openness and transparency. On one hand, the presence of standards ensures that multi-vendor purchasing strategies can be pursued without an accompanying rise in integration overheads. On the other, commercial relationships with content and application owners and aggregators can be protected within a secure framework for financial transactions, supported by underlying techniques that guarantee quality of service across multiple network and operational domains.
The changing role of industry standards within telecoms is particularly important here. In direct contrast to the IT industry, communications standards have generally emerged through consensus, facilitated by the workings of national and international industry bodies like the ITU. As the world's networks shift towards becoming open platforms, new technologies must be integrated at an ever-faster rate to achieve continued competitive advantage, placing a strain on increasingly fragile and multi-sector standards processes.
By providing what is, in effect, a common and open applications platform for both service providers and third parties to use, IMS goes a long way towards helping the telecommunications industry take its first steps towards becoming more commercial and even less utility based.
IMS will have an enormous impact on how the communications industry actually makes money in the future and, just as importantly, on how it will protect its traditional revenues from attack. While mobile service providers in particular have always been sensitive to the loss of revenues to third parties, they must reinvent themselves to add more value to transactions in a variety of ways to help both themselves and their business partners.

Opening up the value chain

IMS has a unique ability to open up the value chain while simultaneously allowing the network operator to retain control of certain essential value-added functions. Telcordia, in fact, is concentrating much of its development on these 'value-added' functions, often referred to by the OMA as 'service enablers'. Since Telcordia believes in open ecosystems, and that a holistic approach is the best way to forward the objectives of IMS, we are actively contributing to the development of these specifications within the OMA and 3GPP.
IMS supports a significant number of value-added functions both within the network and also within the business models that are the justification for the current interest in IMS. These functions all share two attributes: they are common across most applications, and they are far easier to implement once within the network than hundreds or thousands of times within each application.
Firstly, there is presence and availability information; with the network knowing whether users are available for calls and what device they are using at that particular moment, it becomes possible to offer premium services to both private and business customers that ensure that calls or transactions always get through to the appropriate person or device.
Then there is location information; if the network knows where the user is, a broad portfolio of location-specific services and applications can be offered to the customer, extending to special promotions in shopping areas or traffic and weather alerts. Here, IMS can be combined with the availability of GPS and other positioning technologies to finally make location-based services a commercial reality.
Security and risk management are also both important to operators; sensitivity to security vulnerabilities is finally becoming a serious issue for business and private users alike, as viruses that target mobile devices and the impact of Denial of Service (DoS) attacks hit the headlines. For end users, the emergence of IMS means they can use the network in the knowledge that their data is secure from prying eyes.
Shared (or 'common') user data and profiles represent a valuable opportunity to simplify the development and usability of new services; service providers can extend and leverage the increasingly wide range of customer information that will be needed to support advanced services. It also opens the network to end users to express their own service preferences, order their own subscriptions and appear on specific group lists.
Then there's flexible charging. The highly diverse, multimedia nature of NGN services is about to revolutionise billing models, with service providers requiring almost infinite flexibility in how they package and price their services to meet ever smaller market niches and support far more complex relationships with third parties. IMS, with its well-defined session model and value-added functions, gives operators both ways to add value and flexible methods to charge for services. Charging may, in fact, be the most significant difference between IMS networks and other, 'dumber' IP networks.
IMS provides a framework to simplify all of these procedures, making it easier to associate particular service quality parameters with specific customers -- to create gold and silver service grades, for example -- or to rapidly create cross-charging and payment relationships with partners, such as TV programmes for televoting, or the original owners of brands and content, such as film studios.
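As a purely illustrative example of how a session-based charging rule might combine a tariff, a service grade and a partner revenue share, consider the sketch below; the rates and multipliers are hypothetical.

# A toy rating rule for an IMS session, assuming invented tariffs.
GRADE_MULTIPLIER = {"gold": 1.5, "silver": 1.2, "bronze": 1.0}

def rate_session(service: str, seconds: int, grade: str,
                 partner_share: float = 0.0) -> dict:
    """Price a session by duration and grade, then split revenue with a partner."""
    base_rate_per_min = {"voice": 0.05, "video": 0.20}.get(service, 0.10)
    charge = (seconds / 60) * base_rate_per_min * GRADE_MULTIPLIER[grade]
    return {
        "charge": round(charge, 4),
        "partner_settlement": round(charge * partner_share, 4),
        "operator_revenue": round(charge * (1 - partner_share), 4),
    }

# A five-minute 'gold' video session with a 30 per cent content-partner share.
print(rate_session("video", seconds=300, grade="gold", partner_share=0.3))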

Monetising every transaction

It's arguable that IMS could be interpreted as the 'IP Metering System' (IMS!), given its ability to track and charge for every conceivable transaction that takes place, irrespective of whether this is via standard credit or prepaid systems, or through service-specific micro-payments. This helps communications service providers and network owners to retain their market dominance and opens up bandwidth for the flow of money, as well as data, once the services themselves have been created. Of course, the next-generation charging systems must take advantage of IMS' inherent flexibility -- legacy systems in IMS will simply become barriers to innovation.
In fact, service creation was often a problematic area in the days of Intelligent Networks, but IMS allows the drag-and-drop creation of new, multi-network and multi-protocol services and applications by non-technical staff, driving the rapid, low-cost prototyping and introduction of new services. In turn, the revenue opportunities created by IMS become readily apparent to the operator.
That is because, with IMS at the heart of both the network and the service environment, data can be readily gathered from a multitude of network elements, end devices and third parties to produce clarity in billing and associated reconciliation procedures.
While much of the IMS flight path remains clouded in commercial confidentiality, it is becoming clear that there are two areas where it could have a major impact: fixed-mobile-wireless convergence, and discovering more about how subscribers use services.
Firstly, in the area of convergence, major operators such as BT are examining the role of IMS as a tool to offer truly 'joined up' services to customers, allowing them to roam freely between fixed, WiFi and cellular networks both at home and in public. One important issue here lies in allowing customers to connect in the most appropriate way for the service required at the optimum cost -- but through a single account and customer profile.
Secondly, in terms of discovering more about service usage, and adding to work done by the OMA, the part of IMS dedicated to supporting customer details and preferences might be accelerated to enhance data, messaging and virtual operator services by providing a much richer, more personalised experience. This in turn, can help drive take up of advanced services by online communities, increasing both revenues and brand loyalty.

The ultimate implementation

However IMS is ultimately implemented by each individual service provider, it's clear that its impact is going to be truly transformational in the business process and operations areas. It is, however, going to bring a need for reassessment in several key areas.
With quality of service, it will no longer be possible to take a simple, deterministic view of service quality based purely on a few connectivity-based, network-centric parameters. This prescriptive approach will have to be replaced by far more flexible and dynamic methods that can aggregate multiple sources of quality of service data, based more on the customer's actual experience of a transaction.
Quality of service issues are magnified by the fact that communications will increasingly take place across different commercial and technological domains, only a few of which may be actually owned by the primary service provider. Protecting both the integrity of the service and of the service provider, without any direct control over the entire length of the value chain, will present challenges for technologists, business development specialists and lawyers.
Now that the long awaited 'network of networks' looks like it's finally emerging from the complex cat's cradle of co-existing and often competing technologies and protocols that have grown up in recent decades, it's important to remember that IMS is there as a true business enabler.
The invention of money revolutionised entire economies and social structures, replacing the inevitable time and space limitations imposed by bartering. By comparison, IMS is set to open up the communications environment to new ways of doing business. In the process, new value -- and new wealth -- will be created.

Grant Lenahan is Vice President, Wireless Mobility, Telcordia, and can be contacted via tel: +1 732 699 4894; e-mail: glenahan@telcordia.com 

Internet Protocol Television (IPTV) looks set to revolutionise the world of entertainment. However, like any new technology, there are roadblocks to negotiate before profits can be realised. And one of the biggest challenges to IPTV is the flexibility of back office applications such as OSS and billing systems. Simon Gleave looks at the issues

Just when the telecoms industry has come to terms with the billing challenges of next generation networks, along comes a new technology that presents equally tough obstacles. This innovation is IPTV, the television service that provides real-time interactive experience for the viewer. 
Downloading a favourite film, music video or sitcom whenever you want it will become possible with this entertainment system, not to mention other interactive delights such as home shopping, television gambling and video games on demand, brought right into the living room.
In short, television will evolve into something that its inventor, John Logie Baird, never dreamt of. It will offer the variety of on-demand content that is currently associated with the Internet, providing consumers with the opportunity to watch whatever they want to, whenever they want to.

Technology vs. delivery

Technically, IPTV is very different from any service that telcos have previously deployed in their networks -- for a number of reasons.
Firstly, the consumer data created by the IPTV network is immense, which presents both a challenge and an opportunity for the CSP. Capturing and correlating the disparate records being fed into it requires an advanced mediation system, but once this is in place it supplies the CSP with detailed information on its user base -- what they watched, which adverts they clicked on or skipped -- providing valuable information with which to attract advertising revenues and upsell promotions for its own products (a simplified sketch of this correlation step follows the third point below).
Second, IPTV is enabled by large amounts of sophisticated software, as opposed to the hardware used for traditional voice services. This provides a higher level of abstraction for the service, which enables the CSP's products to interact more intelligently with the network and thereby create more advanced services.
Third and most importantly, IPTV differs from satellite and cable television in one very important way -- it is not a unidirectional broadcast technology where everyone receives the same signal. Although it does use IP multicast for standard channels, the video head-ends are in direct contact with the viewer's set-top box and can react instantaneously to requests. This gives the user much more control over what is viewed, when and how. For example, a video-on-demand title would start instantly, instead of being scheduled at regular times. Enabling these facilities, however, requires a BSS/OSS system that can cope with this change to on-demand services.
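Returning to the first of these points, the sketch below illustrates, in deliberately simplified form, what the mediation layer's correlation step might look like: raw set-top box events are grouped per subscriber into a usage summary. The field names and event types are invented for the example.

# A minimal sketch of mediation-style correlation: raw set-top-box events
# are grouped per subscriber into a usage summary. Field names are invented.
from collections import defaultdict

events = [
    {"sub": "A123", "type": "watch", "channel": "sports", "secs": 1800},
    {"sub": "A123", "type": "ad_click", "campaign": "car-promo"},
    {"sub": "B456", "type": "watch", "channel": "movies", "secs": 5400},
]

def correlate(records):
    summary = defaultdict(lambda: {"viewing_secs": 0, "ad_clicks": []})
    for r in records:
        if r["type"] == "watch":
            summary[r["sub"]]["viewing_secs"] += r["secs"]
        elif r["type"] == "ad_click":
            summary[r["sub"]]["ad_clicks"].append(r["campaign"])
    return dict(summary)

print(correlate(events))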
Viewers can also access various types of media options by using the television remote to send control commands to the set-top box. 
At the same time, there is also a need to negotiate franchising and content rights to ease the way into the film, game and television markets and, most importantly, to ensure the flexibility of the OSS and billing systems of the telecoms operators scrambling to secure key market share.

Franchising woes

Much like 3G, IPTV involves everyone in the entertainment and communications sector, including television, gaming, music and movie studios, cable companies and telecoms operators. With so much at stake, it is not surprising that there is a great deal of interest in this new technology. In Europe and the US -- two markets where IPTV is expected to attract over 20 million users by 2008 -- the telecoms industry is sharpening its knives to get a piece of the action. The time is almost upon us, they believe, when compression technologies and broadband penetration will allow the interactive TV (iTV) promises of the 1990s to be finally realised. And they are prepared to do battle with entrenched cable and satellite companies to win over the consumer to this brave new media world.
One thing is clear -- it won't be easy. Who gets the rights to broadcast certain content is still in the balance.  In the US alone, there are more than 30,000 franchise areas for IPTV, each of which can take close to a year to negotiate entry. At this rate, it could take years before all licenses are procured, and there is likely to be a bidding war in the meantime.
Another major problem is exactly who delivers 'must have' content. The biggest consumer attraction of IPTV is the ability to watch content on demand. Traditionally, aggregators of must-watch content made their money by showing the content, financing this through advertising or licence fees. But what is worrying many a television channel executive today is that content producers are exploring options for delivering their content independently. They could achieve this by setting up IPTV delivery sites of their own, enabling consumers to go 'straight to the source' for content and making the old networks effectively redundant. Accordingly, many of the Hollywood studios are reluctant to sign up to any favourable distribution deal for their films on IPTV. And there is a danger that companies hoping to get ahead in the IPTV world will bid over the odds for the content required to attract consumers.

OSS and billing challenges

Franchising and content rights are one IPTV headache.  Another is billing. This is one of the biggest challenges faced by any player entering the IPTV market. At the heart of any successful media roll-out is a flexible OSS back office system, and IPTV is no exception. 
Billing for cable has always been a straightforward process in the past, but the addition of telecoms operators venturing into the IPTV space means the billing process suddenly gets more complicated. It must now contend with the requirements of 'triple-play' billing with all the tracking and reporting challenges that brings.
With IPTV, telecoms operators will need to acquire the means to identify the users of the service and all the vital statistics associated with billing -- such as what programmes the viewer watched and for how long, what their viewing patterns are in terms of hours watched and when, and the content accessed -- which can then help drive targeted upsell marketing campaigns. In a country with a franchising model, the payment requirements for each franchise, or for content in a given area, also need to be supported.
Another billing challenge is trying to find ways to deal with the endless number of partnership agreements that are created to provide the content necessary for IPTV and to manage the thousands of franchises that go with it. Settlement between partners is a complex process when just a handful are involved -- with the potential for thousands, it becomes a problem on an entirely different scale, and IPTV providers will need to address these issues with new partner management and settlement systems.
Rating engines have to be adaptable to support different fees and local taxes, in addition to the various channel line-ups in the service bundles. One household, for example, might be given free access to a movie network, while another could be offered free children's programme viewing. You could also have pre-paid wallets (for VoD or mobile top-up purchases) operating alongside a standard post-paid monthly flat fee.
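A toy example of the kind of rating rule this implies is sketched below: a video-on-demand purchase is taxed, drawn from a pre-paid wallet and any shortfall rolled onto the monthly bill. The prices and tax rate are, of course, hypothetical.

# Sketch of charging a VoD purchase against a pre-paid wallet, falling back
# to the post-paid bill when the wallet runs dry. Figures are invented.
def charge_vod(price: float, wallet: float, tax_rate: float = 0.20) -> dict:
    total = round(price * (1 + tax_rate), 2)       # local tax applied at rating time
    from_wallet = min(wallet, total)
    return {
        "total": total,
        "from_wallet": from_wallet,
        "to_monthly_bill": round(total - from_wallet, 2),
        "wallet_remaining": round(wallet - from_wallet, 2),
    }

print(charge_vod(price=3.99, wallet=2.50))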
The variety of these issues now means that telcos are having to start talking to people they usually don't deal with, such as Hollywood studios and advertising agencies. IPTV providers must be able to cater to these players and their individual requirements and demands.
Billing systems should, therefore, provide specific rates for specific classifications. Flexibility is key, so that every IPTV service is charged fairly and in real-time, according to where and what service is enabled or downloaded.

New opportunities for OSS players

IPTV provides tremendous opportunities for billing providers, especially those that can offer an adaptable BOSS solution -- tying an intuitive customer interface with real-time rating, charging and activation on demand capabilities. Content partner management software can be installed to support the different rating requirements for the various partnerships agreements presented by IPTV. Meanwhile, a multi-service mediation system can be installed to manage the abundant levels of customer data produced by the IPTV network.
The opportunity for the BSS/OSS players lies in four main areas:
*  differentiating IPTV from current TV services
*  supporting on demand services
*  stimulating consumer spend
*  stimulating advertising spend
Achieving these aims means investing in the right back office systems and, in the process, making OpEx savings by helping to automate key customer support processes  -- e.g. buying a new service via the TV rather than the contact centre.
Like 3G, the opportunities to make money from IPTV are numerous, but without the right billing systems in place, it cannot realise its maximum potential as the biggest money-spinner in the history of television.

Simon Gleave is IPTV Market Manager at Intec.
www.intecbilling.com

A key component of converged multi-service networks is intelligent functions for bandwidth management to ensure Quality of Service (QoS), says Daniel Hydén

Operators worldwide are now building Next Generation Networks (NGN) for multi-services, where telephony, data and video co-exist in a shared infrastructure based on IP technology. The market demands an open, standards-based infrastructure that allows easy and quick introduction of new services to rapidly respond to customer demands. A single, converged IP network means considerably lower operational expenditures (Opex) as well as decreased capital expenditures (Capex) over time. A key component in converged multi-service networks is intelligent functions for bandwidth management to ensure Quality of Service (QoS).
State of the art bandwidth managers are a pre-requisite for guaranteeing quality and to meet the goals for decreased Opex in the next generation of multi-service networks.

Rationale for bandwidth management

A cornerstone of converged multi-service networks is the sharing of a common network infrastructure, and since IMS architectures promote deployment of a variety of applications all sharing network resources, user control is of paramount importance. An NGN must be able to control which user, which application and which service is utilising which part of the network at any given time. Without such control, facilitated by a carrier-class bandwidth manager, bandwidth requirements will be enormous and QoS will be dubious. 
When introducing NGNs, a variety of technical approaches can be utilised for provisioning of bandwidth reservations and QoS, but only multi-service, multi-technology bandwidth managers with session-based resource reservations meet the demands on operational efficiency.
For instance, a network can be over-provisioned and statistically dimensioned to a level where bandwidth should not be a bottleneck. Still, there are no guarantees and it would not be cost efficient in the long run. Over-provisioning is also not a solution in a ubiquitous network where users and applications are considered to be nomadic/mobile and where the need for network resources will vary in space and time in an unpredictable pattern.
Although call counting can be used for single applications, it is not efficient in a multi-service environment, where it leads back to a 'stove pipe' network with capacity provisioned for each service. Nor is call counting viable for an operator providing network resources and customer access in a wholesale business model, with a multitude of service providers seeking open interfaces for fast service introduction.
To meet the goals of an open infrastructure promoting fast introduction of new services with substantially decreased Opex compared to the PSTN, operators undergoing the transformation to next generation multi-service networks are introducing bandwidth managers for session-based network resource control in their new infrastructure.

Carrier class bandwidth manager

One example of a carrier class bandwidth management system is the Operax Bandwidth Manager which is an end-to-end control system aligned with standards work from ETSI-TISPAN, MSF, 3GPP and ITU-T. It addresses bandwidth and policy management for broadband VAS and the migration of traditional services such as PSTN to multi-service IP networks. Such a bandwidth manager operates in multi-service, multi-technology, and multi-vendor networks and simplifies network management, increases network utilisation and avoids resource partitioning.
With this kind of bandwidth manager, the multi-service infrastructure is shared under the authority of a single, unified resource controller. Any application service can access the exposed QoS features and benefit from network VAS. There are many examples of services supported by bandwidth managers, including QoS-enabled VAS such as derived voice and various forms of video, generic Application Driven QoS (ADQ) capabilities for wholesale QoS services, and PSTN replacement. Bandwidth management enables a PSTN network operator to migrate its existing PSTN infrastructure to an IP-based network while providing the network resource control functionality needed to guarantee service level and transport quality on the converged multi-service network. This ensures QoS end-to-end across core and access networks, necessary for the delivery of voice and multimedia services.
Additionally, this type of bandwidth manager is equipped with a generic data structure for the representation of multi-technology network resources. This data structure, known as the resource map, models resources for virtually any network technology, including ATM, Ethernet, IP/DiffServ and MPLS.
NGNs require vendor-independent inter-working solutions and open standards, and by choosing an open architecture system for bandwidth management, multi-service networks will support the end-to-end QoS required for carrier-class voice services while delivering the remainder of their existing services at a lower cost. Furthermore, the separation of networks and applications implemented with a bandwidth manager, and the open architecture of some bandwidth managers, make it possible to use multiple vendors for network equipment such as routers, switches and BRASs, as well as application functions (e.g. softswitches).
A flexible bandwidth manager is the basis for cross-service resource sharing, both in real-time, driven by applications and on slower timescales, driven by management. This means that resources temporarily not being used for one service can instantly be utilised by another and that resource allocation between sets of services can easily be changed with minimal or no network reconfiguration.
Another feature of a carrier class bandwidth manager is that it effectively implements a bandwidth management structure capable of covering the entire network, from access lines, via the backhaul, to the core. Furthermore, it represents all contention points on the core, backhaul and access. This detailed resource representation provides for high-precision control, which leads to guaranteed QoS, and is also known as path-sensitive admission control.
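To make the idea concrete, the sketch below shows a much simplified, path-sensitive admission check over a resource map: a session is admitted only if every contention point along its path has headroom, at which point the bandwidth is reserved. The link names and capacities are invented.

# A toy path-sensitive admission check over a resource map.
# Link names and capacities are invented for illustration.
capacity_kbps = {"access-line-42": 2000, "backhaul-7": 50000, "core-1": 1000000}
reserved_kbps = {link: 0 for link in capacity_kbps}

def admit(path, demand_kbps):
    """Admit the session only if every contention point on the path has headroom."""
    if all(reserved_kbps[l] + demand_kbps <= capacity_kbps[l] for l in path):
        for l in path:
            reserved_kbps[l] += demand_kbps
        return True
    return False

print(admit(["access-line-42", "backhaul-7", "core-1"], 512))   # True: bandwidth reserved
print(admit(["access-line-42", "backhaul-7", "core-1"], 1600))  # False: access line lacks headroom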
The notion of a bandwidth manager has been discussed at length in the Multi Services Forum (MSF), where it is a clearly identified component in the multi-service architecture. Operax has actively contributed to the MSF for some time and the Operax bandwidth manager was the one used at the MSF global interoperability event (GMI) during 2004.

The NGN market

The NGN market is set to grow at a fast pace over the next few years. There are currently several high profile projects in which bandwidth management is a major consideration. These include BT's 21st Century Network (21CN), intended to create and enable the infrastructure for the growth of the UK telecommunications industry. It is set to transform legacy/PSTN networks, delivering increased customer choice and control and, over the next five years, will transform BT's business and cost base, removing duplication across the current multiple service-specific networks by creating a single multi-service network.
Another example is South Korea -- the world leader in broadband usage -- which is now building an NGN, the Broadband Convergence Network (BCN). Korea Telecom (KT) is at the forefront of the technology's development and is currently testing a carrier class bandwidth manager. Around the globe, operators are now kick-starting their network transformation into all-IP networks. For emerging markets like China and Eastern Europe, the new technology provides major opportunities to leapfrog into the latest technologies, with all the benefits a modern open infrastructure enables in service introduction and operational efficiency, underpinning overall market growth and creating a thriving economy.
The convergence of services and terminals also means new business models and a new approach to infrastructure for both mobile and fixed operators. That is why fixed network operators are looking at the IMS architecture when moving into NGN; Next Generation Networks are simply Fixed Mobile Convergence (FMC) in the eyes of a fixed network operator, while at the same time mobile operators are closely following fixed multi-service network convergence.
Regardless of whether it is a mobile or fixed operator moving into a ubiquitous, converged all-IP based infrastructure, the bandwidth manager is the essential component for providing multiple services in a highly competitive multi-technology, multi-vendor network.

Daniel Hydén is VP Marketing, Operax

A study published by Deloitte Consulting in April this year found that 70 per cent of participants had bad -- even costly -- experiences when it came to outsourcing. European Communications asked John Leigh, BT Global Services' head of marketing for outsourcing services, for the secret of getting it right

When John Leigh spoke at Gartner's 2005 Outsourcing and IT Services Summit in April about the practical issues that make the difference between success and failure of an outsourcing project, he was greeted by a keen audience, hungry for information. The debate at the conference wasn't about whether or not to outsource; rather, it was about how best to do it and who to partner with to ensure genuine customer success.
"We are entering a new era in business -- something we, at BT, are calling the digital networked economy." Leigh explains. "It's a relatively new way of working, brought about by the convergence of IT and communications technology on one hand, and the globalisation of business on the other. In this new economy, we face a global market where vendors are going to have to be able to sell services across the world at the best price point possible. One way to make this work is to concentrate on what you do best and outsource the rest."
BT seems to be thriving in this marketplace so far, with a number of new outsourcing contract wins under its belt. The mix of networks and IT services it offers is proving a big sell with companies. Recent headline deals have included a 'blockbuster' $3 billion deal with Reuters; a £25 million three-year IP VPN project with Visa across Europe; a seven-year contract with Unilever managing its global communications infrastructure across 104 countries, including the development of new technologies; and the upgrade and management of Manpower's worldwide data network encompassing 3,200 sites in 63 countries.
The company also practises what it preaches. As Leigh says: "We have done it for ourselves, so we know what we are talking about. We have outsourced call centres to developing countries. We have outsourced our HR administration to Accenture and our payroll and purchasing functions to Xansa. If you ask us if we believe in outsourcing, we have solid business evidence to say we get value from it ourselves, both as a user and a vendor. And that's pretty unusual."

Down-to-earth success

So what's the secret of success? Leigh, who has 30 years' experience in the computer services industry, working for Meta Group and Gartner before joining BT, offers a number of down-to-earth pointers.
Firstly, he says, when you go from doing things yourself to being a company that outsources, you have to change your skill set. He describes this as giving up managing assets and starting to manage results. He then explains what he means in an example that demystifies the usual approach to the subject:
"Imagine that you and your partner used to clean your house yourselves. You're busy people, so you decided to hire a cleaner to do it for you. When you did the work yourself, you didn't really have to think about it that much, because you had years of experience in the area and knew exactly what to do. Now you have to explain exactly what you need cleaning, how you like it to be done, how often, using what products and even how shiny you want it.
"Instead of buying cleaning products -- in effect, managing the assets used to do the job -- you've started defining what you want done -- that is, managing the results. And this is the issue. Most of the confusion around outsourcing is based on the fact that many purchasers don't really accept -- on an emotional level -- that a service provider doesn't instinctively know what they want. And some vendors fail to make the fact plain. Indeed, customers find it quite irritating when a vendor claims to be able to run their processes more efficiently than they can themselves."

Pointer 1:
Understand exactly what services you want, the quality at which they need to be delivered and how to measure both.
Another stumbling block, according to Leigh, is demonstrated by research BT recently conducted with Industry Direct Ltd. Summarised in a white paper titled Strategic outsourcing to advance the organisation, the research found that cost control remains the primary objective for organisations embarking on outsourcing agreements.
Leigh explains: "The issue is not that cost is unimportant -- we in BT know that reducing the costs of ICT is a core skill -- but that most contracts are overly focused on direct cost, not value. And value is much more difficult to quantify. If you were to focus on the direct cost of e-mail, for example, you simply wouldn't have it. But e-mail has a value that can't be expressed in direct cost -- for example, in allowing things to happen much faster."
Pointer 2:
Organisations should quantify the performance of their IT systems and make sure they understand the balance between direct cost and value. Then value can be built into the contract.
"I tell my clients that, if you want to cut your costs, you should take a benchmark, decide what industry best practice performance would be and ask us to deliver to that standard," Leigh notes. "But don't just tell your supplier to take 30 per cent out of your direct costs. You might not have that kind of slack, and both you and the supplier could waste a lot of time trying to reduce the cost of something you have already optimised. You don't want to do that. Services are unusual. Their costs are embedded in the proposition, so there are only a limited number of ways to reduce them. The easiest is probably the least acceptable: to reduce the resources used to deliver the service. Other options include consolidation, standardisation, the use of new technology and offshore provision."
Pointer 3:
Know with great clarity what strategy the customer and vendor will apply together to reduce the operating costs.
So bearing these first three factors in mind, what else is important? "My fourth pointer," says Leigh, "is to reduce the risk as far as possible. Take scale, for example. Is the company you are looking at big enough to fulfil your needs?"
Pointer 4:
Reduce the risk by making sure the supplier has the scale and resources you need.
 "Sure, if you are outsourcing your only helpdesk to an IT shop around the corner, this may not apply,"Leigh notes. "But on a larger scale, any outsourcing project that includes a change management element carries risk of failure. We know that, and our job is to reduce the risk for our customers. But you know by our very size, scale and scope that you are taking a lower risk. Size mitigates risk -- just as it helps to have access to a powerful engine in your car. You may not need it often, but it certainly helps when you need to get yourself out of a difficult situation."
His fifth pointer, he says, is to look at depth of capability: "What I tell clients is that they should look at the investment model of the company they are looking to do business with. Where are they investing? If you are looking at buying a network, you can see that BT is planning to invest over £2 billion a year building a state-of-the-art IP network to support its customers across the world. If you visit BT's technology centre at Adastral Park, you can see the depth of our R&D and that we are people who are spending serious money advancing our network capability. Some of our  competitors may be doing the same, but not all outsourcing companies are."
Pointer 5:
Check to see where the supplier is investing. Is it in areas that will help them deliver for you?

The deal clincher

And finally -- and this, he says is the real clincher -- you need to be clear how your service provider justifies its efficiency.
"If I go to a customer saying I can do this better than you can, I'd better have real evidence to back up my claims.
"For example, if you asked BT to show you why its network operation centres are more secure than your own, we could take you through our approach in detail. We could show you how our security works and explain why, because of what we invest in this area, we lose only a tenth of what most similar companies would lose through hacking and unauthorised access.
"Or if we are saying we can run your call centres more cheaply, I'd be able to show you how our CRM methodology allows us to maximise the impact of technology investments. If you wanted to offshore as well as outsource, I'd be able to show you our capability in India, and I'd be able to show you our capability to deploy customer data more effectively through a wide area network and our unique management package."
Pointer 6:
Ask to see evidence that backs up the supplier's claims.
In addition, Leigh says, it's vital that you choose a vendor that fits your needs: "Most companies just choose from the top five or six brands, so they start with a list that contains unsuitable partners from the outset. I think companies should start by taking a serious look at the outsourcing market and making sure they really understand the value propositions of the main players.
"It may sound simplistic to say you need to know your own needs and wants, but often they can be difficult to reconcile within the political environments of large corporates. In fact, often the most difficult thing to do is to agree a set of expectations with all the stakeholders. You know what happens -- the finance director wants the lowest cost, the business unit leaders want better service, the marketing director wants access to new know how, etc, etc."
In conclusion, he notes: "Outsourcing isn't about perfection. But what you do have to be is better. People are looking for improvement, not nirvana.
"So what I say to my clients is: focus on the two or three things that will really make a difference. If cost is one of them, your supplier needs to show that its business model will demonstrably deliver better costs, and not just offer a facile statement of intent. Then make sure all your stakeholders understand what they are going to get before negotiating to get those things -- no more, no less. And once you've done that, set measurable targets for the improvements you want so you can check you get them. It may not be simple but it is straightforward. A science, perhaps, but not a black art."                                                  n

Penguins, bandwagons, new-technology hype -- what are the factors at play behind the introduction of new products and services, asks Nick Outram

Why is it that some new products and services are billed as the next big thing, only to fade into obscurity, re-emerging years later and taking the market by storm? Take the portable MP3 player. The technology was certainly available in the early 2000s and received a lot of media attention, but only later did the Apple iPod capture a significant fraction of the market amidst a frenzy of buying activity. This article looks at some of the reasons behind this phenomenon and explores what strategies may be employed to help speed products past the initial hype and into the fast take-up phase of product roll-out.
Firstly, let me introduce you to two concepts: penguins and bandwagons. You will probably be aware of a bandwagon effect -- everyone jumping on to a 'new and proven' idea that captures the imagination and provides real value and benefits to the consumer. The bandwagon phase of product adoption corresponds to the sweet spot for sales. Most -- if not all -- of the product issues have been ironed out and the product is flying off the shelves. This is a very good time to be a Product Manager!
The penguin phase comes before this, and is so named because the large majority of potential buyers are like penguins standing on an ice floe -- unsure whether to go into the water or not. Although most of the penguins realise there are advantages to be gained from getting their feet wet, none but the most adventurous want to be first in, as dangers may lie therein. In the wild, the danger is clearly that they may be eaten alive if first in.
Purchasing a new product obviously carries lower risks than this, but humans can go to extraordinary lengths to ensure that they feel good about their purchase. Books have been written about the psychology of buyers and complex models of buyer behaviour created, but ultimately it boils down to the basics of whether a person is happy to purchase an item or not. If someone perceives value in a product then -- when all barriers to the purchase have been removed -- the person will buy. In the wild, penguins will wait until they see a certain number of other penguins in the sea before they venture in, and humans also exhibit this wariness. Every potential buyer has a different threshold of purchase, just as every penguin has a different threshold of fear.
The first set of people to buy can be critical to the success or failure of a product for the simple reason that they can provide a seed from which to grow the support that will encourage people with higher purchase thresholds to take up the product. Often, these early adopters are willing to put up with a certain level of product teething troubles and pain because they realise the immaturity of the product yet can grasp its longer term benefits.
Gartner Group defined five hype phases for a new technology or product. These are:
1. On the rise
2. At the peak
3. Sliding into the trough
4. Climbing the slope
5. Entering the plateau
Applying the penguin analogy I would argue that the first two represent the period during which the fearful penguins rule -- only towards the end of the third period do the majority of penguins begin to wake up and realise that the benefits of diving in outweigh the disadvantages.
Thun, Grobler and Milling of Mannheim University have attempted to model a generic product that exhibits positive network effects using a system dynamics approach [see reference]. In this approach, the overall product purchase behaviour of individuals is modelled over time; certain key input variables can then be tweaked to see what effects differing conditions have on the overall outcome. For example, the level of positive product recommendations between 'early product adopters' and the more cautious 'late adopters' can be factored in to find out what effect this variable may have. Also, once the feedback and 'inter-penguin chatter' has begun, another variable can model how rapidly this spreads through the 'penguin population', creating the bandwagon of purchases. While the models they present tend to be fairly academic and generic, they can provide valuable insights into methods for speeding up product introduction and increasing adoption rates.
In Figure 2, the solid lines represent a fairly typical product adoption and installed base S-curve, as found in most textbooks on product marketing. The dashed lines show the outcome of adding strong penguin ('P') and bandwagon ('B') effects to the normal product adoption model. The most noticeable feature of the strong PB model is the much-delayed onset of a then much stronger take-up phase. For certain products, one could argue that the original 'sliding into the trough' phase in the hype curve is a consequence of the slow initial adoption rates caused, in turn, by the mass of bystander 'penguins' waiting for the all-clear signal before diving into the new product.
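To give a flavour of how such a model behaves, the short sketch below simulates adoption with an advertising effect plus word-of-mouth that only switches on once the installed base passes a 'penguin' comfort threshold. Every coefficient is invented; the only point is to show how a delayed but much steeper S-curve emerges.

# A toy discrete-time diffusion model in the spirit of the paper's approach:
# advertising seeds early adopters, but word-of-mouth (the bandwagon) only
# kicks in once the installed base passes a 'penguin' threshold.
# All coefficients are invented for illustration.
POP = 10_000          # potential buyers
AD = 0.002            # adoption per period from advertising alone
WOM = 0.35            # word-of-mouth strength once the bandwagon rolls
THRESHOLD = 0.05      # fraction of the population the penguins wait to see

def simulate(periods=60):
    installed = 0.0
    history = []
    for _ in range(periods):
        share = installed / POP
        wom = WOM * share if share >= THRESHOLD else 0.0   # penguin effect
        new = (AD + wom) * (POP - installed)
        installed += new
        history.append(round(installed))
    return history

curve = simulate()
print(curve[::10])    # delayed, then steep, S-shaped take-up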
There have been many situations like this in the past: consider the Sony Betamax vs VHS 'war' of the 1980s and its more recent re-run, Sony's Blu-ray vs HD DVD. When one of these products reached a critical 'threshold of faith' that it would not be a dead technology within a couple of years, the perceived value of the product's 'technology network' increased dramatically, leading to a winner-takes-all outcome as the public jumped onto the VHS bandwagon. Incidentally, I predict that this time the winner will be Blu-ray, simply because of the clever strategy of incorporating a Blu-ray drive into every Sony PlayStation 3 to be sold next year, dramatically increasing the network of end users and consequently the overall perceived value of this product's network. I also expect to see heavy subsidies of the PlayStation 3 as Sony senses the much greater opportunity of getting everyone -- not just PlayStation 3 owners -- onto the fast growth phase more quickly, in order to resell its content back catalogue in the new high definition format.
As mentioned previously, the class of products that seems to exhibit some of the strongest effects is communication products. They also seem to exhibit the hype phases, as we all love thinking and talking about a new piece of technology -- and many of us like to feel the pain of being first in the water! One new technology still in the development phase is Push To Talk (PTT). It has so far exhibited all the trends of the classic Gartner hype phases and the consequent delayed uptake outlined above. PTT is meant to turn your phone into a walkie-talkie, but you need friends to talk with and they need a PTT-enabled phone. This is a classic chicken-and-egg situation, and it gets worse -- so far only a small subset of phones and networks support the product, and they are pretty much incompatible. It is no wonder that this service, once hyped as the next big thing, is wallowing in the trough along with so many other mobile products: video telephony, mobile instant messaging and content streaming, to name a few. Only when the service is standardised, widely available and the public perceives value in it will it start to take off -- but take off it will and, at some point in the next five years, the PTT bandwagon will begin to roll.

Shortening product take-off

Based on the analysis of their research model, Thun, Grobler and Milling's paper considered ways to shorten the time before a product takes off. The results were:
1. The classic method of increasing the public's exposure to a new product -- advertising it to make sure people know of its existence.
2. Increasing the contact rate between non-users and users who can convince non-users to adopt. Viral marketing is one mechanism that springs to mind here.
3. Focusing on augmenting the pool of interesting communication partners of every user in the installed base -- examples given are:
a. Marketing measures to make users communicate more with each other and formerly unknown people -- e.g. generate communities of users
b. Technical advances that make it possible to use the product in new ways, increasing utility or communication with new people (e.g. SMS) or more than one partner (e.g. conference, PTT)
c. Extending the installed base indirectly by creating compatibilities with existing products (e.g. connecting Mobile Instant Messaging to PC based Instant Messaging).
Furthermore, from general experience, take a fresh look at the service from the consumer perspective: 
1. Is the product usability maximised? -- Think of both its utility and usefulness.
2. Are you continuously enhancing the product with feedback from the end-user?
3. Is the end-to-end process of purchase as simple and intuitive as it could be? If you are a Product Manager, become a mystery shopper for your product.
4. Have you overcome all possible barriers, both psychological and physical, as to why a consumer should not buy your product? This could involve more detailed analysis of the end users themselves, not treating them as a homogenous mass but segmenting them better and understanding their real needs and wants in addition to where they perceive the most value lies in your product offering.
5. Are you reinforcing their choice to buy your product as a good one post-purchase, turning them from product penguins into product evangelists?
In summary, there are a great many factors that can affect a product's success in the market. Even when all the barriers to adoption are removed, a product may be slow to take off due to slow recognition of its benefits within a potential community of users. In addition to the basics of removing these perceived barriers to product purchase, things can be done to spread the word, increase adoption and raise the installed base.

Reference:
Thun, J.-H., Grobler, A. and Milling, P.M., 'The Diffusion of Goods Considering Network Effects -- A System Dynamics-Based Approach', Proceedings of the 18th International Conference of the System Dynamics Society, University of Mannheim.

Nick Outram, SopraGroup UK, can be contacted via e-mail at: noutram@sopragroup.com

    

 
