Sheer size alone is no longer the asset it once was in the ICT sector. That said, some industry giants do retain a dominant position – even as they step into the second century of their existence.

Interview – Joined-up thinking

One such organisation is NEC, the $42 billion market leader from Japan. While everyone knows its name – and has a broad idea of its size and scope – many Europeans may still not be aware of all its areas of expertise.
A solid global brand to be sure, with strong credentials in IT, communications, display technologies and identity management solutions, NEC has been an active player in Europe for over 30 years.
European Communications recently met the newly appointed Managing Director, NEC UK, David Payette, and Kevin Buckley, Director of Mobile Network Solutions Division, NEC UK, to ask them what future they saw for the European communications sector and, more specifically, what a changing NEC had to offer the region.

EC: David – to start off on a somewhat personal note, you took Asian studies and Japanese as a discipline at university. What attracted you to this subject, and did it affect your decision to join NEC?
DP: I felt early on that the interplay between Asia and the rest of the world would be a stimulating environment to work in, so after university I went to Japan in the early '90s and joined a Tokyo-based systems integrator. In a series of increasingly senior roles, I moved to EDS in Asia Pacific, then Lucent in Australia, and five years ago ended up here in Europe with Avaya, before being headhunted to this post only a few months ago. These international experiences have been both challenging and culturally enriching, and yes, the fact that NEC offers me the opportunity to put my Japan experience to use in this part of the world makes this new role a particularly attractive fit for me.

EC: And what are the specific benefits you feel that NEC brings to the market?
DP: For a start, we have a long history of bringing innovation and service excellence to Europe, and with that comes an emphasis on continuity and trust with customers and partners that can sometimes be hard to find in today's industry. This focus on long-term credibility is well received in the market, right down through the value chain to the end user. 
As an innovator, we're proud of the many market firsts we've achieved here in areas such as 3G networks and devices, and more recently with i-Mode, to name a few examples. We believe these experiences provide a platform that will help us go from strength to strength and deliver ever increasing value to our customers.

EC: It is widely recognised that much of Asia Pacific is well ahead of the West when it comes to innovation and deployment of advanced technologies. What cultural factors do you think influence this?
DP: When you talk about high standards of manufacturing and quality control, I suspect values and cultural factors are significant, particularly in countries such as Japan. But where innovation is concerned, I believe this is more a matter of basic economics. For example, it was only natural that the high population density you see in Asia's developed markets brought a high mobile subscriber base or 'teledensity' more rapidly there than in other parts of the world. Once this stage is reached, market forces relentlessly drive innovation through intensified competition and increasingly insatiable consumer appetites for more functionality. The game changes from simply laying the infrastructure and acquiring the subscriber base to building loyalty by delivering powerful data services to the handset and fortifying the network with higher technologies like HSDPA and IMS, which drive the necessary throughput and convergence.
We are seeing the same thing happening right now in the UK and throughout Western Europe. And, because NEC has always been a major player at the forefront of Asia's development, we're well suited to bring that experience and technology to Europe in a time-tested and proven manner. i-Mode, for instance, is one of the more recent examples of this.

EC: For certain, the NEC portfolio is extremely broad. Is it possible that it's too broad, or that you are trying to cover too much?
DP: We don't think so at all. Highly specialised companies and individuals will always be essential – and we partner with many of them, but I'd argue that there's also a growing requirement for corporations to take wider scopes, given the perpetually increasing 'joined-up' nature of the systems that support our societies, businesses and lifestyles.
The future of telecommunications is about breaking down boundaries and making the customer experience seamless. NEC sees the big picture, and our joined-up thinking makes us the first choice for service providers of all types around the world. The future for ourselves, our customers and our customers' customers will be about enabling communications, information and commerce anywhere, anytime. For me, being able to provide broad-based individual expertise across areas as seemingly diverse as supercomputing, identity management and wireless – areas that in fact increasingly complement the communications arena – combined with our ability to join disparate components together to achieve higher value, will play an important role in the industry. That's good news for us and our customers.

EC: Kevin, how do you see this 'joined-up' thinking coming to market, specifically in the European mobile sector?
KB: I think what David just intimated about continuity and breadth is very important. If you look at the forces that now characterise the European communications sector as a whole, then openness, interoperability and adaptability are key drivers. The network's no longer closed and, in the case, say, of a service provider offering a converged fixed-mobile portfolio of services, they not only need access to 2G, 2.5G, 3G, HSDPA, i-Mode, WiFi, WiMax, DECT and Bluetooth expertise, but also to particular applications such as mobile TV – an NEC speciality – as well as enabling technologies such as identity and security management.

EC: Investing in R&D in such a fast changing industry is like trying to hit a moving target while aiming through a fogged-up mirror. How does NEC plan its investment?
KB: NEC itself has an interesting balancing act to perform as it tries to help the whole industry move forward. On one hand, we do possess what might be recognised as traditional strengths. We invest around £4 million each and every day in R&D; we play a major part in all the relevant standards bodies around the world; and we also place a lot of emphasis on local partnering with both academia and business.
On the other hand we have the ability to move very quickly when conditions demand. That necessarily involves being as close to the customer as possible and eliminating the unnecessary formality that can slow down the all-important 'time-to-market' factor.

EC: What joint R&D programmes do you run in EMEA with commercial or academic partners?
KB: One good example of these NEC characteristics working together lies in some of our 3G work in Europe. In the background, we have Mobisphere, a long-term R&D partnership with Siemens aimed at driving 3G development work onwards. Out in the 'real world', we worked closely in the field with Siemens to deploy 3's UK and Irish networks in record time. At one point, that involved installing 140 cell sites each week.
DP: Particularly with a complex technology like 3G, the kind of end-to-end thinking from NEC that Kevin's just highlighted becomes highly relevant. NEC has been involved in nearly every first-stage roll out of 3G systems worldwide – we developed the world's very first 3G handsets and we were the first to bring both of these to Europe. But we also know that great technology in the network is only part of the answer. To maximise ROI, you also have to understand the wider behaviours of the customer and the whole service and applications environment as well.

EC: David, you mentioned i-Mode, which you are currently involved in bringing to Europe. What are the particular benefits it brings?
DP: i-Mode is a mobile experience that gives subscribers incredibly fast Internet access, among other things, starting in the 2.5G environment, with a rich and friendly user interface and content designed specifically with mobility in mind. This winning combination has delivered proven results in user take-up rates, which is key for service providers, who also appreciate the time-tested success and technological depth of the solution.
Our long history with i-Mode in Japan has enabled us to help service providers across Europe differentiate themselves and grow data services revenues by enriching the mobile experience for their subscribers. And, for the content and applications provider – irrespective of whether they're Disney or a developer creating ring tones – the i-Mode handset is a compelling route to market, and anything that improves the user experience is good for both them and the actual network owner. We've already installed i-Mode systems in Russia, Greece, France, Italy and Spain and have just completed supplying the core infrastructure and terminals for O2's i-Mode launches in the UK and Ireland.

EC: Kevin, you mentioned Mobile TV as one specific application that NEC is involved in – along with the higher speed broadband radio technologies like HSDPA. What's your take on how the industry can deploy these new technologies to best advantage?
KB: There's a lot of talk out there about how content owners or retail-oriented third parties like Google, Amazon and eBay might reduce the traditional telecommunications operators to utility status as lowest-cost, bit-pipe carriers.
Yes, the industry is transforming itself and yes, competition is now coming from a variety of new entrants who have radically different business models from the traditional telecommunications mindset. That said, the content and applications are still only part of the total offering.
If you don't have the appropriate handsets and devices for each emerging market niche, or network technologies that guarantee seamless delivery across multiple network platforms, then all the creativity of the content developers will count for nothing.
Consumers are becoming increasingly sophisticated buyers and, while parts of the communications industry will always reflect fast-changing lifestyle and gadget choices, fashion can also be a very fickle master to follow. Customers have long memories, with many buying decisions increasingly being made on word of mouth disseminated around on-line communities. At the risk of repeating ourselves, this is where NEC's corporate characteristics of continuity and extremely high reliability play out very well, amongst both consumers and our network owner customers.

EC: What are your strategic priorities?
DP: Our strategy is three-fold: combining innovation and service excellence in unique and creative ways; exposing our major customers to more of NEC's diverse portfolio; and providing service providers with not just the right technology-based solutions, but evolving value-add offers – such as professional services and end-to-end mobility offerings – that enhance their competitiveness and help them promote attractive data services and devices to their customers.
With that strategy comes an appreciation from NEC of the consumer experience and the cultural characteristics of local markets. You referred to Mobile TV, which is another good example of this. We are a leading vendor of digital TV transmission networks in Europe and, with the mobile standard (DVB-H) now ready, we can offer both terrestrial and mobile distribution via the same network.
The same is true for our experience as a leader in microwave radio, ensuring that all the new high bandwidth variants of existing technologies – such as HSDPA, HSUPA and WiMax – perform out in the field in Europe with the same consistency and quality of services that they've shown in the lab and in the market in Japan.

EC: Do you think that traditional telecommunications business cultures are up to promoting content and applications as well as basic connectivity – especially where it comes to CRM?
DP: Yes, generally speaking, I do. I think we are really starting to see a sea change in this respect. But what I find interesting is that, while content, networks and applications all help drive customer loyalty, it's 3G and Wi-Fi which have truly kicked the door open when it comes to opening up the world's markets to the next generation of data-based services, and it will be enhancers like HSDPA and IMS, which are major elements of our strategy, that help bring these services to fruition and create new opportunities for all.

EC: David, you've obviously got your own ideas about which directions NEC should be heading in over the coming months and years. Would you like to talk us through some of the changes that you intend making to the ways that NEC does business with its customers and partners?
DP: Sure. NEC intends to be more than just a high technology provider. We are well positioned to bundle our technologies, put more value-add services on top, and provide quantifiable business advantages to our customers in the B2B world – from the CXO level on down, and out into the consumer arena where the most important decisions are ultimately made.
This will involve promoting our brand identity more proactively, building further operational synergies across our business units, and driving a greater level of cross-product training internally. Our people are already highly motivated and skilled professionals in their respective areas and they will increasingly promote more of the overall technology continuum in support of our customers.
I'll also be looking to build on the good feedback routes that we already have in place with our R&D operations in Japan and across the globe.
NEC can actively help the wider industry achieve its now inevitable transformation by continuing to lead in many of the vital background areas that underpin it. Supercomputing, client-server and display technologies, as well as identity management solutions, all complement the communications environment and form part of a broader continuum. We believe companies that are adept at joining up these disparate elements and translating them into tangible business value will have the edge in the future. That, fortunately, has always been part of NEC's broader strategic vision.

Nick Outram looks at the potential of the ‘Pixecode’ in bringing Internet commerce to mobile handsets, providing mobile operators with a new stream of revenue

Imagine today’s Internet without its clickable links. It’s impossible, because the hyperlink is one of the most fundamental parts of what makes the Internet what it is: that key ability to freely jump to and access relevant information and services at the click of a button. Now imagine if the power of that linking ability could be applied to the everyday real-world objects that surround us.

Firstly, let me give you a brief bit of historical background to put this idea in context. All great new media tend to be built upon media that have gone before; television was a combination of radio and moving pictures, and in its turn radio was talking text. With convergence, the Internet is becoming the ultimate combination of all media types. By building upon what has gone before we learn to accept the new environment and gradually exploit the additional possibilities it offers. The linkages that form between the old and new can drive take-up and enhance both.
For example, a printed radio programme guide provided in a newspaper helps drive newspaper sales and informs readers of radio content they will listen to. However, when it comes to a mobile phone accessing the Internet, a problem currently exists. The difficulty of firing up that initial link is even greater than on the PC, as there is no keyboard on which to type text quickly. This problem is holding back people's access to a rich world of content, information and services, because there is a simple link missing between the real, day-to-day world we know and live in and the world of the Internet. What if we could easily link these worlds together? What if we could shatter the barrier between the static – but familiar – world of printed media and the dynamic new Internet world with all its possibilities? Imagine if information, product purchases and other services were just a single click away.
Let me introduce you to an idea I call a “Pixecode”.
The word “Pixecode” (short for Pixel Encoding or perhaps Pixel E-commerce Code – take your pick) is simply my own way of saying “a small two-dimensional barcode”. Well, it sounded better and was a lot more fun than the official words: “Datamatrix” or “PDF417”... The main point is that these visual barcodes are capable of packing a big information punch – in fact, you can easily pack long and complex text strings into a tiny space. This makes them perfect for encapsulating URL sequences along with sub-folders, extra reference data and so on. The code also packs a punch when it comes to error correction, so if your printing process is not 100 per cent and the code becomes smeared, skewed or not quite right, it will still be usable. And of course, when compared to other technologies that might be used to provide similar functionality, like RFID or other electronic methods, they are effectively free to mass-produce.
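That error-correction property can be illustrated with a toy example. Real Datamatrix and PDF417 symbols use Reed-Solomon codes, which are far more efficient than this; the simple majority-vote repetition scheme below (a Python sketch, with an invented wine-label URL) just shows how redundancy lets a partially smeared code still decode correctly.

```python
# Toy illustration of the error-correction idea behind 2D barcodes.
# Real Datamatrix/PDF417 symbols use Reed-Solomon codes; this simple
# majority-vote repetition scheme just demonstrates how redundancy
# lets a partially damaged code still yield the original payload.

def encode_with_redundancy(data: bytes, copies: int = 3) -> list:
    """Store several copies of the payload (a stand-in for parity symbols)."""
    return [bytes(data) for _ in range(copies)]

def decode_with_majority_vote(copies: list) -> bytes:
    """Recover each byte position by majority vote across the copies."""
    recovered = []
    for column in zip(*copies):
        recovered.append(max(set(column), key=column.count))
    return bytes(recovered)

url = b"http://example.com/wine/chateau-demo/2005"  # illustrative URL
copies = encode_with_redundancy(url)

# Simulate a smeared print: corrupt a few bytes in one copy only.
damaged = bytearray(copies[0])
damaged[0:4] = b"????"
copies[0] = bytes(damaged)

assert decode_with_majority_vote(copies) == url  # still decodes cleanly
```

The trade-off the real codes make is far better: Reed-Solomon recovers from damage with much less redundancy than three full copies, which is why a smeared Pixecode stays so compact.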
Take, for example, an Internet site that gives supplemental wine information. The Pixecode bitmap image is easily printed onto the wine label during the standard label print process. Mobile phones are now sufficiently capable of accessing rich media, and most have the three essentials needed to enable this usage: a camera acting as a 2D scanner, a browser client and a data connection. By framing the printed Pixecode image, taking a snapshot and letting the camera software decode it, a reference link is made available to a web page that can be browsed – the printed code acts as the access key to the new media world.
In the case described above, the online dynamic site could include more text about the grapes and soil of the vineyard in question, perhaps a short streamed video of the growers harvesting the grapes that year, and more about what makes this wine so special. When used in this way, Pixecodes can help to inform and improve the value-add of any product.
This takes the idea of a data service to the next level by providing context-sensitive, timely information and leveraging the key advantage of the mobile device – its mobility. It goes beyond the mere bits and bytes that have claimed so much hype in recent years, whether it be WiFi, HSDPA, 3G, IMS, i-Mode, etc. Put bluntly, most users couldn't give two hoots about how they get their information: they just want and need access to this richer environment while mobile, here and now.
Another example: while out and about, you spot a poster advertising the new Harry Potter book with the tagline “Buy the new Harry Potter from us now for only £9.99”. Down below, a Pixecode links directly to the vendor's online purchase page for this specific book. The user photographs the Pixecode, the browser fires up, and a few seconds later the user's finger is hovering over the 'Purchase Now' button – impulse purchase, deal done. The fact that a user is potentially just a couple of clicks away from any purchase is itself a small wonder. Amazon's online 'One-Click' purchasing is valuable enough that the company protects the process. With Pixecodes, Amazon – or a competitor – could easily begin to sell books via printed media, opening up the disintermediation benefits of online shopping to a whole new mass of consumers with mobiles: remember, globally the mobile is the new PC, with much higher sales and a larger potential audience.
There are dozens of uses for Pixecodes: how about consumer data capture? A strawberry McDonald's shake cup has a Pixecode on it. “Click to enter a free prize draw for a Florida holiday for 4!” says the enticing competition tagline. Within seconds, the company's consumer database is enriched with information about the purchaser and their consumption habits. With adaptation of the encoded service code (the initial “http://” text), Pixecodes could be used to quickly dial numbers, upload contact and diary PIM information from printed business cards and event calendars, or even encode the dreaded crazy frog ring tone as a printed bitmap. And then there's cashless vending and CEOs – Controllable Electronic Objects – but perhaps these uses are best left to an online presentation. (See Ref. 1 at the end for the link.)
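The service-code adaptation described above amounts to a simple dispatch on the decoded string's prefix. The sketch below is purely hypothetical – the prefixes and handler names are my own illustrative assumptions, not part of any handset API – but it shows how one decoder could feed several different applications.

```python
# Hypothetical sketch: route a decoded Pixecode string to a handset
# application based on its leading "service code". The prefixes and
# handler names here are illustrative assumptions only.

def dispatch(decoded: str) -> str:
    if decoded.startswith(("http://", "https://")):
        return "browser"        # open the linked web page
    if decoded.startswith("tel:"):
        return "dialler"        # place a call to the encoded number
    if decoded.startswith("BEGIN:VCARD"):
        return "contacts"       # import printed business-card details
    return "unknown"            # fall back to showing the raw text

assert dispatch("http://example.com/promo") == "browser"
assert dispatch("tel:+441234567890") == "dialler"
assert dispatch("BEGIN:VCARD\nFN:Jane Doe") == "contacts"
```

One decoder in the camera menu, plus a prefix convention like this, is all that separates "dial a number" from "enter a prize draw" at the point of scan.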

Vision to reality
So just what is needed to make this vision of mobile access to rich media a reality? From a technical to-do list point of view, the answer is surprisingly little. All the pieces already exist as individual parts of the value chain. The hard part is in overcoming the chicken-and-egg situation – in “getting the first penguins into the water”. The deadlock exists because no one is printing these 2D codes on their products while there is no mass of mobile decoders available, which in turn is because no one is printing the codes, and so on.
How to start a data revolution:
• If you are a product producer – start printing these codes on your products now! Create a mini online information source, then go and get a 2D code that links to it. They're free to create (see Ref. 2 for a source of bitmap images). Remember: as the online version is dynamic, you can update it over the years.
• If you are a mobile phone manufacturer, investigate putting some decoder software onto your next mobile as a quickly accessible feature, perhaps an option in the camera's 'what do you want to do with this picture?' menu (e.g. “send to browser”) – or provide the feature as a download for existing phones. Camera lenses with inbuilt macro capability work best as they allow focusing at shorter distances, although this is not strictly required. For experimentation, a Java decoder can be bought and downloaded from the website at Ref. 3. Many companies can provide source code for decoders, or you can build your own; the specifications for the coding and decoding of these codes are standardised and freely available.
• If you make mobile phone clients, especially browsers (e.g. OpenWave), consider making an easy imager/decoder button part of your one-click user interface. May I suggest a Pixecode as the button icon graphic?
• If you are a mobile operator, start a small pilot project that provides the whole value chain, and study whether user behaviour is positive towards this method of mobile usage. At the same time, calculate the data service revenues that widespread mass adoption of this access method could generate. Consider the additional revenues from becoming a 'referral conduit for instant m-commerce purchases' and thus taking a small percentage of any product's purchase price, Visa-style. Mobile operators fear being used simply as data pipes – this is one possible way to have their cake and eat it.
All the technology components exist to do this today, so we are almost there. With a little nudge in the right direction, this little thing called a Pixecode could become the next big thing. It's not a service, true; it's more than that: it's the missing link that can help to bind the physical world to the virtual world and greatly increase the value of both to all of us.

1. Pixecode presentation: www.pixecode.com
2. Datamatrix download: www.idautomation.com
3. Decoder download: www.semacode.com

Nick Outram is a Management Consultant specialising in the Telecommunication Sector and currently works at Sopra Group UK. He can be contacted at noutram@sopragroup.com

As the technology develops throughout Europe, Margrit Sessions and John Lilley take a closer look at tariffing for 3G data services in the region, and the impact VoIP could have on the market

When 3G made a dynamic entry into the mobile market some two years ago (e.g. in the UK with H3G), it was believed that the slashing of prices would serve to poach customers from rival networks. Today, a couple of years after the launch of 3G in a number of European countries, 3G operators have some attractive offerings in place, but there are aspects of its pricing that still make it unattractive for consumers.

Tarifica’s extensive collection of GSM voice and data tariffs was recently enhanced with 3G prices. Overall, domestic pricing proves attractive, but UMTS roaming remains a challenging topic, especially given recent media attention on the vastly expensive 3G data prices incurred whilst abroad. However, 3G mobile operators are, in fact, making it favourable to use the service within the domestic context, especially by implementing pricing strategies for cheap video calling.
Following on from this, Tarifica has chosen to focus on two aspects of 3G pricing, which unravel the differences in tariffs being implemented: a comparison between voice and video national call charges and the price discrepancy between 3G data usage within the domestic context and whilst roaming.
Looking at the cost of voice versus video calls (video calling requires a 3G network and a 3G-compatible mobile phone), there is not a great deal of difference in price. With voice still the killer application in mobile telephony, its price is obviously going to remain affordable. Yet video calling requires a bigger investment on the operator’s side, hence these calls need to be priced higher than voice calls.
Operators such as 3 Ireland, Orange Switzerland and Proximus Belgium currently price video calls at the same rate as voice calls. Of course, this is a method to encourage subscribers to use the service.
On the other hand, several operators have price discrepancies between their voice and video calling offerings. The operator with the biggest difference in the price of an off-net voice and off-net video call is T-Mobile UK, which charges four times the amount when making a video call. The next most expensive is 3 Italy, followed by T-Mobile Austria.
If consumers have been given incentives to make 3G calls domestically, this has certainly not been the case for business users with 3G data cards. 3G data usage is the second pricing aspect of 3G, which Tarifica discusses below.
When roaming, the pricing for downloading and sending e-mails with attachments tends to be unattractive for business customers. Taking Swisscom Mobile as an example, its ‘Data Option 1000’ provides customers with a relatively appealing domestic rate per MB. However, this is in stark contrast to the cost of using the operator’s UMTS data card abroad, where customers would be charged EUR 11.55 per MB – approximately 39 times the domestic price. Vodafone Netherlands shows a similar pattern, with a factor of 24.
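The size of that roaming premium is easy to check from the figures quoted: dividing the EUR 11.55 per MB roaming charge by the factor of 39 implies a domestic rate of roughly EUR 0.30 per MB. This is an approximation inferred from the article's own numbers, not a published tariff.

```python
# Rough arithmetic behind the Swisscom Mobile roaming premium quoted
# in the text: EUR 11.55/MB abroad, about 39x the domestic rate.
roaming_eur_per_mb = 11.55   # UMTS data card charge when abroad
premium_factor = 39          # roaming price relative to domestic price
implied_domestic = roaming_eur_per_mb / premium_factor
print(round(implied_domestic, 2))  # prints 0.3 (i.e. ~EUR 0.30 per MB)
```

A single 1 MB e-mail attachment sent from abroad therefore costs about what 39 MB would cost at home, which is the heart of the business users' complaint.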
Consumers, particularly those using video calling, are being heavily encouraged to use 3G services. As with all new services, mobile operators have a desire to drive usage and to increase traffic on their networks. Further to this, it is possible to draw two main conclusions from the data collected. Pricing two different services at the same cost, as revealed above, is generally known as ‘tapas pricing’; however, Tarifica’s comparison reveals that not all operators have opted for this approach.
On the other hand, 3G data usage abroad, typically attributed to business users, does not appear to be subsidised in any way, given the high charges that can be incurred. Presumably, with business customers having their monthly bills paid for by their organisations, operators are not taking disposable income into consideration as they do with residential subscribers. However, moving forward, as 3G data roaming becomes more commonplace, especially with greater network interoperability and more affordable handsets, prices will fall, as is now experienced with voice calling on 2G when roaming (i.e. Vodafone Passport).
Further downward pressure on 3G pricing is coming from advances with Voice-over-IP technology, which is impacting both consumers and business users, according to mobile market intelligence specialist InfoTech, a Tarifica sister company. Software such as Skype has raised awareness amongst consumers of how to achieve cheaper calling using WiFi networks. In the enterprise market, recent months have seen a plethora of launches of Fixed Mobile Convergence solutions using VoIP over wireless LANs. As VoIP technology moves into the mainstream, mobile users can expect to see price competition intensify. 

The information in this article has been extracted from Tarifica’s 3G Pricing Service launched in October 2005.

Margrit Sessions, Tarifica, can be contacted via e-mail: msessions@accessintel.com
John Lilley, InfoTech EMEA, can be contacted via e-mail: jlilley@accessintel.com

According to telecoms body the Forum for International Irregular Network Access (FIINA), fraud is estimated to cost the industry $55 billion globally every year. This risks damaging customer relationships as demand for new mobile phone content and additional services hits an all-time high. For mobile phone companies that have invested billions of pounds developing networks, acquiring 3G licences and offering competitive call rates to build and keep their customer bases, stamping out fraud is absolutely vital, says Mark Bell

As mobile phones become ever more central to consumers’ lives – not only for communication but increasingly for information and entertainment – investment in mobile marketing is expected to explode in the next five years. But against this backdrop the threat of telecoms fraud looms. According to FIINA, whose members include fraud experts from service providers throughout the world, the telecoms industry could be losing $60 billion in revenues annually.

Further research conducted by International Data Corporation indicates that typically 30-50 per cent of bad debt among European telecoms companies is a direct result of fraudulent activities.
Retailers and network providers that fail to prevent fraud not only risk damage to their bottom line through direct fraud losses – they risk alienating consumers as the mobile market flourishes.

Opening the door to fraud 
New technologies are, in many cases, making things easier for fraudsters. Next generation technologies for instance have brought a whole new set of services to business and consumer users, but they also threaten to open the door to unprecedented levels of fraud.
Drive-by hacking into wireless local area networks, which can lead to deeper infiltration into company intranets and the speedy transfer of fraudulently acquired information across GPRS networks, are just two major problems faced by businesses, service providers and organisations charged with preventing fraud.       
A clear message from within the telecoms industry is that, at the very least, network owners and service providers must begin to address the weaknesses introduced by wireless access networks and devices, routing equipment and international network interconnection boundaries. Start-up operators, which typically focus on growth, new customers and increasing revenues, must also dedicate further resources to mitigating fraud risks.
However, mobile phone retailers selling the latest high-value handsets and accessories are also a major target for fraudsters and for this very reason they too have a vital role to play in reducing fraud.
Card fraud is now a major problem around the world. Credit card skimming and counterfeit card scams are widespread in Europe, Asia and Latin America, and growing rapidly in the US. They range from large-scale operations – where thousands of cards can be produced in one night and be in a suitcase headed for Europe the following day – to much smaller-scale ones.
Consumers are all too aware of the risks that the various types of card fraud present to them, and mobile operators are increasingly looking to retailers to help inspire customer confidence, which, in turn, helps to ensure continued loyalty and supports churn prevention and retention strategies.

How can mobile retailers help? 
By simply understanding who they are dealing with, mobile retailers can help to prevent fraud. But this is easier said than done given the rapid and extensive take-up of handsets and mobile services by consumers.
According to Jupiter Research, by the end of 2005 over 299 million people will subscribe to a mobile services network in Europe. While penetration levels are beginning to plateau as they reach saturation point, the handset sector continues to evolve. Over 36 per cent of Europeans now have camera functionality on their phones for instance, and this figure will nearly double to 70 per cent by 2010. 
To help mobile retailers reduce fraud risks, GB Group, in partnership with BT, has developed a secure online system called URU – designed to stop would-be fraudsters making applications for mobile phones. It instantly verifies a new customer’s identity using a combination of passport, driving licence, utility bill and a host of other publicly held information.
Customers are taken through a hassle-free ID check by sales and customer service representatives at point of sale, whether on the high street or via a call centre. E-tailers have embedded the system into their new customer registration pages. Within seconds, customers are able to buy their handsets and sign up to their desired tariffs. All of this means that operators now have the confidence that fraud levels can be reduced without negatively impacting the customer experience.
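The multi-source check described above can be sketched in outline: each matched evidence source contributes to a confidence score, and the application passes once the score clears a threshold. The sources, weights and threshold below are invented for illustration – they are not GB Group's actual URU scoring rules.

```python
# Illustrative sketch of a multi-source identity check of the kind the
# article describes. The evidence sources, weights and pass threshold are
# assumptions for illustration only, not URU's real scoring rules.

EVIDENCE_WEIGHTS = {
    "passport": 40,
    "driving_licence": 30,
    "electoral_roll": 20,
    "utility_bill": 10,
}

PASS_THRESHOLD = 50  # assumed minimum score to approve an application


def verify_identity(matched_sources):
    """Return (score, approved) for the evidence sources that matched."""
    score = sum(EVIDENCE_WEIGHTS.get(src, 0) for src in matched_sources)
    return score, score >= PASS_THRESHOLD
```

Under these made-up weights, a passport match plus a utility-bill match would score 50 and pass, while a utility bill alone would be declined – the point being that no single document is trusted on its own.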

Responsible mobile retailing
Spend on mobile marketing in Europe is estimated to increase six-fold from €110m (£75m) in 2005 to €688m (£470m) by 2010, according to Jupiter Research. Add this to the fact that personalisation (logos, ring tones and wallpapers) and ‘infotainment’ (news, sports, gaming) markets are set to triple in size from €3.3bn (£2.25bn) to €9bn (£6bn) between 2005 and 2010, and it becomes clear why mobile phone retailers will be asked to play an ever more important role in customer identification to reduce fraud.
Ultimately, mobile retailers stand to benefit on a number of levels from implementing robust customer identity checking systems to stamp out fraud.
As well as adding value to their own businesses by reducing fraud, they can fortify relationships with mobile phone operators who are highly sensitive to the damage it can do to corporate reputations and customer relationships. This would be the start of a long and fruitful relationship.
Mobiles will soon be the most-used visual, multimedia, infotainment devices available to consumers. However, while operators have much to gain from rapidly increasing spend on mobile marketing, they also have a lot to lose. As mobile media is legitimately exploited by operators, so it will continue to be by the unscrupulous.
Operators will therefore use everything in their armoury to mitigate abuse and encourage best practice to build consumer confidence.
As mobile retailers are a major public ‘face’ for the industry, with millions of new customers signing up with operators via major high street chains and independent retailers and e-tailers, it will not be long before a true fraud prevention and customer protection partnership is born.                                                 

Mark Bell works for GB Group’s DataAuthentication division.  www.gb.co.uk/authentication

Coming to a handset near you, DVB-H technology looks set to provide users and operators with a cost-effective means of bringing video to the mobile generation, says Brian Lancaster

Over the last six months, the media has been abuzz with the ‘next big thing’ in mobile phones – the streaming of TV and video transmissions to users’ mobiles whilst on the move. Rather than simply include a power-hungry TV tuner in a user’s handset – resulting in rather shaky signals for users on the move – the industry has developed a technology called DVB-H, short for digital video broadcasting for handhelds.

DVB technology is already in widespread use on the world’s satellite TV networks and, increasingly, on digital terrestrial television, but the extension of the technology to mobile phones is no mean feat, mainly because the signal paths involved are chaotic at best.
By using a similar error-checking system to that used for mobile Internet and data access, and buffering the signal transmission on the mobile for several hundred milliseconds, the DVB-H user experience is greatly enhanced.
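The buffering idea can be illustrated with a toy model: data arrives in short bursts, is held in a playout buffer, and is drained at the steady playout rate; as long as each burst delivers at least as many bits as are played out before the next one arrives, the viewer sees a continuous stream. The burst sizes and rates below are invented numbers, not figures from the DVB-H specification.

```python
# Toy model of burst-and-buffer reception as described above. Each burst of
# bits is received into the buffer; the buffer then drains at the playout
# rate until the next burst. Figures are illustrative, not from the spec.

def playout_ok(burst_bits, burst_interval_s, playout_bps, buffer_bits=0):
    """Return True if playback never stalls (the buffer never runs dry)."""
    for bits in burst_bits:
        buffer_bits += bits                            # burst received
        buffer_bits -= playout_bps * burst_interval_s  # drained until next burst
        if buffer_bits < 0:
            return False                               # underrun: playback stalls
    return True
```

In this model, 2-megabit bursts arriving one second apart comfortably sustain a 1.5 Mbit/s stream, whereas 1-megabit bursts at the same interval would stall almost immediately.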
Current trials in the US and Europe suggest that the ‘TV experience’ on a user’s mobile is similar to downloading a complete multimedia file to the handset and then viewing that file. With DVB-H, however, the transmission stream is constant and continuous for the period of the transmission. As a result, users can start viewing their required multimedia stream within a second or two of selecting the feed on their handset’s menu.
Most of the trials of DVB-H announced so far have involved the use of GSM handsets, mainly for the reason that GSM technology is now in active use in more than 200 countries around the world. This does not mean that DVB-H technology cannot be used with other cellular networks and handsets – far from it; because the DVB-H transmissions are normally carried by an overlay network, it does not matter whether the user’s handset and network are GSM, CDMA, AMPS or a similar cellular variant.
The fact that an overlay network carries the DVB-H transmissions is an important one, especially for those carriers that do not currently plan to migrate over to 3G cellular technology.
3G network technology, whilst giving users access to a wider range of services, thanks to a much higher underlying data transmission speed, still requires carriers to install another network. For carriers in countries with a mature market and high levels of cellular penetration, it is highly likely that the original network deployment costs will have long since been accounted for.
In relatively new markets, such as Africa and Asian countries such as India and Pakistan, the costs of the initial GSM/CDMA networks have yet to be offset, effectively prohibiting carriers from rolling out new 3G networks. This is where DVB-H technology offers carriers an interesting alternative, with the extra bonus of seeing the cost of a DVB-H network rollout defrayed by the income stream from mobile TV/video transmissions.
DVB-H isn’t a replacement for 3G, of course, but it offers a 3G-like experience to 2G network users. For carriers and mobile phone users alike, this is a win-win situation.
For the carrier, it means avoiding the need to spend millions of pounds or dollars on licensing and building a 3G network, whilst users get access to 3G-like video streaming sooner, and at lower cost, than they could otherwise have expected.
Whilst AlanDick has been quietly assisting carriers around the world in their preparations for various DVB-H trials, a number of hardware vendors have been following a similar strategy. Our observations suggest that several vendors have already realised the potential that DVB-H has for carriers with no current 3G network plans.
At the recent CommunicAsia event in Singapore, for example, Frontier Silicon revealed it is developing a multi-standard and multi-band chipset that supports the fledgling DVB-H standard, as well as the rival digital multimedia broadcast (DMB) system. Although still at the development stage, the DMB system is being viewed by many as a viable competitor to DVB-H, despite the fact that DVB-H has garnered a lot of support in the West.
Just to make life interesting, however, DMB is currently being developed in two distinct flavours, Korean and European.
Frontier Silicon’s Kino chipset can support both flavours, as well as the DVB-H standard. It can do this, company officials told audiences at CommunicAsia in mid-June, because it will be the world’s first device to combine a silicon tuner with a broad tuning range and a baseband processor using software defined radio techniques.
This strategy is interesting, since it means the software-driven chipset can compete on price, size and power consumption with devices that support just a single standard.
Anthony Sethill, Frontier’s CEO, went on record at CommunicAsia as saying that he fully expects regulatory, spectrum-allocation and installed-infrastructure issues to slow the deployment of mobile digital TV around the world. Frontier’s aim, he said, is to remove this barrier by quickly introducing a solution, with the Kino 3 chipset on course to reach the market in 2006.

DVB-H vs DMB - which is best?
Our analysis suggests that, far from splitting the marketplace into a VHS vs Betamax war, the development of DMB alongside DVB-H will actually accelerate the development of streaming-to-mobile services, providing multi-standard handsets become the norm.
This appears quite likely to happen, as single-band GSM handsets are now history, and even the most budget of budget mobiles supports two or three GSM frequencies/standards by default.
Frontier Silicon’s Sethill said at CommunicAsia that he expects commercial mobile digital TV services based on DMB technology to arrive in Korea later this year, followed by UK and German services some time in 2006. Commercial services based on DVB-H, he added, are expected to go live in the US during 2006, with parts of Europe adopting the standard on a commercial basis in the latter part of 2006 and 2007.
Sethill should know what he is talking about; his company’s chipsets are found in more than 70 per cent of all digital audio broadcast radios – a market share he plans to match in the fledgling digital mobile TV market as it evolves.
Despite Frontier Silicon making its bold moves in the multi-standard digital mobile TV marketplace, telecoms giant Nokia is throwing its weight almost entirely behind the DVB-H standard.
In May of this year, Nokia unveiled its plans for DVB-H, saying that, in revealing its technical specifications, it is seeking to jump-start the digital mobile TV industry.
Nokia isn’t alone in wanting DVB-H to succeed. Supporters of the technology include O2 in Europe and Crown Castle Mobile Media in the US, as well as a number of chip vendors including Texas Instruments and Intel.
Nokia’s DVB-H strategy is not confined to one area of the globe. The company has already joined forces with The Bridge Networks in Australia to become the system and handset provider for that country’s first mobile TV trial. The Sydney-based trial is expected to go live with more than 500 trialists this summer, all of whom will use a Nokia 7710 smartphone equipped with an add-on device to receive mobile TV broadcasts. The Nokia 7710 smartphone is notable for being GSM/GPRS-only and coming in a distinctly non-standard handheld gaming console format.
Despite its lack of 3G facilities, the Sydney DVB-H trial will include direct links to the Internet for access to background information on TV programs or sports results.
Although the Internet data is carried over GSM/GPRS, which operates at around the same speed as a regular dial-up Internet connection, Nokia claims that users will be happy to download Web-based data at such speeds in the background, whilst they watch TV on their Nokia 7710 handsets.
So how will we see DVB-H transmissions develop over the next few years? Our analysis of market trends suggests that the development will mirror that of digital audio broadcast (DAB) radio, at least in the early days, with multiple channels available in and around most city areas.
The real advantage of DVB-H, however, is that its transmissions can be national, regional or area-based. This means, for example, that a city could have its own DVB-H TV channel, as the transmission costs involved are significantly less than those for a digital terrestrial or satellite channel. Furthermore, because DVB-H is an interactive digital transmission system, various segments of programs can be daisy-chained together from multiple channels to produce a highly customised channel.
This concept is one that many TV stations will exploit to great advantage, creating a local brand identity using limited local programming and tapping into national network resources.
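The daisy-chaining described above amounts to concatenating (programme, duration) segments drawn from different channels into one schedule. A minimal sketch, with invented channel and programme names:

```python
# Minimal sketch of assembling a customised channel from segments of other
# channels, as described above. Programme names and durations are invented.

def build_channel(*segments):
    """Turn (programme, minutes) segments into a playlist of (start, programme)."""
    playlist, start = [], 0
    for programme, minutes in segments:
        playlist.append((start, programme))
        start += minutes
    return playlist
```

For instance, chaining a 15-minute local news segment, a 30-minute national news block and a 5-minute traffic update produces a schedule that starts local news at minute 0, national news at minute 15 and traffic at minute 45 – a local brand wrapped around national network material.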

Several steps further
DVB-H goes several steps further, as it is also possible to produce highly localised and customised channels at relatively low cost. Furthermore, because users can access their DVB-H programming using their mobile from anywhere there is coverage, you can expect to see people on the move, even on public transport, using the technology in a similar fashion to the iPod and similar mobile music devices.
The two-way interactive side of DVB-H is not to be underestimated, as users will be able to tie into extra information, as is the case with the Bridge Networks trial of DVB-H in Sydney, Australia, detailed earlier.
The crucial factor with DVB-H, however – one that many in the industry seem to have missed – is that it offers networks and their users access to a 3G-like experience, but without the relatively high costs of licensing and deploying a 3G network infrastructure. For many areas of the world, such as most of Africa and parts of Asia, where 3G networks are still on the distant horizon, this could be an interesting option.

Brian Lancaster is a commercial engineer with AlanDick, and can be contacted via tel: +44 1242 518500; e-mail: contact@alandickgroup.com

Picocellular technology is now being combined with a unique GSM wireless gateway architecture to enable standard, cost-effective communications, opening new markets such as airline, maritime, military, emergency preparedness and remote wireless communications. Mike Fitzgerald explains the reasoning and new technology behind the delivery of personal mobile communications to these hitherto neglected target markets

Making GSM services available and viable for even the smallest, remote communities, as well as for the airline and maritime traveller, has been a major challenge for mobile operators, service providers and infrastructure suppliers. Until now, delivering viable GSM services to ‘remote community groups’ of not more than 100 has not been feasible from a business perspective. The key barriers to entry have been the capital costs of building the towers required to reach these communities, and the operational overhead of providing backhaul over satellite.

As a result, after two decades of wireless network roll-out, billions of people across the globe in rural communities, remote territories, travelling onboard aircraft or maritime vessels, and even the world’s armies, remain cut off from the unparalleled economies of scale GSM has achieved. The GSM Association and its members have recently reached a critical milestone in cost-effective communications with the provision of a sub-US$50 wireless handset. The operational overhead of running these subscriber groups now needs a similar shift in cost reduction. While the leading network vendors have achieved this for high-capacity network solutions, these ‘remote hot spots’ still remain cut off. The goal is to provide a cost-effective service to these respective subscriber groups and, in the particular case of the remote communities and territories of emerging markets, this means a sub-US$5 per subscriber, per month operating cost.
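The sub-US$5 target can be put into back-of-envelope form: the monthly satellite backhaul charge plus the amortised equipment cost, divided by the subscriber count. All figures in the sketch below are illustrative assumptions, not Altobridge or GSMA data.

```python
# Back-of-envelope model of the per-subscriber operating cost discussed
# above. All input figures are illustrative assumptions, not vendor data.

def monthly_cost_per_subscriber(satellite_monthly_usd, equipment_capex_usd,
                                amortisation_months, subscribers):
    """Monthly opex per subscriber: backhaul charge plus amortised capex."""
    monthly_opex = satellite_monthly_usd + equipment_capex_usd / amortisation_months
    return monthly_opex / subscribers
```

With, say, a US$300-a-month satellite link and US$12,000 of site equipment amortised over five years, a 100-subscriber community lands exactly on the US$5 mark – which is why squeezing the backhaul overhead, rather than the handset price, is the binding constraint.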
To achieve this goal, a new lateral methodology for network design and operation has evolved: a new ‘split architecture’, which overlaps both the satellite and wireless architectures and also the respective business models of these industries. This ‘split architecture’ maximises efficiency and, with it, minimises overhead transmission costs, while remaining fully compliant with existing communications standards and meeting the operation and maintenance demands of the network operators.
This new architecture is now live in some of the world’s most challenging remote locations, such as the Antarctic and Iceland. It is also live on maritime vessels, and operating live onboard commercial jets under the required test licences. It is also attracting significant interest from various governments to meet required secure and transportable military and emergency communication requirements.

Remote hot spots until now
Traditional network roll-out incorporates the use of large, reliable, feature-rich switches and high-capacity base stations covering dense populations. A typical ‘phase 1’ operator roll-out would target cities, major roads and towns. This traditional approach is particularly suited to the requirements of developed economies. Emerging markets, by their very definition, have requirements more suited to their particular socio-economic needs. Emerging economies have poor existing telecommunications infrastructure, and whilst cities and towns exist in these countries, beyond them communities are typically tribal and remote. Cost-effective wireless communications in these communities is, therefore, a necessity and a key tool to assist in the development of those micro-economies. If a community is, say, 10km away from a major town, then the use of a microwave link and a micro/pico base station makes good economic sense, as several towers would not be necessary.
The governments of these emerging economies have put statutory coverage requirements into the licence conditions of the wireless service providers. However, the costs associated with building towers to reach these communities, or with using satellite as a backhaul method, have led to poor business cases and mounting losses for operators. If operators continue to expand into territories without an achievable ROI, the overall economy suffers, as the general population will need to subsidise these costs through increased charges.
Any new wireless architecture would need to meet the demands of the local communities while satisfying the strict operational requirements of the service provider, such as service transparency. Service providers have invested huge sums of money to ensure that they can offer high-quality, feature-rich services, insisting that these services be available to all subscribers throughout the network.
Certainly, there are already configurations combining base stations and satellite systems that can offer isolated-community communications today for GSM and CDMA-based networks. However, satellite is an expensive medium for signalling overheads, particularly when a significant majority of this signalling could be non-revenue-generating. If the base station is linked directly into a satellite backhaul, all of this non-revenue-generating traffic will pour across the satellite bearer.

The new technology and why it’s viable
Cellular can now be delivered to these scenarios by a patent-pending, open, hardware-agnostic, two-way wireless communications gateway – the Altobridge AM Gateway Platform – between a wireless access point (picocell) and the PLMN (Public Land Mobile Network). Most satellite communication bearers can be incorporated into the gateway architecture, with the overall solution removing the typical operational overheads and capital costs associated with remote wireless communications backhaul. The traditional cost barriers for backhaul over such technologies as satellite can now be removed.
In the aeronautical market, the new Altobridge technology is being adopted by players such as Telenor, ARINC and Honeywell as part of their application portfolios. In the maritime market, shipping lines are looking at the solution to bring crews back into cost-effective contact via the personal mobile handsets that have become part of all our lives. Now that standard wireless services can be introduced onboard ships, other services available in the terrestrial world, such as remote wireless monitoring and tracking, can also be offered on an ongoing basis on maritime vessels.
In the military and emergency preparedness market, increasing numbers of governments and corporations are recognising the valuable role this architecture can provide. Portable, suitcase-size, remote wireless network solutions can bring military personnel, emergency personnel or the general public into full communication regardless of the state of the public telecommunications network.
In the area of operational savings, the new gateway architecture overcomes probably the biggest barrier to entry, the one that has prevented take-up of such solutions: the difficulty of limiting signalling to and from the remote wireless base station while maximising the transmission efficiency of the revenue-generating voice and messaging elements.
The ‘One-To-Many’ relationship of the gateway, where one ground gateway can connect to a large number of remote gateways, also limits the required signalling links into the PLMN to an absolute minimum, thereby keeping network infrastructure costs low.
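The saving from the one-to-many arrangement can be sketched numerically: without a concentrating gateway, each remote site needs its own signalling link into the PLMN; with one, many sites share a single link. The site counts below are illustrative, not Altobridge deployment figures.

```python
import math

# Sketch of the 'one-to-many' saving described above: remote sites share a
# ground gateway's signalling link into the PLMN instead of each needing
# its own. The capacity figure per ground gateway is an illustrative assumption.

def signalling_links(remote_sites, sites_per_ground_gateway):
    """Return (links needed with concentrating gateways, links without)."""
    with_gateway = math.ceil(remote_sites / sites_per_ground_gateway)
    return with_gateway, remote_sites
```

Under these assumed numbers, 200 remote communities served at 50 sites per ground gateway need only four signalling links into the PLMN rather than 200, which is where the claimed reduction in network infrastructure cost comes from.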
One of the exceptional aspects of the split architecture solution is that it addresses the particular requirements of respective markets from a common platform; for aeronautical communications it supports new-generation broadband satellite communications, such as Inmarsat’s SwiftBroadband, while also supporting legacy satellite equipment, e.g. Classic Satcom already invested in by a huge number of aircraft; for the medium-ARPU seafarers on a merchant vessel the architecture supports economical satellite communications bearers such as Mini-M, and for the lowest ARPU remote subscriber of the emerging markets the solution will deliver ROI for 100 subscribers over the locally-available satellite service.
Aeronautical, maritime and remote community GSM solutions now offer a major boost to industry through enabling subscriber uptake in scenarios where services have previously not been viable. For the very first time in the history of GSM, using this new split architecture, the words ‘anytime, anywhere’ personal communications really do mean what they say.                     

Mike Fitzgerald is CEO of Altobridge

European Communications checks out 3GSM World Congress to be held in Barcelona 13-16 February 2006

The 3GSM Congress is getting bigger. Given the potential market-grab of 3G services, that’s hardly surprising, and the conference organisers are certainly not missing the opportunity to emphasise growth – not only in terms of expected attendance figures (around 40,000) and venue size, but also in the scope and variety of the conference programme and events.

And while we’re on the theme of size, 3GSM is undoubtedly fielding big-hitters among its keynote speakers for 2006. Among the CEOs headlining the event are Steve Ballmer of Microsoft, Antonio Viana-Baptista of Telefonica Moviles, Arun Sarin of Vodafone, Sanjiv Ahuja of Orange, Olli-Pekka Kallasvuo of Nokia, Carl-Henric Svanberg of Ericsson, Peter Erskine of O2, Ed Zander of Motorola, and Mun Hwa Park of LG Electronics Mobile Communications. A veritable Who’s Who of the good and the great in mobile telecoms, representing between them access to some 850 million subscribers worldwide. A slightly later addition to the keynote line-up, Wang Jianzhou, President of China Mobile, is not only a considerable feather in the 3GSM Congress cap, but serves to underline the considerable attention now being given to the massive potential of emerging markets. Indeed, the organisers are keen to point out that the Congress agenda, as a whole, will focus on increasingly important global issues, with keynotes examining developing market opportunities – the regions that will drive growth over the coming years. Given the expected explosion in mobile take-up alone – in such countries as China and India – 3GSM could hardly fail to focus on these issues.
As a result, the ‘Growth Market Analysis Seminar’ is intended to examine these very markets. Designed for business analysts, financial analysts and strategy managers, the seminar will focus on some of the key growth markets that have enabled the GSM community to reach its current impressive subscriber numbers.  Some countries are attracting particular attention as they expand and create new opportunities for the industry, so the seminar will focus on six specific areas: China and India, Asia Pacific, the Middle East, Africa, Eastern Europe and Latin America. The sessions are intended to provide an overall picture of the market, summarise the latest key trends and statistics, and highlight key service and enabler topics. The sessions will also provide forecasts and suggestions for the future developments of specific markets.
As well as identifying new geographical markets, 3GSM is clearly also concerned with the breadth of opportunity now coming under the ‘mobile communication’ umbrella. 
“The line-up of speakers and the issues they will address underlines the 3GSM World Congress’s role as a key force, driving multimedia developments for business and consumers alike,” comments the GSM Association’s Chief Marketing Officer, Bill Gajda.
Certainly, 3GSM aims to reflect the multifaceted world of mobile media, with the spotlight on mobile advertising, mobile entertainment (music, games, video and mobile TV), multimedia handsets, and, of course, the evolution of 3G technology. Other sessions being given the high-profile treatment include low-cost handsets, service convergence, and customer loyalty and retention.
The Congress week begins on Monday, February 13th, with the GSMA’s Leadership Summit, a by-invitation-only gathering of CEOs from the global mobile operator community, to discuss industry strategies and opportunities for the future, in the appropriately sumptuous surroundings of Barcelona’s National Palace (the Palau Nacional de Montjuic). At the 3GSM World Congress 2005, over 200 CEOs representing global GSM operators and vendors joined together to hear about the strategic and commercial priorities that impact the future vitality of the industry, directly from the GSMA Board. According to the GSMA, the event resulted in “tangible outcomes, setting in motion many of the GSMA’s central strategic initiatives”.
The GSMA will also hold its second Global Governance Mobile Forum, where it will host more than 20 senior government delegations in global regulatory policy and strategy discussions. A closed-door event, the Forum will include combined sessions with operator CEOs within the Leadership Summit, as well as keynotes and presentations by Ministers and global institutions.
The first day of 3GSM 2006 also witnesses the Mobile Innovation Forum, a Growth Market Seminar, a Billing Seminar, as well as the opening of the Congress Exhibition, and, of course, the welcome party.
The Mobile Innovation Forum basically gives the kids on the block the chance to show the grown-ups what they can do. Designed to strengthen innovation in the mobile industry, the Forum provides a platform for young, small and start-up companies to pitch innovative ideas to mobile operators and strategic investors from the venture capital arena. According to the GSMA, over 150 such ‘young’ organisations applied to participate in the Forum for 2006. Among the ‘grown-ups’ this year will be Cingular Wireless, Hutchison Group, O2, Orange, Smart Communications, Telefonica Moviles, T-Mobile and Vodafone.
“Opportunities for smaller and independent organisations to pitch great ideas and inventions to a gathering of strategic investors in one place are very rare indeed,” comments Craig Ehrlich, Chairman of the GSM Association. “We are providing a platform to pitch to the entire industry, and one successful organisation will gain considerable recognition by winning the Mobile Innovation Award. The potential for access to funding, and a route to the market is obviously tremendous.”
In early December, a panel of judges, comprising the supporting operator and venture capital representatives, shortlisted some 15 or so organisations that will be invited to pitch their ideas at the Mobile Innovation Forum. The panellists will assess brief ‘elevator pitch’ presentations by the short-listed organisations, three of which will go forward to present their ideas in the ‘Identifying and Encouraging Innovation in the Mobile Industry’ session on Tuesday, February 14th, allowing a further question and answer grilling from the judges and the audience. The Forum culminates in the selection of one final winner to be presented with the award during the GSMA’s Award Night at the National Palace on the Tuesday evening.
Awards are, without doubt, becoming a major feature of the 3GSM World Congress. With the growing convergence of information and entertainment, it’s probably hardly surprising that the staid, old communications industry is placing itself firmly in the glitz and glamour arena more readily identified with the entertainment world. To this end, there will be awards aplenty at 3GSM 2006, intended, according to the GSMA, to “celebrate excellence, recognise achievement, catalyse innovation, and promote creative content and applications across the GSM world”. Along with the established awards for such areas as Mobile Handsets and Devices, Marketing and Promotion (including best broadcast commercial), Billing and Customer Solution, Mobile in the Community, and the GSM Association Chairman’s Award for ‘Outstanding contribution by an organisation or individual’ (Oscar, eat your heart out), a whole clutch of new awards is also on offer.
Quite apart from the already described Mobile Innovation Award, other new categories are intended to reflect, according to the GSMA “the dynamic pace of global mobile communication evolution and innovation”. These include the Mobile Entertainment Awards, covering ‘best made for mobile’ game; music service; video service; and sports infotainment; Mobile Applications Awards, covering enterprise products or services, and messaging service; and under the established Network Products and Solutions Awards, new categories for Radio Access, Network Quality Initiative, Roaming Product or Service, and Service Delivery Platform – all delivered at, of course, a black-tie gala dinner.
‘Entertainment’ is a word which, these days, seems to trip naturally off the tongue along with ‘communication’. The 3GSM Mobile Entertainment Summit should, therefore, come as no surprise, offering a forum for entertainment and content brands to come together with the mobile industry, as the two worlds increasingly converge. The session is intended to address the key issues affecting the mobile infotainment business, and act as a platform to define the path to its future.
On the exhibition floor at 3GSM, life might not be as glamorous as the images conjured by the entertainment world, but it will – if all predictions of attendance are accurate – be just as buzzy. Exhibition space sold out early, with more than 600 companies signed up and key names such as Intel, Microsoft, Telefonica Moviles, Motorola, Panasonic, HP, IBM, Sun Microsystems, Huawei, and ZTE showing their wares. The total of slightly less luminous names is too great to list, but covers a wide cross section of the industry, and will include (taking a broad, random, but, hopefully, representative sweep) the likes of cable and antenna systems designer and manufacturer Radio Frequency Systems; telecom software provider Aepona; mobile entertainment specialist BeTomorrow; billing solutions suppliers Cerillion, Martin Dawes Systems, Formula Telecom, CBOSS, and Orga Systems; mobile data outsourcing solutions provider End2End; support systems developer Telcordia; revenue assurance specialist Azure Solutions; and smart card solutions supplier Giesecke & Devrient.
While the organisers point out that the new venue – the Fira de Barcelona – offers more than double the capacity available last year, they are keen to stress that the ‘village’ atmosphere of the exhibition, established in Cannes, will be maintained. Indeed, following the long-running, faithful relationship with Cannes, 3GSM has now shifted its allegiance very firmly to Barcelona, pointing out, for instance, that the city welcomes some of the largest international congresses in Southern Europe and tops the International Congress and Convention Association (ICCA) ranking.
The 3GSM World Congress 2006 will clearly be bigger than before – it now remains to be seen whether it will also be better...

Is VoIP now ready to provide the icing on the cake for the telecommunications industry? Olivier Hersent looks at the variety of flavours VoIP has to offer

The business case justifying deployment of voice over IP (VoIP) is constantly evolving. In 2004 and 2005, VoIP deployments were justified primarily by lower operating costs for the operator: subscriptions combining voice and DSL access experienced less churn than DSL-only subscriptions, enabling reductions in marketing expenditure, better resistance to price erosion and fewer charges for changes in pre-selection subscriptions on lines leased from incumbent operators.

The market has matured and operators seem to be more ambitious about VoIP. The Netcentrex consulting team, which helps service providers to define their business plans, is now frequently asked to go beyond cost optimisation and use VoIP to raise the average revenue per user (ARPU).
In 2006, we can expect operators to explore a variety of ways to achieve this. 

Use of VoIP for primary line telephony
In 2006, use of VoIP for the primary home phone line – and not just for secondary lines – will increase. In order for a VoIP phone to become a personal object (in the same way that mobile phones are personal), it must become mobile inside the home. Operators can use DECT, Bluetooth and Wi-Fi phones for this.
Following an initial preference for Bluetooth in 2004, it seems that DECT is now the favourite with operators for the very short term and Wi-Fi for 2006. This development will most likely reduce call traffic over revenue-generating lines but will probably not require a reduction in the subscription rate, because the added mobility and allocation of a personal telephone number will be perceived as having very high added value. One can also expect the number of revenue-generating lines to double and operators even hope that, in time, this figure will overtake the number of homes equipped with a PC and DSL.
It is interesting to point out here that the majority of market studies for residential use of VoIP base their findings on the assumption that a certain number of traditional fixed lines will be replaced by VoIP. This reasoning is flawed, because VoIP is often deployed in addition to, and not as a replacement for, the traditional fixed line, as homes raise the number of phone lines from one to two or more. There is no major technical or regulatory obstacle to this type of offer, which is likely to meet with significant success in all the European countries where VoIP presence is already strong.

VoIP-only telephony in the home
Primary line use of VoIP where it is the only voice service in the home will also develop. The VoIP service will be delivered over a DSL-only, standard bandwidth access that is sold without any additional telephony services, with the DSL access included in the VoIP offer. Alternative operators automatically raise their ARPU in this type of offer through the value of the basic telephony subscription. If they don't own their own network, they can increase their margins to cover the cost of unbundling, line leasing or DSL access. This type of offer will be impossible to ignore in 2006, even though primary line services are much more complex to deploy than secondary lines.
Primary line services require the operator to offer number portability, to guarantee a very high level of availability, currently limited by DSL and cable technologies, and to deploy the full range of emergency and legal services. Consequently, the deployment and support costs for this type of offer are high, making deployment more accessible for operators with a large installed base and with their own network. Primary line subscriptions are still in their infancy in 2005 and under the strict control of the operators, who see their support costs exploding in the first months. The only operator with significant experience in primary line VoIP is Fastweb in Italy, which has been delivering this offer for over three years and now has more than 300,000 primary line subscriptions.

Combined MVNO/fixed line VoIP offers   
2006 will see the rise of combined Mobile Virtual Network Operator (MVNO)/fixed line VoIP offers using hybrid GSM/VoIP phones. The aim is that the personal VoIP phone (see above) becomes the user's only communication device, whether he or she is in or out of the home. The potential of this type of offer is generating considerable interest among Internet Service Providers (ISPs) and a large majority are already studying the possibility of becoming an MVNO. It has also stimulated mobile operator interest in ISP activities, because the synergy between the offers is high and linked exclusively to phone use.
Some manufacturers have made significant progress in the optimisation of terminals that combine GSM and VoIP over Wi-Fi. Taking an optimistic point of view, we can assume that all major technical difficulties will have been resolved by 2006. It is still questionable, however, whether these packages will see the light of day on the residential market. The regulatory and financial stakes are enormous and have an impact on players much more powerful than the ISPs. If we look at the natural balance of power, it is the mobile operators that should take over the activities of the ISPs and extend their mobile networks to include management of domestic VoIP lines (Unlicensed Mobile Access (UMA) type offers). However, depending on the regulatory environment, only the ISPs may be able to offer such hybrid terminals, using the mobile networks in the street under an MVNO licence but under complete ISP control in the home.
There is very little general market visibility in Europe. In Germany, an offer of this type from a major ISP has already been aborted at the last minute, even though initial press announcements had been made, because the hybrid phones were “no longer for sale”!

New uses for phones
Another trend in 2006 will be the extension of our use of phones. Users are already familiar with video communication offers, which have been promoted by both mobile operators (3G operators) and ISPs (videophones using DSL access) in 2005. Adoption has taken off rather more rapidly than expected, with average communication times double that of audio communications, and more importantly, a call rate that increases considerably with length of subscription, which is a very good sign. The success of these offers depends for the most part on the user base.
The strong synergies between video communication services from ISPs and 3G services from mobile operators are bringing the two types of operator closer. To extend the user base, it is necessary to adapt the marketing approach to each market segment: the grandparents' phone can be expensive (no discounts) because it makes a good Christmas present from the family, but the monthly subscription must be low because the phone receives calls but cannot be used to make outgoing calls and does not use DSL. On the other hand, outgoing calls can be charged at a relatively high rate. A video terminal for a couple with a young child may be discounted but the subscription rate can be priced higher because the couple are regular Internet users and make numerous outgoing calls.
The marketing approach for video must be radically different to the approach used for voice as long as the market remains immature. However, due to a lack of time, current video offers are still too close in structure to those of voice. If operators spend a little more time on the marketing and regulatory elements of video communication, we should see well-targeted video 'packs' appear in 2006, such as: Parents/Grandparents Xmas Pack, Home to Family Overseas Ethnic Pack, 3G/Fixed Line Video Pack, etc...
2006 will reveal which elements, from all these possibilities, will become part of our everyday lives. It's a safe bet that, between now and then, further discoveries will have arrived to surprise us in our communications. The main outlines for 2006-2010 are becoming clear, however: personal wireline and/or wireless telephones, videophones and TVs that work in the home or in the street, in your own country or in a hotel, in all homes – even those without a PC. The boundaries between television and phones will become fuzzy, with everyone becoming a producer as well as a consumer of video content, and direct, personal viewing gradually taking over from stored, and often copied, content. There is still much work to be done!

Olivier Hersent is the Chairman and CTO of Netcentrex.


VoIP Internet Telephony offers enormous potential, but without adequate security, it also invites serious risks. Tareque Choudhury puts forward the case for Unified Threat Management

Revolutionary Internet-based technologies within the telecoms industry are generating new applications and revenue streams while lowering operating costs. Such developments are now challenging the role and business model of traditional telecommunications companies, according to the Organisation for Economic Co-operation and Development (OECD) in its 2005 Communications Outlook report.

BT, Deutsche Telekom, and France Telecom have all seen some of their traditional lines of business shrink, yet enjoyed tremendous growth in providing broadband access. Now, a growing number of carriers are committing themselves to a faster transition to the ‘next generation of networks’ such as Internet Telephony.
It is in Internet Telephony that Voice over Internet Protocol (VoIP) firms Skype and Vonage are expected to offer significant competition, threatening the fixed line revenues of traditional carriers, especially for international calls. A comparison of the cost of calls via Skype with those via traditional fixed-line carriers in OECD countries, for example, revealed an average saving of 80 per cent using Skype.
Although for now the consumer VoIP market is not large, it is expected to grow rapidly. Even the OECD is prepared to believe estimates that 50 per cent of the world's telephone traffic may be based on VoIP by 2006. It is figures like this that have driven up the potential value of Skype's Internet Telephony technology to such a degree that eBay is prepared to pay $2.6bn for it. Skype generated only $7m in revenues in 2004, but is expected to raise $60m in 2005 and more than $200m in 2006.
This competition is forcing telecoms providers to rethink their services, prices, ways of attracting and retaining customers, and critically, their security to maintain those services.
VoIP may be the hot technology, but telecoms providers still have the usual suspects – hackers, viruses, spam, and blended threats of all three – to combat. Blended threats are exploits that use more than one mechanism to spread and/or execute. For example, a malicious e-mail might download a key logger, ‘blending’ the classic Trojan attack with spyware.
Now, as the threats are blended, so organisations must adopt an ‘application layer firewall’ to provide an equally intelligent, blended defence. A single vendor wanting to protect its customers from these attacks must offer products that defend in depth at the packet and application layers, and customers are demanding their vendors provide this cross-layer protection.
It is clear that VoIP Internet Telephony offers enormous potential, but without adequate security, it also invites serious risks, which has meant the scale of security threats to telecoms providers has increased markedly.
Threats to 'plain' telecoms involved analogue connections, circuit switches and break-ins to PBXs; but the more that companies go over to VoIP and implement Multiprotocol Label Switching (MPLS) networks – essentially Virtual Private Networks using IP addressing – the more they expose themselves to greater threats. Because MPLS is routable, and uses an IP address, the whole landscape of security can be affected, including 3G mobile services.
Telecoms providers are constantly aware of the need to provide additional security. Overall network availability has gone up to over 99.999 per cent, but that can generate more customer complaints as businesses place greater reliance on those services.
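'Five nines' availability sounds near-absolute, but a quick calculation shows the downtime budget it still allows, which helps explain why customer complaints persist at that level:

```python
# What 'five nines' availability still permits: the annual downtime budget.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of outage per year allowed at a given availability level."""
    return MINUTES_PER_YEAR * (1.0 - availability)

print(round(downtime_minutes_per_year(0.99999), 2))  # ~5.26 minutes a year
```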
In response to a recent survey of security concerns, one major UK telecommunications company categorised its security priorities. Top of its list were: ensuring the integrity, confidentiality, and availability of its own and customers' data; concerns about the volumes of data that need to be secured; keeping the security profile of devices such as firewalls up-to-date and in a known condition; and, critically, the need to avoid the embarrassment of a major security incident.
However, this need for security must be balanced by performance issues, and not jeopardise the capability of the systems to operate as designed.
Clearly, a solution is needed to all this complexity, because managing virus detection and prevention, firewalls, network intrusion detection and prevention, patch management, access control and penetration testing is labour intensive and expensive to manage and co-ordinate. Indeed, the emphasis has moved from threat avoidance to threat management for telecoms providers.
These solutions need to be cost effective, co-ordinated, offer streamlined administration, be interoperable, and comprehensive. It is no use offering services that are a mile wide and only a few inches deep. They have to be a mile wide and a mile deep, and manageable as well.
Telecoms providers such as France Telecom and BT Syntegra have been ramping up their managed services to clients for the last couple of years, and require effective security products to manage their virus-checking, URL blocking, content filtering, and blended threats.
They need a ‘Mercedes-like’ security plan that, when deployed, provides proper perimeter and desktop protection, as well as proactive filtering defences. In other words, a ‘hybrid’ approach to technical strategy and architecture, which, combined with a central management solution enables global security policies and policy changes to be made quickly, easily and accurately. 
This Unified Threat Management (UTM) approach achieves a secure and cost effective solution that makes use of unified multiple architectures at the gateway, protecting everything that goes in and out of the network. Offering a security framework that protects the entire stream, providing ‘total stream protection’, it utilises a gateway security device such as a firewall that can inspect traffic to the demilitarised zone and deliver proactive security, URL filtering and virus scanning, while at the same time inspecting traffic for the internal users on the corporate network.
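The 'total stream protection' idea can be sketched as a gateway that admits a flow only if every check in a unified chain passes, spanning both the packet and application layers. This is a minimal illustration; the ports, hosts and signatures below are invented rules, not any vendor's implementation:

```python
# A toy unified gateway: each flow is checked at the packet layer and the
# application layer, and admitted only if all checks pass.

BLOCKED_HOSTS = {"blocked.example"}       # illustrative URL-filter list
VIRUS_SIGNATURES = (b"EICAR",)            # illustrative signature set

def packet_filter(flow: dict) -> bool:
    # packet layer: permit only known service ports (web, TLS, SIP)
    return flow.get("port") in {80, 443, 5060}

def url_filter(flow: dict) -> bool:
    # application layer: proactive URL/host blocking
    return flow.get("host") not in BLOCKED_HOSTS

def virus_scan(flow: dict) -> bool:
    # application layer: simple signature match on the payload
    payload = flow.get("payload", b"")
    return not any(sig in payload for sig in VIRUS_SIGNATURES)

CHECKS = (packet_filter, url_filter, virus_scan)

def admit(flow: dict) -> bool:
    """A flow traverses the gateway only if every unified check passes."""
    return all(check(flow) for check in CHECKS)

print(admit({"port": 80, "host": "ok.example", "payload": b"hello"}))   # True
print(admit({"port": 80, "host": "ok.example", "payload": b"xEICARx"})) # False
```

The design point is that one coordinated chain replaces separately managed point products, which is the cost and administration argument made above.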
The threats will continue, in new and varied forms. According to BT's former chief technologist Peter Cochrane, the next security irritant for telecoms companies and their users will be VoIP spam, an “endless series of automated advertising and human enquiry annoyances delivered over a voice network”. It has been suggested that 80 per cent of the Internet's capacity is already consumed by spam, viruses, worms, spyware, denial of service attacks and other malware.
All organisations have to manage their security in a threatening world. But given our reliance on communications – from mobile 3G services to Internet Telephony – having an effective and comprehensive management capability on hand to deliver a secure and reliable service to customers is a prerequisite. Unified Threat Management ensures telecoms providers can go a long way towards achieving that goal.

CyberGuard Corporation is exhibiting at Infosecurity Europe 2006 on the 25th - 27th April 2006, in the Grand Hall, Olympia, London. www.infosec.co.uk

Tareque Choudhury MSc CISSP, is Technical Director, EMEA, CyberGuard Corporation.

Network planners need to consider a number of initial architecture decisions when designing a triple-play network. And the choice of an MPLS and VPLS-based Ethernet infrastructure is key to a successful deployment, say David Ginsburg and Gary Holland

The successful deployment of new triple-play services very much depends on the choice of infrastructure. Carriers who have built triple-play networks using DSL for subscriber access, ATM for DSLAM aggregation, and IP for Internet access are increasingly running into problems. These problems include DSLAM bandwidth limitations and ATM scalability as the number of subscribers grows. Consequently, carriers are rapidly changing their network infrastructure, migrating to IP DSLAMs instead of ATM DSLAMs and using Gigabit Ethernet for DSL aggregation instead of ATM. 

Choice of infrastructure has also proved critical to the success of new Ethernet business services. Many carriers who built Metro Ethernet networks to offer higher bandwidth, lower cost, transparent and more flexible services to large business customers have learnt the hard way that Ethernet on its own is lacking. True Carrier Ethernet networks require MPLS-based Virtual Private LAN Service (VPLS) to provide carrier attributes such as scalability, reliability, and traffic engineering that Ethernet alone cannot offer.
As many carriers now realise, Ethernet and VPLS also provide an ideal combination to build triple-play networks for the following reasons:
• Satisfies the technical requirements for triple-play services
• Allows new business models for new services
• Provides bandwidth flexibility
• Offers scalability and reliability
• Supports QoS and traffic engineering
• Includes security features
What's more, new enhancements to VPLS, such as Hierarchical VPLS (H-VPLS) and VPLS multicast, will provide the enhancements that carriers need to expand their triple-play networks significantly.
Network planners need to consider a number of initial architecture decisions when designing a triple-play network. These include where to deploy the Broadband Remote Access Servers (BRAS) used to authenticate subscribers, how to provide Quality of Service (QoS), how and where to deploy IP multicast, and network bandwidth sizing. These are discussed in the following sections.

BRAS Placement
A great deal of discussion has been given to the role and placement of the BRAS within triple-play networks. This is often referred to as the single versus multiple service edge discussion. In a single service edge, all traffic, including video, flows through the BRAS that is either integrated or co-located with the DSLAM. In a multiple service edge, only Internet traffic leaving the triple-play network traverses the BRAS. The discussion focuses on whether subscriber management should be placed in the data path of all traffic, or only for traffic destined for the Internet. A number of factors need to be considered here, including the cost of the BRAS, suitability for video traffic, and whether other methods of policy enforcement are possible.
BRAS were initially deployed as part of oversubscribed ADSL deployments, with the average bandwidth per subscriber limited to about 30-50 Kbps, and where scalability was based on the number of subscribers the platform could support. Today, the access technology is ADSL2+, VDSL, or even fibre, delivering the megabits of traffic required for IPTV. Most carriers have selected a multiple service edge architecture for reasons of cost and manageability, where the BRAS is placed at the edge of the triple-play service area, with all other traffic flowing across Ethernet routers. The cost of this centralised model may be 2-5 times less per port than the distributed model, and there are also fewer IP devices to manage.

Quality of Service
How to apply QoS across the network is closely related to BRAS placement. In one model, the DSLAM maps customer circuits into individual VLANs with associated QoS, and these are then mapped into per-service VLANs (and then into MPLS LSPs) closer to the network core. In another model, the DSLAM maps customer circuits directly into per-service VLANs with associated QoS. The advantages of the per-subscriber, per-service model are that it offers finer-grained control in the aggregation network and that less intelligent DSLAMs can be used.
However, these advantages are more than outweighed by the additional amount of management required in the aggregation network. For example, with 500 subscribers per DSLAM and 4 services per subscriber, a single interface on the edge Ethernet router would need to support 2,000 VLANs with their associated shapers and counters, and therefore cost. The complexity increases with every new subscriber and configuration change. In addition, operations are further complicated by the demands of video traffic, where multicast traffic needs to traverse a shared VLAN while unicast traffic maps to per-subscriber VLANs.
In contrast, per-service-only mapping preserves the 'transport' intent of the aggregation network, where control is applied to the aggregate and not to individual customers. Configuration in this model remains static as subscribers are added, and the physical interfaces in the network are independent of the number of subscribers. In this model, the IP DSLAM needs to control the unicast and multicast traffic in conjunction with the BRAS and the video servers at the edge of the aggregation network. In many cases, QoS is provided across the Ethernet aggregation network on a per-service basis, which requires network modelling to implement. Where the proper choice of QoS model remains a subject of debate is at the service edge – the part of the network between the DSLAM and the core of the Ethernet network. Here, both models are applicable.
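The scaling difference between the two mapping models can be sketched with a quick calculation. The 500-subscriber, 4-service figures come from the example above; the function names are illustrative:

```python
# VLAN counts on one edge-router aggregation interface under the two QoS
# mapping models: per-subscriber-per-service versus per-service only.

def per_subscriber_per_service_vlans(subscribers: int, services: int) -> int:
    """Each subscriber gets its own VLAN for each service."""
    return subscribers * services

def per_service_vlans(services: int) -> int:
    """One shared VLAN per service, independent of subscriber count."""
    return services

subs, svcs = 500, 4
print(per_subscriber_per_service_vlans(subs, svcs))  # 2000 VLANs to manage
print(per_service_vlans(svcs))  # 4 VLANs, static as subscribers grow
```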
At a high level, the network should be sized to handle the expected peak rates of the different traffic types. Hardware-based hierarchical QoS using Diff-Serv AF queuing or 802.1p marking can be used to prioritise video and voice traffic and then provide aggregate QoS on a per-subscriber basis. Some data traffic, such as business customers' data traffic, can also be prioritised, with non-business data given best effort.
In a typical deployment, customers connected to the DSLAMs have one or more circuits depending upon the services required. These could include circuits for broadcast TV, Video on Demand, VoIP, and Internet access, with local 'on-net' services sharing one circuit and Internet access assigned to a second. The provider sets different upstream and downstream bandwidths for each of these services, and the DSLAM performs Priority Queuing to permit some traffic to have precedence over other traffic.
The circuits are mapped into per-service VLANs across the Ethernet uplink. Several VLANs are defined in the backbone: Multicast, Internet, Content on Demand, Local Services, and Management. Each VLAN is assigned a Class of Service (CoS) and aggregate bandwidth for traffic from the DSLAM, and each VLAN is then mapped into a VPLS instance on the Ethernet edge router.
The sizing of the core, taking into account current and projected traffic growth, is another important network design decision that impacts QoS. Most carriers have initially deployed multiple Gigabit Ethernet links across their backbone networks, and many will soon need to upgrade their backbones to 10 Gigabit Ethernet links to provide ample headroom for growth.
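As a rough illustration of such sizing, the sketch below totals peak video, voice and oversubscribed data traffic on a core link. All subscriber counts and per-service rates here are assumed figures for illustration, not figures from the article:

```python
# Back-of-the-envelope core-link sizing for a triple-play network.

def core_peak_gbps(subscribers, video_share, video_mbps,
                   voice_kbps, data_mbps, data_oversub):
    """Peak core bandwidth in Gbps for the three service classes."""
    video = subscribers * video_share * video_mbps  # concurrent unicast video
    voice = subscribers * voice_kbps / 1000.0       # VoIP, small per line
    data = subscribers * data_mbps / data_oversub   # oversubscribed best effort
    return (video + voice + data) / 1000.0          # Mbps -> Gbps

# Assumed: 50,000 subscribers, 20% watching 3.5 Mbps SD video concurrently,
# 100 kbps voice per line, 2 Mbps data at 20:1 oversubscription.
print(round(core_peak_gbps(50_000, 0.20, 3.5, 100, 2, 20), 1))  # 45.0 Gbps
```

Even with modest assumptions the total quickly exceeds a handful of Gigabit Ethernet links, which is why the 10 Gigabit upgrade path matters.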

IP multicast
IP multicasting to support broadcast video also dictates the choice of technology and the network architecture. A Layer 3-centric approach may use Ethernet routing across the aggregation network. Here, the routers implement PIM, creating the multicast tree from the video source to the DSLAMs. The DSLAM may implement PIM as well, or more simply, IGMP pruning. In an alternative Layer 2-centric model, only the router co-located with the video server implements multicast, with the traffic replicated at that point and forwarded across the aggregation network. The DSLAMs receive all multicast groups and then implement PIM or IGMP Proxy.
One area of contention is the amount of time for channel surfing under both architectures. At least one vendor states that the Layer 2 approach results in quicker channel selection. The reason stated is the tree establishment times under PIM. In fact, this delay may be only 5 per cent of the total channel selection time, with the majority of the delay due to resynchronisation of the client.
Another area of contention is re-convergence of the multicast trees if part of the network fails. This focuses on the time required for the IGP to re-converge routing and then PIM to re-establish the multicast trees versus VPLS recovery times. Given proper network design, both approaches should result in an acceptable viewing experience to the end user.
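The channel-change timing argument can be made concrete with a toy breakdown. Apart from the roughly 5 per cent share attributed to PIM tree establishment above, the millisecond values are illustrative assumptions:

```python
# Toy breakdown of IPTV channel-change ("zap") time.

def zap_time(igmp_ms: int, pim_ms: int, resync_ms: int):
    """Return total zap time and the fraction of it spent in PIM."""
    total = igmp_ms + pim_ms + resync_ms
    return total, pim_ms / total

# leave/join signalling, PIM tree setup, then client resynchronisation
# (decoder buffering dominates the total):
total, pim_share = zap_time(igmp_ms=50, pim_ms=50, resync_ms=900)
print(total)  # 1000 ms end-to-end
print(round(pim_share, 2))  # 0.05 -> PIM is ~5% of the delay
```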

Resiliency
The major advantage of deploying VPLS within the network is the resiliency capability of MPLS. Unlike VLAN Spanning Tree, the MPLS reroute and backup capabilities are designed to route around failed links or network elements with minimal interruption to the application. This implies that traffic switches to the backup path in the sub-250ms range, and also implies multi-vendor interoperability if the MPLS backbone extends across multiple vendors.
Device level resiliency is also paramount for successful triple-play deployment.  This includes both hardware and software resiliency, and will become even more critical as carriers converge business and mobility services onto their Ethernet-based triple-play infrastructures.

Future Plans
VPLS is evolving as a standard and enhancements will be implemented in triple-play networks. The first will be the implementation of Hierarchical VPLS (H-VPLS). VPLS in its most basic form creates a full mesh between MPLS Provider Edge (PE) devices. 
H-VPLS extends this by creating a hierarchy through the use of 'spoke' circuits based on MPLS Martini or Q-in-Q. The mesh is therefore limited to a smaller number of PEs, and devices referred to as Multi-Tenant Units (MTUs) connect via these spokes to the PEs. This two layer hierarchy permits the carrier to substantially expand the VPLS deployment.
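The scaling benefit of the hierarchy is easy to quantify: a full mesh needs N(N-1)/2 pseudowires, while H-VPLS needs a mesh only among the PEs plus one spoke per MTU. A quick sketch, with illustrative device counts:

```python
# Pseudowire counts: flat VPLS full mesh versus a two-level H-VPLS hierarchy.

def full_mesh_pws(n_devices: int) -> int:
    """Full mesh: every device pairs with every other, n*(n-1)/2 pseudowires."""
    return n_devices * (n_devices - 1) // 2

def h_vpls_pws(n_pes: int, n_mtus: int) -> int:
    """Mesh only among the PEs, plus one spoke pseudowire per MTU."""
    return full_mesh_pws(n_pes) + n_mtus

print(full_mesh_pws(100))  # 4950 pseudowires if all 100 devices fully mesh
print(h_vpls_pws(10, 90))  # 135 for 10 meshed PEs (45) plus 90 spokes
```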
H-VPLS also provides the basis for inter-domain H-VPLS (also known as the E-NNI) whereby carriers may link their VPLS networks with those of other carriers. Alternatively, it could be used to partition the network as the subscriber count grows. The E-NNI will rely on multi-homed spoke circuits between multiple VPLS clouds, and is under discussion within both the IETF and the MEF.
As these triple-play deployments evolve, more efficient forms of multicasting across VPLS will be deployed. The VPLS tunnel and PIM methods described earlier will make way for true point-to-multipoint circuits across the VPLS backbone, permitting direct mapping of multicast groups into VPLS instances.  This is being addressed within the IETF.

Traffic engineering
On the traffic engineering side, the earlier DSLAM and network sizing discussions lead to the subject of High Definition TV (HDTV) and expected deployments. Unlike the US, where migration to digital TV is mandated and is providing a catalyst for HDTV, Europe and some other regions are just entering the market and HD is currently only available via satellite. One comment sometimes made is that PAL delivers sufficiently better picture quality than NTSC to blur the differences. However, HDTV will arrive, and triple-play networks need to have sufficient headroom to handle the additional bandwidth required.
The choice of an MPLS and VPLS-based Ethernet infrastructure is key to the successful deployment of new triple-play networks. It offers the flexibility to adapt to changing market conditions, and delivers cost reduction and simplification over traditional IP networks by using Layer 2 services and MPLS. Most importantly, it permits carriers to gain further network efficiencies and deliver truly innovative triple-play services, continuing to stay ahead of their competition.

David Ginsburg is VP of Marketing and Product Management, Riverstone Networks, and can be contacted via e-mail: gins@rstn.net
Gary Holland is Director of Marketing EMEA, Riverstone Networks, and can be contacted via e-mail: gary.holland@riverstonenet.com

External Links

Riverstone Networks

Session Initiation Protocol (SIP) can bring huge benefits in the ways that telecommunications services are created, marketed, delivered and billed for, says Alun Lewis

The telecommunications industry is sometimes its own worst enemy when it comes to giving its customers what they really want. Those customers, however, aren't just the end users – they're also those companies that want to sell communications services in new, creative and cost-effective ways and compete against already established players.

These new entrants have all too often in the past found their ambitions stifled by over-complex technologies or excessively high system costs, effectively blocking them from competing on a level playing field. That picture is now, however, changing quickly, thanks in part to the emergence of a new technology – Session Initiation Protocol (SIP) – which can bring huge benefits in the ways that telecommunications services are created, marketed, delivered and billed for. 
One of the main historic stumbling blocks has been the industry’s circuit-switched heritage, where the systems needed to support even the simplest voice services took literally months to build and implement. As new technologies and services were rolled out, each added yet another expensive new silo to the management overhead, requiring specialist skills and support services.
Such a self-crippling approach to service creation is no longer tenable. The new telecommunications environment – irrespective of whether you’re an ISP, alternative or virtual carrier, incumbent, fixed or mobile operator, or all of them put together – is characterised by openness, flexibility, innovation and a far speedier metabolism than was ever required in the long gone days of the monopolies. But how to support this in the technological shift away from the circuit world to one dominated by the increasingly ubiquitous IP/TCP protocols of the Internet world?
While there’s a huge amount of invaluable functionality embedded in the circuit-switched network, we need to find ways to make this inter-work as easily as possible with all the new advanced services that the use of IP makes possible. Essentially, it took just a few simple communications and mark-up language protocols to unleash the trillion-dollar industry that is the Internet today. What does the industry have up its sleeve now to drive a similar revolution in the ‘industry previously known as telecommunications’?
The answer, already familiar to the hardcore technologists amongst us, is SIP and it’s already being deployed in customer networks by some of the more nimble and innovative communications solutions providers such as the UK’s Digitalk.
I spoke recently to Justin Norris, MD of the company, about where he saw SIP heading and what its real potential was for opening up the telecoms landscape to new competition. His first thoughts were that: “For once, with SIP, we have a technology that actually does what it says on the box. SIP is concerned with setting up sessions between devices, applications, services and users – almost irrespective of whether those sessions involve a voice call, an Instant Message transaction, a request to view a video, or a request for information from a particular service on a user’s current device or location. In particular, it represents an important break with past traditions that have been holding us back as an industry.”
This historical context is important to understand, as SIP’s heritage goes back to the late ‘90s when the telecommunications industry was casting envious eyes at the speed of service development in the web and Internet realms and wondering what lessons it could import to solve its own problems. While the industry had developed its own sets of protocols for initiating services and applications – most notably H.323, which was being used to drive PC/PBX telephony and video conferencing applications – these betrayed their telecommunications origins in that they imposed a heavy signalling overhead on the supporting networks and were generally over-engineered and over-complex for what they actually did.
In parallel to this, many in the industry were also becoming frustrated with the limitations imposed by Intelligent Network (IN) technologies. By effectively separating the switching network from its controlling mechanisms, IN was supposed to open up the telecoms world to new competition and services. But, for a variety of reasons, this brave new world was never realised and IN became as much a part of the problems as a solution.
Riding to the rescue came SIP, born partly from academia and partly from industry, and intended to solve many of the problems that were then bugging the closer integration of IT and telecoms. In direct contrast to the technologies mentioned above, SIP took its inheritance from HTTP – the protocol behind the World Wide Web. Using only simple text-based commands – rather than a complicated and dedicated programming language known only to a few expensive specialists – SIP-based services could be designed, written, tested and deployed in a matter of days and weeks, rather than the months and even years that traditional technologies required. As a mark of the perceived potential of SIP at that time, Microsoft chose to incorporate SIP into Windows XP.
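This text-based simplicity is easy to demonstrate. The sketch below – purely illustrative, with hypothetical addresses, and far short of a real SIP stack (which adds Via, CSeq, authentication, SDP bodies and so on) – composes and parses a bare-bones INVITE request as plain text:

```python
# Sketch: a SIP request is just structured text, much like an HTTP request.
# All names and addresses here are hypothetical examples.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    """Return a minimal SIP INVITE request as plain text."""
    lines = [
        f"INVITE sip:{callee} SIP/2.0",   # request line: method, target, version
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>",
        f"Call-ID: {call_id}",            # ties all messages of one session together
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

def parse_headers(message: str) -> dict:
    """Split a SIP message into its header fields."""
    headers = {}
    for line in message.split("\r\n")[1:]:  # skip the request line
        if not line:                        # blank line ends the headers
            break
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return headers

msg = build_invite("alice@example.com", "bob@example.net", "abc123")
print(parse_headers(msg)["Call-ID"])  # abc123
```

The point is not the code itself but that any developer can read, generate and debug such messages with ordinary text tools – precisely the property that made web-era service development so fast.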
Sadly then, as we all know, the ‘nuclear winter’ hit the global telecoms industry and, with the whole business sector sinking into survival mode, investment in new service architectures slowed and almost halted.
Fast forward from those dark days to the present and it could be said – without too much marketing hyperbole – that SIP’s time has finally come. Digitalk’s Norris comments: “Most importantly, it has an increasingly key role in achieving what might be called ‘joined-up’ telecommunications, where users can move seamlessly between different devices, services, access networks and media – and service providers themselves can begin to bundle multiple services together in innovative and cost effective ways to reach clearly defined market niches.”

SIP the subversive
For a start, SIP might also deservedly be known as ‘Subversive Internet Protocol’, in that it has had a key role to play in removing actual network ownership from the list of assets a service provider must possess. For many established telecommunications operators, the main threats they see no longer come from other incumbents, but from ‘left-field’ players like Skype, Vonage, Google and eBay who are hitching a ride on the back of VoIP services which, in some situations, will never even touch the world’s traditional telecommunications infrastructure.
Using SIP to set up and take down call sessions and service transactions, customers’ laptops, PCs and PDAs can now communicate directly with one another, moving easily between voice, text and video messaging – or running all three simultaneously. As long as the customers have a link into the Internet over broadband or WiFi and there’s sufficient bandwidth for these transactions, then call quality can be as good as the traditional PSTN.
This window of opportunity however isn’t going to last for long. Some incumbent service providers – such as BT with its 21st Century Network initiative – are already starting to switch their own networks to use SIP and offer services like VoIP, while mobile and wireless operators are intent on using SIP’s central role in 3G architectures to also support this new model.
Since Digitalk does a significant proportion of its business with alternative service providers, I asked Norris exactly what this kind of customer could do with SIP-based technologies to build a business.
“For a start, it makes it a great deal easier when it comes to creating a set of unified services, offered over a common platform, at an economic price to a particular group of users such as the student community or SMEs. Both are heavy users of different communications systems, both are looking to simplify their current options and both are concerned with price and performance.
“On the basis of a comparatively limited investment in a switching, applications and billing platform – supported by appropriate roaming and inter-working agreements with network owners – it becomes possible for a service provider to offer mixed-service portfolios, combining elements of broadband, dial-up, mobile and wireless access with services such as voicemail and e-mail, instant messaging and texting – or even more advanced commercial services such as IP-based centrex. It’s also possible for service providers to radically cut their support costs, giving the customer direct web-based control over their own provisioning or payment mechanisms.”

Attractive to the youth market
One of the important issues here is that packages of services such as this will become increasingly attractive to the youth market in particular. Unlike previous generations, they have far less resistance to the idea of getting their different communications from one single supplier. By contrast, older generations have grown up with a mental model of distinct and separate suppliers for different services such as fixed voice, mobile voice, email, web browsing and entertainment. The current new wave of Fixed-Mobile Convergence is set to change this perception radically.
In terms of the commercially attractive functions that can be delivered by SIP-based technologies, two in particular stand out when it comes to the SME market. The first, known as Push-to-Talk (PTT), allows users to set up buddy lists of their community members – such as employees of a small company – and then use these to carry out walkie-talkie-style conversations with individuals or the group over suitably equipped mobile devices. As opposed to standard voice calls, call setup is nearly instantaneous and services can be offered below the usual market rates. PTT services have been particularly successful so far in the US, where there is a long tradition of personal radio use, and the application has had especially high take-up rates in industries such as construction, where the mobile handset can now provide the same functionality as a personal radio.
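Conceptually, PTT is a one-to-many fan-out over a buddy list. The sketch below is an illustration only – a real PTT service rides on SIP signalling and media streams, and the crew names here are hypothetical:

```python
# Sketch: push-to-talk as an instant fan-out of one audio burst
# to every online member of a buddy list. Illustrative only.

def push_to_talk(sender, buddy_list, audio_chunk, deliver):
    """Send one audio burst to every online buddy except the sender.

    buddy_list maps buddy name -> online flag; deliver(buddy, chunk)
    is whatever transport actually carries the media.
    """
    delivered = []
    for buddy, online in buddy_list.items():
        if buddy != sender and online:
            deliver(buddy, audio_chunk)
            delivered.append(buddy)
    return delivered

crew = {"foreman": True, "driver": True, "crane-op": False}
received = []
push_to_talk("foreman", crew, b"over", lambda b, chunk: received.append(b))
print(received)  # ['driver'] -- only the online, non-sending buddy
```

Because there is no per-call setup negotiation with each recipient, the burst reaches the whole group almost instantly – the property that makes PTT feel like a personal radio.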
For Digitalk, this idea of ‘joined-up’ applications is an important aspect of  SIP’s capabilities, as Norris explains: “The technology can also be used to easily integrate communications with personal availability, location and contact information that is held on existing IT systems, such as Microsoft Outlook. This means that an incoming call to a team member can be automatically directed to them on the most appropriate device – fixed phone, mobile, PDA, etc – or redirected by default to the next available or relevant team member. This pushes up call completion rates, contributes to customer satisfaction and reduces user churn by increasing subscriber ‘stickiness’.”
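The routing logic Norris describes can be pictured as a simple rule: walk each team member’s preferred devices in order and ring the first one whose presence state allows it. This is a minimal sketch under assumed data – the team, device names and presence states are hypothetical, not Digitalk’s implementation:

```python
# Sketch of presence-based call routing: direct an incoming call to the
# first available (member, device) pair, honouring device preference order.
# All names and statuses below are illustrative assumptions.

def route_call(team, presence):
    """team: list of (member, [devices in preference order]).
    presence: maps (member, device) -> 'available' | 'busy' | 'offline'.
    Returns the (member, device) to ring, or None for voicemail."""
    for member, devices in team:
        for device in devices:
            if presence.get((member, device)) == "available":
                return member, device
    return None  # nobody reachable: fall through to voicemail

team = [
    ("alice", ["mobile", "desk-phone"]),
    ("bob", ["desk-phone", "pda"]),
]
presence = {
    ("alice", "mobile"): "busy",
    ("alice", "desk-phone"): "offline",
    ("bob", "desk-phone"): "available",
}
print(route_call(team, presence))  # ('bob', 'desk-phone')
```

Alice is busy or offline on both devices, so the call falls through to the next team member – exactly the behaviour that pushes up call completion rates.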
Finally, when it comes to paying for services, SIP also has an important role to play in increasing the flexibility of the underlying tariffing and billing processes, from supporting standard post-paid models right through to pre-paid accounts and calling cards – an area that Digitalk has specifically targeted through powerful functions built into its Multiservice Platform.
Norris concludes: “It’s estimated that by 2008, VoIP applications alone using SIP will account for 13 per cent of the European voice market. That may not sound like a large proportion, but the total size of the market is measured in billions, so it’s still going to be a very healthy business for those with the right infrastructure. That figure also ignores the huge potential of all the other SIP-based services, from messaging to presence and availability. With SIP we really do have a building block for the future of telecommunications.”             

Alun Lewis is a freelance communications writer and consultant. alunlewis@compuserve.com

Revenue assurance teams have resolved many of the problems that faced their companies ten years ago through the introduction of new techniques, custom-built code and off-the-shelf software. However, all of these moves have failed to resolve one of the most costly issues facing the industry: the inaccuracy and inconsistency of data across the many data silos within a telco. Paul Hollingsworth looks at a possible solution

How much revenue do you lose to inaccurate data across your systems – from failed orders and dissatisfied customers, for example? How much cost could be avoided by removing unnecessary data-entry duplication and data correction? Gaining control of shared data across multiple systems will make a large impact on both top and bottom lines, yet few operators have begun to take on this apparent revenue assurance hole. This article looks at some of the reasons for this and outlines a proven approach for a solution.

The reasons for the industry's failure to solve the problem are many, but certainly include the fact that the problem doesn't belong to Billing, Customer Management or indeed any single functional domain. The problem could be classed as one of revenue assurance, yet it is not usually considered within scope. Why is this?
Take a simple definition of revenue assurance: 'The activities of ensuring and validating that the revenue collected equals the chargeable service used.' This dictates that the subject covers everything from chargeable event identification to debt management. However, it doesn't get close to covering the problems caused by the use of multiple databases. For example, customers may be unable to access a service that has not been set up correctly.
Let us consider all revenue assurance activities as falling into one of the following categories:
• Chargeable event capture, billing and settlement
• Reconciliation and analysis of revenue data
• Policing of customer and partner activity
• Data accuracy assurance and correction
This may not agree with everyone's decomposition analysis, but that doesn't matter for the following argument: nearly all of the discussion, operator development and software product activity have been focused on the first three above.
These are, of course, all important and necessary. However, with so little focus on ensuring that the shared data across many applications is accurate and up to date, how do we know that the revenue identified by the existing analysis process is correct? We have also left a large door open for revenue leakage:
• Failure to sell or provision a service correctly, hence take-up of the service by the customer is delayed (or abandoned)
• Inaccurate allocation of inventory or network resources and unnecessary truck rolls
• Failure to bill due to service data being wrong or absent
• Customer disputes and resulting credits caused by incorrect bills
The list could go on, but the point is that the industry has not spent the necessary effort, nor gained the necessary reward, by attacking the issue of data accuracy. This causes operators to fail to provision services or to bill correctly, and results in both lost revenue and a high cost to resolve the errors, if and when they are identified.
The major cause of the high level of data inaccuracy is that most telecommunications operators' business support infrastructures are built of many tens or hundreds of systems, each with its own siloed copy of shared data. Even if data is created consistently across the architecture – which of course it often isn't – the chance of retaining data accuracy over a few months of operation is small. Therefore errors occur with high regularity.
If the systems supporting the business were stable then the problem might be manageable. But with constant additions and evolutions – to create greater flexibility and remove cost – the problem of managing duplicated data across these systems becomes impossible. Also, if the revenue assurance capability is not highly flexible then it potentially adds to the problems of slow business transformation and high cost. Therefore any solution can be neither a fixed set of processes nor a fixed data model – making the solution harder still.

Proven approach
A proven approach is to ensure that the data needed by multiple applications is managed from a single place. This does not mean that the data need be mastered in a central repository, but that data is federated from the existing applications. It is then possible to assure that data changes are correctly applied across multiple applications, and also that inadvertent modifications cannot happen without at least a warning being sounded.
Such an application, which federates data as opposed to creating a copy or aggregate of business data, does not add to the duplication of data across the system infrastructure. Federating the data ensures that the latest information is always available and that any inconsistencies are identified and fixed simply or even automatically.
Rules can be associated with data to ensure that any change is reported or passed on correctly to other systems sharing the same data. These rules provide part of the product, service and content definition and have associated life-cycles. Of course the system also requires a significant level of resilience since the enterprise data can change at any time and needs to be applied correctly across multiple systems otherwise errors will occur.
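As a rough illustration of the federation idea described above – not Celona's actual product, and with hypothetical system and field names – the sketch below routes every change to a shared field through one point that fires the associated rules, propagates the update to every silo holding a copy, and can report silos whose copies disagree:

```python
# Sketch: rule-driven federation of shared data across system silos.
# Shared fields stay mastered in the source systems; changes pass through
# one point that fires rules, propagates updates and flags disagreements.
# System and field names are hypothetical.

class Federator:
    def __init__(self):
        self.systems = {}   # system name -> {field: value} (the silo's copy)
        self.rules = []     # callbacks fired on every change, e.g. audit/notify

    def register(self, name, data):
        self.systems[name] = data

    def update(self, field, value):
        """Apply a change consistently to every system holding the field."""
        for rule in self.rules:
            rule(field, value)           # e.g. report or validate the change
        for data in self.systems.values():
            if field in data:
                data[field] = value

    def inconsistencies(self, field):
        """Return each silo's copy of a shared field if the copies disagree."""
        values = {name: d[field] for name, d in self.systems.items() if field in d}
        return values if len(set(values.values())) > 1 else {}

fed = Federator()
fed.register("billing", {"tariff": "standard"})
fed.register("provisioning", {"tariff": "promo"})
print(fed.inconsistencies("tariff"))   # the two silos disagree
fed.update("tariff", "standard")
print(fed.inconsistencies("tariff"))   # {} -- now consistent
```

Note that the silos keep mastering their own data; the federation layer only coordinates, which is what allows the capability to be introduced without a big-bang migration.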

Additional benefit
An additional benefit created by managing data through federation rather than a central catalogue master is that the new data assurance capability can be delivered in small measurable and controlled steps. With a central repository it is necessary to deliver the shift in a big bang – i.e. today data is mastered in multiple places, but tomorrow everything switches to a central repository and no data changes can be applied from existing systems. If data can still be managed from the existing systems then business processes can evolve over time.
This approach to the major problem of data accuracy assures the correct delivery of product, service and content inventory. It is proven in the field and can be delivered without any impact on existing system or business transformation plans, therefore increasing profitability immediately.
The approach also enables rules to be defined to ensure that whenever product, service or content inventory is modified that change is correctly resolved throughout the architecture and that any additional revenue assurance functions, aggregations or modifications are carried out. One of the largest remaining problems for revenue assurance – errors caused by inaccurate data – can therefore be removed.                 

Paul Hollingsworth is Director of Product Marketing at Celona Technologies, and can be contacted via tel: +44 207 566 9100; e-mail: paul.hollingsworth@celona.com; www.celona.com
