Features

A new report published by the Information Security Forum (ISF) warns that the cost of complying with the Sarbanes-Oxley legislation is diverting spending away from addressing other security threats. The global not-for-profit organisation says that many of its members expect to spend more than $10m on information security controls for Sarbanes-Oxley. The business imperative to comply also means that, in many cases, the true cost of compliance is unknown.

With increasing concerns about compliance, the new ISF report provides a high-level overview of the Sarbanes-Oxley Act 2002 and examines how information security is affected by the requirement to comply. The report provides practical guidance to address problematic areas in the compliance process. According to the ISF, these problem areas include poor documentation, informal controls and use of spreadsheets, lack of clarity when dealing with outsource providers, and insufficient understanding of the internal workings of large business applications.
What's more, the Act ignores security areas that are extremely important when dealing with risks to information, such as business continuity and disaster recovery. This makes it vital to integrate compliance into an overall IT security and corporate governance strategy.
"In the wake of financial scandals like Enron and WorldCom, the Sarbanes-Oxley Act was designed to improve corporate governance and accountability but has proved difficult to interpret for information security professionals," says Andy Jones, ISF Consultant. "As neither the legislation nor the official guidance specifically mentions the words 'information security', the impact on security policy and the security controls that need to be put into place must be determined by each individual organisation in the context of their business.
 "It is important that Sarbanes-Oxley does not push organisations into following a compliance-based approach rather than a risk-based approach that may compromise information security.  The ISF report helps companies to achieve compliance while also ensuring that they have the appropriate security controls in place."
The full Sarbanes-Oxley report is one of the latest additions to the ISF library of over 200 research reports that are available free of charge to ISF Members.
Details: June Chambers, ISF, tel: +44 20 7213 2867; e-mail: june.chambers@securityforum.org; www.securityforum.org

"An economic power shift from the US to Europe is now gaining steam and promises to have a far-reaching effect on the world technology sector," asserts Cutter Consortium Fellow Tom DeMarco -- a point vigorously debated by his colleagues on the Cutter Consortium Business Technology Trends Council.

The European Union's transition from marketplace arrangement to world superpower, and a possible new age for European IT, characterised by relatively frictionless commerce and exciting growth, is debated by the Cutter Business Technology Council along with Cutter Consortium Senior Consultants Tom Welsh (UK) and Borys Stokalski  (Poland) in the latest issue of Cutter Consortium's Business Technology Trends and Impacts Opinion.
According to DeMarco: "After decades of looking over its shoulder at Japan and the East, the US economy is fast being overtaken by someone else entirely: Europe. The colossus that has been assembled in what used to be called the Common Market has emerged as an economic superpower."
Cutter Consortium Fellow Lou Mazzucchelli counters: "The European Union is taking hold, albeit with fits and starts as evidenced by the results of recent constitutional referenda in France and the Netherlands. But I see little evidence of a massive shift of economic power from the US to the EU. Perhaps it is because the gap between them is small, relative to the gap between either the US or the EU and China or India or South America. The latter have so much more to gain, potentially at our expense." 
He continues: "It is unarguable that changes in Europe have an impact on the US and the world, but of all the forces working to shift economic power from the US, Europe may be the least threatening. Europe may have gotten a second wind as an economic power, but it seems unable to run a distance race against India and China."
Details: www.cutter.com

Telecom operators across Western Europe are launching IPTV services in an effort to increase revenues and improve customer satisfaction for their broadband services. In a new study on IPTV services in Western Europe, IDC has found that the potential for success with IPTV services varies widely from country to country, depending on the penetration of existing pay TV services, the level of broadband competition, and the commitment of incumbent and leading competitive operators to investing in the network upgrades and content necessary for high-quality IPTV services.

IDC estimates that the market for IPTV services in Western Europe was worth $62 million in 2004, with less than one per cent of households subscribing to IPTV services. The market will boom over the next five years, growing from $262 million in 2005 to $2.5 billion in 2009. By that year, six per cent of Western European households will subscribe to IPTV services. By 2009, IDC expects that all European incumbents and a large portion of the major alternative tier 2 providers will offer IPTV services. DSL will be the most widely used platform for the service, though a minority of households in a few countries will receive IPTV services over metro Ethernet connections.
IDC's study Western European IPTV Forecast 2004-2009 says that, in order to be successful, broadband operators will need to differentiate their service bundles from the video services already available in the market. The area in which they will be able to do this is high-quality content underpinned by interactivity.
Details: www.idc.com

Worldwide mobile phone sales totalled 190.5 million units in the second quarter of 2005, a 21.6 per cent increase on the same period last year, according to Gartner, Inc. It was the second strongest quarter on record for the mobile phone market (worldwide sales surpassed 195.3 million units in the fourth quarter of 2004).

Nokia and Motorola have strengthened their position in the marketplace, as the two companies accounted for 49.8 per cent of worldwide mobile phone sales in the second quarter of 2005. Nokia's market share grew 2.3 percentage points in the second quarter of 2005 to reach 31.9 per cent. "Nokia regained its top position in Latin America and stepped up to third position in North America, benefiting from its successful launch with Virgin Mobile, which helped its lagging code division multiple access (CDMA) sales," says Hugues de la Vergne, principal analyst for mobile terminals research at Gartner, based in Dallas, Texas.
Motorola was the second best-selling vendor in Western Europe, a significant improvement compared to the same time last year when Motorola finished as the No. 5 vendor in the market. In North America, Motorola was the market leader with its share reaching 33.5 per cent, while it was the No. 2 vendor in Latin America with 31.9 per cent of sales in the region.
Details:  www.gartner.com

According to a recent study by Evalueserve, The Impact of Skype on Telecom Operators, the European Telecom market is expected to be hit the hardest due to the fast and accelerating uptake of Skype, which is by far the most successful P2P (peer-to-peer) VoIP solution available around the world today.

Skype has revolutionised VoIP telephony by offering high-quality voice transmission, reducing the cost of Skype-to-Skype calls to zero and cutting Skype-to-fixed/mobile calls to a fraction of current long-distance rates. European operators are much more exposed because of the characteristics of European telecom markets, where calling and roaming rates, as well as the share of roaming calls, are higher, and local calls are charged by the minute rather than covered by a flat monthly fee as in the US. Worldwide, the number of regular retail Skype users is likely to be between 140 and 245 million by 2008, the study reports.
The Evalueserve study further projects that incumbent telecom operators who combine fixed and mobile networks are likely to face a significant risk of a permanent reduction in overall profitability of at least 22-26 per cent, and a reduction in revenue of 5-10 per cent, as a direct impact of Skype by 2008.
At present, Skype has two million users in the US and 13 million users worldwide and the company claims 80,000 new subscribers daily.
Details: www.evalueserve.com

A key component of converged multi-service networks is intelligent bandwidth management to ensure Quality of Service (QoS), says Daniel Hydén

Operators worldwide are now building Next Generation Networks (NGNs) for multi-services, where telephony, data and video co-exist in a shared infrastructure based on IP technology. The market demands an open, standards-based infrastructure that allows new services to be introduced quickly and easily in response to customer demands. A single, converged IP network means considerably lower operational expenditure (Opex) as well as decreased capital expenditure (Capex) over time. A key component of converged multi-service networks is intelligent bandwidth management to ensure Quality of Service (QoS).
State-of-the-art bandwidth managers are a prerequisite for guaranteeing quality and for meeting the Opex-reduction goals of the next generation of multi-service networks.

Rationale for bandwidth management

A cornerstone of converged multi-service networks is the sharing of a common network infrastructure, and since IMS architectures promote deployment of a variety of applications all sharing network resources, user control is of paramount importance. An NGN must be able to control which user, which application and which service is utilising which part of the network at any given time. Without such control, facilitated by a carrier-class bandwidth manager, bandwidth requirements will be enormous and QoS will be dubious. 
When introducing NGNs, a variety of technical approaches can be utilised for provisioning of bandwidth reservations and QoS, but only multi-service, multi-technology bandwidth managers with session-based resource reservations meet the demands on operational efficiency.
For instance, a network can be over-provisioned and statistically dimensioned to a level where bandwidth should not be a bottleneck. Still, there are no guarantees and it would not be cost efficient in the long run. Over-provisioning is also not a solution in a ubiquitous network where users and applications are considered to be nomadic/mobile and where the need for network resources will vary in space and time in an unpredictable pattern.
Although call counting can be used for single applications, it is not efficient in a multi-service environment, where it leads back to a 'stove pipe' network with capacity provisioned for each service. Nor is call counting viable for an operator providing network resources and customer access in a wholesale business model, with a multitude of service providers seeking open interfaces for fast service introduction.
To meet the goals of an open infrastructure that promotes fast introduction of new services, with substantially lower Opex than the PSTN, operators undergoing the transformation to next generation multi-service networks are introducing bandwidth managers for session-based network resource control in their new infrastructure.

Carrier class bandwidth manager

One example of a carrier-class bandwidth management system is the Operax Bandwidth Manager, an end-to-end control system aligned with standards work from ETSI TISPAN, the MSF, 3GPP and the ITU-T. It addresses bandwidth and policy management for broadband value-added services (VAS) and the migration of traditional services such as the PSTN to multi-service IP networks. Such a bandwidth manager operates in multi-service, multi-technology and multi-vendor networks; it simplifies network management, increases network utilisation and avoids resource partitioning.
With this kind of bandwidth manager, the multi-service infrastructure is shared under the authority of a single, unified resource controller. Any application service can access the exposed QoS features and benefit from network VAS. Services supported by bandwidth managers include QoS-enabled VAS such as derived voice and various forms of video, generic ADQ (Application-Driven Quality of Service) capabilities for wholesale QoS services, and PSTN replacement. Bandwidth management enables a PSTN network operator to migrate its existing PSTN infrastructure to an IP-based network while providing the network resource control needed to guarantee service level and transport quality on the converged multi-service network. This ensures QoS end-to-end across core and access networks, which is necessary for the delivery of voice and multimedia services.
Additionally, this type of bandwidth manager is equipped with a generic data structure for representing multi-technology network resources. This data structure, known as the resource map, models resources for virtually any network technology, including ATM, Ethernet, IP/DiffServ and MPLS.
NGNs require vendor-independent interworking and open standards. By choosing an open-architecture bandwidth management system, operators can ensure that their multi-service networks support the end-to-end QoS required for carrier-class voice services while delivering the remainder of their existing services at lower cost. Furthermore, the separation of networks and applications implemented with a bandwidth manager, and the open architecture of some bandwidth managers, make it possible to use multiple vendors for network equipment such as routers, switches and BRASs, as well as for application functions (e.g. softswitches).
A flexible bandwidth manager is the basis for cross-service resource sharing, both in real-time, driven by applications and on slower timescales, driven by management. This means that resources temporarily not being used for one service can instantly be utilised by another and that resource allocation between sets of services can easily be changed with minimal or no network reconfiguration.
Another feature of a carrier-class bandwidth manager is that it implements a bandwidth management structure capable of covering the entire network, from the access lines, via the backhaul, to the core. Furthermore, it represents all contention points in the core, backhaul and access. This detailed resource representation provides the high-precision control needed for guaranteed QoS; the approach is also known as path-sensitive admission control.
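By way of illustration, the sketch below shows one way a resource map and path-sensitive admission control might fit together: each contention point on the access, backhaul and core is modelled with a finite capacity, and a session is admitted only if every point along its path has headroom. It is a deliberately simplified sketch with invented names and figures, not a description of the Operax implementation.

```python
# Illustrative sketch of session-based, path-sensitive admission control
# over a simple "resource map": every contention point (access line,
# backhaul link, core link) has a finite capacity, and a session is
# admitted only if each point on its path can take the requested bandwidth.
# All names and capacities are invented.

class ContentionPoint:
    def __init__(self, name, capacity_kbps):
        self.name = name
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def can_admit(self, kbps):
        return self.reserved_kbps + kbps <= self.capacity_kbps


class BandwidthManager:
    def __init__(self, resource_map):
        self.resource_map = resource_map   # point name -> ContentionPoint
        self.sessions = {}

    def request(self, session_id, path, kbps):
        """Admit a session only if every contention point on its path has
        headroom, then reserve the bandwidth on all of them."""
        points = [self.resource_map[name] for name in path]
        if not all(p.can_admit(kbps) for p in points):
            return False                   # admission denied, QoS preserved
        for p in points:
            p.reserved_kbps += kbps
        self.sessions[session_id] = (points, kbps)
        return True

    def release(self, session_id):
        """Free the reservation so other services can reuse it instantly."""
        points, kbps = self.sessions.pop(session_id)
        for p in points:
            p.reserved_kbps -= kbps


resource_map = {
    "access-line-42": ContentionPoint("access-line-42", 2_000),
    "backhaul-ring-1": ContentionPoint("backhaul-ring-1", 100_000),
    "core-link-a": ContentionPoint("core-link-a", 1_000_000),
}
manager = BandwidthManager(resource_map)
admitted = manager.request("voice-call-1",
                           ["access-line-42", "backhaul-ring-1", "core-link-a"],
                           kbps=100)
print("voice call admitted:", admitted)
```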
The notion of a bandwidth manager has been discussed at length in the MultiService Forum (MSF), where it is a clearly identified component in the multi-service architecture. Operax has actively contributed to the MSF for some time, and the Operax bandwidth manager was the one used at the MSF global interoperability event (GMI) during 2004.

The NGN market

The NGN market is set to grow at a fast pace over the next few years. There are currently several high-profile projects in which bandwidth management is a major consideration. These include BT's 21st Century Network (21CN), intended to create the infrastructure for the growth of the UK telecommunications industry. It is set to replace BT's legacy PSTN networks, delivering increased customer choice and control, and, over the next five years, will transform BT's business and cost base by removing duplication across the current multiple service-specific networks and creating a single multi-service network.
Another example is South Korea -- the world leader in broadband usage -- which is now building an NGN, the Broadband Convergence Network (BCN). Korea Telecom (KT) is at the forefront of the technology's development and is currently testing a carrier-class bandwidth manager. Around the globe, operators are now kick-starting their network transformation into all-IP networks. For emerging markets such as China and Eastern Europe, the new technology provides major opportunities to leapfrog into the latest technologies, with all the benefits a modern open infrastructure brings in service introduction and operational efficiency, underpinning overall market growth and helping to create a thriving economy.
The convergence of services and terminals also means new business models and a new approach to infrastructure for both mobile and fixed operators. That is why fixed network operators are looking at the IMS architecture when moving to NGN: Next Generation Networks are simply Fixed Mobile Convergence (FMC) in the eyes of a fixed network operator, while mobile operators are closely following fixed multi-service network convergence.
Regardless of whether it is a mobile or fixed operator moving into a ubiquitous, converged all-IP based infrastructure, the bandwidth manager is the essential component for providing multiple services in a highly competitive multi-technology, multi-vendor network.

Daniel Hydén is VP Marketing, Operax

A study published by Deloitte Consulting in April this year found that 70 per cent of participants had bad -- even costly -- experiences when it came to outsourcing. European Communications asked John Leigh, BT Global Services' head of marketing for outsourcing services, what it takes to get outsourcing right

When John Leigh spoke at Gartner's 2005 Outsourcing and IT Services Summit in April about the practical issues that make the difference between success and failure of an outsourcing project, he was greeted by a keen audience, hungry for information. The debate at the conference wasn't about whether or not to outsource; rather, it was about how best to do it and who to partner with to ensure genuine customer success.
"We are entering a new era in business -- something we, at BT, are calling the digital networked economy." Leigh explains. "It's a relatively new way of working, brought about by the convergence of IT and communications technology on one hand, and the globalisation of business on the other. In this new economy, we face a global market where vendors are going to have to be able to sell services across the world at the best price point possible. One way to make this work is to concentrate on what you do best and outsource the rest."
BT seems to be thriving in this marketplace so far, with a number of new outsourcing contract wins under its belt. The mix of networks and IT services it offers is proving a big sell with companies. Recent headline deals have included a 'blockbuster' $3 billion deal with Reuters; a £25 million three-year IP VPN project with Visa across Europe; a seven-year contract with Unilever managing its global communications infrastructure across 104 countries, including the development of new technologies; and the upgrade and management of Manpower's worldwide data network encompassing 3,200 sites in 63 countries.
The company also practices what it preaches. As Leigh says: "We have done it for ourselves, so we know what we are talking about. We have outsourced call centres to developing countries. We have outsourced our HR administration to Accenture and our payroll and purchasing functions to Xansa. If you ask us if we believe in outsourcing, we have solid business evidence to say we get value from it ourselves, both as a user and a vendor. And that's pretty unusual."

Down-to-earth success

So what's the secret of success? Leigh, who has 30 years' experience in the computer services industry, working for Meta Group and Gartner before joining BT, offers a number of down-to-earth pointers.
Firstly, he says, when you go from doing things yourself to being a company that outsources, you have to change your skill set. He describes this as giving up managing assets and starting to manage results. He then explains what he means in an example that demystifies the usual approach to the subject:
"Imagine that you and your partner used to clean your house yourselves. You're busy people, so you decided to hire a cleaner to do it for you. When you did the work yourself, you didn't really have to think about it that much, because you had years of experience in the area and knew exactly what to do. Now you have to explain exactly what you need cleaning, how you like it to be done, how often, using what products and even how shiny you want it.
"Instead of buying cleaning products -- in effect, managing the assets used to do the job -- you've started defining what you want done -- that is, managing the results. And this is the issue. Most of the confusion around outsourcing is based on the fact that many purchasers don't really accept -- on an emotional level -- that a service provider doesn't instinctively know what they want. And some vendors fail to make the fact plain. Indeed, customers find it quite irritating when a vendor claims to be able to run their processes more efficiently than they can themselves."

Pointer 1:
Understand exactly what services you want, the quality at which they need to be delivered and how to measure both.
Another stumbling block, says Leigh, is highlighted by research BT recently conducted with Industry Direct Ltd. Summarised in a white paper titled Strategic outsourcing to advance the organisation, the research found that cost control remains the primary objective for organisations embarking on outsourcing agreements.
Leigh explains: "The issue is not that cost is unimportant -- we in BT know that reducing the costs of ICT is a core skill -- but that most contracts are overly focused on direct cost, not value. And value is much more difficult to quantify. If you were to focus on the direct cost of e-mail, for example, you simply wouldn't have it. But e-mail has a value that can't be expressed in direct cost -- for example, in allowing things to happen much faster."
Pointer 2:
Organisations should quantify the performance of their IT systems and make sure they understand the balance between direct cost and the value. Then value can be built into the contract.
"I tell my clients that, if you want to cut your costs, you should take a benchmark, decide what industry best practice performance would be and ask us to deliver to that standard," Leigh notes. "But don't just tell your supplier to take 30 per cent out of your direct costs. You might not have that kind of slack, and both you and the supplier could waste a lot of time trying to reduce the cost of something you have already optimised. You don't want to do that. Services are unusual. Their costs are embedded in the proposition, so there are only a limited number of ways to reduce them. The easiest is probably the least acceptable: to reduce the resources used to deliver the service. Other options include consolidation, standardisation, the use of new technology and offshore provision."
Pointer 3:
Know with great clarity what strategy the customer and vendor will apply together to reduce the operating costs.
So bearing these first three factors in mind, what else is important? "My fourth pointer," says Leigh, "is to reduce the risk as far as possible. Take scale, for example. Is the company you are looking at big enough to fulfil your needs?"
Pointer 4:
Reduce the risk by making sure the supplier has the scale and resources you need.
 "Sure, if you are outsourcing your only helpdesk to an IT shop around the corner, this may not apply,"Leigh notes. "But on a larger scale, any outsourcing project that includes a change management element carries risk of failure. We know that, and our job is to reduce the risk for our customers. But you know by our very size, scale and scope that you are taking a lower risk. Size mitigates risk -- just as it helps to have access to a powerful engine in your car. You may not need it often, but it certainly helps when you need to get yourself out of a difficult situation."
His fifth pointer, he says, is to look at depth of capability: "What I tell clients is that they should look at the investment model of the company they are looking to do business with. Where are they investing? If you are looking at buying a network, you can see that BT is planning to invest over £2 billion a year building a state-of-the-art IP network to support its customers across the world. If you visit BT's technology centre at Adastral Park, you can see the depth of our R&D and that we are people who are spending serious money advancing our network capability. Some of our  competitors may be doing the same, but not all outsourcing companies are."
Pointer 5:
Check to see where the supplier is investing. Is it in areas that will help them deliver for you?

The deal clincher

And finally -- and this, he says is the real clincher -- you need to be clear how your service provider justifies its efficiency.
"If I go to a customer saying I can do this better than you can, I'd better have real evidence to back up my claims.
"For example, if you asked BT to show you why its network operation centres are more secure than your own, we could take you through our approach in detail. We could show you how our security works and explain why, because of what we invest in this area, we lose only a tenth of what most similar companies would lose through hacking and unauthorised access.
"Or if we are saying we can run your call centres more cheaply, I'd be able to show you how our CRM methodology allows us to maximise the impact of technology investments. If you wanted to offshore as well as outsource, I'd be able to show you our capability in India, and I'd be able to show you our capability to deploy customer data more effectively through a wide area network and our unique management package."
Pointer 6:
Ask to see evidence that backs up the supplier's claims.
In addition, Leigh says, it's vital that you choose a vendor that fits your needs: "Most companies just choose from the top five or six brands, so they start with a list that contains unsuitable partners from the outset. I think companies should start by taking a serious look at the outsourcing market and making sure they really understand the value propositions of the main players.
"It may sound simplistic to say you need to know your own needs and wants, but often they can be difficult to reconcile within the political environments of large corporates. In fact, often the most difficult thing to do is to agree a set of expectations with all the stakeholders. You know what happens -- the finance director wants the lowest cost, the business unit leaders want better service, the marketing director wants access to new know-how, etc, etc."
In conclusion, he notes: "Outsourcing isn't about perfection. But what you do have to be is better. People are looking for improvement, not nirvana.
"So what I say to my clients is: focus on the two or three things that will really make a difference. If cost is one of them, your supplier needs to show that its business model will demonstrably deliver better costs, and not just offer a facile statement of intent. Then make sure all your stakeholders understand what they are going to get before negotiating to get those things -- no more, no less. And once you've done that, set measurable targets for the improvements you want so you can check you get them. It may not be simple but it is straightforward. A science, perhaps, but not a black art."                                                  n

Penguins, bandwagons, new-technology hype -- what are the factors at play behind the introduction of new products and services, asks Nick Outram

Why is it that some new products and services are billed as the next big thing only to fade into obscurity, re-emerging years later and taking the market by storm? Take the portable MP3 player: the technology was certainly available in the early 2000s and received a lot of media attention, but only later did the Apple iPod capture a significant fraction of the market amidst a frenzy of buying activity. This article looks at some of the reasons behind this phenomenon and explores what strategies may be employed to help speed products past the initial hype and into the fast take-up phase of product roll-out.
Firstly, let me introduce you to two concepts: penguins and bandwagons. You will probably be aware of a bandwagon effect -- everyone jumping on to a 'new and proven' idea that captures the imagination and provides real value and benefits to the consumer. The bandwagon phase of product adoption corresponds to the sweet spot for sales. Most -- if not all -- of the product issues have been ironed out and the product is flying off the shelves. This is a very good time to be a Product Manager!
The penguin phase comes before this, and is so named because the large majority of potential buyers are like penguins standing on an ice floe -- unsure whether to go into the water or not. Although most of the penguins realise there are advantages to be gained from getting their feet wet, none but the most adventurous want to be first in, as dangers may lie therein. In the wild, the danger is clearly that they may be eaten alive if first in.
Purchasing a new product obviously carries lower risks than this, but humans can go to extraordinary lengths to ensure that they feel good about their purchase. Books have been written on the psychology of buyers, and complex models of buyer behaviour have been created, but ultimately it boils down to whether a person is happy to purchase an item or not. If someone perceives value in a product then -- when all barriers to the purchase have been removed -- that person will buy. In the wild, penguins will wait until they see a certain number of other penguins in the sea before they venture in, and humans exhibit the same wariness. Every potential buyer has a different threshold of purchase, just as every penguin has a different threshold of fear.
The first set of people to buy can be critical to the success or failure of a product for the simple reason that they can provide a seed from which to grow the support that will encourage people with higher purchase thresholds to take up the product. Often, these early adopters are willing to put up with a certain level of product teething troubles and pain because they realise the immaturity of the product yet can grasp its longer term benefits.
Gartner Group defined five hype phases for a new technology or product. These are:
1. On the rise
2. At the peak
3. Sliding into the trough
4. Climbing the slope
5. Entering the plateau
Applying the penguin analogy, I would argue that the first two represent the period during which the fearful penguins rule -- only towards the end of the third period do the majority of penguins begin to wake up and realise that the benefits of diving in outweigh the disadvantages.
Thun, Grobler and Milling of Mannheim University have attempted to model a generic product that exhibits positive network effects using a system dynamics approach [see reference]. In this approach, the overall product purchase behaviour of individuals is modelled over time; certain key input variables can then be tweaked to see what effect differing conditions have on the overall outcome. For example, the level of positive product recommendation between 'early product adopters' and the more cautious 'late adopters' can be factored in to find out what effect this variable may have. Also, once the feedback and 'inter-penguin chatter' has begun, another variable can model how rapidly this spreads through the 'penguin population', creating the bandwagon of purchases. While the models they present tend to be fairly academic and generic, they can provide valuable insights into methods for speeding up product introduction and increasing adoption rates.
In Figure 2, the solid lines represent a fairly typical product adoption and installed-base S-curve, as found in most textbooks on product marketing. The dashed lines show the outcome of adding strong penguin ('P') and bandwagon ('B') effects to the normal product adoption model. The most noticeable feature of the strong PB model is the much-delayed onset of a much stronger take-up phase. For certain products, one could argue that the original 'sliding into the trough' phase of the hype curve is a consequence of slow initial adoption rates due, in turn, to the mass of bystander 'penguins' waiting for the all-clear signal before diving into the new product.
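The following sketch gives a rough, illustrative feel for these dynamics. It is not a reproduction of the referenced system dynamics model, and the population size, thresholds and adoption rates are invented: each potential buyer adopts only once the installed base passes their personal 'fear threshold', while a small trickle of adventurous early adopters seeds the process, producing a long penguin phase followed by a sharp bandwagon take-off.

```python
# Toy threshold model of penguin and bandwagon effects on product take-up.
# Buyers have purchase thresholds spread evenly between 1% and 50% of the
# population: a buyer only "dives in" once the installed base exceeds their
# threshold, while a few fearless penguins adopt regardless each week.
# All numbers are illustrative, not taken from the referenced paper.

POPULATION = 10_000
ADVENTUROUS_PER_WEEK = 3
WEEKS = 96

# thresholds[i] = fraction of the population that must already own the
# product before buyer i is willing to purchase.
thresholds = sorted(0.01 + 0.49 * i / POPULATION for i in range(POPULATION))

installed = 0
history = []
for week in range(WEEKS):
    fraction = installed / POPULATION
    # Bandwagon: everyone whose threshold has now been passed buys this week.
    converted = sum(1 for t in thresholds[installed:] if t <= fraction)
    # Penguin phase: a trickle of adventurous adopters keeps the seed growing.
    installed = min(POPULATION, installed + converted + ADVENTUROUS_PER_WEEK)
    history.append(installed)

# Coarse S-curve: a long, slow penguin phase, then a sharp bandwagon take-off.
for week in range(0, WEEKS, 8):
    print(f"week {week:3d}: installed base {history[week]:6d}")
```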
There have been many situations like this in the past: consider the Sony Betamax vs VHS 'war' in the 80s and its more recent re-run, Sony's Blu-ray vs HD-DVD. When one of these products reached a critical 'threshold of faith' that it would not be a dead technology in a couple of years, the perceived value of the product's 'technology network' increased dramatically, leading to a winner-takes-all outcome as the public jumped onto the VHS bandwagon. Incidentally, I predict that this time the winner will be Sony's Blu-ray, simply due to the clever strategy of incorporating one into every Sony PlayStation 3 to be sold next year, dramatically increasing the network of end users and consequently the overall perceived value of this product's network. I also expect to see heavy subsidies of the PlayStation 3 as Sony senses the much greater opportunity of getting everyone -- not just PlayStation 3 owners -- more quickly onto the fast growth phase in order to resell its content back catalogue in the new high-definition format.
As mentioned previously, the class of products that exhibits some of the strongest effects is communication products. They also seem to exhibit the hype phases, as we all love thinking and talking about a new piece of technology -- and many of us like to feel the pain of being first in the water! A new technology still in the development phase is Push To Talk (PTT). It has so far exhibited all the trends of the classic Gartner hype phases, and the consequent delayed uptake, outlined above. PTT is meant to turn your phone into a walkie-talkie: you need friends to talk with, and they need a PTT-enabled phone. This is a classic chicken-and-egg situation, but it gets worse -- so far only a small subset of phones and networks support the product, and they are largely incompatible. It is no wonder that this service, once hyped as the next big thing, is wallowing in the trough along with so many other mobile products: video telephony, mobile instant messaging and content streaming, to name a few. Only when the service is standardised, widely available and perceived by the public as valuable will it start to take off -- but take off it will and, at some point in the next five years, the PTT bandwagon will begin to roll.

Shortening product take-off

Based on the analysis of their research model, Thun, Grobler and Milling's paper considered ways to shorten the time before a product takes off. The results were:
1. The classic method of increasing the public's exposure to a new product -- advertising it to make sure people know of its existence.
2. Increasing the contact rate between non-users and users who can convince non-users to adopt. Viral marketing is one mechanism that springs to mind here.
3. Focusing on augmenting the pool of interesting communication partners of every user in the installed base -- examples given are:
a. Marketing measures to make users communicate more with each other and formerly unknown people -- e.g. generate communities of users
b. Technical advances that make it possible to use the product in new ways, increasing utility or communication with new people (e.g. SMS) or more than one partner (e.g. conference, PTT)
c. Extending the installed base indirectly by creating compatibilities with existing products (e.g. connecting Mobile Instant Messaging to PC based Instant Messaging).
Furthermore, from general experience, take a fresh look at the service from the consumer perspective: 
1. Is the product usability maximised? -- Think of both its utility and usefulness.
2. Are you continuously enhancing the product with feedback from the end-user?
3. Is the end-to-end process of purchase as simple and intuitive as it could be? If you are a Product Manager, become a mystery shopper for your product.
4. Have you overcome all possible barriers, both psychological and physical, that might stop a consumer buying your product? This could involve more detailed analysis of the end users themselves: not treating them as a homogeneous mass, but segmenting them better and understanding their real needs and wants, in addition to where they perceive the most value in your product offering.
5. Are you reinforcing their choice of your product as a good one after purchase, turning them from product penguins into product evangelists?
In summary, there are a great many factors that can affect a product's success in the market. Even when all the barriers to adoption are removed, a product may be slow to take off owing to slow recognition of its benefits within a potential community of users. In addition to the basics of removing these perceived barriers to product purchase, things can be done to spread the word, increase adoption and raise the installed base.

Reference:
Jorn-Henrik Thun, Andreas Grobler and Peter M. Milling, 'The Diffusion of Goods Considering Network Effects: A System Dynamics-Based Approach', Proceedings of the 18th International Conference of the System Dynamics Society, University of Mannheim.

Nick Outram, SopraGroup UK can be contacted via e-mail at: noutram@sopragroup.com

While RFID is one of those technologies with a potentially sinister side, it has been proven in the field as an important tool for companies that need to track their assets and reduce costs. Malavika Srinath looks at how the technology is developing

Every organisation in existence is a victim of technology. Being 'high-tech', however, requires a lot of positive thinking on a company's part. This is an expensive label to earn, and few are willing to take the risk of investing in a technology that hasn't yet proved its worth. Yet how can worth be proved unless the technology is put to use? This question has remained unchanged for decades.
And one more victim of the market's 'chicken-and-egg' view on technology is RFID.
So far, big retail giants like Wal-Mart have spear-headed the pro-RFID movement by issuing mandates to their suppliers, resulting in half-hearted attempts by companies to introduce this technology into their operations. But with new packages on the market introduced by ERP software vendors like SAP and Sun Microsystems, RFID has become infinitely more attractive.
Why? Simply because now, thousands of customers the world over can adopt these packages almost seamlessly. Therefore, a technology that was previously "too expensive" now doesn't seem such a waste of money and resources after all.

What is all this fuss about RFID?

The technical process of RFID mystifies many. But in reality, the principle behind RFID reminds me of biology lessons at school. In essence, the RFID scanner is like a bat, sending out radio waves to locate a truck or pallet. The tag embedded in the object sends a message back to the 'bat', detailing its location. This message zips back to the system, to be read by operators and line managers.
Simplistic as this may sound, RFID can save a company millions of pounds in wastage and loss of goods. The technology was initially used to track livestock on farms, but for the companies that have been brave enough to extend this system to their business operations, the results have been phenomenal. Tighter inventory control, time saving and increased transparency in operations along the supply chain are just a few examples of the benefits that they have seen with the use of this technology.
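For readers who prefer code to analogy, here is a toy sketch of that read-and-report loop. The tag identifiers, locations and data model are invented for illustration and do not correspond to any particular RFID middleware.

```python
from datetime import datetime

# Toy illustration of an RFID read loop: a reader reports which tags it has
# "heard", and the tracking system updates each asset's last-known location.
# Tag IDs, locations and the data model are invented.

inventory = {
    "TAG-0001": {"asset": "pallet of razors", "location": None, "last_seen": None},
    "TAG-0002": {"asset": "pallet of nappies", "location": None, "last_seen": None},
}

def handle_read(reader_location, tag_ids):
    """Record that the given tags were seen by a reader at this location."""
    for tag_id in tag_ids:
        record = inventory.get(tag_id)
        if record is None:
            print(f"unknown tag {tag_id} read at {reader_location}")
            continue
        record["location"] = reader_location
        record["last_seen"] = datetime.now()

# A dock-door reader reports the tags currently in its field.
handle_read("warehouse dock door 3", ["TAG-0001", "TAG-0099"])
print(inventory["TAG-0001"])
```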
Given the nature of the RFID mandates, the earliest adopters of RFID were in the retail supply chain. It is estimated that spending to track retail products alone will grow from $91 million in 2004 to $1.3 billion by 2008. Within retail, perhaps the sector that benefits most from the use of RFID is food retail, especially in cases of food and drug safety and product re-calls.
Undoubtedly, the market is growing more positive about the technology but companies still need convincing that they can generate tangible business value through integrating RFID in their business operations. No one wants to be first to act and the atmosphere is one of "let's wait and watch".
The apprehension behind the adoption of RFID is not unjustified. Costs are high now, and benefits are not fully visible. The possibilities of misuse and breach of security seem more plausible at this point, especially to a company that does not have funds to invest in hi-tech systems.
In addition, as with any technology, there are always drawbacks: it is expensive, it can be inaccurate due to external elements, and scanners can only read tags from particular distances.
Also, once companies have seen the benefit of this technology, there is a possibility of it being used in areas outside supply chain management. It is scary to think that these scanners could make their way into daily life, intruding on privacy and scanning private details -- for example, the contents of one's handbag when inside a store.
But realistically, given the cost of a single tag, RFID is most likely to remain in the back room well into the future. It needs to be viewed, therefore, merely as an opportunity to make operations faster and more efficient, and certainly as a way to keep customers happy.
It is therefore critical that software providers are attuned to changes in market requirements.
For most vendor companies, whose yearly business objective includes the introduction of new functionality and packages, software innovation can make or break yearly revenues.
The question really is: what do customers want?

The next logical step

Many customers who are already on enterprise resource planning (ERP) software packages have been interested in RFID for some years now, either because they are proactive or merely because of compliance. Software vendors have simply tapped into this burgeoning need for faster, more efficient systems.
The result: new software packages, encapsulating the RFID functionality. Easy usage, effective results.
Several attempts were made in the past to simplify the integration of data collected via RFID into other software applications. With these new packages, one of the biggest hurdles -- managing the large amounts of data generated by tags -- could be done away with.
Packages were initially offered to pilot-test customers, then opened to the entire market in mid-2004, and the early-adopter customers are already seeing results.
Focussing on unique industry demands, vendors encourage customers to capitalise on the competitive advantages offered by their respective packages. For example, SAP even helps its customers implement RFID in a step-by-step approach, making the whole process less daunting for the customer. So far, things are going right and the industry seems to have delivered to its eager customers.
So what's the downside?
It's that customers on various ERP systems run more than 90,000 installations. Will they all be able to integrate RFID into their operations with equal ease? Naturally not. This means that customers will have to invest further in their systems in order to run these new RFID packages. Although the benefits of adopting RFID cannot be denied, it could be years before companies see tangible returns.
In addition there are other systems available in the market like RFID-enabled Warehouse Management and Asset Tracking that could offer very similar, if not such elaborate, RFID functionality. These are even likely to be marginally cheaper, given that more and more software providers are starting to run the RFID race.

Heavyweight vendors

With so many software vendors climbing into the sandpit, the tug of war for the top spot continues as heavyweight vendors like SAP and Sun Microsystems release RFID packages within weeks of each other.
Despite being in a fiercely competitive market, the bigger players in the industry seem confident that they won't be losing customers because of their prices. In the end, the belief is that the benefits of using RFID will outweigh all the initial costs of implementation.
Interestingly, companies like Kimberly-Clark (an early adopter of RFID due to the Wal-Mart mandate) are starting to play major roles in helping to push the concept of RFID further, while cutting back on their own costs. This is reflected in Kimberly-Clark's suggestion that SAP and OAT work together to create a productised interface to ensure that companies would not be required to develop their own custom software, thereby reducing costs.
The market is highly likely to see more and more of this kind of initiative, as customers in industries such as high-tech, retail and pharmaceutical become increasingly convinced that RFID is the way forward.

A market set to grow

It is not at all surprising that, despite a current average price of around 50 cents a tag, the market for RFID is set to grow. Eventually, the costs saved because of RFID will have an impact on the price of tags, and by 2007 businesses could well be shelling out over $260 million for the integration of RFID into their systems.
With the introduction of their new RFID packages, bigger vendors like Oracle, SAP and Sun Microsystems have already bagged themselves a fair share of this future revenue. It is unfortunate that smaller software providers like RedPrairie and HID Corporation, which began deploying RFID solutions several years ago, may have missed out on this pot of gold. The key factor governing the success of technologies like RFID is the timing of introduction: the market is unlikely to be receptive to a technology when it is not ready. In biding their time, the bigger players have scored.
Increasingly, it is starting to look as if RFID is the key to winning the race for technology.

Frost & Sullivan is currently carrying out strategic research in the RFID market with a specific focus on logistics. For more information or to give us feedback on this article, please contact the author at malavika.srinath@frost.com

Malavika Srinath is a Research Analyst, Logistics and Supply Chain Management, Frost & Sullivan

Network security is one of the key issues in the IT industry, and the challenge for operators is making sure that their network elements are protected. Costa Constantakis explains

Security has become a hot-button issue in IT and carrier network operations. While 'securing the perimeter' appears to have captured most of the available spotlight, a related and emerging security challenge for carriers is that of administering secure access to their network elements (NEs) -- both to differentiate themselves and, by extension, to ensure the security and availability of their customers' traffic.
Unfortunately, the reality today does not reflect the security measures one might reasonably expect to find in service provider networks. Many NEs, particularly legacy elements, have limited security features designed into their software, and tools to centralise and automate security administration are lacking, at best. Faced with these challenges, combined with a lack of resources and time, security administrators are often forced to overlook basic procedures such as password strength enforcement, with the original default password set by the manufacturer often left unchanged on NEs.
Consider also the true cost of security administration. According to industry estimates, companies will typically spend between 25 and 40 euros ($30 and $50) in operating expenditure (OPEX) for every password change they administer. With substantial numbers of passwords on a network being changed many times per year, administering credential changes can amount to many millions of euros in annual expenditure.
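A back-of-the-envelope calculation shows how quickly those per-change costs compound. The network size and change frequency below are illustrative assumptions, not figures from the industry estimate.

```python
# Back-of-the-envelope OPEX estimate for manually administered credential
# changes. The per-change cost comes from the industry estimate above; the
# network size and change frequency are assumptions made for illustration.

cost_per_change_eur = (25 + 40) / 2      # midpoint of the 25-40 euro range
network_elements = 20_000                # assumed NEs in a large carrier network
accounts_per_ne = 5                      # assumed accounts per element
changes_per_account_per_year = 4         # e.g. quarterly password rotation

annual_changes = network_elements * accounts_per_ne * changes_per_account_per_year
annual_cost_eur = annual_changes * cost_per_change_eur
print(f"{annual_changes:,} password changes/year, roughly EUR {annual_cost_eur:,.0f}")
# -> 400,000 changes/year, roughly EUR 13,000,000: "many millions of euros"
```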
Beyond the direct costs of security administration, there are associated opportunity costs, which include service level agreement (SLA) credits, customer dissatisfaction and churn, and network outages. These outages can be attributed to two sources: (a) the inadvertent misuse of commands by innocent network operators dealing with fast-paced, complex network operations; and (b) increasingly sophisticated hackers gaining unauthorised access to a group of NEs within a network.

A daunting challenge

In spite of the increasing importance of securing the management plane within their multi-vendor network environments, the reality is that service providers face daunting challenges in administering security. They are routinely forced to use rudimentary tools, such as scripts, to effect password changes across their networks (with failure rates often exceeding 50 per cent on a first pass). They are forced to allow shared passwords among network operators because of the inherent limitations in the number of accounts that can exist on a network element (e.g. a hundred users requiring access through only five or six accounts offered on a network element). This is in spite of the fact that shared passwords compromise security altogether and make tracking network activity difficult, at best. Central logging is also a challenge because different vendors' systems log activity differently (some on the NE, some on the EMS that accompanies the NE, and so on).
Perhaps equal to the risk of hacker intrusions, in terms of potential network outages, is the innocent misuse of commands from within. The inability of security administrators to restrict a network operator from using specific commands beyond the scope of his responsibilities is a primary reason for this. The end result is that a network operator is often granted more privileges than would otherwise be given by the security administrator, which in turn leads to the possibility of inadvertent or accidental human errors that can take down parts of the network.

A better approach

Conceptually, a solution to the challenges described above entails breaking from the traditional approach of giving users direct access to NE accounts. An alternative is to create a new layer of 'user' security that is distinct from the network element security layer. This is achieved by deploying a distributed Security Proxy Server that acts as a layer of abstraction between users logged onto the system and NE accounts. The Security Proxy Server's job is to authenticate users and determine their command privileges before providing them with access to a network element account. Furthermore, it filters all of their commands, allowing only those that an individual has been authorised to use. The solution described is compliant with the emerging ANSI (ATIS T1.276-2003) and ITU-T (M.3016) standard specifications, which are being driven by service providers and governments in a quest to secure their infrastructure without markedly increasing operating costs.
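The sketch below illustrates the general idea: authenticate the user, check each command against that user's privileges, and only then forward it using an NE account whose password the user never sees. The class, the commands and the credential store are invented for illustration; this is not a description of any particular product.

```python
# Minimal sketch of a security proxy sitting between users and NE accounts:
# it authenticates the user, filters each command against that user's
# privileges, and only then forwards the command using a hidden NE account.
# Names, commands and the credential store are illustrative only.

class SecurityProxy:
    def __init__(self, users, ne_accounts):
        self.users = users              # user_id -> {"password", "allowed"}
        self.ne_accounts = ne_accounts  # ne_name -> (account, password), never shown to users
        self.audit_log = []             # central log of every attempted command

    def execute(self, user_id, password, ne_name, command):
        user = self.users.get(user_id)
        if user is None or user["password"] != password:
            self._log(user_id, ne_name, command, "authentication failed")
            return False
        if command.split()[0] not in user["allowed"]:
            self._log(user_id, ne_name, command, "command blocked")
            return False
        account, _account_password = self.ne_accounts[ne_name]
        # In a real system the command would now be sent to the NE over its
        # management interface using the hidden account credentials.
        self._log(user_id, ne_name, command, f"forwarded via {account}")
        return True

    def _log(self, user_id, ne_name, command, outcome):
        self.audit_log.append((user_id, ne_name, command, outcome))


proxy = SecurityProxy(
    users={"operator1": {"password": "s3cret", "allowed": {"show", "ping"}}},
    ne_accounts={"edge-router-7": ("admin", "hidden-ne-password")},
)
print(proxy.execute("operator1", "s3cret", "edge-router-7", "show interfaces"))
print(proxy.execute("operator1", "s3cret", "edge-router-7", "shutdown slot 3"))
print(proxy.audit_log)
```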
Taking this overall approach yields several benefits:
*  Unique Passwords for All Network Operators
First, passwords are no longer shared. Every network operator is assigned a unique user ID, password, and set of command privileges. All are strictly enforced by the Security Proxy Server serving as a 'gatekeeper' and controlling all access to NE accounts. A further benefit is that NE account passwords now remain hidden from all but a few security administrators, owing to the layer of abstraction between users and the NEs.
*  Secure Centralised Dissemination and Control of Passwords
Second, security administrators avoid the headache of 'swivel chair' management by moving to one, centralised system for managing security on all NEs across the multi-vendor network. Such centralisation can pave the way for true automation of the ongoing requirement for user ID and password changes, in turn saving the service provider millions of euros per year in security administration effort.
*  Overcoming Security Feature Limitations
A third benefit of the layer of abstraction between users and the NEs is that common security policies can be imposed on user privileges right across the entire multi-vendor network, regardless of the number of NE types. This is done by implementing the policy logic at the new user security layer in the Security Proxy Server, instead of hoping to find common security features on each vendor's NE. The user security layer further streamlines element management by providing each user with 'single sign-on', rather than requiring network operators to log on to each NE in turn.
*  Centralised Logging of All User Activities
A tiered architecture that makes use of the additional security layer described above allows the tracking and logging of user activity to move out of individual NEs and into a central repository. Central logging and auditing of all user activity becomes possible, with all user commands being funnelled through the Security Proxy Server. Effort, time and money are spared by security administrators, who gain the ability to access consolidated activity records from a single source and in a consistent format -- invaluable in the event of a network crisis or outage.
*  Granular Security Privilege Control
Not every field operator is trained and qualified to enter every possible command on every NE -- nor should they be if the economics of network operations are to remain favourable. The Security Proxy Server 'brokers' sessions by checking the authorisations granted to an individual as he attempts to issue a command to the NE. This level of granular control allows security administrators to minimise the potential for human error to occur. In essence, an inexperienced field operator cannot accidentally issue a fatal command if he is blocked from issuing it.

Optional or imperative?

Faced with many of the time-consuming challenges related to management plane security, administrators have more often than not been forced to accept the risk of a breach by cutting corners on standard security management practices. To date, this shortcutting has been viewed as a realistic and necessary compromise in ensuring that vital network operations proceed in a timely fashion.
As next generation carrier networks become more complex and susceptible to outages, and as secure network operations become a differentiator for service providers in the eyes of customers, such corner-cutting will become too costly to overlook. It is inevitable that, driven by the instinct for survival and their customers' insistence, all service providers will seek out more robust, cost-effective approaches to securing the management planes of their multi-vendor networks in the years to come.

Costa Constantakis is Director of Marketing at Nakina Systems, and can be contacted via tel: +1 613 254 7351 ext. 307; e-mail: costa@nakinasystems.com

With current technological forces driving the industry down the road of convergence, companies that are unable to respond quickly will suffer most, says Marc Kawam

Convergence, or the transition to Internet Protocol (IP), is what many people are calling a 'stealth revolution'. Slowly, and invisibly to the majority of phone users, Europeans are changing the way they communicate with one another. Following on from the popularity of IP telephony in the US, where there are already 3 million residential users, we are now moving away from traditional circuit-switched networks to Internet-based voice calls. We are even seeing the emergence of triple-play services as service providers like FastWeb and Homechoice bundle together voice, data and video -- and in some cases wireless -- to deliver a one-stop shop for all communications and entertainment needs via IP.

Convergence going mainstream

In the UK, BT's recent announcement that it will transform its telecommunications infrastructure into a pure IP-based network by 2009 is a clear indication of the commitment that large providers are making. And there are good reasons for it: as well as paving the way for a string of new hi-tech applications, its 21st Century Network (21CN) is expected to deliver cash savings of £1bn a year to BT by 2009. But are established providers up to the challenge of migrating to IP, and can they really compete with innovative companies like Skype and Vonage, which are rewriting the rules and are heavily backed by deep-pocketed investors?
In the fast-paced telecoms sector, responsiveness and speed to market are all-important. Whether it is speed to respond to competition, speed to innovate, or speed to respond to customer requests, responsiveness is the major challenge companies face today. Take almost any CSP offering and track it over the years: it quickly becomes evident that what was considered a rapid service roll-out a few years back would be considered exceptionally slow by today's standards.
As deregulation and falling barriers to entry open the market up to a new breed of innovative companies fighting for a slice of this evolving market, customers are being freed from the tyranny of the incumbent. Free to choose from an increasing array of CSPs, customers are demanding more for less. Not only do they want advanced IP-enabled services, they want to pay less as well. If that wasn't enough, CSPs also have to accommodate the expectations of the 'Internet Generation', who are used to getting what they want when they want it. Some CSPs believe their customers are now shifting to a real-time provisioning mindset in which they expect to obtain the services they desire instantly.

Agility is key

Stuck with hundreds of legacy IT systems dating from the days of circuit telephony, many companies struggle with a myriad of disjointed systems and lack a clear vision of their future technological needs. Even some of the established players lack the internal systems, and the related flexibility, to handle demand for next generation services where the customer is in control. According to Insead's agility index, which measures the ability to roll out and provision innovative IP-based services (such as broadband, VoIP and IP-TV) rapidly, efficiently and to a high quality for new market segments, CSPs in Europe have, on average, reached only 60 per cent of their potential agility. In addition, whereas CSPs are now, on average, able to bring a service to market 2.5 times faster than five years ago, customers are prepared to wait only about a third as long (2.8 times less) for new services to be delivered to them.
Clearly there is a gap between the speed and flexibility required to compete in the fast-paced telecoms industry and the IT readiness of these organisations to respond to this need. With competition in Europe so fierce, agility becomes an imperative for business success. So what can CSPs do? How can they move to the nirvana of real-time service provisioning that their customers often demand? And how can this be achieved for the complex bundles of IP-based products and services (including content services) that are delivered from multiple vendor platforms and, more often than not, owned by diverse business entities?
Insead's research report identifies self-provisioning/self-management and integration with partners' systems for more seamless order mediation as key areas for improvement. The latter becomes increasingly important as service providers begin offering aggregated services, where a subset of the services is actually supplied by a partner entity. Two-way communication of service order information, such as transmitting service requests, receiving acknowledgements, and confirming service availability between partner service provider entities, will increasingly need to be automated. Unstructured manual processes for mediating orders will not scale and will not allow CSPs to meet the demands of their consumers.
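As an illustration of what automating that two-way exchange might look like, the short Java sketch below models a service order request and the acknowledgement a partner returns. The types and the availability check are hypothetical and do not follow the SID or OSS/J definitions.

    // Illustrative sketch of automated two-way order mediation between partner
    // service providers: a request is sent and an acknowledgement comes back.
    public class OrderMediationSketch {

        record ServiceOrderRequest(String orderId, String serviceType, String partnerId) {}
        record Acknowledgement(String orderId, boolean accepted, String reason) {}

        // A partner system would expose something like this instead of a manual process.
        static Acknowledgement submitToPartner(ServiceOrderRequest request) {
            boolean available = "IPTV".equals(request.serviceType()); // stand-in availability check
            return new Acknowledgement(request.orderId(), available,
                    available ? "scheduled" : "service not offered by partner");
        }

        public static void main(String[] args) {
            ServiceOrderRequest req = new ServiceOrderRequest("ORD-1001", "IPTV", "partner-A");
            Acknowledgement ack = submitToPartner(req);
            System.out.println(ack);
        }
    }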

How to cope with change

Service providers need to bridge the 'automation gap' between customer-facing software applications and the service delivery partners for a variety of rapidly changing and evolving services. Next Generation Operations Support Systems (NGOSS) can bridge that gap. They allow CSPs not only to handle the vast array of new services that customers are demanding today, but also to support the future IP-based services that providers will need to roll out to retain and grow their customer base. These solutions are often built on open technologies such as J2EE, EJB, JMS and XML. Standards, such as the TeleManagement Forum's SID and the OSS Through Java Initiative (OSS/J) APIs, promise to facilitate the development of component-based OSS, where CSPs can mix and match systems as needed to respond quickly to customer demands or new service needs.
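As a small, concrete example of one of those open technologies, the sketch below uses only the standard JDK to build and serialise a service order as XML so it can be passed between OSS components. The element names are invented for illustration and do not follow the TM Forum SID schema.

    // Minimal sketch: build a service order as XML using only the JDK.
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import java.io.StringWriter;

    public class XmlOrderSketch {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().newDocument();

            Element order = doc.createElement("serviceOrder");
            order.setAttribute("id", "ORD-1001");
            doc.appendChild(order);

            Element service = doc.createElement("service");
            service.setAttribute("type", "VoIP");
            order.appendChild(service);

            // Serialise the document so it can be handed between OSS components.
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            System.out.println(out);
        }
    }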

Crucial part of OSS business

Standards are a crucial part of the OSS business: there are currently over 2,000 commercially available OSS applications on the market, and combined with the applications that service providers have built themselves the number rises to almost 5,000. Most of these solutions, however, do not provide all the components an operator needs, and the integration challenge across these applications is immense. Standards-based interfaces between systems can hide the complexity of each individual system and allow, for example, flow-through service provisioning without the inefficiencies of manual processes or manual data transfer, as sketched below. This is one key and necessary part of the equation: the technological piece.
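As a rough illustration of how a common interface can hide per-vendor complexity and keep orders flowing through automatically, the Java sketch below wraps two imaginary vendor activation systems behind one contract. The interface and adapter names are hypothetical, not an actual OSS/J API.

    // Sketch: one standard contract hides each vendor system's internals, so
    // orders flow through without manual re-keying. Names are hypothetical.
    import java.util.List;

    public class FlowThroughSketch {

        interface ActivationSystem {
            void activate(String subscriberId, String service);
        }

        static class VendorAAdapter implements ActivationSystem {
            public void activate(String subscriberId, String service) {
                // Vendor-specific protocol details would live here.
                System.out.println("Vendor A activating " + service + " for " + subscriberId);
            }
        }

        static class VendorBAdapter implements ActivationSystem {
            public void activate(String subscriberId, String service) {
                System.out.println("Vendor B activating " + service + " for " + subscriberId);
            }
        }

        public static void main(String[] args) {
            // The order handler never needs to know which vendor sits behind the interface.
            List<ActivationSystem> systems = List.of(new VendorAAdapter(), new VendorBAdapter());
            systems.forEach(s -> s.activate("SUB-42", "broadband"));
        }
    }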
Beyond that, success requires management's understanding of IT needs and priorities, and an appreciation of the impact that low IT agility can have on the organisation. Responsiveness, speed to market and quality of service are the key ingredients to crack the European market. It is a long way to the top, and most CSPs have a fair way to go yet, but one thing is clear: those who manage to climb the agility index ladder will have the technological ability to keep up with the accelerating pace of the CSP industry; the rest may have to pay the price for being too slow.

Marc Kawam is Managing Director, Europe, CEON Corporation. tel: +33 153 536766; e-mail: MKawam@Ceon.com; www.ceon.com

A taste of TeleManagement World in Texas -- which takes place from 7-10 November

Telecoms is undergoing enormous and rapid change. Managing this change could mean the difference between capturing market share for new and innovative services or being left behind.
The TeleManagement Forum's U.S. conference and expo TeleManagement World, taking place November 7 - 10 in Dallas, Texas, will provide an in-depth analysis of the changing telecoms industry and will address new entrants, new technologies and ideas from revolutionary thought leaders that will ready operations and business support system professionals for the challenges that come with the innovation underway in the telecoms marketplace.
"Telecom competition is actually no longer about new technologies or even new services. It's about who can most effectively manage themselves, their networks and their customers, and which operators can become leanest, fastest," says TeleManagement Forum President Jim Warner.
According to Warner, the US telecom market has not seen this much change in more than 20 years. Warner points out that massive consolidation is taking place -- with Verizon buying MCI, SBC buying AT&T, and Sprint merging with Nextel.
"But there's also a lot of new players coming into the market, often with new technologies and capabilities," Warner says. "The fact is that if you can't manage your customers and give them the services they want, they're going to go elsewhere now that they've got choices."
TeleManagement World Dallas brings together operators, suppliers and industry experts to discuss, share examples and provide solutions for how to meet the challenges of aggressive time-to-market pressures and how to improve the customer experience to gain a competitive edge.
TeleManagement World is designed to be a "cram course" to help operators truly understand how to manage next-generation services and the networks and back office systems that will deliver them.
The conference will include a Telecom Innovation Summit on November 8 that will delve into some of the industry's most pressing issues. Industry panels will focus on managing the next wave of innovation and other hot topics such as new devices and services, municipal WiMAX networks and how operators can be 'lean' in light of Sarbanes-Oxley.
Six tracks being held November 9 - 10 will prepare operators and suppliers for their next move in the telecom industry by focusing on: business transformation and market readiness; how to optimize the next-generation network's potential; service delivery and content to provide the anytime, anywhere customer experience; revenue assurance and billing; NGOSS software and technology; and a deeper look at cable, IMS and IPTV.
Seven different Catalyst programs will serve as a living lab of innovation at TeleManagement World Dallas and will include two operator-driven initiatives, demonstrating how to solve complex operations and business support system issues.
The Catalyst programs will create scenarios that employ NGOSS concepts to manage mobile multimedia service management processes, simplify Sarbanes-Oxley compliance and manage Internet Protocol (IP) services, including voice over IP. Catalyst projects will also explore how to integrate OSS products from multiple vendors in a triple play scenario and will kick-start a new NGOSS Web Service Enablement initiative.
Fourteen different full or half-day training courses will be offered on November 7 and November 10 on topics such as NGOSS, the SID and the eTOM, as well as using OSS/J to implement NGOSS. Courses will also cover a BSS perspective on VoIP, billing and charging for data and content, revenue assurance, and billing in a convergent world. Attendees of the Monday courses will receive a copy of NGOSS Distilled, a book written by John Reilly and Martin Creaner.
TeleManagement World provides a networking, educational, and demonstration forum that will help attendees apply state-of-the-art advancements in telecom to their business to manage the innovation of the future.

For more details or to register visit:
www.telemanagementworld.com
