Friday, July 30, 2010

Meaning of Signal Strength Bars

Apple's antennagate controversy is now growing old and a bit stale. The company has come out with a half-hearted response to the problem, but it appears increasingly likely that this will prove sufficiently satisfying to their customers, especially since most have never experienced poor fringe-area performance. There has even been some talk about the antenna being the reason for delaying release of the white iPhone 4, or then again maybe not.

Although I've already touched on this issue, there is one aspect that I think is worthy of another blog post. It comes out of a few subsequent discussions I've had with people, including iPhone 4 users with the product in question. This has to do with a fundamental misunderstanding of how cell phones are internally engineered. I often take it for granted that, in this modern age when everyone seems conversant with the high-tech products and services they use every day, there is also some basic understanding of that technology. This is frequently untrue. Rather than seeing the cell phone for what it is, internally -- a collection of modular building blocks that are essentially independent and communicate over interfaces -- some (most?) people see the cell phone as a monolithic device. That, I believe, is why there is still so much confusion about the relationship between the iPhone 4's antenna performance, which depends on how it is held, and the number of bars on the signal strength display.
But touching the hot spot doesn’t always ruin the call, even if it lowers the number of bars. In several cases, when I was already on a call with three or four bars showing, I deliberately covered the hot spot with my hand, and the call continued normally, strong and clear, even though the bars dropped to one or two.
On that basis I decided to break the problem down into its component parts so that the relationship is more clearly illustrated, and in particular why the number of signal bars displayed can be so distinct from fringe-area performance. Hopefully someone will benefit from this. To those who do understand the technology well, please forgive any errors due to my coarse description of the technical details.

First, however, let's look more closely at weak-signal performance, where the received signal strength is close to the minimum for a usable signal. This is determined by the signal's field strength -- which is independent of the phone -- and the phone's antenna and radio module. As can be seen in the diagram, the gray area where performance is marginal is wider for the now-obsolete analogue technology (such as AMPS) than for digital. Degradation as an analogue signal weakens is more gradual, manifesting as increasing noise and distortion until the other person's voice becomes completely unintelligible, finally descending into silence and call termination.

With digital, degradation may not be noticed until the bit error rate is high enough to cause drop-outs. From there it takes only a little less signal to reach the condition where the bit stream cannot be decoded, resulting in silence and then call termination. In other words, the gray area of received signal strength where the signal is distorted or subject to drop-outs is narrower for modern digital phone transmission. As mentioned in my earlier article, this is one reason why the number of bars can be uncorrelated with reception quality: at any level above the minimum signal strength to achieve a low bit error rate, digital reception is pretty much perfect.
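To illustrate just how narrow that gray area is, here is a minimal sketch in Python of the textbook bit error rate for BPSK over an additive-white-noise channel. This is a simplification -- real cellular links use different modulation and error-correcting codes -- but it shows the shape of the digital cliff:

    # Textbook bit error rate (BER) for BPSK over an AWGN channel.
    # Real cellular links differ, but the steep "cliff" is representative.
    from math import erfc, sqrt

    def bpsk_ber(eb_n0_db):
        """BER = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
        eb_n0 = 10 ** (eb_n0_db / 10)  # convert dB to a linear ratio
        return 0.5 * erfc(sqrt(eb_n0))

    for db in range(0, 13, 2):
        print(f"Eb/N0 = {db:2d} dB  ->  BER = {bpsk_ber(db):.1e}")

Over a span of only a dozen decibels the error rate falls from roughly 8% (unintelligible) to about one error per hundred million bits (effectively perfect), which is why digital reception tends to be either fine or gone.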

With that out of the way, let's return to the above diagram. The radio module is a self-contained unit, with reception dependent on no component other than the antenna system. If the antenna efficiency is impaired (or the signal from the carrier's base station is reduced by terrain or obstructions, or there is interference from another source) the signal that the receiver has to work with is reduced. There are differences among phones in their antenna designs and placement, and some variation in the performance of radio modules from component manufacturers, but for any one phone there are no other factors we need to be concerned about. The antenna plugs in one end, and audio in and out plugs in the other end, plus various control lines and power.

One thing the radio module produces is an indication of signal level (using an internal signal sampler) that can be read and used by other modules. This is typically interpreted on a relative scale where zero is approximately the level at which communication with the network base station (or tower, if you like) is marginal or lost. On the Nexus One phone, the relative scale goes from 0 to 31, where 0 is referenced to -113 dBm and 31 to -51 dBm; signal strengths below and above those limits produce 0 and 31, respectively. On other phones the granularity may be coarser (fewer than 32 values) and the interval between values may be non-constant (it's a constant 2 dB on the Nexus One). However, these are at best nominal values and could vary, perhaps even quite a lot, since there is no real technical need to ensure accuracy.
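As a minimal sketch, the scale just described can be written out in a few lines of Python. The constants are the Nexus One figures quoted above; the function names are mine:

    # Quantize received signal strength (dBm) to the 0..31 relative scale
    # described above: 0 at -113 dBm, 31 at -51 dBm, constant 2 dB steps.
    def dbm_to_level(dbm):
        level = round((dbm + 113) / 2)  # 2 dB per step, anchored at -113 dBm
        return max(0, min(31, level))   # clamp below -113 and above -51 dBm

    def level_to_dbm(level):
        """Nominal signal strength for a reported level."""
        return -113 + 2 * level

    print(dbm_to_level(-120))  # 0  (below the floor, clamped)
    print(dbm_to_level(-85))   # 14
    print(dbm_to_level(-40))   # 31 (above the ceiling, clamped)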

Whatever the granularity and range, these values are mapped by software to something that is displayed for the convenience of the phone's user. The mapping can be linear (as shown) or any function at all that maps from the signal strength value to the indicator icon. The important part here is that the mapping is arbitrary. I've shown it as mapping the signal strength to one of 5 different displays -- 0 to 4 bars on the screen icon -- because that is a common format.

When Apple talked of making a software change, it was this mapping function they were discussing; they proposed changing one arbitrary mapping to another arbitrary mapping, one where the number of bars would be higher for lower signal strength values. If by chance this wasn't clear before, you should now be able to see that this mapping function has nothing whatsoever to do with the performance of the antenna system and radio module; the mapping does not affect reception quality, so modifying it does not solve the iPhone 4 antenna problem.
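To make the arbitrariness concrete, here are two illustrative mappings from the 0..31 level to 0..4 bars. The thresholds are invented -- neither is Apple's actual formula -- but note that the generous version shows more bars at the same signal level while the radio's reception is identical in both cases:

    # Two invented bar mappings over the same 0..31 signal level scale.
    def bars_strict(level):
        thresholds = [4, 10, 16, 22]   # level required for 1, 2, 3, 4 bars
        return sum(level >= t for t in thresholds)

    def bars_generous(level):
        thresholds = [2, 5, 9, 14]     # same idea, more flattering
        return sum(level >= t for t in thresholds)

    for level in (3, 8, 15, 25):
        print(f"level {level:2d}: strict={bars_strict(level)} "
              f"generous={bars_generous(level)}")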

Of course, changing the mapping function so that it shows more bars at lower signal strengths doesn't hurt. It may even be reassuring to those who worry that a call will be interrupted, or never begun, when there is just one bar displayed. This really isn't a terrible idea since, as described earlier and in the previous article, performance is probably just fine at that signal strength on a digital network, so why not remove the source of worry? The thing Apple should not do is claim that this change solves the problem, since it clearly does no such thing.

There is also the matter of the phone's transmission performance since it, too, is equally dependent on the antenna; antennas are in general reciprocal in receive and transmit usage, so a reduction of antenna performance affects transmission (from the phone to the base station/tower) equally. Since that process is a story by itself, I'll stop here, having covered the reception issue which is the larger problem.

Wednesday, July 28, 2010

Paid For Unethical Acts

Rather than be annoyed or outraged by Minister Clement's behaviour in the long form census discussion, I am instead saddened. This Globe and Mail article shows very clearly the curious juxtaposition of two seemingly incompatible versions of the same person: one who immediately reacts to save another human being's life at no small risk to himself, and the other a stone-faced political robot who is immune to reason and good sense in the pursuit of maintaining an absurd public position.

I can only wonder how one person can rationalize this degree of inconsistent behaviour. Tony Clement is clearly an intelligent and sensible person, and so I am sure he must struggle with this to a degree, at least in the privacy of his own thoughts. There is a real dilemma here that is not easily resolved: how does a person with a strong ethical foundation justify and deal with having to act unethically in pursuit of a paycheque or other life goals? Surely if he were to back down from his stance on the census he might get some immediate applause for a principled decision, but his political career, or more, could end up lying in ruins.

He is hardly alone in dealing with this. I am willing to bet that anyone reading this has, not once but numerous times, been instructed to obfuscate, mislead or even outright lie by the company or institution paying their salary. In most cases they comply, even if they do so with their insides in a knot of shame or disgust. Some can manage it without any qualms whatsoever, even when they are their own boss. It's common.

As one example, a customer service supervisor at Rogers explicitly lied to me over the phone about the nature of a long-standing service problem and what they were doing to expeditiously solve the problem. I had been transferred to her when, after many days of unsatisfactory action on their part, I demanded to cancel my service entirely. I didn't know she had lied until a few hours later when I got the real back-story from another Rogers employee who was also upset at not getting the support needed to get my problem resolved.

I can imagine that she is in every other respect a fine individual. But when told how to perform her job, in order to continue earning a salary and advance her career, she shoved her ethical values into a back pocket and did as she was told to do. It can be very easy for any of us to do the very same thing. I am myself less than innocent in this respect; many times I found myself promising customers what I knew very well the company I was representing could not possibly deliver. The way I compensated for this was by afterward pounding on others within the company to somehow get the job done. Even so I always felt soiled by the experience.

Minister Clement is now in this very position. Hopefully he can find a comfortable middle position that will allow him to feel he's done his job while also resolving the ethical dilemma. A bigger question may well be: will Stephen Harper allow him to do so?

Tuesday, July 27, 2010

Corporate Profit Revival and the Stock Market

In the ordinary course of my investing activity, I regularly try to figure out market sentiment: whether the larger population of investors are positive or negative on share prices moving forward. There are many technical indicators and surveys of fund managers that attempt to quantify sentiment, which all have their pros and cons. There is of course no certain way to know the future no matter how much data you look at.

One big question many are asking right now is whether the economy is truly on a course of sustainable recovery or if there is another decline in the offing: the so-called double-dip recession. Among the many economic and market indicators being paid close attention, corporate profits stand out. They've been rising. Under other circumstances this would give investors some confidence that the recovery is on track and there is therefore reason to be confident of rising stock prices. Unfortunately it isn't quite so simple.

Much of the profit recovery appears to be due to cost cutting, not a resurgence of demand. As any corporate manager knows, the quick route to immediate profitability is to reduce expenses since (to simplify greatly) revenue minus expenses equals profit. Not every expense can be cut, since producing revenue always entails unavoidable expenses. If you sell a motorcycle, for example, you need to buy or build the parts, assemble them, advertise, ship product to dealers and provide service.

What you need to do is reduce expenses that don't affect current operations. Some choices include cutting marketing and research and development, deferring capital expenditures, and reducing salaries, bonuses, travel and even office supplies. In other words, gamble with the company's future success. If the same pressures apply to a company's competitors, due to broader or global market conditions (such as now), the risk may be muted. Getting it right is no easy task since the company may miss the economic turn or some new innovation and therefore be poorly positioned during the recovery. Failure to invest in employees or production capacity risks losing talent and interrupting production due to equipment failures. That, in an ideal world, is why senior managers are paid the big bucks: to make these critical decisions.

From the outside, as ordinary shareholders, we have to ask if the last quarter's profit rise is sustainable or an anomaly due to expense reduction that cannot be repeated. More precisely, if the recession continues, profits and share prices may very well decline again within another quarter or two. Believing that, it would make sense to sell into the current market rally. However, if we instead believe that the recession is ending it makes sense to buy, or at least hold positions, since prices are bound to rise throughout the remainder of the year and beyond. Indeed, share prices may be jet-fueled by a combination of lower expenses and increased revenue, though only until companies are convinced the recovery will stick, at which point they will increase employment and resume capital expenditures.

On purely technical factors I admit to feeling bullish on the market, even in the middle of the summer doldrums and their low trading volumes. When I pored over numerous charts on Monday -- some that I own but most that I follow and I believe act as useful indicators -- I noticed that many stocks, but certainly not a large majority, are near or testing their 50-day exponential moving averages. This proves nothing except that, for these stocks at least, they are at a decision point. Or more precisely, investors in those stocks are at a decision point.
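For reference, the 50-day exponential moving average is nothing exotic: each day's value is a weighted blend of that day's close and the previous average, with the standard smoothing factor 2/(N+1). A minimal sketch with made-up prices:

    # 50-day exponential moving average (EMA) with the usual alpha = 2/(N+1).
    def ema(prices, n=50):
        alpha = 2 / (n + 1)
        out = [prices[0]]               # seed with the first price
        for p in prices[1:]:
            out.append(alpha * p + (1 - alpha) * out[-1])
        return out

    closes = [20 + 0.05 * i for i in range(120)]  # hypothetical closing prices
    print(f"last close: {closes[-1]:.2f}, 50-day EMA: {ema(closes)[-1]:.2f}")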

Banks mainly look good, but not all. Gold looks poor (unfortunately I hold a miner in my portfolio), while many staid industrials and more-exciting technology stocks look set to run higher. This is very much a non-rigourous view of the market, just my impressions from lightly skimming over some of the mass of market data.

I don't know what will happen next except that it will be important. August may provide more definitive answers as the nascent trends are either confirmed or fail.

Monday, July 26, 2010

Statistical Sampling and the Long Form Census

In the ongoing controversy over the federal government's intention to make the census long form optional there is an exceptionally high noise level over issues of governance, political interference in the civil service, tax burden, privacy, value of the long form to government and outside agencies, and much more. I will talk about none of that, and will thus casually sweep from the table all politically-related issues encompassed by the discussion. I do this so that I can focus on the technical aspects of the issue, in particular whether the long form census delivers reliable statistical data when it is mandatory, and when it is optional. The political questions are interesting of course, but these are not my focus in this post.

Stats Can has three broad options on how to go about the long form census:
  1. Mandatory
  2. Optional
  3. Scrap it entirely
From here it is possible to formulate a few simple questions about the entire census exercise:
  • Is the mandatory census, as conducted in 2006 and earlier, delivering valuable data? That is, are the results sufficiently reliable to be useful?
  • To what extent does making it optional lower the reliability, and will its results meet the test of being sufficiently reliable to be useful?
  • Are there good alternatives to the long form which deliver similar results with acceptably high reliability?
If the answer to the first question is negative, the utility of the long form census is highly questionable, raising the possibility of scrapping it entirely. In the case of the second question (assuming that the answer to the first question is positive), it is reasonable to assume that the reliability will be lower (based solely on sampling theory), but we must then determine just how much less reliable it is in order to judge whether it is worth doing at all. The third question is particularly interesting since some countries do manage to do this pretty well without a long form; however, the manner in which it is accomplished is important.

Let's start with the current long form and its reliability. There are two questions to be considered: is the sampling done in a way that meets the target statistical confidence in the conclusions derived therefrom, and is the data collected accurate? If this question interests you, I suggest that before continuing you read this article in the National Post, since it gives a flavour of the problem of collecting high-quality data.

As the article shows, whenever you involve humans in any survey there is an issue of quality. It is important to understand that this is entirely independent of the sample size and selection methodology. Humans can lie, be negligent, or simply be wrong no matter what they are being asked. The problem can be managed in part by making the census form easy to use, easy and quick for the responder to get right, and avoiding hot button issues that make people uncomfortable (e.g. tell us about your undiscovered criminal acts and your favourite kinky sex acts). The question about natural gas spending is a good example of a question that will often produce poor data.

The same difficulty is perhaps more obvious in political polls that ask which way you'd vote if an election were held today. These are carefully reported complete with statistical confidence levels, nationally, provincially, and by other responder characteristics (e.g. age and sex). This is all very mathematically rigourous but still totally wrong. The methodology used to calculate the confidence levels (e.g. plus or minus 3.8%, 19 times out of 20) assumes that responders are equivalent to coloured balls in a bag that are randomly drawn; if you have a bag of 100 balls and you draw 10 balls at "random" and you get 5 white and 5 red ones, there is a tried and true mathematical process to say something rigourous about the entire population of 100 balls. Not so with people. Some misunderstand the question, say whatever they believe the questioner wants to hear, lie in an attempt to skew the poll, or would never vote anyway.
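For the curious, the "plus or minus 3.8%, 19 times out of 20" arithmetic is exactly the balls-in-a-bag calculation: the 95% confidence margin of error for a simple random sample, at the worst-case proportion of one half. A quick sketch (the sample sizes are arbitrary):

    # Margin of error for a simple random sample at 95% confidence (z = 1.96),
    # using the worst-case proportion p = 0.5. Pure balls-in-a-bag math: it
    # assumes every respondent answers, and answers truthfully.
    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        return z * sqrt(p * (1 - p) / n)

    for n in (500, 665, 1000, 2000):
        print(f"n = {n:4d}: +/- {100 * margin_of_error(n):.1f}%")

A sample of about 665 gives the familiar plus or minus 3.8%, but nothing in the formula accounts for respondents who misunderstand, lie, or would never vote anyway.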

Remember, we're still talking about a mandatory long form census. Stats Can spends a lot of effort to make the sample (who is sent the long form) both sufficiently random and suitable for the types of analysis to which the collected data will be put, and it is not enough. Even the force of law to ensure sample integrity isn't good enough. Stats Can knows this, and they apply many supplemental filters to the data in an attempt to tidy it up a bit, which works only if their assumptions about how the data is skewed by human and other factors are reasonably close to the mark. However, they can never be certain.

If we now degrade the data by making the long form census optional, it becomes even more difficult to filter the data. As Minister Clement has stated, an expected reduction in the response rate will be compensated for by increasing the sample size. That's nice, but it doesn't help Stats Can all that much. The thing is, there is no good way to know who is opting out and why. There is an unknowable selection bias. Even if they know the location of each responder (this is apparently also in doubt, in addition to who does or does not respond), the additional unknown selection bias degrades any analysis of the collected data. Are people's reasons political, privacy related, education or comprehension, availability, or other factors? Stats Can can't know. They may also not have the ability to compare the 2011 and 2006 samples to determine what new filters to apply to the data.
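A toy simulation illustrates the danger. Everything here is invented -- the two sub-populations, their traits and their opt-out rates -- but it shows how, once opting out correlates with what is being measured, a larger sample merely converges more confidently on the wrong answer:

    # Invented example: two sub-populations as (population share, mean of the
    # measured trait, response rate under an optional form). The second group
    # has a higher trait value but opts out far more often.
    import random

    random.seed(1)
    groups = [(0.7, 50.0, 0.9), (0.3, 80.0, 0.4)]
    true_mean = sum(share * mean for share, mean, _ in groups)  # 59.0

    responses = []
    for _ in range(200_000):  # a very large sample doesn't fix the bias
        share, mean, rate = groups[0] if random.random() < 0.7 else groups[1]
        if random.random() < rate:  # does this person bother to respond?
            responses.append(random.gauss(mean, 10.0))

    print(f"true population mean:   {true_mean:.1f}")
    print(f"mean among respondents: {sum(responses) / len(responses):.1f}")

The respondent mean settles near 55 against a true mean of 59, and no amount of extra sample size closes the gap; only knowing who opted out and why (which Stats Can can't) would allow a correction.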

At this point there is some question as to whether the long form census should be done at all if the data and its analysis are degraded by some unknowable amount, especially in advance of the census being conducted. If the quality is deemed likely to be too low for its intended purposes, both by government and other organizations at large, it may be appropriate to scrap it entirely. The cost of an optional long form may even turn out to be higher than a mandatory form: more forms sent out and potentially processed, plus more filter analysis to recover some of the lost data quality, countered by lower costs of enforcing compliance.

If the results of the long form census are still deemed to be important even though it is untenable due to the loss of quality, there are alternatives, but perhaps not ones that are doable. Some countries maintain databases that integrate all the data that all government departments know about the country's citizens. This super-database can be good enough to make a long form census superfluous. Except, and (in my opinion) fortunately, this isn't done in Canada. We have set some pretty strong barriers between government departments and agencies which make it very difficult to share and integrate data across those barriers. As an example, reporting income to CRA that was obtained by, shall we say, peculiar means does not end up in the hands of the RCMP or other police forces. CRA can't even give your identity to Elections Canada without your consent. These barriers are actually respected and so can generally be trusted, which encourages more honesty when dealing with branches of the government. It is highly unlikely that Stats Can's needs would supersede the necessity and enforcement of those barriers. In other words, super-databases are not in the cards, for which I am most thankful.

I don't know how this will eventually shake out since it has become a political hot potato that has already resulted in the resignation of the head of Stats Can (presumably over a perception of political interference) and is generating a surprisingly heated debate across the country. In closing I will note that our Mayor, Larry O'Brien, says that the City of Ottawa will if necessary replicate what the long form census produces by some means. It will be amusing to see how he proposes to do this since, at a minimum, the City does not have the funding to do a local "census" and does not have the federal government's power to compel responses to any questionnaire the city sends to a sample of residents.

What a weird issue this is that has captured so much of the country's attention. Who would have thought it.

Sunday, July 25, 2010

Sing Along to the Giant Hogweed Invasion

First it was the invasion of purple loosestrife that we panicked about as it filled the ditches and fields, the foreign weed replacing native plants. The invasion continues while the panic has long since subsided. More recently it is zebra mussels and Asian carp that are the invading fauna (rather than flora) threatening our lakes and streams. Now it is the turn of the giant hogweed.
It's like something out of a horror film: A pretty plant that was once the darling of backyard gardens has escaped to wreak havoc on a community near you. It grows tall enough to dwarf adults and is armed with toxic sap that burns human flesh.
The unwelcome spread of this noxious plant is nothing new. It is an ancient invasion, human-mediated, much like rabbits in Australia. In fact it brought to mind a song written 40 years ago that I remember well, tucked away in my dusty collection of LPs and cassette tapes. It was written by none other than the band Genesis, and released on their Nursery Cryme album of 1971. This was very much the band in its early days, when Peter Gabriel was the leading personality in the group and Phil Collins had just joined as their new drummer. It was the heady days of art rock, a genre in which Genesis was a rising star.
Peter Gabriel's lyrics to "The Return of the Giant Hogweed" tell an apocalyptic story about a "regal hogweed" being brought from Russia by a Victorian explorer to the Royal Gardens at Kew. Later, after being planted by country gentlemen in their gardens, the hogweeds take on a life of their own and spread their seed throughout England, preparing for an onslaught. The citizens attempt to assault the hogweeds with herbicide, but the plants are immune. After a brief instrumental (subtitled "The Dance of the Giant Hogweed"), the song ends in a crashing climax where the hogweed reigns victorious over the human race.
I know, it sounds so apocalyptic, but the song is actually quite decent and humourous in its way. In my opinion it has aged not at all badly in the intervening decades, although I listen to it only rarely nowadays. Here are the closing words of the song lyrics to give you a taste (but beware of pop-ups and other junk you'll pull up if you click on any song lyric site). Even the species of hogweed is the same as in the current crisis.
Mighty Hogweed is avenged.
Human bodies soon will know our anger.
Kill them with your Hogweed hairs
Heracleum Mantegazziani

Giant Hogweed lives!
One thing that puzzles me is that the song is titled The Return rather than The Revenge of the Giant Hogweed. It reminds me of a long-ago minor controversy that saw the title of the Star Wars film changed to Return of the Jedi from Revenge of the Jedi, supposedly by George Lucas himself.
Lucas changed the title, saying "revenge" could not be used, as it is not a Jedi concept.
However, I doubt very much that anybody got on Genesis's back about the title choice, and, unlike the principled Jedi, revenge is very much the theme of the giant hogweeds in the song.

So dust off the old record collection or download the MP3, and then play the song the next time someone starts on about how we are all doomed by this majestic though nasty weedy invader. Just perfect for a lazy summer Sunday.

Thursday, July 22, 2010

Catching the Smart Phone Market Wave

Antennagate is not hurting iPhone sales, nor should we expect that it will. Once the market decides that it loves a product it takes a lot of pain to sever that relationship. While Apple's release of quarterly results this week does not reflect any loss of sales due to antenna problems -- the quarter ended before the issue became public -- there are ample indications that there is no business problem.
[Interviewer] Any changes in demand since antennagate?

Cook: “Let me be perfectly clear: We are selling every unit we can make, currently.”

Follow up: So you haven’t seen any slowdown in order rates, or any increase in returns?

Cook: “My phone is ringing off the hook with calls from people who want more supply.”
This is not unique to the iPhone, as even Toyota found out this year. When Toyota's sales dropped precipitously there was real concern that the company would suffer a blow it would not easily, or ever, recover from. Yet their sales have recovered quite nicely. Unfortunately I don't have the reference at hand, but there was a survey of car shoppers done at the height of the public crisis over uncontrolled acceleration and Toyota's apparent malfeasance and negligence. What the survey found was that buyers who were considering Toyota before the crisis arose were still considering Toyota.

Rather than buying a vehicle from another manufacturer they were content to wait for Toyota to solve the problem and, importantly, for the recession to end: all vehicle manufacturers were deeply hurt by loss of consumer confidence and the resulting deferral of big-ticket purchases. If customer loyalty survived a product defect that could kill you, I imagine that a malfunctioning antenna and public relations missteps would not seriously hurt Apple.
A recent survey by IDC found that 66 percent of people who own older iPhones are holding off on upgrades, and 25 percent of new buyers are now delaying their [purchases].

...barring any other foul-ups with the iPhone or other products in the near future, Apple should escape this fiasco with its reputation intact. "The best defense against it is to have a strong cushion of good will already established. Apple has that," Bernstein said.
Apple is not unique with smart phone product defects. As I mentioned previously, the Nexus One built by HTC for Google has an almost identical problem. Then there's Droid X with its own problems. The fact is that all smart phones suffer from a host of defects, most small but some that are large: user interface peculiarities, speed, multi-tasking, networking, screen and camera glitches, and so forth.

The sad thing about this is that it is not unexpected; releasing products with known defects is a necessary evil that manufacturers accept when there is a new market category -- smart phones -- that is so enthusiastically adopted that consumers cannot buy the phones fast enough. Just consider all the new phones that have rapidly sold out or even had people lining up to buy them the first day, including every iPhone version, Droid X and HTC Evo.

There is money on the table right now, and only a foolish company would delay products to fix every last defect, since gaining market share and riding the market wave demand that products be released early and often. If this is not done at the now-critical phase of smart phone adoption, there is real risk of losing the market to competitors, not just this week or this quarter but forever. Not every smart phone platform will survive, and survival requires maintaining market share and customer loyalty. Non-catastrophic defects can always be resolved in the next release (hardware defects, such as the iPhone's antenna problem) or fixed via updates downloaded to customers' phones (software defects). Customer loyalty in this environment is sustained with a rapid release cycle that delivers new features and, we hope, defect resolution.

Get used to dealing with defects for some time to come, and even Android "fragmentation" for that matter. The nature of the smart phone market ensures that this mode of operation will continue for at least the next one to two years. Eventually the market will stabilize, the quantity of platforms and variants will settle down to a workable number, and the manufacturers will have some leisure -- but not much! -- to fix their products before you buy them.

Wednesday, July 21, 2010

Wireless Profits and Price Competition

The incumbent wireless carriers in Canada are doing very well indeed. Not only are they among the most profitable of all Canadian corporations, they rank exceptionally well among all major global carriers.
The Canadian industry leads the world in terms of average revenue per user (ARPU), earning an average of US$54.73 per user per month. While Canadian carriers posted low per-minute revenue, value-added services such as caller ID and voice mail contributed to the high ARPU.

The average margin in the developed world was 38.3%, with U.K. firms posting the lowest result at 22.6%. The Canadian result was closer to the 42.2% average found among the 29 emerging economies in Europe, Asia and Latin America.
However this success does come at a cost, due to the high price of service.
Canada placed last among developed nations in penetration, at 69%, which was only three percentage points above the average penetration rate in the developing world, at 66%.
It is reasonable to conclude that there is more than mere correlation going on here: the high margins and ARPU are directly responsible for the high profits of Rogers, Bell and Telus. Ideally, competition is the tool to prick the profit balloon: by giving consumers more choice, it will push down prices and increase penetration. That is the idea behind the new spectrum licenses for Wind Mobile, Videotron, Shaw, Mobilicity and Public Mobile.

The market responded to the threat of competition, and therefore to profitability, by (at least in part) driving down the share price of Rogers around the time that Wind entered the market and others announced plans to do so this year. This was a bit premature, as more recent share quotes show, along with reports that Wind is not winning large numbers of subscribers from the incumbents. This will indeed take time, not only for the new entrants to build their networks but also to convince the public that their service is reliable enough to make the switch.

There is also the matter of price, since the incumbents will not remain idle. They will have to be careful with how they counter the lower prices offered by the new entrants, even if it is done under alternative brands such as Chatr by Rogers Wireless.
The first taste of that came Friday, as Mobilicity chairman John Bitove called reporters to his office and threatened to haul Rogers before the Competition Bureau or launch legal action. He sees the Chatr brand – specifically, talk of its too-close-for-comfort pricing plans – as an “abuse of power” that contravenes a section of the Competition Act dealing with temporary or targeted “fighting” brands. He said Rogers was trying to “destroy” his company.
Under the current federal government I am doubtful that the Competition Bureau or even the CRTC will be enthusiastic about getting involved unless the incumbents' prices become blatantly predatory by being set at levels well below cost. While the government has shown that it is willing to promote competition, even when it means overruling the CRTC and being "flexible" with regard to the Telecommunications Act, they are more relaxed about letting the market operate unfettered. There is also the matter of stock prices and employment: the incumbent carriers are major employers of Canadians and their shares are widely held in mutual funds and pension funds; the government will not want to open itself to attack on either front.

A possible strategy the incumbents could take would be to hide predatory prices within service bundles. If they lower the price of a bundle of services (e.g. TV, broadband, wireline telephony and mobile), or offer to add wireless to an existing bundle for, say, $10 more a month, it will be difficult to argue that it is the mobile component of the bundle that is getting its price cut rather than one of the other bundle components. However, if they go the route of separate brands for their cut-rate mobile services, such as Chatr, the bundling strategy does not work so well.

I suspect we will have to wait a while longer to find out what pricing strategies the incumbents ultimately settle on. They will not rush to lower prices until they believe they must -- to preserve high profits for as long as they can -- and this will not happen until the new entrants show some success at winning their customers' business. The pricing battle could become very interesting in the latter part of 2010 or early 2011.

Wednesday, July 14, 2010

It Is Not (too hot to blog)

One reason why I write this blog is to experiment with organizing my thoughts and clearly communicating them in written form. This doesn't require that I have many (or any) readers, only that there is the potential for people to read what I write. Just knowing that is enough to give me the discipline to make the effort to write well. I often miss the mark, but then that's what makes this blog worthwhile: practice.

A common problem that any writer frequently encounters is finding the right word, or choosing among several possibilities. Among the many times I've dealt with this dilemma, there is one that for some reason gives me pause since I don't know what choice is best. It's a silly little thing, perfectly suited to a mid-summer blog post when the heat is affecting my ability to pay any attention at all to my blog. My problem is the common word sequence: it is not.

My problem is not that there are many other choices -- using the phrase is uncontroversial and has few true alternatives -- but rather with how it is typically contracted in ordinary speech. The phrase is shortened in one of two commonly used ways:
  1. it's not
  2. it isn't
There is actually a third -- it ain't -- which we can safely ignore since it isn't (it's not) considered good English. Even keeping to the first two possibilities, I often wonder which is best.

The first choice sounds a bit rude to my ear, yet, so far as I am able to notice and recall, it is the more popular choice. But since it appears to be more popular, perhaps I'm wrong to dismiss it. What I would really prefer is a third (or is it fourth?) choice that would supersede the others.

The contraction I'm leaning toward is an admittedly peculiar one, although it has seen usage in poetry and other works. This contraction is tisn't. Of course there is the matter of unfamiliarity that makes it seem inappropriate, and there is some variation with the use of apostrophes -- t'isn't, 'tisn't -- which is true of any neologism or resurrection of an old word that had passed into disuse. On the plus side it removes the ambiguity of choice, and even comes with its own corrupt version: tain't.

On the other hand, if this is the sort of thing I'm worried about maybe I'm better off taking a blogging vacation. But not just yet.

Monday, July 12, 2010

G20 Summit's Lasting Political Legacy

Now that the G8 and G20 summits are fading into the past, we can begin to speculate on what will survive the immediate political and public relations first impressions. This will not likely include any of the formal agreements and commitments arising directly from the summits since the track record of countries following through on those is quite poor. Much like the summits themselves, before long both we and they will pay little attention to promises made, and indeed it is very likely that we will even forget what those promises actually were.

The real political legacy for Canada in particular is likely to be unrelated to the formal agenda of the summits, and it may haunt the governments of both Canada and Ontario for some time. I suggest that this legacy will take two forms:
  1. Comparisons of the ~$1.2B spent on security to other government program spending.
  2. How governments set the balance between security of politicians and citizens versus the rights to free speech, media privilege and protest.
For the past two post-summit weeks, both governments have not been doing so well. First, comparisons of security expenditures are already coming fast and furious. These span spending on various social programs, bailouts of GM and other corporations, and now disaster relief for farmers in Saskatchewan:
NDP Leader Dwain Lingenfelter said the $30 an acre was a slap in the face. He compared the $360-million payment ($144 million provincial and $216 million federal) to the $1.2 billion the federal government spent on security for the G8 and G20 summits in Ontario.
Regardless of whether the comparisons are justified, they are being made and we can expect them to continue. Even if this aspect of the summits' legacy dies down after a while, expect it to be brought to the fore once more when the next federal election occurs. This could be as soon as this fall.

On the second point, so far it is the Ontario government that is taking the brunt of the backlash. Certainly both the City of Toronto and the federal government share the responsibility for how security was managed, and their time in the spotlight will come, but for now it seems that the Ontario government is the primary focus. Sympathy for the protesters themselves is not terribly high, although theirs have been the loudest voices in the aftermath. The real impact will come from quieter voices: businesses that sustained physical damage and loss of revenue, Toronto citizens who were inadvertently victims of the police, and reporters who were sometimes excessively restricted or the target of security actions.

Since these voices are quieter it may take some time until the impact is properly felt. When it is, there is the potential to damage the reputations and prospects of a number of politicians, because the majority can identify with the victims of collateral damage, who are therefore likely to get a sympathetic response. While difficult to predict, I would say that it is Premier McGuinty who faces the greatest risk.

I was quite amused when Ontario Ombudsman André Marin announced that his office would investigate the Ontario government's secretive legal and political manoeuvring to establish new security procedures in the lead-up to the summits, and how these were communicated to the police forces on the street. I was amused because of the government's recent attempt to smear the reputation of Marin -- who has a highly positive public profile due to his strong performance -- before relenting and reappointing him to his office. I am sure that he, too, is amused, and relished the opportunity to get involved once his office received enough complaints to justify action on the file.

As for the police themselves, across all the forces involved, I suspect they will come out fine in the end. There undoubtedly were some poor actions on their part, but I think it is their bosses, the politicians who authorized the laws and gave the orders, to whom the mud will deservedly stick.

Wednesday, July 7, 2010

Rogers Wireless Goes Down-market with Chatr

I hadn't intended to say anything more about Rogers Wireless' plans for their new Chatr brand, until I read this article. If these Rogers executives are being honest in this interview about their marketing objectives, they believe that the new entrants are aiming at the budget end of the market.

This may be true of Public Mobile, which has stated they are after the urban, budget consumer, but it is less true of Wind Mobile. If this is indeed Rogers' competitive objective, I believe they are making a mistake. The mistake is in conflating two very different groups of consumers:
  1. Those who can't afford to pay; and,
  2. Those who want to pay less and get more.
Lower-priced plans, it is true, can appeal to both groups. However, the fact that the new entrants are offering lower prices and better terms does not mean they are all after the first group of consumers. For example, what may be true of Public Mobile is not true of Wind Mobile which is offering data plans and some higher-end phones. Wind is going after Rogers' bread and butter market, but with lower prices and better customer service. Both of these attributes appeal to the second group of consumers, but may also prove attractive to the first group.

While Rogers Wireless may be willing to compete on price with the Chatr brand, I have to wonder why they have not done so already under the Fido brand, and whether they will ever address the second problem area: customer service. Their current system (much to my own dismay and that of so many of their customers across all of their services, not just wireless) is focused more on avoiding customer service to reduce operating costs. I had a chuckle when I read this gem from the Globe and Mail interview:
[But] as we looked at some of the customers that left Rogers to go to [new entrants], and it was a smaller number than we ever imagined it to be, but we still called them: Why would they leave us?
Rogers actually called a customer to ask them what they thought of Rogers' service? That's unbelievable. Perhaps they conducted a spot survey of a few defectors, but their time would be much better spent engaging with existing customers before they make the decision to leave.

Monday, July 5, 2010

Smart Phones and Signal Strength

Along with the rest of the world, I have been looking on in stunned disbelief at how poorly Apple is dealing with the iPhone antenna problem. I am less troubled by the problem itself -- although it is not inconsequential -- since it has the smell of an unfortunate engineering compromise.

As this article indicates, signal attenuation is hardly unique to the iPhone. I can confirm that the Google (HTC) Nexus One does indeed have the same problem, as the article shows in its comparison. In a fringe reception area the problem can be severe, leading to the lost calls reported by iPhone users. Most urban users with a long-established carrier are unlikely to see the problem, or only intermittently when travelling. In my case the Nexus One is registered on Wind Mobile, and their coverage is poor, seemingly relying on fewer base stations ("towers") than Bell and Rogers to cover the city. Hopefully they are working on filling in those holes, but for now I am in a fringe area where the signal loss from simply holding the phone is catastrophic.

Fringe area performance is so noticeable because with digital transmission technology there is only a small signal strength interval between no signal and a perfectly clear signal. This wasn't the case back in the days of analogue technology (e.g. AMPS), where the degradation was gradual, unlike the modern phenomenon of sudden transitions between signal loss and acquisition. That is, you may not realize you are in a fringe area, and therefore prone to a dropped call, until it happens. This relates to the number of signal bars that are shown, and partly excuses Apple's reporting of more bars, since with digital transmission it isn't that big a deal provided there is enough signal to work with. In other words, if the phone works, for most people the number of bars is superfluous information.

Of course phone users do care about how many bars they get since it is one of the few ways they can judge how well their carrier is performing. In this respect, Apple's move to increase the number of bars reported (which has nothing at all to do with the phone's radio performance) annoys their customers. It is not as serious as the speedometers in some early generation Japanese cars (early to mid-1970s Toyota Celicas come to mind) that consistently, and deliberately, read 10% higher than the true speed, but in both cases the only purpose is to mislead customers into an unwarrantedly favourable impression of the product.
CNET, for instance, ran some tests, and suggested that Apple might simply be juicing the signal display to make it look like its phone was getting good reception.
What I mentioned about an engineering compromise is worth comment. The insides of these big-screen smart phones are packed wall-to-wall with components. For best performance the antennas need to be kept away from metal (many components and even the case) and away from coupling to or transmission through semiconducting objects (the human body). The true situation is more complex than this short description, but it is enough to say that, under the many other constraints in place, placing the antennas around the outer edge of the phone is not unexpected. Avoiding coupling to the hand is best accomplished with distance, where even a few millimeters can make a difference, such as that provided by the rubber bumpers that Apple sells (I'm not aware of anything similar for the Nexus One).

Was Apple aware of this problem when they released the phone? I believe they were, and indeed it would have been extremely difficult for them not to know. But when it comes to large corporations and diverse priorities between engineering, marketing and management, it should surprise no one that engineering concerns about antenna performance would be acknowledged and set aside in favour of getting the phone into the market. After all, few people pay the slightest attention to the radio performance of a cell phone, preferring to focus on features such as camera megapixels, colour and song capacity, and the vendors respond accordingly. There are companies out there that do have a good reputation for radio performance: Motorola and even Nokia are examples that come to mind, but even they are inconsistent across the years and product lines so don't rush to them too quickly if that is what you want.

Apple will survive this road bump, probably even if they continue to obfuscate and mislead in their public relations. The brand is simply too strong and the majority of users won't even see the problem. However, a little dose of honesty wouldn't hurt them either.

Friday, July 2, 2010

Wireless Branding

Multiple branding is a common practice in all large, consumer-oriented corporations. Whether it is the seemingly countless household products sold by Procter & Gamble or the many lines of automobiles sold (or recently retired) by General Motors, they not only market products under many brands, these products even compete with each other. The wireless carriers in Canada do the same, with the latest entry being the rumoured chatr by Rogers Wireless.

Since this practice of multiple brands may on the face of it seem both bizarre and counter-productive, it is worth a look. After all, these are businesses, so there must be an advantage to the practice. In particular, there are reasons why the big three wireless carriers -- Bell, Telus and Rogers -- all do it (Solo, Koodo, Fido). Rogers' chatr will merely be the latest to appear on the scene. These brands are, as one outfit irreverently calls them, pseudo-MVNOs.

Just as with P&G and GM, the brands they market often share the same factories, business structures and components, differing only in style and presentation. That is, they differ in their marketing. The wireless carriers' brands are the same: once you've bought your phone and service contract, you use the same networks as the carriers' primary brands, and suffer under the same customer service, billing practices, and contract terms and conditions. Indeed it's worse than with cleaning products and cars, since with those there is at least a chance that there are some small differences among the branded products.

There are three key reasons, in my view, why corporations follow the multi-branding marketing strategy:
  1. Illusion of choice - Consumers like choice. In a market where there is only one choice, even if that service or product is of decent quality and offered at a fair price, it will be treated with suspicion and attract attacks, whether or not those attacks are justified. Carrying the costs of multiple brands (and the costs are many) shields large corporations with a monopoly or dominant position in their market. People are not blind to the tactic, yet it is good enough to successfully deflect criticism in many cases. All it takes is a different line up of phones and service plans to complete the illusion.

  2. Corporate aversion - We tend to celebrate the sudden emergence and success of upstart companies. We do this since we tend to identify with or envy their success, knowing that in our economy and society, with a little luck and skill, we could each do the same. However when these companies then grow to be massive and ubiquitous we become uncomfortable and suspicious of them, just as we do with any large, dominant corporation or institution: Google and Apple, for example. Right or wrong, the public's reaction is pretty typical. By distributing their public image across multiple brands, corporations can defuse much of this animosity.

  3. Market dilution - If I give you a coin and you flip it, there are two possible outcomes. Label each side with the name of a large wireless carrier, say Rogers and Bell. Now I give you a six-sided die whose four added sides carry four more names: Solo, Fido, chatr and Koodo. You now have six possible outcomes when you roll the die, or at least you might think so. Let's now go further to an eight-sided die, to which we've added Wind and Public Mobile. Roll the die and there is just a 1-in-4 chance you'll get one of the new entrants. That is market dilution. It's all completely transparent yet -- in concert with the illusion of choice -- it reduces the marketing effectiveness of the new entrants. All these brands are more than just names, as they must be to achieve dilution, and therefore include distinct advertising campaigns and retail channels. A naive, inattentive or rushed shopper is thus more likely to encounter an incumbent when shopping.
Rogers' intention with regard to the final point -- market dilution -- seems fairly clear, as the following article extract indicates:
Analysts suspect that the brand will toss confusion into the crowding Canadian marketplace by adding yet another new option in addition to the three new entrants and four existing budget “flanker” brands owned by the incumbent wireless carriers. BCE Inc., for example, owns both Solo Mobile and Virgin Mobile Canada, while Rogers’ already owns Fido, and Telus Corp. has Koodo Mobile.

There is also a sense that Rogers will use the brand only in places where the company faces fresh competition from new entrants – such as Wind Mobile, Mobilicity and Public Mobile, and later cable companies launching wireless services – and not in areas where it would simply be providing consumers with a lower-cost option to its existing services.
Multiple branding strategies don't always succeed, but very often they do. With an increasing number of market commentators and media now ready to point out the translucent relationship of many wireless brands to their corporate owners -- to distinguish them from truly distinct brands with their own networks -- there is a good chance that, over the long term, consumers will see through the tactic more often. This does not mean that the new entrants will win consumers' business, only that they will stand a better chance of competing on their merits.