Revisiting Product Naming

A couple of months ago, I wrote about the chaos and difficulty of the product naming process. Today, I read an HBR article on this issue that offered some perspectives I had not previously considered. Specifically, the article discusses the problem of selecting a name for a next-generation product. Say your company has an existing product named X. If you release another version of the product a couple of years later, do you keep the same name (X) or come up with a new one (say, Y)? What factors should influence your decision?

Customers’ Expectations: The article notes that when customers see the same name on the newer product, they expect improvements to existing features. With a new name, they expect a fundamentally different product and, surprisingly, they perceive the new product as riskier than the older model. This perception of risk in a newer model is the key to deciding what to name your product. It is interesting how something as simple as picking a name becomes an exercise in risk management.

In this case, managing risk begins with a fundamental grasp of your customers and your competition. If, for example, your target market is techies who are comfortable experimenting with newer technologies, a brand-new name might help. On the other hand, if your target customer is a regular business user who uses your product simply as an aid to get their job done, it might be safer to present a consistent name. A good example of this is the Microsoft Office line: although the products have gone through plenty of changes, maintaining the same name helps customers upgrade without any perception of risk.

If your product is the underdog in the market, it might help to boost your image by picking a new name for a new model. The example cited in the article is that of AMD. Although AMD’s Athlon processor was only a newer version of its earlier K5 and K6 chips, the company changed its naming convention and chose Athlon over K7. This seems to have helped, as the Athlon did much better than its predecessors.


The Incandescent Light Bulb

“Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke)

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” (Mark Weiser)

In 2011, the electric light bulb hardly seems magical, but it has certainly ‘weaved itself into the fabric of everyday life’. The bulb was certainly magical, though, when it was first introduced in the late 19th century.

I have been writing about the Smart Grid over the past few days, and a post about the light bulb certainly seems in line with the theme. The April 2011 issue of IEEE Spectrum has an informative article about the history of the incandescent light bulb. If you are not interested in reading the entire article, at least check out the great photos; many of them date back to the early 1900s.

The most surprising thing about the early light bulbs was the choice of materials used for the filament: bamboo fiber, cotton, paper and grass, among others. All of these materials seem extremely combustible; I would never have imagined that you could heat them enough to emit light. It turns out that their behavior in a vacuum is far different from what would happen in the presence of air (oxygen). Another fun fact: in the 1940s, light bulbs came with an instruction manual!

A few years ago, one would have read this article in a magazine, appreciated it and then closed the magazine. But now our hyper-connected world makes it possible to interact and learn more in ways we had not anticipated. I initially read the article in the ‘old-world’ print format. Since I liked it, I visited the website to see if there was any additional content. There was nothing from the publisher, but there were user comments at the bottom of the article. One of the comments mentioned the ‘Phoebus Cartel’, a term I was not familiar with. This cartel earned its place in history by supposedly imposing an artificial lifespan of 1,000 hours on each light bulb. There is more information about the cartel on Wikipedia. There is no hard evidence that the cartel imposed this lifespan, but it certainly controlled competition and the sales of light bulbs.

The incandescent bulb has lasted over 130 years, and it will be exciting to see how the new generation of compact fluorescent and LED bulbs evolves.


What is the Smart Grid? — Part 2

In my previous post, I gave a brief overview of the Smart Grid. There are many other resources on the Web that provide a good introductory overview of the Smart Grid.

Videos:

  1. PBS: Smart Grid
  2. GE Reports: Smart Grid Discussion

Technical Articles:

IEEE journals such as the Transactions on Smart Grid and the Power and Energy Magazine are good sources for keeping up with the latest academic research. Here are a couple of papers that provide a good overview of the Smart Grid (unfortunately, IEEE does not provide free access to them):

  1. F. Li, W. Qiao, H. Sun, H. Wan, J. Wang, Y. Xia, Z. Xu, and P. Zhang, “Smart Transmission Grid: Vision and Framework,” IEEE Transactions on Smart Grid, vol. 1, no. 2, pp. 168–177, 2010.
  2. E. Santacana, G. Rackliffe, L. Tang, and X. Feng, “Getting Smart,” IEEE Power and Energy Magazine, vol. 8, no. 2, pp. 41–48, 2010.

The second paper is ABB’s position paper on the Smart Grid; its authors are ABB’s CEO and VPs.

These resources should give you a good idea about the Smart Grid. 



What is the Smart Grid?

The electric grid is the vast network that transmits electric power from the place of generation to the end consumer. Most of our electricity is generated at large centers such as thermal, hydroelectric and nuclear power plants. In the US, fossil fuels (coal, oil and natural gas) generate almost 70% of the total electricity; nuclear plants account for about 19% and hydroelectric plants contribute another 7%. See the full table over at Wikipedia. The majority of electricity consumption occurs in homes and buildings (commercial and industrial). Most people are completely unaware of the engineering involved in transmitting electricity from the source to the user; we take it for granted that wherever we go there will be an outlet in the wall to plug in our devices. The figure below shows a high-level overview of the electric grid’s current architecture.


[Figure: Basic-grid, a high-level overview of the current one-way electric grid]

Notice in the figure above that our current grid has a one-way flow of electricity from generators to consumers. There is no link in the reverse direction (from the customer all the way back to the generation center). What would a reverse link accomplish? More importantly, what ‘substance’ should flow on it? Clearly there is no point in transmitting electricity back to the source. But what would be worth transmitting in the reverse direction?

Just as the technology that enables the transmission and distribution of electricity on the grid is largely hidden from the public, so are the many entities involved in ensuring its smooth operation. We are mostly familiar with the local utility company to whom we pay our monthly electricity bills. Some of the other players include the Service Providers and Operators that provide direct support to the consumer. Then there are the Energy Markets that determine the price of energy.


[Figure: Actors-involved, the entities that keep the grid running]

These additional actors in the grid would benefit tremendously if they had access to real-time information about the status of the grid. How healthy are the transmission lines? Do any of the distribution centers need maintenance work? Which localities have greater load demands? To answer these sorts of questions, the grid needs to have some ‘smarts’. Information needs to travel from the consumer to the source of generation and to all other players involved.
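To make this concrete, here is a minimal, illustrative sketch in Python of the kind of status report that could flow back ‘upstream’. Everything here (the field names, the node identifiers) is my own invention for illustration, not from any actual Smart Grid standard:

    from dataclasses import dataclass

    @dataclass
    class GridStatusReport:
        """An illustrative status message flowing upstream from a grid node."""
        node_id: str            # e.g. a feeder or substation identifier
        load_kw: float          # current load demand at this node
        needs_maintenance: bool

    def triage(reports):
        """Answer the operators' questions: who needs maintenance,
        and which locality is drawing the heaviest load?"""
        flagged = [r for r in reports if r.needs_maintenance]
        heaviest = max(reports, key=lambda r: r.load_kw)
        return flagged, heaviest

    reports = [
        GridStatusReport("feeder-12", load_kw=850.0, needs_maintenance=False),
        GridStatusReport("substation-3", load_kw=4200.0, needs_maintenance=True),
    ]
    flagged, heaviest = triage(reports)
    print(flagged)    # nodes reporting trouble
    print(heaviest)   # locality with the greatest demand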

In a previous post, I explained the factors motivating the work on the Smart Grid. One important factor that I missed was the need to modernize the existing electric grid. An introductory video on the DOE’s website gives a good visual example of how consumer technologies have evolved over the last 50 years while the grid infrastructure that powers all these new technologies has remained stagnant. These screenshots tell the story:

[Figures: Old-technologies and New-technologies, screenshots from the DOE video]

As you can see, audio players have evolved from huge vinyl disc players to sleek iPod nanos; TVs have gone from bulky CRTs to super-thin LED panels; the latest generation of passenger airplanes, such as the Airbus A380, can carry over 500 people, and military aircraft have all sorts of technologies that are never even made public; and of course the simple telephone has evolved into the smartphone. The one technology that has not changed in the last 50 years is the electric grid. Upgrading the grid’s components to modern devices can yield tremendous benefits in energy efficiency and cost savings.

A lot of work that falls under the Smart Grid umbrella involves upgrading the grid infrastructure to use the latest technologies in power generation and distribution.

To summarize, the Smart Grid is an upgraded, modern electric grid that uses the latest technologies in energy generation, transmission and distribution in combination with the latest communication technologies to relay information across the entire grid. This can be visually summarized as: 

[Figure: Conceptual-model-full, NIST’s conceptual model of the Smart Grid]

Source for images: NIST Framework and Roadmap for Smart Grid Interoperability Standards, Release 1.0 (NIST Special Publication 1108), January 2010



Lessons from Irrationality

I don’t intend this post to be a full-fledged review of Dan Ariely’s ‘Predictably Irrational’; there are plenty of reviews out there if you are interested. This post is more a list of highlights that I would like to remember. The book covers a range of topics with applications in areas as diverse as business, marketing, behavioural economics, self-help (yes, it has a chapter on how you can improve your self-control) and morality.

  1. Relativity and the Decoy Effect: Our consumption (or, more accurately, our buying) habits are influenced by the choices we are offered. The book has a couple of interesting case studies on this effect. Williams-Sonoma’s sales of home bread makers took off only after it introduced a higher-priced model; until the second model came along, buyers had nothing to compare the original to and were less inclined to buy one. Restaurants that purposely add a high-priced item to their menu find that orders for the second-highest-priced items go up. Dan also describes an experiment on how the price offerings for an online newspaper subscription influenced customers’ choices. The decoy technique is definitely something product managers should consider. Look at your product price list. Do you see any decoys? Would you benefit from an intentionally higher-priced offering? Would your customers then pick the second, lower alternative?
  2. Anchoring: James Assael, the diamond dealer, made black pearls desirable by having them introduced as an exclusive offering from the prestigious jeweler Harry Winston. Once people associated black pearls with Harry Winston (a brand they already knew), they instinctively embraced the idea of paying big bucks for them. This concept is probably very familiar to people in the advertising industry. If you are a technical product manager and not too keen on reading a lot about marketing and advertising, this book would be useful.
  3. Price Memory: Ariely makes an interesting case that if we had no memory of previous prices, we would not be affected by price increases. Do I feel that $3.50 for a gallon of gas is expensive only because I remember the days when it used to be $0.99? Probably true. But how do you make use of this idea in running your business?
  4. Zero Cost: People’s behavior changes considerably when they are offered free products, though much depends on where and what is offered for free. Dan recounts some of the experiments his team ran by offering free chocolates at MIT. I suspect the reactions would have been considerably different if such freebies were offered in poorer places, but that’s beside the point. Viral marketing advocates using free material to get your product’s name out into the market. One needs to be careful with how this idea is used. A case in point: the reports claiming that services like Groupon are actually hurting merchants, because people are getting used to the idea of lower prices.
  5. Losing Trust: Many of the techniques that Dan covers in the book are used by people in marketing and advertising. It would be prudent to keep the chapter on ‘The Cycle of Distrust’ in mind and not get carried away with dubious marketing plans. People have been ripped off by bad business practices so often that they tend to be wary of all offers. In today’s social-media-obsessed world, news of your company’s bad policies can spread virally and cause real damage. It is useful to understand the irrational behaviours of the human mind, but there is a line between marketing and cheating. Be careful that you do not overstep it.

The Driving Forces behind the Smart Grid

What are the driving forces behind the attempt to build the modern Smart Grid?

1. Energy Efficiency: The modern world economy is hungry for energy. The human population is close to 7 billion, and a substantial percentage of it lives in under-developed countries. Without an adequate energy supply, no nation can develop and sustain a growing economy; energy supply has become a matter of national security for many countries. With so many countries trying to get hold of limited resources, there is bound to be upward pressure on energy prices. In such circumstances, every nation has a lot to gain by improving the efficiency of its existing supplies: reduce wastage and losses and do more with less. At the generation, transmission and distribution levels, upgrading the infrastructure to newer technologies could reduce losses and improve energy efficiency. At the individual consumer level, it is believed that people tend to consume less when they are given real-time feedback on their energy consumption. For example, if people were able to see their home’s energy consumption on an hourly or daily basis, they would adjust their usage habits to reduce consumption.

2. Energy Diversification: Our main, and traditional, energy sources are coal, oil and gas, hydroelectricity and nuclear power, and the entire electricity infrastructure is built around them. A large thermal, hydroelectric or nuclear plant generates electricity that is then distributed to individual homes and buildings. The vast network of plants, sub-stations, and transmission and distribution lines that transfers energy from the bulk-generation center to the end consumer constitutes the electric grid. With more generation options such as solar, wind and other green technologies vying to become a big part of our energy supply, the infrastructure has to be updated to accommodate them. For example, solar panels can be placed on every roof-top, which would make every building not only an energy consumer but also a generator. What if some of these buildings generated more than they consumed? Would they be able to supply their excess energy to the rest of the grid? That’s where a smarter grid would help (see the net-metering sketch after this list).

3. Utilities’ Efficiency of Operation: Consider the humble electricity meter in your home. The meter itself reliably performs a fairly simple task: keeping track of your energy consumption. However, the utility company has to spend a lot of resources collecting the readings and reporting them back to the consumer. The typical process involves sending employees to every home once a month to read the meter; that number is then used to calculate the bill, which is mailed to the consumer. This process is obviously inefficient, and energy companies have a great incentive to reduce the time and resources spent on such a trivial task. This is driving the innovations occurring in the Advanced Metering Infrastructure (AMI) space. Imagine the savings if every meter were converted into a smart meter that automatically sent its reading to both the energy company and the consumer (a toy sketch of such a report follows this list).

4. Grid Reliability: At a recent Smart Grid conference, one of Duke Energy’s employees shared an internal joke: their SCADA system was ‘Someone Called And Duke Answered’. In many remote areas, electric utilities typically have to depend on people calling in to report loss of service. Utilities have always wanted an efficient method of maintaining their vast networks; with a distribution network literally spread across thousands of miles, it is extremely difficult to pin-point the location of a problem. What if every electric pole were equipped with a cell-phone that would automatically call the local office if it ever detected a problem in its vicinity? Smart Grid communication technologies aim to make such reporting a reality. The need to pin-point trouble spots is not new, but automatic reporting is now technologically feasible.

5. Moore’s Law: No discussion of the rapid growth of electric and electronic technology over the last forty years can ignore the effects of Moore’s Law. In many ways, the Smart Grid is the inevitable by-product of developments in other technical areas. Our computation and communication technologies not only have great capabilities but are also extremely cost-effective. It is economically viable to build wireless or wired communication capability into every household appliance and electrical device, and it is now feasible to convert every electrical device into a fully electronic one. This means that an appliance such as a dishwasher is no longer just passively converting electrical energy into kinetic energy to clean dishes; it can also actively convey information about its operational status, energy consumption etc. to other devices. Every electrical device is transformed into a smarter electronic device.

6. Dynamic Pricing: The aim of Dynamic Pricing is to reduce peak energy demand and thereby spread energy consumption more evenly across time. Most energy is consumed during working days, and there is a considerable gap between demand at peak hours and at off-peak hours. These fluctuations place a strain on the grid; if the demand peaks were to disappear, or at least shrink, the grid would not be under as much stress. If utilities had the ability to dynamically change the price of electricity, consumers would presumably try to use as much as possible during the off-peak, low-price hours, spreading out their energy consumption. For example, if the price of electricity were lower in the middle of the night, consumers might benefit from running the dishwasher at that time. But how would the dishwasher know when the price was low? How would it start operating in the middle of the night? The communication technologies being developed for the Smart Grid aim to address these sorts of issues (a toy scheduling example follows this list).
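To illustrate the net-metering question from point 2, here is a toy Python sketch; the function and the numbers are hypothetical, purely for illustration:

    def net_grid_flow_kw(generated_kw, consumed_kw):
        """Positive: the building exports surplus power to the grid.
        Negative: the building draws power from the grid as usual."""
        return generated_kw - consumed_kw

    # A rooftop-solar building on a sunny afternoon (made-up numbers):
    print(net_grid_flow_kw(generated_kw=6.5, consumed_kw=2.0))   # 4.5 -> exporting
    # The same building at night:
    print(net_grid_flow_kw(generated_kw=0.0, consumed_kw=1.5))   # -1.5 -> importing

A smarter grid is what turns that positive number from a curiosity into power that neighbors can actually use.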
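For point 3, here is a minimal sketch of what a smart meter’s automatic report might look like. The payload format and the recipient names are my own assumptions, not any real AMI protocol:

    import json
    from datetime import datetime, timezone

    def build_meter_report(meter_id, kwh_reading):
        """Package a reading once; the same payload can go to the utility's
        billing system and to the consumer's usage dashboard."""
        return json.dumps({
            "meter_id": meter_id,
            "kwh": kwh_reading,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    payload = build_meter_report("meter-0042", 13571.8)
    for recipient in ("utility-billing", "consumer-dashboard"):
        print(f"-> {recipient}: {payload}")   # no meter-reader visit required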
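And for point 6, once a day-ahead price forecast is available, the dishwasher’s decision reduces to a simple scheduling problem. A toy sketch with made-up prices:

    def cheapest_start_hour(hourly_prices, run_hours):
        """Slide a window over the day's price forecast and return the start
        hour that minimizes the total cost of a run_hours-long cycle."""
        costs = {
            start: sum(hourly_prices[start:start + run_hours])
            for start in range(len(hourly_prices) - run_hours + 1)
        }
        return min(costs, key=costs.get)

    # Hypothetical cents/kWh forecast for 24 hours, cheapest overnight:
    prices = [8, 7, 6, 6, 7, 9, 14, 18, 20, 19, 18, 17,
              17, 16, 16, 17, 19, 22, 24, 21, 15, 12, 10, 9]
    print(cheapest_start_hour(prices, run_hours=2))   # 2 -> start at 2 a.m.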

Each of the points above deserves a more detailed discussion. I plan to explore the Smart Grid space a lot more in the coming weeks.


Should Engineers also act as Tech Support?

Should engineers also act as Tech Support? This question crops up quite regularly. I doubt it is an issue for engineers in large companies with dedicated customer support teams, but engineers in most start-ups and small companies are likely to face it. Although my industry experience is primarily with software products, I suspect engineers in all other fields have to deal with this question too.

My short answer to the question is: No. But like all good engineering answers, the proper response is: it depends. To get a better understanding of the issue, the original question must be rephrased, because it is vague. "Should engineers also act as Tech Support?" implies that doing Tech Support is a regular part of the engineer's job, and that is why my short answer is a simple 'No'. Let's rephrase the question: Should all engineers have the experience of doing Tech Support for the products they design and build? This phrasing has different implications, and my short answer to it is "Absolutely yes". Engineers benefit greatly from the Tech Support experience, but it is neither necessary nor beneficial for them to handle support issues regularly.

There is a list of ‘Pros’ for this question over at Stack Overflow. From a cursory look, the reasons cited seem good, but a deeper examination reveals flaws in every claim. I will examine these claims here.

  1. Gaining Exposure to the User Perspective: It is true that spending some time with end-users gives developers a different perspective on the product. However, there are far better ways to gain this knowledge than doing Tech Support. Product Managers should be the key link between developers and end-users: good PMs not only relay user feedback to developers but also ensure that developers get the chance to interact directly with end-users. Trade shows, technical conferences, user groups, focus groups etc. are far better venues for feedback than Tech Support calls.
  2. Gain Domain Knowledge: Once again, this is a bogus claim. Most Tech Support calls come from clueless end-users with RTFM-type questions. Even in non-RTFM cases, end-users rarely call in with anything insightful enough to contribute even remotely to the developer's domain knowledge. And once again, there are much better ways to gain domain knowledge.
  3. Marketing Benefit for the Company: Having good Tech Support is definitely a huge marketing benefit. But where does it say that this good Tech Support should come from your engineers? Management in many start-ups and small companies seems to believe that only the best engineers can provide the best tech support. They need to weigh that belief against the productivity lost by having highly paid engineers do tech support.

Overall, the tech-support-and-engineers issue is one that needs balance. From an engineer's perspective, I would highly recommend that all new engineers spend some time doing Tech Support. This is very useful if you are new to a company with an established product: it is a quick way to learn a lot about the product, since you will be under pressure to provide timely responses to customers. The initial few weeks or months are also a good time to build your domain knowledge and user perspective, especially when you switch companies and move to a completely new industry.

There is another crucial thing you will learn by doing Tech Support: the ability to spot the bad 2% of your customers. Seth Godin explains this beautifully, and paraphrasing it here is unnecessary: "But if you try to delight everyone, all the time, you'll just make yourself crazy. Or become boring." (The emphasis is mine.)


What’s in a Name? The Eternal Quest for the Perfect Product Name

One of the most frustrating tasks in bringing a product to market is finding a name for it. There are two main reasons why picking a product name becomes so complicated in many organizations. The first is that everyone in your company probably feels qualified to make a suggestion. Why wouldn't they? For a decision about a deeply technical issue in product development, no one beyond a handful of smart engineers will come forward with suggestions. But when it comes to things like finding a name, everyone is an expert and everyone has a cool suggestion. The second reason is that the people responsible for the decision might not be following a proper process to find a good name.

The typical process involves restricting the discussions and decision making to a few core people. This eliminates a lot of useless discussions and long, unproductive meetings. Limiting the team is good, but that by itself does not solve the problem. The more important step is to start the naming process by listing a set of criteria that your name should meet, and then go about picking names that match those requirements. In other words, simply follow the basic engineering cycle of requirements-design-implementation to come up with a name for your product. If you follow this process, you could even open up the discussion to a wider audience in your organization and get good suggestions. After all, everyone is qualified to suggest a name.

There are quite a few places on the web that discuss useful requirements for names, such as this Mashable piece: http://mashable.com/2010/05/28/naming-startup/. Most of them focus on web-based start-ups, but you can follow the same principles for non-web products as well. I have a few other criteria of my own that don't usually get mentioned elsewhere.

  1. Language Independent: If you intend to market your product across the globe, keep non-English-speaking users in mind when picking a name. At your next trade show or business event, try interacting with some people who don't speak English (I assume you will have an interpreter). Describe your product to them and see how much effort they put into simply understanding and repeating your product's name. If you plan to establish an international sales channel, it definitely helps to keep your names simple. Names that are too long or complicated can be difficult for non-English speakers to comprehend and remember.
  2. Differentiation from Competitors' Products: If a competitor has a product named 'Apple', it might not be wise to go with something like 'Red Apple'. If your competitor's product dominates the market and you introduce a similar-sounding product, users might think your product came from the competitor's company; you might end up handing them credit, especially if they are already well established. Users who do figure out the difference might see you as a copy-cat playing catch-up with the market leader. If you are ambitious and want to be the leader, pick a name that is clearly different from the other products in the market.
  3. Verb-ability: This requirement does not get the attention it deserves (Exception). It is very important if your product will be used as a tool. Think of classic products such as Xerox, Photoshop and Google. How often do you use these words as verbs in your everyday conversations? Of course, none of these products started out with the intention of becoming a verb, and it takes a lot more than a marketing campaign for a product name to enter the everyday vernacular. But planning for it ahead of time might be beneficial.

Of course, coming up with a list of criteria is a lot easier than actually thinking up a name. Good luck with your name search!

[I am referring to technical products here. I can only imagine the naming headaches in the consumer goods sector.]



Cloud Computing, Mobile Computing and Multi-core Processors: A Golden Braid for the Modern Internet

For quite some time, I have been struggling to come to grips with the whole notion of the Cloud Computing industry. I had trouble figuring out why anyone would want to use the Cloud. My reasoning was this: pretty soon, everyone would have a machine with an 8-core processor running at a clock speed of a few GHz, many gigabytes of RAM and terabytes of storage on the hard disk. With so much computing power available so cheaply, why would someone turn over all their computing tasks to a remote, unreliable Cloud? Why torture all that computing power by squeezing everything through a bottleneck of around 1 Mbps? In most cases, people would be lucky to even get a 1 Mbps Internet connection.
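The arithmetic behind that bottleneck complaint is easy to check. A quick back-of-the-envelope calculation in Python:

    def transfer_time_hours(size_gigabytes, link_mbps):
        """How long a file takes to move over the link (8 bits per byte)."""
        size_megabits = size_gigabytes * 1024 * 8
        return size_megabits / link_mbps / 3600

    # Moving 1 GB over a 1 Mbps link (the 'lucky' case above):
    print(f"{transfer_time_hours(1, 1):.1f} hours")   # ~2.3 hours

More than two hours to move a single gigabyte: hardly a pipe you would want between your data and your processor.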

The second, related issue that confounded me was the ever-increasing number of cores on processors. For the average PC user, what is the benefit of multi-core processors? At home, I use a 2004 Dell Inspiron 600m with an Intel Pentium 1.6 GHz processor and 512 MB of RAM. At work, I use a 2008 Dell desktop with a Core Duo processor. And guess what? I barely notice the difference between the two machines. Granted, I no longer do any compute-intensive tasks on my home laptop, but for all other tasks, such as word processing, spreadsheets, Web, video etc., there is no noticeable difference. Even Don Knuth expressed his unhappiness with multi-core architectures. And if the Don himself had doubts, what could I say? I reasoned that multi-cores were just the next marketing fad for Intel. Back in the 1990s, the microprocessor clock speed was the most important feature of a computer: clock speeds kept climbing as Moore's Law relentlessly marched forward. By the early 2000s, clock speeds ran into the power ceiling and the focus changed to low-power computing. With CMOS technology rapidly reaching the physical limits of miniaturization, multi-cores seemed to be the best way forward. Never mind that there were hardly any software applications that could take advantage of these cores; the chip vendors kept making them and marketing them. It looked as though they did this for one simple reason: because they could. The number of cores then took over as the USP. And now there is plenty of talk that "cores are the new transistors".

Fast forward to 2011, and the tech world seems to be forming a different picture, one that ignores the traditional big players (Microsoft, Intel) and includes many new ones. The problem with my analysis was that I had turned a blind eye to the oncoming mobile revolution.

My questions were simply the wrong ones to ask. The question is not why people would waste their computing resources; it is whether people even want that computing power. It might be true that all that computing power is available cheaply, but people seem to be increasingly picking mobility over power. The ability to access information from anywhere at any time has become more important than computing power. This is the basic explanation for the rise of the tablet/mobile revolution. It turns out that the average PC user does not even need a PC, let alone a high-performance, multi-core machine. All the average user needs is a tablet or a smartphone to surf the web, read tidbits of information, watch videos and interact with friends and family on social media. Currently the most important feature of a computing device is connectivity. Is it 3G or 4G enabled? Does it have WiFi? Does it have Bluetooth? No one seems to care much about which processor is in the mobile device or how much memory and storage it contains. If the device has good connectivity, the Cloud offers unlimited storage.

Look around your workplace and ask yourself how many of the employees' PCs could easily be replaced by a less powerful but more connected device. My guess is that apart from the engineering team, most people could make the switch; it is only the engineers, running complex applications and tools, who require high-performance machines. For everyone else, the computing can easily be outsourced to the cloud.

Now, where do multi-cores fit in this picture? Multi-cores are going to rule the data centers that power the Cloud. One of the big revelations at the 2011 CES was the seeming death of Wintel and the rise of Armdroid. However, Armdroid is only going to power the consumer's handheld device. What is going to power the other end of the connection? For that, we need to look at the developments in the multi-core industry. Not surprisingly, many multi-core companies have been bought by the big players. The surprising element here is that multi-core technology might not end up being a disruptive technology; it might simply help reduce costs and improve performance at data centers. The hard problems of parallel computing might, after all, not have to be solved to take advantage of multi-core processors; the technology might serve very well to simply shrink the size of the data center. Instead of worrying about solving the parallel programming problem, data center architects could simply apply the principles of Queuing Theory and Load Balancing to take full advantage of multi-core processors.
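As a rough illustration of that last point, here is a toy utilization calculation in the spirit of an M/M/c queue. The request rates are made up; a real capacity plan would use a full queuing model:

    def utilization(arrival_rate, service_rate, cores):
        """Offered load spread across identical cores
        (M/M/c notation: rho = lambda / (c * mu))."""
        return arrival_rate / (cores * service_rate)

    # Hypothetical data center load: 1200 requests/s, 100 requests/s per core.
    for c in (8, 16, 32):
        print(c, "cores -> utilization", utilization(1200, 100, c))
    # 8 cores are overloaded (1.5), 16 run hot (0.75), 32 have headroom (0.375).

No parallel-programming breakthroughs required: just enough independent requests to keep every core busy.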

The Cloud, it turns out, is a necessity for Mobile Computing. And, Multi-core processors will form the backbone of the Cloud.


Keeping Tabs on your Competition: Lessons from the Google-Bing Copying Episode

Google recently accused Bing of copying its search results. Plenty of comments have since poured forth supporting both sides of the debate. Irrespective of which of the two was right, the rest of us, especially those involved in product development or product management, can learn a couple of things from this episode.

Never Relax at the Top: Google is the clear leader in search technology and has a commanding lead over Bing. Nevertheless, this episode shows that Google takes its competitors very seriously. It appears that an entire team was involved in the project to track and analyze Bing's results; Google clearly spent quite a bit of its resources trying to show that Bing was 'copying' its results. If you are in product development, what have you done recently to track your competitors' technology? Are you confident that your technology will maintain its edge over your rivals' products? In the fast-moving technology industry, it helps to be paranoid.

Don't Hesitate to Learn from the Leader: I do not buy Google's argument that Bing has been copying its results. At best, one might argue that Bing was not clear about the way it was gathering data. In fact, for me the important observation from this episode is the way Bing has been using Google to improve its own technology. Bing's engineers seem to have calculated that they can improve their product by using data on how users interact with their competitor's product. This kind of thinking needs to be encouraged; after all, innovation does not happen in a vacuum, and many of the best products are built by improving older technology. What has your engineering team learnt from your competitors' technology?
