Insurance is an extraordinarily useful tool to manage risk. When it works as intended, it provides financial protection to individuals and firms who pay insurers a relatively small premium to protect themselves against a large loss. But insurance is broadly misunderstood by consumers, insurance executives and regulators.

Many consumers do not voluntarily buy coverage against serious potential losses. Case in point: fewer than half the residents in flood and hurricane-prone areas were insured against water damage from Hurricane Katrina and Hurricane Sandy. And a significant fraction of the population does not have health insurance today, despite the large premium subsidies currently offered in the form of Medicare and Medicaid and tax breaks for employment-based health insurance. A principal reason for this is that many people tend to view insurance as an investment rather than a protective measure. If after several years one doesn’t make a claim, there is a feeling that one’s premium has been wasted.

Insurance firms also behave strangely. After they suffer a severe loss, they may decide that a risk is completely uninsurable rather than determining whether they should increase their premium. For example, prior to 9/11 insurers did not price terrorism risk when providing coverage against damage to commercial property. After 9/11 most carriers refused to offer terrorism insurance because they feared catastrophic losses from future attacks.

State regulators often constrain insurance premiums because they are concerned that insurance will not be “affordable,” especially to those who are at higher risk. In Florida, the state set up its own insurance company, Citizens, that offers highly subsidized premiums to residents in hurricane-prone areas. Private insurers could not compete against these prices and Citizens became the largest insurer of homeowners’ coverage in the state. All taxpayers in Florida will be required to help pay for Citizens’ losses, should the state be hit by a devastating hurricane.

Similarly, the ACA health reform legislation requires sellers of individual and small group insurance to sell coverage to all comers at premiums that reflect only age and local prices for health services, not the buyer’s medical risk. These policies assist those in the high-risk category but impose additional costs on lower risks in the form of higher premiums.

Why do consumers, insurance firms and regulators behave as they do?

There is a tendency for those at risk to assume that disaster losses or major health related expenses will not happen to them. Given this view they feel no need to purchase insurance protection. Only after suffering a loss will consumers voluntarily buy insurance. After a disaster, insurers may decide to restrict coverage, and state regulators are likely to prevent private insurers from charging premiums that reflect the actual risk.

Behavior of this kind defeats the three principal purposes of insurance: to provide information via premiums as to how serious your risk is; to provide motivation for undertaking financial protection against an event that could produce a significant loss but has a low probability of occurrence; and to offer incentives in the form of premium reductions to reward people who invest in risk-reducing measures.

Incentives, rules, and institutions that encourage a constructive role for insurance will ultimately improve individual and social welfare. Several recent pieces of legislation have set the tone for appropriately dealing with risk.

In light of the private insurance industry’s refusal to provide sufficient amounts of terrorism coverage following 9/11, Congress passed the Terrorism Risk Insurance Act (TRIA) in 2002. It provided taxpayer-backed protection to insurers against catastrophic losses from future terrorist attacks if they agreed to make coverage widely available. As a result, businesses are now able to purchase reasonably priced terrorism coverage. To date there has been no need to call on taxpayers to fund the guarantee. TRIA is up for renewal in 2014, and there is an opportunity to re-examine the appropriate roles of the private sector and the Federal government in providing coverage.

The Biggert-Waters Act, passed in July 2012, enacted major reforms to the National Flood Insurance Program (NFIP) over the next five years. Future premiums will reflect risk (tied both to specific location and expected climate change) so individuals are aware of the hazard they face. They can also be rewarded with lower insurance rates if they undertake protective measures. FEMA is in the process of developing more accurate flood maps to set these rates. The Act authorizes $400 million per year for this purpose over fiscal years 2013–2017.

The Affordable Care Act (ACA) requires insurers to offer insurance to all residents in the United States who do not currently have coverage through either their job or a public plan. It also levies a tax penalty on those who choose to be uninsured. To deal with the affordability issue, premiums are to be subsidized for some low- and middle-income households. However, with the exception of offering premium discounts for those who engage in a limited set of less risky behaviors (such as not smoking), premiums after 2014 no longer reflect individual medical risk factors. There is thus some concern that the penalties specified by the ACA may not be enough to encourage low-risk individuals to buy insurance because of the high premiums they will have to pay.

What can be done to make insurance a better policy tool and to avoid adverse side effects of the well-intentioned programs already in place?

One way to convince people of the long-term benefits of insurance is to stretch the time horizon over which the event can occur. Studies have shown that people are much more likely to buy insurance or invest in protective measures if an event such as a hurricane that has a 1 in 100 chance of occurring next year is presented as having a greater than 1 in 5 chance of happening at least once in the next 25 years. And if the disaster does not happen – well, the truth is that the best return on an insurance policy is no return at all. One should celebrate not having a major loss!
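The arithmetic behind that reframing is straightforward: for an event with a 1 in 100 chance each year, the probability of at least one occurrence over 25 years (assuming independent years) is 1 − 0.99^25, a little over 22%. A quick sketch:

```python
# A 1-in-100 annual chance, compounded over a 25-year horizon
# (assuming each year is independent), becomes a better-than-1-in-5
# chance of at least one occurrence.
annual_p = 0.01
years = 25

p_at_least_once = 1 - (1 - annual_p) ** years
print(f"{p_at_least_once:.3f}")  # ≈ 0.222, i.e. better than 1 in 5
```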

Insurers should construct worst-case scenarios for rare events. They can then determine a premium that reflects their best estimate of their expected future risks, factoring in the uncertainty of the event’s happening. Insurers could also consider offering multi-year policies if state regulators allow them to price coverage that reflects risk over that period. A multi-year insurance policy with risk-based premiums coupled with a multi-year home-improvement loan to pay for risk-reducing measures may enable policyholders to reduce their overall costs.

State insurance regulators should be appointed rather than elected so they are less prone to being influenced by special interest groups and lobbyists. Regulatory decisions should make transparent who stands to benefit from a subsidized insurance program, and who will be paying part of that cost to protect others. State insurance programs, such as Citizens in Florida, should indicate to all residents in the state that property insurance on homes near the ocean (including second homes) is likely to be highly subsidized, and those living elsewhere may bear the expenses of the clean-up following the next severe hurricane.

These concepts, if followed, will increase the chances that insurance is better understood so it can fulfill the roles it is designed to play: reducing future losses and financially protecting those at risk.

Little Firms, Big Mistakes: How Should Small Employers Respond to Health Reform?
January 7, 2013
By Jonathan Kolstad, Mark Pauly and Robert Town, Professors of Healthcare Management at the Wharton School of the University of Pennsylvania

When owners of small businesses are celebrating the holidays next year, their cheer, they are being told, should be tempered by the thought that they will have to cope with the employer requirements for health insurance that take effect on the first day of January 2014. Mercer, a major benefits consulting firm, is telling them that extending coverage will be a significant new expense, and McDonald’s Corporation’s CFO predicts that even increasing menu prices will not be enough of a cushion. A recent New York Times article, offering a collection of anecdotes and opinions, makes a gloomy prediction: expect to take a big hit to net income, cut back on hiring full-time workers, or both. But some serious economic theory and empirical evidence suggest that this set of options is much too bleak. Economists, of all policy advisors, are the least likely to offer magic. They do, however, point to an alternative that is likely to happen, whether employers plan on it or not.

The ignored alternative possibility, good news for employers but less so for workers, is that the great bulk of the cost of newly offered coverage will come, not out of profits or hiring, but out of worker cash wages. That is what happened in Massachusetts when “RomneyCare” was implemented. While there was little impact on the overall labor market, there was a striking change for those workers who gained new insurance: they saw wage reductions (relative to the trend) of almost precisely the cost of health insurance to their employers. Adding further evidence for the power of the employer side of the labor market to adjust in the face of an individual as well as an employer mandate, the number of employers offering health insurance actually increased following reform.

While many employers and their advocates think they pay for health benefits out of profits, economic theory suggests something different and more consistent with what happened in Massachusetts. Consider a hypothetical small employer that is considering whether to offer its currently uninsured workers, who make (say) $36,000 a year, insurance costing approximately $4,000. Straightforward economic logic implies that, even if the employer is forced to write the $4,000 check for insurance (rather than the worker), the money will still (after a transition period) come out of workers’ pay as long as it does not force money wages below the minimum wage level.

Here’s why: suppose employment of a given number of workers is initially stable at a wage of $36,000. Bumping up the cost of labor by adding an additional $4,000 charge will cause employers to cut back on jobs, but any resulting unemployment will put downward pressure on wages. Even if the employees of firms not currently offering coverage don’t value everything they receive, they presumably would attach some value to the insurance they will be getting. So they would be willing to keep working for a somewhat lower wage for better benefits.

How far will wages fall? Unless workers pull out of this part of the labor force in large numbers, wages will fall just enough to get employers to hire back those laid off, at which point the downward pressure on wages will cease—which means that total compensation paid per employee goes back to $36,000, as wages fall by $4,000 to offset the cost of the health insurance benefit.
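The full-offset case described above can be written out with the article’s own hypothetical numbers (this is a sketch of the equilibrium logic, not a prediction for any particular firm):

```python
# Full-offset case: total compensation returns to its pre-mandate
# level, with cash wages absorbing the entire cost of the newly
# mandated benefit.
initial_wage = 36_000    # cash wage before the mandate
insurance_cost = 4_000   # employer's cost of the health benefit

new_wage = initial_wage - insurance_cost        # cash wage after adjustment
total_compensation = new_wage + insurance_cost  # cash wage + benefit

print(new_wage)            # 32000 in cash
print(total_compensation)  # 36000, unchanged from before the mandate
```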

When the dust clears, employers will be making the same profits as they did before and workers will still have the same kinds of jobs as they did before—but jobs that pay less in cash and more in benefits. So, even though an employer would correctly estimate that his business would suffer if he alone were forced to pay for benefits, the fact that his labor market competitors are being put under the same obligation means that the labor market overall will readjust in a way that is much less threatening. Of course, in real life it will take some time for reductions in money wages to phase in, as raises are curtailed or new workers are offered less attractive terms.

What if some workers withdraw from the labor market because they feel this new package is too frugal? At one extreme, if the value of insurance were so great, or fear of the individual mandate combined with social disapprobation so strong, that workers placed a value of $4,000 or more on the insurance, there would be no reason for anyone to stop offering to work. If the benefit were regarded as worthless, the supply response would be the same as if wages were cut by $4,000 and no benefit were offered. Or, as is likely, it could end up somewhere in between. (The Massachusetts experience suggests that the new benefit was largely worth its cost to workers.)

Now for some complications and qualifications. The ACA law only imposes penalties–up to $2,000 per worker–for firms with 50 or more full time workers. So some smallish employers are thinking about legally avoiding the penalty by replacing full timers with part timers until they get under a 50-worker head count. (A recent Forbes article advised them to do this.) If they had not wanted to use part time workers before, it must be because such workers are less productive per hour or per dollar paid than full time workers–so it makes sense to pursue this gambit only if the loss in per-worker productivity from switching to part timers is less than the lower of the $2,000 penalty or the net increase in cost for full time workers–which we have argued is likely to be much less than the cost of the employer insurance contribution. Either way, torquing around the mix of full timers and part timers to try to avoid penalties is likely to be foolish; better to either pay for the insurance (and cut wages) or pay the penalty (and keep wages up), unless you are very close to the 50-worker threshold to begin with.

The response of worker wages and employment to the employer and individual mandates in ObamaCare does represent a kind of informal referendum on the value to workers of the benefits they are being provided; wage reductions by the cost of the benefit, and stable employment, means that benefits are worth their cost to workers; smaller wage reductions and withdrawal of workers from the labor force in response mean that the benefits are not worth it to workers. And policy advocates might not be happy that what is good news (or, at least, better news) for employers and workers who want full time jobs will cost other workers some of their pay in return for new benefits. But even in these cases, generally paying for benefits out of wages is going to be a smarter strategy for employers, and better news for workers as a whole (including those who would have lost full time jobs), than the sad tidings currently on offer from the conventional wisdom.

Thanks in part to Brad Pitt and Jonah Hill, 2011 was a banner year for Analytics. Their performances in the movie Moneyball brought widespread attention to the idea that data, logic and a willingness to shake up the status quo can be instrumental in finding hidden value. Michael Lewis’s book about Billy Beane, Paul DePodesta and the Oakland A’s may have been first on the scene, but the movie made a broad audience sit up and pay attention.

The Moneyball story was so compelling because of the relationship between Beane and DePodesta. Here you have a visionary, if a bit erratic, leader willing to partner with a quantitatively focused team member to deliver break-through results. So to all the Billy Beanes out there, how can you effectively work with the Paul DePodestas in your organization? Once you have found them, here are eight questions you can ask to jointly uncover breakthrough value.

1. WHAT IS YOUR NAME?

It may seem silly, but many companies choose to bury their analytics department in the basement along with all the other mechanical systems. But a successful analytics team can be much more than just the folks who count the website visits and provide the monthly sales reports. The best analytics groups serve as key advisors to senior staff, understanding the strategic issues at the top of the company and then digging into the data to find critical facts that can drive better business decisions. More and more companies are waking up to the importance of the analytics function and recognizing its leaders with titles like “Chief Analytics Officer” or “Data Scientist.” Some forward-thinking companies have put analytics under the direction of the CFO, reflecting their belief that understanding and predicting customer behavior is as critical to the future of their business as keeping track of revenues and costs, and requires the commitment to objectivity (and ROI calculations) that Finance brings to the table.

2. ARE YOU COMFORTABLE SHOWING ME THE FACTS, EVEN WHEN IT ISN’T WHAT I WANT TO HEAR?

While a great analytics director will partner with you to follow up on your hunches and hypotheses, he or she should not be a “yes man.” Be cautious of analysts who always seem to spin the data to support your pet projects. The best analytics professionals focus on ferreting out the facts and can grow to be a trusted, objective voice in the C-suite. You want an objective strategic partner who can balance optimism over future prospects with a conservatism based on context and data.

3. WHAT ARE THE KEY CUSTOMER TOUCH POINTS IN OUR PURCHASE OR SERVICE PROCESS AND HOW DO WE MEASURE THEM?

In recent decades, there has been an explosion in our ability to observe customers as they interact with a company and as they progress through the sales process. The list of available measurement technologies gets longer every year and can become overwhelming (and expensive). Many companies find themselves with a variety of platform-specific data sets that are difficult to bring together, leaving the analytics team fighting with the data instead of creating insights.

To avoid this data overload, smart companies begin by mapping out their customer process and identifying “key customer touch points”. Beginning from these touch points, your director of analytics can develop a strategy to drive your investments in measurement technologies. For example, an online service provider that is having trouble growing its subscriber base might focus on instrumenting its marketing platforms to figure out what types of marketing are attracting the best customers. If that same company were having problems retaining subscribers, it might focus on measuring how current subscribers use the service in hopes of improving the offering. When your analytics team has the discipline to organize data and reports around key customer touch points (rather than technology platforms) you will develop a holistic, single view of your customers and how they interact with you in a multi-platform world. After all, a purchase is a purchase whether it is made using cash in a store or using mobile payments through an app, and a good analyst will cut through the technology issues to help you see that.

Whether your business is media, financial services, retailing, B2B, or even non-profit, your director of analytics should be able to rattle off your key customer touch points, tell you the current strategies for measuring those touch points and let you know which touch points would most benefit from improvements in measurement.

4. HOW DOES YOUR ANALYSIS RELATE TO MY FINANCIAL GOALS?

Even if you have a set of key customer touch points, sifting through the sea of available data can still be overwhelming. It is important that you and your director of analytics stay focused on what you need to know to make decisions, not just what is interesting. One way to do this is to connect analytics reporting directly to financial goals. When your director of analytics focuses reports around critical business objectives such as “Is this service profitable?” and “Are we pricing it right?”, you will have the information you need to make decisions. “Nice to know” is simply not enough, and often costly.

5. WHAT STEPS ARE WE TAKING TO PROTECT OUR CUSTOMERS’ PRIVACY?

While data is an amazing boon to companies, allowing them to understand their customers better than ever, it also represents new obligations. Customers entrust their personal information to you and violating that trust can lose you customers and cause a number of other legal and regulatory problems. CMOs should encourage their analytics teams to work closely with their company’s privacy officers to help ensure that the analytics team understands the company’s data collection and use policies, as well as applicable laws and regulations. With proper executive encouragement, analytics teams can become active participants in a company’s data governance process.

6. ARE THERE WAYS WE CAN CHANGE HOW WE INTERACT WITH OUR CUSTOMERS THAT WILL PROVIDE ME ADDITIONAL (USEFUL) INFORMATION?

One of the persistent challenges that companies face is understanding how customers interact with their products and services over time. Today’s analyst should help you find ways to encourage customers to make themselves known using a variety of creative approaches that both improve the customer experience and make it easier for the company to measure customer activity and preference over time. For example, loyalty card programs have been very effective at getting customers to identify themselves every time they make a purchase, which, ultimately, allows retailers to accurately measure the effects of direct marketing. More and more, websites are providing reasons for customers to log in every time they visit – Amazon.com provides personalized product recommendations and ESPN.com provides fantasy sports. These changes are the ultimate win-win-win, providing better products and services to the customer and more accurate data so the company can make further product improvements.

7. HOW SURE ARE WE ABOUT THAT?

We live in an uncertain world and analytics is no different. Drawing business conclusions from data depends critically on having enough data. For example, if the response rates for two e-mail campaigns were 10% and 0%, but you only sent the e-mails to ten customers each, then you really shouldn’t bet on the first campaign being better. (It’s never a good idea to draw conclusions based on just one customer.) A great analyst will find ways to let you know when the data is speaking loudly and when it is just whispering.
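One standard way an analyst might make “how sure are we?” concrete for the ten-customer example above is a 95% confidence interval for each campaign’s response rate. A sketch using the Wilson score interval and only the standard library (the function name is my own):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion -- wide when n is small."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# Campaign A: 1 response out of 10; Campaign B: 0 out of 10.
lo_a, hi_a = wilson_interval(1, 10)
lo_b, hi_b = wilson_interval(0, 10)
print(f"A: {lo_a:.2f} to {hi_a:.2f}")  # a very wide interval
print(f"B: {lo_b:.2f} to {hi_b:.2f}")  # overlaps A's almost entirely
```

With samples this small, the two intervals overlap heavily, so the data cannot distinguish the campaigns: the data is whispering, not speaking loudly.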

8. WHAT QUESTIONS SHOULD WE BE ASKING?

Everyone who touches data at your company should have a sense of why. What is the benefit to revenue or to costs? CMOs should engage their analytics team in a conversation centered on “Are we asking the right questions?” and “What questions should we be asking?” The answers should inform what you are measuring, what you are reporting on, which ad hoc tasks your analysts spend their scarce time researching and where you should focus your big data and predictive analytics resources.

IN CONCLUSION

The Oakland A’s did not win the World Series based on their Moneyball tactics, but they did uncover hidden value and they influenced the way baseball teams and companies look at analytics. The most lasting result of the Moneyball experiment could be that the visionary Beanes and the analytical DePodestas of the world should spend more time together, sharing perspectives on how to win more often.

Everybody has been buzzing about Nik Wallenda and his daredevil crossing over Niagara Falls this week. Elsewhere in the world of sports, the USA gymnastics trials for the Olympics will start later this month, and there’s nary a whisper about it. This difference in public interest/attention makes me think about the difference between the fields of economics and marketing: economics is like a high-wire act, and marketing is equivalent to gymnastics. Let me explain.

Economists are an interesting breed. They have their heads in the clouds, dreaming of a magical world where people are rational, markets operate efficiently, and all kinds of other strict rules apply about what’s right and wrong. In their vivid imaginations, they see upward-sloping supply curves, downward-sloping demand curves, and all the implications that arise from these and other assumptions. They’re not particularly troubled by facts, often because the actual “rubber meets the road” data that would support or refute such assumptions are hard to find – especially when they’re too busy balancing their delicate curves in a manner that often seems to defy the laws of physics.

Occasionally, economists encounter phenomena that don’t seem to comply with their balancing act – much like a gust of wind endangers the tightrope walker. That’s when their real skills start to show. They conjure up new assumptions and clever twists on old ones to prove that seemingly irrational phenomena are, after all, quite rational. Balance is restored once again. Well-written books like “Freakonomics” beautifully demonstrate this blend of art and science. It is wonderful to behold such mastery, and the skills required to stay safely perched up high are quite considerable.

In sharp contrast, marketers – like gymnasts – start and finish with their feet on the firm ground. The marketer’s desire is to understand the way the world really works – not how it should work under a shaky set of assumptions. Marketers and gymnasts also emphasize the need for balance, but in their case this word means something different than it does for the high-wire economists. Much of the balance for a marketer/gymnast arises from the need to be quite good at each of several different tasks. For the gymnast, this means achieving excellence across a variety of physical challenges (e.g., floor exercise, vault, parallel bars, etc.); for the marketer, it means mastery across a variety of academic disciplines (such as statistics, psychology, anthropology, sociology, and – yes – economics). Overspecializing in any one of these areas is a path to sure defeat; the key is to develop a broad set of skills to ensure that you can take on all the twists and turns required by the problem at hand, while always landing safely on two feet. A marketer wants to “stick the landing” by offering an implementable solution to a real problem – not an ivory-tower explanation that merely proves that the world is still rational.

Because marketing/gymnastics is so deeply rooted in reality, it is gritty work that doesn’t generate the same “oohs” and “aahs” as a tightrope performance. People look up to economists and down at marketers – just like they do when their athletic counterparts are performing. Economists get to rub elbows with presidents; marketers get to sell more deodorant. Nik Wallenda gets a prime-time special on national TV and marvelous movies have been made about other high-wire heroes such as Philippe Petit (“Man on Wire”); in sharp contrast, however, can you name a movie about a famous gymnast? Likewise, it’s easy for business leaders to name many economists, but how many can name even a single marketing professor? (I guess I should give up hope that they’ll make a movie from my new book, “Customer Centricity: Focus on the Right Customers for Strategic Advantage” http://wdp.wharton.upenn.edu/books/customer-centricity-essentials/).

OK, I admit it. While I’m proud to be a marketer, I do suffer from a bit of econ envy – as do many of my marketing brethren. It’s not that we feel intellectually inferior to our economist cousins; it’s just that we wish we could get some of the glory and attention that seem to come so naturally to them. We respect their work quite a bit (in part because our field derives some of its intellectual origins from theirs), but we hope in vain that they would share some of their limelight with us.

But when I stop being wistful about such matters, and get back to work, it’s all worthwhile. I genuinely enjoy the challenges of describing markets and consumers as they really are – not as they should be. I appreciate the opportunity to learn and combine different skills to solve complex real-world problems. I’m glad that companies come to me when they are looking for tangible recommendations and measurable solutions.

And while I know there will never be a Nobel Prize for Marketing, at least I can draw some consolation that gymnasts can compete for Olympic gold, unlike the folks who teeter on the high wire…

We do four things at Wharton Computing: have an impact on learning, support research, run the business of the school and keep our systems safe. These same tenets, undoubtedly with some industry-specific modifications, can be applied to any IT organization. You’ll notice that each of these areas is a fairly traditional support role that doesn’t bring in any revenue. However, it is possible to turn the internal tools running your business into paid products that bolster the bottom line.

At Wharton, we’ve successfully turned some of our internally developed tools into revenue generating products for the School. No doubt your IT shop has a number of tools created to address institutional needs that might make good paid products, but how do you make that transition?

There are four lessons we learned going through the sometimes rocky process of transforming an internally offered product into a viable commercial service. Our experience with Wharton Research Data Services (WRDS) illustrates each of these lessons, but first a little information about WRDS itself and how it came to be.

About 20 years ago it became apparent that our faculty members were spending a lot of time acquiring access to similar datasets for their research. A centralized data service made far more sense, which led to the creation of WRDS. It was a big hit internally, and soon faculty at other institutions wanted WRDS access. Keep in mind at this point we hadn’t even thought about making WRDS a commercial product. We were in the enviable position of having spontaneous commercial demand for an internal product. Thanks to WRDS’ flexible architecture we were able to quickly bring our first few academic clients online. Fast-forward to today and WRDS now connects over 30,000 individual academic, institutional and corporate users in 27 countries to data from over 40 vendors on a single platform.

WRDS, as a product, made sense from a business perspective because Wharton Computing excels at delivering data analysis tools and huge data sets to a large user base. This brings us to the first lesson: Stay close to your core competencies. It seems like a no-brainer (concentrate on what you do well), but this lesson is critical to success. If you can’t provide a rock-solid, high-caliber service to your own internal users, then don’t even think about trying to sell it externally; you will fail.

Speaking of failure, one of the potential perils of making an internal product available to external users is alienating your internal constituents. You never want to be in a position where one of your internal users says, “If I were paying for this product I’d get a better experience.” That’s why the second lesson is Support your own user base. Both external and internal users deserve a high level of support, but your internal users are integral to your product’s success. Think of them as a combination of sales reps who tell their colleagues about the wonderful tools they use, market researchers suggesting features and giving feedback, and beta testers who help you constantly improve and test your product. With WRDS we’ve created dedicated resources for internal Wharton users that allow us to apply any Wharton-specific tweaks to the product without impacting external clients.

The third lesson is a variation on the cliché “you have to spend money to make money.” Make investments both before and after you’re selling the service. You’ll quickly realize that selling an internal tool to an external audience requires significant investment upfront. You have to invest in staff with an entrepreneurial spirit before you can justify major expenses like additional support staff. If you invest in the right people early on they’ll be engaged in the product and want to see it succeed. Once you’ve had your first sale a couple of things will happen: people start to really believe in the product, and you have some positive cash flow which can be reinvested to make sure your product remains compelling and competitive in the marketplace.

The final lesson is perhaps the most important: Expand your skills. Having the right number of staff is key, but having a staff equipped with the right skills is invaluable. Not only do you need technical staff to run your product and work on new versions, but you’ll also need marketing and finance people and a dedicated support team. User support is a great example of the additional skills that are required when dealing with external users versus internal ones. Many internal products have “a guy” who knows every line of code and every nuance of the system and can solve any problem. That person answers all your support questions, but what happens when he or she goes on vacation or calls in sick? Your internal users are often willing to wait for that person to get back to them, but paying customers won’t be as forgiving. You’ll need a solid support team in place with a formal structure so both your users and your staff know what is expected.

Transforming an internal product or service into something people will pay for isn’t simple. But if you stick to your core competencies, support your internal users, invest in the product, and expand your team’s skills, you’ll find the rewards well worth the effort.

China’s Year of the Dragon
http://www.forbes.com/sites/wharton/2012/02/21/chinas-year-of-the-dragon/
Tue, 21 Feb 2012
Jason Wingard is the Vice Dean of the Wharton School’s Aresty Institute of Executive Education

When I visited Hong Kong in mid-January to attend a meeting of the Wharton Executive Board for Asia and host an alumni reception, feverish preparations were well underway for the Chinese New Year that began January 23rd on the western calendar. Each year in the 12-year cycle of the Chinese lunar calendar is represented by a different animal, and we are now in the Year of the Dragon, a particularly lucky and auspicious member of the Chinese zodiac. In Chinese folklore, the dragon is not an evil creature but rather a benevolent one who has been blessed by the gods and is capable of amazing feats.

The Year of the Dragon seems destined to be an inflection point in the dramatic history of China’s economic development. Yes, there is the familiar question of whether this is the year that China’s extended period of remarkable economic growth will finally be supplanted by a slower rate of expansion. But there is another transition underway as China’s position as the world’s low cost producer is steadily challenged by increases in wages and other costs.

China is responding in two ways: industrial migration and executive training. As costs rise in the coastal areas where industrial development has been centered, industry is shifting westward to less-developed and cheaper interior locations. Although China has vast expanses of undeveloped territory, this is still a temporary solution. The main thrust of Chinese industrial policy must inevitably involve moving up the value chain. This is precisely what other developing nations have done. Indeed, as production costs rose in Taiwan, South Korea, and Hong Kong, they shifted to producing more complex and costly goods.

Like these countries, China must move into designing, producing, and marketing its own higher value-added finished products in order to continue prospering. China has shown it has the entrepreneurial skills to discern opportunities and seize them. To climb the value-added ladder, however, Chinese executives also need to enhance their leadership skills. China has avidly pursued technology transfers from its joint venture partners, but it also needs leadership and management technology to up its game, and that requires education and training.

Wharton is one of the institutions helping China embrace that complexity. Wharton not only provides Executive Education in China; we have also offered global modular courses and entered into academic partnerships with Peking University and Tsinghua University. Our executive education business in China has doubled since the last fiscal year. These advanced, customized leadership programs will help provide one of the critical preconditions for China to achieve a broad-based transition to more value-added products and processes.

Designing, producing, and branding more sophisticated products require taste and creativity, marketing sizzle and naming flair, as well as organizational and managerial skills that go beyond what China has marshaled in many of its existing industries. Despite years of sending billions of dollars of exports into virtually every country around the world, there are few indigenous Chinese brand names that anyone can recall. There is no Samsung or LG, let alone an Apple or Ford. China produces a substantial portion of the world’s apparel, but the most valuable items carry the names of Western designers. Nonetheless, the unparalleled record of growth and development that China has registered over the past few decades suggests that it will absorb the lessons of leadership training. Top business schools like Wharton can help Chinese executives make a successful transition to the next phase of that country’s economic development.

In the wake of Amazon’s disappointing Q4 results, the Kindle Fire has ignited a veritable firestorm of debate.

Lackluster reviews and suspicions that the tablet device is being sold below cost have led analysts to anxiously eye the company’s dwindling cash reserves. But amidst the heated debates about functionality and pricing, one concern has received relatively little attention: should Amazon be competing in the tablet market in the first place?

From my perspective, the Kindle Fire represents a dispiriting move away from Amazon’s historical focus on customer centricity. In my Wharton Executive Education Essentials book, Customer Centricity: What It Is, What It Isn’t, and Why It Matters, I argue that a customer-centric strategy aligns a company’s development and delivery of its products and services around the needs of a select set of customers in order to maximize their long-term financial value to the firm.

This emphasis on a “select set” of customers is crucial. Customer-centric firms never talk about “the customer” – because there is no average customer. These firms recognize that there is a diverse ecosystem of consumers out there of all colors, shapes, sizes – and, most importantly, different lifetime values to the firm. Customer-centric firms celebrate the heterogeneity of their customer bases and focus their efforts on those subsets that are likely to provide the greatest bang for the buck over the long term.
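To make the idea of “different lifetime values to the firm” concrete, here is a minimal sketch using a standard simplified customer-lifetime-value (CLV) formula. The segment names, margins, retention rates, and discount rate below are hypothetical illustrations for this sketch, not figures from the article or from Amazon.

```python
# Simplified infinite-horizon CLV: m * r / (1 + d - r),
# where m = annual margin per customer, r = annual retention rate,
# d = annual discount rate. A common back-of-the-envelope form.

def clv(margin: float, retention: float, discount: float) -> float:
    """Expected discounted lifetime value of a customer in a segment."""
    return margin * retention / (1 + discount - retention)

# Hypothetical segments: loyal "serious readers" retain far better
# than casual, deal-driven buyers, so their lifetime value dominates.
segments = {
    "serious readers": clv(margin=120.0, retention=0.90, discount=0.10),
    "casual buyers": clv(margin=40.0, retention=0.60, discount=0.10),
}

for name, value in sorted(segments.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${value:,.2f}")
```

Even with a modest margin difference, the retention gap compounds: under these made-up numbers the loyal segment is worth more than ten times the casual one, which is the arithmetic behind focusing effort on the most valuable subset of customers.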

In many ways, Amazon has set the standard for customer-centric activities. The company maintains detailed customer-level data, which it uses to tailor its marketing communications and make customized product recommendations. When I ask my Wharton MBA students to name companies that are truly customer-centric, Amazon is always near the top of the list.

And for the original Kindle Reader, this “select set” of focal customers was clearly defined. Back in 2010, Jeff Bezos went on record saying that the Kindle was for “serious readers.” He elaborated by pointing out that “90% of households are not serious reading households.”

By focusing squarely on serious readers, the Kindle carved out a tremendously valuable market niche. Its simple interface and innovative screen technology provided a top-notch reading experience for those who still care to read books. It was a strategy focusing on creating delight for a particularly profitable customer segment. The many other “non-serious readers” who also bought it were just icing on the cake. I often pointed to this specific example as a great case study of genuine customer centricity in action.

Yet here we are today, watching Amazon dismantle this wonderful exemplar. It’s understandable that Amazon wanted to leverage the success of its Kindle to gain a toehold in the broader market. Understandable – but deeply misguided. By trying to make hay from the current tablet frenzy, Amazon has strayed from its customer-centric roots towards a more conventional product-centric mindset. The question they seem to be asking themselves now is: what can we make – and who can we sell it to?

The problem in this case isn’t a lack of demand for the product. Indeed, even as profits sagged and Amazon burned through its cash, the company sold an estimated 6 million Kindle Fires in the fourth quarter alone. So what’s wrong with this strategy?

First, it consumes scarce resources and valuable management attention. While Amazon executives are busy fixing glitches in the Kindle Fire, they could have been focusing on how to acquire profitable new “serious readers,” retain the ones they already have within the Kindle franchise, and use the Kindle platform to extract the maximum value from existing customers through cross-selling, upselling, and other customer development activities.

Second, by branding the Fire under the Kindle umbrella, Amazon risks confusing and alienating its focal customers. Now that the premier product in the Kindle line no longer offers the unique reading experience associated with the original e-reader, the entire value proposition of the Kindle franchise isn’t so clear any more. Amazon should have used a distinctly different name for the Fire so that serious readers would still proudly use their Kindles, knowing they were still held in special regard by the firm.

So what can Amazon do to right this mess? The script seems to dictate that sooner or later the Kindle Fire will be yanked from the market once it proves to be too much of a drag on Amazon’s earnings and resources. At that point, Jeff Bezos should focus on developing an enhanced version of the original Kindle and reassure its most valuable customers that Amazon is continuing to develop new devices and services with them in mind. In other words, Amazon should scrap the Fire and hold on to the glowing embers: that focal core of deeply profitable customers who represent the firm’s ongoing source of competitive advantage.

Customer centricity is a winning strategy. Introducing a “me too” product into an already crowded marketplace is not.

At Wharton, creating and maintaining a 21st century learning environment is one of our top priorities. When it was decided to move the Wharton | San Francisco campus to a larger space, we saw an opportunity to install cutting-edge technologies to benefit the executive MBA students and executive education participants in San Francisco.

The biggest change is that the new facility in San Francisco will feature all digital high-definition classrooms designed to support Wharton’s commitment to connected and lifelong learning. Whether it’s a class, a speaker series, or a networking event, the classrooms will be production ready for streaming or broadcasting in HD to students, alumni and the world.

One benefit of HD is that students will have a more immersive experience – inside the classroom and beyond. Classroom materials, videoconferencing, and homework assignments will all benefit. HD provides significantly improved quality and a sense of connectivity, which is important for a school such as Wharton, where it’s not uncommon for high-level guest speakers to give talks from around the globe.

Another innovation is the design of the group study rooms. As team projects are an important part of our curriculum, a key focus has been designing the spaces so that they foster teamwork and collaboration. Our group study rooms will enable teams to work with each other down the hall, in Philadelphia, or around the world, using videoconferencing and innovative controls to create shared digital workspaces.

Whether they are in the classroom, a group study room, or back at home, the executive MBA students will be able to make use of their Wharton-provided iPads. Continuing our iPad pilot program that began in May 2010, an Executive MBA Technology Advisory Group continues to study how students consume content, create content, and collaborate, which in turn determines how the School can continue to enhance their educational experiences with these devices.

Faculty and guest speakers will find touchscreen technology in all of the classroom podiums, allowing them more interaction with their materials. Not only can they project content, they also can interact with it and capture it via annotation.

Knowing how rapidly technology changes, we are collaborating with Shen Milsom & Wilke and Crestron to future-proof the new rooms, allowing for seamless adoption of new technologies when they become available.

Construction began in June on Wharton | San Francisco’s new home — the top floor of the Hills Plaza building on the Embarcadero — and we’ll soon start testing and refining the new technologies. We are fortunate to have two campuses, as many of the innovations in our San Francisco facility will be brought back to Wharton’s Philadelphia campus.

Our goal is to ensure that future business leaders have the most innovative, cutting-edge, and connected facilities possible, whether they are in San Francisco or Philadelphia. As one of the top business schools in the world, we continue to create and refine a dynamic, future-proof 21st century learning environment.

A New Kind of Distance Learning
http://www.forbes.com/sites/wharton/2011/09/14/a-new-kind-of-distance-learning/
Wed, 14 Sep 2011
Don Huesman is the Managing Director of the Innovation Group at the Wharton School

In a famous story about languages, researchers studying the Greenland Inuit documented their use of 15 different terms for snow. By way of comparison, the Navajo have two – one for falling snow, and a second for everything else. In the use of technology in education we have a growing set of terms that appear on the surface to represent the same thing – distance learning, elearning, and blended learning, for example. In the spirit of Greenland’s Inuit, I’d like to suggest these terms do have different shades of meaning, and I’d like to join those calling for adding another term to the list: connected learning.

The term is a play on words, of course, and an example of the old marketing trick of avoiding the baggage of one phrase by inventing another. Distance learning suggests for some a detached, lighter, and less serious version of learning. I personally believe that distance learning is a noble cause, providing access to college to those who otherwise find themselves left outside the ivy-covered gates: imagine displaced autoworkers in Detroit or single mothers in Oakland completing coursework after putting the kids to bed. In some instances the barrier to access is not distance, but capacity.

At Miami Dade College, more than 30,000 students cannot take courses required for their programs because there are not enough seats available in classrooms. With recorded lectures and interactive courseware, classrooms can be reserved for personal interaction and exercises, and thus scheduled more efficiently. Enabling greater access to higher education and increasing capacity are critical objectives in the current economic crisis.

But access and capacity are not the goals driving what I am referring to here as connected learning. The technology experiments we are conducting at Wharton have a very different objective, and as a result, a different design for technology-enabled learning environments. Our objective is to use technology to take advantage of the fact that distance between learners sometimes provides opportunities for learning that would never work as well in a traditional classroom. Connected learning seeks to explore alternative venues while maintaining the immediacy and intimacy of the traditional academic community. When successful, the result is a higher quality experience and stronger learning outcomes than would be available through traditional means.

This summer, for example, we had students working summer jobs and internships in companies all over the world: in workplaces at Google, Microsoft, Cisco, and Apple, at manufacturers in Australia, China, and Japan, at financial service firms on Wall Street, and at tech firms in Israel. Each Tuesday evening, 15 of these students joined a web conference with Tom Lee, a faculty member conducting class from Wharton | San Francisco.

The class focuses on the design and development of new web-based products and services. The differing perspectives provided by students in various summer work and internship sites represent a rich resource for discussion and team-based project coursework. This connected classroom now has access through internships to a global laboratory, with diverse settings for testing and experimentation, and for exposing academic models and textbook analysis to the informed and immediate critique of current practitioners. The distance bridged by connected learning is not just geographical, but can include the divide between practice and the academy.

At its root, this is not really new. Practicums, internships, and field studies have a long tradition in universities. John Dewey outlined the importance and processes of reflection in practice over 100 years ago. What is different is that our technology now allows us to transcend the barriers of distance and time much more effectively than was possible before. Incremental quantitative advances in bandwidth, video frames per second, cloud-based storage and processing, and other technologies have reached a tipping point, with qualitatively new opportunities for learning now emerging. Connected learning is the effort to explore the added value of distance in our programs of study, with our students acting as agents in wider communities beyond the campus’s ivy-covered gates.