Privacy
David M. Raab
Relationship Marketing Report
August 2000

Privacy concerns have grown from a distant cloud on the bright horizon of relationship marketing to a looming thunderhead that may burst any second. The immediate catalyst has been the Internet, where consumers are acutely aware that their every movement can be tracked, recorded and analyzed. But database marketers have collected reams of personal data for years, a practice that generated fewer complaints only because consumers were less familiar with the details. Now that the Internet has made privacy a public issue, all kinds of data gathering are being examined.

Many marketers are genuinely perplexed by consumers’ concern. While marketers are often willing to play on public distrust of government snooping when it serves their purposes, they see little potential for abuse of data held in private hands. After all, a private business can’t put you in jail or make you pay taxes. The commercial reason for gathering personal information is benign: to provide advertisements and services that are tailored to individual interests. So long as consumers have the option to reject any offers they receive, what’s the harm? As these marketers see it, the whole issue has been manufactured by a handful of privacy nuts who are either genuinely paranoid or cynically use privacy to further their own political agendas. Proponents of this view point to opinion surveys, low use of existing opt-out services, and consumers’ willingness to trade data for slight compensation as evidence that the real value people place on data privacy is quite low.

But such arguments miss the point. True, most people don’t care about trivial annoyances, such as advertising, and therefore won’t pay even a low price in effort or cash to avoid them. But personal data gathered by businesses can be used for decisions with considerably greater impact than ad messages. Remember, the fundamental goal of relationship management is to tailor all interactions to build the optimal relationship with each customer. This goes beyond advertising to include which products are offered, how they are priced, and what level of service is provided.

This sort of data-driven tailoring is usually described as treating customers differently, but a less polite way to put it is that some will be treated better than others. Think about the airlines: your status with the company determines everything from how quickly they answer the phone to take your reservation, to how long you wait to check in, to how soon you can board, to when your luggage comes off, not to mention your legroom, food and drinks during the flight itself. Interestingly, this particular set of privileges seems to attract little hostility from the have-nots, perhaps because they still perceive air travel as a luxury. Compare this to the anger that banks generate when they charge additional fees or limit the services available to low-balance customers. This type of discrimination is perceived as hugely unfair, presumably because banking is considered an economic necessity, or perhaps because it seems to target lower income customers.

Although the superior treatment that airlines, banks and others give their best customers raises some issues of fairness, it is not primarily a privacy issue because the distinctions are based on transactions between the customer and the business itself. That is, there is no hidden data collection and no sharing of data from other sources. There are also no inferences based on predictive models or scorecards: the criteria are objective measures such as miles flown or balances maintained. This means that even though a policy may be inherently unfair, it is at least accurately and consistently applied.

By contrast, the worst privacy nightmares involve data that may be collected surreptitiously, inappropriately shared with others, applied incorrectly or just plain wrong. This is where the real harm gets done: someone is denied a job because they visited a gay Web site; someone is charged a higher health insurance premium because the insurer sees their past medical bills; a loan application is rejected based on a defective statistical model; a service request is denied because an external database suggests the buyer will not be a good future customer.

A privacy skeptic might point out that most of these situations involve a problem unrelated to privacy itself: people shouldn’t be penalized for visiting unpopular Web sites, people shouldn’t lie about their past medical bills, statistical models shouldn’t be defective, data shouldn’t be incomplete or inaccurate. But such problems do exist and always will; it would be silly to ignore them, assume they will all be fixed, or pretend we can legislate them out of existence. They are part of the privacy issue because they only hurt people when large amounts of personal data are widely available: otherwise, no one would be able to check what Web sites someone had visited, look at their medical bills, use certain variables in statistical models, and so on. In effect, privacy wraps a blanket of ignorance around each individual that prevents companies from even trying to discriminate among them, for good reasons or bad. It seems likely that a visceral understanding of the protection provided by privacy underlies most people’s concern for its loss.

Of course, no one has absolute privacy, and giving up information confers benefits as well as costs. So the privacy debate is really about striking a balance between the value that data can provide and the problems that it can cause. It’s tempting to argue for a free market solution: let individuals negotiate with companies about which data to share, for what uses and with what compensation. But things aren’t so simple: if most people decide to share a particular piece of information, then those who do not may be falsely assumed to be hiding something, or simply lumped into a category that gets worse treatment than others. So even voluntary data sharing results in effective coercion of those who would prefer to opt out. This means, paradoxically, that only government regulation can make data sharing truly voluntary, by protecting those who choose not to share.

Obviously there are economic and social costs to such regulation, so it should be limited to types of data that are worth controlling. The government also needs to ensure that companies use data only in the agreed ways, just as it would enforce any other contract. Finally, society may decide that some types of data should never be collected or should only be used for particular purposes. So, free market fantasies notwithstanding, there is no alternative to government involvement in this area. In reality, extensive government regulation already exists in areas such as credit reports and medical information.

The real question, then, is not whether the government should be involved in privacy regulation, but how. This ultimately depends on the social aims the regulation supports. One aim is privacy itself–the sense that what a person does should be nobody else’s business unless there is a good reason otherwise. Today such a right is widely, and legally, acknowledged, although it is still questioned in some circles. Another fundamental social goal is fairness: the idea that everyone should be treated equally unless they are different in a significant, relevant way. In today’s United States, differences such as race and religion are almost never acceptable grounds for differential treatment; other differences, such as income and education, are suspect but not automatically forbidden. Fairness is justified as a fundamental moral imperative–it is simply the right thing to do. In addition, fairness has a practical justification: giving everyone equal opportunities helps to ensure that society gets the benefit of all of its members’ talents.

Since the bedrock principle of fairness is that people should be treated equally unless there is a reason not to, any differential treatments–including the treatments of relationship management–need some justification before they are accepted. In the airline and banking examples mentioned earlier, the justification is fairly easy: the differences are based on relevant past behavior by the specific individuals involved. But what about differential treatment that is not based on concrete information about specific customer transactions? If the information is likely to be inaccurate or lead to false conclusions, it is considerably less defensible because people may be discriminated against unfairly. And–here’s where privacy comes back in–much of the personal data that is surreptitiously shared among companies is prone to exactly this sort of problem. There may be errors in the data itself, errors in matching data to the right person, approximations used in place of real data, and any number of other flaws. Nor, unfortunately, are the errors likely to be unbiased: data based on averages will penalize people who belong to disadvantaged groups and reward those who belong to more privileged strata, regardless of their personal characteristics. This means that differential treatment based on inadequate data is not only immoral, but diminishes the social mobility that is a key practical benefit of fairness. The utilitarian argument could be extended to oppose differential treatments even when they are based on accurate data: the theory is that treating everyone the same effectively subsidizes less well-off people, making it easier for them to ultimately succeed. But such subsidies are harder to defend on purely ethical grounds.

This is pretty abstract stuff, and there may be some holes in the logic. But what it boils down to is this: data privacy raises serious personal and social issues. It is not merely the irrational concern of a handful of paranoid Luddites. Relationship marketers should not fight blindly for the widest possible freedom in how they use personal data but should carefully seek to balance valid business and social considerations. Those who do may well find themselves supporting considerably greater restrictions on data sharing than they originally expected.

* * *

David M. Raab is a Principal at Raab Associates Inc., a consultancy specializing in marketing technology and analytics. He can be reached at draab@raabassociates.com.

