Why Internet marketing is often unfair

By choosing communication channels, recommendations, and promotions based on each customer's individual preferences, companies can find an approach to almost any buyer. With the development of machine learning and big-data analytics, personalization is gaining importance and becoming less intrusive and annoying for consumers. However, automated systems can also reinforce harmful prejudices.

In our new study, we examined the use of dynamic pricing and targeted discounts and tried to find out whether (and how) prices set by an algorithm can lead to unjust bias. A cautionary tale of personalized marketing comes from the Princeton Review. In 2015, it emerged that the company, which offers exam-preparation services, was charging customers in different regions different prices, with differences in some cases reaching hundreds of dollars, even though all classes were conducted by teleconference. In the short term, this kind of dynamic pricing might seem an easy way to increase revenue. But studies have repeatedly shown that consumers consider the approach blatantly unfair, that it undermines trust, and that it kills any desire to return to the company. Moreover, the Princeton Review's bias had a racial dimension: a subsequent high-profile investigation by ProPublica journalists showed that prices for Asian families were systematically higher than for other clients.

Even the largest technology companies acknowledge that providing personalized services while avoiding discrimination is not easy. Several studies have shown that ads for high-paying jobs on platforms such as Facebook and Google are shown to men far more often. Last year a lawsuit was filed against Facebook, and as a result it was found that the company had violated the law prohibiting discrimination in the sale and rental of housing by allowing property advertisers to target ads by race, sex, age, and other attributes.


How do personalization algorithms work? Suppose your company wants to identify the customers most sensitive to discounts, and to train the algorithm you plan to use accumulated statistics from previous periods. If the customer characteristics used for machine learning include demographic parameters, the algorithm will, with high probability, end up giving different recommendations for different demographic groups. Cities and regions are often divided along ethnic and social lines, and data about a user's browsing history correlates with their geographic location (for example, via IP address or search history). It may turn out that customers from high-income areas, or from areas populated predominantly by one nationality or race, are the most sensitive to discounts. Customers with high incomes can afford to buy at full price, but they often shop online and know when to expect lower prices. An algorithm trained on such statistics will learn to offer more discounts to affluent white customers.
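The mechanism described above can be illustrated with a minimal sketch. Everything here is invented for illustration: the simulated history simply encodes the assumption that customers from wealthier areas redeemed past discounts more often, and the "model" is a trivial per-group redemption-rate estimate rather than any real targeting system.

```python
# Hypothetical sketch: a discount-targeting "model" trained on historical
# data in which a location-based feature correlates with income.
# All data and thresholds are invented for illustration.
import random

random.seed(0)

# Simulated history: customers from wealthy areas redeemed discounts
# more often, so the training data itself encodes the disparity.
history = []
for _ in range(1000):
    wealthy_area = random.random() < 0.5
    redeem_prob = 0.6 if wealthy_area else 0.3
    history.append((wealthy_area, random.random() < redeem_prob))

def redemption_rate(area_flag):
    # Estimate the redemption rate for one area type.
    rows = [redeemed for area, redeemed in history if area == area_flag]
    return sum(rows) / len(rows)

# The learned policy targets discounts where redemption is likeliest,
# reproducing the income-based skew present in the training data.
for flag, label in [(True, "wealthy area"), (False, "poorer area")]:
    rate = redemption_rate(flag)
    offer = rate > 0.45  # hypothetical targeting threshold
    print(f"{label}: redemption rate {rate:.2f}, offer discount: {offer}")
```

The point is not the model's sophistication: any learner optimizing for redemption on such data, however complex, will rediscover the same skew.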

We reviewed the results of dozens of large-scale pricing experiments in e-commerce. Using the client's IP address as a rough indicator of location, we matched each user to a city and, drawing on public U.S. census data, estimated the average income in that region. Analyzing the results of millions of website sessions, we found confirmation that, as in the hypothetical example above, people from wealthy areas respond more actively to discounts when shopping online than people from poorer areas. And since dynamic-pricing algorithms are designed to offer deals to the users most likely to respond to them, future marketing campaigns will evidently, and systematically, offer lower prices to people with higher incomes.
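The join-and-compare logic of such an analysis can be sketched as follows. The region names, income figures, and experiment log are all fictional stand-ins; the real study worked with IP-derived locations and census income data at much larger scale.

```python
# Hypothetical sketch: join each user's IP-derived region to a
# (fictional) census income table, then compare discount response
# rates across income halves. All values below are invented.
census_income = {
    "region_a": 95000, "region_b": 42000,
    "region_c": 78000, "region_d": 35000,
}

# (region, saw_discount, purchased) tuples from a fictional experiment log.
events = [
    ("region_a", True, True), ("region_a", True, True), ("region_a", True, False),
    ("region_b", True, False), ("region_b", True, False), ("region_b", True, True),
    ("region_c", True, True), ("region_c", True, False),
    ("region_d", True, False), ("region_d", True, False),
]

# Split regions into higher- and lower-income halves.
median = sorted(census_income.values())[len(census_income) // 2]

def response_rate(income_predicate):
    # Purchase rate among users who saw a discount in matching regions.
    rows = [bought for region, saw, bought in events
            if saw and income_predicate(census_income[region])]
    return sum(rows) / len(rows)

high = response_rate(lambda inc: inc >= median)
low = response_rate(lambda inc: inc < median)
print(f"discount response: high-income {high:.2f}, low-income {low:.2f}")
```

With these toy numbers the high-income half responds three times as often, which is exactly the pattern a response-optimizing pricing algorithm would then amplify.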

To minimize such socially undesirable outcomes, companies will have to audit their artificial intelligence, including evaluating the accuracy, fairness, interpretability, and reliability of all important algorithmic decisions in the organization.
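One concrete check such an audit might run is a demographic-parity test on discount offers. This is a minimal sketch under invented assumptions: the group labels and offer log are fictional, and the 80% threshold borrows the "four-fifths" rule of thumb used in U.S. employment-discrimination practice, which is only one of many possible fairness criteria.

```python
# Hypothetical audit check: compare discount-offer rates across two
# customer groups. Groups, offers, and the 0.8 threshold (the
# "four-fifths" rule of thumb) are illustrative assumptions.
offers = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

def offer_rate(group):
    # Fraction of customers in a group who were offered a discount.
    flags = [offered for g, offered in offers if g == group]
    return sum(flags) / len(flags)

rate_x, rate_y = offer_rate("group_x"), offer_rate("group_y")
ratio = min(rate_x, rate_y) / max(rate_x, rate_y)
print(f"offer rates: x={rate_x:.2f}, y={rate_y:.2f}, parity ratio={ratio:.2f}")
if ratio < 0.8:  # flag policies far from parity for human review
    print("WARNING: discount policy fails the parity check")
```

A real audit would run many such checks (over protected attributes, proxies like geography, and outcome metrics beyond offer rates) and route failures to human reviewers rather than hard-coding a single threshold.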

Although considerable costs can be expected at the outset, in the long run such audits may pay off for many companies. Given the social, technical, and legal difficulties of validating algorithms, you will need a team of trained internal or external experts who will look for blind spots and vulnerabilities in every business process that involves automated decision making.

The topic of social inequality is a painful one, and it is more important than ever for CEOs to think about how to keep automated marketing campaigns from infringing on the interests of particular social or ethnic groups. The company's long-term success depends on it.

About the authors: Alex Miller is a doctoral student in information systems and technology at the Wharton School of the University of Pennsylvania. Kartik Hosanagar is a professor of technology and digital business at the Wharton School of the University of Pennsylvania.
