Algorithmic redlining is a problem that isn’t going away

In 1988, Bill Dedman exposed the devastating effects of redlining in “The Color of Money,” his Pulitzer Prize-winning series for The Atlanta Journal-Constitution.

Redlining is an insurance term that originated when companies would literally draw red lines around areas on a map that were deemed too risky to insure. Of course, the people most affected lived in inner cities and low-income neighborhoods.

While insurance redlining is now illegal, there are still hundreds of examples of it happening every day:

  1. Online Delivery Redlining: Amazon, “The Everything Store” with one-day delivery, is apparently not for everyone.
  2. Supermarket Redlining: In Hartford, Connecticut, 85% of supermarkets left the city between 1968 and 1984, and few supermarkets have opened since.
  3. Geographical Redlining: Retail giants like Home Depot and Staples charge different prices based on where you live.
  4. Predatory Loan Redlining and Liquorlining: In a sort of reverse redlining, areas where banks lend less money see an increase in predatory loans and liquor stores.
  5. Subprime Loan Redlining: Back in 2000, Wells Fargo targeted churches in black communities and convinced their religious leaders to deliver “wealth building” seminars to their congregations. The bank then made a donation to the church for every new mortgage application.

The list goes on with student loans, auto insurance, workforce services…you name it.

Now, as Joi Ito wrote, we are training our computer algorithms to do the same thing. Big Tech can now target and redline because we have surrendered so much of our personal information.

So what happens when your Apple Watch predicts that you are 18% more likely to have a heart attack at age 50? Will you be uninsurable?

Or how about predicting the likelihood that someone will commit another crime, get into a car accident, or perform well at a job?

The problem isn’t predicting. The problem is how we use the prediction.

Right now, the algorithms are wrong. Most of the time, in fact. Yet we increasingly rely on them each day to tell us what to do.

The question is, what happens when we begin to rely on these algorithms for everything? What happens when these algorithms actually get good at predicting behavior? Is that algorithm biased in any way?

Computers are not biased like humans. They don’t think or feel or carry the same prejudices that human beings do. But humans built the computers, and computers are a reflection of our flawed systems.
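
To make that concrete, here is a minimal sketch, using entirely made-up data and hypothetical ZIP codes, of how a model trained on historical lending decisions can reproduce the old red lines without ever seeing race or any other protected attribute: the ZIP code alone acts as a proxy.

```python
from collections import defaultdict

# Hypothetical historical loan decisions. "02139" stands in for a formerly
# redlined ZIP code where applications were denied regardless of income.
history = [
    {"zip": "02139", "income": 72_000, "approved": False},
    {"zip": "02139", "income": 65_000, "approved": False},
    {"zip": "02139", "income": 80_000, "approved": False},
    {"zip": "02140", "income": 60_000, "approved": True},
    {"zip": "02140", "income": 55_000, "approved": True},
    {"zip": "02140", "income": 75_000, "approved": True},
]

# "Training": estimate the approval rate per ZIP code. This is the crudest
# possible model, but the same dynamic shows up in far fancier ones.
outcomes = defaultdict(list)
for record in history:
    outcomes[record["zip"]].append(record["approved"])

def predict_approval(zip_code: str) -> float:
    """Predicted approval probability, learned purely from past decisions."""
    past = outcomes.get(zip_code, [])
    return sum(past) / len(past) if past else 0.5  # no data: coin flip

# No protected attribute appears anywhere above, yet the model faithfully
# automates the historical denial pattern.
print(predict_approval("02139"))  # 0.0
print(predict_approval("02140"))  # 1.0
```

The sketch is deliberately crude, but the failure mode is the real one: when the training data encodes a discriminatory history, the model learns that discrimination as if it were signal.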