When the data crunching crunches you in the new age of algorithms

The robot takeover will have profound implications for the future of work and daily life, but not necessarily in the ways most people would guess. As one report on workplace automation put it: “the scariest part of the automated workplace is probably not that robots are coming to take your job — it’s that the robots are coming to measure your job.”

This development is part of what many commentators and historians have dubbed the “Fourth Industrial Revolution.” In this new age of algorithms, companies, governments, and private individuals use artificial intelligence to analyze Big Data on an unprecedented scale and with groundbreaking sophistication.

But what happens when the data crunching crunches you, reducing your humanity to 1s and 0s and weaponizing the bell curve, so that distance from the statistical norm becomes grounds for suspicion?
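To make the bell-curve point concrete, here is a minimal sketch, in Python with invented data and an invented threshold, of how a workplace scoring system might reduce each person to a standardized score and flag whoever falls too far from the norm:

```python
import statistics

# Hypothetical "productivity" metric for a small team (invented numbers).
keystrokes_per_hour = {
    "alice": 4200, "bob": 3900, "carol": 4100,
    "dave": 2600, "erin": 4000, "frank": 4300,
}

mean = statistics.mean(keystrokes_per_hour.values())
stdev = statistics.stdev(keystrokes_per_hour.values())

# Each person is reduced to a z-score: distance from the group norm,
# in standard deviations. The cutoff below is arbitrary.
FLAG_THRESHOLD = -1.5

for name, value in keystrokes_per_hour.items():
    z = (value - mean) / stdev
    status = "FLAGGED for review" if z < FLAG_THRESHOLD else "ok"
    print(f"{name}: z = {z:+.2f} ({status})")
```

Nothing in that loop knows why dave’s number is low; a medical emergency and idleness produce the same z-score, and the system treats them identically.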

Security technologist Bruce Schneier describes where this leads:

“The U.S. government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job….We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.

“This is what social control looks like in the Internet age. The Cold War-era methods of undercover agents, informants living in your neighborhood and agents provocateurs [are] too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.”
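We do not know what the NSA’s actual algorithms look like, but even a toy rule-based filter, sketched below in Python with invented watchwords and posts, shows why this kind of sifting displaced the informant: one rule set can judge millions of posts at almost no cost, and it judges them bluntly:

```python
import re

# Invented watchwords and posts; real systems are far more sophisticated,
# but the economics are the same: one rule set applied to everyone.
WATCHWORDS = {"attack", "bomb", "target"}

posts = [
    "That movie was the bomb, total blast",
    "Our marketing team will attack the Q3 targets",
    "Lovely weather today",
]

for post in posts:
    words = set(re.findall(r"[a-z]+", post.lower()))
    hits = WATCHWORDS & words
    if hits:
        print(f"FLAGGED ({', '.join(sorted(hits))}): {post}")
```

Both flagged posts here are harmless. At the scale of a whole population, false positives like these are not edge cases but a steady output, which is why the appeal mechanism Schneier proposes below matters.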

Schneier further offers several possible solutions:

“The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.

“The second step is for these systems to be subject to oversight and accountability. It’s already illegal for these algorithms to have discriminatory outcomes, even if they’re not deliberately designed in. This concept needs to be expanded. We as a society need to understand what we expect out of the algorithms that automatically judge us and ensure that those expectations are met.

“We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it’s by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.”
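Schneier’s second point, that discriminatory outcomes are already illegal, implies they are also measurable. One common heuristic is the “four-fifths rule” from U.S. employment guidelines: if one group’s selection rate falls below 80% of the most-favored group’s, the outcome deserves scrutiny. Here is a minimal audit sketch in Python, with invented numbers from a hypothetical loan-approval model:

```python
# Invented outcomes from a hypothetical loan-approval model,
# broken down by a protected attribute.
approved = {"group_a": 480, "group_b": 270}
applied = {"group_a": 800, "group_b": 600}

# Selection (approval) rate per group.
rates = {group: approved[group] / applied[group] for group in applied}
best_rate = max(rates.values())

# Four-fifths rule: flag any group whose rate is under 80% of the best.
for group, rate in rates.items():
    ratio = rate / best_rate
    verdict = "ok" if ratio >= 0.8 else "POSSIBLE DISPARATE IMPACT"
    print(f"{group}: approval rate {rate:.0%}, ratio {ratio:.2f} ({verdict})")
```

An audit like this needs only the algorithm’s outcomes, not its internals, which is why outcome-based oversight and Schneier’s call to make the algorithms public are complementary steps rather than substitutes.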

These concerns and potential solutions are not on most policymakers’ radar yet, let alone on the docket for public debate. It’s time to prepare and start the conversation.