With each passing day, we live more and more of our lives in an algorithmic universe, driven by the easy accumulation of big data. In our personal lives, we inhabit a 24/7 world of "filter bubbles": Facebook can tune how liberal or conservative one's newsfeed appears based on prior postings; Google personalizes the ads that appear in Gmail based on the content of our conversations; and companies like Amazon and Pandora serve up personalized recommendations based on our prior purchases and everything we click on.

While, at least in theory, we remain free in our personal lives to choose whether to keep using these applications, what we see is increasingly the product of hidden bias in the software. Similarly, in the workplace, the use of black-box algorithms can introduce certain types of bias without an employee's or prospective employee's knowledge. The question we wish to address here is this: from an information governance perspective, how can management provide some kind of check on the sometimes naïve, sometimes sophisticated use of algorithms in the corporate environment?


Algorithms in the Wild

An early, well-known example of the surprising power of algorithms was Target's use of software that, based on purchasing data (e.g., who was buying unscented lotions, cotton balls, etc.), could predict with spooky accuracy whether a customer was likely pregnant. Target sent coupons for baby products to a Minnesota teenager's home before the teenager's father knew about the pregnancy, leading to a public relations debacle. A different example is Boston's Street Bump mobile app: when a smartphone running the app rode over a pothole or similar hazard, it automatically reported the location so the city could make repairs. The problem was that the resulting map of potholes corresponded closely with the city's more affluent neighborhoods, because those were the areas where residents knew to download the app and could afford smartphones in the first place.