Protecting Data Analysis

Posted 21 Oct at 16:22h by Jay Hutchins

It’s pretty common knowledge these days that every time any one of us participates in social media (liking, retweeting, posting, etc.), buys something online, logs onto an app, or views an ad on a phone or computer, that information is collected, stored, and used. The music you download gives a strong clue about your age. Living in certain census tracts hints at your racial or ethnic heritage.

The collective data about you and everyone else is processed using algorithms, a fancy term for computer programs that look for patterns in the data. A simple example: if you buy a few romance novels on Amazon, it will recommend other romance novels. The algorithm draws not only on your reading preference, but also on the best-selling books within that preference, and perhaps on the specific type of romance novels you’ve purchased.
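For readers who like to see the mechanics, here is a minimal, purely illustrative sketch of that kind of "people who bought what you bought" pattern-matching. The shoppers, titles, and scoring rule are all made up for the example; real recommendation engines are far more sophisticated.

```python
from collections import Counter

# Hypothetical purchase histories; names and titles are illustrative only.
purchases = {
    "alice": {"Romance A", "Romance B", "Thriller X"},
    "bob":   {"Romance A", "Romance C"},
    "carol": {"Romance B", "Romance C", "Romance D"},
}

def recommend(user, purchases, top_n=3):
    """Suggest titles bought by shoppers whose purchases overlap with the user's."""
    mine = purchases[user]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)       # how similar the two shoppers are
        for title in theirs - mine:        # titles the user hasn't bought yet
            scores[title] += overlap
    return [title for title, _ in scores.most_common(top_n)]

print(recommend("alice", purchases))  # e.g. ['Romance C', 'Romance D']
```

Even in this toy version, the recommendations are driven entirely by patterns in other people's behavior, which is exactly why such systems can quietly absorb whatever biases those patterns contain.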

A more complex example is when you click on an article about anything and suddenly see advertisements about that topic on your browser pages, and those ads then follow you around. When Facebook user data ended up in the hands of Cambridge Analytica, the firm was able to determine not just voter preferences, but also the issues that would raise or lower a voter’s enthusiasm for a particular candidate, and it targeted advertisements to individuals designed to move those enthusiasm levels.

The point here is that algorithms play a huge role not only in helping us make decisions, but in influencing those decisions, and some people are alarmed about the potential for misuse. They have good reason. Facebook famously blocked many Native Americans from signing up for accounts because the software thought their names, including Lance Browneyes and Dana Lone Hill, were fake. An Amazon artificial intelligence algorithm designed to screen job applicants managed to teach itself to weed out women by looking for certain keywords on their resumes. Researchers recently found that job-search sites are less likely to refer opportunities for high-paying positions to women. Why? Because these job seekers don’t match the typical profile of people already holding those jobs, who are mostly white men.

Algorithmic lending systems have tended to charge higher interest rates to Latino and African-American borrowers, not because they can detect the color of the borrower’s skin, but because of purchasing patterns that correlate with ethnicity. Those demographic cohorts have, in the past, been less likely to shop online than their white peers.
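To see how a seemingly "colorblind" rule can still produce uneven outcomes, here is a tiny, entirely synthetic sketch. The pricing rule below never looks at ethnicity, only at an online-shopping proxy; the group shares and rates are assumptions invented for the illustration, not real statistics.

```python
import random

random.seed(0)

def quoted_rate(shops_online):
    # Illustrative pricing rule: online shoppers get a lower rate.
    return 5.0 if shops_online else 6.5   # percentage points, made up

def average_rate(group_online_share, n=10_000):
    # Simulate n applicants from a group with the given online-shopping share.
    rates = [quoted_rate(random.random() < group_online_share) for _ in range(n)]
    return sum(rates) / n

# Assumed (hypothetical) online-shopping shares for two groups.
print("Group A average rate:", round(average_rate(0.80), 2))
print("Group B average rate:", round(average_rate(0.50), 2))
# Group B ends up paying more on average even though ethnicity never appears in the code.
```

The disparity comes entirely from the proxy variable, which is how an algorithm can discriminate without ever being told to.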

Is help on the way? A recent article in Bloomberg notes that both the U.S. House and Senate are reviewing a bill, the Algorithmic Accountability Act of 2019, which would require companies to test their algorithms for bias. In the U.K., a government-commissioned group made up of technology experts, policy makers, and lawyers is working on a report that is expected to call for a universal code of ethics for algorithmic integrity.

Government regulations have themselves proved less-than-perfect governors of behavior. But new laws would at least make it harder for companies to ignore the issues that arise as their algorithms subtly, quietly, and invisibly influence what we see, what is recommended to us, and how we’re treated.
