
Privacy & Security

U.S. House Hearing on Algorithms & Big Data: 5 Takeaways for Schools

By Benjamin Herold | November 29, 2017 | 7 min read

The Energy and Commerce Committee of the U.S. House of Representatives held a hearing Wednesday on the role algorithms play in how companies collect and analyze consumer data, and then use that information to target consumers.

Education was less of an explicit focus than headline-grabbing data breaches (Equifax, Uber), election interference and social-media manipulation, and the contentious debate over net neutrality. But there was still lots for the K-12 community to chew on.

Here are five big takeaways.


1. Loss of privacy is a major concern.

More than 9 of 10 adults believe that "consumers have lost control of how personal information is collected and used by companies, and 68 percent believe current laws are not good enough in protecting people's privacy online," argued Laura Moy, the director of the Center on Privacy and Technology at Georgetown Law, in her written testimony.

Here, she argued, K-12 students actually have it better than consumers at large, because there are at least some federal privacy laws (most notably the Family Educational Rights and Privacy Act, or FERPA, and the Children's Online Privacy Protection Act, or COPPA) in place.

But many in the K-12 community argue that FERPA, in particular, is outdated. Some critics also contend the law was significantly weakened by regulatory changes approved by President Obama's education department. As a result, the argument goes, the legal safety net protecting kids and families is full of gaping holes, and it generally fails to account for many of the modern technologies, data-gathering techniques, and uses of algorithms (basically, a series of steps that tells a computer how to solve a problem) that are common among ed-tech products today.
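For readers unfamiliar with the term, here is a minimal, made-up example of an algorithm in that sense, written as a short Python sketch. The task and the test scores are invented for illustration and have nothing to do with any product discussed at the hearing.

```python
# A tiny example of an algorithm: a fixed series of steps that tells a
# computer how to solve a problem -- here, finding the highest test score
# in a list (the scores are made up).

def highest_score(scores):
    best = scores[0]            # step 1: start with the first score
    for score in scores[1:]:    # step 2: look at each remaining score
        if score > best:        # step 3: keep whichever is larger
            best = score
    return best                 # step 4: report the result

print(highest_score([72, 95, 88, 61]))  # prints 95
```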

So far, such concerns have generated lots of state-level legislative activity. In the nation's capital, it's been mostly talk and little action, although there are again rumblings of potential efforts to update FERPA and issue new guidance on COPPA. This Friday, for example, the Federal Trade Commission and the U.S. Department of Education will be jointly hosting a public workshop, with a focus on how the two laws might be bolstered to better protect students.


2. Companies are gathering LOTS of data on consumers. But what they actually collect is only the start of the potential problem.

Credit Michael Kearns, the chair of the computer and information science department at the University of Pennsylvania, with the catchphrase of the day: "Data intimacy."

In his written testimony, Kearns defined the term as "the notion that machine learning enables companies to routinely draw predictions and inferences about consumers that go far deeper than the face-value of data collected as part of consumers' online activities." Examples include ascertaining your romantic partners, religious beliefs, and political affiliations based on where you shop and what you search for on the internet.
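As a rough, hypothetical sketch of what that kind of inference can look like, the snippet below trains a standard off-the-shelf classifier to guess a sensitive label from innocuous behavioral signals. The features, labels, and data are all invented for illustration; none of this comes from Kearns' testimony.

```python
# Hypothetical sketch of "data intimacy": inferring a sensitive attribute
# (here, a made-up binary "political leaning" label) from innocuous signals
# such as shopping categories and search topics. All data are invented.

from sklearn.linear_model import LogisticRegression

# Each row: [shops_at_outdoor_store, searches_cooking, searches_news, late_night_activity]
behavior = [
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
]
# The sensitive label, known only for a training sample of users.
leaning = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(behavior, leaning)

# A new user's browsing profile: the model infers the undisclosed attribute.
new_user = [[0, 1, 1, 0]]
print(model.predict(new_user))        # the inferred label
print(model.predict_proba(new_user))  # the algorithm's confidence in that inference
```

The point is simply that the sensitive attribute never has to be collected directly; the algorithm infers it from data that look harmless on their face.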

Particularly worrisome, Kearns said, is that machine-learning algorithms are increasingly being used to determine users' emotions, moods, and mental states.

Such information can be used for "vulnerability-based marketing," in which people are targeted with particular ads or services based on an algorithm's determination of their emotional state. One potential example cited by Kearns: predatory loans aimed at low-income people considering for-profit higher education institutions.

Facebook also recently announced it would use its algorithms to help identify potential suicide risks.

All that likely resonates in the K-12 world, where some school districts are already paying companies to monitor students' social media for potential warning signs, and where the ed-tech industry has grown increasingly focused on "social-emotional learning." Parents and activists across the political spectrum have voiced alarm at the prospect of companies using surveys, webcams, clickstream data, and other techniques to create psychological and behavioral profiles of students in the name of better understanding their mindsets and preferences, and targeting content accordingly.

Kearns' advice to congressional lawmakers is also relevant for K-12 policymakers and officials: When considering how to legislate, regulate, and contract with technology providers, don't focus solely on the data they are directly collecting. Take the time to also consider all the inferences and predictions that companies might be able to draw from those data, especially if they start combining them with other information.


3. Algorithmic bias is more complicated than you might think.

The possibility of algorithmic bias has started to receive more attention in K-12. What if software programs recommend a less-rigorous curriculum to certain groups of students, or steer them toward less-promising career fields?

Such concerns have already been documented in the consumer space, said Catherine Tucker, a professor of management science and marketing at MIT's Sloan School of Management.

She described a study in which researchers placed an advertisement on Facebook, Google, and Twitter featuring job opportunities in science and technology. They found that the companies' targeting algorithms ended up showing the ads to 40 percent more men than women.

But further research showed that the algorithm itself wasn't skewed in favor of men, Tucker said. Nor were there cultural factors at work that led women to be less responsive to the ad.

Instead, she said, the problem derived from hidden economic forces. The companies' algorithms were essentially running auctions that let a variety of advertisers bid on the opportunity to reach particular audiences. Other advertisers were willing to pay extra to reach women, and from the algorithms' perspective, the researchers' priority was to keep costs down. The end result was that their job ad was steered away from the more-expensive female audience.
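To make that mechanism concrete, here is a small, hypothetical simulation of a cost-sensitive ad auction. The bid amount and the price ranges are invented for illustration and are not drawn from Tucker's study; they simply show how a gender-neutral, cost-minimizing campaign can end up reaching far more men than women when other advertisers bid more for female audiences.

```python
import random

random.seed(0)

# Hypothetical illustration of the auction dynamic Tucker described:
# a gender-neutral job ad with one fixed bid wins fewer auctions for women
# because competing advertisers (e.g. retail brands) bid more to reach them.
OUR_BID = 0.50  # the job ad bids the same amount for every impression

def competing_bid(audience):
    """Highest competing bid in the auction (invented price ranges)."""
    if audience == "women":
        return random.uniform(0.30, 0.80)  # competitors pay a premium to reach women
    return random.uniform(0.20, 0.60)      # competition for men is cheaper

shown = {"women": 0, "men": 0}
for _ in range(10_000):                     # 10,000 impression opportunities per group
    for audience in shown:
        if OUR_BID > competing_bid(audience):  # we win the auction; the ad is shown
            shown[audience] += 1

print(shown)
# Typical output: the ad reaches noticeably more men than women, even though
# the bid, the ad, and the two audiences were treated identically.
```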

The lesson for policymakers, in K-12 and beyond?

There's no easy solution, Tucker said.

Just making algorithms open or transparent (as some have proposed requiring public agencies to do) won't necessarily prevent bias, she said. And neither will auditing the data that are used to train them.


4. Sunlight may not be an adequate disinfectant.

In the big-data-and-algorithms field, transparency isn't just about opening up the algorithms.

On a far more basic level, policymakers and regulators have pushed to make sure companies (including those in ed tech) are publicly posting their privacy policies, so consumers (and students, parents, and teachers) can ostensibly know what data are being collected and how they are being used.

But there's a big problem with that approach, said law professor Omri Ben-Shahar of the University of Chicago.

It doesn't work.

Mandated disclosure rules are "entirely ineffective" and an "empty ritual," Ben-Shahar said in his written testimony. Most people don't read them. If they do read them, they don't understand them. And if they do understand them, they don't care.

That's not the consensus view, of course. Maryland law professor Frank Pasquale, who was also on hand to testify, is a big proponent of more transparency and greater disclosure.

One of Pasquale's points that the ed-tech vendor community might take to heart: There's a lot of talk about data privacy, security, and transparency impeding innovation, he said. But developing and employing good data practices can also promote innovation and be a source of competitive advantage.


5. Despite all the worry about algorithms, data collection, and privacy, most people don't seem willing to change their behavior.

When it comes to data privacy, one of the big questions for schools, companies, parents, and activists alike is whether people care enough about the potential risks of massive data collection and algorithmic targeting to forgo the conveniences and benefits that come with them.

So far, said Tucker of MIT, there's not a lot of evidence that's the case.

She cited another study that highlighted the so-called "privacy paradox."

In it, researchers found that even those undergraduate students who said they cared most about their online privacy were willing to share very personal information in exchange for a slice of cheese pizza.



A version of this news article first appeared in the Digital Education blog.