Tech Isn’t the Answer for Test Taking

Dear readers, please be extra careful online on Friday. The news that President Trump has tested positive for the coronavirus created the kind of fast-moving information environment in which we might be inclined to read and share false or emotionally manipulative material online. It’s happening already.

I found recent guides from The Verge and The Washington Post helpful for avoiding contributing to online confusion, unhelpful arguments and false information. A good rule of thumb: If you have a strong emotional reaction to something, step away from your screen.

Technology is not more fair or more capable than people. Sometimes we shouldn’t use it at all.

That’s the message from Meredith Broussard, a computer scientist, artificial intelligence researcher and professor in data journalism at New York University.

We discussed the recent explosion of schools relying on technology to monitor remote students taking tests. Broussard told me this is an example of people using technology all wrong.

There are two ways to think about using software or digital data to help make decisions in education and beyond. One view holds that imperfect outcomes simply call for better technology or better data. Some technologists make this argument about software that tries to identify criminal suspects from photos or video footage and has proved flawed, particularly for darker-skinned people.

Broussard takes a second view. There is no effective way to design software to make social decisions, she said. Education isn’t a computer equation, nor is law enforcement. Social inputs like racial and class bias are part of these systems, and software will only amplify the biases.

Fixing the computer code is not the answer in those circumstances, Broussard said. Just don’t use computers.

Talking to Broussard flipped a switch in my brain, but it took a while. I kept asking her, “But what about …” until I absorbed her message.

She isn’t saying don’t use software to spot suspicious credit card transactions or screen medical scans for possible cancerous lesions. But Broussard starts with the premise that we need to be selective and careful about when and how we use technology.

We need to be more aware of when we’re trying to apply technology in areas that are inherently social and human. Tech fails at that.

“The fantasy is we can use computers to build a system to have a machine liberate us from all the messiness of human interaction and human decision making. That is a profoundly antisocial fantasy,” Broussard said. “There is no way to build a machine that gets us out of the essential problems of humanity.”

Everyone is telling Facebook to do one thing. It is doing the opposite.

Those concerned about the spread of false conspiracy theories and misinformation online have singled out the dangers of Facebook’s groups, the gatherings of people with shared interests. Groups, particularly those that are by invitation only, have become places where people can push false health treatments and wild ideas, and plan violent plots.

Facebook recommends groups — including those that discuss extremist ideas — to people as they’re scrolling through their feeds. My colleague Sheera Frenkel told me that almost every expert she knew said that Facebook should stop automated recommendations for groups devoted to false and harmful ideas like the QAnon conspiracy. This is tricky because groups focused on dangerous ideas sometimes hide their focus.

Facebook knows about the problems with group recommendations, and it’s responding by … making even MORE recommendations for groups open to everyone. That was among the changes Facebook announced on Thursday. The company said it would give people who oversee groups more authority to block certain people or topics in posts.

That is Facebook’s answer. Make group administrators responsible for the bad stuff. Not Facebook. This infuriates me. (To be fair, Facebook is doing more to emphasize public groups, not private ones in which outsiders are less likely to see and report dangerous activities.) But Facebook isn’t fully adopting a safety measure that everyone had been shouting about from the rooftops.

Why? Because it’s hard for people and companies to change.

Like most internet companies, Facebook has always focused on getting bigger. It wants more people in more countries using Facebook more avidly. Recommending groups is one way to give people more reasons to spend time on Facebook.

My colleague Mike Isaac told me that growth can overrule all other imperatives at Facebook. The company says it has a responsibility to protect people and not contribute to the flow of dangerous information. But when protecting people conflicts with Facebook’s growth mandate, growth tends to win.