Predictive text systems change what we write

Study explores the effects of autocomplete features on human writing

When a human and an artificial intelligence system work together, who rubs off on whom? It’s long been thought that the more AI interacts with and learns from humans, the more human-like those systems become. But what if the reverse is happening? What if some AI systems are making humans more machine-like?

In a recent paper, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) explored how predictive text systems — the programs on our phones and computers that suggest words or phrases in our text messages and email — change how we write. The researchers found that when people use these systems, their writing becomes more succinct, more predictable and less colorful (literally).

The paper was accepted for presentation at the Association for Computing Machinery’s Intelligent User Interfaces (ACM IUI) conference.

“We’ve known for a while that these systems change how we write, in terms of speed and accuracy, but relatively little was known about how these systems change what we write,” said Kenneth Arnold, a PhD candidate at SEAS and first author of the paper.     

Arnold, with co-authors Krysta Chauncey, of Charles River Analytics, and Krzysztof Gajos, the Gordon McKay Professor of Computer Science at SEAS, ran experiments asking participants to write descriptive captions for photographs. The study compared three scenarios:

  • Captions written with the suggestions hidden
  • Captions written with programs that always suggested three possible next words, as iOS and Android smartphones currently do
  • Captions written with programs that made suggestions only when the system had high confidence in the next word (the difference between these last two policies is sketched in code below)
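
To make the contrast between the last two conditions concrete, here is a minimal, illustrative Python sketch. It is not the researchers’ actual system: the probability table, function names, and confidence threshold are all invented stand-ins, and a real keyboard would query a language model for these probabilities.

    # Hypothetical next-word probabilities from a language model,
    # given the text typed so far (values invented for illustration).
    next_word_probs = {"man": 0.41, "baseball": 0.07, "person": 0.05}

    def suggest_always(probs, k=3):
        """Always display the top-k candidates, as phone keyboards do."""
        return sorted(probs, key=probs.get, reverse=True)[:k]

    def suggest_if_confident(probs, k=3, threshold=0.5):
        """Display suggestions only when the single best candidate
        clears a confidence threshold; otherwise show nothing."""
        if max(probs.values()) >= threshold:
            return sorted(probs, key=probs.get, reverse=True)[:k]
        return None  # low confidence: stay silent

    print(suggest_always(next_word_probs))        # ['man', 'baseball', 'person']
    print(suggest_if_confident(next_word_probs))  # None (0.41 is below 0.5)

Under a policy like the second one, participants saw suggestions less often, but only when the system was relatively sure of what came next.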

When they compared the captions, the researchers found that captions written with suggestions were shorter and included fewer unexpected words than captions written without suggestions. Specifically, captions written with suggestions included fewer adjectives, including color words like “green” or “blue”. 

For example, one photograph showed a baseball player swinging a bat. Rather than suggesting “baseball player” or “hitter”, the predictive program suggested the much more common word, “man”.

Here is a caption for that image, written with suggestions shown: “a man is swinging his bat at a baseball game”.

Here is a caption for the same picture, written with suggestions hidden: “a baseball player wearing number eight swings a bat with people watching from the dugout”.

Here is a caption for a second photograph, written when predictive suggestions were shown: “A train pulling into a quaint train station”.

Here is a caption for the same picture, written with suggestions hidden: “An old brown train pulling away from a small train station by a baby blue building”.

Even unprompted, the writer may have written the word “train”—but likely only after descriptive adjectives like “brown” or “small”. By suggesting nouns like “train” immediately after the article, rather than adjectives, the AI system often caused writers to omit those words altogether. The researchers dubbed this “skip nudging”.

So, with suggestions hidden, a participant captioned a third photograph as: “numerous kids and accompanying grownups fly colorful animal shaped kites at the beach”.

A different participant was shown predictive suggestions and captioned the same image as: “people standing on a beach flying colorful kites”.

“In short, predictive text suggestions — even when presented as single words — are taken as suggestions of what to write,” said Arnold. “While, for the most part, people wrote more efficiently with predictive text systems, this may have come at the cost of thoughtfulness. These kinds of effects would never have been noticed by traditional ways of evaluating text entry systems, which treat people like transcribing machines and ignore human thoughtfulness. Designers need to evaluate the systems that they make in a way that treats users more like whole people.”

“This research provides compelling evidence that intelligent systems designed to improve the efficiency of human work frequently also impact the content of human work, and they do so in unanticipated ways,” said Gajos. “We had missed this before because, as a research community, we tend to evaluate novel technologies in simplified settings that do not always reflect real use. Ken’s work demonstrates the importance of testing these innovations with real people doing real work.”

Press Contact

Leah Burrows | 617-496-1351 | lburrows@seas.harvard.edu