Better personalized recommendations through transparency and content design

Ryan Bigge
6 min read · Feb 7, 2019


Important according to Google magic. Important according to our magic sauce.

This isn’t enough transparency for a recommendation algorithm.

As a content designer (aka UX writer), I understand that a paragraph can’t be crammed into a show-on-hover tooltip. And I’d be banned from Medium if I didn’t mention that Arthur C. Clarke quote: “Any sufficiently advanced technology is indistinguishable from magic.”

But magic sauce isn’t just black boxing. It’s taking something serious and saying, “Don’t worry your pretty little head about it.”

To be fair, Google isn’t the only culprit. As Jesse Barron noted in his wonderful 2016 article The Babysitters Club, “We’re in the middle of a decade of post-dignity design, whose dogma is cuteness.”

Blame space constraints, corporate secrecy, or the cute revolution for hiding the math behind personalized recommendations. But creating and maintaining user trust requires transparency, not opacity.

Pay attention to that algorithm behind the curtain

Here’s taste match, Spotify’s attempt at concise transparency:

Spotify’s taste match uses percentages to indicate how likely you are to enjoy the suggested artist.

It prompts more questions than it answers, but it’s still better than Google’s magic maestro. The most transparent company I’ve seen thus far is Netflix, and even they bury the details in their help docs:

I like that Netflix uses the phrase “recommendation algorithm” in their Help Center.

It might seem odd to focus on transparency when big tech companies have so many flaws to fix right now. But using plain language to explain how personalized recommendations work is especially important given the big power imbalance in big data.

At the risk of oversimplifying the central challenge: data-driven companies know something that the user doesn’t. Yet the language used to convince people to act on recommendations lacks variety and explanatory power:

  • Inspired by your browsing history
  • Recently viewed
  • People who use {item a} also use {item b}
  • Related to {previous user action}
  • Because you added {item a} to cart
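Strip away the specifics and every variant above is the same construction: a fixed reason string with one or two slots for data. Here’s a minimal sketch of that pattern (the type, keys, and example values are my own, purely illustrative):

```typescript
// Each copy variant above is a fixed "reason" string with one or two slots.
// ReasonTemplate, the keys, and the example values are hypothetical.
type ReasonTemplate = (slots: Record<string, string>) => string;

const reasons: Record<string, ReasonTemplate> = {
  browsingHistory: () => "Inspired by your browsing history",
  recentlyViewed: () => "Recently viewed",
  alsoUse: ({ a, b }) => `People who use ${a} also use ${b}`,
  related: ({ action }) => `Related to ${action}`,
  addedToCart: ({ a }) => `Because you added ${a} to cart`,
};

// Only the slot values change between recommendations, which is why the copy
// reads as repetitive rather than explanatory.
console.log(reasons.alsoUse({ a: "Figma", b: "Slack" }));
// -> "People who use Figma also use Slack"
```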

Over the last decade, most UX teams have embraced a variant of the jobs-to-be-done framework. But in many cases, personalized recommendations introduce a new approach — jobs-you-should-do-instead. For JYSDI to succeed, users need evidence that a recommendation is worth doing in addition to, or instead of, the task they set out to complete.

Airbnb has an excellent article about building content and design around evidence. Go read it. Right now. It’s the only article I’ve seen that treats engineering and design with equal weight and care.

Pay attention to that academic research behind the curtain

Competitive advantage prevents many companies from being transparent about recommendations. But I don’t think that’s the only reason. There’s also fear, an unwillingness to experiment, and a tendency to copy what everyone else is doing.

I’ve read plenty of great research showing that increased transparency improves user experience and the bottom line. Unfortunately, most of that research is hidden behind academic paywalls. If you can track them down, I recommend the following Recommender System (RecSys) articles:

A quick hit from each:

  • “Users perceived that natural language explanations are more trustworthy, contain a more appropriate amount of information, and offer a better user experience.”
  • “Most online Recommender Systems act like black boxes, not offering the user any insight into the system logic or justification for the recommendations.”
  • “Find a user experience that balances the global predictive power of machine learning and the edge-cases that can disassemble the value users receive.”

You can also check out Evaluating the Effectiveness of Explanations for Recommender Systems (Tintarev and Masthoff) for the seven possible aims of explanatory information: Transparency, Scrutability, Trust, Effectiveness, Persuasiveness, Efficiency, and Satisfaction.

Pay attention to that {noun} behind the {noun}

Future improvements to recommendations will require going beyond the sentence. What does that mean? It means not doing this:

Hello, {firstname}. It looks like you want to learn how to combine {transparency} and {content design} to improve the {UX} of your personalized recommendations.

Known as content formulas, writing with variables, or text strings, these are sentences built so that one or more data points can be slotted in. They’re the most common way to scale recommendations. Sara Wachter-Boettcher does a great job of pointing out the shortcomings of text strings in her nifty book Technically Wrong.

Beep beep! Text strings are imperfect.

The Airbnb article I praised earlier is about content formulas. Their approach works because they plug numbers into a sentence. But as I’ve experienced firsthand, content formulas break fairly quickly — especially once they’re translated. The rare content formulas that don’t break easily tend to be bland or robotic.
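To make that brittleness concrete, here’s a minimal sketch of a content formula breaking. The sentence, slot names, and locale details are hypothetical, not drawn from Airbnb or any real product:

```typescript
// A hypothetical content formula: one English sentence with two slots.
const formula = (count: number, city: string) =>
  `Because you viewed ${count} homes in ${city}, here are similar stays.`;

console.log(formula(3, "Lisbon"));
// -> "Because you viewed 3 homes in Lisbon, here are similar stays."

console.log(formula(1, "Lisbon"));
// -> "Because you viewed 1 homes in Lisbon, here are similar stays."
// Already broken in English: the plural lives in the fixed text, not in the slot.

// Translation makes it worse. Word order, grammatical gender, and plural rules
// vary by locale, so the sentence around the slots can't simply be swapped out.
// Formulas either multiply into per-locale variants (for example via ICU
// MessageFormat plural rules) or get flattened into the bland, robotic phrasing
// described above.
```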

While convenient and compact, content formulas aren’t always the best way to deliver personalized recommendations. They persist because a complete sentence is the easiest way to leverage rhetoric and persuasion. But there are other options. In this image, I’ve taken an Airbnb host recommendation and broken out the data points:

The box on the right is my remix of the original recommendation.
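Since the image isn’t reproduced here, here’s a rough sketch of the idea behind the remix, with hypothetical field names and values. Instead of interpolating the data into one persuasive sentence, each piece of evidence becomes a discrete, labeled data point:

```typescript
// Content formula: all the evidence crammed into one persuasive sentence.
const asSentence =
  "Hosts in Lisbon who respond within an hour get 2x more bookings.";

// Remix: the same evidence broken out into discrete, labeled data points.
// Field names and values are hypothetical, not Airbnb's actual data model.
const asDataPoints = {
  recommendation: "Respond to requests within an hour",
  evidence: [
    { label: "Your market", value: "Lisbon" },
    { label: "Booking lift", value: "2x" },
    { label: "Compared to", value: "Hosts who respond in a day or more" },
  ],
};

console.log(asSentence);
console.log(
  asDataPoints.evidence.map((e) => `${e.label}: ${e.value}`).join(" · ")
);
// -> "Your market: Lisbon · Booking lift: 2x · Compared to: Hosts who respond in a day or more"
```

The design can then render those points as a chart, a set of labels, or a short sentence, rather than being locked into a single string.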

That might seem like a minor difference, but it represents a big shift in how we treat content and design for personalized recommendations. Let the experimentation begin!

Pay attention to that Josh behind the curtain

I started this article by giving Gmail a tough time, but I’ll end by giving some hugs to Josh Lovejoy, a former Staff Interaction Designer at Google. In his January 2018 article The UX of AI, Lovejoy talks about trust, building for actual human needs, and (my favourite) design principles. As Intercom’s Emmet Connolly put it, “Design principles are a list of strongly-held opinions that an entire team agrees on. They force clarity and reduce ambiguity.”

Here are Lovejoy’s three principles (he calls them “truths”) gently edited and condensed:

  • Machine learning won’t figure out what problems to solve.
  • If the goals of an AI system are opaque, user trust will be affected.
  • Every facet of machine learning is fuelled by human judgement, so it must be multi-disciplinary.

I hope to see more articles like Lovejoy’s in 2019. It can be a struggle to create good user experiences in far less complicated circumstances. The only way to solve for the challenges of design and AI is to be open and honest about successes and failures.

* * *

This article is based on a talk I gave in March 2018 at the IA Summit.
