Recommender systems are increasingly influential, controlling what users see and, ultimately, consume across digital marketplaces. These systems, like invisible gatekeepers, pick and order content for TikTok’s “For You Page,” Amazon’s “You Might Also Like” section, Google’s search engine results, and many others. A new approach to recommender algorithms could help digital platforms filter content more carefully.

On YouTube, which has 2.2 billion users (more than a quarter of the global population), the recommender system reportedly generates 200 million views a day from its homepage. Similarly, the number of Amazon Prime subscribers in the U.S. (147 million) is nearly half of the U.S. population. Amazon’s recommender system makes suggestions to these users from more than 75 million products available on its e-commerce platform. With so many users and content options, recommender systems have the power to shape public opinion or affect the market share of companies selling consumer goods — among other possible outcomes.

The importance and visibility of such recommender systems have put the spotlight on the ethical aspects of recommendations and have landed some content platforms in hot water. Netflix was criticized in 2016 for jumping to conclusions about race and content preferences with its recommendations. Facebook has been under intense scrutiny since the 2016 election for its role in propagating misinformation. Google has been fined billions of dollars in Europe for manipulating search results. And over the last couple of years, momentum has grown behind a movement demanding greater transparency into the workings of the algorithms behind YouTube and other major platforms.

Regardless of a company’s willingness to pull back the curtain on its algorithms, some of the noted concerns could be addressed via extra computation. Layers of analysis could weigh ethical considerations against profit, engagement and other business metrics, producing nuanced recommendations that meet ethical standards.

Quick to Judge

Unfortunately, there is often not enough time to perform such extra computation in live recommendation settings. Research has shown that people become frustrated with delays on digital platforms after just 100 milliseconds. Once the time needed to transmit data over the network is accounted for, this leaves only about 50 milliseconds for algorithmic computation — if content platforms are to deliver the lightning-fast page loads that users have learned to expect.

The conventional algorithmic approach for a recommender system consists of two stages. First, it filters a content library of potentially millions of items down to a manageable number of candidates — on the order of hundreds or thousands. This filtering is essentially a prediction of what will appeal to a user based on demographic information and past behavior. The second stage involves re-ranking the candidate items to balance a primary business objective, such as encouraging more views or purchases, against other priorities such as the freshness and diversity of content, as well as the fairness of recommendations.
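To make the two stages concrete, here is a minimal sketch in Python. The random scores, weights, and set sizes are placeholders for illustration, not the actual models or parameters used in the research.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 1_000_000  # full content library
n_candidates = 500   # manageable candidate set

# Stage 1: cheap relevance scores (stand-ins for a retrieval model
# trained on demographics and past behavior) filter the library.
relevance = rng.random(n_items)
candidates = np.argpartition(relevance, -n_candidates)[-n_candidates:]

# Stage 2: re-rank the candidates by blending the primary objective
# with secondary ones such as freshness and diversity.
freshness = rng.random(n_items)
diversity = rng.random(n_items)
weights = {"relevance": 1.0, "freshness": 0.3, "diversity": 0.2}

blended = (weights["relevance"] * relevance[candidates]
           + weights["freshness"] * freshness[candidates]
           + weights["diversity"] * diversity[candidates])
ranking = candidates[np.argsort(-blended)]  # best candidate first
```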

Determining the ranking of content that maximizes the primary business objective yet also satisfies ethical content considerations — such as balanced perspectives, equal representation, and thematic diversity — is central to the success or failure of digital platforms. However, performing such a ranking quickly is a significant undertaking. Conventionally, a mathematical optimization procedure weights these different objectives while also accounting for a specific user’s tastes. The trouble is that this can take minutes, not milliseconds. But a new algorithmic approach promises a workaround.
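As a toy illustration of what such an optimization might look like, the program below selects the top k items that maximize predicted user satisfaction while guaranteeing a minimum number of slots for a flagged category, using an off-the-shelf linear-programming solver. The formulation is an assumption made for exposition; the exact procedure in the research may differ.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, min_flagged = 200, 10, 3

utility = rng.random(n)        # predicted per-user satisfaction
flagged = rng.random(n) < 0.2  # items satisfying an ethical criterion

# Decision variables x_i in [0, 1]; linprog minimizes, so negate utility.
result = linprog(
    c=-utility,
    A_eq=[np.ones(n)], b_eq=[k],    # exactly k recommendation slots
    A_ub=[-flagged.astype(float)],  # at least min_flagged flagged items
    b_ub=[-min_flagged],
    bounds=[(0, 1)] * n,
    method="highs",
)
chosen = np.argsort(-result.x)[:k]  # recover the selected items
```

Scaled up to many interacting objectives, thousands of candidates, and per-user personalization, repeatedly solving programs like this is what makes the conventional approach slow.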

A Very Good Prediction

It turns out that predicting the optimal weights on different business objectives with a statistical model is nearly as effective as determining them exactly through optimization — and much faster. In testing, my colleagues and I found that when a recommender system built a personalized ranking for a user from such predicted weights, the result was nearly identical in quality to the ranking generated by running the full optimization, at a small fraction of the computation time.
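A rough sketch of the idea, under the assumption that the offline optimizer produces, for each user, a vector of objective weights that a standard regression model can learn to predict from user features (the model and feature choices here are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Offline: the slow optimizer is run once per training user, yielding
# the optimal weights on each objective (e.g., relevance, freshness,
# fairness) for that user.
user_features = rng.random((5_000, 16))
optimal_weights = rng.random((5_000, 3))  # collected from the optimizer

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(user_features, optimal_weights)

# Online: predicting the weights takes a fraction of a millisecond,
# leaving most of the ~50 ms budget for scoring and sorting.
new_user = rng.random((1, 16))
w = model.predict(new_user)[0]

item_scores = rng.random((3, 1_000))      # one row of scores per objective
ranking = np.argsort(-(w @ item_scores))  # weighted blend, best first
```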

The research is presented in a new brief for industry practitioners by the Bernstein Center for Leadership and Ethics at Columbia Business School. The brief explains more about the ways in which we tested the approach.

We tasked recommender algorithms with generating a personalized ranking of 1,000 movies. The system had to maximize user satisfaction while also complying with a set of constraints around genre and recency of release, as well as ethical considerations such as inclusion of a gay character, the mention of race, and freedom of speech issues.
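The snippet below shows one hypothetical way such constraints could be encoded and checked; the attribute names mirror the article, but the data and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n_movies, top_k = 1_000, 20

# Binary attributes per movie (randomly generated stand-ins).
attrs = {
    "recent_release": rng.random(n_movies) < 0.25,
    "gay_character":  rng.random(n_movies) < 0.10,
    "mentions_race":  rng.random(n_movies) < 0.15,
}

# Each constraint demands a minimum count of flagged movies in the top k.
constraints = {"recent_release": 4, "gay_character": 2, "mentions_race": 2}

def satisfies(ranking):
    """Return True if the head of the ranking meets every constraint."""
    top = ranking[:top_k]
    return all(attrs[a][top].sum() >= m for a, m in constraints.items())
```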

Both the prediction method and the conventional optimization approach generated recommendations that maximized user satisfaction and complied with the constraints almost perfectly. But while the conventional approach was far slower than the 50-millisecond threshold, the new method completed the ranking well within it.

With public and political pressure mounting, content platforms can now adopt this predictive approach to free up time for their algorithms to address ethical considerations proactively, before regulators or legislation force their hand.

Notably, the approach isn’t limited to content recommender systems. It could be used, for example, to assign time-sensitive tasks to a swarm of robots in a warehouse setting, or to determine rankings and matchings in other large-scale problems where speed matters. (The code used in the research is available as open source on GitHub.)


Yegor Tkachenko is a Ph.D. candidate in the marketing division at Columbia Business School and a recipient of a research grant from the Sanford C. Bernstein and Co. Center for Leadership and Ethics. The research this article is based on was co-authored with Kamel Jedidi and Wassim Dhaouadi.

This article was originally published on VentureBeat.