In our regular, offline lives, we usually play many different roles and adapt to these roles without much thinking. At work, you show your professional self. When visiting friends, you are the social version of yourself. At home, you and your partner might be perfectly happy reading a book without exchanging many words. Back at your parents' place, some childish old habits might suddenly pop up. Similarly, on the web, we have many online identities - probably far more than we have in our regular lives.

Should YouTube support you in your habit of spending a whole evening watching cat videos, or should it try to convince you to spend your time in a better, possibly more rewarding, manner, such as watching a documentary or learning a language?

In my previous blog post, I explained why the recommendations given to you are not only meant to satisfy you: Amazon hopes you will buy the recommended items, Facebook hopes that you will like the recommended posts and friends enough to spend a lot of time on the platform, advertisers hope that their advertisements are targeted well enough for you to click on them, and apparently political parties hire shady companies to manipulate elections. I argued that more transparency about the stakeholders and their interests would make personalization less creepy, and would bring back the original ambition: to give each individual user what they want, expect or need.

But what is it that we want - or should want? In a very entertaining CHI'18 Extended Abstract, it is argued that this question is not an easy one to answer. If most people are perfectly happy spending the whole evening watching cat videos on Facebook, and continue clicking on these videos, this is what they want, isn't it? Or do they actually need some help to be stimulated - to be nudged - to do something useful, like reading poetry? But wouldn't that be patronizing, and who says that reading poetry is more useful or better than watching cat videos?

To take it a step further: even if we agree that close friends, meaningful work and good physical health are universal constituents of a good life, should recommender systems focus on our 'ideal self', or should they also let us indulge in bad habits that make us feel happy?

Essentially, in this blog post, I explain in simple terms how personalization works, why it can be beneficial and why it is, unfortunately, often considered creepy. Not surprisingly, Facebook plays a major role in this article. What exactly is the free lunch that Facebook serves, and could it be served in a more decent manner?

The original ambition of personalization, as stated back in the 1990s in the classic book Adaptive User Interfaces, was that not only should 'everyone be computer literate', but also that 'computers should be user literate'. In those early days, we humans created 'mentalistic' models that represented our knowledge, interests, needs and goals in a way that could be interpreted by computers, but also by us. Gradually, these models have matured from hand-made and rather simple to statistical models based on large amounts of raw data.
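As a toy illustration (entirely invented, not taken from the book), such a hand-made, human-readable model might look like this:

```python
# A toy, hand-made 'mentalistic' user model: knowledge, interests and goals
# written down explicitly, so that both the program and a person can read it.
# The structure and the rule below are invented purely for illustration.
user_model = {
    "knowledge": {"French": "beginner", "photography": "advanced"},
    "interests": ["documentaries", "cooking"],
    "goals": ["learn French"],
}

def suggest(model: dict) -> str:
    """A simple hand-written adaptation rule based on the user's stated goals."""
    if "learn French" in model["goals"]:
        return "French lessons for beginners, episode 1"
    return "today's most popular videos"

print(suggest(user_model))  # -> French lessons for beginners, episode 1
```

The point is not the code itself, but that every adaptation decision in such a model can be traced back to a rule that a human wrote down and can inspect.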

A classic statistical approach to personalization is collaborative filtering, which still works in a very human-understandable way. In simple terms, collaborative filtering assumes that people who liked the same things in the past (such as books or movies) have similar tastes and will therefore also like the same things in the future. It first identifies the users who are most similar to you, and then recommends items that they like but that you haven't seen (or rated, or bought) yet. Indeed, this is the way Amazon (among others) works, and anyone with experience of these recommendations knows that they are far from perfect.
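As a rough illustration, here is a minimal sketch of user-based collaborative filtering on invented toy ratings; real systems operate on millions of users and items and use more robust similarity and scoring schemes:

```python
# A minimal sketch of user-based collaborative filtering.
# Users, items and ratings are made up purely for illustration.
from math import sqrt

ratings = {
    "alice": {"Dune": 5, "Her": 4, "Up": 1},
    "bob":   {"Dune": 4, "Her": 5, "Coco": 4},
    "carol": {"Up": 5, "Coco": 4, "Her": 1},
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity over the items that both users have rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    norm_a = sqrt(sum(ratings[a][i] ** 2 for i in common))
    norm_b = sqrt(sum(ratings[b][i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user: str, k: int = 1) -> list[str]:
    """Recommend items rated by the k most similar users but unseen by `user`."""
    peers = sorted((u for u in ratings if u != user),
                   key=lambda u: similarity(user, u), reverse=True)[:k]
    scores: dict[str, float] = {}
    for peer in peers:
        weight = similarity(user, peer)
        for item, rating in ratings[peer].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + weight * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # -> ['Coco']: liked by Alice's nearest neighbour, Bob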

Companies like Facebook and Google therefore use a different approach: based on as many observations (or data points) as they can collect (and store and process), their algorithms (which are far more complex and less transparent than good old collaborative filtering) try to predict which search results, friends' posts, page suggestions - and advertisements - will be relevant for us. These observations can be anything, including your user profile, previous search queries, clicks on friends' posts, participation in an online game, online purchases, the likes that you receive and give, and so on. Researchers like Jennifer Golbeck even think that far-fetched proxies such as liking a picture of curly fries are an indicator of how intelligent you are (watch her entertaining TED Talk, it's nine minutes well spent). This data-driven approach arguably works better, but with the consequence that it becomes hard - but not as impossible as many companies would like us to believe - to explain why they think we will like these personalized results.
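To make the contrast concrete, here is a hypothetical sketch of the data-driven approach: a simple logistic regression that predicts engagement from a handful of invented behavioural features. Real systems use vastly more signals and far more complex models, which is exactly what makes their predictions harder to explain:

```python
# A hypothetical sketch of data-driven relevance prediction.
# The feature names and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [clicks on similar items, likes given, likes received, purchases]
X = np.array([
    [12, 30, 5, 2],
    [0,  2,  1, 0],
    [7,  15, 3, 1],
    [1,  1,  0, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = the user engaged with the recommended item

model = LogisticRegression().fit(X, y)

new_user = np.array([[5, 10, 2, 1]])
print(model.predict_proba(new_user)[0, 1])  # estimated probability of engagement
```

In this toy case the model's coefficients can still be inspected, so the prediction can be explained; once you scale up to thousands of features and deep models, that transparency quickly erodes.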

The Dutch version of this article can be found on the site of the Privacy & Identity Lab.

In short:

  • We used a short political quiz to measure the socio-economic bias of Chilean news outlets.
  • The political orientation of the media landscape is subject to change.
  • To mitigate the effects of the ‘filter bubble’, it is not sufficient to address only personalization algorithms; one should also analyze differences in orientation within the media landscape.

As early as 2011, Eli Pariser taught us in his TED Talk “Beware online filter bubbles” that our online lives largely take place within a filter bubble. Facebook automatically selects the items that reach your news feed based on your click behavior, and Google search results are personalized based on, among other things, your current location and your search history. As a result, we mainly encounter information and opinions that match our own life philosophy.

In a similar fashion, traditional newspapers and other news outlets make a selection of the news items to be included. It is common knowledge that the New York Times has a liberal bias and Fox News a conservative bias, and that people usually choose a newspaper that matches their own orientation and interests. By contrast, little is known about political bias in smaller, regional newspapers or in the still-growing number of news portals, such as the Huffington Post, Yahoo News and CBS, but also the Breitbart News Network.

We carried out a study to identify political bias within the media in Chile and obtained some surprising results that are relevant for the media landscape in general and for our personal, personalized news consumption.

Starting today, I will post updates on my research work on my website. My blog posts will probably vary from longer or shorter summaries of recently accepted papers to rambling about my research field, which concerns the fine balance between the benefits of personalization and the perceived and actual privacy risks that come with it. All blog posts are intended for a general, interested audience.

I am realistic enough to know that most blogs start off enthusiastically and then slowly bleed to death. Well, I am in the first phase, so do expect some more new posts in the near future.

Eelco Herder


Privacy Engineering, User Modeling, Personalization, Recommendation, Web Usage Mining, Data Analysis and Visualization, Usability, Evaluation

Dr. Ir. Eelco Herder
Radboud Universiteit Nijmegen
Institute for Computing and Information Sciences
Toernooiveld 212
Mercator 1 - Room 03.01
6525 EC Nijmegen
The Netherlands

Email:
Phone: +31 24 36 52077
