The media are often accused of being biased. Particularly regarding the COVID-19 pandemic - the lockdowns, restrictions and vaccination programmes - public (broadcasting) media are frequently labeled left-wing, politically correct or even state-controlled. Regular newspapers are also often accused of leaning towards one end of the political spectrum - which actually is one of the characteristics of a healthy media landscape (as we discussed in our exploration of bias in the Chilean media).
Fortunately, people are free to choose between, for example, CNN and Fox News - or to watch both. Arguably, following the constructivist principle, there is no single, fully objective truth, and even the choice of which news events to report and which to omit may already be considered a form of bias. In this study, we investigate the perceived bias of some major Dutch news outlets: are news facts interpreted differently depending on where they come from?
Many of us try to reduce our ecological footprint or to buy cruelty-free products. Still, our good intentions often evaporate once we see the price difference between organic products and the 'regular' cheap alternatives. The increasing popularity of online supermarkets may provide ways to entice customers to exchange their cheap mass-produced sausage for a more sustainable, organic or even plant-based alternative. In this blog post, we investigate some possible ways.
Do you know how much time you spend on social media? Do you still remember your posts from ten years ago - and would seeing such a post perhaps bring back some dear memories? What topics do you post most about and how are these related?
If you are a Facebook user like I am, you might post something on Facebook about once or twice a week, with predictable peaks during holiday seasons, Christmas, conferences and other periods in which you travel and see family, friends or colleagues. There are probably also some obvious recurring events, such as birthdays - and particular themes that interest you; in my case, these are vegetarian cooking, dogs and user modeling.
To find out which activities in the (recent and more distant) past I found worthwhile to post on Facebook, I can scroll through my timeline. However, scrolling through that virtually endless timeline would be very time-consuming. Alternatively, Facebook offers me the possibility to download all my data (posts, comments, photos) in HTML or JSON format. However, if you have tried that before, you know that these files are almost as useless as simply scrolling through the timeline - unless you process them programmatically.
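As a quick illustration, a few lines of Python are enough to turn the JSON export into a posting-frequency overview. This is only a minimal sketch: the file name `your_posts_1.json` and the Unix `timestamp` field are assumptions based on a typical export, and the exact layout of Facebook's downloads has changed over the years, so you may need to adapt the path and keys.

```python
import json
from collections import Counter
from datetime import datetime, timezone

# Hypothetical file name and structure: adjust to your own download.
with open("your_posts_1.json", encoding="utf-8") as f:
    posts = json.load(f)  # assumed to be a list of post objects

# Count posts per month, based on the assumed Unix 'timestamp' field.
posts_per_month = Counter(
    datetime.fromtimestamp(post["timestamp"], tz=timezone.utc).strftime("%Y-%m")
    for post in posts
    if "timestamp" in post
)

for month, count in sorted(posts_per_month.items()):
    print(month, count)
```

From there, it is a small step to plot the peaks around holidays and birthdays mentioned above.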
When was the last time you saw an online advertisement that you found creepy? And do you remember what exactly caused this feeling of creepiness? Many of us have experienced feelings of mistrust while surfing the web or visiting Facebook, and this mistrust may be triggered by a wide variety of causes and suspicions. In this post, we analyze and discuss a number of cases, based on interviews with twelve Facebook users. Unsurprisingly, users do not interact with advertisements that they consider creepy, but they also often ignore non-creepy advertisements. Ad explanations should be credible, and perhaps advertisers should try to throw in some humor.
Platforms like YouTube, Facebook and Twitter provide you with personalized feeds and recommendations to make it easier for you to discover content that you like. In a previous blog post, I argued that these recommendations might not always serve you well: arguably, YouTube's aim is not to provide you with that one educational video that will change your life, but to stimulate you to keep watching more videos. This goal is far more easily reached with funny cat videos that entertain you, but that are not intellectually challenging. Whether this is a good or a bad thing depends on your state of mind and ambitions: if you want to relax after a long working day, cat videos might be just what you need, but if you actually aim to learn something, this focus on entertainment is counterproductive.
However, there is a particular type of recommendation that obviously does not primarily serve the user's goals: personalized advertisements are specifically designed for the benefit of advertisers and for the platform to make money. There are serious concerns regarding the collection of personal data for this purpose (as stated in a popular Forbes article, if you're not paying for it, you become the product), but that is not the focus of this post. In my opinion, there is another growing problem: advertisements are often not recognizable as such, and recommendations for seemingly independent articles actually lead to advertorials that aim to influence you without you being aware of it.
In our regular, offline lives, we usually play many different roles and adapt to these roles without much thinking. At work, you show your professional self. When visiting friends, you are the social version of yourself. At home, you and your partner might be perfectly happy reading a book without exchanging many words. Back at your parents' place, some childish old habits might suddenly pop up. Similarly, on the web, we have many online identities - probably far more than we have in our regular lives.
Should YouTube support you in your habit of spending a whole evening watching cat videos, or should it try to convince you to spend your time in a better, possibly more rewarding, manner, such as watching a documentary or learning a language?
In my previous blog post, I explained why the recommendations given to you are not only meant to satisfy you: Amazon hopes you will buy the recommended items, Facebook hopes that you will like the recommended posts and friends enough to spend a lot of time on the platform, advertisers hope that their advertisements are targeted well enough for you to click on them, and apparently political parties hire shady companies to manipulate elections. I argued that more transparency about the stakeholders and their interests would make personalization less creepy, and bring back its original benefit and ambition: to give each individual user what they want, expect or need.
But what is it that we want - or should want? In a very entertaining CHI'18 Extended Abstract, it is argued that this question is not an easy one to answer. If most people are perfectly happy spending the whole evening watching cat videos on Facebook, and continue clicking on these videos, this is what they want, isn't it? Or do they actually need some help to be stimulated - to be nudged - to do something useful, like reading poetry? But wouldn't that be patronizing, and who says that reading poetry is more useful or better than watching cat videos?
To take it a step further, even if we would agree that close friends, meaningful work, and good physical health are universal constituents of a good life, should recommender systems focus on our 'ideal self' or also let us indulge in bad habits that make us feel happy?
In this blog post, I explain in simple words how personalization works, why it can be beneficial and why, unfortunately, it is often considered creepy. Not surprisingly, Facebook plays a major role in this article. What exactly is the free lunch that Facebook serves, and could it be served in a more decent manner?
The original ambition of personalization, as stated back in the 1990s in the classic book Adaptive User Interfaces, is that not only 'everyone should be computer literate', but also that 'computers should be user literate'. In this early stage, we humans created 'mentalistic' models that represented our knowledge, interests, needs and goals in a way that could be interpreted by computers, but also by us. Gradually, these models have matured from simple, hand-crafted representations to statistical models based on large amounts of raw data.
A classic statistical approach to personalization is collaborative filtering, which still works in a very human-understandable way. In simple terms, collaborative filtering assumes that people who like similar things (such as books or movies) have a similar taste and will therefore also like other, similar things. Collaborative filtering first identifies the users that are most similar to you, and then recommends items that they like but that you haven't seen (or rated, or bought) yet. Indeed, this is the way Amazon (among others) works, and anyone who has experience with these recommendations knows that they are far from perfect.
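To make this concrete, here is a minimal sketch of user-based collaborative filtering on a toy rating matrix. The users, items and ratings are invented for illustration; real systems work with millions of users and use more refined similarity and weighting schemes.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items; 0 = not rated).
ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 4, 0],   # a like-minded user
    [1, 0, 1, 5],   # a user with opposite taste
], dtype=float)

def cosine_similarity(a, b):
    """Similarity between two users, computed on the items both rated."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

you = ratings[0]
# Step 1: find the users most similar to you.
similarities = [cosine_similarity(you, other) for other in ratings[1:]]
# Step 2: score items by the ratings of similar users, weighted by similarity.
scores = sum(sim * other for sim, other in zip(similarities, ratings[1:]))
unseen = you == 0
print("Predicted scores for unseen items:", scores[unseen])
```

The two steps in the code mirror the description above: first find your nearest neighbours, then score the items they liked that you have not rated yet.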
Companies like Facebook and Google therefore use a different approach: based on as many observations (or data points) as they can collect (and store and process), their algorithms (which are far more complex and less transparent than good old collaborative filtering) try to predict which search results, friends' posts, page suggestions - and advertisements - will be relevant for us. These observations can be anything, including your user profile, previous search queries, clicks on friends' posts, participation in an online game, online purchases, the likes that you receive and give, and so on. Researchers like Jennifer Golbeck even think that far-fetched proxies such as liking a picture of curly fries are an indicator of how intelligent you are (watch her entertaining TED Talk, it's nine minutes well spent). This data-driven approach arguably works better, but with the consequence that it becomes hard - but not as impossible as many companies would like us to believe - to explain why they think we will like these personalized results.
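The principle behind this data-driven approach can be illustrated with a toy classifier. The behavioral features and labels below are invented, and the real models are proprietary and vastly more complex, but the idea is the same: learn from observed behavior which content a user is likely to engage with.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented behavioral features per (user, ad) pair:
# [clicks on similar posts, likes given, likes received, past purchases]
X = np.array([
    [12, 30, 5, 2],
    [ 0,  2, 1, 0],
    [ 7, 15, 3, 1],
    [ 1,  1, 0, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = user engaged with the ad, 0 = ignored it

# Fit a simple classifier on the observed behavior.
model = LogisticRegression().fit(X, y)

# Predict engagement for a new (user, ad) pair.
new_pair = np.array([[5, 10, 2, 1]])
print("Predicted probability of engagement:",
      model.predict_proba(new_pair)[0, 1])
```

With thousands of such features instead of four, the model's individual decisions quickly become hard to explain - which is exactly the transparency problem mentioned above.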
The Dutch-language version of this article can be found on the website of the Privacy & Identity Lab.
In short:
As early as 2011, Eli Pariser taught us in his TED Talk "Beware online filter bubbles" that our online lives largely take place within a filter bubble. Facebook automatically selects the items that reach your news feed based on your click behavior, and Google search results are personalized based on, among other things, your current location and your search history. As a result, we mainly encounter information and opinions that match our own life philosophy.
In a similar fashion, traditional newspapers and other news outlets make a selection of the news items to be included. It is common knowledge that the New York Times has a liberal bias and Fox News a conservative bias, and that people usually choose a newspaper that matches their own orientation and interests. By contrast, little is known about political bias in smaller, regional newspapers or in the still-growing number of news portals, which include the Huffington Post, Yahoo News and CBS, but also the Breitbart News Network.
We carried out a study to identify political bias within the media in Chile and obtained some surprising results that are relevant for the media landscape in general and for our personal, personalized news consumption.
Starting today, I will post updates on my research work on my website. My blog posts will probably vary from longer or shorter summaries of recently accepted papers to ramblings about my research field, which concerns the fine balance between the benefits of personalization and the perceived and actual risks associated with privacy. All blog posts are intended for a general, interested audience.
I am realistic enough to know that most blogs start off enthusiastically and then slowly bleed to death. Well, I am still in the first phase, so do expect more posts in the near future.
Privacy Engineering, User Modeling, Personalization, Recommendation, Web Usage Mining, Data Analysis and Visualization, Usability, Evaluation
Dr. Ir. Eelco Herder
Associate Professor
Human-Centered Computing Group
Information and Computing Sciences
Utrecht University
Buys Ballotgebouw
Princetonplein 5
3584 CC Utrecht
The Netherlands
Email: e.herder@uu.nl