What I talk about when I talk about stupid algorithms
By hiding channels from subscribers who ‘disliked’ a video, YouTube undermines a user’s rights, does the content creator a disservice, and sets up a strategy that only serves its own interests
In 15 years, a company called Soul Connex will develop an algorithmic test that can determine, with 100% accuracy, who your soulmate is. At first, still attached to the old ideal of romantic love, people will reject the proposition that an algorithm can unequivocally select the love of your life. Over time, however, they’ll realize that an increasing number of couples are reaching happily-ever-after faster, and will embrace this ultimate dating technology.
In the mind of William Bridges (one of the writers of Black Mirror), this is the central thread of Soulmates, a TV show that debuted in October on AMC and has already been renewed for a second season. “So now we have death, taxes, and love. Certainty sucks,” one of the characters sums up in the first episode.
In the mind of historian and best-selling author Yuval Noah Harari, this scenario is no fiction, as we can tell from this excerpt from Homo Deus: A Brief History of Tomorrow:
- “If we give Google and its competitors access to our biometric devices, to our DNA scans and to our medical records, we will get an all-knowing medical health service (…). Yet with such a database at its disposal, Google could do far more. (…) The self-deceptions and self-delusions that trap people in bad relationships, wrong careers and harmful habits will not fool Google. It will not make decisions on the basis of cooked-up stories (…). ‘Listen, Google,’ I will say, ‘both John and Paul are courting me. I like both of them, but in a different way. Given everything you know, what do you advise me to do?’”
The sagacity of Homo Deus lies in not selling precast futures — or setting dates, for that matter. When it addresses the power of algorithms driven by advanced artificial intelligence, for example, it blends projections like the one above with other examples that have already come true. Harari deftly shortens the path between the future and the recent past, making his whole argument much more plausible.
For this reason, it would be tempting to say that the “predictive” algorithms of Harari’s scenario are closer to becoming a reality than they seem, and that any day now Google (or some real version of Soul Connex) will connect the two dots of your love life. But that's not true.
Algorithms, especially those cooked up inside Google’s facilities, have commanded a mix of awe and fascination ever since they became the basis of the company's success as a search engine. This model — complex and extremely efficient — is the ultimate tale of how a really smart AI managed to build, “by itself”, a $1 trillion business.
Understandably, there is an extensive literature, both academic and casual, on how this algorithmic universe broadened engagement on social platforms using cryptic “strategies”. One of the metrics commonly summoned to discuss the effectiveness of recommendation systems, for example, came from a CES panel in 2018: YouTube’s recommendations drive 70% of what we watch. And we watch a lot of videos, more than 1 billion hours each day.
These are numbers that help foster the idea that “Google knows you better than you know yourself” — that concept, half fanciful, half true, that human thought is so linear it can be easily forged by algorithms. And maybe it can — say, in 70% of cases?
But there may be a lack of literature that explains the stupid part. The other 30%, I guess. Because while YouTube is a solidified engagement machine — there's no doubt about that — even in 2021, some of these algorithms seem more stupid than ever. I mean, either that — some algorithmic systems forgot to take their vitamins this morning — or Google came to the conclusion that it can do the trick with a more rudimentary structure. In short, either my YouTube is broken or the company messed with its algorithms on the sly (again).
Every time a channel I’m subscribed to uploads a new video, that video appears at the top of my YouTube homepage. Not only on my page, of course; this is the default for any user who’s logged in. That’s YouTube 101: the channels a user is subscribed to are usually the first ones to appear on their homepage. For some time, however, YouTube has been “hiding” from my homepage any subscribed channel whose previous video I hit the “dislike” button on. Confusing, and yet this is how it’s happening.
Let’s say I’m subscribed to Girlfriend Reviews (I am, these guys are geniuses). I watch them every week, so YouTube’s algorithms happily feature their videos on my homepage because they “know” I like them. Let’s say that, for some reason, I hit “dislike” on their most recent video (I never did that, they’re really good, but let’s say I did). Do you know what happens? Next time Girlfriend Reviews uploads new material, YouTube will “hide” it from me (even though I’m one of the channel’s loyal subscribers).
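To make the rule I’m describing concrete, here is a toy sketch of it in Python. To be clear: none of these function names or data structures come from YouTube’s actual systems (which nobody outside Google can see); this is just the behavior as I’ve observed it, reduced to a few lines.

```python
# Hypothetical, simplified sketch of the behavior described above.
# Invented names; this is NOT YouTube's real code.

def build_homepage(subscriptions, last_reaction):
    """Return the subscribed channels whose new uploads get featured.

    subscriptions: list of channel names the user follows.
    last_reaction: dict mapping channel -> reaction to its most
                   recent video ("like", "dislike", or None).
    """
    featured = []
    for channel in subscriptions:
        # The rule the article describes: a single "dislike" on the
        # previous video is enough to hide the channel's next upload.
        if last_reaction.get(channel) != "dislike":
            featured.append(channel)
    return featured

subs = ["Girlfriend Reviews", "Gerald Undone", "Some Cooking Channel"]
reactions = {"Girlfriend Reviews": "dislike", "Gerald Undone": "like"}

print(build_homepage(subs, reactions))
# → ['Gerald Undone', 'Some Cooking Channel']
```

Notice the asymmetry: a channel with no reaction at all stays featured, while one thumbs-down from a loyal subscriber makes the channel vanish.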
In a supposed attempt to improve my YouTube experience, the platform decides to snap its algorithmic fingers and make these content creators disappear from my sight, even when these are channels that I’ve been subscribed to for years. The sole reason: I “disliked” one or two of their videos in the recent past. It is as if I were a Lakers fan who had publicly expressed irritation with LeBron during the last game, and as a result NBA League Pass decided to stop announcing upcoming Lakers games to me.
This “malfunction” happens because those famous “like” and “dislike” buttons work very differently for each of the three elements that make up the YouTube ecosystem: viewer, content creator, and YouTube itself.
For most viewers, those buttons exist to signal whether they liked that video or not. That’s how YouTube’s own retrospective — YouTube Rewind 2018 — became one of the most incredible misfires in the history of the company and ended up as the most disliked video of all time with over 18 million “dislikes.” The message was loud, clear and necessary: man, this video sucks.
The main point here is the direct link between viewer and creator — the message (like or dislike) should be for YouTubers, not for YouTube; there’s a huge difference. Likewise, these messages should work as a powerful tool for the channel as well: too many “dislikes” mean that viewers didn’t fancy that specific video, and you, as the channel owner, should check what went wrong. There is absolutely nothing new about it; in fact, this is the most basic assessment of any internet content since real-time metrics were created. Twenty-five years ago, media audiences and preferences were gauged through guesswork and research (mostly guesswork); today, anyone can monitor how consumers behave as it happens, every second. For some people, it’s an essential tool; for others, a burden in disguise.
The third element of this equation, YouTube itself, thinks about the “like” and “dislike” buttons in its own peculiar way. It doesn’t care whether the user is trying to express an opinion or whether the YouTuber will get the message. Seriously, the company couldn’t care less about those things. It created those buttons to teach the algorithms how to behave and, in theory, how to predict what viewers want to watch.
The real problem is much more serious than having to check my YouTube “subscriptions” tab once in a while because my subscribed channels disappeared from my homepage. The actual issue is that, by hiding a channel from me, YouTube is also hiding it from every other user in the same situation (those who hit “dislike” on that channel’s previous video). In effect, the platform is not only “hiding” any dissatisfaction with that YouTuber; it is also creating a protective bubble that prevents the channel owner from receiving any bad “review” in the future. In a universe that preaches an overprotective culture, the F-word is “feedback”.
And do you know what the craziest part is? Apparently, that is exactly how YouTubers want it to be.
Coincidentally, a few days after I started to write this article, I came across a tweet from Gerald Undone, who runs a camera gear channel on YouTube. He wasn’t happy with one of his viewers who wrote “I had to dislike the video because it was too light on the specs and applications.” The answer from Gerald: “No you didn’t. You could have just moved on with your day.”
Truth be told, I wasn't familiar with Gerald — not even as a casual viewer. But that’s the exact type of behaviour that makes zero sense to me. Let’s recap: the subscriber did not like the video, hence he hit the dreaded “dislike” button, but he was kind enough to explain why he did that (I would never bother — and I have smashed-that-dislike-button many, many times in my life). Gerald — so it seems — wasn’t at all interested in the viewer’s opinion. If you don’t like it, please, do not press any buttons and “move on with your day.” Given all the support Gerald’s tweet got from fans and other YouTubers, it seems that a lot of his peers feel the same way.
If this was a joke, I didn’t get it.
The curious thing is, of those three “elements” I mentioned earlier, only one truly benefits from this scheme: YouTube, evidently.
Imagine a YouTube user named Susan. She’s subscribed to 10 channels she really likes. Every single Monday, all 10 channels upload new videos; it’s already part of her routine. In the past two Mondays, however, she didn’t like the videos from three of those 10 channels — ergo, she hit the “dislike” button. As a consequence, those channels disappear from Susan’s YouTube homepage the next Monday. So, although the three channels have actually released a new video, as they do every Monday, she has no idea. Thereafter, a slightly improved version of that rudimentary algorithm from the first iteration starts looking for similar videos to fill Susan’s attention. Some do the trick; others, not so much. But it doesn’t matter: this is how YouTube increases its watch time and, as a result, its ad revenue. Susan and her opinions never had a chance.
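Susan’s Monday can be sketched the same way. Again, every name and number below is invented for illustration; the point is only the mechanic, where disliked channels are silently replaced by “similar” recommendations, so the total amount of monetizable attention never shrinks.

```python
# Hypothetical sketch of the Susan scenario. All names and numbers
# are invented; this is NOT how YouTube's real systems are written.

SUBSCRIPTIONS = [f"channel_{i}" for i in range(1, 11)]   # her 10 channels
DISLIKED_LAST_WEEK = {"channel_2", "channel_5", "channel_9"}

# Pool of algorithmically "similar" videos, ranked by predicted watch time.
RECOMMENDATION_POOL = ["rec_A", "rec_B", "rec_C", "rec_D"]

def monday_feed():
    # Drop every channel whose last video Susan disliked.
    feed = [c for c in SUBSCRIPTIONS if c not in DISLIKED_LAST_WEEK]
    # Backfill the hidden slots with recommendations instead of the
    # new uploads Susan actually subscribed for.
    missing = len(SUBSCRIPTIONS) - len(feed)
    feed.extend(RECOMMENDATION_POOL[:missing])
    return feed

feed = monday_feed()
print(feed)           # 7 subscribed channels + 3 substitutes
print(len(feed))      # → 10: the same amount of attention to monetize
```

From Susan’s side the homepage looks as full as ever, which is exactly why she never notices that three of her channels are gone.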
Whatever nitpicks I may have with YouTube’s algorithms, I should get over them: it’s a system that seems to be working pretty well — at least from a financial point of view. Last year, for the first time since it began selling ads 15 years ago, YouTube disclosed how much ad revenue it makes. From 2017 to 2019, it made $34.46 billion, and it will probably wrap up the 2020 balance sheet with something around $18 billion to $20 billion.
This consistent growth over the past years certainly helps to mask the metaphorical “dislikes” that YouTube bears in other branches of its business. Just to name the first three that come to mind:
- It still struggles to sell YouTube Premium outside the U.S. (not to mention YouTube Red’s fiasco).
- The production of its original content (YouTube Originals) is irrelevant in the face of competition.
- The policy of dealing with misinformation on the platform remains unsettling, at the very least.
No wonder that obfuscating “dislikes” is one of the oldest corporate “technologies,” and if you can do that with staggering sums of money, even better.
Given that no mortal has access to YouTube’s metrics (and my claim has no statistical backing), it is impossible to know what percentage of this new algorithmic “normal” is actually true. Or even whether it’s actually new. Naturally, no one is obliged to believe me. YouTubers themselves can run a quick test on their own channels, comparing, say, the end of 2019 to the end of 2020, and they’re all invited to share the results here. Did the number of dislikes drop on your channel? If so, do you believe it was because your videos improved in quality, or for some other algorithmic reason? “Please, drop a comment below.”
This strategy that aims to erase “dislikes” from the face of the Earth may even be the future of the platform, who knows? YouTube has no real competition, so it can dictate whatever terms it wants with little or no consequence. In less than a year, we may have, for example, a YouTube that eliminates the “thumbs down” icon for good; after all, who needs all this negativity in the workplace, right?
It’s the kind of practice that reminds me of Philip K. Dick’s story “Autofac”, published in 1955 and adapted in 2018 as an episode of Amazon’s Electric Dreams (for those who want to avoid spoilers, I recommend you stop reading here). The main plot is the same in both the original story and the TV show: on an Earth devastated by an apocalyptic war, Autofac is a fully automated factory that keeps manufacturing and shipping products, while a group of humans tries to shut it down.
Both endings, while very different, are brilliant, and I’ll stick to the TV show’s version to finish my argument. When the group of humans finally manages to break into the factory and start the process of destroying it, they are brutally attacked by Autofac’s defense forces. During the battle, they’re defeated and mortally wounded, which makes them realize that they’re not human, but androids. They’re just one of the many products the factory manufactures. In the algorithmic logic of a fully automated mega-factory, since there were no more people on Earth to consume its products, it decided to build its own “humans.”
In building its algorithmic bubble, YouTube does the same thing as Autofac: it wants to mold “humans” who consume its videos in a way that suits the company’s objectives. Ironically, even in the acid criticism of Philip K. Dick’s story, the “humans” produced by the automated factory are capable of “disliking” it, contrary to the universe idealized by YouTube, in which you must either hit that like button or “move on with your day.”