Add a mustache to your profile picture, drop your pronouns, and change your name to something masculine, and your LinkedIn posts will become 200+% more visible. I have seen this claim trending on LinkedIn recently and find it concerning. Not because it exposes gender bias in LinkedIn’s recommendation algorithm, but because the claims are not backed by critical thinking and well-constructed experiments.
Bias in algorithms is a real thing, but not all algorithms are the same. Content recommendation doesn’t have the same biases as large language models. Cal Newport has a great podcast describing the details of how recommendation works (Deep Questions Ep. 372: Decoding TikTok’s Algorithm). I won’t link to it on LinkedIn, because that would severely limit the reach of this post. That’s just how it is. Content recommendation biases toward increased engagement on the platform. You don’t have to like that; I don’t. But I think it is naive to assume these walled gardens will open up. You can join me on Bluesky (@petedempsey.bsky.social) if you want to see a better alternative.
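To make the engagement-bias point concrete, here is a toy scoring function. The weights and the external-link penalty are invented for illustration; LinkedIn’s actual ranking model is not public, so treat this only as a sketch of the general incentive structure:

```python
# Toy model of engagement-optimized feed ranking -- purely illustrative.
# All weights and the external-link penalty are hypothetical assumptions,
# not LinkedIn's actual (non-public) algorithm.

def engagement_score(reactions: int, comments: int, shares: int,
                     dwell_seconds: float, has_external_link: bool) -> float:
    # Comments and shares usually signal deeper engagement than reactions,
    # so they get higher (hypothetical) weights.
    score = reactions * 1.0 + comments * 4.0 + shares * 8.0
    score += dwell_seconds * 0.1
    # Posts that send users off-platform reduce time on site, so an
    # engagement-optimizing recommender would plausibly downrank them.
    if has_external_link:
        score *= 0.3
    return score

# A link post with stronger raw engagement can still rank below a
# native post with weaker engagement.
native = engagement_score(reactions=50, comments=10, shares=2,
                          dwell_seconds=300, has_external_link=False)
link_post = engagement_score(reactions=80, comments=12, shares=5,
                             dwell_seconds=300, has_external_link=True)
```

Under these made-up numbers, the native post outranks the link post despite getting fewer reactions, comments, and shares, which is the pattern anyone who posts external links regularly will recognize.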
The data shared to back up the claims about LinkedIn’s gender bias shows several posts linking to external content about bias in AI, followed by a post on the platform discussing specific gender bias in the recommendation algorithm. I have personal experience posting links to external content on AI bias. Those posts don’t generate a lot of engagement. I wish it were otherwise, but that’s just how it is. Some of that is the algorithm; some is what people are interested in reading.
Recommendation algorithms often play to our emotions and fears. That’s why we have problems with extreme content and division on social media. Gender biases such as objectification and rigid gender identity trigger our emotions and thus will show up in recommendation patterns. The specific name or mustachioedness of the poster doesn’t influence engagement all that much. If it did, marketers like Gary V would be leveraging the heck out of it.
The posts in the experiment data that are supposedly boosted because of the mustache, etc. are hitting on topics that trigger engagement with the LinkedIn audience: frustration with the algorithm, general fears about AI and algorithms, and an allegation of blatant discrimination. That content is boosted by its emotional nature and then magnified by the viral network effects of engagement.
I have a problem with these posts because they are not backed by valid data. If you know how recommendation algorithms work, you know that the claims of causation are unlikely. I would be willing to accept them if the data proved it, but it doesn’t. At best, this is a distraction from the real bias-related harms out there: hiring, loan approvals, medical insurance, criminal justice, etc. At worst, it is dangerous. It is crying wolf when there is no wolf. It falls into the same techno-mysticism that supports AGI, the end of jobs, colonizing space, etc. The fight for humanity in tech needs to be grounded in science and solid research methodology.
