Elon Musk says he wants to make Twitter’s algorithm transparent in an attempt to fix this broken social network. Discrimination, hatred, fake news, manipulation, conspiracy theories: the obstacles to a healthy alternative media landscape are indeed numerous. And the word “transparency”, taken out of context, can mean both a great deal and very little at once…
Unpacking the billionaire’s ambitions scientifically is necessary to gauge their relevance and effectiveness. The overwhelming majority of us want a new Twitter that is pleasant to use, but transparency that is misapplied can be counterproductive, because it must first be properly defined.
Running an application like Twitter relies not on one but on several algorithms. This set includes the user categorization algorithm, which classifies users according to their behavior on the platform; the content recommendation algorithm, which suggests posts to view or people to connect with; and the algorithm that detects inappropriate content.
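To make the distinction concrete, here is a deliberately naive Python sketch of the three families of algorithms mentioned above. The names and logic are invented for illustration only and bear no relation to Twitter’s actual code.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    text: str
    likes: int

def categorize_user(liked_topics: dict[str, int]) -> str:
    """User categorization: label a user by their dominant interest.
    (Toy model: real systems use far richer behavioral signals.)"""
    return max(liked_topics, key=liked_topics.get) if liked_topics else "unknown"

def recommend(tweets: list[Tweet], followed: set[str]) -> list[Tweet]:
    """Content recommendation: boost followed authors, then rank by likes."""
    return sorted(tweets, key=lambda t: (t.author in followed, t.likes),
                  reverse=True)

def is_inappropriate(text: str, banned_words: set[str]) -> bool:
    """Inappropriate-content detection: a naive keyword filter,
    standing in for what is in reality a complex classifier."""
    return any(word in text.lower() for word in banned_words)
```

Each function is an independent component: publishing the source of one says nothing about the others, which is precisely why speaking of “the Twitter algorithm” in the singular is misleading.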
Elon Musk wishes to publish the source code of the Twitter algorithm, that is, the computer programs in which these algorithms (not a single algorithm) have been implemented. But this publication alone may not solve all the problems facing the blue-bird network.
First of all, having the source code does not reveal how the algorithms work or were built, nor the sometimes arbitrary choices made along the way. Nor does it provide information on the tests used to validate these algorithms and assess the risk of technological discrimination. In other words, holding the source code in your hands will not make it possible to detect biases effectively. Broader transparency, covering among other things the data sets used, is therefore necessary.
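A toy audit illustrates why data matters as much as code: even if a moderation system’s flagging logic were published in full, measuring whether it discriminates requires labeled evaluation records. The function below is a hypothetical sketch (not from any real audit toolkit) that computes the false-positive rate of such a system per user group — a computation that is simply impossible without the data itself.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Each record is (group, was_flagged, truly_inappropriate).
    Returns, per group, the share of harmless posts wrongly flagged.
    The flagging code could be public, but this audit needs the records."""
    flagged = defaultdict(int)    # wrongly flagged harmless posts, per group
    harmless = defaultdict(int)   # all harmless posts, per group
    for group, was_flagged, truly_bad in records:
        if not truly_bad:
            harmless[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {g: flagged[g] / harmless[g] for g in harmless}
```

A large gap between groups in the returned rates would be a sign of the very technological discrimination the article warns about — visible in the data, invisible in the source code alone.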
All Twitter users should be told what types of algorithms run on the platform, what they do, and how they use the behavioral data collected about users. The company should also share its algorithmic governance, explaining the good practices it follows for developing and testing these systems.
Finally, users should be given a clearer choice to activate or deactivate the algorithmic editorialization of content. Algorithmic personalization is what makes these tools effective, but combined with a revenue model built solely on attention, it becomes dangerous.
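Such a toggle can be sketched in a few lines. This is an illustrative Python model with invented names, not Twitter’s implementation: a single flag switches the feed between engagement-based ranking and a purely chronological order.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    posted_at: datetime
    score: float  # engagement score, computed elsewhere (hypothetical)

def build_timeline(posts: list[Post], personalized: bool) -> list[Post]:
    """If the user opts in, rank by engagement score;
    if they opt out, fall back to a purely chronological feed."""
    if personalized:
        return sorted(posts, key=lambda p: p.score, reverse=True)
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

The design point is that the opt-out path bypasses the ranking model entirely, rather than merely dampening it, which is what gives the user a genuine choice.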
It should nevertheless be noted that certain algorithms which are difficult to design and do not represent a competitive advantage for Twitter, such as those that detect hateful or pornographic content, should be shared freely with the rest of the scientific and technological community, in the hope of building better ones.
Transparency is a good idea, but not just any transparency, and not applied haphazardly, at the risk of making the very system one wants to repair even more opaque. That would be a shame…