Some very interesting insight here from Joaquin Quiñonero Candela, a director of AI at Facebook, about how AI started to take off within Facebook, the prominence it was given, and the lengths to which it was encouraged to instigate interactions.
We have known for a long time, even before Cambridge Analytica, that private information was leaking out of Facebook (not what this article is about), but this exposure of how AI is used also confirms the lengths to which Facebook would go to "fuel flames" in order to elicit more engagement. The goal really was to provoke as much engagement of any sort as possible, and obvious misinformation often draws more response than agreement with a topic does.
Seriously, this is not what most other social networks do. Most (all?) open-source social networks such as Mastodon, Hubzilla, Pixelfed, etc. have fairly simple algorithms for handling popular versus chronological ordering, managing blocks and bans, and so on, and those algorithms are generally open for inspection. On the whole, open-source social networks are fun places for people to interact and socialise on; they are not intended to manipulate their users. Facebook is just not that, in any shape or form…
But it’s our own choice which social network we join; absolutely no one is forced to join any specific one. You should ask what the purpose or goal of a given social network is, and whether your own goals are the same.
See "He got Facebook hooked on AI. Now he can’t fix its misinformation addiction".
#technology #AI #deletefacebook #misinformation
Three years ago, the company began building "responsible AI." This is the story of how it failed.