Meta: more (free) speech and fewer mistakes (it's time to get back to our roots around free expression).
Good morning,

the epochal news will hardly have escaped anyone subscribed to this list, but you never know (and in any case it deserves to be properly archived). The subject of this email alone (the title of Meta's press release and Zuckerberg's opening line) is QUITE SOMETHING™ and could suffice; still, I suggest taking the time to read /carefully/ what Meta has STATED, and to listen /carefully/ (or read the subtitles) to what Zuckerberg has STATED. In 5+5 minutes you get a summary of the last 20 years of censorship via social networks; it is part confession, part whistleblowing ("we were forced to") and part humbly asking for forgiveness.

https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/

«More Speech and Fewer Mistakes»
January 7, 2025
Joel Kaplan, Chief Global Affairs Officer

--8<---------------cut here---------------start------------->8---
[Mark Zuckerberg video message]

<box>
Takeaways
─────────
• Starting in the US, we are ending our third party fact-checking program and moving to a Community Notes model.
• We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations.
• We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.
</box>

Meta's platforms are built to be places where people can express themselves freely. That can be messy. On platforms where billions of people can have a voice, all the good, bad and ugly is on display. But that's free expression.

In his 2019 speech at Georgetown University, Mark Zuckerberg argued that free expression has been the driving force behind progress in American society and around the world and that inhibiting speech, however well-intentioned the reasons for doing so, often reinforces existing institutions and power structures instead of empowering people.
He said: “Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that's dangerous.”

In recent years we've developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.

We want to fix that and return to that fundamental commitment to free expression. Today, we're making some changes to stay true to that ideal.

Ending Third Party Fact Checking Program, Moving to Community Notes
────────────────────────────────────────────────────────────────────

When we launched our independent fact checking program in 2016, we were very clear that we didn't want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations. The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.

That's not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.
Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.

We are now changing this approach. We will end the current third party fact checking program in the United States and instead begin moving to a Community Notes program. We've seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they're seeing – and one that's less prone to bias.

• Once the program is up and running, Meta won't write Community Notes or decide which ones show up. They are written and rated by contributing users.
• Just like they do on X, Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings.
• We intend to be transparent about how different viewpoints inform the Notes displayed in our apps, and are working on the right way to share this information.
• People can sign up today ( [Facebook] , [Instagram] , [Threads] ) for the opportunity to be among the first contributors to this program as it becomes available.

We plan to phase in Community Notes in the US first over the next couple of months, and will continue to improve it over the course of the year.
As we make the transition, we will get rid of our fact-checking control, stop demoting fact checked content and, instead of overlaying full screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it.

Allowing More Speech
─────────────────────

Over time, we have developed complex systems to manage content on our platforms, which are increasingly complicated for us to enforce. As a result, we have been over-enforcing our rules, limiting legitimate political debate and censoring too much trivial content and subjecting too many people to frustrating enforcement actions.

For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting to share numbers on our mistakes on a regular basis so that people can track our progress. As part of that we'll also include more details on the mistakes we make when enforcing our spam policies.

We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We're getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It's not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented.

We're also going to change how we enforce our policies to reduce the kind of mistakes that account for the vast majority of the censorship on our platforms.
Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn't have been. So, we're going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we're going to rely on someone reporting an issue before we take any action.

We also demote too much content that our systems predict might violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates for the rest. And we're going to tune our systems to require a much higher degree of confidence before a piece of content is taken down. As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations.

People are often given the chance to appeal our enforcement decisions and ask us to take another look, but the process can be frustratingly slow and doesn't always get to the right outcome. We've added extra staff to this work and in more cases, we are also now requiring multiple reviewers to reach a determination in order to take something down. We are working on ways to make recovering accounts more straightforward and testing facial recognition technology, and we've started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement actions.

A Personalized Approach to Political Content
─────────────────────────────────────────────

Since 2021, we've made changes to reduce the amount of civic content people see – posts about elections, politics or social issues – based on the feedback our users gave us that they wanted to see less of this content. But this was a pretty blunt approach.
We are going to start phasing this back into Facebook, Instagram and Threads with a more personalized approach so that people who want to see more political content in their feeds can.

We're continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we're going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what's meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.

These changes are an attempt to return to the commitment to free expression that Mark Zuckerberg set out in his Georgetown speech. That means being vigilant about the impact our policies and systems are having on people's ability to make their voices heard, and having the humility to change our approach when we know we're getting things wrong.

[Mark Zuckerberg video message] <https://about.fb.com/wp-content/uploads/2025/01/V2.Single-Take-CS25_MZ_JanAn...>
[Facebook] <https://www.facebook.com/help/contact/1914298425761977>
[Instagram] <https://help.instagram.com/contact/1223551615403090>
[Threads] <https://help.instagram.com/contact/1638078013752611>
--8<---------------cut here---------------end--------------->8---

...mumble mumble. And now let's all Pile On™ the billionaire Mark Zuckerberg too, who didn't waste a single minute and jumped straight onto the winner's bandwagon, turning himself into a bad copy of Elon Musk so as not to lose his privileges...
Regards, 380°

P.S.: of everything they have declared, the thing I personally find deliciously curious is the relocation of the "trust and safety teams" from California to Texas... my goodness, he is /copying/ Musk in absolutely everything, not just the "community notes" :-O

--
380° (Giovanni Biscuolo public alter ego)
«We, incompetent as we are, have no standing whatsoever to suggest anything»
Disinformation flourishes because many people care deeply about injustice but very few check the facts. Ask me about <https://stallmansupport.org>.
On Wed, Jan 08, 2025 at 12:42:04PM +0100, 380° via nexa wrote:
https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
I note that the Italian national newspapers (at least the ones I read) gave considerable prominence to the fact that John Elkann has joined Meta's board of directors, but much less to the fact that Dana White has done the same: https://en.wikipedia.org/wiki/Dana_White#Politics .

Connect the numbered dots yourselves to get from this board change to the policy change reported by 380° (or vice versa).

Ciao
--
Stefano Zacchiroli . zack@upsilon.cc . https://upsilon.cc/zack
Full professor of Computer Science, Télécom Paris, Polytechnic Institute of Paris
Co-founder & CSO, Software Heritage
Mastodon: https://mastodon.xyz/@zacchiro
I note that the Italian national newspapers (at least the ones I read) gave considerable prominence to the fact that John Elkann has joined Meta's board of directors
In this article [1] (unfortunately quite dated, 2011) it is explained, complete with mathematical formulas, how a small group of funds and financial firms controlled 94.2% of the total revenue of the multinationals ("the network consists of many small connected components, but the largest one (3/4 of all nodes) contains all the top TNCs by economic value, accounting for 94.2% of the total TNC operating revenue").

Today, with the entry of Chinese giants such as State Grid, Sinopec, etc., somewhat less so, but still enough for "entities" like Vanguard, JP Morgan, FMR, etc. to sit on the boards of directors, or in any case influence the management decisions, of Meta, Stellantis, etc., thanks to their sizeable shareholdings [2][3].

Then, of course, there is politics ...

A.

[1] https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/39994/...
[2] https://it.finance.yahoo.com/quote/META/holders/
[3] https://it.finance.yahoo.com/quote/STLA/holders/
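The finding quoted from [1] is a graph-theoretic one: model cross-shareholdings as a network and measure its largest connected component. As a purely illustrative sketch (toy data and made-up names, nothing from the paper's actual dataset), that measurement boils down to a few lines of Python:

```python
from collections import defaultdict

def largest_component(edges, nodes):
    """Return the largest set of nodes mutually reachable via edges."""
    adj = defaultdict(set)
    for a, b in edges:          # treat each shareholding link as undirected
        adj[a].add(b)
        adj[b].add(a)
    seen, best = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, stack = set(), [start]   # iterative depth-first search
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

# Hypothetical mini-network: funds holding stakes in firms (names invented).
nodes = {"FundA", "FundB", "FirmX", "FirmY", "FirmZ", "Isolated"}
edges = [("FundA", "FirmX"), ("FundA", "FirmY"),
         ("FundB", "FirmY"), ("FundB", "FirmZ")]
core = largest_component(edges, nodes)
print(sorted(core))            # ['FirmX', 'FirmY', 'FirmZ', 'FundA', 'FundB']
print(len(core), "of", len(nodes), "nodes sit in the core")
```

On the paper's real data the analogous computation found a giant component covering about three quarters of all nodes, which is what makes the 94.2% revenue-concentration figure possible.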
This site: <https://theyrule.net/> offers a representation of the power networks of boards of directors by mapping their members as a network. Unfortunately this one is not up to date either, and the data is still from 2021. The idea seems to me to be the same.

A.

On 08/01/25 20:52, Antonio wrote:
[...]
Good evening,

Stefano Zacchiroli <zack@upsilon.cc> writes:

[...]
prominence to the fact that John Elkann has joined Meta's board of directors, but much less to the fact that Dana White has done the same: https://en.wikipedia.org/wiki/Dana_White#Politics . Connect the numbered dots yourselves to get from this board change to the policy change reported by 380° (or vice versa).
thanks for pointing out Dana White, who had escaped my notice; I admit that Meta's Board of Directors is not my passion :-)

Wikipedia [1] says that Dana White has been on the Board of Directors since (at least) June 2024; I don't have a diff at hand to verify the date that man joined, but I'm not entirely sure he is one of the numbered dots leading to the change in content policy (et al.).

What rather jumps out at me is that Sir Nicholas William Peter Clegg, i.e. a very prominent British political leader of the Liberal Democrat party (read his curriculum): https://en.wikipedia.org/wiki/Nick_Clegg was "president, global affairs" [2] until January 2nd, the day on which Sir Clegg /was/ resigned [3] (blah, blah, blah) to be replaced by Joel Kaplan, i.e. a US Republican with considerable White House experience going back to 2001 and well connected in today's party: https://en.wikipedia.org/wiki/Joel_Kaplan He had been «vice president of U.S. public policy» since 2011 [4] and has apparently always opposed many of the content policies applied by Facebook/Meta.

Now, many commentators put forward the obvious thesis that replacing Sir Clegg with Kaplan is a way to ingratiate the company with Trump and the Republicans who support him (i.e. NOT the neocons), but...

...since I am a conspiracy theorist, I suspect there is something more complex underneath: what if /instead/ certain corporate figures were imposed by the government as /guarantors/ to ensure the company implements the (para)governmental policies [5] decided upon _despite_ the Zuckerberg-mindset? This would obviously apply to every company powerful enough to heavily influence the lives not only of its own country's citizens but of the entire population of the planet.
oh well, maybe it's the new year's air making me rave :-D

Regards, 380°

[1] https://en.wikipedia.org/wiki/Meta_Platforms#Board_of_directors
[2] https://en.wikipedia.org/wiki/Meta_Platforms#Management
[3] https://www.facebook.com/nickclegg/posts/pfbid02pYw3yki4jXbjns4ofN8XHnHL3t4C...
[4] https://en.wikipedia.org/wiki/Joel_Kaplan#Meta_(Facebook)
[5] that is, of those who really control the governments
Good evening,

...or rather, they are downright offended!

380° <g380@biscuolo.net> writes:

[...]
https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes/
[...]
That's not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.
[...]

«Meta's Fact-Checking Exit Prompts Urgent IFCN Meeting»
Pranav Dixit
Jan 7, 2025, 8:29 PM CET
https://web.archive.org/web/20250108003607/https://www.businessinsider.com/m...

--8<---------------cut here---------------start------------->8---
• [Meta] is ending US fact-checking partnerships and shifting to crowdsourced moderation tools.
• The International Fact-Checking Network called an emergency meeting after the announcement.
• Meta's decision affects the financial sustainability of fact-checking organizations.

The International Fact-Checking Network has convened an emergency meeting of its members following [Meta's announcement] on Tuesday that it will end its third-party fact-checking partnerships in the US and replace them with a crowdsourced moderation tool similar to X's [community notes].

In an exclusive interview with Business Insider, the IFCN's director, Angie Holan, confirmed that the meeting, scheduled for Wednesday, was organized in direct response to Meta's decision.

[...]

The IFCN has long played a crucial role in Meta's fact-checking ecosystem by accrediting organizations for Meta's third-party program, which began in 2016 after the US presidential election that year. [Certification] from the IFCN signaled that a fact-checking organization met rigorous editorial and transparency standards. Meta's partnerships with these certified organizations became a cornerstone of its efforts to combat misinformation, focusing on flagging false claims, contextualizing misinformation, and curbing its spread.
[Meta] <https://web.archive.org/web/20250108003607/https://www.businessinsider.com/m...>
[Meta's announcement] <https://web.archive.org/web/20250108003607/https://www.businessinsider.com/m...>
[community notes] <https://web.archive.org/web/20250108003607/https://www.businessinsider.com/m...>
[Certification] <https://web.archive.org/web/20250108003607/https://ifcncodeofprinciples.poyn...>

People are upset
─────────────────

Holan described the mood among fact-checkers as somber and frustrated. "This program has been a major part of the global fact-checking community's work for years," she said. "People are upset because they saw themselves as partners in good standing with Meta, doing important work to make the platform more accurate and reliable."

She noted that fact-checkers were not responsible for removing posts, only for labeling misleading content and limiting its virality. "It was never about censorship but about adding context to prevent false claims from going viral," Holan said.

A last-minute heads-up
══════════════════════

An employee at PolitiFact, one of the first news organizations to partner with Meta on its Third-Party Fact-Checking Program in December 2016, said the company received virtually no warning from Meta before the program was killed. "The PolitiFact team found out this morning at the same time as everyone else," the employee told BI.

An IFCN employee who was granted anonymity told BI that the organization itself got a heads-up only "late yesterday" via email that something was coming. It asked for a 6 a.m. call — about an hour before Meta's [blog post] written by its [new Republican policy head, Joel Kaplan,] went live. "I had a feeling it was bad news," this employee said.

Meta did not respond to a request for comment.
[blog post] <https://web.archive.org/web/20250108003607/https://about.fb.com/news/2025/01...>
[new Republican policy head, Joel Kaplan,] <https://web.archive.org/web/20250108003607/https://www.businessinsider.com/m...>

Financial fallout for fact-checkers
═══════════════════════════════════

Meta's decision could have serious financial consequences for fact-checking organizations, especially those that relied heavily on funding from the platform. According to a [2023 report] published by the IFCN, income from Meta's Third-Party Fact-Checking Program and grants remain fact-checkers' predominant revenue streams.

"Fact-checking isn't going away, and many robust organizations existed before Meta's program and will continue after it," Holan said. "But some fact-checking initiatives were created because of Meta's support, and those will be vulnerable."

She also underscored the broader challenges facing the industry, saying that fact-checking organizations share the same financial pressures as newsrooms. "This is bad news for the financial sustainability of fact-checking journalism," she said.

[2023 report] <https://web.archive.org/web/20250108003607/https://www.poynter.org/wp-conten...>

Skepticism toward community notes
─────────────────────────────────

Meta plans to replace its partnerships with community notes, a crowd-based system modeled after X's approach. Holan expressed doubt that this model could serve as an effective substitute for expert-led fact-checking. "Community notes on X have only worked in cases where there's bipartisan agreement — and how often does that happen?" she said. "When two political sides disagree, there's no independent way to flag something as false."

It's not yet clear how Meta's implementation of community notes will work.
--8<---------------cut here---------------end--------------->8---

Poor things.
Regards, 380°
and it continues...

«Full Fact responds to Meta ending support for US fact checkers»
https://fullfact.org/blog/2025/jan/meta-ending-support-for-us-fact-checkers/

--8<---------------cut here---------------start------------->8---
Meta, the owners of Facebook, Instagram and Threads [has announced] an end to support for third-party fact checking in the United States.

Chris Morris, Chief Executive of Full Fact, responds:

Meta's decision to end its partnership with fact checkers in the US is disappointing and a backwards step that risks a chilling effect around the world.

From safeguarding elections to protecting public health to dissipating potential unrest on the streets, fact checkers are first responders in the information environment. Our specialists are trained to work in a way that promotes credible evidence and prioritises tackling harmful information - we believe the public has a right to access our expertise.

We absolutely refute Meta's charge of bias - we are strictly impartial, fact check claims from all political stripes with equal rigour, and hold those in power to account through our commitment to truth.

Like Meta, fact checkers are committed to promoting free speech based on good information without resorting to censorship. But locking fact checkers out of the conversation won't help society to turn the tide on rapidly rising misinformation.

Misinformation doesn't respect borders, so European fact checkers will be closely examining this development to understand what it means for our shared information environment.

Chris Morris – 7 January 2025

Full Fact campaigns to ensure a better information environment for everyone. We scrutinise all sides in any political debate impartially, and hold everyone in public office to high standards.
Full Fact has participated in Meta's Third Party Fact Checking Programme since January 2019, in which we check images, videos and articles on Meta platforms and receive income depending on the amount of fact checking done under the programme. We have published independent reports on the programme with recommendations for improvements.

We disclose all funding we receive over £5,000—you can [see these figures here]. We are editorially independent and our funders have no editorial control over our content.

[has announced] <https://about.fb.com/news/2025/01/meta-more-speech-fewer-mistakes>
[see these figures here] </about/funding/>
--8<---------------cut here---------------end--------------->8---

«From safeguarding elections to protecting public health to dissipating potential unrest on the streets»: claims like these would deserve some proper (recursive) meta-fact-checking of their own, otherwise they risk coming across either a. as braggadocio from inflated windbags or b. as confessions of complicity in para-military subversion. Take your pick.

Regards, 380°
participants (4)
- 380°
- Alberto Cammozzo
- Antonio
- Stefano Zacchiroli