The EU should support Ireland’s bold move to regulate Big Tech
by Zephyr Teachout and Roger McNamee

Dublin, Ireland, was stunned a month ago by riots that transformed its downtown into chaos, the worst rioting in decades, stemming from far-right online rumors about an attack on children.

The riots, like Jan. 6, appear to be a direct outgrowth of the amplification ecosystem supported by social media networks such as TikTok, Google’s YouTube and Meta’s Instagram, which likely keep their European headquarters in Dublin for tax reasons.

Ireland, long ridiculed for bowing to Big Tech, has now come out with a powerful proposal to address the problems of algorithmic amplification. Ireland set up Coimisiún na Meán, a new enforcer, this year to set rules for digital platforms. It has proposed a simple, easily enforceable rule that could change the game: all recommender systems based on intimately profiling people should be turned off by default.

In practice, that means the big platforms cannot automatically run algorithms that use information about a person’s political views, sex life, health or ethnicity. A person will be able to switch an algorithm on, but those toxic algorithms will no longer be on by default. Users will still have access to algorithmic amplification, but they will have to opt in to get it.

Today, algorithms feed each user different information. They derive their power from the trove of personal data that platforms acquire about users, data that enables the identification and exploitation of emotional weak points, all to maximize engagement. Platforms do not acknowledge responsibility for downstream consequences. Some users respond best to cat videos, others to hate speech, disinformation and conspiracy theories. For many, the response to harmful content is involuntary, driven by fight or flight. Either way, users spend more time on the platform, which allows the company to make more money by showing them ads.

Artificially amplifying outrage may be lucrative, but it carries a terrible cost. Recommender systems enable extreme content to migrate from the fringe to the mainstream. An investigation revealed that Meta’s algorithms were key contributors to the murderous hate that cost thousands of people their lives in Myanmar’s 2017 Rohingya genocide.

Frances Haugen, a whistleblower, revealed in 2021 that Meta had known the danger of its algorithm for years. As long ago as 2016, Meta’s internal research had reported that “64 percent of all extremist group joins are due to our recommendation tools.” It continued: “Our recommendation systems grow the problem.”

When 37,000 people allowed researchers to monitor their YouTube experience, nearly all the nasty videos they had encountered were pushed into their feeds by YouTube’s algorithm. An experiment using simulated users found that Facebook, Instagram and X, the platform formerly known as Twitter, recommended antisemitic and conspiracy content to test users as young as 14 years old. Earlier this year, the surgeon general spoke about the danger of algorithms that promote suicide and self-harm. A recent experiment by Amnesty International proves the point: TikTok’s algorithm recommended videos encouraging suicide to a test user posing as a 13-year-old only an hour after the account was created.

Recommender systems didn’t really take hold until 2010. We have tried trusting the Big Tech platforms, but 13 years into the experiment, with plenty of data showing harm, we cannot trust technology companies to regulate themselves in the public interest. Nor do we want the government to be in the business of sorting through what is and is not harmful if amplified. The brilliance of the Irish model is that it offers a way forward: rules that are content-neutral, giving users control of one critical aspect of their online experience.

After years of being a tax haven for Big Tech, Ireland is now offering the world a groundbreaking rule to protect democracy, public health and public safety. In under nine months, Coimisiún na Meán has gone from initial launch to tackling the machine at the heart of the disinformation crisis.

The rule is necessary because current European Union regulations aren’t working and the new Digital Services Act is not designed to tackle the core problem. Under the EU’s General Data Protection Regulation (GDPR), tech firms are already supposed to get a person’s “explicit” (two-step) consent to process inferences about their political views, sexuality, religion, ethnicity or health. Several complaints that the big firms have failed to seek or receive this consent have remained unresolved for years after being brought. For over five years, Big Tech’s primary GDPR authority, which is in Ireland, failed to notice or act.

Europe often trumpets its regulatory leadership in the world. But the so-called “Brussels Effect” of other countries heeding its rules began to dissipate when Europe failed to enforce its most famous law, the GDPR. The European Commission is understandably focused on the Digital Services Act, which goes into effect next month, but EU policymakers should welcome the new Irish rules. Coimisiún na Meán’s bold move would ultimately make the Digital Services Act far more successful. Europe and the Irish government are stepping up at last to regulate harmful technology products. Social media may become social again.

https://thehill.com/opinion/technology/4380369-the-eu-should-support-ireland...
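[Editor's illustration] The opt-in default described above can be pictured as a single gate in a platform's feed-selection logic. The sketch below is purely illustrative; all names (`User`, `select_feed`, `profiling_opt_in`) are invented and do not come from any platform's code or from Coimisiún na Meán's draft. It shows the one behavioral change the rule demands: the profiling-based feed runs only after an explicit opt-in.

```python
# Hypothetical sketch of an opt-in default for profiling-based recommendations.
# Under the proposed Irish rule, the profiling-based recommender must be OFF
# for every user until that user explicitly switches it on.
from dataclasses import dataclass


@dataclass
class User:
    name: str
    # Defaults to False: no profiling-based recommendations without consent.
    profiling_opt_in: bool = False


def select_feed(user: User,
                non_profiling: list[str],
                personalized: list[str]) -> list[str]:
    """Serve the personalized (profiling-based) feed only if the user has
    opted in; otherwise fall back to a non-profiling feed (e.g. chronological)."""
    if user.profiling_opt_in:
        return personalized
    return non_profiling


# A new user gets the non-profiling feed by default:
alice = User("alice")
assert select_feed(alice, ["a", "b"], ["x", "y"]) == ["a", "b"]

# Only an explicit opt-in switches the profiling-based feed on:
alice.profiling_opt_in = True
assert select_feed(alice, ["a", "b"], ["x", "y"]) == ["x", "y"]
```

The substance of the rule is just the default value: under the DSA's Article 38 a non-profiling option must merely exist, whereas here it is the starting state.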
Hello Daniela,

From a cybernetic standpoint, the Irish proposal tries to reduce the social effectiveness of some of Big Tech's actuators without touching the other components of these cybernetic agents (controller, sensors and other actuators). By comparison, actually enforcing the GDPR against Big Tech would blind their sensors.

In itself the proposal is not devoid of common sense (in a democracy one would ask why it is not already law, but plutocratic colonies work differently). To assess its potential effectiveness, however, one has to consider the cybernetic system in which the rule would operate as a whole.

The rule hands citizens the choice of giving up the free, locally sourced doses of dopamine/serotonin to which they have been habituated, freeing themselves from the influence that GAFAM exert on them and on their country through one specific channel ("algorithmic" recommendations). But citizens are cybernetic agents with infinitely fewer resources than GAFAM: short-range sensors (nose, eyes, ears...) and actuators (arms, legs...), and a control center that lacks the knowledge needed even to conceive of, let alone counter, this kind of statistical social control. On the other side stand tens of thousands of computer scientists, psychologists, sociologists and even renowned philosophers in Big Tech's pay, commanding billions of sensors, exabytes of minutely detailed personal data and computing power without precedent in human history. And citizens face these cybernetic agents individually.

The Irish proposal, therefore, would achieve its stated goals no more than the GDPR has achieved them to date.
Hoping that individuals who are closely surveilled 24 hours a day, 7 days a week, cannot be nudged into clicking the opt-in button is like hoping that an addict will stop using the substances they depend on simply because they are asked to make one extra click. Banning the corporate use of recommender systems would make sense. Making them opt-in, I fear, is just a mild palliative for a dying democracy.

Thinking ill of others may be a sin, but in light of this analysis I am not surprised that a country dependent on Big Tech money has proposed this solution. Whoever fed it to them will have done their sums and judged it a good tactic for evading effective regulation.

Giacomo, who hopes to receive convincing objections... that reassure him about the future.

On 31 December 2023, 18:05:06 UTC, Daniela Tafani <daniela.tafani@unipi.it> wrote:
_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
Good morning, Giacomo,

I agree that an opt-in requirement is not equivalent to, and therefore cannot replace, the breakup of a monopoly. For Ireland, it nonetheless strikes me as a welcome reversal of course (Coimisiún na Meán took office in March 2023, if I remember correctly):
https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town

The act, after all, follows a debate on toxic recommender systems and, in particular, on the harm they do to children:
https://www.iccl.ie/resources/publication/the-european-commission-must-follo...

In its report (available from the page above), the Irish Council for Civil Liberties anticipates that companies may engage in "malicious compliance":

"For example, platforms may rely on one or more of the following – and currently do so in addition to their recommender systems or as part of them:
• the user’s selection from a menu of the categories of content they are interested in;
• expert editors curating categories of video and video creators;
• ranking content by other factors, such as number of views, reputation of author/producer, quality rating feedback users, etc.
Despite the power of alternatives, some platforms may respond with “malicious compliance” by implementing poor experiences, in order to provoke outcry against regulation. The following, for example, must not be accepted from the platforms: switching from a recommender system to an entirely unedited and unordered feed of randomised video, and then prompting users to switch back on the recommender system. Moreover, digital platforms who maliciously comply create the risk that their users will depart to competitors who offer better service. Malicious compliance should be damaging to the platform, not the user."

Campaigns like these, and whatever regulatory measures follow them (however partial), are indispensable for building collective awareness, don't you think?
Best wishes and a happy new year,
Daniela
____________
From: Giacomo Tesio <giacomo@tesio.it>
Sent: Monday, 1 January 2024, 22:21
To: nexa@server-nexa.polito.it; Daniela Tafani; nexa@server-nexa.polito.it
Subject: Re: [nexa] The EU should support Ireland’s bold move to regulate Big Tech
Thanks for the pointer, Daniela.

If I understand the Irish proposal correctly (I have not read it), it turns into the default what Article 38 of the Digital Services Act only guaranteed as an option for the user:

----
In addition to the requirements set out in Article 27, providers of very large online platforms and of very large online search engines that use recommender systems shall provide at least one option for each of their recommender systems which is not based on profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679.
----

(Article 27 requires recommender systems to disclose transparently the criteria on which they are based.)

Do you know whether in their proposal the "big platforms" are equivalent to the DSA's "very large online platforms", or is there another definition?

I agree, in any case, that it would be an important move: switching from an "opt-out" default to an "opt-in" default would in my view be a necessary safeguard (even if not a decisive one; I agree with Giacomo) to defend citizens from the overwhelming power of GAFAM.

Ciao,
Enrico

On 31/12/2023, 19:05, Daniela Tafani wrote:
participants (3):
- Daniela Tafani
- Enrico Nardelli
- Giacomo Tesio