From an ethical perspective, the AIA inherits the same foundational approach seen in the GDPR: it is based on protecting human dignity and fundamental rights. This is a very positive feature, even if the current proposal of the AIA is more top-down, less flexible and less focused on the protection of citizens and their rights than the GDPR. Unfortunately, the AIA uses an anachronistic terminology to define this approach as “human-centric”, that is, as an approach that places humanity at the centre of technological development. Yet this is both trivially true and dangerously ambiguous. [...] Fortunately, despite the unfortunate and obsolete terminology, the underlying vision is sound: the AIA emphasises the value of AI as a technology that can be very “green” and provide extraordinary support against pollution and climate change and for the sustainable development of information societies. [...] Correctly, the AIA treats AI as a technology for solving problems and performing tasks, not as some kind of Frankenstein’s monster. Therefore, the proposal excludes the possibility of assigning to AI systems any status as a legal person, with rights and duties, such as the possibility of owning property, entering into contracts, suing and being sued, and so forth. The responsibility of any AI system rests entirely with the people who design, manufacture, market, and use it. Coherently, the proposal stresses the importance of human oversight throughout the text. [...] At times, the text is ambiguous. [...] Some AI systems are discussed as low- or zero-risk, and as such, they seem to fall outside the scope of strict compliance and be subject only to voluntary codes of conduct, [...] leaving too much room for uncertainty and loopholes. [...] If one does not distinguish between these two senses of high-risk system—something is high risk if it fails to work vs. 
something is high risk if it is put to work—then one may end up confusing the resilience that “good” AI systems must have with the resistance that must be exerted towards the “bad” AI systems. [...] In other cases, the proposal is vague, such as when it comes to banning the use of AI systems intended to distort human behaviour, with probable physical or psychological harm. The intent is commendable, but it might risk banning even unproblematic AI systems if this approach were applied in a Draconian way. Finally, some expectations in the proposal seem too idealistic. [...] The new legislation may not improve but merely push out of the EU some risky AI R&D and its related ethical-legal problems, inviting companies to develop their products and services in other countries where legislation is absent, or less stringent, or not enforced, while the EU turns a wilful blind eye—or just inadequately enforces its legislation [...] Not respecting this atemporality (it should not matter when unethical steps were taken) would contradict the aterritoriality of the legislation. In this case, the recommendation is obvious: the EU must keep both eyes open and apply its ethical and legal requirements consistently and without hypocrisy, not just to the status quo, but also to the history of what comes from other places.

Taken from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3873273

Personally, the analysis strikes me as excellent, though not exhaustive, and slightly naive in a few passages (which I have not quoted here). Defining “AI” would not be difficult (by giving it a different name), but it is impossible if one insists on distinguishing it from the rest of software. It is, in fact, statistically programmed software that reproduces one of the infinitely many patterns that can be deduced from the available data.
Once in operation, it is indistinguishable from linguistically programmed software, except for the genuine impossibility of predicting its behaviour in the infinitely many corner cases that are not sufficiently represented in the data.

In any case, I have no idea what Floridi has in mind when he speaks of “AI systems intended to distort human behaviour” that could have “unproblematic” uses. The spread of any automatism (AI or otherwise) that can distort human behaviour, causing physical, psychological, or social harm (Floridi does not consider this last category), should simply be banned.

Giacomo