Germany’s data ethics commission releases 75 recommendations with EU-wide application in mind

A new document presented to the German government argues for more regulation of automated decision-making. AlgorithmWatch welcomes some of its proposals.

This Wednesday, the “data ethics commission” of the German government released a 240-page report (pdf). It contains 75 concrete recommendations regarding the digitization of society, many of which have to do with algorithmic decision-making.

The 16-strong commission, which included 9 men, started work in 2018. It was composed mostly of scholars, along with data protection officers and some industry representatives. While the report addresses several aspects of digitization, notably recommending that any plan to treat personal data as property that could be bought and sold be abandoned, its third part is devoted entirely to algorithmic systems.

On this point, the commission's central recommendation is to regulate automated systems differently depending on where they fall on a five-level scale of potential harm:

  1. Systems with low potential harm such as drink dispensers should not be regulated.
  2. Systems with some potential harm such as dynamic pricing in e-commerce should be lightly regulated and post-hoc controls should be set up.
  3. Systems with regular or obvious potential harm, such as personalized pricing, should undergo an approval procedure combined with regular controls.
  4. Systems with considerable potential harm, such as credit scoring by companies holding a quasi-monopoly, should publish the details of their algorithms, including the factors used in the calculations and their weights, the data processed and an explanation of their inner logic. Controls should be possible via a real-time interface.
  5. Systems with unwarranted potential harm such as autonomous weapons should be “fully or partially” forbidden.
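For illustration only, the five-level scale above could be encoded as a simple lookup. The level names, examples and regulatory responses are paraphrased from the report; the `HarmLevel` enum and `required_oversight` function are hypothetical names, not anything proposed by the commission.

```python
from enum import IntEnum

class HarmLevel(IntEnum):
    """Illustrative encoding of the commission's five-level harm scale."""
    LOW = 1            # e.g. drink dispensers
    SOME = 2           # e.g. dynamic pricing in e-commerce
    REGULAR = 3        # e.g. personalized pricing
    CONSIDERABLE = 4   # e.g. quasi-monopoly credit scoring
    UNWARRANTED = 5    # e.g. autonomous weapons

def required_oversight(level: HarmLevel) -> str:
    """Hypothetical mapping from harm level to the recommended response."""
    responses = {
        HarmLevel.LOW: "no regulation",
        HarmLevel.SOME: "light regulation with post-hoc controls",
        HarmLevel.REGULAR: "approval procedure with regular controls",
        HarmLevel.CONSIDERABLE: "published algorithm details, real-time control interface",
        HarmLevel.UNWARRANTED: "full or partial prohibition",
    }
    return responses[level]
```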

[...]

Continue reading here: https://algorithmwatch.org/en/germanys-data-ethics-commission-releases-75-recommendations-with-eu-wide-application-in-mind/