On 28/02/24 10:54, Daniela Tafani wrote:
More and more evidence will emerge that generative AI and large language models provide false information and are prone to hallucination—where an AI simply makes stuff up, and gets it wrong. Hopes of a quick fix to the hallucination problem via supervised learning, where these models are taught to stay away from questionable sources or statements, will prove optimistic at best. Because the architecture of these models is based on predicting the next word or words in a sequence, it will prove exceedingly difficult to have the predictions be anchored to known truths.
There are, however, cases, such as assessing the creditworthiness of a loan applicant, in which the imprecision (to put it euphemistically) of SALAMI is well known and is not considered a problem. As Brett Scott writes in Cloud Money (2022), chapter 9:

And there are countless possible mis-categorisations. To use a light-hearted example, I discovered that Twitter’s Robo-Sherlocks have me categorised – among other things – as a possible ‘working-class mom’. In Twitter’s case this may mean I’m shown adverts for cut-price family holidays, but in the case of a bank it could mean getting blacklisted from credit, or even its opposite – getting enticed into over-indebtedness. Regardless, ***if bluntly auto-categorising can help a bank make money by allowing it to churn through many more customers at low cost, the bank will try to do just that.*** Much in the way YouTube does not care if it gets video suggestions wrong for 20 per cent of people, so too will banks write off those customers who find themselves punished by the imprecision of their systems. ***The motivation in automating ‘intelligence’ is not to seek the truth and nuance of every case: it is to optimise profit at scale,*** and, unless it hurts the bottom line, financial institutions will not discard a system if they discover it only has the capabilities of a lazy holiday intern.

This applies in particular to the much-celebrated microcredit: "Poorer people are unprofitable if an institution has to spend too much time serving or thinking about them, so the solution is to automate the process of doing both."

Rebus sic stantibus, monopolists and oligopolists can keep their SALAMI and profit from them, even though those who must contend with competition may instead suffer economic harm from them. But "competition", as Peter Thiel wrote, "is for losers".

Can we extend these considerations to generative AI and LLMs as well?

Talk soon,
MCP