I have a half-baked idea coming on here about machine learning and alienation. There's an interesting difference between a machine learning tool built to analyze scientific data or predict market moves, and one built to score re-offender likelihood or predict credit risk. The alienation is in that distinction: whether a machine learning system is used to amplify human reason or to evade it.
It also feels like a question of consent, tbh. If an algorithm mistakenly puts my money in penny stocks with huge FX risk, at least nobody forced me to use that algorithm. The same can't be said for sentencing guidelines.