I have a half-baked idea coming on here about machine learning and alienation. There’s an interesting difference between a machine learning tool built to analyze scientific data or to predict market moves, and one built to assess re-offender likelihood or to score credit risk. The alienation turns on whether a machine learning system is used to amplify human reason or to evade it.
It also feels like a question of consent, tbh: if an algorithm mistakenly places my money in penny stocks with huge FX risk, at least nobody forced me to use that algorithm. That’s not quite the same when it comes to sentencing guidelines.
frankly i also didn’t consent to being judged by “””my peers”””, ie some collection of randos whose iqs are some fraction of mine that is worryingly closer to 0.5 than 1
objections that reduce to “it sucks to be in someone else’s power” are entirely too general to invalidate specific uses of power
Insofar as those wielding this power are nominally in my employ, I also don’t want them to abdicate that responsibility just because it’s you, rather than me, in their grasp.