“we will end up creating a dystopic information environment”
David Kaye, writing for The LA Review of Books on “The Digital Deluge and the Age of AI”:
The public’s impression of AI is that it is machines taking over, but — for now, for the foreseeable future, and certainly in content moderation — it is really human programming and the leveraging of that power, which is a massive one for corporations. The machines have a lot of difficulty with text, with all the variations of satire and irony and misdirection and colloquial choppiness that is natural to language. They have difficulty with human difference and have facilitated the upholding of race, gender, and other kinds of biases to negative effect. Even worse, as the scholar Safiya Noble argues in her book Algorithms of Oppression, “racism and sexism are part of the architecture and language of technology.” And all of this is not merely because they are machines and “cannot know” in the sense of human intelligence. It is also because they are human-driven.
We often do not know the answers about meaning, at least not on a first review. The programmers have biases, and those who create rules for the programmers have biases, sometimes baked-in biases having to do with gender, race, politics, and much else of consequence. Exacerbating these substantive problems, AI’s operations are opaque to most users and present serious challenges to the transparency of speech regulation and moderation.
When systems scale, shit gets crazy.