“we will end up creating a dystopic information environment”

David Kaye, writing for The LA Review of Books on The Digital Deluge and the Age of AI:

The public’s impression of AI is that it is machines taking over, but — for now, for the foreseeable future, and certainly in content moderation — it is really human programming and the leveraging of that power, which is a massive one for corporations. The machines have a lot of difficulty with text, with all the variations of satire and irony and misdirection and colloquial choppiness that is natural to language. They have difficulty with human difference and have facilitated the upholding of race, gender, and other kinds of biases to negative effect. Even worse, as the scholar Safiya Noble argues in her book Algorithms of Oppression, “racism and sexism are part of the architecture and language of technology.” And all of this is not merely because they are machines and “cannot know” in the sense of human intelligence. It is also because they are human-driven.

We often do not know the answers about meaning, at least not on a first review. The programmers have biases, and those who create rules for the programmers have biases, sometimes baked-in biases having to do with gender, race, politics, and much else of consequence. Exacerbating these substantive problems, AI’s operations are opaque to most users and present serious challenges to the transparency of speech regulation and moderation.

When systems scale, shit gets crazy.

Lying Robots

Gizmodo contributor George Dvorsky interviewed the authors of Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence about why we might want to consider programming robots to lie to us:

Gizmodo: How can we program a robot to be an effective deceiver?

Bridewell: There are several capacities necessary for recognizing or engaging in deceptive activities, and we focus on three. The first of these is a representational theory of mind, which involves the ability to represent and reason about the beliefs and goals of yourself and others. For example, when buying a car, you might notice that it has high mileage and could be nearly worn out. The salesperson might say, “Sure, this car has high mileage, but that means it’s going to last a long time!” To detect the lie, you need to represent not only your own belief, but also the salesperson’s corresponding (true) belief that high mileage is a bad sign.

Of course, it may be the case that the salesperson really believes what she says. In that case, you would represent her as having a false belief. Since we lack direct access to other people’s beliefs and goals, the distinction between a lie and a false belief can be subtle. However, if we know someone’s motives, we can infer the relative likelihood that they are lying or expressing a false belief. So, the second capacity a robot would need is to represent “ulterior motives.” The third capacity addresses the question, “Ulterior to what?” These motives need to be contrasted with “standing norms,” which are basic injunctions that guide our behavior and include maxims like “be truthful” or “be polite.” In this context, ulterior motives are goals that can override standing norms and open the door to deceptive speech.
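
To make Bridewell’s three capacities a bit more concrete, here is a rough, hypothetical Python sketch (mine, not anything from Robot Ethics 2.0) of a listener that keeps a model of a speaker’s attributed beliefs and motives, checks a claim against that model, and uses a standing norm like “be truthful” to weigh a likely lie against an innocent false belief. The names AgentModel and assess_claim, and the belief strings, are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class AgentModel:
        """The listener's model of another agent: attributed beliefs and motives."""
        name: str
        beliefs: dict = field(default_factory=dict)   # proposition -> attributed truth value
        motives: list = field(default_factory=list)   # ulterior motives, e.g. "close the sale"

    STANDING_NORMS = {"be truthful", "be polite"}

    def assess_claim(speaker: AgentModel, my_beliefs: dict, proposition: str, claimed: bool) -> str:
        """Weigh whether a claim looks honest, a false belief, or a likely lie."""
        attributed = speaker.beliefs.get(proposition)   # what I think the speaker believes
        mine = my_beliefs.get(proposition)              # what I believe myself

        if attributed is None:
            return "no verdict: I have no model of the speaker's belief"
        if claimed == attributed:
            # The speaker is sincere by my model; if I disagree, she holds a false belief.
            if mine is not None and mine != claimed:
                return "sincere but mistaken: the speaker holds a false belief"
            return "honest: the claim matches what I think the speaker believes"
        # The claim contradicts the belief I attribute to the speaker. An ulterior
        # motive that can override the standing norm "be truthful" tips the scale
        # toward deception rather than an innocent false belief.
        if speaker.motives and "be truthful" in STANDING_NORMS:
            return "likely lie: an ulterior motive overrides the norm 'be truthful'"
        return "inconsistent claim, but no clear motive to deceive"

    # The used-car example from the interview, with assumed belief assignments.
    salesperson = AgentModel(
        "salesperson",
        beliefs={"high mileage means the car will last": False},  # she knows better
        motives=["close the sale"],
    )
    my_beliefs = {"high mileage means the car will last": False}

    print(assess_claim(salesperson, my_beliefs,
                       "high mileage means the car will last", claimed=True))
    # -> likely lie: an ulterior motive overrides the norm 'be truthful'

The point of the toy example is the same one Bridewell makes: the verdict does not come from the words alone, but from representing the speaker’s beliefs and asking whether a motive exists that could override the norm of truthfulness.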

Maybe robots should start reading fiction too.

“The narrative is a bug, not a feature.”

One reason we easily dismiss the astonishing things computers can do is that we know that they don’t carry around a narrative, a play by play, the noise in their head that’s actually (in our view) ‘intelligence.’

It turns out, though, that the narrative is a bug, not a feature. That narrative doesn’t help us perform better, it actually makes us less intelligent. Any athlete or world-class performer (in debate, dance or dungeonmastering) will tell you that they do their best work when they are so engaged that the narrative disappears.

Seth Godin

Categories:

Technology