People want their devices to know everything about them AND they want their privacy.

Nicole Lee at Engadget explores what Siri can learn from Google Assistant:

Another area that Siri can learn from Google Assistant is simply a better understanding of who you are and have that inform search results. For example, if I tell Google that my favorite team is the San Francisco Giants, it’ll simply return the scores of last night’s game if I say “how are the Giants doing?” or just “how did my favorite team do last night?” Siri, on the other hand, would ask me “Do you mean the New York Giants or the San Francisco Giants?” every single time. That’s just tiresome.

I agree with this article: Siri is definitely lagging behind Google Assistant and Alexa.
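The behavior Lee describes boils down to storing a preference once and using it to resolve ambiguous queries later. Here’s a toy Python sketch of that idea; the names, data, and structure are entirely made up, not how Siri or Google Assistant actually work:

```python
# Toy sketch of preference-based disambiguation, as described in the quote
# above. Everything here is hypothetical; it is not how Siri or Google
# Assistant are actually implemented.

# Ambiguous names and their possible referents (illustrative data).
TEAMS = {
    "Giants": ["New York Giants", "San Francisco Giants"],
    "Cardinals": ["Arizona Cardinals", "St. Louis Cardinals"],
}

class AssistantProfile:
    """What the assistant has learned about one user."""

    def __init__(self):
        self.favorites = {}  # e.g. {"team": "San Francisco Giants"}

    def remember(self, category, value):
        self.favorites[category] = value

    def resolve_team(self, name):
        """Return a single team for an ambiguous name, or None to ask the user."""
        candidates = TEAMS.get(name, [name])
        if len(candidates) == 1:
            return candidates[0]
        favorite = self.favorites.get("team")
        if favorite in candidates:
            return favorite  # no need to ask "which Giants?" every time
        return None  # genuinely ambiguous: ask once, then remember the answer

profile = AssistantProfile()
profile.remember("team", "San Francisco Giants")
print(profile.resolve_team("Giants"))  # -> San Francisco Giants
```

The interesting part isn’t the lookup, it’s the `remember` step: the assistant has to retain personal data somewhere, which is exactly where the tension below comes in.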

At the same time, the quote above highlights a point where many people are hypocritical: they expect their devices to acquire a deep understanding of who they are, what they like, and how they behave, yet they’re fiercely vocal about their privacy rights.

This might be a more reasonable expectation with Apple, since their business model doesn’t rely on aggregating user data for advertising. Google, on the other hand, makes the majority of their money through advertising.

I’m not saying you should be willing to throw away your privacy rights if you own one of these devices, but you should at least know the terms and conditions you’re agreeing to.

Never go full robot.

“The guy telling everyone to be afraid of robots uses too many robots in his factory”:

Elon Musk says Tesla relied on too many robots to build the Model 3, which is partly to blame for the delays in manufacturing the crucial mass-market electric car. In an interview with CBS This Morning, Musk agreed with Tesla’s critics that there was an over-reliance on automation and too few human assembly line workers building the Model 3.

Earlier this month, Tesla announced that it had officially missed its goal of making 2,500 Model 3 vehicles a week by the end of the first financial quarter of this year. It will start the second quarter making just 2,000 Model 3s per week, but the company says it still believes it can get to a rate of 5,000 Model 3s per week at the midway point of 2018.

You went full robot. Never go full robot.

Lying Robots

Gizmodo contributor George Dvorsky interviewed the authors of Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence about why we might want to consider programming robots to lie to us:

Gizmodo: How can we program a robot to be an effective deceiver?

Bridewell: There are several capacities necessary for recognizing or engaging in deceptive activities, and we focus on three. The first of these is a representational theory of mind, which involves the ability to represent and reason about the beliefs and goals of yourself and others. For example, when buying a car, you might notice that it has high mileage and could be nearly worn out. The salesperson might say, “Sure, this car has high mileage, but that means it’s going to last a long time!” To detect the lie, you need to represent not only your own belief, but also the salesperson’s corresponding (true) belief that high mileage is a bad sign.

Of course, it may be the case that the salesperson really believes what she says. In that case, you would represent her as having a false belief. Since we lack direct access to other people’s beliefs and goals, the distinction between a lie and a false belief can be subtle. However, if we know someone’s motives, we can infer the relative likelihood that they are lying or expressing a false belief. So, the second capacity a robot would need is to represent “ulterior motives.” The third capacity addresses the question, “Ulterior to what?” These motives need to be contrasted with “standing norms,” which are basic injunctions that guide our behavior and include maxims like “be truthful” or “be polite.” In this context, ulterior motives are goals that can override standing norms and open the door to deceptive speech.
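The three capacities map onto a pretty simple structure if you sketch them in code: your own beliefs, a model of the other agent’s motives, and a set of standing norms those motives can override. Here’s a toy Python sketch of the car-salesperson example; every name in it is hypothetical, just the interview’s ideas restated:

```python
from dataclasses import dataclass, field

# Standing norms: default injunctions that normally govern speech.
STANDING_NORMS = {"be truthful", "be polite"}

@dataclass
class AgentModel:
    """My representation of another agent (a simple theory of mind)."""
    beliefs: dict = field(default_factory=dict)          # proposition -> truth value
    ulterior_motives: set = field(default_factory=set)   # goals that can override norms

def assess_claim(my_beliefs: dict, speaker: AgentModel,
                 proposition: str, claimed: bool) -> str:
    """Judge whether a claim is sincere, a likely lie, or a false belief."""
    my_belief = my_beliefs.get(proposition)
    if my_belief is None or my_belief == claimed:
        return "no conflict: accept the claim as sincere"
    # The claim contradicts what I believe. I can't read the speaker's
    # mind, so I fall back on motives: a goal strong enough to override
    # the "be truthful" norm makes a deliberate lie the better explanation.
    if speaker.ulterior_motives and "be truthful" in STANDING_NORMS:
        return "likely a lie (an ulterior motive overrides 'be truthful')"
    return "likely a sincere false belief"

# The car-salesperson example from the quote:
my_beliefs = {"high mileage means the car will last": False}
salesperson = AgentModel(ulterior_motives={"close the sale"})
print(assess_claim(my_beliefs, salesperson,
                   "high mileage means the car will last", claimed=True))
# -> likely a lie (an ulterior motive overrides 'be truthful')
```

Note how the lie-versus-false-belief distinction only falls out once motives enter the picture, exactly as Bridewell says.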

Maybe robots should start reading fiction too.