Reply Hazy, Try Again Later

Over at Slate, Will Oremus lets us know Alexa is losing her edge:

As recently as a year ago, Amazon single-handedly controlled the global smart speaker industry, with a market share upward of 75 percent, according to estimates from two of the leading market watchers, Strategy Analytics and Canalys, based in Singapore. Amazon itself boasted in a February earnings report that it had sold “tens of millions” of Echo devices in 2017. That figure included not only its flagship Echo smart speaker but the Echo Dot, Echo Show, and other Echos, the company clarified to me (though not other Alexa-powered gizmos, such as the Tap or Fire TV). It makes sense that Amazon was crushing the competition, because there wasn’t much competition yet: Google had just launched the Home in late 2016, and Apple’s HomePod was not yet on the market. The Echo has been available since 2014.

Would-be rivals faced an uphill struggle. Amazon’s head start in smart speakers resembled the daunting leads that Apple famously built in portable MP3 players, smartphones, and tablets. But Apple’s high prices at least gave competitors an opening to build cheaper alternatives for the mass market. Not so with Amazon. Because it viewed Echo partly as a path to Amazon purchases, the company sold its smart speakers at affordable prices, opting to maximize sales rather than profit margins. How could latecomers compete?

First off, Oremus is being selective with his MP3 player timeline.

Apple entered an already crowded MP3 player market when it launched the original iPod in 2001. The classic, oft-quoted ‘BrownFury’ comment on Slashdot said what all the short-sighted nerds at the time were thinking: “No wireless. Less space than a Nomad. Lame.”

Table stakes, erroneously determined by the dorks, had been set. Apple was a day late and a dollar short. Except they weren’t.

Amazon was first to market with the Echo, but not best to market. The smart speaker product segment is still a young one and it’s unclear which one(s) will be the winner(s). A few weeks ago I questioned the value of a speaker you can order things from (will this blog entry look cute and naive in a decade?).

As I see it, Google has the biggest lead in AI assistants and voice recognition/dictation, but Apple will be releasing iOS 12 in a month, which includes Siri Shortcuts, something I’m very excited about.

Maybe Apple ends up dominating the premium end of the smart speaker category, mirroring what they have been doing with the iPhone for 10 years, while Google and Amazon fight for the rest. Maybe Google winds up the winner.

We don’t know. Won’t know for a while.


People want their devices to know everything about them AND they want their privacy.

Nicole Lee at Engadget explores what Siri can learn from Google Assistant:

Another area that Siri can learn from Google Assistant is simply a better understanding of who you are and have that inform search results. For example, if I tell Google that my favorite team is the San Francisco Giants, it’ll simply return the scores of last night’s game if I say “how are the Giants doing?” or just “how did my favorite team do last night?” Siri, on the other hand, would ask me “Do you mean the New York Giants or the San Francisco Giants?” every single time. That’s just tiresome.

I agree with this article: Siri is definitely lagging behind Google Assistant and Alexa.

At the same time, the quote above highlights a contradiction many people live with: they expect their devices to acquire a deep understanding of who they are, what they like, and how they behave, yet they’re fiercely vocal about their privacy rights.

This might be a more reasonable expectation with Apple, since their business model doesn’t rely on aggregating user data for advertising. Google, on the other hand, makes the majority of their money through advertising.

I’m not saying you should be willing to throw away your privacy rights if you own one of these devices, but you should at least know the terms and conditions you’re agreeing to.

Never go full robot.

“The guy telling everyone to be afraid of robots uses too many robots in his factory”:

Elon Musk says Tesla relied on too many robots to build the Model 3, which is partly to blame for the delays in manufacturing the crucial mass-market electric car. In an interview with CBS This Morning, Musk agreed with Tesla’s critics that there was over-reliance on automation and too few human assembly line workers building the Model 3.

Earlier this month, Tesla announced that it had officially missed its goal of making 2,500 Model 3 vehicles a week by the end of the first financial quarter of this year. It will start the second quarter making just 2,000 Model 3s per week, but the company says it still believes it can get to a rate of 5,000 Model 3s per week at the midway point of 2018.

You went full robot. Never go full robot.

Lying Robots

Gizmodo contributor George Dvorsky interviewed the authors of Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, and they discussed why we might want to consider programming robots to lie to us:

Gizmodo: How can we program a robot to be an effective deceiver?

Bridewell: There are several capacities necessary for recognizing or engaging in deceptive activities, and we focus on three. The first of these is a representational theory of mind, which involves the ability to represent and reason about the beliefs and goals of yourself and others. For example, when buying a car, you might notice that it has high mileage and could be nearly worn out. The salesperson might say, “Sure, this car has high mileage, but that means it’s going to last a long time!” To detect the lie, you need to represent not only your own belief, but also the salesperson’s corresponding (true) belief that high mileage is a bad sign.

Of course, it may be the case that the salesperson really believes what she says. In that case, you would represent her as having a false belief. Since we lack direct access to other people’s beliefs and goals, the distinction between a lie and a false belief can be subtle. However, if we know someone’s motives, we can infer the relative likelihood that they are lying or expressing a false belief. So, the second capacity a robot would need is to represent “ulterior motives.” The third capacity addresses the question, “Ulterior to what?” These motives need to be contrasted with “standing norms,” which are basic injunctions that guide our behavior and include maxims like “be truthful” or “be polite.” In this context, ulterior motives are goals that can override standing norms and open the door to deceptive speech.

Maybe robots should start reading fiction too.