Gizmodo contributor George Dvorsky interviewed the authors of Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, in which they discuss why we might want to consider programming robots to lie to us:
Gizmodo: How can we program a robot to be an effective deceiver?
Bridewell: There are several capacities necessary for recognizing or engaging in deceptive activities, and we focus on three. The first of these is a representational theory of mind, which involves the ability to represent and reason about the beliefs and goals of yourself and others. For example, when buying a car, you might notice that it has high mileage and could be nearly worn out. The salesperson might say, “Sure, this car has high mileage, but that means it’s going to last a long time!” To detect the lie, you need to represent not only your own belief, but also the salesperson’s corresponding (true) belief that high mileage is a bad sign.
Of course, it may be the case that the salesperson really believes what she says. In that case, you would represent her as having a false belief. Since we lack direct access to other people’s beliefs and goals, the distinction between a lie and a false belief can be subtle. However, if we know someone’s motives, we can infer the relative likelihood that they are lying or expressing a false belief. So, the second capacity a robot would need is to represent “ulterior motives.” The third capacity addresses the question, “Ulterior to what?” These motives need to be contrasted with “standing norms,” which are basic injunctions that guide our behavior and include maxims like “be truthful” or “be polite.” In this context, ulterior motives are goals that can override standing norms and open the door to deceptive speech.
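The three capacities Bridewell describes can be sketched as a toy classifier. This is my own illustration, not anything from the interview or the book: the belief representations, the norm set, and the `classify_statement` function are all hypothetical, but they show how ulterior motives distinguish a lie from a mere false belief.

```python
# Toy model of Bridewell's three capacities: (1) representing the speaker's
# beliefs, (2) representing ulterior motives, and (3) standing norms that
# those motives can override.

STANDING_NORMS = {"be truthful", "be polite"}

def classify_statement(speaker_belief, statement, speaker_motives):
    """Classify a statement as honest, a likely lie, or a possible false belief.

    speaker_belief: what we infer the speaker actually believes
    statement: what the speaker said
    speaker_motives: goals the speaker holds beyond the standing norms
    """
    if statement == speaker_belief:
        return "honest"
    # The statement contradicts the inferred belief. If the speaker has an
    # ulterior motive that can override "be truthful", infer a lie;
    # otherwise the speaker may simply hold a false belief.
    if speaker_motives - STANDING_NORMS:
        return "likely lie"
    return "possible false belief"

# The car-salesperson example: she says high mileage is good, but we model
# her as believing (like us) that it is bad, and as motivated to close a sale.
print(classify_statement(
    speaker_belief="high mileage is bad",
    statement="high mileage is good",
    speaker_motives={"close the sale"},
))  # -> likely lie
```

Drop the ulterior motive and the same contradiction is classified as a possible false belief instead, which is the subtlety the interview highlights.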
Maybe robots should start reading fiction too.
Samsung’s new voice assistant Bixby has finally arrived, and unfortunately, it was accompanied by sexist descriptions for its male and female voice options.
Under “language and speaking style” in the Bixby menu, as several have pointed out on Twitter, the female voice was accompanied by descriptive tags such as “chipper, clear, and cheerful,” while the male voice was described as “assertive, confident, and clear.” After the tags were spotted and criticism circulated online, Samsung said it would remove the gendered hashtags, telling Gizmodo it is “working diligently to remove the hashtag descriptions from the Bixby service,” and it is “constantly learning from customer feedback.”
The subtitle to this article is “Why does this keep happening”.
I can’t tell you why this keeps happening but I’ll tell you this: the Korean-American women I know personally refuse to date Korean men from Korea because of their — pick your adjective — outdated, sexist, and/or backwards views on the roles of men and women.
via Daring Fireball
Economically speaking—that is, in the most brutal terms—truckers are disposable. Almost anyone can become a professional driver with a month or so of training, and most don’t stick around for long; median pay is about $40,000 per year, and the work is often unhealthy, painful, and lonely. Software engineers, on the other hand, are some of the best-paid, hardest-to-hire employees in the modern economy. The variety that Seltz-Axmacher employs—specialists in AI and machine learning—are even better paid and even harder to hire. Google has been known to pay its self-driving car engineers millions or even tens of millions. Starsky’s coders don’t make that much, but the point remains: In its cabs, side by side, are representatives of some of the most and least promising careers in America.
Self-driving tractor trailers are replacing truck drivers, but truck driving is an under-paid, unappealing job. In this respect, it’s a perfect place for AI, but it doesn’t address the 3.5 million Americans who drive trucks for a living and need income.
Samsung has released a preview of their AI assistant ‘Bixby’ to beta testers:
First things first. If you registered to be a beta tester, make sure you’re running the current version of Bixby by going to its “about” screen. Download any updates that appear there. I also had to clear the data and cache for Bixby apps in my S8+’s settings screen before Bixby Voice appeared. Once it does, you’ll get a tutorial that involves teaching you how to trigger the voice feature and then teaching it to recognize your voice.
This already sounds like a great user experience. Clearing cache, tutorials. Bixby sounds even more beta than Siri was when Apple launched it.
Even the essential task of text messaging someone is surprisingly hard to pull off. For one, you’ve got to use Samsung’s Messages app as your default SMS app. And if you don’t word things exactly right, it won’t happen. “Text mom and ask ‘how are you’” sent me to a Google search. “Send a text to mom and ask ‘how are you’” worked — but still necessitated a few taps to fire off the message. What’s the point of voice, then? Google Assistant nailed it with a single attempt.
One more nugget:
Compared directly against Google Assistant, Bixby Voice is in for some embarrassing showdowns. Until things get better, a lot of people will be asking “What’s the point?” I’m not really sure Bixby Voice saves you much in the way of time since it often runs through the same menus and screens you would with your finger when performing tasks.
Haha. Good luck with this, Samsung.
What I love most about this is that Samsung phones run on Android, and Google already has Google Assistant, which is far more advanced than Bixby — but Samsung wants to differentiate itself from the sea of other Android phone manufacturers.
I suppose in that respect they’ve accomplished their mission.
Let’s face it, jobs have always been a necessary evil. A means to an end — unless you’re one of those people who has a career and loves what they do. Showoff.
Computer scientists, programmers, engineers, and all the other smart people on planet Earth have been working hard to ensure we don’t have to work hard anymore.
Over at Boing Boing, Cory Doctorow describes a study done by university economists who concluded these jobs ain’t coming back:
In Disappearing Routine Jobs: Who, How, and Why? economists from USC, UBC and Manchester University document how the automation of “routine” jobs (welders, bank tellers, etc) that pay middle class wages has pushed those workers out of the job market entirely, or pushed them into low-paying, insecure employment.
The study links the phenomenon of unemployment with automation — an obvious-seeming connection — by establishing a causal relationship that goes beyond mere correlation: people stop working because robots take their jobs. Robots don’t just co-occur with unemployment, they cause it.
The share of Americans working in routine jobs has fallen from 40.5% in 1979 to 31.2% in 2014, according to the paper. The federal government’s official measure of Americans age 16 and over who are working or seeking work has fallen from a recent high of 67.3% in 2000 to 62.7% in November 2016.
“Routine jobs are disappearing and more and more prime-age Americans aren’t working,” said Mr. Siu. “These things are two sides of the same coin.”
Last week the Guardian published an article by Justin McCurry on Japanese insurance firm Fukoku Mutual Life replacing 34 employees with IBM’s Watson Explorer AI:
A future in which human workers are replaced by machines is about to become a reality at an insurance firm in Japan, where more than 30 employees are being laid off and replaced with an artificial intelligence system that can calculate payouts to policyholders.
Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years. The firm said it would save about 140m yen (£1m) a year after the 200m yen (£1.4m) AI system is installed this month. Maintaining it will cost about 15m yen (£100k) a year.
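The article's numbers check out as a quick back-of-the-envelope calculation. Using the figures quoted (a 200m yen install cost, 140m yen/year in savings, 15m yen/year in maintenance), the payback period is:

```python
# Sanity-checking Fukoku Mutual's figures from the Guardian article.
install_cost = 200        # million yen, one-time cost of the Watson system
annual_saving = 140       # million yen saved per year
annual_maintenance = 15   # million yen per year to maintain it

net_annual = annual_saving - annual_maintenance  # 125m yen/year net
payback_years = install_cost / net_annual

print(round(payback_years, 2))  # -> 1.6
```

1.6 years — consistent with the firm's claim of a return on investment in under two years.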
Productivity, what’s that? Work extra hard so I can make the same salary and my boss can get a bigger bonus? Ha! Yeah right!
The problem with artificial intelligence and automation is that we’ve only solved half of the problem. We’ve figured out how to replace a human with a robot or a computer.
What we need to do now — what should have been done in tandem with the first part — is figure out a way to allow these redundant humans to stay employed through alternate means.
Some people say job automation and AI necessitate a system like universal basic income, while others say it’s not economically feasible.
I don’t know what the solution is, but we have to figure something out because shit is going to get real very soon.
Now, the Finnish government is exploring how to change that calculus, initiating an experiment in a form of social welfare: universal basic income. Early next year, the government plans to randomly select roughly 2,000 unemployed people — from white-collar coders to blue-collar construction workers. It will give them benefits automatically, absent bureaucratic hassle and minus penalties for amassing extra income.
The government is eager to see what happens next. Will more people pursue jobs or start businesses? How many will stop working and squander their money on vodka? Will those liberated from the time-sucking entanglements of the unemployment system use their freedom to gain education, setting themselves up for promising new careers? These areas of inquiry extend beyond economic policy, into the realm of human nature.
As with climate change, we’re only beginning to understand how robots and artificial intelligence will affect us and our employment on Planet Earth (at least those of us who don’t have our heads buried in the sand).
We can’t just automate every job, lay millions of people off, and expect things to work themselves out. The answer may be universal basic income or it may not, but countermeasures have to be taken to ensure we’ll be ok.
And for all those people who say, “Sure, but a computer will never be able to do my job in [insert your trade].”
Never say never.
From a New York Times story on yesterday’s deadly shooting in Dallas, in which five police officers were killed by a sniper:
“The negotiations broke down, and we had an exchange of gunfire with the suspect,” the chief said. “We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was.”
Robots and artificial intelligence are doing more and more of what we humans used to do, including kill other humans.
Marco Arment worries that Apple isn’t skating to where the puck is going to be:
Today, Amazon, Facebook, and Google are placing large bets on advanced AI, ubiquitous assistants, and voice interfaces, hoping that these will become the next thing that our devices are for.
If they’re right — and that’s a big “if” — I’m worried for Apple.
Today, Apple’s being led properly day-to-day and doing very well overall. But if the landscape shifts to prioritize those big-data AI services, Apple will find itself in a similar position as BlackBerry did almost a decade ago: what they’re able to do, despite being very good at it, won’t be enough anymore, and they won’t be able to catch up.
It’s interesting that Google riffed off of Amazon’s Echo with their Google Home, rather than what they’ve usually done in the past: riff off of something Apple has made.
One reason we easily dismiss the astonishing things computers can do is that we know that they don’t carry around a narrative, a play by play, the noise in their head that’s actually (in our view) ‘intelligence.’
It turns out, though, that the narrative is a bug, not a feature. That narrative doesn’t help us perform better; it actually makes us less intelligent. Any athlete or world-class performer (in debate, dance or dungeonmastering) will tell you that they do their best work when they are so engaged that the narrative disappears.