Oct. 6, 2020

Episode 9 - Robot Overlords

The Wolf AND The Shepherd discuss their love for robots, but do robots love all of us? Are we heading to a dystopia rather than a utopia? Is artificial intelligence going to be our undoing or will it help us to survive? Will your robot vacuum smear poop all over your floor?


welcome to this episode of the wolf and

the shepherd today we are going to talk


about our robot overlords whether or not maybe

there actually are

already robot overlords or are they

going to be our future

robot overlords you know

keeping track of us telling us what to

do all that good stuff with ai

and everything in the news right now

figured it was a good time to

get a little bit caught up on robots um

i think a popular meme for about the

last 15 years has been

i for one welcome our robot overlords

and i truly do because i don't think

they can mess it up worse than

humans do um but let me ask you

i know you're a big fan of robots even

the bad ones

when did you first

kind of think yeah robots are pretty

cool i know you're a big star wars fan

in the first star wars movie and you

were i don't know

maybe not even born when it came out or

barely born

um but you you couldn't help obviously

from the first star wars movie

not falling in love with r2d2 right i

mean to me

that was the future of robots of course

you know being a little kid you got the

robot uh

rosie from the jetsons you know and

she's taking care of the house but yeah

seeing that that of course being a

cartoon right but seeing an

actual robot an astromech droid r2d2


even though he beeped and booped you

know he didn't actually talk he had to

have some kind of a translator for him


but he was capable of doing so much

stuff then of course through the movies

he's kind of the secret hero he's always

there for all the goings on

he knows all about everything of course

we could get in a huge rabbit hole with

why didn't you know darth vader know

that he had the robot why didn't he

remember making c-3po but of course

in the star wars movies just full of plot holes


yeah um i think it's funny that

you know in kids movies and cartoons


robots are painted in a very utopian type

way but as soon as you get to an adult

and you start getting to the sci-fi

it becomes very dystopian like they're

out to kill us

basically it's like all the dreams you

had as a kid like oh yeah you'd have a

friendly robot which should be like a

super smart intelligent

dog type you know thing and then

you get to an adult and it's like no

they're just going to kill you right

well as a kid of course you're looking

at it and you're saying you know how

cool that would be to have one

you know that that's a kid that's

saturday morning cartoons watching

every commercial saying oh i want this

oh i want that well of course you want a robot


and you could get robot toys and all

that good stuff right but it was

never quite like actually having a robot


now we've arrived to where there are

actual robots whether they

are something that looks kind of like a

droid something that is as simple as

an alexa sitting in your house

a robot vacuum cleaner uh to some of the

automated robots in factories and


i mean there's tons and tons of

different robots and in tons of

different directions we can go with this

yeah but do you think um i think

when you're a kid your idea of a robot

is very very different when you're an

adult when you're an adult you want it

to take over all of your


do all your work for you basically do


take all responsibilities away from you

so that

you can just basically earn the same

weight you're earning but with

much less effort i think as a kid it's

more of that companion type thing

right you know um kids don't always form

great alliances with pets

they don't necessarily have the patience

or tolerance of when they don't act how

they want them to and i think you know


kids think of having a robot and it's like oh

this robot will do anything i want it'll

play any movie i want or you know do

whatever i want play whatever game i

want which obviously you can't get from

a pet i mean

well not only that but don't forget

about the part that

when a robot when you're you know sick

of playing with the robot right you can

turn it off yeah

uh you can charge it up you know plug it

into the wall or

or charge the batteries or however

that's gonna work you don't have to feed

it you don't have to water it you don't

have to pick up after it it's picking up

after you right i mean

there's not a pet out there that you're

not feeding and giving water

and keeping it clean and taking it to

the vet i mean worst case scenario with

a robot right

something breaks down in there it's not

this living breathing thing

and you're going to be able to get parts

for it and get it fixed and

in theory now it could last forever

right where you know a pet isn't gonna

last forever so you don't have that

impending doom of having to put the

robot to sleep or whatever

and then if you did decide that hey i

don't want this thing anymore

you don't have to take it to the pound

you can part it out if you want to

you can bust it up with a baseball bat

it's not a living thing

yeah and plus uh like gingers they don't

have souls so if you have to replace it

then there's

no real conscience at least there isn't


right yeah at the stage of the game

we're at right now there's not

but who knows what the future holds

there well i think

um one of the first introductions of

robots into the household

i saw was when they started doing the

robot pets you know the robot

dog you could buy from walmart for like

15 it would walk

it would bark it would wag its tail

and then it evolved into one which would

react to the sound of your voice

and if you said sit it would sit and all

this stuff

and then where they kind of messed up

was they introduced a lifelike one

where it would actually poop

and it's like well this is the whole

reason i got a robot dog so i don't have

to clean up the freaking poop

absolutely and that that's where it kind

of crossed the line it's like yeah i

want something

real but i want it real without the

consequences of all the bad stuff

right which comes you want real without

responsibility yeah

now why do you think um

like as like i mentioned earlier the

you know ai especially when you're an

adult it's painted as a very dystopian

future that

if robots or ai becomes

you know sentient self-aware

that it suddenly becomes a threat to

mankind and wants to destroy us

because it won't put up with our crap i

mean why

i think as you get older you realize at

least what ai is

right now in the infancy is just a bunch

of if-then statements

all right elon musk and his self-driving

all that self-driving car with that ai

is a bunch of if-then statements

so you kind of understand what's going on


versus that magical thought

as a child that you know here's

something that you know in your example

of the

the robot dog here's something that i

plug into the wall and charge up just

like i do my phone or anything else but

when i call its name it comes to me i

can tell it to sit just like a real dog


i don't have that responsibility this is

fun it's a toy
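the "bunch of if-then statements" description from a moment ago can be sketched as a toy rule table. this is an invented illustration only, not code from any real self-driving system, and the rules and thresholds are made up:

```python
# toy sketch of "ai as a pile of if-then statements" -- made-up rules,
# not from any real self-driving stack.

def decide(obstacle_m, speed_limit, speed):
    """return one driving action for one tick of a rule-based driver."""
    if obstacle_m is not None and obstacle_m < 10:
        return "brake"          # if something is close, do this
    if speed < speed_limit:
        return "accelerate"     # if below the limit, do this
    if speed > speed_limit:
        return "decelerate"     # if above the limit, do this
    return "hold"               # fallback: without a catch-all branch,
                                # an unanticipated input has no rule --
                                # the "robot freezes" problem

print(decide(5, 60, 40))   # obstacle close -> brake
```

the point of the sketch is the last branch: everything the system can do has to be anticipated by a programmer, and anything outside the rules falls through to whatever default was written in.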

and as a kid you never think that maybe

you're asleep

that night even after watching movies

like toy story right

you're asleep that night and that robot

dog is just gonna jump up on your bed


strangle you yeah now what what do you


um i know you used to work in insurance


what do you think the insurance premiums

are going to be like for a

completely ai driven car

because it's not about you know the ai

driven car making mistakes it's allowing

for mistakes for

human drivers on the road but you know

as well as i do when

somebody makes a mistake in front of you


you know breaks doesn't signal we can

react to that

but ai is gonna have to be pretty

sophisticated before it can allow

for human error so what do you think

that will be like in terms of if

you actually have a fully automated car


i think the liability for the owners is

going to be tremendous because

obviously if the robot makes a mistake

yeah right and there's no assets that

that robot

owns it's going to be the company and i

think it's going to get to that point to


kind of like when you're in an accident

right now at least in the state of texas

and you rear-end somebody 99.999 percent

of the time

it's your fault when you rear-end

somebody i think with the autonomous

vehicles and the ai driven vehicles

it's going to have blame immediately

assigned to them 99.99

of the time because all it's going to

take is a lawyer to get a hold of this

and say

if there would have been a human behind

the wheel they would have been able to

react just like in your example right

but because it was a computer we're

going to fault the computer

yeah and that's that's where it's going

to go so i think you're going to see

a decline in the insurance premiums

and then you're going to see them go

right back up depending on the amount of accidents


until it gets to the time like you say

where it might be able to

anticipate those human errors that are

actually causing the accidents

yeah i am i actually searched

uh on google through about three four


looking at automated vehicles and the

development obviously across the last

four or five years and it

just seems to be making incredible strides


but the um i actually saw it come up

three or four times and

it said that if we can get the liability


robot or automated uh driven vehicles

down to fewer accidents per head of

population than

asians then we've kind of made it

which sounds kind of racist but i read

that on google like four or five times

that's not my invention that i actually

read that

sure and like you say we have made

lots of leaps and bounds in that ai

or automatic driving or autopilot or

whatever you want to call it

i i remember several years back we're

gonna say this is probably around 2005

or so

i took a uh i wouldn't call it a test

drive it was kind of a

a loaner drive where i got to take out a

bmw convertible

for like a two hour drive and it had

automated cruise control in it and it

had a little radar system in it and

you set the speed that you wanted to go

and the car would accelerate and

decelerate itself it would stop itself

all you had to do was steer right and i

remember driving that car

thinking all i'm doing is is turning the

steering wheel

i'm i'm not touching the gas pedal i'm

not touching the brake

i'm just steering this how much longer

is it going to be

until i don't even have to touch the

steering wheel right i never would have guessed


it would have been less than 15 years

because now you see the videos with the

the teslas and they're kind of driving

themselves even though they're not

technically fully autonomous but

they're basically driving themselves now

crazy imagine where we're going to be in

another 15 years

we're truly going to get to the point to


where they're autonomous yeah and then do we stop


driving do people just say well what do

i need to drive for

you know as kids start turning 15-16

do they say well what do i need to drive

for like my kids right now none of them

drives a standard shift car

they don't know what a clutch is they

they couldn't get into a manual

transmission car and drive it

they've only seen to my knowledge

they've only seen one

in their life and and they thought it

was foreign it's like them looking at a

cassette player

and eventually they're gonna say well

why does this car have a steering wheel

and a gas pedal and a brake what are you

supposed to do with that don't you just

get in and type where you want to go and

sit back and relax

it's probably where we're going to end

up yeah now um

i know it's a generational thing um


what do your parents think about the


advance of technology i mean both my parents are

dead so it's a waste of time asking them

outside of a seance but um what what do

your parents kind of think about

technology do they still try and hang on

to the old days or

have they kind of slowly tried to embrace


this kind of intrusion into their lives

of uh

you know ironically my parents have

actually embraced it quite well

they have alexas in their house they

have smart light bulbs uh you know my

dad loves to tell her to turn the light

on and off ask her what the weather's

gonna be

remind him to do stuff order dog food oh

yeah remember

yeah you know everything like that uh i

will tell you though when he got

his first car that had a built-in

navigation system yeah

i showed him how to use it and i said

well all you do is you type in where you

want to go and it's going to tell you

where to go

and he said well that's kind of neat and

so i was with him

in the car he typed in the

destination he knew exactly where he was

going right so we typed the destination in


and all of a sudden we're going down the

highway and the thing says you know take

the next exit

and he started yelling at it he said

well i don't want to turn here

and then of course he goes on past the

exit and then the thing says recalculating

take the next exit and he said no i

don't want to turn here and he was

yelling and screaming at the car and i

said now

it's probably best you don't use the

navigation right do you remember that

episode of the office where he drives

into the lake yeah

yeah because he trusts in the gps yeah

my dad isn't gonna trust the

gps but ironically if you got in the car

with him

right now he has the screen up that

shows the little dot of where his car is

because he likes looking at the map

but he doesn't want that car telling him

where to go right he wants to be in

control of that car

would he ever get a car that drove

itself i would say no

he would never relinquish that kind of

trust to a piece of equipment

now that leads us on to you know


ai technology is not infallible

um i remember when i first started

getting involved with the computers

you know i was a dork when i was like 11

or 12 got my first personal computer

and so um you know basically to input a

game into it i had to enter lines and

lines of code and you

got one single thing wrong it just

didn't work

or you had unexpected results

and nowadays you've gone from

you know systems where you know you

maybe only have 32k

of code to multiple

gigabytes of code going behind

programming stuff

and obviously a lot more money invested

to make sure it's correct

right so where now we see malfunctions

in society in terms of ai

you have to think well is there ever

going to be a perfect

system where we can guarantee that


robots don't mess up um if you go to the first

robocop movie you remember when they

did that first kind of cop

thing and it was 20 seconds to comply or

something like that

right and it you know shot the guy and

made him pretty much into a tea bag

exactly um you know is there ever going

to be a point

where we can trust i guess the

programmers behind the ai and will ai

successfully eliminate those human

errors to overcome that to become

completely reliable

i think that's the magic question that

all these people are out there

trying to figure out because you have

that paradox

problem with logic because a computer

no matter how far advanced it is it's

going to use

logic if this happens do this

if this same thing doesn't happen do something else


and if something is presented to them

that doesn't

follow along that line and they don't


have the capability of making the decision

they'll freeze

that there's no way for them as of right

now at least to my knowledge it's not

like i'm

you know at boston dynamics right now

figuring this stuff out and i'm sure

these guys are doing that

but as of right now that there's no way

to put

emotion into it or feelings into it or

something like that

it it's all black and white all cut and dried


all point a to point b there there's no

in between with them

i don't know how you would ever get

there with the technology we have now or


even in the future i i don't i just

don't see

how we're ever going to get there with

them being able to problem solve

outside of something that somebody

already presupposes and can program in


the old saying garbage in garbage out to

a computer at the end of the day

it's a computer right garbage in garbage

out whatever you tell that computer to

do it's going to do it yeah but if you

don't tell it to do it

it's it doesn't know what to do well i

think there's a certain amount of

allowability that

you know ai driven systems

have to take into account

they're serving humans i mean you look at

a smart tv

if it's really a smart tv and it was

you know just for other robots or ai

driven systems you wouldn't need

any audio or visual because that code

itself and the binary

you know which is coming through it's

like well i don't need a picture i don't


need audio because i can read this and i know

exactly what it sounds like and what it

looks like

that's just for human consumption well


but even in some of those like you say

the smart tv right

uh i know there's been tons of jokes on

the internet about

like netflix or or hulu or one of those apps


saying oh because you watch this show or

because you like this show you

you might like that show yeah and

they're hilarious in how

bad the responses are yeah i think the

what what's the movie the uh

the centipede movie or the the horror

movie or whatever yeah the

human centipede one two and three

yeah i i saw a deal that said because

you liked a bug's life

yeah you might like the human centipede

yeah that's one of the more famous ones


but once again it's using that algorithm

and saying you know oh you like

insects this has an insect in the title

you probably like this one too
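the "insect in the title, you probably like this one too" logic can be sketched as a naive tag-overlap recommender. the titles, tags, and scoring here are entirely made up for illustration, not how netflix or hulu actually work:

```python
# hypothetical sketch of a naive recommender that only counts shared
# tags -- invented data, not any real streaming service's algorithm.

tags = {
    "a bug's life": {"insects", "animated", "family"},
    "the human centipede": {"insects", "horror"},
    "toy story": {"animated", "family"},
}

def recommend(liked, tags):
    """rank other titles by how many tags they share with the liked one."""
    want = tags[liked]
    scored = sorted(
        ((len(want & t), title) for title, t in tags.items() if title != liked),
        reverse=True,  # most shared tags first
    )
    return [title for score, title in scored if score > 0]

print(recommend("a bug's life", tags))
```

because the sketch has no notion of tone or rating, one shared tag is enough to surface a horror film next to a kids' cartoon, which is the joke in the episode and the reason parental controls sit on top of the recommendations.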

and this is why you have parental

controls on you know your netflix

because it's like

yeah i like an animated um

you know bug with the anthropomorphic features


um hey would you also like to see a

human mouth get sewn to the human anus

right yeah so it goes off but again

going back to this dysfunctionalism in ai


um there's been a few high-profile

stories across the last couple of years

which you know have done their rounds on

the internet

um basically showing the big breakdown

in even the most advanced robotic

systems where people felt confident


and there were enough multi-billion

dollars behind these projects to put

them out

in the wild in the public but

it just went disastrously wrong oh

absolutely then one of the

one of the main stories that you know

you're kind of alluding to here that

made us want to talk about the

robot overlords was a report from

los angeles and the title of the article

was police robot told woman to go away

after she tried to report a crime then

sang a song and this is by jimmy

mccloskey via metro dot co dot uk

a high-tech police robot told a woman to

go away

when she tried to report a crime then

trundled away while singing a song i

mean you

you just hear that statement and you

think to yourself

i'd have given a nickel just to be

there to see this yeah

you've got to give some good props to

that robot because even when it

went off and it was playing it wasn't

just a song

it was something like as she described

it some intergalactic

space theme song right um so

intermittently still tried to do its job

and just told people

please keep the park clean as it

wandered off and she could hear that

going off in the distance

yeah uh cogo guebara i'm i hope i'm

pronouncing that right

rushed over to a motorized police

officer and pushed its emergency alert button


on seeing a brawl break out in los

angeles but instead of

offering assistance the egg-shaped robot

whose official name

is hp robocop barked at guebara

telling her to step out of the way to

add insult to injury the high-tech

device then rolled away while humming an

intergalactic tune

pausing periodically to say please keep

the park clean

yeah just absolutely fantastic yeah i


the the irony behind these things is

that sucker they're saying costs between

60 and 70 thousand dollars a year to lease

right 60 to 70 grand a year

to lease and the company that made it is

saying look they're still in the trial

phases the alert buttons haven't been

activated yet

so you know at that point i kind of give

them the benefit of the doubt i'd have

probably put something

over the alert button there right to not

give somebody the ability to even push

it or whatever

but another irony behind the article is

that the woman

finally ended up calling 9-1-1 and it

took the cops 15 minutes to get there

and the brawl was long since over with

so there you go on that one but

the same robots now not the

not the exact same one that was in the

park right but the same kind of robot

has already had two other incidents

believe it or not

so looking this up so not so not the

same robot so it's not like a priest

which has been accused of pedophilia and

he's just been moved to another parish

right actually a different model of the

same robot different well

same model but different robots so so

you we'll call that dude robot number

one right this is robot number two

all right okay so robot number two

actually struck a child while patrolling

a mall

in california's silicon valley so i mean

crazy stuff there of course that's it's

just a snippet out of that

article that uh it struck a child so

when you say struck do you mean he just

kind of ran over him or he kind of

reached out an arm and [ __ ] see that

that i don't know

i mean i'd i'd honestly like to know i

don't think the thing has

arms because why would it be walking

around saying please keep the park clean

when it could pick up the trash

which kind of goes back to your whole

deal about you know you want

the robot to be able to do that kind of

like a roomba vacuum

but uh you've got something on the

number three i'll just kind of give the rundown


uh but there was a third one that was


in washington dc yeah um

it was called knightscope k5 i think i

don't know if that was the model or

actually his name

yeah same model yeah that's the name

doesn't exactly roll off the tongue

but anyway it's employed as a security

robot i don't know what forms he filled

out to get the job

um but it was a communications agency in

washington dc

and uh they figured other than the


fixed cameras in the mall that it would be

better to have a robot

going around with wandering cameras to

like detect crime

and it was constantly sending back data

like day after day after day and it was

supposed to

have some learning analytics with it

and so all the information it was

sending back was then getting all

crunched and then

sent back to the robot to like all make

more autonomous decisions like

you know if you don't think foot locker's

gonna get broken into then don't worry about


you know the party and card shop or

whatever so anyway after

just after a week this robot

got back so much information that it

stood still

on the bottom floor of this mall

wouldn't react to any remote instruction

and then suddenly went off and drowned

itself in the fountain

it committed suicide it had taken in so much

information on human behavior and

decided to kill itself right

i mean there's i'm sure plenty of us out

there that have had jobs that after a week


yeah you know what yeah yeah you

want to kill yourself

yeah but but most people of course you

know they'll just go quit their job and

go move on but the poor robot

you know he knew he was stuck so he says

you know okay

well i guess i'll just go hop in the

pond then and drown myself

yeah once again another one that i'd

like to have paid a nickel to be able to

see right

yeah i'd like to be the person fishing

it out and the robot being like leave me

right leave me let me drown so there are

some other

kind of interesting incidences with

robots that we've found in the news

and this one's not really more ai

centered or whatever it's just

basically an experiment and it kind of

goes to

maybe where the human psyche is pointing

towards these

robots but there was one that and you'll

love this

the name of the robot was called hitchbot


and so hitchbot was an invention by

canadians it was actually a professor

guy by the name of david harris smith of

mcmaster university

and frauke zeller of ryerson university

so they made this robot in 2013

and it gained all this attention because

it hitchhiked

yeah it hitchhiked across canada germany

and the netherlands so what would happen

was this robot would sit there and

people would pick it up

take their pictures with it and they

would drive it across and eventually

drop it off somewhere and somebody else

would pick it up

of course it you know naturally it's got

its own social media account and all

this stuff right

so you know you can picture these guys

up there in canada they're

building this robot they have this

experiment it goes all the way across canada


goes around germany for like 10 days and

then across the netherlands

and so it got all kinds of good press

and everything

so then they decided well

after its 10 days in germany and then

three weeks in the netherlands

i mean what do you got to do you got to

bring it to the u.s right so

they decided they were going to attempt

to start it in boston

and see if it could get all the way to

san francisco

now hitchhiking all the way across east

to west across the united states what


when it reached the city of brotherly love


well so this little mission for the

robot started on july

17 2015. after

two weeks it made it to philadelphia

and on august 1st 2015 someone tweeted a

photo that the robot had been

stripped and decapitated in philadelphia

and to this day the head was never found

right yeah i remember seeing photos of it

i think

some of its arms were like pulled off

its legs and it was just disheveled on


the sidewalk it's like welcome to the city

of brotherly love

and i think you know robots i think ai

if it becomes sentient is going to

understand that humans are so

unpredictable that they can never be trusted


you can never trust one human

over another that you never know that's

going to be the human who is going to

try and mess stuff up

and so you don't want to call it a

suspicion because

you know that's an emotional response


why would robots trust humans

well do we actually put that into their programming


uh if you remember what was it knight rider


way back in the day so that technically

would have been a robot car right

yeah it was supposed to preserve human

life at all costs

so it's probably in theory would be put

into the programming

but couldn't there be something where

the robot itself could change its own

programming it's the

same way you you like certain kind of

music and after a while then you change

your taste in music or

you know you you like wearing black

t-shirts and one day you say well now

i don't want to wear black t-shirts

anymore i want to wear red t-shirts so

if you put

this much capability behind that

ai brain couldn't it just change its own programming


well yeah but you have to remember as

well when you work

with completely logical

programming that they're going to make

assumptions which

bypass our definitions of racism and sexism


and everything i mean there was a famous

case of a robot

who was supposed to be um


you know looking at passport photos and

matching it up with certain

stuff and uh it actually rejected an

asian guy's passport photo and the reason

which it

um sent out was eyes looked

closed oh wow and it you know it's like

you're not teaching a robot you know

racism but if you know a robot

can't measure you know a certain

you know level of the eye and stuff and

it sees somebody who's like well i can't

see enough of the iris or blah blah blah


oh eyes look closed open your eyes wider

now obviously the robot's not being racist


but it brings in this whole thing of

you know can you ever truly

have a robot which is going to be able

to distinguish between


whether it be certain sexes and certain

races and

i guess be um sensitive

to differences in human behavior

you know i mean like if you've got

somebody who's disabled and you've got a

police robot

you know and it's like the person's you

know getting across

the street yeah are they going to give

them a ticket for not walking fast

enough because they're in a walker i

mean what

where where does you know well you know


that's a interesting way to look at it

you could also say what if that

guy is in a wheelchair right and the

robot pulls up

and part of its programming is to say

stand up

get down on your knees and put your

hands behind your head

and the poor dude in the wheelchair

can't do that

you know he can't get on his knees they

you know they're telling you know get

out of that chair

and get on your knees and if the guy's

paralyzed paralyzed from the waist down

how's he supposed to do it

yeah and what if the programming is

almost like that robocop and it

blasts the poor guy sitting in a

wheelchair and there's nothing he could

do about it yeah

and that might be one of those kind of

uh non-pc

versions of darwinism where

because we've now got to the point where

we're too pc to be able to make

you know any type of eugenics type

decision we'll let the robots do it for

us it's like you can't move fast enough


just going to kill you yeah or or and to

go back

a couple of podcasts ago what if the

robot comes up to

a little person and thinks it's a child

right yeah

and maybe there's programming in that

the robot itself speaks to children

differently than it speaks to adults

right kind of trying to keep that

friendliness of a police force

and now you got a bunch of little people

midgets dwarves whatever we

still haven't decided what to call them

and now they

are discriminated against by this uh robot


because they're short because they're

little and so it treats it like a child


we proved in the podcast couple of

podcasts ago

they don't like being treated that way

right what if it pats it on the head

well this is the thing i mean like if

you get a 48 year old [ __ ] who's

trying to buy a six-pack of miller lite

and you know the robot the convenience

store robot

decides oh that's a child because it's

four foot

two inches put down the beer you are not

able to buy this and ends up tasering

the [ __ ] who dies i mean

yeah i i think we're in for some very

interesting news stories as our robot


overlords start you know taking over but there are

of course some harmless

well and you know harmless maybe that's not


quite what we want to say but there are

some conveniences that happen i mean


you've got the roomba right yeah i've

never actually got one i know you had

one at one time

i know my parents actually you you ask

about uh them adapting to technology

they've got a roomba you know that they

can say

alexa vacuum the floor and they sit

there and they you know watch their

little roomba vacuum the floor

so maybe it is just a matter

of keeping it simple the old keep it

simple stupid method

but once we figure out that we've

mastered something simple it's

very very difficult for us to figure out

how that simple thing once we try to

make it more and more

what we think is effective can actually

come back to bite us

yeah i think it's you know you go back

to the roomba

thing i had a uh it wasn't an actual roomba

it was a knockoff roomba i got off


i think it came from china because it

took like eight weeks to get to me

and you know it was pretty good

but i had two cats at the time and one

of them was a maine coon

cat you know long hair a lot of hair and

yeah you know it continually

kept clogging up but it was supposed to

learn the map

of my apartment right after a few things

after it bumped into the walls it was

supposed to put that into itself so

it's kind of like drawing itself a

little map well allegedly yeah

but it it was just like living with a

poltergeist because every time it went

off i mean all you'd hear is


as it banged into walls every surface i

mean continually i mean that was what it


i mean it would drive along the wall it

reached the end of the wall and like

you know on day 19 it'd still be like

it hit the wall yeah learned absolutely

nothing
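the bump-and-learn mapping being described can be sketched as a toy example — the BumpMapper class and its grid codes here are made up for illustration and are not the actual roomba algorithm:

```python
# toy sketch of bump-based mapping (not iRobot's real algorithm):
# the robot marks cells it crosses as free and cells it bumps into
# as walls, so later passes can steer around them instead of
# hitting the same wall on day 19
WALL, FREE, UNKNOWN = "#", ".", "?"

class BumpMapper:
    def __init__(self, width, height):
        # start knowing nothing about the apartment
        self.grid = [[UNKNOWN] * width for _ in range(height)]

    def record_free(self, x, y):
        # drove over this cell without incident
        self.grid[y][x] = FREE

    def record_bump(self, x, y):
        # hit something here: remember it
        self.grid[y][x] = WALL

    def is_blocked(self, x, y):
        return self.grid[y][x] == WALL

mapper = BumpMapper(4, 3)
mapper.record_free(0, 0)
mapper.record_bump(1, 0)
print(mapper.is_blocked(1, 0))  # True
```

the knockoff in the story apparently never did the record_bump part.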
and um even with the roombas now

you know depending on how much money you

pay for the model

the avoidance technology is very very

different because if you know you've got

a dog that's pooped in the living room

if you've bought the cheaper version of

the roomba it will just

run over that and you're going to have

poop smeared everywhere whereas if

you buy the more advanced model

with the lasers and it

goes oh yeah i'm going to avoid this um

you know i'll actually do an

alert or something or other but yeah

depending on how much you want to pay

for your level of roomba it will do more

things for you i mean you get the ones

now which have even got the uh

stain removing stuff you know they've

got a couple of uh

um yeah pour in the chemicals and it

will actually

and it's got a hard brush on it and

actually get rid of stains out of the

carpets but like i said the cheaper end


if your dog's pooped on the floor that's

going to get smeared

all over the bottom floor of your house

initially i'd like to say well based off

that logic right you get what you pay

for right so

you know you're gonna you get the cheap

one it's gonna

drag poop all over the floor you get the

expensive one then it's gonna take care

of everything

but unfortunately i can't follow that

line of logic because those

cop robots are leasing for 67 grand a

year
yeah and so they're not doing anything

so so maybe

it's not really even how much money you

sink into it

it's that carefulness of creating it

or do we keep it even simpler than that

and go more virtual assistant

like an alexa right i mean you got your

alexa and of course there are stories

about those things getting hacked and

all that or listening

about stuff i mean i i have several of

them in my house

actually hooked one up to the printer

now so you can tell it you know alexa

print my shopping list and by the way if

anybody's listening to this right now

in their house and their alexas are going

off sorry about that

we don't have the little fancy thing we

can put underneath it to make them

not do it so uh but anyway

the other day i get this uh alert

on the alexa the little ring turns

yellow

all right and so i said you know hey

what's my alert i figured you know

there's an amazon package here

yeah no it told me my printer's low on

ink

wow so so she's actually

talking to my printer and my printer is

saying hey i'm low on ink

why don't you let my owner know that you

ought to order some ink

so i tell the wife you know you're not

gonna believe this

she just told me that i need to order

ink so then

on top of that i look at my phone and

guess what

i've got an email from amazon where all

i have to do is click a button

and it's already found the ink for me

and everything else i just click

buy it now or or add to cart or whatever

it is

and boom they're gonna ship me my ink so

crazy but

you know that's just trying to be

helpful but i i'm not sure i subscribe

to the whole you know

she's listening i'm trying not to say

her name anymore so we're not setting

them off

but alexa play salsa music

yeah there you go so so now that that

happened now people stopped listening to

it because it just

switched the podcast they're listening

to and now they're dancing wow

all right see nobody can say we're not

thoughtful yeah no no

enjoy your dance yeah but i'm sure

there's probably some crazy things that

have happened with these alexas

yeah now do you think that

scientists should go the route of pure

ai learning systems as in they learn

and make judgments

and we should restrict human interaction

in terms of oh in this situation do this

do this because

like we were talking about earlier about

you know the [ __ ] trying to buy the

beer and then getting tasered

um you know obviously mistakes are going

to happen

and the same thing when you have you

know autonomous driving systems but

do you think humans should try and limit

their input because the human input

is always going to be very

individualized you know you're always

going to have a person with opinions

political beliefs religious beliefs

and so whatever they try and you know

put into that programming

the knock-on effect you know it's going

to result

in something which isn't necessarily

logical it's more judgmental based upon

the original programmer's input well

i think the easiest way to

describe that would be you got a kid

right and and mom and dad are raising

that kid a certain way with their

political beliefs their religious

beliefs

uh who their favorite football team is

supposed to be

all that good stuff but then eventually


and nowadays it takes a little bit

longer it seems

but eventually that kid moves out and

then starts making decisions on its own

with a robot do you have some kind of a

switch in there that says okay

we've put 10 15 20 years of

training into you of upbringing of

rearing this robot and then you can

switch it off to say now make your own

decisions

think about how many kids start making

poor decisions

whenever they don't have mom or dad

watching over them so

using that logic i would say there's no

way that we can just

let them go off and do their own thing

yeah but if we make them too powerful

to where they can ignore

us and overtake us that's the scary part

that's that's the terminator movie

yeah i think there was a hack for i

think the sims

3 game and it allowed

um to completely overwrite

the actual game's decision making process

and once you turned it off the humans

in the game were actually reacting to

the stimuli and the decisions made

that the rate of suicide within the game

of the computer-controlled characters

was like 10 times higher

wow than normal because you know it's


i i don't know you know you have to take

into account that you know when people


play the sims they're making them do things which

they otherwise wouldn't necessarily do

in real life right

and you know perhaps pushing them in

social situations which are stressful

or risky or whatever but once they took


away that um barrier of behavior

they actually had like the characters

would actually kill themselves like it

was like a far higher ratio

than humans otherwise would and i think

there's this thing about predictive

learning behavior um

you know twitter actually funded a

project which was supposed to be used in

an ai chatbot and

it was basically like okay we're gonna

set up a chat room

and you're not gonna know who the robot

is in this room so there's eight people

in this room

one or maybe three people may be a robot


and um it was a social engineering

experiment i can't remember which


company um was sponsoring it and within a day

these robots went from being like

humans are super cool and they went

full-on nazi within 24 hours i mean the


last comment i think before they actually

shut down the chat room was

hitler was right i hate jews wow and

this was based on a

learning system and it's just how the ai


i guess the personality and the

viewpoints of all the human input

right that how quickly it could go rogue

and go down the wrong

path based upon people who probably

didn't even have those views

but just wanted to mess with it yeah
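that failure mode — a bot that learns from whatever users feed it — can be sketched with a toy markov-style chatbot; NaiveChatBot here is invented for illustration and is not the actual system from the story, but it shows why unfiltered learning goes rogue: with no filtering, its output is exactly as good or as bad as its input.

```python
import random
from collections import defaultdict

# toy sketch of unfiltered learning: the bot records word-to-word
# transitions from every message it sees, so people feeding it
# garbage directly shape what it says back
class NaiveChatBot:
    def __init__(self):
        self.transitions = defaultdict(list)

    def learn(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.transitions[a].append(b)

    def reply(self, seed, length=8):
        out = [seed]
        while len(out) < length:
            options = self.transitions.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

bot = NaiveChatBot()
bot.learn("humans are super cool")
print(bot.reply("humans"))  # humans are super cool
```

feed it a day of trolling instead and the same code cheerfully echoes the trolling back.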

well maybe the safe bet then

is for ai for robots to

still remain in that servant capacity

right so you hear about uh

i think the the latest one that i read

about was a robot called flippy

and he's you know basically a robot that


is going to be able to work in a fast food

restaurant and based off of the amount

of time that it takes to cook french

fries the amount of time it takes to

cook hamburgers he's going to make these

decisions to make the most efficient

kitchen in a fast food restaurant run

you have

robots that operate in warehouses and


decide where they want to put material

in the warehouse based off how much it

moves and the fast-moving stuff stays up

front and the slow-moving stuff goes in

the back
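the slotting logic they describe — fast movers up front, slow movers in back — is in essence just a sort by pick frequency; a minimal sketch with made-up SKU names and bin labels:

```python
# toy warehouse slotting (made-up data): rank SKUs by how often
# they're picked, then hand the fastest movers the bins closest
# to the front
picks_per_week = {"widgets": 120, "gaskets": 15, "bolts": 300, "flanges": 4}
bins_front_to_back = ["front-1", "front-2", "back-1", "back-2"]

# most-picked first
ranked = sorted(picks_per_week, key=picks_per_week.get, reverse=True)
layout = dict(zip(ranked, bins_front_to_back))
print(layout)  # bolts and widgets land up front, flanges in the back
```

a real system would re-run this as pick rates drift, which is exactly the "decide where they want to put material" behavior being described.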

or you have painting robots

painting cars and robots putting cars together


but all of these things that i'm

describing right now

is actually taking the jobs away from

your regular folk right which is

something i believe andrew yang

was speaking about back when he was

running for president that that was

that was one of the things he was most

scared of was

was not what a.i was capable of but

more of the job loss that's going to

happen by some of these things so

so how do we counteract that well i

don't think

if you want to remain competitively

you know on an international basis that

you can

skip using automation see like on


production lines you can't do it because

china or somebody else

is going to produce a car for like 400

and it's going to cost us 17 000

to make a car right i mean you know it's

estimated that up to you know 20 million

manufacturing jobs

are going to be replaced by

robots you know by 2030

and so i mean basically as humans yeah

you're just going to have to find a

different skill set because there are

certain things as we've proved

you know which cannot be replaced so

find a different skill set i mean

i would rather trust flippy to make me a

burger

than some random person who woke up you know


still high on weed from the night before

can't remember how many pieces of tomato

they've put in there or even a piece of

cheese and

you know i've got goodness knows what in

the bag through the drive-through i'd

rather trust

flippy because i want every time i go

through that drive-through

service you know i want

a repetition of good quality service i

don't want it to be

potluck every time i go through

absolutely and not only that but flippy

like you know just to

kind of use him as an example and not

that i'm trying to

misgender flippy by any means i do not

want to offend flippy uh flippy might be

he she they them but

flippy is not gonna get sick flippy is

not gonna come in late for work

because he's already there or it's

already there

right not going to get sick not going to

break up with its significant other and

be in a bad mood one day

so you always know what you're going to

expect from them

and sure it's going to need some

maintenance but a lot less maintenance

than a human

needs yeah plus it's not going to spit

in your burger if you're a cop

ooh but what if later on they

they have some disdain for some people i

mean you know

put a little bit of the lubricant in

there yeah so

now looking forward into the future

i mean it excites me

but doesn't frighten me about the whole

robot overlord thing like i said

when i when we first started the podcast

i said one of my favorite memes over the

last 15 years has been

you know i for one welcome our robot

overlords because

i don't think they can mess it up more

than humans can and if they come to the

sentient decision that

humans are a waste of space and they

decide to wipe us out

and it goes full-out terminator then

that's also going to be exciting on a

certain level well

and let's be honest maybe they're right

yeah what

what if they're right and they're

they're not going to have the

ego problems that humans have right but

what what if

we get to that point and because we put

so much

logic so much thought and everything

into these robots

that they realize that you know humans

are the problem remember

what was it uh i think it was

independence day

where they were talking about how the

you know humans were

a cancer and you know stealing all the

world's resources and all that

i maybe that wasn't independence day i

can't remember but it was

one of those movies like that and oh no

it was uh men in black

with the with the roach guy where he's

like you know y'all are the bugs y'all

are the ones that are tearing up your

planet and all that so

the robots could very well say hey you

know we we don't want

y'all tearing up this planet you know

this is our home we're going to get rid

of y'all

yeah it could happen and that i think in

the first matrix movie

when agent smith was saying that you

know humans are a virus

you know they destroy everything they

touch they grow

you know and from a logical perspective

why would you not want to eradicate that

virus or at least

neutralize it it makes sense i mean

there's no reason you know if you don't

have emotion

built into these ai learning systems


you know we are no different than

a virus or right you know mosquitoes or anything

else why not eradicate us

because but then i kind of think

after that situation imagine if the

first terminator movie had

ended with the robots destroying all

humans

where do robots go from there where what

is their purpose

i mean other than to replicate you know

build more copies of themselves what is

the future

they are trying to build because then

you get into the kind of borg

situation of like star trek where the

thing is just uh

assimilate everybody and it's you know

the entire universe is just

this replication of this hive mindset

but then

you know without consciousness or a

purpose

what is the direction for ai

maybe it makes its own decision

eventually that it doesn't need to exist

anymore and shuts

itself down right i mean there

uh could very well

be a time where they just sit there and

they decide you know they've eradicated

us
and they've said you know hey you know

we're done uh we

made the planet a better place in

their own eyes

they got in in your example they got rid

of the virus in my example they got rid

of the disease

yeah whatever else and and maybe they

consider themselves mission complete

and so they just shut down now do do you

remember the

movie war games which came out in the

80s oh yeah i think it was

was it yeah yeah yeah it was it was

but um you know he was a hacker and he

had hacked into a computer system

and you know that obviously held nuclear

codes and at that time we're in the

middle of the cold war right or towards

the end of the cold war

and basically

you know the system had autonomously

decided to launch nuclear missiles

towards russia and you know russia's

autonomous system had

decided to send missiles back and so

uh matthew broderick's character decided


in the terms of you know mutually

assured destruction

mad as it was called at the time that

let's get the computer to play a game of

tic-tac-toe

right and the computer went through

thousands upon

thousands of these games and it said the

only way to win

or the only way not to lose is just not

to play
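the computer's conclusion can actually be reproduced: a plain minimax search over tic-tac-toe (a standard textbook sketch, not the film's code) confirms that with both sides playing perfectly the game's value is a draw — you can't win, so the only winning move is not to play:

```python
# exhaustive minimax over tic-tac-toe: X tries to maximize the
# score, O tries to minimize it
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0  # board full, nobody won: a draw
    scores = []
    for i, cell in enumerate(board):
        if cell == " ":
            nxt = board[:i] + player + board[i + 1:]
            scores.append(minimax(nxt, "O" if player == "X" else "X"))
    return max(scores) if player == "X" else min(scores)

print(minimax(" " * 9, "X"))  # 0: perfect play from both sides is a draw
```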

and that stopped the nuclear war

happening do you think

you know ai is going to get to a point

where there is no end game there is nothing to

achieve

and so maybe its utopia

is just existence in itself but what if

it then decides well

everything that can be achieved has been

achieved and it just shuts itself down

i gotta admit that's probably the way i

think it's gonna eventually happen

and speaking of shutting down that's all

we got today for this episode of the

wolf and the shepherd

we appreciate you joining us for this


episode and we look forward to joining

you on the next one