Press "Enter" to skip to content

GOTO 2017 • Deep Learning: What It Is & What It Can Do For You • Diogo Moitinho de Almeida


00:00:06[Music]

00:00:09cool, I’ll get started. I’m Diogo. I work

00:00:14at Google Brain, but that’s all I can

00:00:16tell you my normal rule is if I told you

00:00:19I’d have to kill you but there’s a whole

00:00:20lot of you guys which makes that kind of

00:00:22impractical right now a standard

00:00:24disclaimer everything I say reflects my

00:00:26own opinions it’s not representative of

00:00:28my employer I have not been at Google

00:00:30long enough to know any secrets though I

00:00:32will admit that I’ve pilfered some

00:00:34publicly available slides so if there is

00:00:36a Google bias it’s not because they made

00:00:38me do it it’s because I’m lazy as far as

00:00:42some background for myself

00:00:44I broke a 13-year losing streak for the

00:00:47Philippines in the International math

00:00:48Olympiad. I got the top prize in the

00:00:50world in the Interdisciplinary Contest in

00:00:52Modeling, and if you’re familiar with

00:00:53Kaggle competitions, I also won one of

00:00:55those so I like to compete and do all of

00:00:59these kinds of things and hopefully that

00:01:01convinces you that I know what I’m

00:01:02talking about. But let’s get started: the

00:01:07presentation is deep learning what is it

00:01:09and what can it do for you

00:01:11but I think the very first question is

00:01:13why should I care and that reminds me of

00:01:16a story a machine learning researcher a

00:01:19cryptocurrency expert and an Erlang

00:01:22programmer walk into a bar

00:01:23Facebook buys the bar for twenty seven

00:01:25billion dollars and also another

00:01:29disclaimer you may not know this but I’m

00:01:30both a machine learning researcher and

00:01:32from San Francisco that means all of my

00:01:34information comes from Twitter that’s

00:01:36not a joke so prepare for that for my

00:01:39slides but back to why you should care

00:01:41machine learning artificial intelligence

00:01:43deep learning they’re all getting a lot

00:01:44of press these days they’re all doing

00:01:46lots of stuff and there’s lots of hype

00:01:48lots of news articles about all sorts of

00:01:50things and everyone seems to want to get

00:01:53into it but people don’t really know

00:01:54what they’re talking about it seems the

00:01:56only thing everyone’s really sure of is

00:01:57that artificial intelligence, and it

00:02:00seems like very recently deep learning in particular,

00:02:02will be a catalyst for a

00:02:04lot of the change that’s happening, and

00:02:06people are asking me questions all the

00:02:08time about this kind of thing and some

00:02:10of these recurring themes are how will

00:02:11the world change

00:02:12what can AI, and in particular deep

00:02:15learning do and how do I take advantage

00:02:17of these trends

00:02:19even the legendary programmer Jeff Dean

00:02:22has said if you’re not considering how

00:02:24to use deep neural nets to solve your

00:02:25problems you almost certainly should be

00:02:28it almost sounds like a threat either

00:02:31way I hope you’re motivated to learn

00:02:33that’s all I have for motivation so

00:02:35let’s get started into what is deep

00:02:37learning and it’s quite easy this is

00:02:41deep learning you could memorize this

00:02:44diagram this will make the presentation

00:02:46a lot easier so that’s basically it just

00:02:52kidding this is neither complete there’s

00:02:55a lot more to it than that and it’s also

00:02:57pretty complicated we’re gonna start

00:02:58with something much more simple, namely

00:03:01calculus. That may not sound right; even

00:03:05chapter 1 of the book Calculus Made Easy

00:03:07is titled "To deliver you from the

00:03:10preliminary terrors." But we only need a

00:03:12little bit of calculus and particularly

00:03:14we need an algorithm called gradient

00:03:17descent. The one thing that

00:03:20calculus tells you is how to take

00:03:21derivatives, and a derivative, loosely

00:03:24speaking, tells you how a function’s

00:03:27output changes when you change its input,

00:03:29and gradient descent is just moving

00:03:32along the direction of the derivative in

00:03:34order to minimize a function that you

00:03:36can take the derivative of so the

00:03:38insight into all of machine

00:03:41learning is figure out how to frame your

00:03:44problem in such a way that what you care

00:03:46about is differentiable or a proxy of

00:03:49what you care about is differentiable

00:03:50and then minimize it and this like one

00:03:55extremely simple equation summarizes

00:03:57almost all of the recent work in machine

00:03:59learning that’s happened in the last

00:04:03half decade roughly there’s obviously

00:04:05exceptions to this rule, but the majority of

00:04:08what’s done either has been done with

00:04:11this exceptionally simple thing or can

00:04:13be done with this exceptionally simple

00:04:14thing, and that’s basically it as

00:04:18far as what deep learning really is in

00:04:20its essence
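
To make that concrete, here is a minimal sketch of gradient descent in Python — the toy loss, starting point, learning rate, and step count are all arbitrary choices for illustration, not anything from the talk: frame something you care about as a differentiable loss, then repeatedly step against the derivative.

    import numpy as np

    def loss(w):
        # A toy differentiable objective: squared distance from the value we want (3.0).
        return (w - 3.0) ** 2

    def grad(w):
        # The derivative of the loss with respect to w: how the output
        # changes when we nudge the input.
        return 2.0 * (w - 3.0)

    w = np.random.randn()                # start from a random guess
    learning_rate = 0.1
    for step in range(100):
        w = w - learning_rate * grad(w)  # move against the derivative to minimize the loss

    print(w)                             # ends up very close to 3.0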

00:04:21this might sound well good but this is

00:04:24just machine learning when’s it become

00:04:25deep learning also easy it’s when you

00:04:28make it really deep. It might sound like

00:04:31a joke, but that’s

00:04:32actually what happens. You have these —

00:04:35you know, machine learning used to

00:04:37be composed of these very, very simple

00:04:39functions, because this is all we knew

00:04:41how to optimize and what happens when

00:04:43you stack multiple of these simple

00:04:45functions together you get something

00:04:46that’s much, much more powerful. We

00:04:49don’t really know how much more powerful

00:04:50it is some might claim it’s

00:04:52exponentially more powerful but either

00:04:55way we know it’s much more powerful and

00:04:57simply stacking these things and using

00:04:59the simple algorithm is what’s caused

00:05:01the deep learning revolution to hit and

00:05:03it’s just using the same old simple

00:05:05algorithm. Though as a caveat to

00:05:09that, part of the hard part of deep

00:05:12learning was knowing that the simple

00:05:13algorithm would work for these very

00:05:15complicated models that have stacks

00:05:18of layers and make the problem non-convex.
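
As a rough illustration of "stacking simple functions" — with arbitrary layer sizes and a ReLU nonlinearity that are my own choices, not the speaker’s — each layer is just an affine map followed by a nonlinearity, and the deep model is those simple layers composed:

    import numpy as np

    def layer(x, W, b):
        # One simple building block: matrix multiply, add bias, apply a ReLU nonlinearity.
        return np.maximum(0.0, x @ W + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))                       # a toy input
    params = [(rng.normal(size=(4, 8)), np.zeros(8)),
              (rng.normal(size=(8, 8)), np.zeros(8)),
              (rng.normal(size=(8, 2)), np.zeros(2))]

    h = x
    for W, b in params:                               # "deep" = composing the simple functions
        h = layer(h, W, b)

    print(h)  # output of the stack; training would adjust W and b with the
              # same gradient descent idea as before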

00:05:20is that it for deep learning yes this is

00:05:25it roughly I’m skipping all sorts of

00:05:29details that I’m sure will be covered

00:05:31later from software that makes things

00:05:33easier to write, like TensorFlow or

00:05:35PyTorch, to hardware that makes things

00:05:37faster to run like GPUs or multi-core

00:05:41CPUs or TPUs, to commonly used

00:05:44functions or layers that people have

00:05:47done lots of trial and error and just

00:05:50found to work well for some of today’s

00:05:52problems without very good justification

00:05:54and also commonly use combinations of

00:05:57these layers or architectures that

00:06:00similarly people have used trial and

00:06:02error and found to work without very

00:06:04good justification but for the purpose

00:06:06of this talk, the simple algorithm is it.

00:06:10next big question is what can it do for

00:06:13you

00:06:15There was a recent paper that

00:06:17came out that makes it a little bit

00:06:18easier to answer, because this article

00:06:20surveyed, I assume, a lot of

00:06:25machine learning researchers — all of

00:06:26these lines are people — so it’s

00:06:31a survey on the future progress of AI: so

00:06:33what you think will happen when will

00:06:35happen and there’s a lot of really

00:06:37interesting things here that are amusing

00:06:40and possibly informative may be mostly

00:06:43amusing

00:06:45and let’s break it down on the easy end

00:06:47you see Angry Birds at roughly the same

00:06:49difficulty as the World Series of Poker

00:06:52that’s very unusual to me I thought

00:06:55Angry Birds was solved I’m pretty sure

00:06:56Angry Birds is solved and the World

00:06:58Series of Poker actually sounds

00:07:00really hard, but it’s down there

00:07:02near the easy end. On the difficult end, I

00:07:06find it really interesting that AI

00:07:08researchers think it’s like the

00:07:10second from the top there — AI researcher is

00:07:13significantly harder than math

00:07:15researcher. There seems like a 50-year gap

00:07:18between math research being solved and AI

00:07:20research being solved not saying that I

00:07:22agree or disagree it’s just interesting

00:07:24to point out it actually looks like the

00:07:27gap between AI researcher and math

00:07:29researcher is larger than the gap

00:07:31between math researcher and playing

00:07:33Angry Birds at a human level so I yeah I

00:07:38I don’t know if this reflects something

00:07:40about the field maybe that’s why there’s

00:07:41no good deep learning theory right now

00:07:43but who knows what’s going on but

00:07:46importantly for knowing what deep

00:07:48learning can do for us is there’s a lot

00:07:50of differing opinions on what’s going to

00:07:52happen and when it’s going to happen

00:07:53there’s some people who think that we

00:07:54will be getting general AI in roughly

00:07:5710-15 years and there’s people who think

00:08:01it’s over a hundred years away I’m

00:08:03definitely in the latter camp to clarify

00:08:06but this makes it seem like answering

00:08:08the question "what can AI do for you"

00:08:09correctly would be really hard, because there’s

00:08:12just so many different opinions how do

00:08:13you really know what the correct answer

00:08:14is it’s a tough problem but luckily I’m

00:08:17the only one on stage so I can just say

00:08:19whatever I want and no one can disagree

00:08:22with me I guess you can disagree with me

00:08:23in the Q&A and it’ll be a good debate

00:08:25but I’m gonna say what I think on this

00:08:29and take it with a grain of salt — it

00:08:31does reflect me, but it doesn’t

00:08:32reflect Google, doesn’t reflect anything

00:08:34else I do like to stand on the shoulders

00:08:37of giants though and I think there’s

00:08:38been some people who have said things

00:08:40that resonate a lot with me, and

00:08:42when they say some things

00:08:44really precisely, I feel like that helps

00:08:46refine my thinking on the problem this

00:08:49is one that I don’t know if I agree with

00:08:51but it’s a really strong statement and

00:08:52it seems like it could be a pretty good

00:08:54heuristic. This is by Andrew Ng, who is no

00:08:56longer the chief scientist at Baidu,

00:08:58but he says if a typical person can do a

00:09:01mental task with less than one second of

00:09:03thought we can probably automate it

00:09:05using AI either now or in the near

00:09:07future I can’t think of that many

00:09:10counter arguments that don’t require

00:09:12like a lot of like very specific domain

00:09:15knowledge like maybe people who play

00:09:16games a lot can play those games really

00:09:19fast cuz they’ve practiced it well but

00:09:21roughly it seems like a pretty good

00:09:23heuristic and it’s a very like strong

00:09:24statement so maybe this is something

00:09:27that could guide you answering that

00:09:28question another thing that isn’t as

00:09:32specific but I think is very important

00:09:36to consider is that a lot of the deep

00:09:39learning successes today have been I

00:09:42hate to use the word simple, but I feel

00:09:45like they’re more simple memorization

00:09:47problems and not really thinking

00:09:50problems it’s always hard to really say

00:09:54what this thinking really means because

00:09:55that might be a moving goalpost of like

00:09:57"of course the algorithm is not thinking,

00:10:00it’s just using an A* algorithm or

00:10:01something," while we might think that’s

00:10:03kind of like thinking but even in this

00:10:06case, it seems like when a

00:10:09task requires multiple steps of

00:10:11reasoning where you can’t like use

00:10:14heuristics to jump all the way from

00:10:16input to output, it seems deep

00:10:17learning has not been very good at that,

00:10:19especially not without a lot of help

00:10:21which leads me to my general rule which

00:10:25is deep learning is an appropriate tool

00:10:27for supervised direct pattern matching

00:10:30tasks bonus points if you can design

00:10:32priors that are particularly suited to

00:10:34your problem. The priors in this case

00:10:36are specific layers that are popular for

00:10:39certain tasks but we don’t have to get

00:10:41into that right now

00:10:42though feel free to ask me in the

00:10:44Q&A but here when I say supervised I

00:10:47mean that we tell the model directly

00:10:49what the correct answer is so roughly a

00:10:51human or some other process figures out

00:10:54what the right answer is via some means

00:10:56and tells the model this is what you

00:10:59should be outputting next time. There

00:11:01have been incredible successes

00:11:04using reinforcement learning which is

00:11:07not supervised especially in the

00:11:10game-playing domain so if you’ve seen

00:11:12DeepMind’s deep Q-networks playing

00:11:15Atari, or DeepMind’s AlphaGo playing Go,

00:11:18those use a lot of reinforcement

00:11:21learning, and they definitely have had

00:11:24some successes. But how I feel about

00:11:26reinforcement learning is that it can

00:11:27work but you don’t want to rely on it

00:11:29working. In the Bay Area, everyone has a

00:11:32startup doing everything, and there’s

00:11:33been a lot of people who kind of have

00:11:35bet their companies on this is a

00:11:37reinforcement learning problem let us

00:11:39sell people on using deep reinforcement

00:11:42learning to get this working and ending

00:11:43up with vaporware, and kind of a sad end

00:11:49to that. But as far as

00:11:51direct pattern matching goes, this goes

00:11:52back to what I was saying earlier where

00:11:54you want simple relationships between

00:11:56the input and the output

00:11:58almost like a fraction of

00:12:02the input directly maps to some fraction

00:12:04of the output in some sort of additive

00:12:07ish way it doesn’t have to be completely

00:12:09additive but usually having some easy

00:12:12mapping allows you to bootstrap the more

00:12:13complicated mappings and a lot of the

00:12:17more complicated mappings turn out to be

00:12:20like lots of little simple mappings

00:12:22composed together and this kind of thing

00:12:24seems to be how deep learning tends

00:12:26to work this is all very vague but I am

00:12:29about to talk about some specifics about

00:12:31where deep learning has succeeded and

00:12:33where it seems to have not succeeded yet.

00:12:36another disclaimer this is only a subset

00:12:39of the potential cool things to talk

00:12:40about and I’m only talking about the

00:12:44intersection of things I find

00:12:45interesting because I want the slides to

00:12:47be interesting there’s lots of like

00:12:48little things that are cool but maybe

00:12:50wouldn’t be that interesting to people

00:12:51and visual things that I know how to

00:12:54put in a presentation. I have some

00:12:57attempts at videos, but they’re optional,

00:12:58and there’s lots of cool

00:13:01work in audio, but I just have no idea

00:13:03how to put that in a presentation and

00:13:05[Music]

00:13:06yeah maybe that’s my bad but we can

00:13:09solve Go and build robots, but the technology

00:13:11isn’t there yet for reliable audio and

00:13:14video this is also what happens when I

00:13:16make the graphics myself so all of the

00:13:19pretty animated graphics have been

00:13:20stolen from other Googlers who know how

00:13:23to make them.

00:13:25back on topic let’s start with the

00:13:28easiest thing whenever you have a metric

00:13:30that when that metric goes up money goes

00:13:33up you probably want to use machine

00:13:36learning possibly deep learning but

00:13:38definitely machine learning this is

00:13:40actually what I would describe as the main use

00:13:41case of machine learning

00:13:42I have knobs to turn some combinations

00:13:45of these knobs are better than others

00:13:46how do I turn them? Basic stuff. That

00:13:49said, a big thing that people get

00:13:53caught up on is unsupervised learning. It’s

00:13:57a very interesting research problem but

00:14:00if you want to do anything practical I

00:14:02would probably advise you to not do that

00:14:05I think this is actually absolutely

00:14:07excellent advice. Rather than trying that,

00:14:09if you can spend a month figuring it out

00:14:12with supervised learning, please do it —

00:14:14that will save a lot of people a lot of

00:14:15time. If you could spend a year, that

00:14:18would probably save a lot of people a lot

00:14:19of time. If you could do ten years, you’re

00:14:21probably on track with the rest of the

00:14:22field so if you have a problem that you

00:14:26care about don’t try to do some magic

00:14:29where you don’t know if it’s gonna work

00:14:30label some data usually these things are

00:14:32a lot more data efficient than people say

00:14:34they are, and sticking to supervised

00:14:36learning will be much easier for your

00:14:38sanity as well as your eventual impact

00:14:42speech recognition has done really well

00:14:45really really well people think that

00:14:47this is probably going to be one of the

00:14:49biggest changes to interfaces in not

00:14:53just our lifetimes but in the next

00:14:54decade. Right now people don’t like to

00:14:56talk to their phones because it can

00:14:58kind of suck, but people can talk much

00:15:01faster than they can type, and a lot of people

00:15:02don’t know how to type very well so this

00:15:04could completely change the way people

00:15:05interact with electronics, things like

00:15:08Google Glass — or, I hear speech recognition is really big

00:15:11in China. There’s all

00:15:13sorts of things that this could enable

00:15:14and this is only a fraction of the cool

00:15:16things happening in audio, but I’m not

00:15:19going to talk about that much —

00:15:20there are things with

00:15:22generating audio, generating music, lots

00:15:25of cool stuff there. Translation — this

00:15:29animation is really cool and this

00:15:31problem is really cool this is showing

00:15:33that not only can deep networks improve

00:15:36on the traditional statistical methods

00:15:39that things like Google Translate used

00:15:41to do but where you just have matching

00:15:44corpuses or corpora — I don’t know what the

00:15:46plural is — but you can also translate

00:15:49between language pairs that you’ve never

00:15:51you don’t even have matching corpuses on

00:15:53so in this example you have English to

00:15:56Japanese pairs as well as —

00:15:59sorry — English-to-Japanese and English-to-

00:16:00Korean, and using these networks you can

00:16:03actually translate directly between

00:16:04Korean and Japanese without ever seeing

00:16:07paired data between Korean and Japanese

00:16:09which is actually huge

00:16:11it could enable a lot of translation on

00:16:14languages that between languages that

00:16:16there’s just no data on and you can do

00:16:18it in a much more accurate way because

00:16:20you don’t need to translate into an

00:16:21intermediate language where you lose

00:16:23some information. If you ever — what’s

00:16:26the game where you

00:16:29have like a Markov chain with Google

00:16:30Translate you start with a thing you

00:16:32translate to one language you translate

00:16:33back, and eventually it becomes garbage and

00:16:35nothing like the original thing you said

00:16:37and you just avoid that problem entirely

00:16:39with this. Image classification: this is

00:16:43like the bread and butter of deep

00:16:44learning it’s what made deep learning a

00:16:46big deal. It was kind of a not-

00:16:49mainstream thing until about

00:16:522012, when deep learning won the

00:16:54ImageNet competition and beat all of the

00:16:58other things by a fairly large margin

00:17:00and made everyone realize hey this

00:17:02solves problems that nothing else could

00:17:04solve before and there’s real-world

00:17:07applications to this, like Google Photos

00:17:10for example. There are a lot of APIs where

00:17:12people have made a business of telling

00:17:16you what’s in an image; people do face

00:17:18classification, face

00:17:20detection; there’s a lot of money in

00:17:24sentiment recognition — you know, like have

00:17:27a camera here and look at the room tell

00:17:28them if they’re enjoying the talk or not

00:17:30based on like people’s smiles and stuff

00:17:32maybe not for talks but like for ads and

00:17:35stuff something that can’t be done yet

00:17:38though is unbiased image classification

00:17:41or it’s still a lot of work this was a

00:17:43huge issue for Google photos actually

00:17:47like I think it’s like a few days after

00:17:49they released it people were complaining

00:17:51on Twitter that

00:17:53their friends were being classified as

00:17:58gorillas due to a lack of diversity in

00:18:01the training data and this is kind of

00:18:03unavoidable when you have imperfect

00:18:05datasets. I actually don’t know how they

00:18:07solved this; they might have just removed

00:18:09some of the classes that could have been

00:18:11taken as offensive but that’s just a

00:18:15hack right like we want like real

00:18:17algorithms that don’t make these kinds

00:18:19of stupid mistakes talking about not

00:18:22making stupid mistakes a problem near

00:18:25and dear to my heart is medical imaging

00:18:26There have been a bunch of huge successes

00:18:29on medical imaging in particular there’s

00:18:32been some really cool stuff done reading

00:18:34x-rays and CT scans cool stuff with

00:18:36segmenting pathology scans detecting

00:18:40diabetic retinopathy all of these things

00:18:43— people have been getting superhuman

00:18:46results like better than what seems to

00:18:48be the best doctors and hopefully very

00:18:51soon this kind of stuff will be like

00:18:52reaching the end users and helping

00:18:54people so this is a really exciting area

00:18:56of deep learning progress similar in

00:19:00that vein it’s not limited to either 2d

00:19:04images or having a single prediction per

00:19:06image you can do what’s called semantic

00:19:09segmentation where you label each pixel

00:19:11or in this case voxel in an image and

00:19:14you can also it also works for high

00:19:16dimensional data so this for example is

00:19:183D segmentation of, I believe, a neuron,

00:19:21and this algorithm actually is iterative

00:19:23in how it expands over time, and

00:19:25this is very similar to how a human

00:19:28would segment a neuron: it would not just

00:19:30say all at once "here’s a neuron" —

00:19:32it starts at something and goes

00:19:34"okay, this is close to this other thing,

00:19:35this is maybe the neuron." So as we

00:19:39were like expanding the reach of deep

00:19:41learning more people are designing more

00:19:42and more of these priors to build into

00:19:44the architectures to do much smarter

00:19:46things so whenever I say not yet on

00:19:48something, it might be that the technology is

00:19:50there and we just haven’t tried hard

00:19:52enough. Talking about "not yet": there’s

00:19:56been some really cool work on image

00:19:58captioning. So instead of "given an image,

00:20:00output an object in the image," it’s "given

00:20:04an image, describe the image,"

00:20:06and this is a much harder task because

00:20:08there’s a lot of things that can go on

00:20:10in an image and there’s a lot of

00:20:11possible ways to describe an image so

00:20:13how do you say something is right, and

00:20:15what set of things do you choose to

00:20:19have described? And this is

00:20:21pretty good like these descriptions are

00:20:24actually this is a good case there’s

00:20:27many bad cases of this but they still do

00:20:29make some really dumb mistakes it might

00:20:31reflect underlying issues with our

00:20:33imaging models or it might be due to

00:20:35dataset size but this is still an open

00:20:36research problem similarly to that it’s

00:20:41not very good at answering questions

00:20:42about images or stories it can be good

00:20:47at finding specific things in the images

00:20:49but there’s other things that seem to be

00:20:52easier than finding a thing or just as

00:20:54easy as finding a thing that deep learning

00:20:56currently is not good at, like counting.

00:20:58I don’t have a

00:21:00counting example here, but if you have

00:21:02like a bowl of oranges and you ask like

00:21:04how many oranges are in this bowl this

00:21:06sounds like a very easy task but it’s

00:21:09quite hard for models right now so

00:21:12that’s a big problem talking about big

00:21:16problems we definitely are nowhere close

00:21:18to automating research this is a great

00:21:22tweet: the researchers were the

00:21:26ones that wanted to make the AI do all

00:21:27the work while they

00:21:29play games, but instead it’s the opposite,

00:21:31right? The AI is just playing games all

00:21:33day and researchers are working harder

00:21:35than ever it’s a tough life I think the

00:21:38comments on this were equally great

00:21:40because maybe this is a sign that the AI

00:21:43is actually intelligent you know maybe

00:21:45it’s like just pretending to be dumb and

00:21:46being like why would I want to do all

00:21:48the work I’m just gonna keep playing

00:21:49games all day some aspects of research

00:21:53might be automated something that some

00:21:56people consider to be either boring or a

00:21:59waste of time or hard is designing these

00:22:02architectures in the first place and

00:22:03there has been some work in using deep

00:22:07learning to automate the

00:22:10design of architectures for more deep

00:22:12learning and you get like these crazy

00:22:14things that no one would ever design

00:22:17yeah, I would definitely not

00:22:20think to do that, right? So this

00:22:24stuff has had some fairly promising

00:22:27results. I put this under a "maybe" of

00:22:29what’s plausible because

00:22:32it was both very expensive and not quite

00:22:34as good as the state of the art, but this

00:22:36seems like a really promising Avenue and

00:22:38a potential place that it could make a

00:22:41big impact so maybe all of our learning

00:22:44about architecture and studying this and

00:22:45trial and error maybe all of this will

00:22:47be outsourced to you know farms of

00:22:50computers somewhere and we could just

00:22:52you know stick to the high level tasks

00:22:54but life is rarely that kind. Despite

00:23:00fake news to the contrary, we are a long

00:23:02way away from automating software

00:23:04development there were some articles on

00:23:07algorithms automating coding, and I

00:23:13think that some people were a little bit

00:23:16panicked about this — maybe all of the

00:23:18articles on "deep learning automates X"

00:23:20cause some panic — but I

00:23:22hang out with lots of software engineers,

00:23:24so they were worried for like a second

00:23:28until they realized that this thing was

00:23:30actually really, really dumb — not the

00:23:32work that was done, but how the algorithm

00:23:35did it was nowhere close to software

00:23:37engineering it was a slightly better

00:23:40heuristic for picking random bits of

00:23:43code together and doing trial and error

00:23:46on that code and as we all know that is

00:23:49absolutely not how we do software

00:23:51engineering right like we design stuff

00:23:53upfront not trial and error it’s like

00:23:55all done by the books yeah this

00:23:58algorithm definitely can’t do that

00:24:00so our jobs are safe, right guys?

00:24:05ok so some people some people know what

00:24:08I’m talking about some great memes this

00:24:13is not really model output but if you do

00:24:15follow the field people love to have fun

00:24:19things in there some people actually

00:24:21wrote a "covfefe" paper. I think that’s

00:24:25pretty incredible, that someone can

00:24:27dedicate a research paper — with, I assume,

00:24:30a real idea I didn’t read this but I

00:24:32assume it’s a real idea

00:24:33to a troll name I think it’s great and

00:24:36it shows like the speed of publishing in

00:24:38the field common Silicon Valley problem

00:24:43no deep learning will not solve all of

00:24:46your problems especially not your

00:24:48product definition problems it won’t

00:24:49find something useful for you to do and

00:24:52it will not make you magically rich

00:24:54despite a lot of belief to the contrary

00:24:57and similar to this image, general chat-

00:25:02bots are actually quite difficult. You’d

00:25:06think that you just give, you know, a model a

00:25:08dataset of two people talking and it’ll

00:25:10be able to replicate those people

00:25:12talking but it turns out that our

00:25:14language models are quite good at making

00:25:16things that look grammatically correct

00:25:18but are semantically quite terrible so

00:25:21they don’t have any

00:25:23history involved — there are

00:25:25lots of issues with them — and this is a

00:25:30misunderstanding that led to a lot

00:25:32of companies starting products that

00:25:34ended up pivoting away from using deep

00:25:37learning at all and ended up using like

00:25:39an army of workers in the Philippines

00:25:42manually doing the chatting for

00:25:44them which turns out to be a pretty

00:25:46economical way to do things but specific

00:25:50chatbots are very doable. So if you

00:25:53turn the problem from "hey, let’s generate

00:25:56arbitrary text" to "hey, let’s pick among a

00:26:00small set of valid responses," things

00:26:03become a lot easier. This is Inbox’s

00:26:07Smart Reply, which apparently is used for

00:26:09over ten percent of mobile Inbox replies —

00:26:13which sounds like a lot of

00:26:15qualifications, but I just think it’s

00:26:18cool that something that started out as an

00:26:19April Fool’s Day joke is now real.

00:26:23It’s also a sweet

00:26:25animation but this kind of stuff is very

00:26:27plausible and I think that people who do

00:26:29use machine learning for chat BOTS will

00:26:31end up constraining the problem quite a

00:26:33bit and that’s actually very doable if

00:26:35you’re trying to classify like do I have

00:26:37enough information or is this person

00:26:39satisfied or do I need to pull another

00:26:42human in to actually chat with this

00:26:44person — that’s much more doable than "hey,

00:26:47automatically solve this person’s IT

00:26:50issues," which sounds really hard.
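
As a hedged sketch of that constrained setup — the canned replies, the crude letter-count encoder, and the hand-off threshold below are all hypothetical stand-ins, and a real system would use a trained text model — the idea is to score a small set of valid responses and fall back to a human when nothing scores well:

    import numpy as np

    CANNED_REPLIES = ["Sounds good!", "Can you send more details?", "Thanks, got it."]
    HANDOFF_THRESHOLD = 0.5   # below this score, pull a human into the conversation

    def encode(text):
        # Stand-in for a real learned encoder: a crude bag-of-letters vector.
        vec = np.zeros(26)
        for ch in text.lower():
            if "a" <= ch <= "z":
                vec[ord(ch) - ord("a")] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm > 0 else vec

    def pick_reply(incoming_message, reply_vectors):
        scores = reply_vectors @ encode(incoming_message)  # similarity to each canned reply
        best = int(np.argmax(scores))
        if scores[best] < HANDOFF_THRESHOLD:
            return None                                    # signal: route to a human instead
        return CANNED_REPLIES[best]

    reply_vectors = np.stack([encode(r) for r in CANNED_REPLIES])
    print(pick_reply("thanks so much, that worked", reply_vectors))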

00:26:55In a similar vein to that, coherent text is

00:26:59quite a challenge — any long

00:27:01amount of text. A lot of journalists —

00:27:03journalists, I feel, are a big victim of the

00:27:05hype, because it’s kind of their fault,

00:27:07and they’re kind of worried about

00:27:10their jobs: are deep nets going to

00:27:13start writing articles for us? And the

00:27:17answer seems to be no so if you’re a

00:27:19journalist

00:27:20don’t worry sorry

00:27:32did you say it’s hard to investigate

00:27:34journalists as an AI

00:27:41I couldn’t quite hear the last part of

00:27:43that oh yeah for sure

00:27:53He said, as an AI, it’s hard to do

00:27:55investigative journalism. That is

00:27:58definitely true debatable about how much

00:28:02investigative journalism current

00:28:04journalists do but that’s definitely the

00:28:08case. I think in this case there is even the

00:28:14worry that a lot of journalism is: read

00:28:17stuff on Twitter, turn it into an article,

00:28:19hope to get lots of clicks, make a click-

00:28:22baity headline. Definitely not all

00:28:26of it but some fraction of it is that

00:28:29and I think that there is some worry

00:28:30about this like I believe in finance

00:28:33there’s a big race to like who can

00:28:35publish these articles first based on

00:28:38various data sources and if you’re not

00:28:42talking about quality but speed these

00:28:45things definitely have a speed advantage

00:28:48yeah, I’m personally not

00:28:51worried about journalists’ jobs being

00:28:53taken, for sure. Cool. Sorry, I

00:29:07couldn’t hear you very well but thank

00:29:09you for yelling that time yeah so

00:29:15something that seems to be really

00:29:17promising I actually think that this is

00:29:18one of the most promising upcoming uses

00:29:21of deep learning that’s like not quite

00:29:23there but might get there, and it

00:29:26would be a really sexy field to get into.

00:29:27With robotics, it seems like there’s a

00:29:29lot of really good stuff happening with

00:29:31imitation learning, and a lot of people

00:29:34are invested — is the video working? Oh, it is

00:29:36working, this is pretty cool. A lot of the research

00:29:42labs are investing in getting

00:29:47robots training together, in, like, how do

00:29:49we collect lots of data for robots to

00:29:51get them working automatically because

00:29:52right now

00:29:53at least to my understanding I’m no

00:29:55roboticist, is that the majority of the work

00:29:57done by robots is done manually and if

00:30:00we can like make it a lot easier to

00:30:02train robots to do things that we care

00:30:03about maybe

00:30:04all of a sudden we’re going to have like

00:30:05more general programmable robots that

00:30:07people can do stuff with so that I think

00:30:11is really really promising and at least

00:30:15from a research perspective of someone

00:30:16who reads the papers and like keeps up

00:30:18with what people are doing it seems very

00:30:21plausible that this kind of thing could

00:30:25make a breakthrough in the near term

00:30:26especially with what’s called imitation

00:30:29learning where robots rather than

00:30:31learning by trial and error which can be

00:30:33very hard they just learn to copy humans

00:30:35which goes back to the rule of

00:30:37thumb I was talking about:

00:30:41giving these algorithms supervised data

00:30:43telling them what to do generally works

00:30:44a lot better than hoping for magic —

00:30:49you know, hoping that they will magically

00:30:51figure out the thing to do which is what

00:30:53a lot of the field is trying to get

00:30:55working right now depending on who you

00:30:58ask

00:30:59Game playing I would count in the category

00:31:02of not yet there. There have been

00:31:05some amazing successes in game playing

00:31:07but a lot of those successes aren’t

00:31:10quite super general; a lot of it is

00:31:12a very simple input-output

00:31:14mapping like I was mentioning. So Atari

00:31:17seemed to be a lot of that; it’s debatable

00:31:20whether or not Go was that, even though

00:31:21that was definitely a huge win but

00:31:23there’s been other games where models

00:31:26are nowhere near as successful so things

00:31:29like even very simple Minecraft

00:31:31mazes — the models still aren’t

00:31:35quite there yet. Or recently there’s been

00:31:37a bunch of work on Doom — visual Doom, like

00:31:40playing Doom from the pixels — and this model

00:31:44actually super cool let me see here does

00:31:47this work you can skip to the fighting

00:31:52so there’s been a lot of progress on

00:31:54that really recently this is like the

00:31:57what was the state of the art in 2013 if

00:32:00you can tell it’s like pretty dumb like

00:32:02shooting a wall right now let’s see here

00:32:07yeah this is so this is a little bit

00:32:09smarter this was state of the art in I

00:32:12would say 2016 ish mid 2016 this is

00:32:18still pretty dumb

00:32:19and people been making a lot more

00:32:22progress recently with this — look

00:32:25at this, this is actually intimidating:

00:32:28it’s moving around, it’s shooting

00:32:31intelligently etc etc there’s been a lot

00:32:35more progress being made in this and it

00:32:36seems that we’re nowhere near close to

00:32:39or at least to my knowledge solving

00:32:42something like Starcraft but it’s really

00:32:45promising and people are putting a lot

00:32:46of effort into this so the likelihood

00:32:48that we make some big breakthroughs in

00:32:50the coming years seems to be likely cool

00:32:55and this is the category of stuff that

00:32:58I am one of the most excited about,

00:33:01just because I would never consider

00:33:03using like these extremely powerful

00:33:06classification models for artsy things

00:33:08maybe that’s just me but some of these

00:33:12use cases are incredibly creative and

00:33:14incredibly cool and like this stuff is

00:33:17amazing this one came out pretty

00:33:19recently and it learns to transform

00:33:22images from different domains so

00:33:25transforming like a zebra into a horse

00:33:27or vice versa so image transformation it

00:33:31can be done can even be done with videos

00:33:34so this is actually really done by

00:33:37a model, it’s not cherry-picked data —

00:33:42it actually transformed this

00:33:48video, and this is not seamless, but

00:33:50that’s pretty good better than I could

00:33:53do with Photoshop which is not saying

00:33:55much but like this is like pretty

00:33:56impressive and I would not even have

00:33:57thought of this as a use case like hey

00:33:59I’m a machine learning researcher at Google, I

00:34:03have a you know giant cluster

00:34:05I’m gonna transform a horse into a zebra

00:34:07right

00:34:10unfortunately this kind of thing is not

00:34:13completely reliable but this is pretty

00:34:18amusing so like with all machine

00:34:21learning like making it completely

00:34:23reliable can be challenging talking

00:34:26about that there’s been an app that’s

00:34:28been gaining popularity called FaceApp

00:34:31that

00:34:32does facial transformation so in the top

00:34:34left you see the original photo top

00:34:37right you see like a more manly

00:34:40transformation, you know, like a more edgy

00:34:43chin, a bit beardy; then bottom left an

00:34:47"old" transformation, and bottom right

00:34:50a smiling transformation and this is

00:34:53actually pretty good pretty good and

00:34:56like an app can do it on your phone no

00:34:58human input it just does it it’s pretty

00:35:01impressive that you can do this and this

00:35:03is a really cool use case unfortunately

00:35:05it’s not perfect in particular it’s also

00:35:08suffers from that bias problem like with

00:35:11many other things when you turn a cool

00:35:13model into a product it there’s a

00:35:14different set of requirements in this

00:35:16case they had a transformation which

00:35:19makes a person’s face hotter and one of

00:35:23the things it did was it always lightened

00:35:25skin which was offensive to some people

00:35:28yeah, they had to pull that feature,

00:35:31I think, or change it — I think they actually

00:35:33changed the name from "hot" to something

00:35:35else, I can’t remember. Art is doable. This

00:35:41is art from scratch or unconditional art

00:35:43like it’s like these models can just

00:35:46create these artsy things and I think

00:35:50that this is really really cool I

00:35:52personally think that these both look

00:35:54really good can I get a show of hands of

00:35:58who thinks the one on the left is better

00:36:01what about the right oh it looks like a

00:36:05tie I made the one on the left so I was

00:36:09hoping that people would vote for that

00:36:11one cool I think they’re both really

00:36:15cool I would definitely have a poster of

00:36:17that in my room or a painting of that in

00:36:19my house then this kind of thing like

00:36:22who would have even thought that as a

00:36:24side effect of these really powerful

00:36:27actually useful things we would get art

00:36:46Oh yeah, I’m definitely not claiming that

00:36:49this is the first thing

00:36:53in terms of algorithmic art; I just

00:36:56find it to be a really cool use case

00:36:58of deep learning, because I don’t

00:37:02think ten years ago people would have

00:37:04imagined, "yeah, imagine all of these

00:37:06cool pictures we’ll make." And

00:37:09actually every time there’s a new use

00:37:10case in art I’m just amazed like who

00:37:13thought of this like who spent their

00:37:15time on this and I’m thankful for that

00:37:17because I wouldn’t have done it but I

00:37:19think it’s really awesome and I think in

00:37:22some ways it’s also kind of cool that

00:37:26unlike fractals or something like that

00:37:29it feels like there are a lot more —

00:37:32there are more unknown unknowns in

00:37:34this right now which makes it really

00:37:36promising as well maybe that’s from my

00:37:39misunderstanding of art though or

00:37:41algorithms or anything I’m not expert in

00:37:44any of this stuff. Sketching is another

00:37:50recent use case, where you just train on

00:37:52a data set of humans drawing little

00:37:53things, and the things in the top

00:37:56corner up there are things that the model

00:37:58drew and in the bottom here you can

00:38:01actually do math on sketches so you take

00:38:04like a cat face you add in a pig with a

00:38:07body you subtract a pig face and you end

00:38:10up with like a cat with a body and like

00:38:13it’s kind of cool that it works — I mean,

00:38:18the math checks out, so, awesome. For

00:38:26the non-artists in the room, you can also

00:38:29turn what is arguably not art in the

00:38:33bottom-left into something that is

00:38:35potentially art so this is another very

00:38:38cool use case that can

00:38:41enable people — it becomes almost like,

00:38:46you know, a new artistic medium, right,

00:38:49where you can now use these things to

00:38:51enhance existing art to do things maybe

00:38:53that people wouldn’t have done before or

00:38:55enable people who couldn’t have done

00:38:57this before

00:38:57or maybe just make it faster,

00:38:59things like that. It feels almost like,

00:39:04know like a new instrument from the

00:39:06musical sense so this stuff is really

00:39:08cool style transfer I think this is

00:39:11crazy because a year and a half ago this

00:39:15was already looking really good and it’s

00:39:18just gotten better and better so this is

00:39:19going so well — I actually should

00:39:24have put the old pictures here as well,

00:39:25but like these are the new what I think

00:39:28is the latest in style transfer and this

00:39:31is pretty good like you can see

00:39:34transferring the style of a fire into a

00:39:37bottle like this is like a professional

00:39:39Photoshop job, I guess, and this is

00:39:44impressive, and I would want to do

00:39:46this and I look forward to this being

00:39:49able to be done for me because I don’t

00:39:51want to implement it myself but there’s

00:39:55a lot of really cool stuff being done

00:39:57with style transfer and this stuff is

00:40:00really pragmatic because like

00:40:02aesthetically this is already like very

00:40:04high quality this is my crowning

00:40:08achievement actually mixing my face with

00:40:11that of a Pokémon — probably my best

00:40:14achievement in deep learning. It definitely

00:40:16works, would recommend trying it,

00:40:18and probably newer stuff will work even

00:40:20better. And as far as specifics go, there’s

00:40:25all sorts of other things. A rough

00:40:27formula is: take an input that is similar

00:40:31to another input that deep learning has

00:40:33succeeded on, like images, audio, raw text,

00:40:37other domains like that; pick a response

00:40:40that is a relatively simple mapping from

00:40:44that input, so nothing too complicated,

00:40:47but simple mappings like "are there human

00:40:51faces"; collect a data set; train a

00:40:54model. Usually something like that gets

00:40:58you just something that works quite well. And

00:41:01as far as what it can do, if you

00:41:05pick the right things,

00:41:07generally the algorithm

00:41:11helps you a lot in doing

00:41:13a lot of the easy work for you; getting

00:41:15the last bit of performance is always a

00:41:17lot of work, but you’ll know if you

00:41:21can get it, which makes it a

00:41:23little bit easier.
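
Putting that rough formula together, here is a hedged end-to-end sketch — the data is synthetic and plain logistic regression stands in for a deep network, so this shows only the shape of the recipe, not anyone’s production pipeline: collect labeled examples, define a differentiable loss, and run the same gradient descent as before.

    import numpy as np

    rng = np.random.default_rng(1)

    # "Collect a data set": synthetic stand-in for labeled examples, e.g. feature
    # vectors for images with a yes/no label like "contains a face".
    X = rng.normal(size=(200, 5))
    true_w = rng.normal(size=5)
    y = (X @ true_w > 0).astype(float)      # labels a human or other process would provide

    # "Train a model": logistic regression fit by gradient descent on a differentiable loss.
    w = np.zeros(5)
    lr = 0.1
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probability of the label
        grad = X.T @ (p - y) / len(y)       # gradient of the cross-entropy loss
        w -= lr * grad

    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    accuracy = ((p > 0.5) == (y > 0.5)).mean()
    print(f"training accuracy: {accuracy:.2f}")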

00:41:24oh so back to the big questions how will

00:41:31the world change I think this is a great

00:41:34tweet by Andrew Ng:

00:41:36I do believe "automation on steroids" is

00:41:38the right way to think about it not

00:41:39sentience or AI overlords or anything

00:41:43else like that I actually would be quite

00:41:47pleasantly surprised to see general AI

00:41:49in my lifetime, just because I think

00:41:52that’s so unlikely. And that’s not,

00:41:54despite what my current pants

00:41:56might imply, because I’m one of the live-

00:41:58fast-die-young types; it’s that I

00:42:02think that it’s quite a ways away,

00:42:04though I would love to be wrong as far

00:42:09as how the world change I won’t claim to

00:42:12be an expert on the societal effects of

00:42:14automation, but luckily this guy is — he

00:42:16had a TED talk called "Will automation

00:42:19take away all our jobs?" "All" seems like a

00:42:21little bit of a weasel word here. It has

00:42:23over a million views, and Betteridge’s law

00:42:27of headlines applies here: any headline

00:42:30that ends in a question mark can be

00:42:31answered by no so the answer is no

00:42:35saving you 18 minutes the claim is that

00:42:39AI automation just like other automation

00:42:41will take some jobs away but it will

00:42:43probably do much more transformation of

00:42:45jobs because a lot of jobs aren’t these

00:42:47just simple mappings and there are more

00:42:49complicated nuances to them, but

00:42:53automation increases the leverage of any

00:42:55person and there will be a lot more jobs

00:42:58that we just can’t imagine will happen

00:42:59so that’s his view I don’t have strong

00:43:05views in this yeah what can deep

00:43:10learning do? The field’s really exciting;

00:43:12there’s a whole lot of things that we

00:43:13can do now that we previously couldn’t a

00:43:15lot of things that were once thought to

00:43:16be really really hard are now doable

00:43:19People thought that solving Go was like

00:43:21a hundred years away or more and there’s

00:43:25all sorts of fields that this could

00:43:27affect

00:43:27As far as specifics go, which is what

00:43:30would actually be more useful to you

00:43:32guys

00:43:33the answer is it’s kind of complicated

00:43:36my advice would be to look at example

00:43:38failure and success cases network with

00:43:41researchers and people in industry or

00:43:42use someone’s rule of thumb maybe my own

00:43:45maybe take it and change it but really

00:43:48you want to build your own mental

00:43:49classifier of what is and isn’t possible,

00:43:52and refine that classifier around the

00:43:54set of problems that you specifically

00:43:56care about so if you want like a

00:43:58specific like I want to figure out if

00:44:00deep learning can find X in a genomic

00:44:03data set you like go into the research

00:44:05look at that figure out like what seems

00:44:07possible what’s not and there’s so many

00:44:10problems out there that you might have

00:44:12you might end up being the world expert

00:44:14in knowing if deep learning works for

00:44:16your problem so it’s just there’s so

00:44:18much opportunity right there that if you

00:44:21ask anyone, they probably will tell you

00:44:22an answer because people love to give

00:44:24answers but they probably won’t give you

00:44:25a very good one including myself so

00:44:28right now it’s unavoidable to do some of

00:44:30that work unless you do something that

00:44:32someone’s already solved but that’s kind

00:44:34of a cop-out answer the last big

00:44:38question it was how do I take advantage

00:44:40of these trends I think it’s a lot like

00:44:42learning software engineering, especially

00:44:44back in like when the internet was young

00:44:47and my answer is: scratch your own itch,

00:44:49play around with it a lot of the work on

00:44:52art specifically was done by hobbyists

00:44:54and not researchers and we have no idea

00:44:57yet what can be done on you know the

00:44:59problems you care about and for all you

00:45:01know you might be sitting on

00:45:03the next killer app that no one else has

00:45:04thought of. And if scratching your own

00:45:07itch leads to something valuable for others,

00:45:09start a company on it there’s lots of

00:45:12money for companies going around

00:45:14right now, and the world needs more AI

00:45:16companies that actually provide value

00:45:18I’m not gonna name names and also

00:45:23prepare for like a sweet transition

00:45:26consider joining Google it’s the best or

00:45:30any other AI-focused company which

00:45:32has interesting, impactful problems

00:45:34and the resources to solve those

00:45:36problems

00:45:36thank you
