00:00:06[Music]
00:00:10 Welcome. This is the obligatory slide, so yeah: ask questions through the app and rate the session. Thank you.
00:00:18 We're going to talk about machine learning and your first steps into that world. So if you're already involved in machine learning, you might as well leave, because everything I'm telling you, you already know. My name is Dave Stil, I'm a developer for Quintor. Quintor is an agile consultancy which specializes in .NET and Java. You can reach me on Twitter at my handle. I'm a father of one little one, and another one is on its way soon.
00:00:53 So today we're going to talk about intelligence, artificial intelligence and machine learning, and how they relate to each other; what methods are used in this area; how machine learning is applied in everyday life; and how to get started, how to get yourself on your way eventually. So: intelligence, artificial intelligence and machine learning. Let's have a quick introduction.
00:01:22 Machine learning sounds quite scary, nobody really knows what it is, but it is very popular in the media. You see it everywhere, in articles, in newspapers: machine learning this, machine learning that, how do you teach a driverless car to drive, et cetera. And you can also find it on Netflix, for example. Most of you probably already know this, but when you view a few shows like Arrested Development, it usually gives a suggestion along the lines of: well, you've seen Arrested Development, so you might like Legally Blonde and Bob's Burgers and some other titles. It doesn't just make these recommendations out of nowhere; it bases them on millions and millions of user inputs from other accounts, and it uses machine learning to make these recommendations.
00:02:14 Another way in which it participates in real life is on your mobile; everyone has one. For example your Google Assistant, your Cortana, your Alexa or your Siri: they're all basically smart digital assistants which are capable of voice recognition and capable of parsing simple questions, for example "Who played James Bond in Quantum of Solace?" It's capable of answering that it's Daniel Craig, but it's also capable of answering "What other movies did he play in?" However, that is a more complex question, for the simple fact that it requires context: "he" refers to the previous result, so it needs to know what the previous result was before it can answer that question. Nowadays those digital assistants are quite capable of answering those questions, and that too is because of a copious amount of machine learning.
00:03:25 Intelligence: what is intelligence? Well, most of you know this, but it's basically the capability of reasoning and solving problems. Artificial intelligence is simulating this through computers: a computer that's capable of reasoning and solving problems. It's a big problem domain, and it's divided into quite a few subdomains; these are four of them, just a subset.
00:03:57 For example natural language processing: that's the ability of a computer to process natural language and interpret exactly what you say. Knowledge representation is the problem domain where you have to have a computer internally represent the knowledge of a certain problem domain, for example medical assistance: it would have to have an internal model from which it can deduce solutions and problems. Automated reasoning means that it's capable of coming up with new solutions, or deriving solutions, for example for puzzles, but also for common questions. For example, if you're at work, and Google of course knows where you are, and it notices that the Wi-Fi at home is in use, well, Google should be able to deduce: that is not supposed to happen, you're not at home. So it should be able to alert you and inform you about these things. But what we're going to concentrate on today is machine learning. Machine learning is the area where a machine is actually capable of forming its own solutions to problems.
00:05:29 A short history of machine learning would start with the Turing test. The Turing test was devised by Alan Turing in 1950. It's basically a setup where there's an operator sitting behind a screen, conversing with someone on the other side of the screen, and that other side could be either a computer or a human. The idea is that the computer should be able to fool the human into believing that it is a human; if it's capable of doing that, then it's smart enough to be called intelligent. In Turing's own words, a computer would deserve to be called intelligent if it could deceive a human into believing it was human. In 1952 we already had the first AI application: Arthur Samuel implemented a game engine, an AI, which was capable of playing checkers, using thousands of recorded checkers games and having the application learn from those games.
00:06:43 Well, fast-forward 45 years into the future, or actually almost 20 years in the past: in 1997 IBM's Deep Blue defeated Kasparov. It was a big thing back then. At the time it was thought: okay, we've made big progress, but for example the game of Go, which has a much higher complexity, is way, way in the future, decades away. Well, there was another big breakthrough in 2011: IBM Watson was capable of defeating its opponents at Jeopardy. Jeopardy is basically a quiz game where an answer is given and a contestant has to give the question that belongs to that answer. The difficulty is that you first have to know the context, and you have to be able to formulate correct sentences, so all the adjectives, nouns, pronouns and conjugations have to be correct, and you need to know what you're talking about. So it was an impressive feat; however, even then Go was still a long way off. Well, actually it wasn't: in 2016, as most of you know, Google DeepMind's AlphaGo defeated Go champion Lee Sedol, and actually did so four games to one, and they've now already retired it. It was a real breakthrough, because in 2015 it was still thought to be decades away. So it's like having a flying car available for pre-order tomorrow; it's really impressive how fast it went. Well, that was a short introduction to the history.
00:08:46 What kinds of intelligence do we have? We distinguish weak AI and strong AI. Strong AI usually means that you have artificial general intelligence; basically that means you have artificial intelligence at a human level: it's capable of reasoning across several problem domains, and it's not stuck on only checkers, for example. Weak AI, on the other end, is really, really specific: it means that the AI is basically capable of performing one task, for example checkers, or chess, and only that game. So as we saw in the history of AI, there was suddenly big progress: first it took 45 years, then suddenly, within some 17 years, everything changed. How was this possible?
00:09:41 Well, the first thing would be big data. We now have such copious amounts of data that we're capable of actually training the algorithms that we want to use for AI. Back in 1997 it was really unimaginable that there would be a Google with warehouses and warehouses of computers storing data, with cat pictures, I don't know. And we have big compute: that basically means we also have large data centers of computing power, cheap and specialized. Another, less known factor is the still-advancing algorithms.
00:10:30 So what do I mean by advancing algorithms? When I studied, in 1999, neural networks were basically, well, nice, but not really feasible, due to the fact that they were slow: it took a lot of computing power and time to train such a model. However, in 2006 there was this person, whose name I forgot, sorry, who actually revolutionized a part of the training mechanism in neural networks, making it possible to actually use them. By 2010 that particular trick was already redundant, but it gave a new boost to neural networks and made it possible to actually use them. The other factor is that there are several companies now heavily invested in machine learning, throwing big money at it and really putting time and effort into it. These factors combined are why AI has, over let's say the past seven years, been making a really, really strong return.
00:11:45 Machine learning: how does machine learning differ from normal programming? Most of you know, but basically, in regular programming you feed data and a program into the computer and you get output. In machine learning it's a little bit different: you have data, you give it the output that you expect for that data, and the computer is supposed to produce a program that solves that problem for you, so you're not actually writing everything yourself. For example, we have an apple and an orange. You can't really compare apples and oranges, but all right: how would you differentiate these? Well, first of all I would think color. But what if it's a grayscale image? Then you would try texture. Well, if you add bananas, you would have to add another exclusion, another exception, et cetera, et cetera. Eventually you get a whole bunch of code which is unmaintainable, and it will never fully cover all kinds of fruit. So this is where machine learning comes in: you train it, and it will come up with the answer, with the program, itself.
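To make that concrete, here is a minimal sketch of handing labelled examples to a model instead of hand-coding the rules, assuming scikit-learn; the fruit features and values are made up for illustration:

```python
# A minimal sketch of letting the model learn the rules instead of writing if/else exceptions.
# The feature values below are invented for illustration; scikit-learn is assumed.
from sklearn.tree import DecisionTreeClassifier

# Each fruit is described by two hypothetical features: weight in grams and a texture score
# (0 = smooth, 1 = bumpy). The labels are what we would otherwise encode as hand-written rules.
features = [[140, 0], [130, 0], [150, 1], [170, 1], [120, 0], [160, 1]]
labels   = ["apple", "apple", "orange", "orange", "apple", "orange"]

clf = DecisionTreeClassifier().fit(features, labels)

# The learned model now does the differentiation for us; adding bananas would just mean
# adding more labelled examples, not another hand-written exception.
print(clf.predict([[155, 1]]))   # most likely: ['orange']
```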
00:13:15 So which methods of machine learning do we have? We have supervised learning, we have unsupervised learning, and we have reinforcement learning. Semi-supervised learning is basically a halfway solution between supervised learning and unsupervised learning. We will discuss each of these in the coming slides. Supervised learning: so what is supervised learning? Supervised learning means that you have a machine learning algorithm which you feed input and keep training on that input. Each time you give it input, it will give a prediction, and then you tell it whether the prediction, the label for example, is right or wrong. Based on that, if it's wrong, it adjusts its model, and you keep repeating this until the error is low. Eventually you extract the classifier model from that machine learning algorithm, because that's what it's all about, put it in your program and use it: you give it input, and it will give a prediction. So what is supervised learning mainly used for? Classification and regression.
00:14:38 Classification means that, let's say, you have two features; a feature is an aspect of an input on which you want to train the model. So feature 1 and feature 2, and it tries to classify each instance. And regression usually means you have a feature and a value as an output, and you want it to estimate which value would occur if you give it an input. To clarify this: for classification, for example, we have a data set of houses and we're going to train it into a classification of whether the house is cheap or the house is expensive. As input features we use living area and we use price, and we keep training it and training it, and each time it will label a house, saying it's cheap or it's expensive, and we say: no, you think it's cheap but it's expensive, and it adjusts its model. Eventually we'll have a model that can separate houses based on the living area and the price and determine: yes, that's cheap, that's expensive. Regression, on the other hand, works a little differently. Let's say we have the living areas and the corresponding prices; we keep giving it the data and what the right answer is, and eventually it will have a function that actually matches living area with price. With that model we can give it just a living area and it will produce a price for us, an estimation of the expected price. So in short, classification would label it, and regression would try to estimate the actual value. If there are any questions, please do ask.
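As a concrete illustration, here is a minimal sketch of both tasks with scikit-learn; the living areas, prices and cheap/expensive labels are invented for the example:

```python
# A minimal sketch of the two supervised tasks from the housing example, using scikit-learn.
# The living areas, prices and cheap/expensive labels are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

living_area = np.array([[30], [45], [60], [80], [100], [120]])      # m^2
price       = np.array([90_000, 120_000, 180_000, 240_000, 300_000, 370_000])
label       = np.array([0, 0, 0, 1, 1, 1])                          # 0 = cheap, 1 = expensive

# Classification: label a house as cheap or expensive from its features.
clf = LogisticRegression(max_iter=1000).fit(np.c_[living_area, price], label)
print(clf.predict([[70, 200_000]]))          # predicted class for a new house

# Regression: estimate the actual price from the living area.
reg = LinearRegression().fit(living_area, price)
print(reg.predict([[70]]))                   # estimated price for 70 m^2
```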
00:16:49 Well, for this problem we usually use neural networks; you can do it in other ways, I've not seen it, but you could. A neural network is based on the biological neuron, which schematically looks like this: inputs from other neurons come in through the dendrites, at the top left, they go through the nucleus, and it gives a signal, based on the input, through the axon to the axon terminals. Quite basic. In computer science you would model it like this: these are all the inputs, and each input has a certain weight, how important it is. You sum the weights times the inputs, that's the summation step, and then it determines whether or not it will output a signal and how strong that signal will be: that is the activation function. Combining many of these neurons, you get something like this: a three-layered neural network. You have the input layer, you have the output layer, and you have a hidden layer. It's called the hidden layer because nobody sees the hidden layer; from the outside you only see the input and the output. And this is a fully connected network, so each neuron is connected to every neuron in the next layer.
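In code, such an artificial neuron is just a weighted sum followed by an activation function; a minimal sketch, with arbitrary numbers:

```python
# A minimal sketch of the artificial neuron just described: a weighted sum of the inputs
# followed by an activation function. Numbers are arbitrary; NumPy is assumed.
import numpy as np

def sigmoid(x):
    # activation function: squashes the summed signal into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def neuron(inputs, weights, bias):
    # summation step: how important each input is, times the input itself
    total = np.dot(weights, inputs) + bias
    # the activation decides whether and how strongly the neuron "fires"
    return sigmoid(total)

x = np.array([0.5, 0.3, 0.9])        # the incoming signals (dendrites)
w = np.array([0.8, -0.4, 0.2])       # the weight of each connection
print(neuron(x, w, bias=0.1))        # the outgoing signal (axon)
```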
00:18:19 To make this a little bit clearer, I would like to demonstrate it through something called TensorFlow Playground. Have any of you already played with this? I see a few hands. Well, it's a really nice tool: it gives you an idea of how neural networks work, and you can play around with it. I created a spiral here, but let's start simpler; it's a very handy tool for visualizing. You can choose here whether you want it to classify or regress. For the input we take the two coordinates of each point, so X1 and X2, the x and y coordinate, and we're going to train it. We don't have any hidden layers, and there's a nice separation, so we don't need a hidden layer for this. So let's make it a little more complex. You have four groups, basically: orange is here, orange is here, blue is there and blue is there. Let's try to train it. Well, it's not going to match, is it? So we add a hidden layer and see what it does. You will see that suddenly there's this nice coloring. Basically, the thicker the line, the bigger the weight: if it's blue it's positive, if it's orange it's negative. And these blue and white visualizations here on the neurons are basically representations of the activation functions: if you see blue here, it means everything that comes in in this area would be positive, and the rest would not. So this separation is not going to work, so let's add some extra neurons and try it again. A few extra neurons, and suddenly it's capable of really dividing and classifying the output.
00:20:41 So, is this clear for everybody? Okay. So, deep learning. Deep learning actually means nothing more than that you use a neural network that has more than one hidden layer, nothing else. So this is a deep neural network, because it has three hidden layers. This is an example of how, roughly, these layers come to represent things. Let's say you have an input layer which is a picture that has to be categorized, into cats and dogs, or in this case human faces, and for each pixel we have an input neuron. So say it's an 18 by 18 pixel picture; off the top of my head I wouldn't know how many nodes that would be, but plenty. In the first hidden layer we actually get the edges first; then later on you'll see combinations of edges, so the nose, the eyes; and after that, in later layers, you see whole object models coming out.
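In code, the "deep" part is simply a stack of several hidden layers; here is a minimal sketch with the Keras API mentioned later in this talk, where the layer sizes are arbitrary:

```python
# A minimal sketch of the "deep" in deep learning: a network with more than one hidden layer.
# Keras is assumed; the layer sizes and the 18x18 input are just illustrative.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation="relu", input_shape=(18 * 18,)),  # first hidden layer (18x18 pixels in)
    Dense(32, activation="relu"),                          # second hidden layer: "deep" starts here
    Dense(16, activation="relu"),                          # third hidden layer
    Dense(2, activation="softmax"),                        # output layer: e.g. cat vs dog
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```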
00:21:59 A nice one you can also find online is the Inception model. The Inception model is an efficient, pre-trained vision model from Google that they released; you can just use it and play around with it. It's quite complicated; the model itself is this big. And someone at Google also made a project on top of it which actually shows you all the layers, and shows you how this looks, as we just saw before, on a real neural network. So this would be, at the first level, the edge detectors, and if we look at the output after the first layer, I don't know whether you can see it well, but this is basically a heat map of what is highlighted of the dog that you saw previously, with regard to the edges; you can see the shape of the dog here. It's interesting to see, and I would really recommend looking at it. By the way, this is a Python notebook; many of you have probably heard about it in previous sessions. I would really recommend using it. If you know R Markdown or something similar, this is basically the same principle: you can just write text, put in code that you can reuse later; it gets explained, it gets shown, and it's executable.
00:23:34 Continuing: there's not just one neural network architecture, there are quite a few. This is from the Neural Network Zoo, and it shows that you really have to consider what kind of architecture you would decide upon if you want to build your own neural network. There are several frameworks that already provide trained models, some through an API, some not. For example the Google Cloud Machine Learning Engine, IBM Watson Machine Learning through Bluemix, Microsoft's machine learning offering, and recently Apple also introduced Core ML for machine learning in your applications. Caffe2 has a whole model zoo which you can download and use, but the first four actually provide APIs, so you can use them directly from your application.
00:24:26 IBM Watson's APIs were, by the way, a byproduct: after they did the Jeopardy thing, they had a very nice question-answering mechanism, but they had also learned a whole lot, and they provided APIs for the public. So they have a vision API, a voice recognition API, and the same goes for Google Cloud and all the others. It's very nice; for Google, for example, you can play around with the Vision API. Let's just take my daughter as an example. You can just upload a picture, and it will detect all sorts of things. It will tell you it has no idea what her emotions are: no joy, no sorrow, no anger, no surprise; she's apparently emotionless. She is apparently a person, an infant, a child, skin, and for some reason Google thinks it's daytime, though most of you can probably see it's night, inside, with the lights on. So it's very nice: you can just get an account for free, it's fun to play with, and you can just call it through a REST API from your programs.
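A minimal sketch of such a REST call from Python, assuming the `requests` library, a placeholder API key and a placeholder image file:

```python
# A minimal sketch of calling the Google Cloud Vision REST API from Python, roughly as in the demo.
# The API key and image path are placeholders; the `requests` library is assumed.
import base64
import requests

API_KEY = "YOUR_API_KEY"   # placeholder: create one in the Google Cloud console
with open("daughter.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [
            {"type": "LABEL_DETECTION"},   # e.g. person, infant, child, skin
            {"type": "FACE_DETECTION"},    # including the joy/sorrow/anger/surprise likelihoods
        ],
    }]
}
resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": API_KEY},
    json=body,
)
print(resp.json())
```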
00:25:51 All right, that was supervised learning. Unsupervised learning basically means the opposite of supervised learning, of course. What this means is basically: I have a bunch of data, I have no idea how it might be structured, I have a machine learning algorithm, I just give the data to the algorithm and say, okay, you try to sort this out, I'll come back later. And after it has done that, you have a classifier model: you can give it input and it will tell you more or less where it expects your input to belong. There are mainly three techniques that are used for this. Clustering, where you try to find similar instances: these are the instances, these would be the features, the aspects, that you put into it, and it sees: okay, these three instances have the same features, so they belong together. Anomaly detection: everything here would be white, and there's just one entry which is completely different from the rest. Association discovery: that's when you basically see, okay, for example, these three instances have the feature from the second column, but they also all have the feature from the fourth column. An example from practice would be, say, a food web shop, and you notice that everyone who buys buns and salad also seems to buy burgers. That would be association discovery: so when someone buys salad and buns, there might be some burgers involved as well.
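A minimal sketch of that association-discovery idea in plain Python, with made-up shopping baskets:

```python
# A minimal sketch of association discovery for the web-shop example: counting how often
# customers who buy buns and salad also buy burgers. The baskets are invented for illustration.
baskets = [
    {"buns", "salad", "burgers"},
    {"buns", "salad", "burgers", "cola"},
    {"buns", "salad"},
    {"milk", "bread"},
    {"buns", "salad", "burgers"},
]

antecedent = {"buns", "salad"}
with_antecedent = [b for b in baskets if antecedent <= b]          # baskets containing buns + salad
also_burgers = [b for b in with_antecedent if "burgers" in b]      # ... that also contain burgers

support = len(with_antecedent) / len(baskets)
confidence = len(also_burgers) / len(with_antecedent)
print(f"support={support:.2f}, confidence={confidence:.2f}")
# a high confidence suggests the rule {buns, salad} -> {burgers}
```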
00:27:35 For clustering, by the way, it means that you don't change the data set and you don't go grouping it yourself; you basically try to find where the clusters are. This is an example of k-means clustering. K-means means that before you start, you say: okay, I want to find three clusters, that's the k of three. Then it starts at random: it just puts points here, there and there, and then it tries to shift those mean points of the clusters, the center points of the clusters, until the distances of the points to them are minimized, and it keeps iterating until there are no more changes. So it keeps shifting the mean points until it has found the best solution for three clusters. The downside is that you have to know that it's three clusters in advance; that's a bit of a minus.
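A minimal sketch with scikit-learn's KMeans, on generated points, to show the choose-k-then-fit workflow:

```python
# A minimal sketch of k-means clustering with scikit-learn: you choose k (here 3) up front,
# the algorithm places the centers at random and keeps shifting them until nothing changes.
# The points are generated just for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
points = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)
print(kmeans.cluster_centers_)        # the three mean points it settled on
print(kmeans.predict([[4.8, 5.1]]))   # which cluster a new point belongs to
```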
00:28:29 Well, we also did this at Quintor, for a web shop where we suggest products based on the customer profile. So we did clustering on category, manufacturer and postal code, and eventually we were capable of actually making product suggestions based on what a customer was searching for. So, for example, there was a probability of 90 percent that product ID 11807, which is an SDS-plus drill set, should be recommended.
00:29:04 So, okay, that was supervised learning and unsupervised learning, and now on to reinforcement learning. What is reinforcement learning? That's when you have an algorithm that can perform certain actions, for example in a game: you can go left, you can go right, maybe shoot. Each action has an effect on the world, and in return you get a reward, an increasing score for example, and a new state: you moved right, so you're now in a different position than you were before. A nice example is actually this game; most of you know it, right? Breakout. It's an Atari game, and it was actually used by DeepMind, before they were acquired by Google, to train their models using reinforcement learning; it's also one of the reasons Google bought them, I think. So, well, awesome. Okay, so basically they just took the pixels of the screen and the score as input; that's all that was input for the model, nothing else, just the score and the pixels, and it knew what it could do, move left or right. Okay, so it starts training, and after a few minutes of training it frankly sucks at it. You see, sometimes it does hit the ball by coincidence, but it's not very proficient. But after two hours it played like an expert; it had no problems. It's pretty good; I would have problems beating it.
00:31:09 Yeah, the reward basically changes the model; it's based on a Markov decision process, and I'll get back to that in a second. After two hours something special actually happens: it finds out that there's an even better way to score points, basically by tunneling, which you'll see in a second. So it's pretty good. So yeah, basically it updates its model through the rewards it gets: it tries game after game after game, it builds up basically a decision tree, and each decision it makes has a certain probability of a certain reward, and it keeps updating that based on experience, and eventually it does really well. DeepMind did this for all kinds of Atari games, and actually presented it as a possible route to generic intelligence, for the fact that there was no domain knowledge involved beforehand: it was capable of achieving high scores on all kinds of Atari games.
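DeepMind's agent actually used a deep Q-network over raw pixels; as a much simplified illustration of the reward-driven update idea described above, here is a tabular Q-learning sketch on a tiny made-up environment:

```python
# A simplified sketch of the reward-driven update idea. DeepMind's Breakout agent used a deep
# Q-network over raw pixels; this is only tabular Q-learning on a tiny invented environment,
# to show how rewards gradually update the value of each (state, action) decision.
import random
from collections import defaultdict

actions = ["left", "right"]
Q = defaultdict(float)              # Q[(state, action)] = expected future reward
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    # hypothetical environment: moving right from state 0..3 eventually reaches the goal at 4
    next_state = min(state + 1, 4) if action == "right" else max(state - 1, 0)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4

for episode in range(500):
    state, done = 0, False
    while not done:
        # explore sometimes, otherwise take the action with the highest learned value
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(4)})  # learned policy: go right
```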
00:32:24 But there's also what you could call a subset of reinforcement learning: genetic algorithms. Most of you have heard about them. It works a little bit differently: in the previous case you take one model and keep training it; a genetic algorithm works differently. There's a nice example on the internet... not this one... I wonder how that one was trained, by the way. Anyway, this is an example of a genetic algorithm walker. Basically, every generation it creates, say, 10 models, and based on how far each one gets, so how much reward it earns in one run, it determines who lives, who dies and who may reproduce. So for example, I've configured this one to copy two champions through, if you can see it there: only if you end the run in the first two positions do you go straight on to the next round, and the rest of them either get killed off or are crossed over. So properties from one model get combined with properties from other models, and that optimizes it, or makes it worse, it depends, and each time you keep going on and on and on, and you keep track of the optimal scores.
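A minimal sketch of that loop (score, keep the champions, cross over, mutate, repeat), with a toy fitness function standing in for the walker simulation:

```python
# A minimal sketch of the genetic-algorithm loop described above. The "fitness" here is just
# a toy function (maximise the sum of the genes), not the walker simulation itself.
import random

POP_SIZE, GENES, GENERATIONS, CHAMPIONS = 10, 8, 50, 2

def fitness(genome):
    return sum(genome)                      # stand-in for "how far the walker got"

def crossover(a, b):
    cut = random.randrange(1, GENES)        # combine properties of two parents
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [g + random.uniform(-0.5, 0.5) if random.random() < rate else g for g in genome]

population = [[random.uniform(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    champions = population[:CHAMPIONS]      # the first two positions go straight to the next round
    children = [mutate(crossover(*random.sample(champions + population[:5], 2)))
                for _ in range(POP_SIZE - CHAMPIONS)]
    population = champions + children

print(round(max(map(fitness, population)), 2))   # best score found
```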
00:33:54 So there are a lot of algorithms to choose from, and it's really important to have a grasp on what you want to solve, how you want to solve it and what you need to solve it. So how is this applied in everyday life? In short, you can already see several applications running with it: the weather forecast for Amsterdam; a filtration system that can actually predict whether or not something is going to fail or needs attention; automatic translation, which is actually several problems in one, because you have image recognition, then translation, and then we display that translation on the image again; this one is actually cancer detection, so at a very low level determining the probability that something may be out of place; spam filters, of course; and a Tesla car.
00:35:01 So how would you get started? I'm going to be very short here, but there are basically two ways to get started. One is to just use pre-trained models, like the Vision API I just showed, and use them in your application; don't reinvent the wheel, for the simple fact that there are people out there who did it before you, and you're probably not going to do it much better. As I mentioned before, there are Google Cloud ML, IBM Bluemix, Microsoft Azure, et cetera. Or you can build your own and tweak it yourself, but then I would suggest, first of all: learn Python. Yes, you can use most libraries from Java or Scala, and I love those, but basically in the data science and machine learning areas everything is Python, so every example you'll see is Python and every notebook you'll see is Python. So at the beginning just learn Python, and if you're comfortable enough with everything, maybe then move on to another language. And install Jupyter Notebook; really, install it, and try the online guides, there are plenty of them, they're very good, and you would be a fool not to use it.
00:36:19 Well, for example, machine learning libraries that you could use if you want to do it on your own are TensorFlow, and actually TFLearn or Keras on top of it, Torch, or the newer PyTorch, Theano, Deeplearning4j (someone did do it in Java), Caffe2, and several others, but these are the main ones. For those who are wondering what TensorFlow really is: at its basis it's a graph-based calculation framework. The libraries on top of it support machine learning, but in essence it is a graph-based calculation framework which optimizes for parallelization and other things, like running on the CPU or the GPU, parallelizing over several processes, et cetera.
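A minimal sketch of that graph idea, assuming the TensorFlow 1.x graph API that was current at the time of this talk: you first describe the operations, and only a session run actually executes them:

```python
# A minimal sketch of TensorFlow as a graph-based calculation framework,
# assuming the TensorFlow 1.x graph API that was current at the time of this talk.
import tensorflow as tf

# Build a computation graph: nothing is calculated yet, we only describe operations.
a = tf.placeholder(tf.float32, name="a")
b = tf.placeholder(tf.float32, name="b")
total = a * b + 2.0          # an operation node in the graph

# Only when we run the graph in a session does TensorFlow execute it,
# deciding how to parallelize and whether to use the CPU or the GPU.
with tf.Session() as sess:
    print(sess.run(total, feed_dict={a: 3.0, b: 4.0}))   # 14.0
```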
00:37:07 Also, I don't know if anyone saw the talk yesterday about data analysis, but it is a very important aspect. First of all you want to frame the problem: define what you want to solve. For example, with the housing data: what do you want to solve, do you really want to know the prices, or do you just want to know whether a house is expensive or cheap? You have to adjust the data accordingly: clean it up, filter it and adjust it where needed. You need to look at the bigger picture: basically, what comes before and what comes after your machine learning. The reason you predict housing prices is usually not because you're such a fan of houses; usually it's because there's another process after you which really needs that information. Third point: check your assumptions, because you have them, and they're usually wrong. And visualize, again, in the Jupyter notebook: if you have an idea for a model, if you have an idea of how the data looks, if you think you know how your machine learning should work, visualize it, visualize every step, see what it outputs; don't put it in your application all at once.
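A minimal sketch of that kind of step-by-step exploration in a notebook, with pandas and matplotlib; the file name and column names are hypothetical:

```python
# A minimal sketch of "visualize every step" in a Jupyter notebook, using pandas and
# matplotlib on the housing example. The CSV file name and columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("houses.csv")              # hypothetical file with living_area and price columns

# frame the problem and check assumptions before training anything
print(df.describe())                        # ranges, missing values, obvious outliers
df = df.dropna()                            # clean up / filter the data where needed

df.plot.scatter(x="living_area", y="price") # does the relation you assume actually show up?
plt.show()
```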
00:38:34 The future of AI: well, as we saw, it went quite rapidly these last few years, so rapidly that it's actually hard to imagine. For example, this is the police agent from Dubai; by 2030 Dubai is intending to have, I believe, 50 percent of its police force replaced by these things. However, it's no more than a walking kiosk at the moment. On the other hand we have Boston Dynamics, which is actually an ex-military contractor acquired by Google, and these things are no joke. Another thing: Google recently released, for example, two papers regarding relational networks. These are special networks that you can plug into ordinary neural networks, which we want to use to answer questions like this: "there is a tiny rubber thing that is the same color as the large cylinder; what shape is it?" So: the tiny rubber thing with the same color as this big thing. For us that already takes a few steps, and they actually showed that they were capable of answering this kind of question with better-than-human proficiency. So yeah, we can expect a lot more in the near future.
00:40:02 Regarding this, here are some resources that you really should check out; I would really suggest looking at them. Another good hands-on one is actually this one, by Aurélien Géron, I probably pronounce it wrong, but he wrote a whole book and put all these notebooks online on GitHub, and they give you a really good introduction into data science, scikit-learn, TensorFlow, all those things. It will take time, but... sorry? Yes, it's also an O'Reilly book, he published it with O'Reilly; I don't remember the exact title, but it basically covers scikit-learn and TensorFlow, it's this thick, and it's a good read. And the slides really reflect this book as well, so it's really worth looking at if you want to. Then some other miscellaneous resources: ImageNet, which is for example used for research and competitions with all kinds of images, basically to check how good your image recognition is; OpenAI, which is a platform promoting open AI, and they also have the OpenAI Gym for competing on several problems; and what I just showed you, the genetic algorithm walkers, which is basically genetic fun. Any questions?