
Week 2 Lecture ML : multiple features

mcdn 2020. 8. 6. 20:22

So concretely, x superscript (2) subscript 3, that is x_3^{(2)}, refers to feature number three in the feature vector x^{(2)}, which is equal to 2, right? That was a 3 over there; just fix my handwriting. So x_3^{(2)} is going to be equal to 2.
And so the inner product, that is theta transpose x, is just equal to this. This gives us a convenient way to write the form of the hypothesis as just the inner product between our parameter vector theta and our feature vector x. And it is this little bit of notational convention that lets us write this in this compact form. So that's the form of the hypothesis when we have multiple features. And just to give this another name, this is also called multivariate linear regression. The term multivariate is just maybe a fancy term for saying we have multiple features, or multiple variables, with which to try to predict the value y.

Multiple Features

Note: [7:25 - \theta^T is a 1 by (n+1) matrix and not an (n+1) by 1 matrix]

Linear regression with multiple variables is also known as "multivariate linear regression".

We now introduce notation for equations where we can have any number of input variables.

x_j^{(i)} = value of feature j in the i-th training example
x^{(i)} = the input (features) of the i-th training example
m = the number of training examples
n = the number of features

The multivariable form of the hypothesis function accommodating these multiple features is as follows:

h_\theta (x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \cdots + \theta_n x_n

In order to develop intuition about this function, we can think about \theta_0 as the basic price of a house, \theta_1 as the price per square meter, \theta_2 as the price per floor, etc. x_1 will be the number of square meters in the house, x_2 the number of floors, etc.

Using the definition of matrix multiplication, our multivariable hypothesis function can be concisely represented as:

h_\theta(x) = \begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_n \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = \theta^T x

Remark: Note that, for convenience, in this course we assume x_0^{(i)} = 1 for i \in \{1, \dots, m\}.

This allows us to do matrix operations with theta and x, making the two vectors \theta and x^{(i)} match each other element-wise (that is, have the same number of elements: n+1). This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.
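As a quick illustration, here is a minimal Octave sketch of this vectorized hypothesis for a single training example (the numbers are my own, not from the course):

```octave
% Vectorized hypothesis: h_theta(x) = theta' * x for one training example.
theta = [80; 0.1; 5];   % example parameters theta_0, theta_1, theta_2
x     = [1; 2104; 3];   % features with x_0 = 1 prepended: bias, size, floors
h     = theta' * x      % inner product theta^T x gives the prediction
```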

 

 

Gradient Descent for Multiple Variables

The gradient descent equation itself is generally the same form; we just have to repeat it for our 'n' features:

repeat until convergence: {
\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_0^{(i)}
\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_1^{(i)}
\theta_2 := \theta_2 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_2^{(i)}
\cdots
}

In other words:

repeat until convergence: {
\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_j^{(i)} \quad \text{for } j := 0 \dots n
}
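A minimal Octave sketch of this simultaneous, vectorized update (the toy data, alpha, and iteration count are my own assumptions for illustration, not course code):

```octave
% Gradient descent for multiple variables (vectorized illustration).
X = [1 2104 3; 1 1416 2; 1 1534 3; 1 852 2];   % m x (n+1) design matrix, x_0 = 1
y = [460; 232; 315; 178];                      % labels (prices)
theta = zeros(3, 1);                           % initial parameters
alpha = 1e-7;                                  % small alpha because features are unscaled
m = length(y);
for iter = 1:400
  h = X * theta;                               % predictions for all m examples
  theta = theta - (alpha / m) * X' * (h - y);  % simultaneous update of every theta_j
end
```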

The following image compares gradient descent with one variable to gradient descent with multiple variables:

you can show mathematically that you find a much more direct path to the global minimum, rather than taking a much more convoluted path where you're sort of trying to follow a much more complicated trajectory to get to the global minimum. So, by scaling the features so that they take on similar ranges of values, in this example we end up with both features, x1 and x2, between zero and one.
So the range of values can be a bit bigger than plus one or a bit smaller than minus one, just not much bigger, like plus 100 here, or too much smaller, like 0.0001 over there. Different people have different rules of thumb. But the one that I use is that if a feature takes on a range of values from, say, minus 3 to plus 3, that should be just fine; but if it takes on much larger values than plus 3 or minus 3, I might start to worry. And if it takes on values from, say, minus one-third to one-third, that's fine too.
Subtract the mean of the feature and divide it by the range of values, meaning the max minus the min. And this sort of formula will get your features, maybe not exactly, but roughly into these sorts of ranges. By the way, for those of you that are being super careful, technically, if we're taking the range as max minus min, this 5 here would actually become a 4: if the max is 5 and the min is 1, then the range of values is actually equal to 4. But all of these are approximate, and any values that get the features into anything close to these sorts of ranges will do fine. And the feature scaling doesn't have to be too exact in order to get gradient descent to run quite a lot faster.

Gradient Descent in Practice I - Feature Scaling

Note: [6:20 - The average size of a house is 1000 but 100 is accidentally written instead]

We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ will descend quickly on small ranges and slowly on large ranges, and so will oscillate inefficiently down to the optimum when the variables are very uneven.

The way to prevent this is to modify the ranges of our input variables so that they are all roughly the same. Ideally:

−1 ≤ x_{(i)} ≤ 1

or

−0.5 ≤ x_{(i)} ≤ 0.5

These aren't exact requirements; we are only trying to speed things up. The goal is to get all input variables into roughly one of these ranges, give or take a few.

Two techniques to help with this are feature scaling and mean normalization. Feature scaling involves dividing the input values by the range (i.e. the maximum value minus the minimum value) of the input variable, resulting in a new range of just 1. Mean normalization involves subtracting the average value for an input variable from the values for that input variable resulting in a new average value for the input variable of just zero. To implement both of these techniques, adjust your input values as shown in this formula:

x_i := \dfrac{x_i - \mu_i}{s_i}

Where μ_i is the average of all the values for feature (i) and s_i is the range of values (max - min), or s_i is the standard deviation.

Note that dividing by the range and dividing by the standard deviation give different results. The quizzes in this course use the range; the programming exercises use the standard deviation.

For example, if x_i represents housing prices with a range of 100 to 2000 and a mean value of 1000, then, x_i := \dfrac{price-1000}{1900}.
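A minimal Octave sketch of mean normalization with range scaling (toy matrix of my own; swap max(X) - min(X) for std(X) to match the programming exercises):

```octave
% Mean normalization: subtract each feature's mean, divide by its range.
X  = [2104 3; 1416 2; 1534 3; 852 2];  % raw features: size (sq ft), bedrooms
mu = mean(X);                          % 1 x n row vector of feature means
s  = max(X) - min(X);                  % 1 x n row vector of ranges (max - min)
X_norm = (X - mu) ./ s;                % broadcast: features now roughly in [-0.5, 0.5]
```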

 

 

----

Concretely, here's the gradient descent update rule.  And what I want to do in this video is tell you  about what I think of as debugging, and some tips for  making sure that gradient descent is working correctly.  And second, I wanna tell you how to choose the learning rate alpha or  at least how I go about choosing it. 

(Note to self: the meaning of iteration, batch, and epoch.)

 

Notice that the x axis is the number of iterations. Previously we were looking at plots of J(theta) where the x axis, the horizontal axis, was the parameter vector theta, but this is not that. Concretely, what this point is: I'm going to run gradient descent for 100 iterations, and whatever value I get for theta after those 100 iterations, I'm going to evaluate the cost function J(theta) at it. So this vertical height is the value of J(theta) for the theta I got after 100 iterations of gradient descent. And this point here corresponds to the value of J(theta) for the theta that I get after I've run gradient descent for 200 iterations.

 

It's also possible to come up with an automatic convergence test, namely to have an algorithm try to tell you if gradient descent has converged. Here's maybe a pretty typical example of an automatic convergence test: such a test may declare convergence if your cost function J(theta) decreases by less than some small value epsilon, say 10 to the minus 3, in one iteration. But I find that usually choosing what this threshold is is pretty difficult. And so, in order to check whether your gradient descent has converged, I actually tend to look at plots like these, like this figure on the left, rather than rely on an automatic convergence test.
But if your learning rate is too big, then if you start off there, gradient descent may overshoot the minimum and send you there. And if the learning rate is too big, you may overshoot again, and it sends you there, and so on. Similarly, sometimes you may also see J(theta) do something like this: it may go down for a while, then go up, then go down for a while, then go up, and so on. And a fix for something like this is also to use a smaller value of alpha.

 

What I actually do is try this range of values, and so on, where this is 0.001. I'll then increase the learning rate threefold to get 0.003. And then this step up is another roughly threefold increase, from 0.003 to 0.01. So these are, roughly, trials of gradient descent with each value I try being about 3x bigger than the previous value. So what I'll do is try a range of values until I've found one value that's too small and made sure that I've found one value that's too large. And then I'll sort of try to pick the largest possible value, or just something slightly smaller than the largest reasonable value that I found. And when I do that, it usually just gives me a good learning rate for my problem. And if you do this too, maybe you'll be able to choose a good learning rate for your implementation of gradient descent.

Gradient Descent in Practice II - Learning Rate

Note: [5:20 - the x-axis label in the right graph should be \theta rather than No. of iterations]

Debugging gradient descent. Make a plot with number of iterations on the x-axis. Now plot the cost function, J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.

Automatic convergence test. Declare convergence if J(θ) decreases by less than E in one iteration, where E is some small value such as 10^{−3}. However in practice it's difficult to choose this threshold value.
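As a sketch, here is what plotting J(θ) against the iteration number, together with such an epsilon-based stopping test, might look like in Octave (the toy data, alpha, and the E = 10^{-3} threshold are my own assumptions for illustration):

```octave
% Track J(theta) per iteration; stop when the decrease falls below epsilon.
X = [1 0.5; 1 1.0; 1 1.5; 1 2.0];  % m x (n+1) design matrix, features pre-scaled
y = [1; 2; 3; 4];
theta = zeros(2, 1);  alpha = 0.1;  m = length(y);
epsilon = 1e-3;  num_iters = 500;
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
  theta = theta - (alpha / m) * X' * (X * theta - y);      % gradient step
  J_history(iter) = (1 / (2*m)) * sum((X * theta - y).^2); % cost J(theta)
  if iter > 1 && J_history(iter-1) - J_history(iter) < epsilon
    break;                                                 % automatic convergence test
  end
end
plot(1:iter, J_history(1:iter));                           % J(theta) vs. iterations
xlabel('Number of iterations');  ylabel('J(\theta)');
```

If the plotted curve ever increases, that is the signal to decrease alpha.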

It has been proven that if learning rate α is sufficiently small, then J(θ) will decrease on every iteration.

To summarize:

If \alpha is too small: slow convergence.

If \alpha is too large: may not decrease on every iteration and thus may not converge.
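Along the same lines, a small Octave sketch of the roughly-3x learning-rate sweep described in the lecture (the data and iteration budget are my own assumptions; the candidate values follow the 0.001, 0.003, 0.01, ... pattern):

```octave
% Try learning rates spaced roughly 3x apart and compare the final costs.
X = [1 0.5; 1 1.0; 1 1.5; 1 2.0];  y = [1; 2; 3; 4];  m = length(y);
alphas = [0.001 0.003 0.01 0.03 0.1 0.3 1];
for a = alphas
  theta = zeros(2, 1);
  for iter = 1:100
    theta = theta - (a / m) * X' * (X * theta - y);   % gradient step
  end
  J = (1 / (2*m)) * sum((X * theta - y).^2);          % final cost for this alpha
  printf('alpha = %.3f  ->  J(theta) = %.4f\n', a, J);
end
% Pick the largest alpha for which J still decreases, or one step smaller.
```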

Sometimes, by defining new features, you might actually get a better model.
One thing you could do is fit a quadratic model like this. It doesn't look like a straight line fits this data very well, so maybe you want to fit a quadratic model, where you think the price is a quadratic function of the size, and maybe that'll give you a fit to the data that looks like that. But then you may decide that your quadratic model doesn't make sense, because a quadratic function eventually comes back down, and, well, we don't think housing prices should go down when the size goes up too high. So then we might choose a different polynomial model and choose to use instead a cubic function, where we now have a third-order term, and when we fit that, maybe we get this sort of model, and maybe the green line is a somewhat better fit to the data because it doesn't eventually come back down.
There is one caveat, which is that if you choose your features like this, then feature scaling becomes increasingly important. So if the size of the house ranges from one to a thousand, that is, from one to a thousand square feet, say, then the size squared of the house will range from one to one million, the square of a thousand, and your third feature x3, which is the size cubed of the house, will range from one to 10 to the 9th. So these three features take on very different ranges of values, and it's important to apply feature scaling, if you're using gradient descent, to get them into comparable ranges of values.

So the square root function is this sort of function, and maybe there will be some value of theta one, theta two, theta three that will let you take this model and fit a curve that looks like that: it goes up, but sort of flattens out a bit and doesn't ever come back down. And so, by having insight into, in this case, the shape of a square root function, and into the shape of the data, by choosing different features you can sometimes get better models.

Features and Polynomial Regression

We can improve our features and the form of our hypothesis function in a couple different ways.

We can combine multiple features into one. For example, we can combine x_1 and x_2 into a new feature x_3 by taking x_1x_2.

Polynomial Regression

Our hypothesis function need not be linear (a straight line) if that does not fit the data well.

We can change the behavior or curve of our hypothesis function by making it a quadratic, cubic or square root function (or any other form).

For example, if our hypothesis function is h_\theta(x) = \theta_0 + \theta_1 x_1 then we can create additional features based on x_1, to get the quadratic function h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 or the cubic function h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^3

In the cubic version, we have created new features x_2 and x_3 where x_2 = x_1^2 and x_3 = x_1^3.

To make it a square root function, we could do: h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 \sqrt{x_1}

One important thing to keep in mind: if you choose your features this way, then feature scaling becomes very important.

e.g. if x_1 has range 1 - 1000, then the range of x_1^2 becomes 1 - 1000000, and that of x_1^3 becomes 1 - 1000000000.
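A minimal Octave sketch of building polynomial features and then scaling them, following the example above (all numbers are illustrative, and the normalization logic is inlined):

```octave
% Build polynomial features from a single input x1, then normalize them.
x1 = (1:1000)';                   % raw feature with range 1 - 1000
Xp = [x1, x1.^2, x1.^3];          % new features: ranges grow to 1e6 and 1e9
mu = mean(Xp);  s = max(Xp) - min(Xp);
Xp = (Xp - mu) ./ s;              % after scaling, all three are comparable
X  = [ones(length(x1), 1), Xp];   % prepend x_0 = 1 for the intercept term
```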

 

 

Let's take a very simplified cost function J(Theta) that's just a function of a real number Theta. So, for now, imagine that Theta is just a scalar, just a real value; it's just a number, rather than a vector. You may know that the way to minimize a function is to take derivatives and set the derivatives equal to zero. So, you take the derivative of J with respect to the parameter Theta, you get some formula which I am not going to derive, you set that derivative equal to zero, and this allows you to solve for the value of Theta that minimizes J(Theta).

Calculus actually tells us that one way to do so is to take the partial derivative of J with respect to every parameter Theta_j in turn, and then to set all of these to 0. If you do that, and you solve for the values of Theta 0, Theta 1, up to Theta n, then this would give you the values of Theta that minimize the cost function J.
I'm going to take my data set, so here are my four training examples. In this case, let's assume that these four examples are all the data I have. What I am going to do is take my data set and add an extra column that corresponds to my extra feature, x0, which always takes on the value of 1. I'm then going to construct a matrix called X that basically contains all of the features from my training data; so here are all my features, and we're going to take all those numbers and put them into this matrix X, okay?
Finally, if you take your matrix X and your vector y, and you just compute theta equal to X transpose X inverse times X transpose y, this would give you the value of theta that minimizes your cost function. There was a lot that happened on the slides, and I worked through it using one specific example of one dataset. Let me just write this out in a slightly more general form, and later on in this video let me explain this equation a little bit more.
The vector y is obtained by taking all the labels, all the correct prices of houses in my training set, and just stacking them up into an m-dimensional vector; that's y. Finally, having constructed the matrix X and the vector y, we then just compute theta as pinv(X' * X) * X' * y, that is, (X^T X)^{-1} X^T y.
(Right, the x down here should have this index; the professor drew that one wrong.)
In Octave, X prime is the notation that you use to denote X transpose. And so this expression that's boxed in red is computing X transpose times X. pinv is a function for computing the inverse of a matrix, so this computes X transpose X inverse, and then you multiply that by X transpose, and you multiply that by y. So you end up computing that formula which I didn't prove, but it is possible to show mathematically, even though I'm not going to do so here, that this formula gives you the optimal value of theta, in the sense that if you set theta equal to this, that's the value of theta that minimizes the cost function J(theta) for linear regression.
So far, the balance seems to favor the normal equation. Here are some disadvantages of the normal equation, and some advantages of gradient descent. Gradient descent works pretty well even when you have a very large number of features. So even if you have millions of features, you can run gradient descent and it will be reasonably efficient; it will do something reasonable. In contrast, for the normal equation, in order to solve for the parameters theta, we need to compute this term: X transpose X inverse. This matrix X transpose X is an n by n matrix, if you have n features.

Normal Equation

Note: [8:00 to 8:44 - The design matrix X (in the bottom right side of the slide) given in the example should have elements x with subscript 1 and superscripts varying from 1 to m because for all m training sets there are only 2 features x_0 and x_1. 12:56 - The X matrix is m by (n+1) and NOT n by n. ]

Gradient descent gives one way of minimizing J. Let’s discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In the "Normal Equation" method, we will minimize J by explicitly taking its derivatives with respect to the θj ’s, and setting them to zero. This allows us to find the optimum theta without iteration. The normal equation formula is given below:

\theta = (X^T X)^{-1}X^T y
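In Octave the formula is a one-liner; here is a minimal self-contained sketch (toy housing data of my own, using pinv as the lecture recommends):

```octave
% Normal equation: closed-form theta, no iterations and no learning rate.
X = [1 2104; 1 1416; 1 1534; 1 852];  % m x (n+1) design matrix with x_0 = 1
y = [460; 232; 315; 178];             % labels
theta = pinv(X' * X) * X' * y         % theta = (X^T X)^{-1} X^T y
```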

There is no need to do feature scaling with the normal equation.

The following is a comparison of gradient descent and the normal equation:

| Gradient Descent | Normal Equation |
| --- | --- |
| Need to choose alpha | No need to choose alpha |
| Needs many iterations | No need to iterate |
| O(kn^2) | O(n^3), need to calculate inverse of X^T X |
| Works well when n is large | Slow if n is very large |

With the normal equation, computing the inversion has complexity \mathcal{O}(n^3). So if we have a very large number of features, the normal equation will be slow. In practice, when n exceeds 10,000 it might be a good time to go from a normal solution to an iterative process.

 

 

In this video I want to talk about the normal equation and non-invertibility. This is a somewhat more advanced concept, but it's something that I've often been asked about, and so I want to address it here; feel free to consider this optional material. For those of you that know a bit more linear algebra, you may know that only some matrices are invertible; some matrices do not have an inverse, and we call those non-invertible matrices singular or degenerate matrices.
But Octave has two functions for inverting matrices. One is called pinv, and the other is called inv. The differences between these two are somewhat technical: one is called the pseudo-inverse, the other the inverse. But you can show mathematically that, so long as you use the pinv function, this will actually compute the value of theta that you want, even if X transpose X is non-invertible.
The first cause is if somehow in your learning problem you have redundant features. Concretely, if you're trying to predict housing prices, and x1 is the size of the house in square feet and x2 is the size of the house in square meters, then 1 meter is equal to 3.28 feet, rounded to two decimals, and so your two features will always satisfy the constraint x1 equals 3.28 squared times x2. For those of you that are somewhat advanced in linear algebra, you can actually show that if your two features are related by a linear equation like this, then the matrix X transpose X would be non-invertible. The second thing that can cause X transpose X to be non-invertible is if you are trying to run the learning algorithm with a lot of features; concretely, if m is less than or equal to n. For example, imagine that you have m = 10 training examples and n = 100 features; then you're trying to fit a parameter vector theta which is n plus one dimensional, so you're trying to fit 101 parameters from just 10 training examples. What we commonly do if m is less than n is to see if we can either delete some features or use a technique called regularization, which is something that we'll talk about later in this class, and which will let you fit a lot of parameters, using a lot of features, even if you have a relatively small training set. But regularization will be a later topic in this course.

Normal Equation Noninvertibility

When implementing the normal equation in Octave, we want to use the 'pinv' function rather than 'inv'. The 'pinv' function will give you a value of \theta even if X^T X is not invertible.

If X^T X is noninvertible, the common causes might be:

  • Redundant features, where two features are very closely related (i.e. they are linearly dependent)
  • Too many features (e.g. m ≤ n). In this case, delete some features or use "regularization" (to be explained in a later lesson).

Solutions to the above problems include deleting a feature that is linearly dependent with another or deleting one or more features when there are too many features.
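A small Octave sketch of the redundant-feature case (my own toy example): x2 is a fixed multiple of x1, so X^T X is singular, yet pinv still returns a usable theta:

```octave
% Redundant features: x2 is a fixed multiple of x1, so X'X is singular.
x1 = [1000; 1500; 2000];          % size in square feet
x2 = x1 / 3.28^2;                 % size in square meters (linearly dependent on x1)
X  = [ones(3, 1), x1, x2];        % design matrix with x_0 = 1
y  = [200; 300; 400];
% inv(X' * X) is unreliable here; pinv computes the pseudo-inverse instead.
theta = pinv(X' * X) * X' * y
```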
