Linear Regression With Multiple Variables

Anmol Sharma

Table of Contents
  • Introduction
  • What is Linear Regression?
  • Linear Regression with Multiple Variables
  • Cost Function
  • Gradient Descent
  • Conclusion

Introduction

          Linear Regression is one of the oldest and simplest Machine Learning algorithms. It is probably the first algorithm everyone learns in their Machine Learning journey. It is used for predicting continuous values from previous data, for example house price prediction, weather forecasting, and stock price prediction. Linear Regression is also the core idea behind many other Machine Learning algorithms. In this article, we will learn Linear Regression with multiple variables and how to optimize the algorithm for better predictions. So, without wasting any time, let’s begin the article.

What is Linear Regression?

          Linear Regression is a supervised learning algorithm that is used for regression problems. It determines the relationship between the dependent and the independent variables with the help of a best-fitting line, y = mx + c. It predicts a real number for the given input variables. It uses the following equation:
y_o = w1*X1 + w2*X2 + ... + wn*Xn + b + e    ...(eqn1)
Here, X1, X2, ..., Xn are the independent variables and y_o is the dependent variable.
w1, w2, ..., wn are the assigned weights, b is the bias and e is the error.
Take a look at the figure below.
[Figure: Linear Regression | insideAIML]
In the figure’s notation, b corresponds to b0, w1 to b1, and y_o to yi.
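To make eqn1 concrete, here is a minimal NumPy sketch of the prediction it describes, for n = 3; the weights, inputs and bias are made-up illustrative values, not from any real dataset.

import numpy as np

# Made-up weights, inputs, and bias for illustration (n = 3).
w = np.array([0.5, -1.2, 3.0])   # w1, w2, w3
X = np.array([2.0, 1.0, 4.0])    # X1, X2, X3
b = 0.7                          # bias

# eqn1 without the error term e: the model's prediction y_o
y_o = np.dot(w, X) + b
print(y_o)                       # 12.5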
          Now, we have a basic idea of the Linear Regression equation. Let’s move on to Linear Regression with multiple variables.

Linear Regression with Multiple Variables

         Eqn1 is the general equation for Linear Regression with n variables. The number of variables depends upon the data: in different cases, the output (dependent) variable depends on a different number of independent variables.
          For example, suppose we have to predict the price of a house and the price depends on these variables: house_area, house_condition, house_floors and house_parking. Here the price is the dependent variable, while house_area, house_condition, house_floors and house_parking are the independent variables, so the number of independent variables is four.
The Linear Regression equation for the above case would be:
y_o = w1*X1 + w2*X2 + w3*X3 + w4*X4 + b + e
X1 stands for house_area, X2 for house_condition, X3 for house_floors and X4 for house_parking.
w1, w2, w3, w4 are the weights for X1, X2, X3, X4 respectively.
b is the bias and e is the error.
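As a sketch of how this four-variable model could be fitted in practice, here is an example using scikit-learn’s LinearRegression; the houses and prices below are made-up illustrative numbers, not real data.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: each row is one house as
# [house_area, house_condition, house_floors, house_parking]
X = np.array([
    [1200, 3, 1, 1],
    [1500, 4, 2, 1],
    [ 800, 2, 1, 0],
    [2000, 5, 2, 2],
])
y = np.array([200000, 280000, 120000, 390000])  # made-up prices

model = LinearRegression()
model.fit(X, y)                          # learns w1..w4 and b from the data

print(model.coef_)                       # learned weights w1, w2, w3, w4
print(model.intercept_)                  # learned bias b
print(model.predict([[1000, 3, 1, 1]]))  # predicted price for a new house

What fit() does, namely choosing the weights and bias that best explain the training data, is exactly what the cost function and gradient descent in the next sections formalize.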

Cost Function

          The cost function determines how good our model is at making predictions for a given set of parameters (w and b). To train w and b, we use the cost function: the values of w and b should be good enough that the predicted value y_o is close to the actual value y, at least on the training data.
The equation of the cost function for Linear Regression with multiple variables is the mean squared error:
J(w, b) = (1 / 2m) * Σ (y_o(i) − y(i))²,  summed over i = 1, ..., m
Here, m is the number of training examples, y_o(i) is the predicted value for the i-th example and y(i) is its actual value. The two parameters of J are w and b, which is why it is written as J(w, b).
          Now we have the value of the cost function for particular values of w and b. Our next goal is to update the values of w and b so that the value of the cost function is minimized.
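A minimal NumPy sketch of this cost function, assuming the 1/2m mean-squared-error form given above; the data values are illustrative.

import numpy as np

def compute_cost(X, y, w, b):
    """J(w, b) = (1 / 2m) * sum((y_o - y)^2), the mean squared error."""
    m = len(y)
    y_o = X @ w + b                     # predictions for all m examples
    return np.sum((y_o - y) ** 2) / (2 * m)

# Illustrative values, not from any real dataset
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
y = np.array([5.0, 4.0, 9.0])
print(compute_cost(X, y, w=np.array([1.0, 1.0]), b=1.0))  # 0.8333...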

Gradient Descent

          Gradient descent helps to learn w and b in such a manner that the cost function is minimized. The cost function is convex, which means it has only one global minimum. Gradient descent tries to find this global minimum by updating the values of w and b in every iteration until the global minimum is reached.
Take a look at the picture below.
[Figure: Gradient Descent | insideAIML]
          In the above image, for the initial values of w and b we are at the first cross, which is far away from the global minimum, the least possible value of the cost function. Gradient descent will keep updating the values of w and b so that the cross moves downhill. When we reach the global minimum, the values of w and b at that point are the final parameters of the trained Linear Regression model. This is how gradient descent works.
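In each iteration, gradient descent applies the standard update rules w := w − α * ∂J/∂w and b := b − α * ∂J/∂b, where α is the learning rate. Below is a minimal, self-contained sketch of batch gradient descent on made-up data where the true weights and bias are known in advance; the learning rate and iteration count are illustrative choices.

import numpy as np

def gradient_descent(X, y, alpha=0.05, iters=5000):
    """Minimal batch gradient descent for J(w, b); alpha is the learning rate."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0              # starting point (the first cross in the figure)
    for _ in range(iters):
        err = X @ w + b - y              # prediction errors y_o - y
        w -= alpha * (X.T @ err) / m     # w := w - alpha * dJ/dw
        b -= alpha * err.sum() / m       # b := b - alpha * dJ/db
    return w, b

# Illustrative data generated from y = 2*X1 + 3*X2 + 1, so the answer is known
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [0.5, 1.5]])
y = 2 * X[:, 0] + 3 * X[:, 1] + 1
w, b = gradient_descent(X, y)
print(w, b)                              # should approach [2. 3.] and 1.0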

Conclusion

          In this article, we learned about Linear Regression in Machine Learning, Linear Regression with multiple variables, the cost function and gradient descent. We only considered the case of four variables; in real-world predictions like weather forecasting or stock price prediction, the final output depends on many more variables, and we need to handle all of them to build an efficient model. We also saw how to find the optimal values of the parameters w and b using the cost function and gradient descent.
We hope you gained the understanding you were looking for. Do reach out to us with queries on our AI-dedicated discussion forum and get your query resolved within 30 minutes.
  
If you like the blog, share it with your friends and colleagues to make this AI community stronger.
To learn more about the nuances of Artificial Intelligence, Python Programming, Deep Learning, Data Science and Machine Learning, visit our insideAIML blog page.
Keep Learning. Keep Growing.
