#### Linear Regression with Multiple Variables

Anmol Sharma

a year ago

- Introduction
- What is Linear Regression?
- Linear Regression with Multiple Variables
- Cost Function
- Gradient Descent
- Conclusion

#### Introduction

Linear Regression is one of the oldest and simplest Machine Learning algorithms, and it is probably the first algorithm everyone learns in their Machine Learning journey. It is used for predicting continuous values from previous data; examples include house price prediction, weather forecasting and stock price prediction. Linear Regression is also the core idea behind many other Machine Learning algorithms. In this article, we will learn Linear Regression with multiple variables and how to optimize the algorithm for better predictions. So, without wasting any time, let’s begin.

#### What is Linear Regression?

Linear Regression is a supervised learning algorithm used for regression problems. It determines the relationship between the dependent and the independent variables with the help of a best-fitting line, y = mx + c, and predicts a real number for the given input variables. It uses the following equation:

yo = w1X1 + w2X2 + ... + wnXn + b + e ... (eqn1)

Here, X1, X2, ..., Xn are the independent variables and yo is the dependent variable.

w1, w2, ..., wn are the assigned weights, b is the bias and e is the error.

Take a look at the picture below.

Image source - https://datascience.foundation

Here, in the figure's notation, b = b0, w1 = b1, and yo = yi.

Now, we have a basic idea of the Linear Regression equation. Let’s move on to Linear Regression with multiple variables.
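The general equation above can be sketched in code. Below is a minimal NumPy example (the sample values and weights are made up for illustration); the error term e is unknown at prediction time, so it is omitted:

```python
import numpy as np

def predict(X, w, b):
    """Predict a real value for each row of X using yo = w1*X1 + ... + wn*Xn + b."""
    return X @ w + b

# Three samples, each with two independent variables (illustrative values).
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
w = np.array([0.5, 1.5])  # weights w1, w2 (assumed)
b = 2.0                   # bias (assumed)

print(predict(X, w, b))   # → [ 5.5  9.5 13.5]
```

Each prediction is just the weighted sum of that sample's variables plus the bias; for the first row, 0.5×1 + 1.5×2 + 2 = 5.5.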

#### Linear Regression with Multiple Variables

Eqn1 is the general equation for Linear Regression with n variables. The number of variables depends on the data: in different problems, the output (dependent variable) depends on a different number of independent variables.

For example, suppose we have to predict the price of a house, and the price depends on these variables: house_area, house_condition, house_floors and house_parking. Here, price is the dependent variable, and house_area, house_condition, house_floors and house_parking are the four independent variables.

The Linear Regression equation for the above case would be:

price = w1X1 + w2X2 + w3X3 + w4X4 + b + e

Here, X1 stands for house_area, X2 for house_condition, X3 for house_floors and X4 for house_parking.

w1, w2, w3, w4 are the weights for X1, X2, X3, X4 respectively.

b is the bias and e is the error.
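The house-price equation can be evaluated directly. In this sketch, every feature value, weight and bias below is invented purely for illustration, not taken from real data:

```python
import numpy as np

# Hypothetical house with the four independent variables from the article.
features = np.array([1200.0,  # house_area (sq. ft)
                     8.0,     # house_condition (score out of 10)
                     2.0,     # house_floors
                     1.0])    # house_parking (number of spots)

weights = np.array([150.0, 5000.0, 10000.0, 8000.0])  # w1..w4 (assumed)
bias = 20000.0                                        # b (assumed)

# price = w1*X1 + w2*X2 + w3*X3 + w4*X4 + b
price = features @ weights + bias
print(price)  # → 268000.0
```

In practice, the weights and bias are not hand-picked like this; they are learned from data, which is what the cost function and gradient descent below are for.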

#### Cost Function

The cost function determines how good our model is at making predictions for a given set of parameters (w and b). To train w and b, we use the cost function: the values of w and b should be good enough that the predicted value yo is close to the actual value y, at least on the training data.

The equation of the cost function for Linear Regression with multiple variables is:

J(w, b) = (1 / 2m) × Σ (yo_i − y_i)², for i = 1, ..., m

where m is the number of training examples, yo_i is the predicted value and y_i is the actual value for the i-th example. The two parameters J takes are w and b, so it can also be written as J(w, b).
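A minimal sketch of this cost function, assuming the common 1/(2m) scaling (the factor of 1/2 is a convention that simplifies the gradient later):

```python
import numpy as np

def cost(X, y, w, b):
    """Mean squared error cost J(w, b) with 1/(2m) scaling."""
    m = len(y)
    predictions = X @ w + b              # yo_i for every training example
    return np.sum((predictions - y) ** 2) / (2 * m)

# Toy data: with w = [2], b = 0 the predictions match y exactly, so J = 0.
X = np.array([[1.0], [2.0]])
y = np.array([2.0, 4.0])
print(cost(X, y, np.array([2.0]), 0.0))  # → 0.0
```

A perfect fit gives a cost of zero; any mismatch between predictions and actual values makes J strictly positive.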

Now we have the value of the cost function for particular values of w and b. Our next goal is to update the values of w and b so that the value of the cost function is minimized.

#### Gradient Descent

Gradient descent helps to learn w and b in such a manner that the cost function is minimized. The cost function is convex, which means it has only one global minimum. Gradient descent tries to find this global minimum by updating the values of w and b in every iteration until the minimum is reached.

Take a look at the picture below.

Image source- https://res.cloudinary.com

In the above image, for the initial values of w and b we are at the first cross, far away from the global minimum (the least possible value of the cost function). Gradient descent keeps updating the values of w and b so that the cross moves downhill. When we reach the global minimum, the corresponding values of w and b are the final parameters used by the trained Linear Regression model. This is how gradient descent works.
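The downhill process above can be sketched as batch gradient descent on J(w, b). The learning rate, iteration count and synthetic data below are illustrative choices, not prescriptions:

```python
import numpy as np

def gradient_descent(X, y, learning_rate=0.05, num_iters=5000):
    """Learn w and b by repeatedly stepping opposite the gradient of J(w, b)."""
    m, n = X.shape
    w = np.zeros(n)
    b = 0.0
    for _ in range(num_iters):
        error = X @ w + b - y                    # residuals yo_i - y_i
        w -= learning_rate * (X.T @ error) / m   # dJ/dw
        b -= learning_rate * np.sum(error) / m   # dJ/db
    return w, b

# Synthetic data generated from y = 3x + 1 (no noise), so the true
# parameters are w = [3.0], b = 1.0.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = 3.0 * X[:, 0] + 1.0
w, b = gradient_descent(X, y)
print(w, b)  # w approaches [3.0], b approaches 1.0
```

Because the cost surface is convex, every step moves the parameters toward the single global minimum; too large a learning rate can overshoot and diverge, while too small a rate makes convergence slow.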

#### Conclusion

In this article, we learned about Linear Regression in Machine Learning, Linear Regression with multiple variables, the cost function and gradient descent. We considered only the case of four variables; in real-world predictions like weather forecasting or stock price prediction, the final output depends on many variables, and we need to handle all of them to build an efficient model. We also discovered how to find optimal values of the parameters w and b using the cost function and gradient descent.

We hope you gained the understanding you were looking for. Do reach out to us with queries on our AI-dedicated discussion forum and get your query resolved within 30 minutes.

Like the blog? Then share it with your friends and colleagues to make this AI community stronger.

To learn more about the nuances of Artificial Intelligence, Python Programming, Deep Learning, Data Science and Machine Learning, visit our insideAIML blog page.

Keep Learning. Keep Growing.