
Guide to Machine Learning Techniques

Anmol Sharma


Table of Contents
  • Introduction
  • Machine Learning Types
  • Machine Learning Techniques
  • Conclusion

Introduction

          Have you ever wondered how machine learning solves problems in almost every domain? Machine learning is a hot topic nowadays. Most of the gadgets and technologies we use, like mobile phones, social media, voice assistants, and e-commerce platforms, rely on ML techniques. Researchers and engineers discover new techniques very frequently, because each technique either solves a unique problem or improves on an earlier one. In this article, we are going to discuss the most popular ML techniques. So, without wasting any time, let's get started.

Machine Learning Types

The three major types of Machine Learning are explained below. 
  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning

Supervised Learning

          Supervised learning means training the model with labeled data, i.e., data that contains both the inputs and their corresponding output labels.

Unsupervised Learning

          In the case of unsupervised learning, the model is trained on unlabelled data, i.e., we do not know the output for a particular input value.

Reinforcement Learning

          Reinforcement learning makes the machine learn from the results of its actions, i.e., the model learns through experience whether or not to perform a particular action.

Machine Learning Techniques

          There are many ML techniques. In this article we will learn about the most common ones, listed below.
  • Regression
  • Classification
  • Clustering
  • Anomaly Detection
  • Ensemble Methods
  • Neural Networks and Deep Learning
  • Natural Language Processing

Regression

          Regression is a supervised ML technique. It is a statistical method that estimates the relationships between dependent and independent variables. It tells us how the value of the dependent variable changes when the value of an independent variable varies. The relationship is established with the help of the best-fitting line (y = mx + c, the equation of a line). Regression helps in predicting a numeric value: a regression model predicts the value Y for the given values of X.
Regression Equations:
  • Y = b0 + b1X + e (Simple Linear Regression)
  • Y = b0 + b1X1 + b2X2 + … + bnXn + e (Multiple Linear Regression)
Here, Y -> dependent variable
X, X1, X2, ..., Xn -> independent variables
b0 -> intercept of the line
b1, b2, ..., bn -> slopes of the line (coefficients)
e -> error term
Take a look at the picture below for a better understanding.
[Figure: Simple Linear Regression (image source: sthda)]
The above picture is a case of simple linear regression.
          Example: Suppose you have to predict the value of a house, and the variables you have are house_price, house_size, house_condition, and house_neighborhood. Here, house_price is the dependent variable, and the rest are independent variables, because house_price depends on them.
The regression equation for this problem will look like:
house_price = b0 + b1*house_size + b2*house_condition + b3*house_neighborhood + e
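The house-price equation above can be sketched with scikit-learn's LinearRegression. The feature values below are hypothetical, and house_neighborhood is assumed to be already encoded as a number; a real model would need far more data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: columns are house_size (sq ft),
# house_condition (1-5), house_neighborhood (encoded as a number)
X = np.array([[1000, 3, 1],
              [1500, 4, 2],
              [2000, 4, 3],
              [2500, 5, 3]])
y = np.array([200_000, 320_000, 420_000, 540_000])  # house_price

model = LinearRegression()
model.fit(X, y)

print("Intercept (b0):", model.intercept_)
print("Slopes (b1, b2, b3):", model.coef_)
print("Predicted price:", model.predict(np.array([[1800, 4, 2]]))[0])
```

The fitted `intercept_` and `coef_` are exactly the b0 and b1..b3 terms of the equation above.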

Classification

          Classification is also a supervised ML technique. It identifies which category a given example falls under. In classification, the output/prediction/dependent variable (Y) is categorical, and the independent variables (Xn) can be numerical or categorical.
          Example: We have a dataset of emails and we have to classify them as spam or not spam. Here, Y (the output) is a categorical variable, i.e., spam or not spam, and X (the input variables) can be both categorical and numeric, say email_sender (categorical) and email_frequency (numeric).
Take a look at the image below to understand the basic idea of Classification.
[Figure: Classification (image source: medium)]
Below are some of the common Classification Algorithms.
  • Logistic Regression
  • SVM
  • KNN
  • Naive-Bayes
  • Decision Trees
          All these algorithms follow different approaches, but their aim is common: to classify the given examples/data points. Both classification and regression are supervised learning techniques.
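As a minimal sketch of the spam example, here is Logistic Regression in scikit-learn. The features and their values are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per email: [email_frequency, num_links, contains_offer_word]
X = np.array([[1, 0, 0], [2, 1, 0], [3, 0, 0],      # not spam
              [30, 8, 1], [25, 6, 1], [40, 9, 1]])  # spam
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = not spam, 1 = spam

clf = LogisticRegression()
clf.fit(X, y)

# A new email with high frequency and many links lands on the spam side
print(clf.predict(np.array([[35, 7, 1]])))
```

Swapping `LogisticRegression` for `SVC`, `KNeighborsClassifier`, or `DecisionTreeClassifier` exercises the other algorithms in the list with the same fit/predict pattern.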

Clustering

          Clustering is an unsupervised ML technique. A clustering model groups the given unlabeled data points such that points within a cluster are alike and unlike the points in other clusters. Clustering aims to find patterns in unlabeled data in order to form these groups. It resembles the real world, where we don't have predefined solutions for every problem and must develop a method to find the optimal one. There are a number of clustering types we can use to perform clustering.
Take a look at the picture below to see how clustering works.
[Figure: Clustering (image source: Techofide)]
          Here, we have a basket of elements, and we cluster them based on color, shape, and size. You can see how alike elements are placed in one cluster and unlike elements in different clusters.
Below are some of the common Clustering Algorithms.
  • K-means Clustering
  • Hierarchical Clustering
  • Mean-Shift Clustering
  • DBSCAN
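As a small sketch of K-means clustering with scikit-learn — the points below are made-up values forming two visibly separate groups, and no labels are given to the model:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled points: two loose groups, one near (1, 1) and one near (8, 8)
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [8.1, 7.9], [7.9, 8.0]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)  # cluster index (0 or 1) for each point
print(labels)
```

The model assigns the first three points to one cluster and the last three to the other, purely from the structure of the data.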

Anomaly Detection

          Anomaly detection is the process of detecting items, events, or observations that differ from the majority of the data by a huge margin. Such data points are called anomalies, and the technique of detecting them is anomaly detection. We can build models based on historical data, social media information, and other data sources to prevent fraud, for example.
Take a look at the picture below to see what an anomaly looks like.
[Figure: Anomaly Detection (image source: medium)]
Below are some of the common algorithms for anomaly detection using machine learning.
  • SVM
  • KNN
  • K-means
  • LOF
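As a minimal sketch of the LOF approach from the list above, using scikit-learn's LocalOutlierFactor on made-up data with one obvious anomaly:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical data: four points near (1, 1) plus one extreme observation
X = np.array([[0.9, 1.0], [1.1, 0.9], [1.0, 1.1], [0.95, 1.05],
              [9.0, 9.0]])  # the last point differs by a huge margin

lof = LocalOutlierFactor(n_neighbors=3)
labels = lof.fit_predict(X)  # -1 marks anomalies, 1 marks normal points
print(labels)
```

LOF flags the point at (9, 9) because its local density is far lower than that of its neighbors.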

Ensemble Methods

          Ensemble methods are among the most effective ML techniques. An ensemble combines many predictive models into an optimal final model whose predictions are better than those of any single constituent model. In other words, it converts a group of weak learners into a strong learner: the individual predictive models are the weak learners, and the final high-accuracy model is the strong learner. Ensemble methods are a particular boost for tree-based models. They come in three main types: bagging, boosting, and stacking.
The picture below describes the working of Ensemble methods.
[Figure: Ensemble Methods (image source: BD Tech Talks)]
Below are some of the common Ensemble Methods Algorithms.
  • Random Forest
  • XGBoost
  • Gradient Boosting
  • AdaBoost
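A Random Forest is a bagging ensemble of decision trees; a brief sketch with scikit-learn on a synthetic dataset (the dataset parameters are arbitrary choices for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic dataset purely for illustration
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Bagging: 100 decision trees (weak learners) each see a bootstrap sample
# of the data, and their votes form the final strong learner
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

print("Weak learners combined:", len(forest.estimators_))
print("Training accuracy:", forest.score(X, y))
```

The boosting algorithms in the list (XGBoost, Gradient Boosting, AdaBoost) follow the same fit/predict pattern but train their trees sequentially, each one correcting the errors of the previous.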

Neural Networks and Deep Learning 

          Neural networks are a machine learning technique that focuses on finding non-linear relationships in data, which many other machine learning techniques fail to capture. They are inspired by the structure of the human brain. Deep learning models learn from data using neural networks, which is why neural networks are considered the heart of deep learning. A simple neural network has three layers: the input layer, where the model is fed with data (labeled or unlabeled); the hidden layer, where computation takes place; and the output layer, which provides the model's prediction.
Below is the image of a simple Neural Network.
[Figure: Simple Neural Network]
Below are some of the common optimization algorithms used to train deep learning models.
  • Gradient Descent
  • Momentum
  • Adagrad
  • AdaDelta
  • Adam
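The three-layer idea can be sketched with scikit-learn's MLPClassifier on XOR, a classic non-linear problem that no single linear model can solve (the hidden-layer size here is an arbitrary choice):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# XOR truth table: the output is 1 only when exactly one input is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # input layer: 2 features
y = np.array([0, 1, 1, 0])

# One hidden layer with 8 units sits between the input and output layers
mlp = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=5000, random_state=0)
mlp.fit(X, y)
print(mlp.predict(X))
```

The hidden layer is what gives the model its non-linear capacity; without it, this problem is unlearnable.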

Natural Language Processing

          Imagine if computers could completely understand human language. We have not reached that stage yet, but we can train computers to do certain language tasks using NLP. NLP is a branch of Artificial Intelligence that helps computers understand, generate, and process text, and that prepares text for machine learning. Chatbots, virtual assistants, and auto-correction are all powered by NLP.
          We need to process text data before training a machine learning model on it. NLTK is one of the most popular NLP packages for preparing text, and scikit-learn also provides utilities for turning text into model-ready features.
The image below represents the workflow of Natural Language Processing.
[Figure: Workflow of Natural Language Processing (image source: ResearchGate)]

Conclusion

          In this article, we learned about common ML techniques. We covered classification and regression, which are supervised ML techniques; clustering, which is an unsupervised ML technique; and anomaly detection, ensemble methods, neural networks, and NLP. We also discussed the machine learning algorithms used to implement each of these techniques.
We hope you gained the understanding you were looking for. Do reach out to us with queries on our AI-dedicated discussion forum and get your query resolved within 30 minutes.
      
Enjoyed reading this blog? Then why not share it with others and help us make this AI community stronger.
To learn more about such concepts related to Artificial Intelligence, visit our blog page.
You can also ask direct queries related to Artificial Intelligence, Deep Learning, Data Science and Machine Learning on our live discussion forum.
Keep Learning. Keep Growing. 
