Market Basket Analysis or Association Rules or Affinity Analysis or Apriori Algorithm

First of all, if you are not familiar with the concept of Market Basket Analysis (MBA), Association Rules or Affinity Analysis and related metrics such as Support, Confidence and Lift, please read this article first.

Here is how we can do it in Python. We will look at two examples-

Example 1-

The data used for this example can be found here: Retail_Data.csv

[Code screenshots: Market Basket Analysis, Example 1]
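
Here is a minimal runnable sketch of Example 1 using the mlxtend library; the column names InvoiceNo and Item are assumptions about Retail_Data.csv and may need to be adjusted to match the actual file:

```python
# A minimal Market Basket Analysis sketch with mlxtend.
# InvoiceNo and Item are assumed column names for Retail_Data.csv -- adjust as needed.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

retail = pd.read_csv("Retail_Data.csv")

# One row per transaction, one boolean column per item
basket = retail.groupby(["InvoiceNo", "Item"]).size().unstack(fill_value=0) > 0

# Frequent itemsets with at least 1% support
frequent_itemsets = apriori(basket, min_support=0.01, use_colnames=True)

# Association rules filtered on lift
rules = association_rules(frequent_itemsets, metric="lift", min_threshold=1.0)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]].head())
```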

Example 2-

[Code screenshots: Market Basket Analysis, Example 2]

Cheers!

Linear Discriminant Analysis (LDA) with Scikit

Linear Discriminant Analysis (LDA) is similar to Principal Component Analysis (PCA) in that it reduces dimensionality. However, there are certain nuances with LDA that we should be aware of-

  • LDA is supervised (it needs a categorical dependent variable) and finds the linear combinations of the original variables that provide the maximum separation among the different groups. PCA, on the other hand, is unsupervised
  • LDA can be used for classification as well, whereas PCA is generally used for unsupervised learning
  • LDA doesn’t need the number of discriminants to be passed in ahead of time. Generally speaking, the number of discriminants will be the lower of the number of variables and the number of categories minus 1
  • LDA is more robust and, in certain cases, can be conducted without even standardizing or normalizing the variables
  • LDA is preferred for bigger data sets and machine learning

Let the action begin now-

[Code screenshots: Linear Discriminant Analysis with Scikit]
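
As a compact, runnable reference, here is an LDA sketch on the Iris data (the screenshots above may use a different dataset), showing both the dimensionality reduction and the classification use:

```python
# A compact LDA sketch on the Iris data (4 variables, 3 classes).
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# At most min(number of variables, number of classes - 1) = 2 discriminants here
lda = LinearDiscriminantAnalysis(n_components=2)
X_train_lda = lda.fit_transform(X_train, y_train)

print(lda.explained_variance_ratio_)   # separation captured by each discriminant
print(lda.score(X_test, y_test))       # LDA used directly as a classifier
```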

Cheers!

Principal Component Analysis (PCA) using Scikit

Principal Component Analysis (PCA) is generally used as an unsupervised algorithm for reducing data dimensionality, with applications in addressing the curse of dimensionality, detecting outliers, removing noise, speech recognition and other such areas.

The underlying algorithm in PCA is generally a linear algebra technique called Singular Value Decomposition (SVD). PCA takes the original data and creates orthogonal (uncorrelated) components that capture the information contained in the original data with a significantly smaller number of components.

Either the components themselves or the key loadings of the components can be plugged into any further modeling work in place of the original data, to minimize information redundancy and noise.

There are three main ways to select the right number of components-

  1. The selected components should explain at least 80% of the original data variance or information [preferred one]
  2. The eigenvalue of each retained component should be greater than or equal to 1, meaning each component should express at least one variable’s worth of information
  3. Elbow or scree method: look at the percentage of variance explained by each component and keep the components up to the point where an elbow or kink is visible

You can use any one of the above, or a combination of them, to select the right number of components. It is critical to standardize or normalize the data before conducting PCA.

In the case study below we will use the first criterion shown above, i.e. the selected number of components should explain 80% or more of the original data variance.

[Code screenshots: Principal Component Analysis with Scikit]
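
Here is a minimal sketch of that criterion, using the Iris data as a stand-in for the dataset in the screenshots:

```python
# Select the smallest number of components explaining at least 80% of the variance.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)

# Standardize first -- PCA is sensitive to the scale of the variables
X_std = StandardScaler().fit_transform(X)

pca = PCA().fit(X_std)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.argmax(cum_var >= 0.80) + 1)
print(cum_var, n_components)

# Refit keeping only the selected components
X_reduced = PCA(n_components=n_components).fit_transform(X_std)
```

Scikit-learn can also do this in one step by passing a fraction, e.g. PCA(n_components=0.80).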

Decision Tree using Python Scikit

If you are not familiar with Decision Trees, please read this article first.

First let’s look at a very simple example on the Iris data-

Decision Tree in Python
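
Here is a minimal runnable sketch of this simple example; max_depth=3 is just an illustrative setting, not necessarily what the screenshots use:

```python
# A simple decision tree on the Iris data; max_depth=3 is an illustrative choice.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

print(tree.score(X_test, y_test))                           # accuracy on held-out data
print(export_text(tree, feature_names=iris.feature_names))  # readable if/else rules
```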

Now let’s look at slightly more complex data-

Let’s first build a logistic regression model in Python using the machine learning library Scikit. Please read here about the dataset and dummy coding.

[Code screenshots: logistic regression on the churn data]

[Code screenshots: decision tree on the churn data]

 

 

Cheers!

Categorical Variables Dummy Coding

Converting categorical variables into numerical dummy coded variables is generally a requirement for machine learning libraries such as Scikit, as they mostly work on NumPy arrays.

In this blog, let’s look at how we can convert a bunch of categorical variables into numerical dummy coded variables using four different methods-

  1. Scikit-learn preprocessing LabelEncoder
  2. Pandas get_dummies
  3. Looping
  4. Mapping

We will work with a dataset from IBM Watson blog as this has plenty of categorical variables. You can find the data here.  In this data, we are trying to build a model to predict “churn”, which has two levels “Yes” and “No”.

We will convert the dependent variable using Scikit’s LabelEncoder and the independent categorical variables using Pandas get_dummies. Please note that LabelEncoder does not create additional columns, whereas get_dummies creates one additional column per category level. We will see that in the example below-

[Code screenshots: dummy coding the churn data]
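
Here is a small runnable sketch of that split; “Churn” is the dependent variable from the data, while the other columns below are illustrative placeholders rather than the exact ones used in the screenshots:

```python
# LabelEncoder for the dependent variable, get_dummies for the independent ones.
# "Churn" is the target from the IBM Watson data; the other columns are illustrative.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

churn = pd.DataFrame({
    "Churn":    ["Yes", "No", "No", "Yes"],
    "Contract": ["Month-to-month", "One year", "Two year", "Month-to-month"],
    "Partner":  ["Yes", "No", "Yes", "No"],
})

# Dependent variable: stays a single column (Yes/No -> 1/0)
y = LabelEncoder().fit_transform(churn["Churn"])

# Independent variables: one extra 0/1 column per category level
X = pd.get_dummies(churn.drop(columns=["Churn"]))

print(y)
print(X.head())
```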

Here are a few other ways to do dummy coding-

[Code screenshots: dummy coding with looping and mapping]
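
As a quick reference, here is a sketch of the looping and mapping approaches from the list above, on a small illustrative frame:

```python
# Mapping and looping approaches to dummy coding on an illustrative frame.
import pandas as pd

df = pd.DataFrame({"Churn":  ["Yes", "No", "Yes"],
                   "Gender": ["Male", "Female", "Female"]})

# Mapping: an explicit dictionary per column
df["Churn_num"] = df["Churn"].map({"Yes": 1, "No": 0})

# Looping: convert each listed object column to integer category codes
for col in ["Gender"]:
    df[col + "_code"] = df[col].astype("category").cat.codes

print(df)
```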

Here is an excellent Kaggle Kernel for detailed feature engineering.

Cheers!

Hierarchical Clustering with Python

As highlighted in the article, clustering and segmentation play an instrumental role in Data Science. In this blog, we will show you how to build a hierarchical clustering model with Python.

For this purpose, we will work with an R dataset called “cheese”. Please install the R package “bayesm” and export this dataset in CSV format so that it can be imported into Python. More on this dataset can be found here.

Let’s begin with the clustering in Python then.

[Code screenshots: hierarchical clustering in Python and the resulting dendrogram]
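
In outline, the steps look roughly like the sketch below; the column names VOLUME, DISP and PRICE are assumptions about the exported bayesm cheese data and may need adjusting:

```python
# Hierarchical (Ward) clustering on the exported cheese data.
# VOLUME, DISP and PRICE are assumed column names for the bayesm export.
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from sklearn.preprocessing import StandardScaler

cheese = pd.read_csv("cheese.csv")
X = StandardScaler().fit_transform(cheese[["VOLUME", "DISP", "PRICE"]])

# Ward linkage on the standardized variables
Z = linkage(X, method="ward")

# Dendrogram to eyeball a sensible number of clusters
dendrogram(Z, truncate_mode="lastp", p=20)
plt.show()

# Cut the tree into, say, 4 clusters
cheese["cluster"] = fcluster(Z, t=4, criterion="maxclust")
print(cheese["cluster"].value_counts())
```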

Cheers!

Python Machine Learning Linear Regression with Scikit-learn

Linear regression is one of the most fundamental machine learning techniques in Python. For more on linear regression fundamentals click here. In this blog, we will build a regression model to predict house prices using independent variables such as crime rate, % lower status population, quality of schools etc. We will be leveraging the Scikit-learn library and its built-in dataset called “Boston”.

Let’s now jump onto how to build a multiple linear regression model in Python.

Import packages and Boston dataset

Image 1- Importing Packages and Boston Dataset

Explore Boston Dataset

Image 2- Explore Boston Dataset

Creating Features and Labels and Running Correlations

Image 3- Creating Features and Labels and Running Correlations

Creating Features and Labels and Running Correlation Heatmap

Image 4- Creating Features and Labels and Running Correlation Heatmap

Test/Train Split, Linear Regression Model Fitting and Model Evaluation

Image 5- Test/Train Split, Linear Regression Model Fitting and Model Evaluation

Appending Predicted Data and Plotting the Errors

Image 6- Appending Predicted Data and Plotting the Errors
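
The workflow in Images 1-6 condenses into roughly the following sketch; note that load_boston was removed in scikit-learn 1.2, so a different dataset has to be substituted on newer versions:

```python
# A condensed sketch of the workflow shown in Images 1-6.
# load_boston was removed in scikit-learn 1.2; substitute another dataset if needed.
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

boston = load_boston()
X = pd.DataFrame(boston.data, columns=boston.feature_names)  # CRIM, LSTAT, PTRATIO, ...
y = boston.target                                            # median house price

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = LinearRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)

print(r2_score(y_test, y_pred))            # R-squared on the test set
print(mean_squared_error(y_test, y_pred))  # MSE on the test set

# Append predictions and errors for plotting
results = X_test.copy()
results["actual"] = y_test
results["predicted"] = y_pred
results["error"] = results["actual"] - results["predicted"]
```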

You can see from the above metrics that, overall, this plain vanilla regression model is doing a decent job. However, it can be improved significantly, either through feature engineering such as binning and fixes for multicollinearity and heteroscedasticity, or by leveraging more robust techniques such as Elastic Net, Ridge Regression, SGD Regression or non-linear models.

Mean Squared Error (MSE)

Image 7- Mean Squared Error (MSE) Definition
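
With y_i the actual value, ŷ_i the predicted value and n the number of observations, MSE is defined as:

\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2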

Mean Absolute Percent Error (MAPE)

Image 8- Mean Absolute Percent Error (MAPE)
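
Using the same notation, MAPE averages the absolute percentage errors:

\mathrm{MAPE} = \frac{100}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|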

Model Evaluation Metrics

Fitting Linear Regression Model using Statsmodels

Image 9- Fitting Linear Regression Model using Statsmodels

OLS Regression Output

Image 10- OLS Regression Output
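
Here is a minimal sketch of the Statsmodels fit behind this kind of output (again on the Boston data, with the same caveat about scikit-learn 1.2):

```python
# Fit the same regression with Statsmodels to get the full OLS summary output.
import pandas as pd
import statsmodels.api as sm
from sklearn.datasets import load_boston

boston = load_boston()
X = pd.DataFrame(boston.data, columns=boston.feature_names)
y = boston.target

X_const = sm.add_constant(X)           # Statsmodels needs an explicit intercept term
ols_model = sm.OLS(y, X_const).fit()
print(ols_model.summary())             # coefficients, p-values, R-squared, AIC, etc.
```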

Fitting Linear Regression Model with Significant Variables

Image 11- Fitting Linear Regression Model with Significant Variables

Heteroscedasticity Consistent Linear Regression Estimates

Image 12- Heteroscedasticity Consistent Linear Regression Estimates
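
A short, hedged sketch of the robust fit, reusing X_const and y from the Statsmodels sketch above; HC3 is just one of the HC0 to HC3 estimators available:

```python
# Refit with a heteroscedasticity-consistent covariance estimator (HC3 shown here),
# reusing X_const and y defined in the Statsmodels sketch above.
robust_model = sm.OLS(y, X_const).fit(cov_type="HC3")
print(robust_model.summary())   # same coefficients, robust standard errors and p-values
```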

More details on the metrics can be found at the below links-

Wiki

Here is a blog with an excellent explanation of all the metrics

Cheers!