Machine Learning Engineer Nanodegree

Unsupervised Learning

Project: Creating Customer Segments

By Michael Eryan

Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can typically be edited by double-clicking the cell to enter edit mode.

Getting Started

In this project, you will analyze a dataset containing various customers' annual spending amounts (reported in monetary units) across diverse product categories, looking for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.

The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.

Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.

In [3]:
# Python 3.6
# Import libraries necessary for this project
# ME: some additional stuff that I need is imported here

import sys
print ("\n Python version is:", sys.version_info, "\n")

import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames

from matplotlib import cm
import matplotlib.pyplot as plt
import seaborn as sns

import warnings
warnings.filterwarnings("ignore", category = UserWarning, module = "sklearn")

from sklearn.cross_validation import train_test_split  # note: deprecated; in scikit-learn 0.18+ this lives in sklearn.model_selection (cross_validation was removed in 0.20)
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import export_graphviz

from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.metrics import silhouette_samples
import graphviz 

# Import supplementary visualizations code visuals.py
import visuals as vs

# Pretty display for notebooks
%matplotlib inline
 Python version is: sys.version_info(major=3, minor=6, micro=2, releaselevel='final', serial=0) 

In [5]:
# Load the wholesale customers dataset
try:
    data = pd.read_csv("customers.csv")
    data.drop(['Region', 'Channel'], axis = 1, inplace = True)
    print ("\n Wholesale customers dataset has {} samples with {} features.".format(*data.shape))
    print ("Each row is one customer's annual spending on various products in terms of monetary units (MU).")
except:
    print ("Dataset could not be loaded. Is the dataset missing?")
 Wholesale customers dataset has 440 samples with 6 features.
Each row is one customer's annual spending on various products in terms of monetary units (MU).

Data Exploration

In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.

Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.

In [6]:
# Display a description of the dataset

print ("\n List of features (products)")
print (data.dtypes)
print ("\n Quick summary of the data")
print (data.describe())
print ("\n Eyeball one row of the data")
print (data.head(n=1))
 List of features (products)
Fresh               int64
Milk                int64
Grocery             int64
Frozen              int64
Detergents_Paper    int64
Delicatessen        int64
dtype: object

 Quick summary of the data
               Fresh          Milk       Grocery        Frozen  \
count     440.000000    440.000000    440.000000    440.000000   
mean    12000.297727   5796.265909   7951.277273   3071.931818   
std     12647.328865   7380.377175   9503.162829   4854.673333   
min         3.000000     55.000000      3.000000     25.000000   
25%      3127.750000   1533.000000   2153.000000    742.250000   
50%      8504.000000   3627.000000   4755.500000   1526.000000   
75%     16933.750000   7190.250000  10655.750000   3554.250000   
max    112151.000000  73498.000000  92780.000000  60869.000000   

       Detergents_Paper  Delicatessen  
count        440.000000    440.000000  
mean        2881.493182   1524.870455  
std         4767.854448   2820.105937  
min            3.000000      3.000000  
25%          256.750000    408.250000  
50%          816.500000    965.500000  
75%         3922.000000   1820.250000  
max        40827.000000  47943.000000  

 Eyeball one row of the data
   Fresh  Milk  Grocery  Frozen  Detergents_Paper  Delicatessen
0  12669  9656     7561     214              2674          1338

Implementation: Selecting Samples

To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.

In [7]:
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [47, 327, 183]
# OK, so these cannot include the outliers that will be dropped!

# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print ("Chosen samples of wholesale customers dataset:")
print (samples)
print ("\nOrder of the samples is preserved: sample 0 is observation 47, 1 is 327, and 2 is 183.")
Chosen samples of wholesale customers dataset:
   Fresh   Milk  Grocery  Frozen  Detergents_Paper  Delicatessen
0  44466  54259    55571    7782             24171          6465
1    542    899     1664     414                88           522
2  36847  43950    20170   36534               239         47943

Order of the samples is preserved: sample 0 is observation 47, 1 is 327, and 2 is 183.

Question 1

Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.

  • What kind of establishment (customer) could each of the three samples you've chosen represent?

Hint: Examples of establishments include places like markets, cafes, delis, wholesale retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant. You can use the mean values for reference to compare your samples with. The mean values are as follows:

  • Fresh: 12000.2977
  • Milk: 5796.2
  • Grocery: 7951.3
  • Frozen: 3071.9
  • Detergents_paper: 2881.4
  • Delicatessen: 1524.8

Knowing this, how do your samples compare? Does that help in driving your insight into what kind of establishments they might be?
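
As a quick optional check (a minimal sketch, assuming the data and samples DataFrames defined above and that SciPy is available), the percentile rank of each sample within the full dataset makes the comparison against the means and quartiles explicit:

# Optional sanity check (sketch): where does each sample fall within the full dataset?
from scipy.stats import percentileofscore

percentile_ranks = pd.DataFrame(
    [[percentileofscore(data[col], samples.loc[i, col]) for col in data.columns]
     for i in samples.index],
    columns = data.columns)
print ("Percentile rank of each sample within the dataset (0-100):")
print (percentile_ranks.round(1))

print ("\nSample spending minus the dataset mean (MU):")
print ((samples - data.mean()).round(1))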

Answer 1:

47:

Supermarket. This customer purchases well above the mean amounts of Milk and Grocery, but also Detergents_Paper, which makes it look like a supermarket. It looks like a one-stop destination for everything you need to cook and do your chores on the weekend. It also has the highest Milk spending in the whole dataset. Perhaps it hosts a coffee shop or supplies other coffee shops or creameries.

327:

Convenience store. This customer has the smallest overall volume of purchases, with all amounts below the means. It seems like a very small convenience store or a gas station, maybe family owned.

183:

High-end food market. This customer spends well above average on most of the edibles and below average on Detergents_Paper. Also striking is the highest value for Delicatessen, which makes me think this is a kind of high-end food market or a triple-dollar-sign restaurant catering to health-conscious gourmands.

Overall, there is clearly great diversity among these customers. Given such diversity in spending, it would make sense that their delivery preferences also differ widely.

Therefore, treating these customers the same way in terms of delivery times and frequencies would be unwise. Our wholesaler switched all customers from daily morning deliveries to evening deliveries three times a week. I can imagine this would be a problem for the supermarket and the high-end food market described above, but would probably be acceptable for the convenience store.

Implementation: Feature Relevance

One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.

In the code block below, you will need to implement the following:

  • Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
  • Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
    • Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
  • Import a decision tree regressor, set a random_state, and fit the learner to the training data.
  • Report the prediction score of the testing set using the regressor's score function.
In [8]:
# ME: Dimensionality reduction
# Test a feature as a dependent variable of the other features in a regression tree
# Also, we can create a correlation matrix to see which feature could be excluded from further analysis because it does not explain any variation beyond what is already explained by the other features.
# Risk of this approach: potential interactions. It is possible that a feature does not seem to add value individually but does when combined with another feature.
# Because of this, PCA would be a better approach to reduce dimensionality.
In [9]:
# Correlation matrix first
#Pearson correlation heatmap 
corr = np.corrcoef(data.values.T)
heatm = sns.heatmap(corr,
                 cbar=True,
                 annot=True,
                 square=True,
                 fmt='.2f',
                 annot_kws={'size': 10},
                 yticklabels=list(data),
                 xticklabels=list(data))

plt.title("Correlation Heat Map")
plt.tight_layout()
plt.show()

ME: Notes about the correlation matrix.

Notice the highest correlation, Grocery vs Detergents_Paper = 0.92. So one of these two is a candidate for elimination. Also notice the runners-up: Grocery's correlation with Milk is 0.73, while Detergents_Paper's with Milk is 0.66. This makes me want to drop Grocery rather than Detergents_Paper, because the latter adds more to explaining the variation in the data.
Let's also look at the scatterplots of these three before making the final decision.

In [10]:
# Scatterplot 
sns.pairplot(data[['Grocery','Milk','Detergents_Paper']], size=2.5)  # note: 'size' was renamed to 'height' in seaborn 0.9+

plt.title("Scatterplot Matrix")
plt.tight_layout()
plt.show()

ME: Notes about the scatterplot

Yes, I see a little more diversity in the Detergents_Paper vs Milk scatterplot. So, looking at both the correlations and the scatterplots, I choose to drop Grocery from the features because it adds the least marginal explanatory power.

In [11]:
# In[]: Model Grocery as the target variable
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop('Grocery', axis=1)

# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Grocery'], test_size=0.25, random_state=3)

# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=3, max_depth = 3).fit(X_train, y_train)

# TODO: Report the score of the prediction using the testing set - this score is actually R^2
score = regressor.score(X_test, y_test)
print ("Score on test data for target variable = Grocery: ", round(score,2))
Score on test data for target variable = Grocery:  0.73

ME: discussion of the regression tree scoring.

Score=0.73 is a pretty high score, meaning the variation in Grocery can be reasonably well explained by the other features.

Thus, my earlier decision to drop this feature seems to have been justified as this feature adds the least to explaining variation in the data.

In [12]:
# Maybe print the actual decision tree chart - looks cute?
#Graph the actual tree to understand better 
tree_graph = export_graphviz(regressor,
                      out_file=None,
                      max_depth = 3,
                      impurity = True,
                      feature_names = list(X_train),
                      rounded = True,
                      filled= True )

graph = graphviz.Source(tree_graph)  
graph 

#looks cute but not as intuitive as a classification tree usually is
Out[12]:
[Rendered regression tree (depth 3): the root splits on Detergents_Paper <= 8319.5; deeper splits use Detergents_Paper and Milk thresholds, with leaf predictions for Grocery ranging from roughly 4,031 to 92,780 MU.]

Question 2

  • Which feature did you attempt to predict?
  • What was the reported prediction score?
  • Is this feature necessary for identifying customers' spending habits?

Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. If you get a low score for a particular feature, that leads us to believe that the feature is hard to predict using the other features, thereby making it an important feature to consider when determining relevance.
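
For reference, the score returned by the regressor's score function is the coefficient of determination:

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$$

where $\hat{y}_i$ are the model's predictions and $\bar{y}$ is the mean of the observed target values.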

Answer 2:

Based on the initial assessment of the correlation matrix and scatterplots I picked Grocery as a candidate for exclusion as a feature.

After modeling Grocery in a decision tree, it appeared that my choice was justified - a high R^2 score of 0.73 means that the remaining features explain most of the variation in Grocery, meaning that it does not add that much more useful variation to the data set.

Therefore, I conclude that, among the features, Grocery is not really necessary because it adds the least marginal value for identifying customers' spending habits and can be excluded from the model.

Visualize Feature Distributions

To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.

In [13]:
# Produce a scatter matrix for each pair of features in the data
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');

Question 3

  • Using the scatter matrix as a reference, discuss the distribution of the dataset, specifically talking about the normality, outliers, and the large number of data points near 0, among others. If you need to separate out some of the plots individually to further accentuate your point, you may do so as well.
  • Are there any pairs of features which exhibit some degree of correlation?
  • Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict?
  • How is the data for those features distributed?

Hint: Is the data normally distributed? Where do most of the data points lie? You can use corr() to get the feature correlations and then visualize them using a heatmap (the data fed into the heatmap would be the correlation values, e.g. data.corr()) to gain further insight.

Answer 3:

The diagonal plots show that the variables are not normally distributed but are skewed (right-tailed) - most values are close to zero with a handful of outliers on the right. The scatter plots help us get a feel for the correlation strengths.

Usually, in such cases a log transformation helps to normalize the data before modeling. Remember that a log transformation creates a "compressed" scale. My favorite example is measuring distances in the cosmos: with base-10 logarithms, each successive equal-sized interval on the graph represents ten times the distance of the previous interval. Thus, by log-transforming we "compress" the long tails and make the distribution look closer to normal.

Yes, as is obvious from the scatterplot matrix and shown in the correlation matrix above, Grocery, Milk and Detergents_Paper are correlated with each other, which means they capture a lot of the same variation in the data, and at least one of them can be excluded without losing much explanatory power while saving us a degree of freedom in the model.

Given that Detergents_Paper is slightly less correlated with Milk than Grocery is, it makes sense to keep Detergents_Paper and drop Grocery, which adds the least marginal value to explaining the data.

This confirms my previous guess that Grocery is a good candidate for elimination. Grocery has a skewed distribution like the other variables and would benefit from a log transformation before modeling.
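
To make the "compression" idea concrete, here is a tiny illustration (a sketch, not part of the project code): equal steps on a log scale correspond to multiplicative steps in the original units.

import numpy as np  # already imported above; repeated so the sketch is self-contained

values = np.array([10, 100, 1000, 10000, 100000])
print (np.log10(values))         # [1. 2. 3. 4. 5.] - each step is 10x larger in the original scale
print (np.log(values).round(2))  # the natural log compresses the same way, just with base e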

Data Preprocessing

In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.

Implementation: Feature Scaling

If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
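
As an aside, a Box-Cox transformation could be estimated per feature with SciPy (a hedged sketch, assuming scipy is installed; the project itself uses the simpler natural logarithm as instructed below):

from scipy import stats

# Box-Cox requires strictly positive values, which holds here (the smallest value in the data is 3)
boxcox_data = data.copy()
lambdas = {}
for col in data.columns:
    boxcox_data[col], lambdas[col] = stats.boxcox(data[col])
print ("Estimated Box-Cox lambda per feature:")
print ({col: round(lmbda, 2) for col, lmbda in lambdas.items()})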

In the code block below, you will need to implement the following:

  • Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
  • Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
In [14]:
# Implementation: Feature Scaling

# TODO: Scale the data using the natural logarithm
log_data = np.log(data)

# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
print ("\n Compare original and log transformed samples")
print ("\n Original samples")
print (samples)
print ("\n Log transformed samples")
print (log_samples)
print ("\n Remember that log is a mononotic transformation - the order of observations is preserved.")

# Produce a scatter matrix for each pair of newly-transformed features
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
# Notice now the diagonal plots look much more normal but still not quite. 
 Compare original and log transformed samples

 Original samples
   Fresh   Milk  Grocery  Frozen  Detergents_Paper  Delicatessen
0  44466  54259    55571    7782             24171          6465
1    542    899     1664     414                88           522
2  36847  43950    20170   36534               239         47943

 Log transformed samples
       Fresh       Milk    Grocery     Frozen  Detergents_Paper  Delicatessen
0  10.702480  10.901524  10.925417   8.959569         10.092909      8.774158
1   6.295266   6.801283   7.416980   6.025866          4.477337      6.257668
2  10.514529  10.690808   9.911952  10.505999          5.476464     10.777768

 Remember that log is a monotonic transformation - the order of observations is preserved.

Observation

After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).

Run the code below to see how the sample data has changed after having the natural logarithm applied to it.

In [15]:
# Display the log-transformed sample data
display(log_samples)
       Fresh       Milk    Grocery     Frozen  Detergents_Paper  Delicatessen
0  10.702480  10.901524  10.925417   8.959569         10.092909      8.774158
1   6.295266   6.801283   7.416980   6.025866          4.477337      6.257668
2  10.514529  10.690808   9.911952  10.505999          5.476464     10.777768
In [16]:
# In[]: Repeat correlations on the log transformed data
corr = np.corrcoef(log_data.values.T)
heatm = sns.heatmap(corr,
                 cbar=True,
                 annot=True,
                 square=True,
                 fmt='.2f',
                 annot_kws={'size': 10},
                 yticklabels=list(data),
                 xticklabels=list(data))

plt.title("Correlation Heat Map - for log-transformed data")
plt.tight_layout()
plt.show()

ME: observations about the correlation matrix for log-transformed variables

These correlations are more reliable because log-transformation reduced the effect of the outliers. Good to see that the relationships observed before still hold - Grocery was the right choice to exclude.

Implementation: Outlier Detection

Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.

In the code block below, you will need to implement the following:

  • Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
  • Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
  • Assign the calculation of an outlier step for the given feature to step.
  • Optionally remove data points from the dataset by adding indices to the outliers list.

NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.

In [17]:
# For each feature find the data points with extreme high or low values
outlier_factor=1.5 #times the interquartile range

for feature in log_data.keys():
    
    # TODO: Calculate Q1 (25th percentile of the data) for the given feature
    Q1 = np.percentile(log_data[feature],25)
    
    # TODO: Calculate Q3 (75th percentile of the data) for the given feature
    Q3 = np.percentile(log_data[feature],75)
    
    # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
    step = outlier_factor * ( Q3 - Q1 )
    
    # Display the outliers
    print ("\n Data points considered outliers for the feature '{}':".format(feature))
    display(log_data[ ~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))][feature]) #print just that feature
 Data points considered outliers for the feature 'Fresh':
65     4.442651
66     2.197225
81     5.389072
95     1.098612
96     3.135494
128    4.941642
171    5.298317
193    5.192957
218    2.890372
304    5.081404
305    5.493061
338    1.098612
353    4.762174
355    5.247024
357    3.610918
412    4.574711
Name: Fresh, dtype: float64
 Data points considered outliers for the feature 'Milk':
86     11.205013
98      4.718499
154     4.007333
356     4.897840
Name: Milk, dtype: float64
 Data points considered outliers for the feature 'Grocery':
75     1.098612
154    4.919981
Name: Grocery, dtype: float64
 Data points considered outliers for the feature 'Frozen':
38      3.496508
57      3.637586
65      3.583519
145     3.737670
175     3.951244
264     4.110874
325    11.016479
420     3.218876
429     3.850148
439     4.174387
Name: Frozen, dtype: float64
 Data points considered outliers for the feature 'Detergents_Paper':
75     1.098612
161    1.098612
Name: Detergents_Paper, dtype: float64
 Data points considered outliers for the feature 'Delicatessen':
66      3.295837
109     1.098612
128     1.098612
137     3.583519
142     1.098612
154     2.079442
183    10.777768
184     2.397895
187     1.098612
203     2.890372
233     1.945910
285     2.890372
289     3.091042
343     3.610918
Name: Delicatessen, dtype: float64
In [18]:
# OPTIONAL: Select the indices for data points you wish to remove
print (data.iloc[[65,66,75,128,154]])

#outliers  = [65,66,75,128,154]
#outliers  = [38, 57, 65, 66, 75, 81, 86, 95, 96, 98, 128, 145, 154, 161, 171, 175, 193, 218, 264, 304, 305, 325, 338, 353, 355, 356, 357, 412, 420, 429, 439]
#very different PC's now - but still only 2 clusters
#also can drop Grocery and use new_data

outliers  = []

# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True) 
     Fresh   Milk  Grocery  Frozen  Detergents_Paper  Delicatessen
65      85  20959    45828      36             24231          1423
66       9   1534     7417     175              3468            27
75   20398   1137        3    4407                 3           975
128    140   8847     3823     142              1062             3
154    622     55      137      75                 7             8

Question 4

  • Are there any data points considered outliers for more than one feature based on the definition above?
  • Should these data points be removed from the dataset?
  • If any data points were added to the outliers list to be removed, explain why.

Hint: If you have datapoints that are outliers in multiple categories think about why that may be and if they warrant removal. Also note how k-means is affected by outliers and whether or not this plays a factor in your analysis of whether or not to remove them.

Answer 4:

Yes, there were five data points which were outliers for more than one feature: 65 (Fresh and Frozen), 66 and 128 (Fresh and Delicatessen), 75 (Grocery and Detergents_Paper), and 154 (Milk, Grocery and Delicatessen).

Yes, I would recommend removing these data points from the dataset before modeling, because they appear to be very unusual customers and we really want to get a good sense of our average customers.

So, I decided to remove them, since they might do more harm than good during modeling by influencing the clusters.

Plus, I still have over four hundred observations in the dataset, so dropping five extreme cases is not a great loss of data. A quick programmatic check of the multi-feature outliers appears below.
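
A small sketch (reusing log_data and outlier_factor from the cells above) that enumerates the indices flagged for more than one feature programmatically, rather than by eye:

from collections import Counter

flag_counts = Counter()
for feature in log_data.keys():
    Q1 = np.percentile(log_data[feature], 25)
    Q3 = np.percentile(log_data[feature], 75)
    step = outlier_factor * (Q3 - Q1)
    mask = ~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))
    flag_counts.update(log_data[mask].index)

multi_feature_outliers = sorted(idx for idx, n in flag_counts.items() if n > 1)
print ("Outliers in more than one feature:", multi_feature_outliers)
# Based on the per-feature lists above, this should yield [65, 66, 75, 128, 154]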

Feature Transformation

In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.

Implementation: PCA

Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.

In the code block below, you will need to implement the following:

  • Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
  • Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [19]:
#ME: What I like about the PCA transformation is that I can pick the top two components and plot them. 

# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA() #will create as many components as there are features
pca.fit(good_data)
Out[19]:
PCA(copy=True, iterated_power='auto', n_components=None, random_state=None,
  svd_solver='auto', tol=0.0, whiten=False)
In [20]:
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
#The components are in the order descending importance. Reviewed below.
In [21]:
# Generate PCA results chart
pca_results = vs.pca_results(good_data, pca)

ME: observations about the PCA plot

Very nice plot - it shows each component/dimension and its ingredients. We can name the components after their main ingredients: e.g., PC1 is mostly Detergents_Paper, etc.

Notice how PCA also put Grocery in the last, least important component. So we do not really have to remove it manually if we use only the top PC's for modeling.

Question 5

  • How much variance in the data is explained in total by the first and second principal component?
  • How much variance in the data is explained by the first four principal components?
  • Using the visualization provided above, talk about each dimension and the cumulative variance explained by each, stressing upon which features are well represented by each dimension(both in terms of positive and negative variance explained). Discuss what the first four dimensions best represent in terms of customer spending.

Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.

Answer 5:

  • The top two PC's: 0.4452 + 0.2641 = 0.7093, or 70.93% of the variation. Great, we can start by just using these two.
  • The top four PC's: 0.4452 + 0.2641 + 0.1221 + 0.1005 = 0.9319, or 93.19% of the variation.

So what do the first four PC's represent?

My speculations:

  • PC1: Detergents_Paper, Grocery and Milk (same direction) - Stuff from the chores list for housekeeping?
  • PC2: Fresh, Frozen, Delicatessen (same direction) - Edible stuff.
  • PC3: Fresh - Delicatessen (opposite directions): Expiration date dimension? Makes sense that Fresh and Delicatessen go in the opposite direction as this PC increases.
  • PC4: Frozen - Delicatessen (opposite directions): Maybe also related to expiration date if some of the Delicatessen are perishable?

Interesting how Delicatessen has a significant presence in three of the PC's, which suggests there might be great variety within this group of products.
Perhaps there are both fresh/perishable and frozen/shelf-stable delicatessen items? I would expect this category to get torn between clusters.
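
A quick check of the cumulative explained variance (using the six-component pca object fitted above):

print ("Explained variance ratio per component:", np.round(pca.explained_variance_ratio_, 4))
print ("Cumulative explained variance:         ", np.round(np.cumsum(pca.explained_variance_ratio_), 4))
# The second and fourth entries of the cumulative array should be close to 0.71 and 0.93 respectively.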

Observation

Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.

In [22]:
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
   Dimension 1  Dimension 2  Dimension 3  Dimension 4  Dimension 5  Dimension 6
0      -4.3780      -3.9972      -0.2061       0.6713       0.5351       0.0226
1       2.1253       2.8804       1.5074      -0.8263       0.1195      -0.2476
2      -0.4585      -5.3459       2.6856      -0.0173       2.1850      -0.2688

ME reminder: indices = [47, 327, 183]

For 47 the first two PC's are most extreme - which makes sense as I labeled this store as a "supermarket" and, indeed, it has edible and housekeeping stuff.

For 327 the most extreme is PC2 and the others are not so far behind. So, I guess it makes sense that I called it "convenience" store.

For 183 the most extreme are PC2 and PC3: which covers more of the Fresh and Delicatessen features. Not sure how this supports my label of "high end food market."

Implementation: Dimensionality Reduction

When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.

In the code block below, you will need to implement the following:

  • Assign the results of fitting PCA in two dimensions with good_data to pca.
  • Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
  • Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [23]:
# TODO: Apply PCA by fitting the good data with only two dimensions - me: top two
pca = PCA(n_components=2)
pca.fit(good_data)

# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)

# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)

# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])   

Observation

Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.

In [24]:
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
print ("The numbers are the same because we are just keeping the top two dimensions after using all the features to create all the dimensions.")
   Dimension 1  Dimension 2
0      -4.3780      -3.9972
1       2.1253       2.8804
2      -0.4585      -5.3459
The numbers are the same because we are just keeping the top two dimensions after using all the features to create all the dimensions.
In [25]:
# Plot these first two
pd.plotting.scatter_matrix(reduced_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
#Notice the near normal distribution of both dimensions - they cover the variation in the data very well.     

Visualizing a Biplot

A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.

Run the code cell below to produce a biplot of the reduced-dimension data.

In [26]:
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Out[26]:
<matplotlib.axes._subplots.AxesSubplot at 0x2094bb9d208>

Observation

Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.

From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?

ME: observations about the PCA biplot

The red arrows show the projection of the original feature axis from the hyperplane onto the two dimensions (PC1 and PC2).

Grocery, Milk, Detergents_Paper are most strongly (inversely) correlated with Dimension 1. This was also obvious from the "pca_results" chart (multi-colored bars above).

Frozen, Fresh and Delicatessen are most strongly (inversely) correlated with Dimension 2.

Again we can see that there is something quite different about Delicatessen. From the "pca_results" chart for Dimension 2 I would not have guessed that it is different from Fresh and Frozen.

Clustering

In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.

Question 6

  • What are the advantages to using a K-Means clustering algorithm?
  • What are the advantages to using a Gaussian Mixture Model clustering algorithm?
  • Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?

Hint: Think about the differences between hard clustering and soft clustering and which would be appropriate for our dataset.

Answer 6:

Advantages to using K-Means clustering

K-means is a simple and efficient algorithm, which has made it very popular. It is a kind of prototype-based clustering: each cluster has a representative centroid, the average of all the points assigned to that cluster (not necessarily a real data point).

An important trait of K-means is that the clusters are assumed to be spherical. Also, the number of clusters is a hyperparameter that the user must specify in advance, though there are methods to help find the optimal number of clusters.

The catch with K-means is that the initial placement of the centroids matters, which is why we will be using the K-means++ initialization (already the default in scikit-learn). This option spreads the initial centroids far apart from each other (sampling new centroids with probability proportional to squared distance), which leads to more consistent results across runs.

K-means is a hard clustering algorithm, though - a point can belong to only one cluster. This may not be appropriate if we actually care about the probabilities of a point belonging to each cluster, which is why soft clustering algorithms like fuzzy K-means have been invented.

Advantages to using Gaussian Mixture Model clustering

GMM clustering is a more generalized clustering algorithm. Its advantage over K-means is that it allows elliptical clusters and soft clustering, giving the probability of each point belonging to each cluster. It is, however, more complex and more difficult to interpret than K-means.

Source: http://scikit-learn.org/stable/modules/mixture.html

Chosen algorithm

K-means (with k-means++ initialization), because it is always a good place to start. Even if some data points might fit into multiple clusters based on the features we examined above, I still want to get a general sense of the hard clusters in the data. I can always try more advanced methods later if my requirements are not met.
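
For comparison, here is a hedged sketch of how a Gaussian Mixture Model could be fit to the same reduced data (not used for the results below; assumes scikit-learn >= 0.18, where sklearn.mixture.GaussianMixture is available):

from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=2, random_state=3).fit(reduced_data)
gmm_preds = gmm.predict(reduced_data)
print ("Hard-assignment counts per GMM cluster:", np.bincount(gmm_preds))

# The soft assignments are what distinguishes GMM: probability of each sample point belonging to each cluster
print ("Cluster membership probabilities for the three sample points:")
print (np.round(gmm.predict_proba(pca_samples), 3))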

Implementation: Creating Clusters

Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.

In the code block below, you will need to implement the following:

  • Fit a clustering algorithm to the reduced_data and assign it to clusterer.
  • Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
  • Find the cluster centers using the algorithm's respective attribute and assign them to centers.
  • Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
  • Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
    • Assign the silhouette score to score and print the result.
In [27]:
# TODO: Apply your clustering algorithm of choice to the reduced data 
v_clusters=3 #my initial pick

clusterer = KMeans(n_clusters=v_clusters, 
            init='k-means++', #which is already default
            n_init=10, # 10 random starting points
            max_iter=300, # re-centering iterations
            tol=1e-04,
            random_state=3)

# TODO: Predict the cluster for each data point
preds = clusterer.fit_predict(reduced_data)  

# TODO: Find the cluster centers
centers = clusterer.cluster_centers_

# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)

# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)    
In [28]:
print("\n Centroids: ",centers)
print("\n Predicted clusters for the samples: ", sample_preds)
print("\n Silhouette score: for ", v_clusters," clusters is", round(score,2))

print ("\nHmm, my imagined convenience store is different from the other two which are in the same cluster. Let's see what is going on.")
 Centroids:  [[-2.02777106  2.31292772]
 [ 1.77574097  0.02617815]
 [-1.59883471 -1.21888719]]

 Predicted clusters for the samples:  [2 1 2]

 Silhouette score: for  3  clusters is 0.39

Hmm, my imagined convenience store is different from the other two which are in the same cluster. Let's see what is going on.
In [29]:
# Using the silhouette score - need to turn the above code into a loop to test multiple scenarios
for i in range(2,11):
    clusterer = KMeans(n_clusters=i, 
                init='k-means++', #which is already default
                n_init=10, # 10 random starting points
                max_iter=300, # re-centering iterations
                tol=1e-04,
                random_state=3)
    
    # TODO: Predict the cluster for each data point
    preds = clusterer.fit_predict(reduced_data)  
    
    # TODO: Find the cluster centers
    centers = clusterer.cluster_centers_
    
    # TODO: Predict the cluster for each transformed sample data point
    sample_preds = clusterer.predict(pca_samples)
    
    # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
    score = silhouette_score(reduced_data, preds) 
    print ("For n_clusters=",i,"silhouette score=",round(score,2)) 
For n_clusters= 2 silhouette score= 0.42
For n_clusters= 3 silhouette score= 0.39
For n_clusters= 4 silhouette score= 0.33
For n_clusters= 5 silhouette score= 0.35
For n_clusters= 6 silhouette score= 0.36
For n_clusters= 7 silhouette score= 0.36
For n_clusters= 8 silhouette score= 0.36
For n_clusters= 9 silhouette score= 0.36
For n_clusters= 10 silhouette score= 0.35

Question 7

  • Report the silhouette score for several cluster numbers you tried.
  • Of these, which number of clusters has the best silhouette score?

Answer 7:

See the silhouette scores for scenarios with 2-10 clusters above.

Highest silhouette score: 0.42 for the 2 cluster scenario - this is our winner.

Cluster Visualization

Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.

In [30]:
# ME: Hypothetical - scenario with max number of clusters: 10 - obviously not the winner scenario!

# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
In [31]:
# ME: Using the within-cluster SSE (distortion) statistic - to see the "elbow"
distortions = []
for i in range(1, 11):
    clusterer = KMeans(n_clusters=i, 
                init='k-means++', 
                n_init=10, 
                max_iter=300,
                tol=1e-04,
                random_state=3)
    clusterer.fit(reduced_data)
    distortions.append(clusterer.inertia_)
    
plt.plot(range(1, 11), distortions, marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Distortion/SSE')
plt.tight_layout()
plt.show()

print('K-means++ Distortion/SSE: ',distortions[:3])

print ("\nThe only noticeable elbow is at 2, after that it is really smooth. Concurs with sillhoute score.")
K-means++ Distortion/SSE:  [3451.5978063396501, 1968.4888204647798, 1460.2589624826132]

The only noticeable elbow is at 2; after that the curve is really smooth. This concurs with the silhouette score.
In [32]:
#ME: Predicting the cluster based on optimal number of clusters: 2 - this is the Winner scenario.
v_clusters=2
clusterer = KMeans(n_clusters=v_clusters, 
            init='k-means++', #which is already default
            n_init=10, # 10 random starting points
            max_iter=300, # re-centering iterations
            tol=1e-04,
            random_state=3)

preds = clusterer.fit_predict(reduced_data)  

centers = clusterer.cluster_centers_

sample_preds = clusterer.predict(pca_samples)

score = silhouette_score(reduced_data, preds)   

vs.cluster_results(reduced_data, preds, centers, pca_samples)
#not really pretty
In [33]:
# ME: Bonus - Silhouette plots
cluster_labels = np.unique(preds)
n_clusters = cluster_labels.shape[0]
silhouette_vals = silhouette_samples(reduced_data, preds, metric='euclidean')
y_ax_lower, y_ax_upper = 0, 0
yticks = []

for i, c in enumerate(cluster_labels):
    c_silhouette_vals = silhouette_vals[preds == c]
    c_silhouette_vals.sort()
    y_ax_upper += len(c_silhouette_vals)
    color = cm.jet(float(i) / n_clusters)
    plt.barh(range(y_ax_lower, y_ax_upper), c_silhouette_vals, height=1.0, 
             edgecolor='none', color=color)

    yticks.append((y_ax_lower + y_ax_upper) / 2.)
    y_ax_lower += len(c_silhouette_vals)
    
silhouette_avg = np.mean(silhouette_vals)
plt.axvline(silhouette_avg, color="red", linestyle="--") 

plt.yticks(yticks, cluster_labels + 1)
plt.ylabel('Cluster')
plt.xlabel('Silhouette coefficient')

plt.tight_layout()
plt.show()

ME: about the silhouette plots

Silhouette plots show how tightly grouped the samples are within the clusters. If a sample's silhouette coefficient is close to zero, the sample lies near the boundary between two clusters (and a negative value suggests it may be assigned to the wrong cluster). The dashed line shows the average coefficient.

In our case both clusters look decent but not great. There are a few samples with coefficients close to zero, meaning they sit near the cluster boundary and do not fit cleanly into either cluster. The average coefficient is fairly low, at roughly 0.42 (the silhouette score reported above). Based on this, I would say we have a fair clustering.

Reference: Raschka, Sebastian, and Vahid Mirjalili. Python Machine Learning: Machine Learning and Deep Learning with Python, Scikit-Learn, and TensorFlow. Packt, 2017.

Implementation: Data Recovery

Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.

In the code block below, you will need to implement the following:

  • Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
  • Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
In [34]:
# Find the prototypes in the MU's: transform centroids from PC's to log-transformed and then to MU's
# We want to know how the centroids look like in terms of MU's - real world values. 

# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)

# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)

# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)

print ("\nYes, the centroids are noticeably far apart from each other. ")
            Fresh    Milk  Grocery  Frozen  Detergents_Paper  Delicatessen
Segment 0  3570.0  7749.0  12463.0   900.0            4567.0         966.0
Segment 1  8994.0  1909.0   2366.0  2081.0             290.0         681.0
Yes, the centroids are noticeably far apart from each other. 

Question 8

  • Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project (specifically looking at the mean values for the various feature points). What set of establishments could each of the customer segments represent?

Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. Think about what each segment represents in terms of its values for the feature points chosen. Reference these values against the mean values to get some perspective into what kind of establishment they represent.

In [35]:
#refresh memory - just look at medians
data.median()
Out[35]:
Fresh               8504.0
Milk                3627.0
Grocery             4755.5
Frozen              1526.0
Detergents_Paper     816.5
Delicatessen         965.5
dtype: float64

Answer 8:

I am comparing against the median values of the features:

  • Segment 0 has higher-than-median Milk, Grocery and Detergents_Paper, below-median Fresh and Frozen, and almost exactly the median value for Delicatessen. This looks like a supermarket, with everything you need to complete your chores.
  • Segment 1 has slightly higher-than-median Fresh and Frozen and everything else below the median - so, more of a fresh produce store.

As I said before, these two clusters are not perfect but still give us a good sense of the customer segmentation. A small sketch comparing the segment centers to the medians follows.
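
A minimal sketch to make the comparison explicit, subtracting the overall medians from the recovered segment centers (both defined in the cells above):

# Positive values: the segment's representative customer spends above the overall median on that category
print ("Segment center minus overall median (MU):")
print ((true_centers - data.median()).round(0))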

Question 9

  • For each sample point, which customer segment from Question 8 best represents it?
  • Are the predictions for each sample point consistent with this?

Run the code block below to find which cluster each sample point is predicted to be.

In [36]:
# Display the predictions
for i, pred in enumerate(sample_preds):
    print("Sample point", i, "predicted to be in Cluster", pred)

print ("\nOriginal sample observations")
print (samples)
# Were my guesses on mark?
Sample point 0 predicted to be in Cluster 0
Sample point 1 predicted to be in Cluster 1
Sample point 2 predicted to be in Cluster 1

Original sample observations
   Fresh   Milk  Grocery  Frozen  Detergents_Paper  Delicatessen
0  44466  54259    55571    7782             24171          6465
1    542    899     1664     414                88           522
2  36847  43950    20170   36534               239         47943

Answer 9:

  • Sample 0: obs 47 (my guess: supermarket) is in Cluster 0, which I called "supermarket" - so my guess was correct.
  • Samples 1 and 2: obs 327 (my guess: convenience store) and obs 183 (my guess: high-end food market) are both in Cluster 1, which I called "fresh produce store" - so my guesses were almost correct.

I suppose it does make sense that the convenience store and the high-end food market have more in common with each other than with a supermarket.

Conclusion

In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.

Question 10

Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively.

  • How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?

Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?

Answer 10:

On A/B testing.

The treatment is the introduction of the cheaper evening delivery 3 days a week instead of the current morning delivery 5 days a week.

First, I would form a hypothesis something like this one:

  • H0: Customers in both clusters would react similarly to the treatment
  • H1: First cluster (supermarkets) would react less negatively than the second cluster (everyone else).

I would create test cells in each cluster by picking random samples from each cluster - say, at least 30 observations per cluster, for a total of about 10-20% of our customer base, to avoid disrupting the business too much.

Next, I would assign the treatment to the test cells, record the feedback and conduct statistical tests measuring the difference in reaction among the two clusters.

How we collect and quantify customers' reactions will determine how we measure the differences between each cluster's test and control cells, and also between the two test cells. If the measure is a continuous variable, we could conduct t-tests; if the measure is discrete, we will need a non-parametric test like the sign test.

We might also consider more detailed qualitative analysis based on the customer feedback.
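
The mechanics could look roughly like the sketch below. The test-cell construction reuses the clusters we already have (reduced_data, preds and v_clusters from the cells above); the reaction metric itself is hypothetical and would have to be collected during the trial period.

# Sketch of test-cell construction per cluster; the reaction metric below is hypothetical.
from scipy import stats

rng = np.random.RandomState(3)
for cluster_id in range(v_clusters):
    members = reduced_data.index[preds == cluster_id]
    # test cell: roughly 15% of the cluster, but at least 30 customers
    cell_size = min(len(members), max(30, int(0.15 * len(members))))
    test_cell = rng.choice(members.values, size=cell_size, replace=False)
    control_cell = members.difference(pd.Index(test_cell))
    print ("Cluster", cluster_id, "- test cell:", len(test_cell), "customers, control:", len(control_cell))

# After the trial, with a continuous per-customer reaction metric collected for each cell
# (hypothetical arrays test_reaction and control_reaction), a two-sample t-test would be:
# t_stat, p_value = stats.ttest_ind(test_reaction, control_reaction, equal_var=False)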

Question 11

Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.

  • How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?

Hint: A supervised learner could be used to train on the original customers. What would be the target variable?

Answer 11:

If we have the estimated spending values for all six features, we can predict their segments either with the unsupervised model by itself or by combining the unsupervised and supervised models.

To use the unsupervised model, we can log-transform the estimated values and then apply the fitted PCA transformation before plugging them into our K-means model to find out to which clusters (customer segments) these new clients belong.

If we want to get really fancy, we can use the cluster labels from our previously trained unsupervised model as the "engineered" target variable and build a model using a supervised learning algorithm such as a decision tree or logistic regression.

We would train it as usual by splitting our historical data into training and test sets. Once we have the model, we can score the data of the new customers with it and estimate the probability of each belonging to a given customer segment (see the sketch after this answer).

Either way, as long as we have reasonably accurate spending estimates for the new customers we should be able to predict their delivery preferences and cater to them accordingly.
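
A hedged sketch of the supervised route (the new customer's spending estimates below are hypothetical placeholders; the fitted pca, preds and reduced_data come from the cells above):

from sklearn.linear_model import LogisticRegression

# Train a simple classifier on the existing customers, using the K-means cluster labels as the target
segment_clf = LogisticRegression(random_state=3).fit(reduced_data, preds)

# Hypothetical new customer: estimated annual spending per category, in MU
new_customer = pd.DataFrame([[9000, 2000, 2500, 2000, 300, 700]], columns = data.keys())

# Apply the same preprocessing chain: natural log, then the fitted two-component PCA
new_reduced = pca.transform(np.log(new_customer))
print ("Predicted segment:    ", segment_clf.predict(new_reduced))
print ("Segment probabilities:", np.round(segment_clf.predict_proba(new_reduced), 3))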

Visualizing Underlying Distributions

At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.

Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points circled in the plot, which will identify their labeling.

In [37]:
# In[]: Visualizing Underlying Distributions 
# Channels are the business types! and there are only 2! LoL
vs.channel_results(reduced_data, outliers, pca_samples)
#numbered points are my samples
In [38]:
# ME: Based on my 2 clusters
vs.cluster_results(reduced_data, preds, centers, pca_samples)
#numbered points are the centroids
#So, my clusters are pretty good actually, but the data is just too messy. Could use more real life labeling. 

Question 12

  • How well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers?
  • Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution?
  • Would you consider these classifications as consistent with your previous definition of the customer segments?

Answer 12:

The K-means algorithm, with just two clusters, did very well: the clusters compare quite closely to the underlying distribution of the data. There are a few customers who fall into the "wrong" cluster because their spending behavior is unusual for their channel.

Yes, there are customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes'. The two clusters that K-means predicted overlap quite well with the original distribution shown by 'Channel'. Using silhouette scores helped us identify the correct number of clusters - 2 - and K-means did a good job of assigning data points to those clusters.

These two channels are not entirely consistent with what I initially guessed by eyeballing the data. I was actually surprised to find out that the original labels had only two segments: Retailer vs Hotel/Restaurant/Cafe. I guessed the Retailer segment correctly as a supermarket. I guessed incorrectly that the second segment was actually two distinct segments: convenience stores and high-end food stores.

Perhaps whoever assigned the original labels started with separate segments for Hotel/Restaurant/Cafe but then, based on cluster analysis, decided to aggregate them because they had more in common with each other than with the Retailer segment.

K-means created cluster 0, which overlaps quite well with the Retailer channel, and cluster 1, which overlaps with the Hotel/Restaurant/Cafe channel.

PC1 (derived mostly from Detergents_Paper, Grocery, Milk and Delicatessen) splits the data into two segments quite well, meaning that the variation along this dimension best explains the defining differences between the two segments.

Overall, I would say this cluster analysis would be very useful for the supplier in implementing new delivery strategies that improve profitability while meeting the needs of their customers.

The End.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.