HVT Scoring Cells with Layers using scoreLayeredHVT

Zubin Dowlaty, Srinivasan Sudarsanam, Somya Shambhawi, Vishwavani

Created Date: 2023-05-16
Modified Date: 2025-10-14

1. Abstract

The HVT package offers a suite of R functions designed to construct topology preserving maps for in-depth analysis of multivariate data. It is particularly well-suited for datasets with numerous records. The package organizes the typical workflow into several key stages:

  1. Data Compression: Long datasets are compressed using Hierarchical Vector Quantization (HVQ) to achieve the desired level of data reduction.

  2. Data Projection: Compressed cells are projected into one and two dimensions using dimensionality reduction algorithms, producing embeddings that preserve the original topology. This allows for intuitive visualization of complex data structures.

  3. Tessellation: Voronoi tessellation partitions the projected space into distinct cells, supporting hierarchical visualizations. Heatmaps and interactive plots facilitate exploration and insights into the underlying data patterns.

  4. Scoring: A test dataset is evaluated against previously generated maps, enabling placement of its records within the existing structure. Sequential application across multiple maps is supported when required.

2. Notebook Requirements

This chunk verifies that all packages required to run this vignette are installed, installs any that are missing, and attaches all of them to the session environment.

list.of.packages <- c("plyr", "dplyr", "reactable", "kableExtra", "geozoo",
                      "plotly", "purrr", "data.table", "gridExtra", "tidyr")

new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[, "Package"])]
if (length(new.packages))
  install.packages(new.packages, dependencies = TRUE, repos='https://cloud.r-project.org/')
invisible(lapply(list.of.packages, library, character.only = TRUE))

3. Example: HVT with the Torus dataset

In this section, we will see how we can use the package to visualize multidimensional data by projecting it to two dimensions using Sammon’s projection, and then use the resulting maps for scoring.

Data Understanding

First of all, let us see how to generate the torus data. We use the geozoo library for this purpose. Geo Zoo (short for Geometric Zoo) is a compilation of geometric objects ranging from three to ten dimensions. It contains regular, well-known objects, e.g. the cube and sphere, and some abstract objects, e.g. Boy’s surface, the torus and the hyper-torus.

Here, we will generate a 3D torus (a torus is a surface of revolution generated by revolving a circle in three-dimensional space one full revolution about an axis that is coplanar with the circle) with 12000 points.

Raw Torus Dataset

The torus dataset includes three columns: x, y and z.

Let’s explore the torus dataset containing 12000 points. For the sake of brevity, we display the first 6 rows.

set.seed(240)
# Here p represents dimension of object, n represents number of points
torus <- geozoo::torus(p = 3,n = 12000)
torus_df <- data.frame(torus$points)
colnames(torus_df) <- c("x","y","z")
torus_df <- torus_df %>% round(4)
displayTable(head(torus_df))
x y z
-2.6282 0.5656 -0.7253
-1.4179 -0.8903 0.9455
-1.0308 1.1066 -0.8731
1.8847 0.1895 0.9944
-1.9506 -2.2507 0.2071
-1.4824 0.9229 0.9672

Now let’s have a look at the structure of the torus dataset.

str(torus_df)
## 'data.frame':    12000 obs. of  3 variables:
##  $ x: num  -2.63 -1.42 -1.03 1.88 -1.95 ...
##  $ y: num  0.566 -0.89 1.107 0.19 -2.251 ...
##  $ z: num  -0.725 0.946 -0.873 0.994 0.207 ...

Data Distribution

This section displays four objects.

Variable Histograms: The histogram distribution of all the features in the dataset.

Box Plots: Box plots for all the features in the dataset. These plots display the median and interquartile range of each column at a panel level.

Correlation Matrix: This calculates the Pearson correlation, a bivariate measure of the linear correlation between two numeric columns. The output is shown as a matrix plot.

Summary EDA: The table provides descriptive statistics for all the features in the dataset.

The inbuilt edaPlots function displays the four objects mentioned above.

edaPlots(torus_df)
edaPlots(torus_df, output_type = "histogram")

edaPlots(torus_df, output_type = "boxplot")

edaPlots(torus_df, output_type = "correlation")

Train - Test Split

Let us split the torus dataset into train and test. We will randomly select 80% of the torus dataset as the training set and the remaining 20% as the test set.

smp_size <- floor(0.80 * nrow(torus_df))
set.seed(279)
train_ind <- sample(seq_len(nrow(torus_df)), size = smp_size)
torus_train <- torus_df[train_ind, ]
torus_test <- torus_df[-train_ind, ]

Training Dataset

Now, let’s have a look at the selected training dataset containing 9600 data points. For the sake of brevity, we display the first six rows.

rownames(torus_train) <- NULL
displayTable(head(torus_train))
x y z
1.7958 -0.4204 -0.9878
0.7115 -2.3528 -0.8889
1.9285 1.2034 0.9620
1.0175 0.0344 -0.1894
-0.2736 1.1298 -0.5464
1.8976 2.2391 0.3545

Now let’s have a look at the structure of the training dataset.

str(torus_train)
## 'data.frame':    9600 obs. of  3 variables:
##  $ x: num  1.796 0.712 1.929 1.018 -0.274 ...
##  $ y: num  -0.4204 -2.3528 1.2034 0.0344 1.1298 ...
##  $ z: num  -0.988 -0.889 0.962 -0.189 -0.546 ...

Data Distribution

edaPlots(torus_train)
edaPlots(torus_train, output_type = "histogram")

edaPlots(torus_train, output_type = "boxplot")

edaPlots(torus_train, output_type = "correlation")

Testing Dataset

Now, let’s have a look at the testing dataset containing 2400 data points. For the sake of brevity, we display the first six rows.

rownames(torus_test) <- NULL
displayTable(head(torus_test))
x y z
-2.6282 0.5656 -0.7253
2.7471 -0.9987 -0.3848
-2.4446 -1.6528 0.3097
-2.6487 -0.5745 0.7040
-0.2676 -1.0800 -0.4611
-1.1130 -0.6516 -0.7040

Now let’s have a look at the structure of the testing dataset.

str(torus_test)
## 'data.frame':    2400 obs. of  3 variables:
##  $ x: num  -2.628 2.747 -2.445 -2.649 -0.268 ...
##  $ y: num  0.566 -0.999 -1.653 -0.575 -1.08 ...
##  $ z: num  -0.725 -0.385 0.31 0.704 -0.461 ...

Data Distribution

edaPlots(torus_test)
edaPlots(torus_test, output_type = "histogram")

edaPlots(torus_test, output_type = "boxplot")

edaPlots(torus_test, output_type = "correlation")



4. Map A: Base Compressed Map

Let us try to visualize the compressed Map A from the diagram below.

Figure 1: Data Segregation with highlighted bounding box in red around compressed map A

This package can perform vector quantization using the following algorithms: k-means and k-medoids.

For more information on vector quantization, refer to the References section.

The trainHVT function constructs highly compressed hierarchical Voronoi tessellations. The raw data is first scaled, and the scaled data is supplied as input to the vector quantization algorithm. The algorithm compresses the dataset until a user-defined compression percentage is achieved, using the quantization error as a threshold: for a given compression percentage we obtain ‘n’ cells, and every cell formed has a quantization error below that threshold.
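To build intuition for how the quantization error acts as a threshold, here is a minimal sketch of the idea (not the package internals), assuming the L2 norm and the ‘max’ error metric used later in this vignette:

# Compress the training data with plain k-means, then compute each cell's
# quantization error as the max L2 distance from its points to its centroid.
set.seed(240)
km <- kmeans(torus_train, centers = 50, iter.max = 100)
quant_err_per_cell <- sapply(seq_len(50), function(k) {
  pts <- as.matrix(torus_train[km$cluster == k, , drop = FALSE])
  max(sqrt(rowSums(sweep(pts, 2, km$centers[k, ])^2)))
})
# Share of cells whose quantization error falls below the 0.1 threshold;
# trainHVT reports the analogous quantity as
# percentOfCellsBelowQuantizationErrorThreshold in its summary.
mean(quant_err_per_cell < 0.1)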

Let’s try to comprehend the trainHVT first before moving ahead.

trainHVT(
  data,
  min_compression_perc,
  n_cells,
  depth,
  quant.err,
  normalize,
  distance_metric = c("L1_Norm", "L2_Norm"),
  error_metric = c("mean", "max"),
  quant_method = c("kmeans", "kmedoids"),
  dim_reduction_method = c("sammon", "tsne", "umap"),
  scale_summary = NA,
  diagnose = FALSE,
  hvt_validation = FALSE,
  train_validation_split_ratio = 0.8,
  tsne_perplexity, tsne_theta, tsne_verbose,
  tsne_eta, tsne_max_iter,
  umap_n_neighbors, umap_min_dist
)

Each of the parameters of the trainHVT function is explained below:

The output of the trainHVT function (a list of 7 elements) is explained below, with an image attached for clear understanding.

NOTE: The attached image is a snapshot of the output list generated from map A and can be referred to later in this section.

Figure 2: The Output list generated by trainHVT function.

We will use the trainHVT function to compress our data while preserving the essential features of the dataset. Our goal is to achieve data compression of at least 80%. In situations where the compression ratio does not meet the desired target, we can adjust the model parameters, for example by relaxing the quantization error threshold or increasing the number of cells, and then rerun the trainHVT function.

This procedure is covered in the HVT vignette; please refer to it for more information.
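As one illustration of that tuning loop, the sketch below (a hypothetical example, not part of the package) keeps increasing n_cells until at least 80% of cells fall below the quantization error threshold, reading the per-cell Quant.Error column from the model summary discussed later in this section:

# Hypothetical tuning loop: grow n_cells until >= 80% of cells are below
# the quant.err threshold of 0.1, capping the search at 500 cells.
target <- 0.80
n_cells_try <- 100
repeat {
  fit <- trainHVT(torus_train, n_cells = n_cells_try, depth = 1,
                  quant.err = 0.1, normalize = FALSE,
                  distance_metric = "L2_Norm", error_metric = "max",
                  quant_method = "kmeans", dim_reduction_method = "sammon")
  pct_below <- mean(fit[[3]][["summary"]]$Quant.Error < 0.1)
  if (pct_below >= target || n_cells_try >= 500) break
  n_cells_try <- n_cells_try + 100
}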

Model Parameters

set.seed(240)
torus_mapA <- trainHVT(
  torus_train,
  n_cells = 500,
  depth = 1,
  quant.err = 0.1,
  normalize = FALSE,
  distance_metric = "L2_Norm",
  error_metric = "max",
  quant_method = "kmeans",
  dim_reduction_method = "sammon"
)

Let’s check the compression summary for torus.

summary(torus_mapA)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 500 448 0.9 n_cells: 500 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

With the n_cells parameter set to 500, 90% of the cells fall below the quantization error threshold. The next step involves performing data projection on the compressed data: the compressed cells are transformed and projected onto a lower-dimensional space to visualize and analyze the data in a more manageable form.

As per the manual, torus_mapA[[3]] gives detailed information about the hierarchical vector quantized data, and torus_mapA[[3]][['summary']] gives a tabular summary containing the number of points, the quantization error and the codebook.

The datatable displayed below is the summary from torus_mapA, showing Cell.ID, centroids and quantization error for each of the 500 cells. For the sake of brevity, we display only the first 20 rows.

displayTable(torus_mapA[[3]][['summary']])
Segment.Level Segment.Parent Segment.Child n Cell.ID Quant.Error x y z
1 1 1 25 133 0.0754 -0.9156 -0.7427 0.5679
1 1 2 19 145 0.0634 -0.2122 -1.1651 -0.5760
1 1 3 14 174 0.0406 -1.0559 -0.0056 0.3274
1 1 4 9 491 0.0539 2.1556 1.8618 0.5252
1 1 5 18 199 0.0778 -1.6661 1.5286 -0.9607
1 1 6 18 306 0.0820 1.7298 -1.1539 0.9915
1 1 7 24 85 0.0818 -2.3941 0.6431 -0.8653
1 1 8 15 164 0.0430 -0.9748 -0.2892 0.1846
1 1 9 16 458 0.0660 1.9507 1.3286 0.9284
1 1 10 22 413 0.0954 -0.0517 2.6839 -0.7177
1 1 11 11 495 0.0627 1.9150 2.1799 0.4220
1 1 12 13 30 0.0524 -1.7647 -1.7214 0.8813
1 1 13 10 317 0.0481 -0.6928 1.8956 0.9970
1 1 14 23 27 0.0867 -2.4987 -0.8635 -0.7539
1 1 15 17 358 0.0658 1.7985 -0.4216 -0.9854
1 1 16 16 209 0.0648 -1.4374 1.2731 -0.9923
1 1 17 10 479 0.0628 1.3518 2.5057 0.5270
1 1 18 12 295 0.0469 -0.0451 1.0310 0.2486
1 1 19 33 203 0.0691 0.6004 -1.2914 0.8125
1 1 20 16 465 0.0711 2.5842 0.6383 0.7389

Now let us understand what each column in the above table means:

The remaining columns contain the centroid coordinates for each cell. Together they are called the codebook, a collection of all centroids or codewords.
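For instance, the codebook can be pulled out of the summary table, and the projection idea can be illustrated with MASS::sammon (a sketch only; trainHVT performs this projection internally via dim_reduction_method, and MASS ships with base R):

# Extract the codebook (centroid coordinates) from the model summary.
codebook <- torus_mapA[[3]][["summary"]][, c("x", "y", "z")]
# Illustrative only: project the 500 centroids to two dimensions with
# Sammon's mapping, mirroring dim_reduction_method = "sammon".
proj <- MASS::sammon(dist(codebook), k = 2)
plot(proj$points, pch = 20, xlab = "Dim 1", ylab = "Dim 2")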

Now let’s try to understand the plotHVT function. Its parameters are explained in detail below:

plotHVT(hvt.results, line.width, color.vec,
        centroid.size, centroid.color,
        child.level, hmap.cols,
        cell_id, cell_id_size,
        cell_id_position, quant.error.hmap,
        separation_width, layer_opacity,
        dim_size, plot.type = '2Dhvt')

Let’s plot the Voronoi tessellation for layer 1 (map A).

plotHVT(torus_mapA, plot.type = "2Dhvt")
Figure 3: The Voronoi Tessellation for layer 1 (map A) shown for the 500 cells in the dataset ’torus’

4.1 Heatmaps

Now let’s plot the Voronoi Tessellation with the heatmap overlaid for all the features in the torus dataset for better visualization and interpretation of data patterns and distributions.

The heatmaps displayed below provide a visual representation of the spatial characteristics of the torus dataset, allowing us to observe patterns and trends in the distribution of each of the features (x, y, z). The green shades highlight regions with higher values, while the indigo shades indicate areas with the lowest values. By analyzing these heatmaps, we can gain insights into the variations in and relationships between these features within the torus dataset.

plotHVT(torus_mapA, hmap.cols = "x", plot.type = '2Dheatmap')
Figure 4: The Voronoi Tessellation with the heat map overlaid for variable ’x’ in the ’torus’ dataset

plotHVT(torus_mapA, hmap.cols = "y", plot.type = '2Dheatmap')
Figure 5: The Voronoi Tessellation with the heat map overlaid for variable ’y’ in the ’torus’ dataset

plotHVT(torus_mapA, hmap.cols = "z", plot.type = '2Dheatmap')
Figure 6: The Voronoi Tessellation with the heat map overlaid for variable ’z’ in the ’torus’ dataset


5. Map B: Compressed Novelty Map

Let us try to visualize the Map B from the diagram below.

Figure 7: Data Segregation with highlighted bounding box in red around map B

In this section, we will manually identify the novelty cells from the plotted torus_mapA and store them in the identified_Novelty_cells variable.

Note: To manually select the novelty cells from map A, one can enhance its interactivity by adding plotly elements to the code. This transforms map A into an interactive plot: hovering over a cell’s centroid displays a tag containing segment child information, so users can explore the map and selectively choose the novelty cells they wish to consider. An image is attached for reference, followed by a possible code sketch.

Figure 8: Manually selecting novelty cells
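One possible way to add that interactivity, assuming the object returned by plotHVT is ggplot-based (this depends on the package version, so treat it as a sketch):

# Wrap the static tessellation in plotly so hovering reveals cell details.
library(plotly)
ggplotly(plotHVT(torus_mapA, plot.type = "2Dhvt"))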

The removeNovelty function removes the identified novelty cell(s) from the training dataset (containing 9600 datapoints) and stores those records separately.

It takes as input the cell numbers (Segment.Child) of the manually identified novelty cell(s) and the compressed HVT map (torus_mapA) with 500 cells. It returns a list of two items: the data with novelty and the data without novelty.

NOTE: As we are using the torus dataset here, the identified novelty cells are given for demo purposes.

identified_Novelty_cells <- c(273, 44, 61, 486, 185, 425)   # as an example
output_list <- removeNovelty(identified_Novelty_cells, torus_mapA)
data_with_novelty <- output_list[[1]]
data_without_novelty <- output_list[[2]]

Let’s have a look at the data with novelty (containing 115 records).

novelty_data <- data_with_novelty
novelty_data$Row.No <- row.names(novelty_data)
novelty_data <- novelty_data %>% dplyr::select("Row.No","Cell.ID","Cell.Number","x","y","z")
colnames(novelty_data) <- c("Row.No","Cell.ID","Segment.Child","x","y","z")
displayTable(novelty_data, limit = 115)
Row.No Cell.ID Segment.Child x y z
1 424 44 2.7839 -1.0776 -0.1712
2 424 44 2.8089 -1.0384 0.1027
3 424 44 2.8404 -0.9040 0.1952
4 424 44 2.7834 -1.0866 0.1544
5 424 44 2.8208 -0.9473 0.2193
6 424 44 2.7804 -1.0582 -0.2226
7 424 44 2.8795 -0.8408 0.0226
8 424 44 2.7738 -1.1262 -0.1121
9 424 44 2.7538 -1.1860 -0.0569
10 424 44 2.8513 -0.9218 -0.0828
11 424 44 2.8754 -0.8550 0.0168
12 424 44 2.8450 -0.8996 0.1792
13 424 44 2.8239 -0.9397 0.2172
14 424 44 2.7871 -1.0527 -0.2026
15 424 44 2.7875 -1.1082 -0.0220
16 424 44 2.7661 -1.1507 0.0905
17 34 61 -0.3149 -2.9384 0.2958
18 34 61 -0.3078 -2.9675 0.1812
19 34 61 -0.1469 -2.9921 0.0927
20 34 61 -0.3766 -2.9762 0.0092
21 34 61 -0.0344 -2.9993 0.0303
22 34 61 -0.2807 -2.9525 0.2592
23 34 61 -0.3967 -2.9725 0.0484
24 34 61 -0.2519 -2.9034 0.4049
25 34 61 -0.3169 -2.9822 0.0443
26 34 61 -0.1057 -2.9757 0.2107
27 34 61 0.0958 -2.9784 0.1994
28 34 61 -0.3598 -2.9046 0.3757
29 34 61 -0.5300 -2.9485 0.0921
30 34 61 -0.2574 -2.9769 0.1544
31 34 61 -0.4312 -2.9677 0.0486
32 34 61 0.0796 -2.9885 0.1440
33 34 61 -0.2803 -2.9049 0.3957
34 34 61 -0.4258 -2.9397 0.2417
35 34 61 -0.3847 -2.9574 0.1871
36 34 61 -0.1814 -2.9475 0.3027
37 34 61 -0.4657 -2.9341 0.2396
38 34 61 -0.2817 -2.9829 0.0871
39 34 61 -0.3100 -2.9449 0.2759
40 34 61 -0.0367 -2.9262 0.3764
41 34 61 -0.0928 -2.9950 0.0848
42 75 185 -2.8203 0.9904 -0.1467
43 75 185 -2.8178 1.0260 0.0499
44 75 185 -2.7501 1.1484 -0.1977
45 75 185 -2.8307 0.8870 -0.2570
46 75 185 -2.9216 0.6631 -0.0905
47 75 185 -2.7794 1.1095 -0.1211
48 75 185 -2.8862 0.7563 -0.1801
49 75 185 -2.7889 1.0811 -0.1333
50 75 185 -2.8045 1.0304 0.1555
51 75 185 -2.8893 0.7432 -0.1815
52 75 185 -2.8085 1.0402 -0.1003
53 75 185 -2.7684 1.1089 -0.1877
54 75 185 -2.8008 1.0713 -0.0508
55 75 185 -2.8734 0.8593 -0.0420
56 75 185 -2.8926 0.7896 0.0560
57 75 185 -2.8014 1.0351 0.1638
58 75 185 -2.8382 0.9661 -0.0614
59 75 185 -2.7733 1.1066 -0.1675
60 75 185 -2.8765 0.8519 -0.0099
61 75 185 -2.9258 0.6607 -0.0332
62 75 185 -2.8318 0.9591 0.1427
63 439 273 2.9450 -0.5316 0.1218
64 439 273 2.9041 -0.7280 0.1098
65 439 273 2.9111 -0.6332 0.2030
66 439 273 2.9095 -0.6207 0.2223
67 439 273 2.8605 -0.7913 0.2510
68 439 273 2.9184 -0.6856 -0.0661
69 439 273 2.8971 -0.7568 0.1061
70 439 273 2.8758 -0.6541 0.3144
71 439 273 2.9496 -0.4882 0.1430
72 439 273 2.9188 -0.6454 0.1457
73 439 273 2.9351 -0.5220 0.1932
74 439 273 2.8530 -0.8358 0.2313
75 439 273 2.8969 -0.5663 0.3069
76 439 273 2.8809 -0.8085 0.1250
77 439 273 2.8340 -0.8588 0.2755
78 460 425 0.5660 2.9195 0.2270
79 460 425 0.4825 2.9331 -0.2327
80 460 425 0.2922 2.9667 0.1938
81 460 425 0.7219 2.8642 0.3005
82 460 425 0.5100 2.9548 0.0551
83 460 425 0.5103 2.9319 0.2180
84 460 425 0.6264 2.9337 -0.0202
85 460 425 0.4241 2.9696 -0.0208
86 460 425 0.4568 2.9565 -0.1292
87 460 425 0.4127 2.9640 0.1212
88 460 425 0.2388 2.9833 0.1195
89 460 425 0.4408 2.9674 0.0030
90 460 425 0.5544 2.9221 0.2254
91 460 425 0.3024 2.9847 0.0031
92 460 425 0.3711 2.9462 0.2453
93 460 425 0.4730 2.9532 0.1347
94 19 486 -0.9027 -2.8262 0.2552
95 19 486 -0.7470 -2.9053 0.0186
96 19 486 -0.9246 -2.8381 0.1728
97 19 486 -0.9065 -2.8593 0.0313
98 19 486 -0.7323 -2.9085 -0.0371
99 19 486 -1.0349 -2.7844 0.2410
100 19 486 -1.1207 -2.7825 0.0230
101 19 486 -1.0549 -2.7973 0.1442
102 19 486 -0.8786 -2.8665 -0.0609
103 19 486 -0.9398 -2.7706 0.3783
104 19 486 -0.8161 -2.8680 0.1897
105 19 486 -1.0239 -2.8185 -0.0510
106 19 486 -0.9253 -2.7881 0.3475
107 19 486 -0.9820 -2.8178 0.1782
108 19 486 -0.8810 -2.8624 0.1005
109 19 486 -0.7873 -2.8533 0.2804
110 19 486 -1.0393 -2.7889 0.2167
111 19 486 -0.5913 -2.9309 0.1414
112 19 486 -0.9948 -2.8299 0.0252
113 19 486 -0.7686 -2.8947 -0.1001
114 19 486 -0.9815 -2.8025 0.2455
115 19 486 -0.7111 -2.8678 0.2977

5.1 Voronoi Tessellation with highlighted novelty cell

The plotNovelCells function plots the Voronoi tessellation using the compressed HVT map (torus_mapA) containing 500 cells and highlights the identified novelty cells, i.e. the 6 cells (containing 115 records), in red on the map.

plotNovelCells(identified_Novelty_cells, torus_mapA, line.width = c(0.4), centroid.size = 0.01)
Figure 9: The Voronoi Tessellation constructed using the compressed HVT map (map A) with the novelty cell(s) highlighted in red

We pass the dataframe with the novelty records (115 records) to the trainHVT function, along with the model parameters mentioned below, to generate map B (layer 2).

Model Parameters

colnames(data_with_novelty) <- c("Cell.ID","Segment.Child","x","y","z")
data_with_novelty <- data_with_novelty[,-1:-2]
mapA_scale_summary <- torus_mapA[[3]]$scale_summary
torus_mapB <- trainHVT(data_with_novelty,
                  n_cells = 11,   
                  depth = 1,
                  quant.err = 0.1,
                  normalize = FALSE,
                  distance_metric = "L2_Norm",
                  error_metric = "max",
                  quant_method = "kmeans",
                  dim_reduction_method = "sammon")
summary(torus_mapB)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 11 10 0.91 n_cells: 11 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

As can be seen from the table above, 91% of the cells have hit the quantization error threshold. Since we attained the desired compression percentage, we will not subdivide the cells further.

The datatable displayed below is the summary from map B (layer 2) showing Cell.ID, Centroids and Quantization Error for each of the 11 cells.

displayTable(torus_mapB[[3]][['summary']])
Segment.Level Segment.Parent Segment.Child n Cell.ID Quant.Error x y z
1 1 1 6 6 0.0497 -0.0341 -2.9882 0.1270
1 1 2 7 2 0.0672 -0.9392 -2.8448 -0.0046
1 1 3 9 10 0.0970 0.4600 2.9390 0.1984
1 1 4 7 11 0.0621 0.4633 2.9571 -0.0488
1 1 5 15 8 0.0820 2.8993 -0.6751 0.1789
1 1 6 21 9 0.1010 -2.8324 0.9469 -0.0663
1 1 7 11 5 0.0514 -0.3795 -2.9641 0.1212
1 1 8 16 7 0.0831 2.8101 -1.0120 0.0205
1 1 9 8 4 0.0730 -0.2520 -2.9278 0.3358
1 1 10 9 1 0.0481 -0.9761 -2.8015 0.2422
1 1 11 6 3 0.0622 -0.7308 -2.8890 0.1484


6. Map C: Compressed Map without Novelty

Let us try to visualize the compressed Map C from the diagram below.

Figure 10: Data Segregation with highlighted bounding box in red around compressed map C

6.1 Iteration 1

With the novelties removed, we construct another hierarchical Voronoi tessellation, map C (layer 2), on the data without novelty (containing 9485 records) using the model parameters mentioned below.

Model Parameters

torus_mapC <- trainHVT(data_without_novelty,
                  n_cells = 10,
                  depth = 2,
                  quant.err = 0.1,
                  normalize = FALSE,
                  distance_metric = "L2_Norm",
                  error_metric = "max",
                  quant_method = "kmeans",
                  dim_reduction_method = "sammon")

Now let’s check the compression summary for the HVT (torus_mapC) where n_cells was set to 10. The table below shows the number of cells, the number of cells having quantization error below the threshold, and the percentage of cells having quantization error below the threshold for each level.

summary(torus_mapC)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 10 0 0 n_cells: 10 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans
2 100 0 0 n_cells: 10 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

As can be seen from the table above, 0% of the cells have hit the quantization error threshold in level 1 and 0% in level 2.

6.2 Iteration 2

Since we are yet to achieve at least 80% compression at depth 2, let’s try to compress again using the model parameters mentioned below and the data without novelty (containing 9485 records).

Model Parameters

torus_mapC <- trainHVT(data_without_novelty,
                  n_cells = 46,    
                  depth = 2,
                  quant.err = 0.1,
                  normalize = FALSE,
                  distance_metric = "L2_Norm",
                  error_metric = "max",
                  quant_method = "kmeans",
                  dim_reduction_method = "sammon")
summary(torus_mapC)
segmentLevel noOfCells noOfCellsBelowQuantizationError percentOfCellsBelowQuantizationErrorThreshold parameters
1 46 0 0 n_cells: 46 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans
2 2116 1748 0.83 n_cells: 46 quant.err: 0.1 distance_metric: L2_Norm error_metric: max quant_method: kmeans

As can be seen from the table above, 0% of the cells have hit the quantization error threshold in level 1 and 83% in level 2.

The datatable displayed below is the summary from map C (layer 2), showing Cell.ID, centroids and quantization error.

displayTable(data = torus_mapC[[3]][['summary']])
Segment.Level Segment.Parent Segment.Child n Cell.ID Quant.Error x y z
1 1 1 183 567 0.3054 -1.5739 2.3989 -0.0019
1 1 2 236 355 0.3085 -1.5176 -1.2760 -0.9229
1 1 3 162 88 0.2925 -1.7876 -2.2656 0.0079
1 1 4 167 1865 0.2886 2.5078 -1.4110 -0.1993
1 1 5 183 874 0.3030 0.4550 -2.6700 -0.5138
1 1 6 251 1120 0.2282 -0.1585 1.0003 -0.0305
1 1 7 194 1576 0.2510 1.3877 -0.0061 0.7561
1 1 8 196 1208 0.3194 -0.4306 2.5131 -0.7020
1 1 9 189 2042 0.2972 1.7211 2.2262 0.3548
1 1 10 273 609 0.2847 -1.1913 -0.1942 0.6043
1 1 11 248 1320 0.2812 0.2437 1.4647 0.8136
1 1 12 257 1537 0.2358 1.2573 0.0921 -0.6336
1 1 13 187 602 0.3037 -0.1334 -2.4336 0.7936
1 1 14 207 331 0.3187 -2.2996 1.3756 -0.5613
1 1 15 154 2118 0.2917 2.6593 1.1769 0.0397
1 1 16 288 1465 0.2804 0.5899 1.2696 -0.7664
1 1 17 148 2003 0.2886 2.7567 -0.1572 0.4931
1 1 18 269 886 0.2929 -0.9237 1.3992 -0.8903
1 1 19 170 153 0.3206 -2.5259 -0.5698 -0.7073
1 1 20 243 1251 0.2330 0.8259 -0.6708 0.3379

Let’s plot the Voronoi tessellation for layer 2 (map C).

plotHVT(torus_mapC,
        line.width = c(0.2,0.1), 
        color.vec = c("navyblue","steelblue"),
        centroid.size = 0.1,
        child.level = 2, 
        plot.type = '2Dhvt')
Figure 11: The Voronoi Tessellation for layer 2 (map C) shown for the 2116 cells in the dataset ’torus’ at level 2

6.3 Heatmaps

Now let’s plot all the features for each cell at level two as a heatmap for better visualization.

The heatmaps displayed below provide a visual representation of the spatial characteristics of the torus dataset, allowing us to observe patterns and trends in the distribution of each of the features (x, y, z). The green shades highlight regions with higher values, while the indigo shades indicate areas with the lowest values. By analyzing these heatmaps, we can gain insights into the variations in and relationships between these features within the torus dataset.

  plotHVT(
  torus_mapC,
  child.level = 2,
  hmap.cols = "x",
  line.width = c(0.2,0.1),
  color.vec = c("navyblue","steelblue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 
Figure 12: The Voronoi Tessellation with the heat map overlaid for feature x in the ’torus’ dataset

  plotHVT(
  torus_mapC,
  child.level = 2,
  hmap.cols = "y",
  line.width = c(0.2,0.1),
  color.vec = c("navyblue","steelblue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 
Figure 13: The Voronoi Tessellation with the heat map overlaid for feature y in the ’torus’ dataset

  plotHVT(
  torus_mapC,
  child.level = 2,
  hmap.cols = "z",
  line.width = c(0.2,0.1),
  color.vec = c("navyblue","steelblue"),
  centroid.size = 0.1,
  plot.type = '2Dheatmap') 
Figure 14: The Voronoi Tessellation with the heat map overlaid for feature z in the ’torus’ dataset

We now have the set of maps (map A, map B and map C) that will be used in scoring to determine which map and cell each test record is assigned to.

7. Scoring

Now that we have built the model, let us score our testing dataset (containing 2400 data points) to determine which cell and which layer each point belongs to.

The scoreLayeredHVT function scores the testing dataset using the trained set of maps. It takes as input a testing dataset and the set of maps (map A, map B, map C).

Now, let us understand the scoreLayeredHVT function.

scoreLayeredHVT(data,
                hvt_mapA,
                hvt_mapB,
                hvt_mapC,
                child.level = 1,
                mad.threshold = 0.2,
                normalize = TRUE,
                distance_metric="L1_Norm",
                error_metric="max",
                yVar)

Each of the parameters of the scoreLayeredHVT function is explained below:

Before that, note that scoreLayeredHVT internally uses the scoreHVT function to score the test data against the results of trainHVT, which is referred to as a ‘map’ here. scoreLayeredHVT therefore scores the test dataset against maps A, B and C and then processes and merges the outputs into the final result, so the arguments passed to scoreHVT are important for smooth execution.

When normalize is set to TRUE, the scoreHVT function has an inbuilt feature to standardize the testing dataset based on the mean and standard deviation of the training dataset from the trainHVT results.
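Conceptually, that standardization is equivalent to the following sketch (illustrative only; the package applies it internally using the stored scale summary):

# Standardize the test set with the training set's mean and standard
# deviation, mirroring what normalize = TRUE does inside scoreHVT.
train_means <- colMeans(torus_train)
train_sds   <- apply(torus_train, 2, sd)
scaled_test <- scale(torus_test, center = train_means, scale = train_sds)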

The function scores based on the HVT maps (map A, map B and map C) constructed using the trainHVT function. For each test record, the function assigns that record to Layer 1 or Layer 2. Layer 1 contains the cell IDs from map A, and Layer 2 contains the cell IDs from map B (the novelty map) and map C (the map without novelty).

Scoring Algorithm

The scoring algorithm recursively calculates the distance between each point in the testing dataset and the cell centroids at each level. The following steps explain the scoring method for a single point in the test dataset (a minimal sketch follows the note below):

  1. Calculate the distance between the point and the centroid of all the cells in the first level.
  2. Find the cell whose centroid has minimum distance to the point.
  3. Check if the cell drills down further to form more cells.
  4. If it doesn’t, return the path. Otherwise, repeat steps 1 to 3 until we reach a level at which the cell doesn’t drill down further.

Note: The scoring algorithm will not work if any of the variables used to perform quantization are missing. Do not remove any features from the testing dataset.
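To make steps 1 and 2 concrete, here is a minimal sketch of the nearest-centroid assignment for a single test point, assuming an L2 norm and using map A’s codebook as the candidate centroids:

# Distance from one point to every cell centroid; the winning cell is the
# one at minimum distance.
nearest_cell <- function(point, centroids) {
  d <- sqrt(rowSums(sweep(centroids, 2, point)^2))
  which.min(d)
}
centroids_A <- as.matrix(torus_mapA[[3]][["summary"]][, c("x", "y", "z")])
nearest_cell(as.numeric(torus_test[1, ]), centroids_A)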

validation_data <- torus_test
new_score <- scoreLayeredHVT(
    data = validation_data,
    hvt_mapA = torus_mapA,
    hvt_mapB = torus_mapB,
    hvt_mapC = torus_mapC,
    normalize = FALSE)

Let’s see which cell and layer each point belongs to and check the Mean Absolute Difference for each of the 2400 records. For brevity, the first 20 rows of the scored output are shown.

summary(new_score)
Row_Number Row.Number act_x act_y act_z Layer1.Cell.ID Layer2.Cell.ID pred_x pred_y pred_z diff
1 1 -2.6282 0.5656 -0.7253 A85 C153 -2.5258976 -0.5697529 -0.7072982 0.4185524
2 2 2.7471 -0.9987 -0.3848 A425 C1865 2.5077850 -1.4109928 -0.1993299 0.2790259
3 3 -2.4446 -1.6528 0.3097 A3 C64 -2.4619927 -1.3983722 0.3219391 0.0946865
4 4 -2.6487 -0.5745 0.7040 A41 C287 -2.0844505 -0.4682857 0.9330797 0.2998478
5 5 -0.2676 -1.0800 -0.4611 A157 C815 -0.1826176 -1.4024576 -0.7633130 0.2365510
6 6 -1.1130 -0.6516 -0.7040 A126 C695 -0.8306652 -0.6299318 -0.2497557 0.2527491
7 7 2.0288 1.9519 0.5790 A491 C2042 1.7210566 2.2261741 0.3547593 0.2687527
8 8 -2.4799 1.6863 -0.0470 A140 C331 -2.2995517 1.3755594 -0.5613184 0.3351357
9 9 -0.4105 -1.1610 -0.6398 A119 C815 -0.1826176 -1.4024576 -0.7633130 0.1976176
10 10 -0.2545 -1.6160 -0.9314 A83 C815 -0.1826176 -1.4024576 -0.7633130 0.1511706
11 11 1.1500 0.3945 -0.6205 A352 C1537 1.2572988 0.0921132 -0.6335630 0.1409162
12 12 -1.2557 -1.1369 0.9520 A67 C436 -1.1822271 -1.5123679 0.9261489 0.1582640
13 13 -0.5449 -2.6892 -0.6684 A43 C352 -0.8252530 -2.4340675 -0.7088662 0.1919839
14 14 2.9093 0.7222 -0.0697 A478 C2118 2.6593221 1.1768851 0.0397240 0.2713623
15 15 2.3205 1.2520 -0.7711 A476 C1908 1.8601725 1.2505847 -0.8798926 0.1901785
16 16 1.4772 -0.5194 -0.9008 A298 C1646 1.8050471 -0.8284412 -0.9447865 0.2269582
17 17 -1.3176 -2.6541 0.2690 A11 C88 -1.7876407 -2.2655926 0.0079136 0.3732115
18 18 1.0687 0.1211 -0.3812 A316 C1537 1.2572988 0.0921132 -0.6335630 0.1566495
19 19 -0.9632 0.3283 -0.1866 A195 C807 -0.9247605 0.4324310 -0.1556399 0.0578435
20 20 2.5616 0.4634 0.7976 A465 C1891 2.1489362 0.5766913 0.9229370 0.2170973
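Beyond eyeballing the table, a quick tally of where the records landed can be computed from the scored output (a sketch that assumes the actual_predictedTable element carries the Layer2.Cell.ID column shown above):

# Count how many test records fell in each layer-2 map: a "B" prefix marks
# the novelty map and a "C" prefix the map without novelty.
scored <- new_score[["actual_predictedTable"]]
table(substr(as.character(scored$Layer2.Cell.ID), 1, 1))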
hist(new_score[["actual_predictedTable"]]$diff, 
     breaks = 30, col = "blue", main = "Mean Absolute Difference",
     xlab = "Difference")
Figure 16: Mean Absolute Difference


8. Executive Summary

9. References

  1. Topology Preserving Maps

  2. Vector Quantization

  3. K-means

  4. Sammon’s Projection

  5. Voronoi Tessellations