Last updated: 2020-12-04


Knit directory: fa_sim_cal/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 9506fb0. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    .tresorit/
    Ignored:    renv/library/

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/idea.Rmd) and HTML (docs/idea.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
Rmd 9506fb0 Ross Gayler 2020-12-04 wflow_publish(“analysis/idea.Rmd”)
Rmd 9201161 Ross Gayler 2020-12-04 Just testing something
html 9201161 Ross Gayler 2020-12-04 Just testing something
html 9f40051 Ross Gayler 2020-12-02 Build site.
Rmd 0379680 Ross Gayler 2020-12-02 End of day
html 5f37c79 Ross Gayler 2020-11-30 Build site.
html c2e37f3 Ross Gayler 2020-11-30 Initial ndex.Rmd
Rmd 2a722d0 Ross Gayler 2020-11-29 end of day
html 2a722d0 Ross Gayler 2020-11-29 end of day

This document explains the central idea behind the project.

Problem setting

The problem concerns entity resolution - determining whether multiple records, each derived from some entity, refer to the same entity. For concreteness, we consider a database lookup use case. That is, given a query record (corresponding to some entity) and a dictionary of records (corresponding to unique entities) we want to find the dictionary record (if any) that corresponds to the same entity as the query record.

We introduce some more formal notation before considering the implications of the problem setting.

There is a universe of entities, \(e \in E\). For example, the entities might be persons. Each entity has a unique identity, \(id(e)\), that is not accessible to us.

There is a dictionary (database) of records, \(d \in D\), each corresponding to an entity. Overloading the meaning of \(id()\), we denote the identity of the entity corresponding to a dictionary record as \(id(d)\). This identity of the entity corresponding to a dictionary record is not available to us.

We assume that the dictionary records correspond to unique entities, \(id(d_i) = id(d_j) \iff i = j\). In general, the dictionary \(D\) corresponds only to a subset of the universe of entities \(E\).

There is a set of query records, \(q \in Q\). Once again, overloading the meaning of \(id()\), we denote the identity of the entity corresponding to a query record as \(id(q)\). The set of queries \(Q\) is assumed to be representative of the queries that will be encountered in practice.

Each dictionary record is assumed to be the result of applying some observation process to an entity, \(d_i = obs_d(e_i)\). Likewise, each query record is assumed to be the result of applying some observation process to an entity, \(q_j = obs_q(e_j)\). The observations are usually taken to be tuples of values, e.g. \((name, address, age)\). This is not strictly necessary, but is convenient and will be adopted here. Note that the dictionary and query observation functions are different and may have different codomains. For convenience, we only consider the case where both observation functions have the same codomain.
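
As a purely illustrative sketch (not part of the formal development), the dictionary and query records can be thought of as rows of data frames holding the observed tuples, with a hidden id column standing in for the inaccessible \(id()\). The column names and values below are invented for the example; the identity column is retained only so that a simulated lookup can be evaluated.

```r
# Illustrative only: records as rows of data frames holding the observed tuples.
# The `id` column stands in for the inaccessible identity id(e); in a real
# application it would not be available, so it is kept here only for evaluation.
dictionary <- data.frame(
  id      = c("e1", "e2", "e3"),   # hidden identity (evaluation only)
  name    = c("Jane Doe", "John Smith", "Jan Doe"),
  address = c("1 High St", "2 Low Rd", "1 High St"),
  age     = c(34, 57, 34),
  stringsAsFactors = FALSE
)

query <- data.frame(
  id      = "e1",                  # hidden identity (evaluation only)
  name    = "Jane Doe",            # observed, possibly with error
  address = "1 High Street",
  age     = 34,
  stringsAsFactors = FALSE
)
```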

If the identities were accessible to us we could define the lookup function \(lookup(q, D) = \{ d \in D : id(d) = id(q) \}\), which is guaranteed to return either a singleton set or the empty set. The key component of this definition is the identity predicate (\(id(d) = id(q)\)). Conceptually, this is evaluated for every \(d \in D\) to find the required \(d\). Given the assumption that the \(d \in D\) are unique, the predicate will be true for either exactly one \(d\), or none in the case that the sought entity is not represented in the dictionary \(D\).

Unfortunately, the identities are not accessible to us to use in \(lookup()\). Instead, we are forced to define the lookup function in terms of the observation values, which are not guaranteed to uniquely identify the entities. As was the case with the identity predicate above, this can be defined in terms of some function \(f(q, d)\) of a query record and a dictionary record. The interesting characteristics of this problem arise from attempting to use the observation values as a proxy for identity.
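
A minimal sketch of the two flavours of lookup, using the toy data frames above. The oracle version uses the hidden id column as a stand-in for \(id()\); the observable version substitutes a compatibility function \(f(q, d)\) and an acceptance threshold, both of which are hypothetical placeholders rather than part of the formal development.

```r
# Oracle lookup: returns the dictionary rows whose (inaccessible) identity
# matches the query's identity -- guaranteed to be a singleton or the empty set.
lookup_oracle <- function(q, D) {
  D[D$id == q$id, , drop = FALSE]
}

# Observable lookup: scores every dictionary record with a compatibility
# function f(q, d) and returns the best-scoring record if it clears a
# threshold. Both f and the threshold value are illustrative placeholders.
lookup_compat <- function(q, D, f, threshold = 0.9) {
  scores <- vapply(seq_len(nrow(D)), function(i) f(q, D[i, ]), numeric(1))
  if (max(scores) >= threshold) D[which.max(scores), , drop = FALSE] else D[0, ]
}
```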

Note that the lookup process can be described with respect to a single query \(q\). We aim to define \(lookup()\) to be as accurate as possible for every specific query \(q\). The set of queries \(Q\) is only relevant in so far as we will summarise the performance of \(lookup()\) over \(Q\) in order to make claims about the expected performance over queries.

Probability of identity

Given that we don’t have access to identity, the general approach taken in this field is to assess the compatibility of each dictionary record with the query record, where \(compat(q_i, d_j)\) is defined in terms of the observed values \(q_i\) and \(d_j\). (Remember, \(q_i = obs_q(e_i)\) and \(d_j = obs_d(e_j)\).) This quantifies the extent to which the query and dictionary values are compatible with being observed from the same entity. Ideally, we want the compatibility value to be high (say, 1) when the query and dictionary records refer to the same entity and to be low (say, 0) when they refer to different entities.

To illustrate the point made earlier that the codomains of \(obs_d()\) and \(obs_q()\) do not have to be identical and to emphasise the nature of \(compat()\), consider a problem domain where the entities are steel spheres. \(obs_d()\) measures the diameter of a sphere, and \(obs_q()\) measures the mass of a sphere. If the density of steel is fixed it is clearly possible to tell when a dictionary and query record refer to the same sphere.
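
A sketch of this illustration, assuming a fixed density for steel of roughly 7.85 g/cm\(^3\). The implied mass of a dictionary sphere is computed from its measured diameter and compared with the observed query mass; the particular compatibility function (one minus the relative difference in mass) is just one arbitrary choice.

```r
# Steel-sphere illustration: obs_d() measures diameter (cm), obs_q() measures
# mass (g). With a fixed density the two observations are directly comparable.
steel_density <- 7.85  # g/cm^3, assumed fixed for this illustration

# Mass implied by a measured diameter under the fixed-density assumption.
implied_mass <- function(diameter_cm) {
  steel_density * (4 / 3) * pi * (diameter_cm / 2)^3
}

# One possible compatibility: 1 when the implied and observed masses agree,
# decreasing towards 0 as the relative difference grows.
compat_sphere <- function(q_mass, d_diameter) {
  1 - abs(implied_mass(d_diameter) - q_mass) / q_mass
}

compat_sphere(q_mass = 32.9, d_diameter = 2)  # close to 1: plausibly the same sphere
```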

Now consider the case where the density of steel varies across spheres, and/or some spheres are of similar size and there is measurement error. The compatibility value may be high for some combinations of query and dictionary records that refer to different spheres, or the compatibility value may be low for query and dictionary records that refer to the same spheres. That is, the lookup process is now fallible and the result of the lookup process is uncertain.

This is equivalent to stating that the compatibility value can no longer be interpreted as a crisp indicator of identity. Now the best we can do is require that the compatibility be able to take intermediate values, while being higher when the query and dictionary records are likely to be referring to the same entity. Probability is the natural language for quantifying our uncertainty about the identity of the records. We want to know \(P(id(q) = id(d) \mid compat(q, d))\). The compatibility and probability are two distinct quantities, because although the compatibility carries information about the identity of the records it is not constrained to obey the rules of probability. Translating from compatibility to probability makes that information more useful because we can use the laws of probability to draw inferences.

Similarity

Many entity resolution problems have query and dictionary observation functions defined with the same codomain, such as a tuple of strings. For example, the entities might be persons and the attributes (strings) might be: given_name, family_name, address. Some string similarity metric is used as the compatibility function for each attribute, comparing the corresponding attribute values from the query and dictionary records. The attribute similarities are then combined somehow to yield a record similarity value, which is used as the compatibility.
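
A sketch of this attribute-wise approach, assuming the stringdist package and its Jaro-Winkler similarity, with a plain unweighted mean to combine the attribute similarities. Both the choice of metric and the combination rule are arbitrary illustrative choices, not a recommendation.

```r
library(stringdist)  # stringsim() returns a similarity in [0, 1]

# Attribute-wise string similarity between a query record and a dictionary
# record, combined by an unweighted mean. The metric ("jw" = Jaro-Winkler)
# and the combination rule are illustrative choices only.
record_similarity <- function(q, d,
                              attrs = c("given_name", "family_name", "address")) {
  attr_sims <- vapply(
    attrs,
    function(a) stringsim(q[[a]], d[[a]], method = "jw"),
    numeric(1)
  )
  mean(attr_sims)
}
```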

If similarity were defined in terms of exact equality between strings, records with identical values would be treated as compatible and records with any difference would be treated as incompatible. This would be less than ideal if there were measurement error in the observations (e.g. typographical and transcription errors).

The justification for using string similarity metrics is that they accommodate such transcription errors. However, this is only a heuristic justification. There is no guarantee that the most likely transcription errors yield the highest similarity metric values. Each string similarity metric implicitly embodies an error generation mechanism, and there is no guarantee that any of them precisely corresponds to the actual observation-generating process.
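
A toy contrast of the two notions (again assuming the stringdist package): exact equality treats a single transposition as complete incompatibility, whereas a string similarity metric still scores the pair as highly similar.

```r
library(stringdist)

"Jane Doe" == "Jnae Doe"                          # FALSE: exact equality is all-or-nothing
stringsim("Jane Doe", "Jnae Doe", method = "jw")  # close to 1: a transposition barely lowers it
```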

Calibrated similarity

We believe that the probability of identity (\(id(q) = id(d)\)) is more useful than the similarity (or, more generally, compatibility). This is because the probability is already in the form most useful for decision-making, whereas the decision-making implications of some arbitrary similarity metric are unknown.

Ideally, we want \(P(id(q) = id(d) \mid q, d)\). Given a small, fixed universe of entities and an identity oracle, it would be possible to calculate this conditional probability by exhaustive enumeration. That’s generally not possible, so we try to construct a compatibility function that captures the structure of the relationship between the query and dictionary records and the probability of identity. We also aim to maximise the variance of the conditional probability over the space of record pairs. The construction of the compatibility function can be viewed as a type of feature engineering in a regression model predicting the probability of identity from only one predictor: the compatibility. (Treating this as a regression problem implies that we have an identity oracle providing the outcome for estimating the model.)

Reverting to using similarity as the compatibility function, what we want to do here is to find the relationship between similarity and probability of identity: \(P(id(q) = id(d) \mid sim(q, d))\). This is a calibration of the similarity. It yields a function that allows us to translate any similarity into the corresponding probability.

We would expect the probability to be a monotone function of the similarity, but given the heuristic nature of the similarity metrics this is not guaranteed. We can use any functional form that is scientifically justified; in general, we will only assume that the relationship is smooth.
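
One way to estimate such a smooth (but not necessarily monotone) calibration curve is a logistic GAM, sketched below with the mgcv package. It assumes a data frame pair_data of scored query-dictionary pairs with a similarity column and a match indicator supplied by an identity oracle; these names are invented for the sketch and do not come from the project itself.

```r
library(mgcv)  # gam() with smooth terms

# Assumed input: `pair_data`, a data frame of scored query-dictionary pairs with
#   sim   - similarity of the pair
#   match - 1 if the pair refers to the same entity (identity oracle), else 0
# The logistic GAM fits a smooth, not necessarily monotone, calibration curve.
calibration <- gam(match ~ s(sim), family = binomial, data = pair_data)

# Translate any similarity value into an estimated probability of identity,
# i.e. P(id(q) = id(d) | sim(q, d)).
calibrate_sim <- function(sim) {
  predict(calibration, newdata = data.frame(sim = sim), type = "response")
}

calibrate_sim(c(0.2, 0.8, 0.95))
```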

Subpopulation calibration

Combining calibrated similarity

Frequency-aware similarity

Construal as an interaction

Curried similarity functions?

A test citation: (Lange and Naumann 2011)

References

Lange, Dustin, and Felix Naumann. 2011. “Frequency-Aware Similarity Measures: Why Arnold Schwarzenegger Is Always a Duplicate.” In, 243–48. New York, New York, USA: ACM Press. https://doi.org/10.1145/2063576.2063616.