Last updated: 2020-12-04


Knit directory: fa_sim_cal/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


The R Markdown file has unstaged changes. To know which version of the R Markdown file created these results, you’ll want to first commit it to the Git repo. If you’re still working on the analysis, you can ignore this warning. When you’re finished, you can run wflow_publish to commit the R Markdown file and build the HTML.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version 9f40051. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    .tresorit/
    Ignored:    renv/library/

Unstaged changes:
    Modified:   analysis/idea.Rmd

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/idea.Rmd) and HTML (docs/idea.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
html 9f40051 Ross Gayler 2020-12-02 Build site.
Rmd 0379680 Ross Gayler 2020-12-02 End of day
html 5f37c79 Ross Gayler 2020-11-30 Build site.
html c2e37f3 Ross Gayler 2020-11-30 Initial ndex.Rmd
Rmd 2a722d0 Ross Gayler 2020-11-29 end of day
html 2a722d0 Ross Gayler 2020-11-29 end of day

This document explains the central idea behind the project.

Problem setting

The problem concerns entity resolution: determining whether multiple records, each derived from some entity, refer to the same entity. For concreteness, we consider a database lookup use case. That is, given a query record (corresponding to an entity) and a dictionary of records (each corresponding to a unique entity), we want to find the dictionary record (if any) that corresponds to the same entity as the query record.

We introduce some more formal notation before considering the implications of the problem setting.

There is a universe of entities, \(e \in E\). For example, the entities might be persons. Each entity has a unique identity, \(id(e)\), that is not accessible to us.

There is a dictionary (database) of records, \(d \in D\), each corresponding to an entity. Overloading the meaning of \(id()\), we denote the identity of the entity corresponding to a dictionary record as \(id(d)\). As with entities, this identity is not accessible to us.

We assume that the dictionary records correspond to unique entities, \(id(d_i) = id(d_j) \iff i = j\). In general, the dictionary \(D\) corresponds to only a subset of the entities \(E\).

There is a set of query records, \(q \in Q\). Once again, overloading the meaning of \(id()\), we denote the identity of the entity corresponding to a query record as \(id(q)\). The set of queries \(Q\) is assumed to be representative of the queries that will be encountered in practice.

Each dictionary record is assumed to be the result of applying some observation process to an entity, \(d_i = obs_d(e_i)\). Likewise, each query record is assumed to be the result of applying some observation process to an entity, \(q_j = obs_q(e_j)\). The observations are usually taken to be tuples of values, e.g. \((name, address, age)\). This is not strictly necessary, but is convenient and will be adopted here. Note that the dictionary and query observation functions are different and may have different codomains. For convenience, we only consider the case where both observation functions have the same codomain.
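To make the notation concrete, here is a minimal sketch (in Python, for illustration only; the entities, field values, and the particular distortion applied by the query observation process are all invented) of two observation processes sharing the same codomain of \((name, address, age)\) tuples:

```python
from collections import namedtuple

# Both observation processes share a codomain: (name, address, age) tuples.
Record = namedtuple("Record", ["name", "address", "age"])

# Hypothetical entities. The dict keys stand in for the identities id(e),
# which in the real problem are not accessible to us.
entities = {
    1: {"name": "Ann Smith", "address": "1 High St", "age": 34},
    2: {"name": "Bob Jones", "address": "9 Low Rd", "age": 51},
}

def obs_d(e):
    """Dictionary observation process obs_d: records values as stored."""
    return Record(e["name"], e["address"], e["age"])

def obs_q(e):
    """Query observation process obs_q: a different (noisier) channel,
    crudely modelled here as upper-casing the name (e.g. a form
    filled in using capitals)."""
    return Record(e["name"].upper(), e["address"], e["age"])

d1 = obs_d(entities[1])  # Record(name='Ann Smith', ...)
q1 = obs_q(entities[1])  # Record(name='ANN SMITH', ...): same entity,
                         # same codomain, different observed value
```

The point of the sketch is that \(obs_d\) and \(obs_q\) applied to the same entity need not yield equal tuples, even though both tuples live in the same codomain.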

If the identities were accessible to us we could define the lookup function \(lookup(q, D) = \{ d \in D : id(d) = id(q) \}\), which is guaranteed to return either a singleton set or the empty set. Unfortunately, the identities are not accessible to us to use in \(lookup()\). Instead, we are forced to define the lookup function in terms of the observation values, which are not guaranteed to uniquely identify the entities. The interesting characteristics of this problem arise from attempting to use the observation values as a proxy for identity.
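The contrast between the ideal identity-based lookup and an observation-based proxy can be sketched as follows (Python, purely illustrative: the records, the `id` tags visible to the evaluator but not to a real lookup, and the exact-match proxy rule are all invented):

```python
# Dictionary records. Each is tagged with the entity identity id(d),
# which in reality is NOT accessible to the lookup function.
dictionary = [
    {"id": 1, "name": "Ann Smith", "age": 34},
    {"id": 2, "name": "Ann Smith", "age": 51},  # distinct entity, same name
]

def lookup_by_id(q, D):
    """Ideal lookup using identities: returns the indices of matching
    records, guaranteed to be a singleton set or the empty set."""
    return {i for i, d in enumerate(D) if d["id"] == q["id"]}

def lookup_by_obs(q, D):
    """Proxy lookup using observed values only (exact match on name):
    may return several records, because observed values need not
    uniquely identify entities."""
    return {i for i, d in enumerate(D) if d["name"] == q["name"]}

query = {"id": 1, "name": "Ann Smith", "age": 34}

lookup_by_id(query, dictionary)   # {0}: at most one record
lookup_by_obs(query, dictionary)  # {0, 1}: the observed name is ambiguous
```

The ambiguity in the second result is exactly the difficulty that arises from using observed values as a proxy for identity.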

Note that the lookup process can be described with respect to a single query \(q\). We aim to define \(lookup()\) to be as accurate as possible for every specific query \(q\). The set of queries \(Q\) is only relevant in so far as we will summarise the performance of \(lookup()\) over \(Q\) in order to make claims about the expected performance over queries.

Probability of identity

Given that we don’t have access to identity, the general approach taken in this field is to assess the compatibility of each pair of query and dictionary records, where \(compat(q_i, d_j)\) is defined in terms of the observed values \(q_i\) and \(d_j\). (Remember, \(q_i = obs_q(e_i)\) and \(d_j = obs_d(e_j)\).)
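One very simple form such a compatibility function could take is the fraction of fields on which the two records agree exactly. This is a hypothetical sketch (Python; the field names, example records, and the agreement rule are invented for illustration), not the measure used in this project:

```python
def compat(q, d):
    """Compatibility of a query record q and dictionary record d,
    computed only from observed values: the fraction of fields on
    which the two records agree exactly."""
    fields = ["name", "address", "age"]
    agree = sum(q[f] == d[f] for f in fields)
    return agree / len(fields)

q = {"name": "Ann Smith", "address": "1 High St", "age": 34}
d = {"name": "Ann Smith", "address": "1 High Street", "age": 34}

compat(q, d)  # 2/3: agrees on name and age, but not address
```

Exact field agreement is deliberately crude; the later sections on similarity and calibration concern more graded notions of compatibility.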

Similarity

Calibrated similarity

Subpopulation calibration

Combining calibrated similarity

A test citation: (Lange and Naumann 2011)

References

Lange, Dustin, and Felix Naumann. 2011. “Frequency-Aware Similarity Measures: Why Arnold Schwarzenegger Is Always a Duplicate.” In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM ’11), 243–48. New York, New York, USA: ACM Press. https://doi.org/10.1145/2063576.2063616.