  • Book
  • © 2010

An Introduction to Duplicate Detection

Part of the book series: Synthesis Lectures on Data Management (SLDM)


Table of contents (6 chapters)

  1. Front Matter (pages i-ix)
  2. Data Cleansing: Introduction and Motivation (Felix Naumann, Melanie Herschel; pages 1-11)
  3. Problem Definition (Felix Naumann, Melanie Herschel; pages 13-22)
  4. Similarity Functions (Felix Naumann, Melanie Herschel; pages 23-42)
  5. Duplicate Detection Algorithms (Felix Naumann, Melanie Herschel; pages 43-59)
  6. Evaluating Detection Success (Felix Naumann, Melanie Herschel; pages 61-68)
  7. Conclusion and Outlook (Felix Naumann, Melanie Herschel; pages 69-70)
  8. Back Matter (pages 71-77)

About this book

With the ever-increasing volume of data, data quality problems abound. Multiple yet differing representations of the same real-world object, so-called duplicates, are among the most intriguing data quality problems. The effects of such duplicates are detrimental: bank customers can obtain duplicate identities, inventory levels are monitored incorrectly, catalogs are mailed multiple times to the same household, and so on. Automatically detecting duplicates is difficult: first, duplicate representations are usually not identical but differ slightly in their values; second, in principle all pairs of records must be compared, which is infeasible for large volumes of data. This lecture closely examines the two main components used to overcome these difficulties: (i) similarity measures, which automatically identify duplicates when comparing two records; well-chosen similarity measures improve the effectiveness of duplicate detection; and (ii) algorithms designed to search very large volumes of data for duplicates; well-designed algorithms improve the efficiency of duplicate detection. Finally, we discuss methods to evaluate the success of duplicate detection.
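As a rough illustration of these two components, the following sketch (illustrative Python, not code from the book) pairs a simple token-based Jaccard similarity with a basic blocking step, so that only records sharing a blocking key, such as the surname, are compared instead of all pairs. The function names, the blocking key, and the threshold of 0.5 are assumptions chosen for the example, not the book's method.

```python
# Illustrative sketch only (not from the book): a token-based Jaccard
# similarity plus a simple blocking step to avoid comparing all pairs.

def jaccard_similarity(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two strings."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a and not tokens_b:
        return 1.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


def find_duplicates(records, blocking_key, threshold=0.5):
    """Group records by a blocking key, then compare only within blocks.

    Blocking avoids the quadratic all-pairs comparison mentioned above;
    `blocking_key` is a hypothetical helper, e.g. the lowercased surname.
    """
    blocks = {}
    for rec in records:
        blocks.setdefault(blocking_key(rec), []).append(rec)

    duplicates = []
    for block in blocks.values():
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                if jaccard_similarity(block[i], block[j]) >= threshold:
                    duplicates.append((block[i], block[j]))
    return duplicates


if __name__ == "__main__":
    people = ["Ann Marie Jones", "Ann M. Jones", "Bob Brown", "Robert Brown"]
    # Blocks on the last token (surname); reports the Jones pair only.
    print(find_duplicates(people, blocking_key=lambda r: r.split()[-1].lower()))
```

Real systems combine several field-level similarity measures and more elaborate candidate-selection strategies (for example, the sorted-neighborhood method); the book's chapters on similarity functions and duplicate detection algorithms treat these in depth.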

Authors and Affiliations

  • Felix Naumann, Hasso Plattner Institute, Potsdam, Germany

  • Melanie Herschel, University of Tübingen, Germany

About the authors

Felix Naumann studied mathematics, economics, and computer science at the University of Technology in Berlin. After receiving his diploma in 1997, he joined the graduate school "Distributed Information Systems" at Humboldt University of Berlin, where he completed his Ph.D. thesis on "Quality-driven Query Answering" in 2000. In 2001 and 2002 he worked at the IBM Almaden Research Center on topics around data integration. From 2003 to 2006 he was an assistant professor of information integration at Humboldt University of Berlin, and since 2006 he has held the chair for information systems at the Hasso Plattner Institute at the University of Potsdam in Germany. He is Editor-in-Chief of the Information Systems journal. His research interests are in the areas of information integration, data quality, data cleansing, text extraction, and, of course, data profiling. He has given numerous invited talks and tutorials on the topic of the book.

Melanie Herschel finished her studies of information technology at the University of Cooperative Education in Stuttgart in 2003. She then joined the data integration group at Humboldt University of Berlin (2003–2006) and continued her research on data cleansing and data integration at the Hasso Plattner Institute at the University of Potsdam in Germany (2006–2008). She completed her Ph.D. thesis on "Duplicate Detection in XML Data" in 2007. In 2008, she worked at the IBM Almaden Research Center, concentrating her research on data provenance. Since 2009, she has pursued research on data provenance and query analysis in the database systems group at the University of Tübingen in Germany. Besides her publications and invited talks on duplicate detection and data cleansing, Melanie Herschel has been a member of several program committees and has chaired and organized a workshop on data quality.
