XQuery/Finding Duplicate Documents

Motivation

You have a collection of documents that may contain identical copies of some documents. You want to detect and remove the duplicates.

Method[edit | edit source]

There are two approaches. One is to write a "compare" function that compares each element and node of every document against every other document. The problem with this approach is that it runs in roughly O(n²) time, which can be very slow for larger data sets. This "pairwise compare" is best when you have a very small set of data.
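To make the pairwise approach concrete, here is a minimal sketch that compares every pair of documents in a collection with fn:deep-equal. The collection path is only a placeholder, and the nested loops are exactly what makes the cost grow as n-squared.

xquery version "1.0";
(: pairwise comparison sketch: every document is compared with every other
   document using deep-equal, which is why the cost grows as n squared :)
let $collection := '/db/test/hash/docs'   (: placeholder collection path :)
let $file-names := xmldb:get-child-resources($collection)
return
<duplicates>
  {for $file1 in $file-names
   for $file2 in $file-names
   where $file1 lt $file2   (: look at each unordered pair only once :)
     and deep-equal(doc(concat($collection, '/', $file1)),
                    doc(concat($collection, '/', $file2)))
   return
      <pair>
         <file>{$file1}</file>
         <file>{$file2}</file>
      </pair>
  }
</duplicates>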

In this example we will create a "hash" value for each document. You can then remove all documents that share the same hash value. Hashing is useful because once a hash has been created it can be stored and used for future comparisons, giving you a consistent way of finding out whether a new document is unique. Comparing document hashes gives users a very quick answer to the question: have we seen this document before?
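For example, if the hashes of documents you have already accepted are kept in a small catalog document, checking a new document is a single lookup. The sketch below is only an illustration: the catalog path /db/test/hash/known-hashes.xml and the incoming document path are hypothetical, and it uses util:serialize, which is described later in this article, to get a consistent string to hash.

xquery version "1.0";
(: sketch only: /db/test/hash/known-hashes.xml is a hypothetical catalog
   holding one <hash> element for every document already seen :)
let $known-hashes := doc('/db/test/hash/known-hashes.xml')/hashes/hash/text()
let $new-document := doc('/db/test/hash/incoming/new.xml')
let $new-hash := util:hash(util:serialize($new-document, ()), 'md5')
return
   if ($new-hash = $known-hashes)
   then <status>duplicate: we have seen this document before</status>
   else <status>new document</status>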

About Hash Functions

A hash function takes an input string and performs a mathematical function on it that produces a fairly short string which is, for practical purposes, unique: the probability that two different documents produce the same hash value is very, very low. For example, if you calculated a million hashes per second, you would expect to see a duplicate only once every hundred billion years or so. This is good enough for most business applications. The key property is that if even a single character in a document changes, the hash will be completely different.
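You can see this sensitivity by hashing two strings that differ in a single character; the two hash values share no obvious resemblance. A minimal sketch, using the util:hash function introduced below:

xquery version "1.0";
(: the inputs differ only in the final character, yet the two
   hash values are completely unrelated :)
let $input1 := 'Hello World!'
let $input2 := 'Hello World?'
return
<results>
   <hash1>{util:hash($input1, 'md5')}</hash1>
   <hash2>{util:hash($input2, 'md5')}</hash2>
</results>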

The way we use hash functions in XQuery is to call a function such as

  util:hash($input, $algorithm) (see eXist documentation)

where $input is your input document and $algorithm is a string that identifies which hash algorithm to use. Common algorithms include 'md5', 'sha1' and 'sha256'. A hash function always returns a single string, called a hash value, hash code, hash sum, checksum or simply a hash.

Here is a sample of how to use the md5 hash function.

xquery version "1.0";
let $input := 'Hello World!'
let $hash := util:hash($input, 'md5')
return
<results>
  The MD5 hash for '{$input}'={$hash}
</results>

returns:

<results>
  The MD5 hash for 'Hello World!'=ed076287532e86365e841e92bfc50d8c
</results>
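The same call works with the other algorithm names; only the length of the returned hex string changes. A short sketch, assuming your eXist installation supports all three algorithms:

xquery version "1.0";
(: compare the output of the common hash algorithms for one input :)
let $input := 'Hello World!'
return
<results>
  {for $algorithm in ('md5', 'sha1', 'sha256')
   let $hash := util:hash($input, $algorithm)
   return
      <hash algorithm="{$algorithm}" length="{string-length($hash)}">{$hash}</hash>
  }
</results>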

Timing Your Hash Function

MD5 is a favorite hash algorithm because it is very fast and always returns a fixed-length string that is easy to store, to use as a REST parameter and to compare. Here is a sample XQuery that calculates a hash for the entire XML file of the play "Hamlet", which is around 7,842 lines of XML.

xquery version "1.0";
let $file-path := '/db/shakespeare/plays/hamlet.xml'
let $input := doc($file-path)/PLAY
let $start-time := util:system-time()
let $hash := util:hash($input, 'md5')
let $end-time := util:system-time()
return
<results>
  <input-file>{$file-path}</input-file>
  <hash>{$hash}</hash>
  <execution-time>{(($end-time - $start-time) div xs:dayTimeDuration('PT1S'))  * 1000} milliseconds</execution-time>
</results>

This program uses a "system timer" and returns the following result:

<results>
   <input-file>/db/shakespeare/plays/hamlet.xml</input-file>
   <hash>00f1bf99e42afb434dca712f9625aeac</hash>
   <execution-time>15 milliseconds</execution-time>
</results>

When run multiple times, the example above returns either 0 or 15 milliseconds on my local system, which suggests that the reported time mostly reflects disk I/O and the resolution of the timer. The hash itself runs very quickly, usually under 10 milliseconds even for large files on a slow computer.

Note that a hash function is very sensitive to the way a document is "serialized", and the default behavior may not be what you expect. By default the hash works only on the string() value, that is, the text "content" of a document. The string() value of an XML node does not include the attributes or element names of the document. This is not an error, since hash functions are designed to work on strings, not XML documents. The following example uses two elements that have identical text content but different element names and attributes:

xquery version "1.0";
let $input1 := <x a="1">Hello <b>World!</b></x>
let $input2 := <y b="2">Hello <i>World!</i></y>
let $hash1 := util:hash($input1, 'md5')
let $hash2 := util:hash($input2, 'md5')
return
<results>
  <input1>{$input1}</input1>
  <input2>{$input2}</input2>
  <hash1>{$hash1}</hash1>
  <hash2>{$hash2}</hash2>
  <compare>
    {if ($hash1 = $hash2)
       then "same"
       else "different"
    }</compare>
</results>

returns the following:

<results>
    <input1>
        <x a="1">Hello <b>World!</b></x>
    </input1>
    <input2>
        <y b="2">Hello <i>World!</i></y>
    </input2>
    <hash1>ed076287532e86365e841e92bfc50d8c</hash1>
    <hash2>ed076287532e86365e841e92bfc50d8c</hash2>
    <compare>same</compare>
</results>

Carefully Define Document Equality

When we ask the question "Do I already have a copy of this document?" we first need to define what a duplicate document means. In this case we will define it as an XML document that has exactly the same attributes and elements in exactly the same order. Technically, two XML documents may list their attributes in a different order and still be considered the same, or you might choose to treat them as different documents. The amount of whitespace before and after an element may or may not be significant. Whatever your method, you should define the concept of "sameness" carefully for your situation.
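As an illustration of why the definition matters, fn:deep-equal treats attribute order as insignificant, while a hash of the serialized document generally depends on the exact character sequence produced. The sketch below simply displays both answers; serialization behavior can vary between processors, so the hashes may or may not match on your system.

xquery version "1.0";
(: the two elements differ only in the order of their attributes :)
let $input1 := <point x="1" y="2"/>
let $input2 := <point y="2" x="1"/>
let $hash1 := util:hash(util:serialize($input1, ()), 'md5')
let $hash2 := util:hash(util:serialize($input2, ()), 'md5')
return
<results>
   <deep-equal>{deep-equal($input1, $input2)}</deep-equal>
   <hashes-match>{$hash1 eq $hash2}</hashes-match>
</results>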

Create a Consistent Serialization for Each Document

We would like a way to turn each XML document into a single string with all indentation removed. This is sometimes called the "canonical" version of an XML document.

If you are running eXist 1.5, the util module has a serialize function that will convert an entire XML document to a single string:

 let $string-version-of-xml-file := util:serialize($input-xml-file, ())

In the following example the two inputs are identical except for the element name and the name of a single attribute.

xquery version "1.0";
let $input1 := <x a="1">Hello <b>World!</b></x>
let $input2 := <y b="1">Hello <b>World!</b></y>
let $hash1 := util:hash(util:serialize($input1, ()), 'md5')
let $hash2 := util:hash(util:serialize($input2, ()), 'md5')
return
<results>
    <input1>{$input1}</input1>
    <input2>{$input2}</input2>
    <hash1>{$hash1}</hash1>
    <hash2>{$hash2}</hash2>
    <compare>
     {if ($hash1=$hash2)
       then "same"
       else "different"
     }</compare>
</results>

If your system does not have a serialize function you can create your own using a simple recursive function.

Converting an XML Document to a String

The following recursive function copies an XML document element by element, producing a copy with a consistent structure that can then be serialized and hashed.

(: recursively copy an element, its attributes and all of its child nodes :)
declare function local:copy($element as element()) as element() {
  element {node-name($element)}
    {$element/@*,
     for $child in $element/node()
        return if ($child instance of element())
          then local:copy($child)
          else $child
    }
};

The advantage of this approach is that all attributes come out in a consistent order.

This function can also be modified to add or remove elements or attributes that are not relevant to your comparison. See: Identity Transform with XQuery
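Putting the pieces together, a document can be normalized with local:copy and the result serialized and hashed. A short sketch, assuming the local:copy function above is in scope, that util:serialize is available, and that the file path is just an example:

 (: normalize the stored document with local:copy before hashing it :)
 let $input := doc('/db/test/hash/docs/1.xml')/*
 let $normalized := local:copy($input)
 return util:hash(util:serialize($normalized, ()), 'md5')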

Comparing All Files In A Collection

The following program calculates the hash for every file in a collection and then reports which files have the same hash value.

xquery version "1.0";

let $title := 'Find Duplicate Documents'

let $input-collection := '/db/test/hash/docs'
let $file-names := xmldb:get-child-resources($input-collection)

let $hashes :=
<hashes>
   {for $file in $file-names
        let $file-path := concat($input-collection, '/', $file)
        let $doc := doc($file-path)
        let $string-of-doc := util:serialize($doc, ())
        let $md5-hash-of-string-of-doc := util:hash($string-of-doc, 'md5')
        return
        <file>
           <file-name>{$file}</file-name>
           <hash>{$md5-hash-of-string-of-doc}</hash>
        </file>
   }
</hashes>

return
<results>
  <title>{$title}</title>
  <input-collection>{$input-collection}</input-collection>
  {$hashes}
  {for $file1 in $file-names
    let $hash-for-file1 := $hashes/file[file-name=$file1]/hash/text()
    return
    <result>
       <file>{$file1}</file>
       <same-as>{
          for $file2 in $file-names
          let $hash-for-file2 := $hashes/file[file-name=$file2]/hash/text()
          return
            if ($file1 ne $file2 and $hash-for-file1 = $hash-for-file2)
             then $file2
             else 
               ()
          }
       </same-as>
    </result>
   }
</results>

Sample Output

<results>
    <title>Find Duplicate Documents</title>
    <input-collection>/db/test/hash/docs</input-collection>
    <hashes>
        <file>
            <file-name>1.xml</file-name>
            <hash>a1facfc0b442306b3cd3d29dd3494108</hash>
        </file>
        <file>
            <file-name>2.xml</file-name>
            <hash>70b78ba529fdcb44ea22b02257385ea0</hash>
        </file>
        <file>
            <file-name>3.xml</file-name>          
            <hash>df13cee105164d50eeda912943391d0b</hash>
        </file>
        <file>
            <file-name>4.xml</file-name>
            <hash>a1facfc0b442306b3cd3d29dd3494108</hash>
        </file>
    </hashes>
    <result>
        <file>1.xml</file>
        <same-as>4.xml</same-as>
    </result>
    <result>
        <file>2.xml</file>
        <same-as/>
    </result>
    <result>
        <file>3.xml</file>
        <same-as/>
    </result>
    <result>
        <file>4.xml</file>
        <same-as>1.xml</same-as>
    </result>
</results>
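Once duplicates have been identified, one way to remove them is to keep the first file for each distinct hash value and delete the rest with xmldb:remove. The query below is a sketch along those lines rather than a finished tool; try it against a copy of your data first.

xquery version "1.0";
(: keep the first file for each distinct hash value and remove the others :)
let $input-collection := '/db/test/hash/docs'
let $file-names := xmldb:get-child-resources($input-collection)
let $hashes :=
   for $file in $file-names
   return
      <file>
         <file-name>{$file}</file-name>
         <hash>{util:hash(util:serialize(doc(concat($input-collection, '/', $file)), ()), 'md5')}</hash>
      </file>
return
   for $hash in distinct-values($hashes/hash)
   let $files-with-this-hash := $hashes[hash = $hash]/file-name/text()
   (: everything after the first file with this hash is a duplicate :)
   for $duplicate in subsequence($files-with-this-hash, 2)
   return
      (xmldb:remove($input-collection, $duplicate),
       <removed>{$duplicate}</removed>)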