As part of the evaluation framework I'm developing for OPF and Scape, I've been working on gathering a corpus of files to run experiments against.
Although Govdocs1 would seem like a good place to start, there are a few problems:
1) It's too big; 1 million files is just showing off.
2) It's full of repeats! There are over 700,000 PDF files.
3) Running experiments on 1 million files full of repeats generates too much data (yes, there is such a thing)
So I went on a mission to reduce the size of the corpus, which I explain here.
In order to reduce the corpus in size I am relying on the ground truth data, which is the result of running the File Identification Toolset (FI-Tools) over the corpus. Now, the ground truth data may not be correct, but I am relying on it being consistently wrong, such that the size of the corpus can still be easily reduced. We shall hope to find out later whether it is wrong.
Stage 1 – Eradicate all Free Variables (Mr Bond)
The ground truth data also pulls out many of the characteristics of each file. Since we are only interested in the identification data, lots of it can be removed.
Properties to remove:
- Last Saved
- Last Printed
- Number of Pages
- Image Size
- File Name (for now)
- File Size (for now)
- other characteristics…
Properties to keep:
- Version (& related information)
- Valid File Extensions
- Accuracy of Identification
- Creating Program (or library)
- Description Index (serial code)
- Extension Valid (Y/N)
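As a rough sketch of this stage, assuming the ground truth is exported as a tab-separated file with the identification columns at known positions (both the filename and the field numbers here are assumptions, not the real FI-Tools layout), the unwanted properties can be dropped with `cut`:

```shell
# Hypothetical layout: a tab-separated export of the ground truth with
# the identification columns (version, valid extensions, accuracy,
# creating program, description index, extension-valid flag) at fields
# 2, 5, 7, 9, 11 and 12 -- adjust to match the real FI-Tools output.
cut -f2,5,7,9,11,12 groundtruth.tsv > data.txt
```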
Stage 2 – Sort-id
This is an easy stage, run:
sort -u data.txt > limit.txt
This gives us 4653 unique identifications made up of 87 different extensions. Of the 4653 identifications:
Only 20 extensions have more than 20 different identification types, probably down to the small number of files of other formats in the Govdocs selection. However, it is still shocking to see that PDFs can be created in 3,337 different ways. Considering some other formats have never changed (text), we have 20 or so versions of PDF (including PDF/A) and loads of creation libraries. By trying to solve the problem, have we actually made it worse?
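The per-extension breakdown can be derived from limit.txt with standard tools. This sketch assumes the extension is the first tab-separated field, which is a guess about the layout rather than the actual format:

```shell
# Count the distinct identifications (4653 for this corpus)
wc -l < limit.txt

# Hypothetical: with the extension in field 1, count identification
# types per extension to spot formats (like PDF) with thousands of variants
cut -f1 limit.txt | sort | uniq -c | sort -rn | head
```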
At this point we could just stop and select 4653 files, one of each type of identification.
Stage 3 – Select some Files
The final stage is to actually select some files of each of the 4653 types of identification.
It was decided to select 10 of each type of identification where possible.
If fewer than 10 were available, then however many were available were selected.
Where more than 10 were available, the following selection policy applies:
- Select the largest by filesize
- Select the smallest by filesize
- Select 8 random others.
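The policy above can be sketched in shell. This assumes a hypothetical files.txt listing "size<TAB>path" for every file matching a single identification type, and that GNU shuf is available; it is an illustration, not the actual selection code:

```shell
#!/bin/sh
# files.txt: "size<TAB>path", one line per file of this identification type.
if [ "$(wc -l < files.txt)" -le 10 ]; then
    # Fewer than 10 available: take however many there are
    cp files.txt selected.txt
else
    sort -n files.txt > sorted.txt
    tail -n 1 sorted.txt  > selected.txt                  # largest by filesize
    head -n 1 sorted.txt >> selected.txt                  # smallest by filesize
    sed '1d;$d' sorted.txt | shuf -n 8 >> selected.txt    # 8 random others
fi
```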
Stage 4 – Publish
Further to this, I'll also push up the code that does all this.