compression artifacts in GT #33

Open
bertsky opened this issue Jun 19, 2019 · 10 comments
Labels: groundtruth (Groundtruth quality issues)

bertsky commented Jun 19, 2019

Another report on GT issues (not assets):

In …

…images show clear signs of JPEG compression, with notable artifacts around sharp-contrast edges such as graphemes. ImageMagick identifies them as TIFF at 200 PPI (or 72 PPI, or with no resolution tag at all), without compression, without any crs or exif tags, and with very few tiff tags (e.g. no software or artist).

(In contrast, "good" images in other workspaces are identified as TIFF at 300 PPI without compression, with full aux, crs, xmp, exif and tiff tags that list the camera model, exposure settings, the true date stamp – somewhere in 2011 – and that they were created with Adobe Photoshop Lightroom. Sometimes they are also TIFF at 300 PPI without compression but without those tags, listing IrfanView or PROView or OmniScan or multidotscan as creator software.)
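For reference, here is a minimal sketch of how this kind of metadata can be checked programmatically (assuming Pillow is installed; the file name is a placeholder, not one of the actual GT images):

```python
from PIL import Image
from PIL.TiffTags import TAGS

path = "some_page.tif"  # placeholder file name

with Image.open(path) as im:
    # Pillow exposes the TIFF resolution as `dpi` (if resolution tags exist)
    # and the compression scheme as `compression`.
    print("format:     ", im.format)
    print("dpi:        ", im.info.get("dpi"))          # None if no resolution tags
    print("compression:", im.info.get("compression"))  # e.g. 'raw', 'tiff_lzw', 'jpeg'
    if im.format == "TIFF":
        # List which TIFF tags are actually present (very sparse for the bad files).
        print("tiff tags:  ", sorted(TAGS.get(t, str(t)) for t in im.tag_v2))
```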

I found this because I had trouble binarizing such images: I would always get too many (un)connected components, regardless of threshold settings.
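Purely to illustrate that symptom, one could count foreground components after a naive global threshold – a sketch, assuming NumPy, Pillow and SciPy; the file name and threshold are placeholders:

```python
import numpy as np
from PIL import Image
from scipy import ndimage

path = "some_page.tif"  # placeholder
threshold = 128         # arbitrary global threshold, for illustration only

gray = np.array(Image.open(path).convert("L"))
binary = gray < threshold  # foreground = dark pixels

# Count 8-connected foreground components; JPEG ringing around glyphs
# tends to inflate this number drastically compared to clean scans.
_, n_components = ndimage.label(binary, structure=np.ones((3, 3)))
print("connected components:", n_components)
```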

@tboenig I'd say this is the most urgent issue so far.

tboenig commented Jun 24, 2019

Hi @bertsky,

Thank you very much.
In order to understand the GroundTruth, one has to look at the background of how the data was created.
The GroundTruth data is based on the German Text Archive (DTA). The text was transcribed manually on the basis of very legible, high-resolution images; the image quality had to allow the transcriber a high magnification so that the text could be captured 100% as full text.

The listed objects come from different libraries.
Because these libraries did not provide the German Text Archive with TIFF files, the JPG files they did provide had to be used. Even upon request, some libraries could not supply TIFF files for the titles in question, and the DTA project could not afford the costs of re-digitisation. See for example:
https://www.sub.uni-goettingen.de/fileadmin/media/texte/benutzung/Preisliste_Reproductions_20150306.pdf

Even today, TIFF images cannot simply be downloaded.

TIFF header:
Since the files were originally JPG files, there cannot be a correct header that would conform to the guidelines: https://www.slub-dresden.de/fileadmin/groups/slubsite/SLUBArchiv/SLUBArchiv_Hanreichung_TIFF_v1.3.pdf
As far as I know, there is no uniform rule for libraries as to which header data to use. For this reason, heterogeneity must always be expected.

Why is such data in the GroundTruth at all?
It is not unrealistic that such data, despite all due care, is held by the libraries and has to be converted into full text. The goal of OCR-D should be that the programs and algorithms are robust enough to handle these artifacts easily.

However, we know that training requires the best possible data, available in large quantity and variety. We are still trying to increase the amount of training data.

bertsky commented Jun 24, 2019

Thanks @tboenig for this thorough investigation and explanation!

If those files are there to stay, and for good reasons too, then I recommend at least marking them as degenerate in the GT repos (or even splitting GT into a "good" and a "robust" set).

Also, under these circumstances, I think we should give binarization a closer look (effective DPI, artifacts).

tboenig commented Jun 24, 2019

> splitting GT into a "good" and a "robust" set

@bertsky That's a really good idea. I'll see how I can implement it.

kba commented Jun 25, 2019

@tboenig will provide those lists and we will evaluate how to integrate automated checks (image characterization) into workspace validation in core.

cneud commented Oct 17, 2019

I strongly opt for keeping the above part of the assets for testing purposes, as this reflects well the real-life scenarios for which the OCR-D stack should be made robust (what @tboenig said).

kba commented Oct 23, 2019

> I strongly opt for keeping the above part of the assets for testing purposes, as this reflects well the real-life scenarios for which the OCR-D stack should be made robust (what @tboenig said).

@bertsky was referring to the GT we offer for training, not the assets repo itself.

bertsky commented Nov 1, 2019

What's the status of the work on a good vs robust split of GT data?

And, related but independent: for those datasets with wrong resolution metadata (e.g. praetorius_syntagma02_1619_teil2 and glauber_opera01_1658 report 72 DPI, whereas they are in fact 600 DPI), shouldn't their header information at least be corrected? (Remember, we now rely on pixel density – where annotated – in core and other processors.)
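For the header fix itself, a minimal sketch with Pillow (file names are placeholders; note that this rewrites the file rather than patching the tag in place, although Pillow writes uncompressed TIFF by default, so the pixel data is not lossily re-compressed):

```python
from PIL import Image

src = "praetorius_page.tif"        # placeholder input file
dst = "praetorius_page_fixed.tif"  # placeholder output file

with Image.open(src) as im:
    print("annotated dpi:", im.info.get("dpi"))  # e.g. (72, 72)
    # Re-save with the correct resolution tags.
    im.save(dst, dpi=(600, 600))
```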

(The images in the 2 mentioned bags also contain a digital footer added to the scan – this is clearly wrong, isn't it?)

[image: dfgviewer – scan with the added digital footer]

bertsky commented Nov 1, 2019

> (Remember, we now rely on pixel density – where annotated – in core and other processors.)

To illustrate, this is what happens during ocrd-cis-ocropy-dewarp in a sensible preprocessing pipeline:

[image: OCR-D-IMG-DEWARP_0001_TextRegion_1479909781070_10_tl_27]

Thus, because

  • the actual 600 DPI got interpreted as the reported 72 DPI (see the sketch below),
  • the region was deemed too large for line segmentation in ocrd-cis-ocropy-resegment,
  • the GT line segmentation (which has large overlaps) was applied unchanged, and
  • intruders from the neighbouring lines interfered with center line estimation,

dewarping actually warps (deteriorates) the line images even more.
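To make the first point concrete, here is a toy sketch – not the actual ocrd-cis code, and all names and numbers are hypothetical – of how a pixel limit scaled by the annotated DPI flips its verdict when a 600 DPI scan is reported as 72 DPI:

```python
# Hypothetical values: a limit tuned for 300 DPI and a region of fixed pixel size.
REFERENCE_DPI = 300
MAX_REGION_HEIGHT_PX = 1200   # limit at the reference resolution
region_height_px = 2000       # the region's size in pixels, independent of metadata

def scaled_limit(reported_dpi: float) -> float:
    """Scale the pixel-based limit by the annotated pixel density."""
    return MAX_REGION_HEIGHT_PX * reported_dpi / REFERENCE_DPI

for dpi in (72, 600):
    limit = scaled_limit(dpi)
    verdict = "too large, skip resegmentation" if region_height_px > limit else "ok, resegment"
    print(f"reported {dpi:3d} DPI -> limit {limit:6.0f} px -> {verdict}")
```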

kba commented Nov 1, 2019

@tboenig, being the GT guru, should answer this.

Pragmatically, I would relax the requirements on pixel density, since we just cannot rely on image metadata for this. Unfortunately. Cf. OCR-D/spec#129 and OCR-D/core#339.
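One possible shape for such a relaxation – purely a sketch under my own assumptions, not what OCR-D/spec#129 or OCR-D/core#339 actually specify – would be to treat the annotated density as advisory and fall back to a default when it looks implausible:

```python
# Hypothetical bounds and default; real values would need to be agreed in the spec.
PLAUSIBLE_DPI = (150, 1200)
DEFAULT_DPI = 300

def effective_dpi(annotated_dpi):
    """Use the annotated DPI only if it falls in a plausible range for book scans."""
    if annotated_dpi is None:
        return DEFAULT_DPI
    lo, hi = PLAUSIBLE_DPI
    return annotated_dpi if lo <= annotated_dpi <= hi else DEFAULT_DPI

print(effective_dpi(72))    # -> 300 (72 rejected as implausible for these scans)
print(effective_dpi(600))   # -> 600
print(effective_dpi(None))  # -> 300 (no resolution tags at all)
```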

bertsky commented Nov 2, 2019

Thanks @kba for addressing this quickly. This is a real problem for our workflows – for preprocessing (as can be seen above) just as much as for segmentation and OCR (e.g. Tesseract's DPI variable).

I am a bit surprised by your stance, though. When @wrznr and I brought this up at the last developer workshop, we encouraged module projects to make their components DPI-aware/relative. Why was there no objection at the time?

However, if you want to do it this way, please do it better. I took the liberty of adding reviews on both your spec PR (for a better definition of the exceptions) and core PR (for a more manageable reaction). I know it's much more work, but I believe we risk losing big time in overall achievable quality if we just let this slip through.
