Fine-tuning training for a mixed-language tessdata #15
I also have the same question for jpn.traineddata and Japanese.traineddata: what are the differences between them, and how can I fine-tune Japanese.traineddata?
Unpack the traineddata (Hebrew or Japanese). Run dawg2wordlist to get the input wordlist files, in case you want to change them. You may need to add 'Hebrew' or 'Japanese' as valid language codes in training/language_specific.sh and create subfolders for them under langdata with the unpacked files. Alternatively, you can modify the heb or jpn langdata folders with the new files and train using the Hebrew or Japanese best traineddata for extracting the LSTM model to continue from. See …
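A minimal sketch of those steps, assuming Hebrew.traineddata from tessdata_best/script has been downloaded into the current directory (the file names follow the usual `combine_tessdata -u` prefix convention; the output paths are illustrative):

```sh
# Unpack all components of the script traineddata into files
# prefixed with "Hebrew." (lstm, unicharset, dawgs, ...).
combine_tessdata -u Hebrew.traineddata Hebrew.

# Convert the packed word dawg back into a plain-text wordlist
# so it can be inspected or edited before retraining.
dawg2wordlist Hebrew.lstm-unicharset Hebrew.lstm-word-dawg Hebrew.wordlist

# Extract just the LSTM model, for later use with
# lstmtraining --continue_from.
combine_tessdata -e Hebrew.traineddata Hebrew.lstm
```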
@Shreeshrii thanks for replying. I'm not sure I understand the answers to the original questions.
The training was done by @theraysmith at Google. I only know what he has posted in these forums. Please see tesseract-ocr/tessdata#62 (comment), where he has explained the difference between models for 'scripts' and models for 'languages'.
I am going to fine-tune one of the tessdata_best traineddata files with new fonts, but I am not sure how many pages I should use for training, or how many iterations, so as not to degrade the existing traineddata file. Are there any recommendations about those parameters?
@kotebeg Please see https://github.com/tesseract-ocr/tesseract/wiki/TrainingTesseract-4.00#fine-tuning-for-impact. Use the tesseract-ocr Google group for asking questions.
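A sketch of the fine-tuning run that wiki section describes, assuming the LSTM model was already extracted as above and that training .lstmf files plus a listfile have been generated (e.g. with tesstrain.sh). The paths are hypothetical and --max_iterations 400 mirrors the wiki's short-run example, not a recommendation:

```sh
# Fine-tune starting from the extracted LSTM model. A small
# --max_iterations keeps the run short so the model is not pushed
# far from its starting point; watch the reported error rate.
lstmtraining \
  --model_output   ~/finetune/heb_impact \
  --continue_from  Hebrew.lstm \
  --traineddata    Hebrew.traineddata \
  --train_listfile ~/finetune/heb.training_files.txt \
  --max_iterations 400

# Stop training and pack the checkpoint into a usable traineddata file
# (lstmtraining writes checkpoints as <model_output>_checkpoint).
lstmtraining \
  --stop_training \
  --continue_from ~/finetune/heb_impact_checkpoint \
  --traineddata   Hebrew.traineddata \
  --model_output  ~/finetune/Hebrew_finetuned.traineddata
```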
If I understand correctly, traineddata files that start with a capital letter are "mixed-language" traineddata (e.g. Hebrew = heb + eng).
Was it produced by combining the "heb" and "eng" traineddata files, or was it trained from scratch on mixed-language data?
Is there anything I should do differently when fine-tuning the "Hebrew" traineddata compared to the "heb" traineddata?
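One way to probe this yourself (a sketch, reusing the unpack above): list the packed components and check whether the recognizer's character set covers Latin as well as Hebrew. This shows the script model handles both scripts in one network; whether it was trained jointly or stitched together is answered in the comment linked earlier (trained on mixed data per script, not combined from two traineddata files):

```sh
# Dump the directory of components packed inside the traineddata.
combine_tessdata -d Hebrew.traineddata

# The unicharset is plain text with a script-name field per entry;
# count entries tagged with the Latin script.
grep -c Latin Hebrew.lstm-unicharset
```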