VideoHelp Forum
Results 1 to 22 of 22
Thread
  1. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    flotrack and twitter post a lot of these scores. Is there any external software (gui or command line) that can ocr this so i can pull the names to read and search them ?



    source: https://twitter.com/FloTrack/status/655397567258345472/photo/1
  2. Member
    Join Date
    Apr 2003
    Location
    United States
    If you have a scanner, maybe it came with software that does OCR?
    I downloaded the document you linked to and OCR'd it with Omnipage 18 (purchased on Amazon a couple of years ago) and it did a good job. I saved it as a searchable PDF and it worked well.

    Brainiac
  3. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    Yeah, I have several scanners and software suites for them, but they want a scanner connected before they can even start up. I have 4 USB ports and all are taken up.

    I also tried downloading a few via Google search, but my browser pops up a malicious-software warning, so I cancel those. Anyway, I will keep searching. If I have to, I will look for an OCR component for Delphi XE7 and see if I can make my own custom drag/drop OCR.
  4. The best OCR software: http://www.abbyy.com/finereader/
    You can also give freeware (and open source) a chance: https://en.wikipedia.org/wiki/Tesseract_%28software%29 at https://code.google.com/p/tesseract-ocr/


    I would advise reading https://code.google.com/p/tesseract-ocr/wiki/ImproveQuality - source preprocessing is one of the keys to successful OCR. (So in your case: convert color to grayscale and adjust levels; some basic operations such as dilate and erode may also improve results, but you need sufficient resolution for this, i.e. approx. 300 DPI.) See http://www.fmwconcepts.com/imagemagick/textcleaner/index.php and https://www.cs.auckland.ac.nz/courses/compsci773s1c/lectures/ImageProcessing-html/topic4.htm
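    The grayscale and levels steps above can be sketched in a few lines; Python here purely for illustration (the helper names are made up for this example, and pixel tuples stand in for a real decoded bitmap), and the same math ports directly to Delphi or an ImageMagick chain:

    ```python
    def to_grayscale(pixels):
        """Rec. 601 luma: the usual color-to-gray weighting used before OCR."""
        return [round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in pixels]

    def stretch_levels(gray):
        """Stretch the used range to full 0..255 contrast (simple autolevels)."""
        lo, hi = min(gray), max(gray)
        if hi == lo:
            return gray  # flat image: nothing to stretch
        return [round(255 * (v - lo) / (hi - lo)) for v in gray]

    print(stretch_levels(to_grayscale([(50, 60, 70), (200, 210, 220)])))  # [0, 255]
    ```

    A low-contrast scan (values clustered in the middle of the range) comes out spanning the full black-to-white range, which is exactly what the OCR engine's own binarizer wants to see.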

    Forgot to add one thing - for best results you need to train your OCR engine, and the best engines always provide a training feature.
    Last edited by pandy; 18th Oct 2015 at 06:10.
  5. I would change it to black and white, not greyscale, adjusting the contrast first, and then just try a copy and paste into Excel. Exporting to PDF first may help.
  6. Originally Posted by Nelson37 View Post
    I would change it to black and white, not greyscale, change the contrast first, and then just try a copy and paste into Excel. Export to PDF first may help.
    Grayscale allows keeping fine information - binarization can be (and usually is) performed by the OCR engine itself, sometimes in an adaptive way.
  7. +1 for pandy. "Grayscale allow to keep fine information" That always seems to work best, both for OCR and for PDF copies (for me anyway), but it will increase the file size. Black and white seems to produce smaller files, but grayscale looks better.
    Extraordinary claims require extraordinary evidence -Carl Sagan
  8. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    Well, at the moment there is no easy way to make this image black/white or grey yet. I thought the Tesseract OCR command line tool had a parameter for that; my mistake. I guess you have to pre-process separately first, and then feed the result into the command line OCR. I'll have to find a fast way to b/w or grey an image and feed that into the OCR.

    I played around with the Tesseract OCR tool but received horrible results so far. I want to be able to quickly copy/paste/OCR an item that I might snip from the web from time to time, or when I come across a good score card or stats on FloTrack (and other sites that post these sorts of things), OCR them quickly, bring them into Notepad or Excel or a database, and maybe work further on the data. I will keep trying for a while longer. Thanks for all the suggestions.
  9. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    OK, I managed to make a small color-to-grayscale image converter app with a copy/paste feature. It uses a luminance formula. I think the OCR processed better; not sure what others here have gotten. Anyway, I will continue making adjustments and improvements.

    Code:
    Prim:Time Timing - Cunmicxor License Hy-Tek's MEET MANAGER |0:57 AM 1
    Lullslv 2 D] Pre-Nationals - I0/I7/Z015
    E.P. Tom’ Sawyer sme Park
    Resulls
    
    Name Year School Finals 1>o1nts 
    
    Results β€” women
    
    1 Erin mark 03 Colorado 20:00.5 1
    2 Hannah Everson SR Air Force 20:01.2 2
    3 che1sea Elaase 51: Tennessee 20:02.5
    4 Sharon Lokedi rs: Kansas 20:00.3 3
    5 Hannah Mclntuxff so utah 20:09.0 4
    6 Erin Finn JR Michigan 20:09.1 5
    7 nataiie Schudrowilcz so Brown 20:09.4 5
    0 wsver1y Nee: sk oregan 20:11.5 7
    9 Sarah Fe-any so utan 20:12.1 0
    10 chriatina 14e1ian 01: Stony Brook 20:13.5
    1}. Letitia Saaylnan SR Coastal Carolina 20:15.1 9
    12 Marta Freitas 512 Miss state 20:15.2 :0
    13 Shannon osika 51: Mlchlgan 20:15.5 11
    14 Rhianwedd Price .11: Miss state 20:17.7 12
    15 1<ait1yn senner so Colorado 20:22.4 13
    15 sarah Reitez 012 Eastern Washington 20:25.9 14
    17 Marissa Williams nz California 20:27.1 15
    1s Maddie 1414:: s12 ce1eraaa 20:27.0 1e
    19 Grace Barnett JR Clemson 20:27.9 17
    20 Rachel Walny ax Bowling sreen 20:20.1 10
    21 Maria Larsen FR Florida 20:29.0 19
    22 Cali Roper JR Rice 20:29.2 20
    23 Laura Rose Donegan sk New Hampshire 20:29.6 21
    24 Kashley carter ma Utah state 20:30.5 22
    25 Maria McDaniel so western mchigan 20:31.3
    26 Vanessa Fraser so Stanford 20:32.1 23
    27 Kristen Busch sx Bradley 20:32.7 24
    20 Siofra Cleirigh auttner so Vlllanmla 20:33.2 25
    29 Many Grabill 511 Oregon 20:33.3 2e
    30 Karissa schweizer so Missouri 20:33.4 27
    31 Katy Kunc s0 Kentucky 20:34.5 20
    32 srittany 1-retbar JR Oklahoma 20:34.0 29
    33 Andrea Keklak sx Georgetown 20:35.2 30
    34 Angel 1=1cciri11o 00 Villanova 20:35.5 31
    35 Gina Serena so Michigan 20:37.4 32
    36 Hope Schmelzle JR Purdue 20:30.5 33
    37 ssrah Baxter na oregon 20:39.9 34
    30 Jaimie 1=he1an so mchiqan 20:40.1 35
    33 Makena Morley ma Montana 20:40.3
    410 Taylor spiuane JR Cornell 20:40.4 36
    l1'S6P_h1e chase .10 Stanford 20:41.0 37
    12 1-1a,1β€˜1β€˜ey whetten JR Weber State 20:41.7 30
    4.3 '7.3lllie stokes 50 weber state 20:41.7 39
    I4 Peyton 13115 170 ca1 Poly 20:41.41 40
    45 Jeraam Mcbermitt so Eastern Michigan 20:42.2 41
    ' β€˜'5 Baily Durgin sn conneetieut 20:43.2 42
    1: $7 K35 dam. nasker so Purdue 20:43.3 43
  10. try this
    Image Attached Thumbnails results.pdf  

    Image Attached Files
  11. Lone soldier Cauptain's Avatar
    Join Date
    Jan 2006
    Location
    Brazil
    Look at this, vhelp.

    Made using Acrobat DC Pro




    Claudio
    Image Attached Files
  12. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    @blud7 (and Cauptain), that is excellent. I don't think that I can achieve that with Tesseract, but thanks for proving that it can be done.

    As mentioned previously, using the code in my custom app (in post # 9, above) to convert to grayscale works better for recognition. But I would like to improve upon it.

    Does anyone know how I can convert this image to two-color black and white (b/w)?

    I want to give that a try.

    Right now, I am looking for the formula to do this. I don't know GIMP or any other external app; I want to use my own custom app for this. As part of a mini-suite, I am using Delphi 6, 7 and XE7 for the custom image conversion process. i.e.,

    I have made the following functions assigned to these buttons in my app so far, since post # 9:

    [copy] . [paste] . [convert2grey] . [save to png] . [run ocr]

    Code:
    [copy]         - to clipboard
    [paste]        - from clipboard (from webpage, etc) 
    [convert2grey] - the image 
    [save to png]
    [run ocr]
    But next, I would like to add another function button for [convert2two-color].

    I am using the code below for the grayscale conversion. I couldn't find my original code from the image processing routines I collected and used in my own apps over the years, so I googled and found this one, and it worked for this exercise.

    Code:
    f: http://stackoverflow.com/questions/13583451/how-to-use-scanline-property-for-24-bit-bitmaps
    s: How to use ScanLine property for 24-bit bitmaps?
    
    This luminance value is then assigned to each color component of the iterated pixel:
    
    type
      PRGBTripleArray = ^TRGBTripleArray;
      TRGBTripleArray = array[0..4095] of TRGBTriple;
    
    procedure GrayscaleBitmap(ABitmap: TBitmap);
    var
      X: Integer;
      Y: Integer;
      Gray: Byte;
      Pixels: PRGBTripleArray;
    begin
      // iterate bitmap from top to bottom to get access to each row's raw data
      for Y := 0 to ABitmap.Height - 1 do
      begin
        // get pointer to the currently iterated row's raw data
        Pixels := ABitmap.ScanLine[Y];
        // iterate the row's pixels from left to right in the whole bitmap width
        for X := 0 to ABitmap.Width - 1 do
        begin
          // calculate luminance for the current pixel by the mentioned formula
          Gray := Round((0.299 * Pixels[X].rgbtRed)   +
                        (0.587 * Pixels[X].rgbtGreen) + 
                        (0.114 * Pixels[X].rgbtBlue));
          // and assign the luminance to each color component of the current pixel
          Pixels[X].rgbtRed   := Gray;
          Pixels[X].rgbtGreen := Gray;
          Pixels[X].rgbtBlue  := Gray;
        end;
      end;
    end;
    Note: the above is Pascal (Delphi) source code, not C/C++.

    looking for code examples for [convert2two-color].

    I saw this, but couldn't understand it. I am still searching...
    http://www.imagemagick.org/Usage/color_basics/
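    For reference, the [convert2two-color] step is just a per-pixel threshold on the grayscale value. A minimal sketch in Python (illustrative only; the function names are made up, and the same comparison drops straight into the ScanLine loop above, assigning 0 or 255 to all three channels):

    ```python
    def to_two_color(gray, threshold=128):
        """Fixed-threshold binarization: each value becomes pure black or white."""
        return [255 if v >= threshold else 0 for v in gray]

    def mean_threshold(gray):
        """A simple data-driven threshold: the image's mean gray level."""
        return sum(gray) / len(gray)

    g = [10, 100, 120, 250]
    print(to_two_color(g))                     # fixed cut at 128 -> [0, 0, 0, 255]
    print(to_two_color(g, mean_threshold(g)))  # mean is 120      -> [0, 0, 255, 255]
    ```

    A fixed 128 cut is the crudest choice; deriving the threshold from the image (mean, or Otsu's method) is what the "adaptive" binarization mentioned earlier does.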
    Last edited by vhelp; 21st Oct 2015 at 21:36.
  13. Hi,

    Just to confirm pandy's advice: I've been using "Abbyy Fine Reader" for years (under Windows XP and others): it has always worked and still works very well, including an old version I got (bundled) with a scanner driver ages ago. But, except for that old version ("OEM" or so), it's not free.

    Of course, like any OCR, it's expecting: 300 dpi images, rather than 72, 96 and other resolutions; or 600 dpi when the original characters of the image are very tiny;

    and greyscale mode? In some cases, yes, but not indispensable, most of the time.

    BTW: black and white mode works too (image saved to TIF(F), LZW compression), as long as the scanned characters are big enough. Otherwise, I also confirm that greyscale (and moreover, usual RGB mode) retains more graphical information than B&W ("bitmap" mode), thus preventing (more) OCR errors.



    The ideal source image being: well-contrasted black characters on a clean white background (or white chars. on a clean black bkg.), source images often need to be "cleaned" before OCR-ing.

    Unless the text is almost the same color as the background (or sits on a composite / complex bkg.), or is not contrasted enough against it, not only is the "cleaning phase" quite easy, but "Abbyy Fine Reader" (for example) usually accepts even low / poor contrast pictures. But not a pile of mud!


    For instance, on the initial image posted above: black chars on medium light blue background, I tested: "Abbyy" was still able to proceed, but returned some OCR errors — and warned, as usual: Increase the resolution to 300 dpi to yield better results...

    Note that a 300 dpi resolution is not indispensable, as long as the image, even at 72 dpi (dots per inch) only, is... big (!), in width and height (in pixels).

    Here, the posted image resolution is 72 dpi only... Why not, but its width and height are only (respectively): 10.667 × 14.222 inches, or 768 × 1024 pixels...

    Use any version of "Photoshop" (not free *, OK), including very old ones, to increase (Image > Image size) the resolution to 300 dpi; OR the overall size in pixels, by ~400 to 500% (300 / 72 = 4.17).

    * Except some 3.x to 4.x or 5.x free "Photoshop" LIGHT versions for Windows PCs, that I remember of and keep, which came bundled with printer drivers (on CD) — those light versions being way "advanced" enough for such basic modifications.

    "Photoshop" is probably the most convenient AND EASY tool, to the next and often necessary (but ususally quick) "operations": increase contrast, and convert to greyscale.

    But many others, including free software of course, will do: "Paint.NET" (free), or the very lightweight "Fotografix" (v. 1.5 or above), also free. And "The Gimp", of course. (I also tried "FastStone Viewer 4.8", free, on this, but couldn't get enough contrast in one pass. "ACDSee", commercial, did better...).

    Besides Brightness/contrast, and often first of all: one interesting command in "Photoshop", Image > Adjustments > Replace color, rids the document of one of its colors, more or less, and selectively (adjustable threshold). In many cases, when Brightness/contrast won't clean well enough, that "Replace color" option saves a lot of time.

    And it means that, in general, you should keep the image in its RGB (red green blue) or CMYK (cyan magenta yellow black) mode or state until you're done with the more "advanced" retouching...

    ... and, THEN only, convert it to greyscale (not the other way around).


    Also remember that "Abbyy Fine Reader" accepts RGB images anyway. In fact, converting to greyscale an image that's clean enough already is not very useful; more helpful on complex images or, rare / worst case: characters altered by several contained colors if not "rainbow effects" (in each one of them / happens, sometimes, giving most OCRs a hard time).

    Now, unless I misunderstood something above (bottom posts), I wouldn't try to automate brightness/contrast modifications, as any new scanned image is usually quite different from the previous or next one (unless, of course, you're working on a bunch of pictures that are very similar in contrast, brightness, etc.).


    .
  14. Well, why not Avisynth? ImageMagick is fine, GIMP can also be used, some batch conversion can be performed through XnView, and besides this, dilate/erode operators can sometimes be useful.
  15. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    What software was used to ocr the image in post # 10 ?

    If I can use command line tools (in special cases) I would not mind that. I can still use my custom app.

    For instance, on the initial image posted above: black chars on medium light blue background, I tested: "Abbyy" was still able to proceed, but returned some OCR errors — and warned, as usual: Increase the resolution to 300 dpi to yield better results...

    Note that a 300 dpi resolution is not indispensable, as long as the image, even at 72 dpi (dots per inch) only, is... big (!), in width and height (in pixels).
    .
    .
    Use any version of "Photoshop" — not free *, OK —, including very old, to increase (Image > Image size) the resolution to 300 dpi; OR the overall size in pixels, by ~400 to 500% (300 / 72 = 4.17).
    What about converting the image to 300 DPI then?
    I don't understand the DPI calculations and how they relate to resizing an image, so I would need to know how to do that. If I can do it through Pascal code, that would make the feature seamless for this project.
    But I would be willing to consider an outside program if necessary; I'm trying to keep all the features within the custom project app I am developing.
    So, for this scenario (once I figure out how to change an image's DPI), I would copy the image on a webpage to the clipboard. The text could be small or large. I could send that to a custom DPI_Resizer() routine before processing further.


    "Abbyy Fine Reader"
    It's a deal breaker. I looked this software up and rejected it, because the download requirements were too personal. I don't see the need for all that info just to download something. There are many software suites, great and small, that don't require all this: just a download link and you're done.

    Now, unless I misunderstood something here above (bottom posts), I wouldn't try to automatize brightness/contrast modifications, as any new scanned image is usually quite different from any previous or next one (unless, of course, working on a bunch of pictures that are very similar in contrast, brightness, etc.).
    I can add a feature to adjust the image in real time (with slider controls), run it (or parts of it--the image) through the ocr to fine tune a scan.

    Well, why not Avisynth? Imagemagick is fine, Gimp can be also used, some batch conversion can be performed trough Xnview, side to this sometimes dilate/erode operators can be useful.
    I could probably use Avisynth, too. But I don't know how to use its image processing functions (i.e., turn color into 2-color); plus, I don't believe you can use it in conjunction with the custom app unless Avisynth has a command line interface. I don't think it does; I would've known about it long ago.

    Thanks for the pointers and suggestions.
  16. Member Verify's Avatar
    Join Date
    Oct 2008
    Location
    United States
    Microsoft Office Notebook has a built-in OCR capability - it converts text images to Word text.
    Just scan as an image and use Notebook to convert, or save as an image from the website (this has worked well for me).
    Andrew Jackson: "It's a poor mind that can only think of one way to spell a word."
  17. Originally Posted by TreeTops View Post
    +1 for pandy. "Grayscale allow to keep fine information" That always seems to work best both for OCR stuff and PDF copies. (For me anyways) But it will increase the file size. Black and white seems to produce smaller files but grayscale looks better.
    Up until this very moment I've lived my life under the assumption black and white and grayscale were just different ways of expressing the same thing.

    Irfanview has a "convert to grayscale" function, but how do you go about converting an image to black and white correctly? Sorry to sidetrack the thread, but I don't know.
  18. Originally Posted by vhelp View Post

    What about converting the image to 300 DPI then ?
    I don't understand the DPI calculations and how they relate to resizing an image. So, I would need to know how to do that. If I can do that through pascal code, that would make the feature seamless for this project.
    But I would be willing to consider an outside program to do that if necessary. But I'm trying to keep all the features within the custom project app I am developing.
    So, for this scenario (once I figure out how to change an images DPI) I would copy to clipboard the image on a webpage. The text could be small or large. I could send that to a custom DPI_Resizer() routine before processing further.
    Maybe Waifu2x or NNEDI? - never tried, as I performed my last OCR like 15 years ago...
    DPI calculations are simple: 300 pixels per inch, so a full page is approximately 2500x3500 pixels.
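    In code terms the calculation is just multiplication; a throwaway Python sketch (the helper name and the US-letter page size of 8.5 x 11 inches are assumptions for the example):

    ```python
    def pixels_for(width_in, height_in, dpi):
        """Pixel dimensions needed to cover a page of the given size at a DPI."""
        return round(width_in * dpi), round(height_in * dpi)

    print(pixels_for(8.5, 11, 300))  # (2550, 3300), roughly the 2500x3500 above

    # Scale factor when upsizing a 72 DPI web grab for OCR:
    print(300 / 72)  # about 4.17, i.e. the ~400-500% resize mentioned earlier
    ```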

    Originally Posted by vhelp View Post
    "Abbyy Fine Reader"
    Is a deal breaker. I looked this software up, and rejected it, because it was too personal with the requirements for dowloading. I don't see the need to have all that info to download something. There are many software suites great and small, that don't require all this, just a download link and you're done.
    But IMHO this is the best OCR I've ever used... after training (usually 3 - 4 pages) it provides almost 100% correct results...

    Originally Posted by vhelp View Post
    Now, unless I misunderstood something here above (bottom posts), I wouldn't try to automatize brightness/contrast modifications, as any new scanned image is usually quite different from any previous or next one (unless, of course, working on a bunch of pictures that are very similar in contrast, brightness, etc.).
    I can add a feature to adjust the image in real time (with slider controls), run it (or parts of it--the image) through the ocr to fine tune a scan.

    Well, why not Avisynth? Imagemagick is fine, Gimp can be also used, some batch conversion can be performed trough Xnview, side to this sometimes dilate/erode operators can be useful.
    I could probably use avisynth, too. But I don't know how to use its image processing functions (ie, turn color into 2-color) plus, I don't believe you can use it in conjunction with the custom app unless avisynth has a command line interface. I don't think it does. I would've known about it long ago then.

    Thanks for the pointers and suggestions.
    My experience is: high DPI (300 at least, though 600 seems to be overkill) and correct preprocessing; good results with median, and sometimes https://en.wikipedia.org/wiki/Closing_%28morphology%29 is recommended.

    Binarization/thresholding is pretty simple: limit levels (autolevels?), expand everything above the threshold to white and everything below it to black, and set the chroma part to 128 so it's nullified (an Avisynth-centric perspective).
  19. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    Found this, but still no formula or code snippet for color-to-two-color conversion. I think it's a terminology problem; I am not sure what the proper search term is to plug into Google to get the hits. Anyway, still searching...

    http://www.mehdiplugins.com/english/finethreshold.htm
  20. Originally Posted by vhelp View Post
    Found this, but still no formula or code snippet for color-to-two-color conversion. I think it's a terminology problem; I am not sure what the proper search term is to plug into Google to get the hits. Anyway, still searching...

    http://www.mehdiplugins.com/english/finethreshold.htm
    It will produce grayscale (it applies a threshold, then slightly blurs the result)... consider this antialiasing.
  21. Member vhelp's Avatar
    Join Date
    Mar 2001
    Location
    New York
    Update: I finally figured out how to convert to two-color through Pascal code.
  22. .
    Hi

    Originally Posted by vhelp View Post
    What about converting the image to 300 DPI then ? I don't understand the DPI calculations and how they relate to resizing an image.
    Although there are tons of Web pages on that subject, depending on what you need to do, understanding that point in depth is not essential, or at least not all its aspects. When OCR-ing, what you want is as few recognition errors as possible, of course.
    Now, if you are programming something, there might be some misunderstanding, here. Anyway, I cannot help in that case.
    All I can remind you — in case you're not sure yet —, is that, especially if the characters are (very) tiny, you should scan at 300 dpi. Or 600 dpi, if not more, with extremely tiny characters. And that takes longer than scanning at 150, or worse, at 96 or 72 dpi.
    [ 300 dpi is also the most standard professional printing resolution. ]
    So, there is nothing special to understand nor to calculate here. If you still want or need to calculate, it's also very simple: just consider that at 72 dpi your scanned image's overall quality will be only (72 / 300 =) 24 % of what it would be at 300 dpi.

    Or the other way around: at 300 dpi, the scanned image quality is (300 / 72 =) 4.17 times (417 %) better than what it would be at 72 dpi.
    What do I mean by "better quality"?
    HERE, it's quality in terms of: what OCR programs are expecting. They all want PRECISION, not the kind of blurry "pile of mud" you get at 72 dpi (dots per inch), especially when the characters drown in their background — and that happens, for instance, when they are grey... on just a little lighter or darker grey background.
    In short, enough graphical contrast between the characters themselves and their background is essential. It's not because humans can still read, that OCR programs are "satisfied"...

    Here, I only mentioned the resolution (in dpi) at which you scan — because it's the most important choice. Do you want less recognition errors? Scan at 300 dpi. Or, if you're scanning ultra tiny characters, may be on a postage stamp or very small image, scan at 600, if not more dpi.
    One more example? Scanning 24 × 36 mm SLIDES would require such resolutions as 2400 dpi, or better: 3200 dpi. And even (sometimes) 6400 dpi, depending on the maximum resolution at which your scanner is able to work (plus a backlight in all cases, to scan slides).


    Now, you may also grab, download, etc., any image that you didn't scan by yourself. Too bad: general public scanner brands often set the default resolution of their products at 150 dpi only (because it's faster than at 300 dpi, and produces lighter files, in megabytes). And sometimes, even worse: 96 if not 72 dpi only factory settings...

    What do most end users do? As you can expect, since they don't know much, if anything (...), about resolution, they just scan without modifying any of the settings, forgetting this: the resolution requirement for acceptable SCREEN display of images is always more optimistic than for PRINTING...

    It's when they PRINT their scanned images, that they complain, disappointed. But, as long as they display them on medium and small sized screens only, even scanned at 72 to 100 dpi, those images look good enough...

    In "conclusion", when you find scanned images on the Web, they will often be at 72 dpi only (+ 72 and 96 dpi are the standard resolutions applied to pictures inserted to HTML pages...).

    "OK", can blowing their resolution to 300 dpi ("Photoshop" : Image > Image size, and replace 72 with 300) be useful?
    1. At least, OCR programs don't complain anymore.

    2. But: what actually happens, when you type 300 instead of the native 72 dpi resolution?

    "Photoshop" or else just "interpolates" pixels. To put it simple, consider that it inserts pixels, in order, on a line segment of one inch, to give you the 300 graphical dots you want.

    Where does it find all those extra dots (pixels)? By copying adjacent pixels and averaging their respective Red, Green and Blue values.

    And what do you get? A bigger image for sure, but blurry, since no algorithm is able to create, to invent anything from scratch. To reduce the blurry effect (somewhat), when increasing the resolution, select the "trick" (option) labelled "Bicubic"...

    So far, you can conclude that feeding OCR programs with small but precisely focused images, OR with big but blurry ones, makes no difference as far as improving recognition accuracy (= as few errors as possible).

    So, the only advantage, here, is: no complaint, by the OCR program...

    BUT: once you've increased the resolution, what you should do is apply a "Sharpen" filter to the whole image, and, if needed, twice or more.
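    That "Sharpen" pass is a small 3x3 convolution; sketched here in Python on a row-major grayscale image (a generic unsharp kernel for illustration, not any particular program's filter, and the function name is made up):

    ```python
    def sharpen(img):
        """3x3 sharpen kernel: center weight 5, cross neighbors -1.
        Border pixels are left untouched for simplicity."""
        h, w = len(img), len(img[0])
        out = [row[:] for row in img]  # copy so borders stay as-is
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                v = (5 * img[y][x] - img[y-1][x] - img[y+1][x]
                     - img[y][x-1] - img[y][x+1])
                out[y][x] = max(0, min(255, v))  # clamp to valid byte range
        return out

    # A lone brighter pixel gets amplified relative to its neighborhood:
    print(sharpen([[100]*3, [100, 120, 100], [100]*3])[1][1])  # 200
    ```

    Applying it "twice or more", as suggested above, is just calling the filter again on its own output.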


    Also, among many retouching manipulations, one of the most basic ones consists (of course) in increasing the contrast, between the characters and (or against) their background.

    Depending on the graphical quality (in terms of contrast, mostly) of the original document, you may as well: NOT expect the least "miracle", OR save yourself a whole lot of time!


    A little more, to understand resolution with no calculation (while, or now that we are at it...).
    Except for some rare A3 models, the majority of scanners are sized 297 mm × 215 or 216 mm (therefore, the 297 × 210 mm "A4" format is included).

    Scan... even empty, at 72 dpi, and save the resulting image. Now scan again, even empty, but the same exact "thing or emptiness" and, this time, save at 300 dpi.

    Now, compare the weights in kilo or megabytes of those two results. Why is one of the two images heavier in megabytes than the other?..

    Moreover, should you print both images, they would fill the same paper surface. So, what could the difference be between those two images?


    If you want quite obvious test results, avoid very composite images: try to scan some smooth shade of any color, for instance. You will see that the print you get from the 300 dpi image is smoother, more continuous, than the one you get from the 72 dpi one, and that, even though your printer driver "interpolates" pixels too (without letting you know).

    Therefore, here, we can say that the 300 dpi scanned result is of (far) better quality than the 72 dpi one — ALTHOUGH, again, the actual size in inches, or centimeters or millimeters, is the same.
    That's what's meant by sufficient RESOLUTION : more information, MORE PIXELS, TO DESCRIBE (to render) the same thing, even printed or displayed at same absolute size.


    Or, if you want an even simpler demo: take a photo of anything, but totally out of focus; then photograph the same subject with the same framing and lighting, but this time well focused.

    Would you ask anyone to recognize or identify, READ with no error, a small detail, may be the distant tiny license plate of a car, out of a very blurry image?..

    Only, our brain is trained to associate ideas and shapes, in order to rebuild mentally what's missing, within certain limits of course, and depending on our culture + many other parameters. Not the OCR programs!.. until you train them (some can be trained... if not most of them, nowadays).

    That's why OCR programs fuss on precision, which you can call (here...) resolution. And that's why, if you don't want to waste too much time, YOU should also fuss on resolution and good contrast, i.e. on quality.



    Originally Posted by vhelp View Post
    "Abbyy Fine Reader" iIs a deal breaker. I looked this software up, and rejected it, because it was too personal with the requirements for dowloading.
    I'm using an old 8.x version of "Abbyy Fine Reader", which happens NOT to bug the user with all that you report. Because, just like you, I can't stand those new and crazy, discouraging, lousy marketing "trends"! Same with the even older version 6.x, which came bundled with a scanner I once bought or found.

    Of course, I agree with: as long as it does a good job, i.e. whatever the tool!

    But, after testing some OCR programs, I agree even more with:

    Originally Posted by pandy View Post
    But IMHO this is the best OCR I've ever used... after training (usually 3 - 4 pages) it provides almost 100% correct results...
    Same here. Every time I use my old "Abbyy 8.x", unless I feed it with very lousy images or scans, the result is very good... In fact, better than with other OCRs I've tested (long ago).

    __________________


    Originally Posted by vhelp View Post
    I can add a feature to adjust the image in real time (with slider controls), run it (or parts of it—the image) through the ocr to fine tune a scan.
    Here, I don't only agree, I would recommend it in the 1st place! Otherwise, how do you expect to manage, to correct, brightness and contrast problems that are not only very diverse, but often bad enough to prevent any acceptable result?..


    Originally Posted by hello_hello View Post
    Up until this very moment I've lived my life under the assumption black and white and grayscale were just different ways of expressing the same thing.
    They ARE! But, as you know, depending on the WAY you express something, it can — sometimes — be understood quite differently. That's precisely how fussy OCR programs can be...


    Just zoom up to 800 or 1600 % on the edge of any character, on a B&W ("bitmap mode") scan, and then on the edge of the same character, on a greyscale scan, this time. Of course, the scanned (source) image should be the same, and scanned at the same resolution (in dpi).


    On the B&W image, there are of course no shades of grey at the junction between the character itself and the background, usually white. The edges of the character stop abruptly (no transition).

    What difference does that make when it comes to recognizing automatically (= using an algorithm) all those characters?

    When they are big enough, no difference. So you should rather scan in B&W mode: it's faster, and the resulting images are much lighter in kilobytes; just save them to TIF(F) format, LZW compressed (avoid "zip" and other compression schemes that TIF(F) provides).


    When the characters are tiny, in B&W mode their edges appear very pixelated. In greyscale mode also, but much less, since those edges are somewhat blurred, ranging more or less smoothly from black (or very dark) to grey, and to white, to the whiteness of the background.


    Now, in B&W mode, those edges being harshly pixelated, if the characters are tiny the SHAPES of the edges interfere too much with the overall shapes of the characters themselves. In a word, the very standard shapes OCR programs are expecting for each character are no longer accurate enough. That results in more recognition errors.


    Predicting recognition difficulties is absolutely impossible without a concrete example at hand. One thing is sure, though: if OCR-ing B&W images yields too many errors, try to scan the same images in greyscale mode.


    If the B&W very tiny characters images come from the Web and cause many OCR errors, convert them to greyscale (or RGB) and apply a VERY SLIGHT blur effect to them. That will round their curves back.

    It won't work every time but, if you find the right setting (amount of blur effect) to apply, say, to a whole bunch of images, you will save a lot of time.

    I know it sounds like a paradox, after advising (earlier) to use well-focused, rather crisp images; but, again, I'm talking about very tiny chars. right here.

    In fact, scanning and/or OCR-ing many pages being tedious, the more time you spend choosing the best mode (B&W or greyscale, or even RGB in a lot of cases) + fine tuning the brightness, contrast and sharpness or blurriness, the better.



    .


