Re: Graphic to Text Interpretation
There is a lot of documentation about OCR
(optical character recognition) on the web. However, most of it is heavy on maths and may be a bit hard for an average level (like mine).
Given the small number of glyphs you have to decode (9), their location (centered in the cell) and their orientation (horizontally aligned), a simpler approach may interest you: Bayes' theorem
(or fuzzy logic).
The principle is to obtain a (high enough) probability that your glyph is, let's say, a '5' rather than a '9'.
Bayes' theorem
tells you how probable
an event is in a given context, based on statistics previously gathered for that event in the same context.
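As a toy sketch of the idea (all the numbers below are made up for illustration, not measured from real glyphs): suppose a training pass told you how often the feature "vertically symmetric" shows up for each digit. Bayes' theorem then turns those likelihoods into a posterior probability for each digit once the feature is observed:

```python
# Hypothetical likelihoods P(symmetric | digit), as if measured on
# a training set of scanned cells (made-up numbers for illustration).
symmetric_given_digit = {'0': 0.9, '8': 0.85, '3': 0.2, '5': 0.1}

# Assume all digits are equally likely a priori.
prior = 1.0 / len(symmetric_given_digit)

# Bayes' theorem: P(d | symmetric) = P(symmetric | d) * P(d) / P(symmetric)
evidence = sum(p * prior for p in symmetric_given_digit.values())
posterior = {d: p * prior / evidence
             for d, p in symmetric_given_digit.items()}

# The most probable digit given that the glyph looks symmetric:
best = max(posterior, key=posterior.get)
```

With these made-up numbers the winner is '0', closely followed by '8', which matches the intuition that symmetry points to round digits.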
It is used, for instance, in cameras that try to recognize the type of photograph (portrait, landscape or macro). Given statistics (metrics on focal length, light distribution, contrast, etc.) from hundreds of landscape, portrait and macro pictures, the camera feeds the Bayesian algorithm with the current metrics and obtains the most probable type for the current photo. The beauty of this algorithm is how simple-minded it is (a 'landscape' means nothing to it) and yet how accurate its predictions are.
The same algorithm is used in anti-spam software. It does not understand the email content either, but it can predict (and learn from the user) what spam is.
I hope this can inspire you:
You could compute some metrics on the values scanned for a sudoku cell (say, an array of 12x12 spots with values from 0 to 255), like average value, symmetry, center of gravity, etc., and use the results to find the most probable value.
For example: if there is a vertical symmetry, it's probably a '0' or an '8';
if most of the dots are on the right, it's probably a '3' or a '9'.
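To make the metric idea concrete, here is a minimal sketch: two of the metrics mentioned above (right-half ink fraction and vertical symmetry) computed on a 12x12 cell, then compared against hypothetical per-digit reference values (the reference table is invented for illustration; in practice you would measure it from known samples):

```python
def features(cell):
    """Two simple metrics for a 12x12 grid of intensities (0..255)."""
    total = sum(sum(row) for row in cell) or 1
    # Fraction of "ink" falling in the right half of the cell.
    right = sum(v for row in cell for v in row[6:]) / total
    # Vertical-axis symmetry: 1.0 means perfectly mirror-symmetric.
    diff = sum(abs(row[c] - row[11 - c]) for row in cell for c in range(6))
    symmetry = 1.0 - diff / total
    return right, symmetry

# Hypothetical reference metrics (right, symmetry) per digit,
# standing in for statistics gathered on real scans.
reference = {'0': (0.50, 0.95), '8': (0.50, 0.90),
             '3': (0.65, 0.40), '9': (0.60, 0.55)}

def most_probable(cell):
    """Pick the digit whose reference metrics are nearest to the cell's."""
    r, s = features(cell)
    return min(reference,
               key=lambda d: (reference[d][0] - r) ** 2
                           + (reference[d][1] - s) ** 2)
```

A perfectly symmetric cell lands on '0', and a cell with all its ink on the right side lands on '3', exactly the intuitions above. A real classifier would of course use more metrics and real statistics, but the skeleton stays this small.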
Anyway, you have a very interesting challenge.
Keep us informed of your solution (and problems)!
Visit my project RainBot v0.11 on SourceForge: a 6-wheel robot featuring A* path finding, motor & sensor emulation, small-font & sorted-list libraries, using Xander's drivers for the HT Compass, and documented with doxygen.