Title: ICDAR 2019 CROHME + TFD: Competition on Recognition of Handwritten Mathematical Expressions and Typeset Formula Detection
We summarize the tasks, protocol, and outcomes for the 6th Competition on Recognition of Handwritten Mathematical Expressions (CROHME), which includes a new task on formula detection in document images (+ TFD). For CROHME + TFD 2019, participants chose between two tasks for recognizing handwritten formulas from 1) online stroke data, or 2) images generated from the handwritten strokes. To compare LaTeX strings with the labeled directed trees over strokes (label graphs) used in previous CROHMEs, we convert both LaTeX strings and stroke-based label graphs to label graphs defined over symbols (symbol-level label graphs, or symLG). Thirty-three (33) participants registered for the competition, with nineteen (19) teams submitting results. The strongest formula recognition results were produced by the USTC-iFLYTEK research team, for both stroke-based (81%) and image-based (77%) input. For the new typeset formula detection task, the Samsung R&D Institute Ukraine (Team 2) obtained a very strong F-score (93%). System performance has improved since the last CROHME; still, the competition results suggest that recognition of handwritten formulas remains a difficult structural pattern recognition task.
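To make the symLG idea above concrete, here is a minimal, hypothetical Python sketch (not the competition's actual evaluation code): a symbol-level label graph is taken to be a set of labeled symbol nodes plus labeled directed edges for spatial relations, and a recognition result counts as fully correct only when both agree with the ground truth. The data layout and relation names ("Sup", "Right") are illustrative assumptions.

    # Minimal sketch of comparing two symbol-level label graphs (symLG).
    # Hypothetical layout: nodes map symbol ids to class labels;
    # edges map (parent_id, child_id) pairs to spatial relation labels.

    def symlg_matches(ref_nodes, ref_edges, hyp_nodes, hyp_edges):
        """True only if symbols, classes, and relations all agree."""
        return ref_nodes == hyp_nodes and ref_edges == hyp_edges

    # Example: the formula "x^2" as a two-symbol graph.
    ref_nodes = {0: "x", 1: "2"}
    ref_edges = {(0, 1): "Sup"}      # '2' is superscript of 'x'

    hyp_nodes = {0: "x", 1: "2"}
    hyp_edges = {(0, 1): "Right"}    # relation error: parsed as adjacent

    print(symlg_matches(ref_nodes, ref_edges, hyp_nodes, hyp_edges))  # False

Because both LaTeX output and stroke-level output reduce to this one representation, systems for either input format can be scored on the same node and edge labels.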
Award ID(s):
1717997
PAR ID:
10124327
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the International Conference on Document Analysis and Recognition
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. To fill a gap in online educational tools, we are working to support search in lecture videos using formulas from lecture notes and vice versa. We use an existing system to convert single-shot lecture videos to keyframe images that capture whiteboard contents along with the times they appear. We train classifiers for handwritten symbols using the CROHME dataset, and for LaTeX symbols using generated images. Symbols detected in video keyframes and LaTeX formula images are indexed using Line-of-Sight graphs. For search, we look up pairs of symbols that can 'see' each other, and connected pairs are merged to identify the largest match within each indexed image. We rank matches using symbol class probabilities and angles between symbol pairs. We demonstrate how our method effectively locates formulas between typeset and handwritten images using a set of linear algebra lectures. By combining our search engine (Tangent-V) with temporal keyframe metadata, we are able to navigate to where a query formula in LaTeX is first handwritten in a lecture video. Our system is available as open source. For other domains, only the OCR modules require updating.
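A hedged sketch of the pair-lookup step described in the abstract above, in the spirit of Tangent-V but not its actual code: each image is indexed by the symbol-class pairs whose symbols can "see" each other in its Line-of-Sight graph, and candidates are scored by how many query pairs they share. All identifiers are hypothetical; the real system also merges connected pairs and weighs classifier probabilities and inter-symbol angles.

    from collections import defaultdict

    def build_index(images):
        """images: {image_id: iterable of (class_a, class_b) LOS pairs}."""
        index = defaultdict(set)
        for image_id, pairs in images.items():
            for a, b in pairs:
                index[frozenset((a, b))].add(image_id)
        return index

    def search(index, query_pairs):
        """Rank images by the number of query LOS pairs they contain."""
        scores = defaultdict(int)
        for a, b in query_pairs:
            for image_id in index.get(frozenset((a, b)), ()):
                scores[image_id] += 1
        return sorted(scores.items(), key=lambda kv: -kv[1])

    # Toy example: find keyframes sharing pairs with the query "x + y".
    index = build_index({
        "keyframe_07": [("x", "+"), ("+", "y"), ("x", "y")],
        "keyframe_12": [("a", "="), ("=", "b")],
    })
    print(search(index, [("x", "+"), ("+", "y")]))  # keyframe_07 ranks first

Using unordered pairs as index keys makes the lookup robust to writing order; the geometric re-ranking then restores spatial discrimination.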
  2. We present a new visual parsing method based on convolutional neural networks for handwritten mathematical formulas. The Query-Driven Global Graph Attention (QD-GGA) parsing model employs multi-task learning, and uses a single feature representation for locating, classifying, and relating symbols. First, a Line-Of-Sight (LOS) graph is computed over the handwritten strokes in a formula. Second, class distributions for LOS nodes and edges are obtained using query-specific feature filters (i.e., attention) in a single feed-forward pass. Finally, a Maximum Spanning Tree (MST) is extracted from the weighted graph. Our preliminary results show that this is a promising new approach for visual parsing of handwritten formulas. Our data and source code are publicly available.
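The final QD-GGA step above, extracting a Maximum Spanning Tree from the edge-weighted graph, can be illustrated with a short self-contained sketch. Kruskal's algorithm with edges sorted by descending weight is one standard way to do this; it is an assumption here, not necessarily the paper's exact implementation, and the weights below stand in for the model's relation scores.

    def maximum_spanning_tree(num_nodes, weighted_edges):
        """weighted_edges: list of (weight, u, v). Returns chosen (u, v) edges."""
        parent = list(range(num_nodes))

        def find(x):                      # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree = []
        for weight, u, v in sorted(weighted_edges, reverse=True):
            ru, rv = find(u), find(v)
            if ru != rv:                  # adding this edge keeps the graph acyclic
                parent[ru] = rv
                tree.append((u, v))
        return tree

    # Toy LOS graph over three symbols with relation scores as weights.
    edges = [(0.9, 0, 1), (0.2, 0, 2), (0.7, 1, 2)]
    print(maximum_spanning_tree(3, edges))  # [(0, 1), (1, 2)]

The MST constraint is what turns independent per-edge relation scores into a single coherent tree structure for the formula.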
  3. Del Bimbo, Alberto; Cucchiara, Rita; Sclaroff, Stan; Farinella, Giovanni M; Mei, Tao; Bertini, Marc; Escalante, Hugo J; Vezzani, Roberto. (Ed.)
    This work summarizes the results of the second Competition on Harvesting Raw Tables from Infographics (ICPR 2020 CHART-Infographics). Chart recognition is difficult and multifaceted, so for this competition we divide the process into the following tasks: Chart Image Classification (Task 1), Text Detection and Recognition (Task 2), Text Role Classification (Task 3), Axis Analysis (Task 4), Legend Analysis (Task 5), Plot Element Detection and Classification (Task 6.a), Data Extraction (Task 6.b), and End-to-End Data Extraction (Task 7). We provided two datasets for training and evaluating participant submissions. The first consists of synthetic charts (Adobe Synth) generated from real data sources using matplotlib. The second consists of manually annotated charts extracted from the Open Access section of PubMed Central (UB PMC). More than 25 teams registered, of which 7 submitted results for different tasks of the competition. While results on synthetic data are near perfect at times, the same models still have room to improve on data extraction from real charts. The data, annotation tools, and evaluation scripts have been publicly released for academic use.
  4. This work summarizes the results of the first Competition on Harvesting Raw Tables from Infographics (ICDAR 2019 CHART-Infographics). For the purpose of this competition, the complex process of automatic chart recognition is divided into multiple tasks: Chart Image Classification (Task 1), Text Detection and Recognition (Task 2), Text Role Classification (Task 3), Axis Analysis (Task 4), Legend Analysis (Task 5), Plot Element Detection and Classification (Task 6.a), Data Extraction (Task 6.b), and End-to-End Data Extraction (Task 7). We provided a large synthetic training set and evaluated submitted systems using newly proposed metrics on both synthetic charts and manually annotated real charts taken from the scientific literature. A total of 8 groups registered for the competition, of which 5 submitted results for Tasks 1-5. The results show that some tasks can be performed highly accurately on synthetic data, but no system performed as well on real-world charts. The data, annotation tools, and evaluation scripts have been publicly released for academic use.
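Several tasks above (typeset formula detection in the main record, text detection in the chart competitions) are scored with precision/recall-style measures over predicted regions. The sketch below shows the generic shape of an IoU-thresholded detection F-score in Python; it is only the common pattern, not the newly proposed metrics these competitions actually used, and the greedy matching and 0.5 threshold are assumptions.

    def iou(a, b):
        """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union else 0.0

    def detection_fscore(predicted, ground_truth, threshold=0.5):
        """Greedy one-to-one matching of boxes at an IoU threshold."""
        unmatched = list(ground_truth)
        tp = 0
        for box in predicted:
            hit = next((g for g in unmatched if iou(box, g) >= threshold), None)
            if hit is not None:
                unmatched.remove(hit)   # each ground-truth box matches once
                tp += 1
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(ground_truth) if ground_truth else 0.0
        return 2 * precision * recall / (precision + recall) if tp else 0.0

    print(detection_fscore([(0, 0, 10, 10)], [(1, 1, 10, 10)]))  # 1.0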
  5. Visual statistical learning (VSL), the unsupervised learning of statistical contingencies across time and space, may play a key role in efficient and predictive encoding of the perceptual world. How VSL capabilities vary as a function of ongoing task demands is still poorly understood. VSL is modulated by selective attention and faces interference from some secondary tasks, but there is little evidence that the types of contingencies learned in VSL are sensitive to task demands. We found a powerful effect of task on what is learned in VSL. Participants first completed a visual familiarization task requiring judgments of face gender (female/male) or scene location (interior/exterior). Statistical regularities were embedded between stimulus pairs. During a surprise recognition phase, participants showed less recognition for pairs that had required a change in response key (e.g., female followed by male) or task (e.g., female followed by indoor) during familiarization. When familiarization required detection of "flicker" or "jiggle" events unrelated to image content, there was weaker, but uniform, VSL across pair types. These results suggest that simple task manipulations play a strong role in modulating the distribution of learning over different pair combinations. Such variations may arise from task and response conflict or because the manner in which images are processed is altered. 