Notice of retraction
Vol. 32, No. 8(2), S&M2292

Print: ISSN 0914-4935
Online: ISSN 2435-0869
Sensors and Materials is an international peer-reviewed open access journal that provides a forum for researchers working in multidisciplinary fields of sensing technology.
Sensors and Materials is covered by Science Citation Index Expanded (Clarivate Analytics), Scopus (Elsevier), and other databases.


Publisher
 MYU K.K.
 Sensors and Materials
 1-23-3-303 Sendagi,
 Bunkyo-ku, Tokyo 113-0022, Japan
 Tel: 81-3-3827-8549
 Fax: 81-3-3827-8547




Published in advance: January 25, 2021

Robust Recognition of Chinese Text from Cellphone-acquired Low-quality Identity Card Images Using Convolutional Recurrent Neural Network

Jianmei Wang, Ruize Wu, and Shaoming Zhang

(Received July 23, 2020; Accepted January 6, 2021)

Keywords: Chinese text recognition, synthetic data, convolutional recurrent neural network, conditional generative adversarial network, DenseNet

Automatically reading text from an identity (ID) card image has a wide range of social uses. In this paper, we propose a novel method for recognizing Chinese text in ID card images taken by cellphone cameras. The paper has two main contributions: (1) A synthetic data engine based on a conditional generative adversarial network is designed to generate millions of synthetic ID card text line images, which not only retain the inherent template pattern of ID card images but also preserve the diversity of the synthetic data. (2) An improved convolutional recurrent neural network (CRNN) is presented to increase Chinese text recognition accuracy, in which a DenseNet architecture replaces VGGNet to extract more sophisticated spatial features. The proposed method is evaluated on more than 7000 real ID card text line images. The experimental results demonstrate that the improved CRNN model, trained only on the synthetic dataset, increases the recognition accuracy of Chinese text in cellphone-acquired low-quality images. Specifically, compared with the original CRNN, the average character recognition accuracy is increased from 96.87 to 98.57% and the line recognition accuracy is increased from 65.92 to 90.10%.

Corresponding author: Shaoming Zhang
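
For readers who want a concrete picture of the recognition network described in the abstract, the sketch below shows a CRNN-style model in which a DenseNet feature extractor replaces the VGG-style convolutional backbone, followed by a bidirectional LSTM and a per-time-step classifier suitable for CTC decoding. This is a minimal illustration assuming a PyTorch/torchvision setup; the densenet121 backbone, layer sizes, class count, and variable names are assumptions for illustration, not the authors' exact implementation, and the synthetic data engine is not covered here.

# Minimal sketch of a CRNN with a DenseNet feature extractor (assumed PyTorch setup).
import torch
import torch.nn as nn
from torchvision.models import densenet121

class DenseNetCRNN(nn.Module):
    def __init__(self, num_classes: int, hidden_size: int = 256):
        super().__init__()
        # Convolutional stage: DenseNet features stand in for the VGG-style
        # backbone of the original CRNN to extract richer spatial features.
        self.cnn = densenet121(weights=None).features      # (N, 1024, H', W')
        # Collapse the height dimension so each remaining column is one time step.
        self.pool = nn.AdaptiveAvgPool2d((1, None))
        # Recurrent stage: a bidirectional LSTM models the left-to-right
        # character sequence along the image width.
        self.rnn = nn.LSTM(1024, hidden_size, num_layers=2,
                           bidirectional=True, batch_first=True)
        # Transcription stage: per-time-step class scores for CTC decoding
        # (num_classes is assumed to include the CTC blank symbol).
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(images)                 # (N, 1024, H', W')
        feats = self.pool(feats).squeeze(2)      # (N, 1024, W')
        feats = feats.permute(0, 2, 1)           # (N, W', 1024) = (batch, time, features)
        seq, _ = self.rnn(feats)                 # (N, W', 2 * hidden_size)
        return self.fc(seq)                      # logits for nn.CTCLoss

# Usage example with a dummy batch of two 32 x 256 text line crops;
# the character set size (5000) is a placeholder.
model = DenseNetCRNN(num_classes=5000)
logits = model(torch.randn(2, 3, 32, 256))
print(logits.shape)                              # (2, time_steps, 5000)

The split mirrors the standard CRNN pipeline: only the convolutional stage changes, while the recurrent and transcription stages, and CTC-based training, remain as in the original CRNN.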




Forthcoming Regular Issues


Forthcoming Special Issues

Special Issue on Micro-nano Biomedical Sensors, Devices, and Materials
Guest editors: Tetsuji Dohi (Chuo University) and Seiichi Takamatsu (The University of Tokyo)


Special Issue on Artificial Intelligence in Sensing Technologies and Systems
Guest editor: Prof. Lin Lin (Dalian University of Technology)


Special Issue on Novel Materials and Sensing Technologies on Electronic and Mechanical Devices Part 3
Guest editors: Teen-Hang Meen (National Formosa University), Wenbing Zhao (Cleveland State University), and Hsien-Wei Tseng (Yango University)


7th Special Issue on the Workshop of Next-generation Front-edge Optical Science Research
Guest editor: Takayuki Yanagida (Nara Institute of Science and Technology)


Special Issue on Sensing and Data Analysis Technologies for Living Environment, Health Care, Production Management and Engineering/Science Education Applications (Selected Papers from ICSEVEN 2020)
Guest editors: Chien-Jung Huang (National University of Kaohsiung), Rey-Chue Hwang (I-Shou University), Ja-Hao Chen (Feng Chia University), and Ba-Son Nguyen (Research Center for Applied Sciences)
Call for papers


Special Issue on Materials, Devices, Circuits, and Analytical Methods for Various Sensors (Selected Papers from ICSEVEN 2020)
Guest editors: Chien-Jung Huang (National University of Kaohsiung), Ja-Hao Chen (Feng Chia University), and Yu-Ju Lin (Tunghai University)
Conference website
Call for papers


Copyright(C) MYU K.K. All Rights Reserved.