There are 16 papers published in this subject since this site started.
1. HCADecoder: A Hybrid CTC-Attention Decoder for Chinese Text Recognition
CAI Si-Qi, XUE Wen-Yuan, LI Qing-Yong
Computer Science and Technology, 17 March 2021
Abstract: Text recognition has attracted much attention and achieved exciting results on several commonly used public English datasets in recent years. However, most of these well-established methods, such as connectionist temporal classification (CTC)-based methods and attention-based methods, pay less attention to the challenges of Chinese scenes, especially long text sequences. In this paper, we exploit the characteristics of the Chinese word frequency distribution and propose a hybrid CTC-attention decoder (HCADecoder) supervised with bigram mixture labels for Chinese text recognition. Specifically, we first add high-frequency bigram subwords to the original unigram labels to construct the mixture bigram label, which shortens the decoding length. Then, in the decoding stage, the CTC module outputs a preliminary result in which confused predictions are replaced with bigram subwords. The attention module utilizes the preliminary result and outputs the final result. Experimental results on four Chinese datasets demonstrate the effectiveness of the proposed method for Chinese text recognition, especially for long texts. Code will be made publicly available.
To cite this article: CAI Si-Qi, XUE Wen-Yuan, LI Qing-Yong. HCADecoder: A Hybrid CTC-Attention Decoder for Chinese Text Recognition[OL]. [17 March 2021] http://en.paper.edu.cn/en_releasepaper/content/4753921
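The mixture-label construction described in the abstract can be sketched as follows; the greedy left-to-right merging and the tiny bigram vocabulary are illustrative assumptions, not the authors' exact procedure.

```python
def build_mixture_label(chars, bigram_vocab):
    """Greedily replace known high-frequency bigrams with single subword
    tokens, shortening the target sequence the decoder must emit."""
    out, i = [], 0
    while i < len(chars):
        pair = "".join(chars[i:i + 2])
        if i + 1 < len(chars) and pair in bigram_vocab:
            out.append(pair)      # one bigram subword token
            i += 2
        else:
            out.append(chars[i])  # fall back to the unigram label
            i += 1
    return out

# A 6-character string decodes to 4 tokens under a 2-bigram vocabulary.
label = build_mixture_label(list("北京交通大学"), {"北京", "大学"})
```

Shorter target sequences are exactly what helps CTC on long Chinese lines, since fewer decoding steps reduce alignment confusion.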
2. Semi-supervised Non-negative Matrix Factorization Based on Semi-tensor Product
WANG Lin, LI Li-Xiang, PENG Hai-Peng, YANG Yi-Xian
Computer Science and Technology, 30 December 2019
Abstract: Non-negative matrix factorization (NMF) is an effective feature extraction method. Traditional NMF requires that the number of columns in the basis matrix equal the number of rows in the coefficient matrix, which greatly limits its engineering applications. Furthermore, some data in practical applications may carry label information. Novel methods are therefore needed that break this limitation while taking label information into account. Based on this idea, this paper proposes semi-supervised non-negative matrix factorization based on the semi-tensor product (TSNMF). The proposed method not only makes full use of the known label information, but also breaks through the dimension-matching constraint of traditional NMF, which saves storage space and improves the running speed of the method. Moreover, we evaluate the classification performance of TSNMF through numerical experiments on the ORL and JAFFE face databases. The experimental results show that the proposed TSNMF method is superior to semi-supervised non-negative matrix factorization (SNMF).
To cite this article: WANG Lin, LI Li-Xiang, PENG Hai-Peng, et al. Semi-supervised Non-negative Matrix Factorization Based on Semi-tensor Product[OL]. [30 December 2019] http://en.paper.edu.cn/en_releasepaper/content/4750401
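For contrast, the dimension-matching constraint that TSNMF relaxes is visible in plain Lee–Seung multiplicative-update NMF (a standard baseline, not the paper's semi-tensor-product method): the basis W must be m × r and the coefficients H must be r × n.

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Multiplicative-update NMF: V (m x n) ~= W (m x r) @ H (r x n).
    Note the matched inner dimension r, the constraint TSNMF removes."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # Lee-Seung updates keep
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # W, H non-negative
    return W, H

V = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```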
3. Adaptive Parameters Softmax Loss for Deep Face Recognition
ZHANG Jian-Wei, GUO Qiu-Shan, DONG Yuan, XIONG Feng-Ye, BAI Hong-Liang
Computer Science and Technology, 17 September 2019
Abstract: Face recognition has achieved great success thanks to the development of deep convolutional neural networks (DCNNs). Loss functions with an angular margin have been proposed to supervise DCNNs for better feature representations. However, these methods are sensitive to hyperparameter settings. In this paper, we propose an Adaptive Parameters Softmax Loss function with different scale parameters for target and non-target logits and a dynamically adaptive margin parameter. Extensive experiments on MegaFace and IJB-C demonstrate the effectiveness of our method.
To cite this article: ZHANG Jian-Wei, GUO Qiu-Shan, DONG Yuan, et al. Adaptive Parameters Softmax Loss for Deep Face Recognition[OL]. [17 September 2019] http://en.paper.edu.cn/en_releasepaper/content/4749649
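The kind of scaled-margin logits this line of work builds on can be sketched ArcFace-style below; the split s_target/s_other scales only hint at the paper's contribution, and the fixed values of s and m here are illustrative assumptions (the paper adapts them dynamically).

```python
import numpy as np

def margin_softmax_logits(cos_theta, labels, s_target=64.0, s_other=64.0, m=0.5):
    """Apply an additive angular margin to each sample's target logit,
    then scale; non-target logits are only scaled.
    cos_theta: (batch, classes) cosines between embeddings and class
    weights; labels: (batch,) target class indices."""
    logits = s_other * cos_theta
    rows = np.arange(len(labels))
    theta = np.arccos(np.clip(cos_theta[rows, labels], -1.0, 1.0))
    logits[rows, labels] = s_target * np.cos(theta + m)  # penalise target
    return logits
```

Because cos(theta + m) < cos(theta), the target logit is pushed down, forcing tighter angular clusters per identity.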
4. Accelerating Large-scale Convolutional Neural Networks Based on Convolution in Blocks
ZHANG Qiang-Qiang, WANG Chun-Lu, LIU Zhen-Yu
Computer Science and Technology, 5 April 2016
Abstract: Machine learning has seen an upsurge in recent years, and deep learning is an important part of it. The convolutional neural network is a key deep learning method, but its large number of training parameters and many layers make training very slow. By unrolling the input of a convolution layer into matrices and using BLAS libraries, the training time can be shortened through matrix multiplication. However, another problem occurs when the input image is too large: during the rearrangement of the input data, data rearranged earlier may be crowded out of the cache by data arriving later. Convolution then suffers because the cache hit rate drops. This paper presents a method that accelerates training by dividing the input data into blocks during convolution, and studies the effect of the relationship between block size and cache capacity. Experiments show that the method of convolution in blocks is simple and feasible, improving convolution efficiency by about 50% at best.
To cite this article: ZHANG Qiang-Qiang, WANG Chun-Lu, LIU Zhen-Yu. Accelerating Large-scale Convolutional Neural Networks Based on Convolution in Blocks[OL]. [5 April 2016] http://en.paper.edu.cn/en_releasepaper/content/4682735
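The unroll-then-multiply scheme the abstract describes, plus a row-strip blocking wrapper, can be sketched as follows; the single-channel "valid" setting and the strip size are simplifying assumptions, not the paper's exact blocking scheme.

```python
import numpy as np

def im2col(x, k):
    """Unroll k x k patches of a 2-D input into columns (valid mode)."""
    H, W = x.shape
    cols = np.empty((k * k, (H - k + 1) * (W - k + 1)))
    idx = 0
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            cols[:, idx] = x[i:i + k, j:j + k].ravel()
            idx += 1
    return cols

def conv2d_gemm(x, kern):
    """Convolution as one matrix multiplication (BLAS-friendly)."""
    k = kern.shape[0]
    H, W = x.shape
    out = kern.ravel() @ im2col(x, k)
    return out.reshape(H - k + 1, W - k + 1)

def conv2d_blocked(x, kern, block_rows=32):
    """Process overlapping row strips so each unrolled block stays small
    enough to be cache-resident (the 'convolution in blocks' idea)."""
    k = kern.shape[0]
    strips = []
    for r0 in range(0, x.shape[0] - k + 1, block_rows):
        strip = x[r0:r0 + block_rows + k - 1, :]  # k-1 rows of overlap
        strips.append(conv2d_gemm(strip, kern))
    return np.vstack(strips)
```

The blocked version computes the same result; only the working-set size per GEMM call changes.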
5. Player Identification Based on Jersey Number Recognition In Sports Video
Zhang Nannan, Zhang Honggang, Li Siyuan, Guo Jie
Computer Science and Technology, 9 September 2014
Abstract: Detecting the identities of players is highly valuable for sports video content analysis. Since a player's jersey number is unique during a game, it is feasible to recognize the jersey number for identification in sports videos. The jersey number is scene text, which is difficult to localize and recognize. To solve this problem, we present a method for jersey region localization that uses a mixed color model as the rough detector and an SVM classifier as the refined detector. Once the jersey region is determined, a jersey number recognition algorithm with a KNN classifier is applied to extract and recognize the number. In addition, we develop an interactive system that users can work with to improve the recognition rate. Experimental results are presented on various kinds of sports videos, such as football and basketball; the detection rate is over 85% and the recognition rate over 80%, even with complicated backgrounds.
To cite this article: Zhang Nannan, Zhang Honggang, Li Siyuan, et al. Player Identification Based on Jersey Number Recognition In Sports Video[OL]. [9 September 2014] http://en.paper.edu.cn/en_releasepaper/content/4606751
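The KNN recognition step can be sketched as a majority vote over nearest digit feature vectors; the 2-D features and Euclidean metric below are generic assumptions, not the paper's actual descriptors.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify a digit feature vector by majority vote among its k
    nearest training samples under Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]        # labels of the k nearest
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]            # most common label wins
```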
6. Nonparametric Kernel-Based Distribution Modeling of Bioelectrical Impedance Features for Breast Tissue Classification
LU Meng, WU Yunfeng
Computer Science and Technology, 28 April 2013
Abstract: Classification of breast tissues helps assess early-stage pathological conditions in the breast as it becomes cancerous. In this paper, we present a nonparametric modeling method to estimate the bivariate probability densities of features for normal and pathological breast tissues. Two representative bioelectrical features were first selected for classification using the Kruskal-Wallis test and correlation analysis. The bivariate feature density was estimated using Gaussian kernels, and nonlinear classification was performed using the maximal posterior probability method. The results showed that the kernel-based maximal posterior probability (KMPP) classification achieved an accuracy of 84.91% and an area under the receiver operating characteristic (ROC) curve of 0.9307. The diagnostic performance and nonlinear decision boundary of the proposed KMPP method were better than those of Fisher's linear discriminant analysis (accuracy: 83.02%, area under ROC curve: 0.8789).
To cite this article: LU Meng, WU Yunfeng. Nonparametric Kernel-Based Distribution Modeling of Bioelectrical Impedance Features for Breast Tissue Classification[OL]. [28 April 2013] http://en.paper.edu.cn/en_releasepaper/content/4540663
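The kernel-density posterior rule can be sketched in a few lines; the fixed bandwidth and the priors passed in are illustrative assumptions (the paper's bandwidth selection is not reproduced here).

```python
import numpy as np

def kde(points, x, h=0.5):
    """Bivariate Gaussian-kernel density estimate at x from 2-D samples."""
    d2 = np.sum((points - x) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * h * h))) / (2 * np.pi * h * h)

def kmpp_classify(class_samples, priors, x):
    """Assign x to the class with maximal posterior, i.e. the largest
    prior * estimated class-conditional density."""
    scores = [p * kde(s, x) for s, p in zip(class_samples, priors)]
    return int(np.argmax(scores))
```

Because each class density is estimated separately, the induced decision boundary is nonlinear, which is the advantage over Fisher's linear discriminant noted in the abstract.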
7. CLOF: A Noise Removal Algorithm Based on Combined Local Outlier Factors
REN Yi-li, WU Jun-jie, XIONG Hai-tao, LIU Chen
Computer Science and Technology, 22 January 2013
Abstract: Real-world data is never perfect and often suffers from noise that may affect interpretations of the data, the models created from it, and the decisions made based on it. A common solution for handling noise is to employ outlier detection techniques. LOF is a well-known and widely used algorithm for outlier detection based on local densities of data. However, it does not perform well at removing class noise, since it does not take class label information into consideration. In this paper, we propose a new noise removal algorithm based on Combined Local Outlier Factors: CLOF. Specifically, CLOF first defines three local outlier factors, i.e., lof_a, lof_1 and lof_0, and eliminates attribute noise using lof_a. Then, CLOF finds and corrects the labels of class noise by simultaneously using the three local outlier factors. Experimental results on artificial and real-world UCI data sets demonstrate that CLOF can effectively identify class noise and attribute noise so as to improve the classification performance of various classifiers, especially for data sets with severe class overlap.
To cite this article: REN Yi-li, WU Jun-jie, XIONG Hai-tao, et al. CLOF: A Noise Removal Algorithm Based on Combined Local Outlier Factors[J].
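CLOF's factors build on the standard local outlier factor; a minimal plain-LOF sketch is below (class labels are ignored here, so this is the density-based building block, not CLOF itself).

```python
import numpy as np

def lof(X, k=3):
    """Plain local outlier factor score for each row of X. Scores near 1
    mean a point is as dense as its neighbours; large scores flag outliers."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)                   # ignore self-distance
    knn = np.argsort(D, axis=1)[:, :k]            # indices of k nearest
    dist_knn = np.take_along_axis(D, knn, axis=1) # sorted neighbour dists
    kdist = dist_knn[:, -1]                       # k-distance of each point
    reach = np.maximum(dist_knn, kdist[knn])      # reachability distances
    lrd = 1.0 / reach.mean(axis=1)                # local reachability density
    return lrd[knn].mean(axis=1) / lrd            # LOF score per point
```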
8. Blockwise Coordinate Descent Schemes for Effective Dictionary Learning
Liu Baodi, Zhang Yujin
Computer Science and Technology, 29 November 2012
Abstract: Sparse coding, usually viewed as a method for rearranging the structure of the original data to make its energy compact over a non-orthogonal, overcomplete dictionary, is widely used in signal processing, pattern recognition, machine learning, statistics, and neuroscience. Unfortunately, finding sparse codes and learning bases remain computationally difficult, and the performance of sparse coding is sensitive to the learned dictionary. In this paper, we propose a blockwise coordinate descent algorithm with guaranteed convergence to solve these two problems under a unified scheme. The variables involved in the optimization problems are partitioned into several suitable blocks with convexity preserved, making it possible to perform an exact block coordinate descent. For each separable subproblem, based on the convexity and monotonicity of the parabolic function, a closed-form solution is obtained. Thus the algorithm is simple, efficient, and effective. Experimental results show that our algorithm not only significantly accelerates the learning process, but also greatly improves the performance of real applications.
To cite this article: Liu Baodi, Zhang Yujin. Blockwise Coordinate Descent Schemes for Effective Dictionary Learning[OL]. [29 November 2012] http://en.paper.edu.cn/en_releasepaper/content/4498604
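One common way to realize blockwise updates with closed-form subproblems is to cycle over atoms, soft-thresholding each code row and re-solving each atom in closed form; the initialization, penalty weight, and block choice below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding: closed-form minimizer of a quadratic + l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def dict_learn_bcd(X, n_atoms, lam=0.1, iters=30):
    """Blockwise coordinate descent for min ||X - D S||^2 + lam*||S||_1:
    each code row and each unit-norm atom is one block with a
    closed-form update."""
    rng = np.random.default_rng(0)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
    S = np.zeros((n_atoms, n))
    for _ in range(iters):
        for k in range(n_atoms):
            # residual with atom k's contribution removed
            R = X - D @ S + np.outer(D[:, k], S[k])
            S[k] = soft(D[:, k] @ R, lam)    # closed-form code-row update
            a = R @ S[k]
            if np.linalg.norm(a) > 1e-12:
                D[:, k] = a / np.linalg.norm(a)  # closed-form atom update
    return D, S
```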
9. ENTITY RANKING BASED ON DOCUMENT CENTERED MODEL
Ouyang Haoyi, Xu Weiran
Computer Science and Technology, 29 November 2011
Abstract:The traditional search engines answer user's question by returning a collection of documents relevant to their query. However many user information needs would be better answered by specific entities instead of just any type of documents. This paper introduces a method that ranks a given list of entities according to their relevance to query. We use a method called Document Centered Model. Research models including BM25, KL-divergence have been tested to make the result better. Our criteria are from TREC Entity 2010. | |||
TO cite this article:Ouyang Haoyi,Xu Weiran. ENTITY RANKING BASED ON DOCUMENT CENTERED MODEL[OL].[29 November 2011] http://en.paper.edu.cn/en_releasepaper/content/4452555 |
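A standard BM25 scorer of the kind the abstract mentions; the k1 and b values are the usual defaults, and the toy tokenized corpus in the test is an assumption.

```python
import math

def bm25(query_terms, doc, corpus, k1=1.2, b=0.75):
    """BM25 score of one document (a token list) for a query over a
    small corpus of token lists, with the standard smoothed idf."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)           # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        tf = doc.count(t)                               # term frequency
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

In a document-centered entity model, such scores over the documents mentioning an entity would be aggregated into the entity's ranking score.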
10. Research on Human and Machine Performance of Handwritten Chinese Character Recognition in HCL2000
Wan Xinxin, Zhang Honggang
Computer Science and Technology, 2 December 2010
Abstract: In this paper, human performance on handwritten Chinese character recognition is compared with machine performance, with the aim of establishing the accuracy required for further handwritten word segmentation and recognition. HCL2000, one of the largest databases of handwritten Chinese characters, provides the sample characters for the performance evaluation. A Human Performance Test system on HCL2000 is designed to examine the accuracy of human recognition. According to the experimental results, the best machine record is competitive with average human performance. LPP and MFA employing gradient feature vectors of size 512 far outperform LDA at the same dimensionality.
To cite this article: Wan Xinxin, Zhang Honggang. Research on Human and Machine Performance of Handwritten Chinese Character Recognition in HCL2000[OL]. [2 December 2010] http://en.paper.edu.cn/en_releasepaper/content/4394583
© 2003-2012 Sciencepaper Online, unless otherwise stated.