There are 4 papers published in this subject since this site started.
1. Attention based lattice bilstm model for Chinese named entity recognition
CAO Xiaofei, YANG Juan
Computer Science and Technology, 18 December 2019
Abstract: A recently proposed model named Lattice LSTM has focused on integrating segmentation information into the long short-term memory (LSTM) network. However, it can only affect the subsequent character sequence of each character from the level of word granularity, which results in insufficient extraction of word segmentation information. Besides, the character features extracted by the LSTM are given the same weight when transferred to the conditional random field (CRF) layer, so key semantic information receives little consideration. To solve these problems, this paper proposes a novel neural network model, Att-Lattice BiLSTM, which improves the original lattice model with a bidirectional LSTM based on the attention mechanism. An information path is added from the end character of a word to its start character in the backward pass of the LSTM, which integrates the word boundary information into both the start and end characters of the word during the bidirectional transfer of the LSTM network, introducing the word information comprehensively. Moreover, the new model seamlessly incorporates an attention mechanism to automatically capture the more important semantic features. Two strategies are also provided to aggregate the outputs of the bidirectional LSTM layers so that semantic features are integrated effectively. Experimental results on four data sets show that the proposed model outperforms other state-of-the-art models.
To cite this article: CAO Xiaofei, YANG Juan. Attention based lattice bilstm model for Chinese named entity recognition [OL]. [18 December 2019]. http://en.paper.edu.cn/en_releasepaper/content/4750113
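The core idea of weighting character features by importance before the tagging layer can be sketched as a scaled dot-product attention over per-character vectors. The sketch below is illustrative only: `features` stands in for the BiLSTM's per-character outputs and `query` for the scoring vector, neither of which matches the paper's actual parameterization.

```python
import math

def softmax(scores):
    """Numerically stable softmax over raw attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, query):
    """Scaled dot-product attention over per-character feature vectors.

    Returns the attention weights and the weighted (pooled) feature vector
    that a CRF-style layer could then consume.
    """
    d = len(query)
    scores = [sum(f[i] * query[i] for i in range(d)) / math.sqrt(d)
              for f in features]
    weights = softmax(scores)
    pooled = [sum(w * f[i] for w, f in zip(weights, features))
              for i in range(d)]
    return weights, pooled
```

Characters whose features align with the query receive larger weights, so their semantics dominate the pooled representation instead of every character contributing equally.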
2. AJWE: Jointly Learning Chinese Word Embeddings with Heterogeneous Attention
Liu Jie, Wang Yulong
Computer Science and Technology, 09 January 2019
Abstract: Much attention has been drawn to leveraging sub-word information to improve word representations, especially in morphologically rich languages such as Chinese. Previous studies on Chinese word embeddings have explored diverse fine-grained sub-word information, such as characters, radicals, components, and stroke n-grams. However, none of them distinguishes the semantic contribution of a word to its context, and they are also weak at handling the ambiguity of characters and sub-character components. In this paper, we propose AJWE, a joint model for learning Chinese word embeddings with heterogeneous attention. We explore an external self-attention mechanism to learn a word's semantic contribution to its context, and specially propose a bias-attention approach for internal sub-word morphemes to address the ambiguity issue. Evaluation on word similarity, word analogy, text classification and named entity recognition demonstrates that our model outperforms existing state-of-the-art methods.
To cite this article: Liu Jie, Wang Yulong. AJWE: Jointly Learning Chinese Word Embeddings with Heterogeneous Attention [OL]. [9 January 2019]. http://en.paper.edu.cn/en_releasepaper/content/4746998
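The general pattern of combining a word vector with attention-weighted sub-word vectors (characters, radicals, components) can be sketched as follows. Scoring each sub-word against the word vector and averaging the result back in are illustrative choices; AJWE's actual heterogeneous and bias-attention mechanisms differ.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def combine(word_vec, subword_vecs):
    """Attention-weighted sum of sub-word vectors, averaged with the
    word vector itself.

    Sub-words that agree with the word's current embedding get larger
    weights, which is one way to down-weight ambiguous components.
    """
    weights = softmax([dot(word_vec, s) for s in subword_vecs])
    mixed = [sum(w * s[i] for w, s in zip(weights, subword_vecs))
             for i in range(len(word_vec))]
    return [(wv + m) / 2.0 for wv, m in zip(word_vec, mixed)]
```

In a real embedding model these vectors would be trained jointly; here the function only shows how the attention weights redistribute influence among sub-word components.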
3. Location Query System Based On Google Map
Sheng Yadong, Wang Xiaojie
Computer Science and Technology, 12 December 2011
Abstract: With the popularity of GPS, Location-Based Services have been developed widely and applied in many fields, such as location query, point-of-interest search, and self-funded travel services. To differentiate many similar locations, sentence similarity is introduced, a very important research topic in NLP that has been widely used in fields such as text classification and information processing. In recent years, a great many methods have been proposed to measure the similarity of sentences, but most of them derive from approaches designed for long text documents and are not suitable for some applications. This paper therefore focuses on very short sentence similarity computation, especially the similarity between Chinese and English addresses. In the computation, sentence similarity is calculated using both structural and semantic information. Experiments on similarity calculation show that the proposed method achieves higher accuracy.
To cite this article: Sheng Yadong, Wang Xiaojie. Location Query System Based On Google Map [OL]. [12 December 2011]. http://en.paper.edu.cn/en_releasepaper/content/4455267
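Combining structural and semantic information for short-address similarity can be sketched as a weighted blend of two simple measures. Both measures and the `alpha` weight below are illustrative stand-ins, not the paper's actual formulation.

```python
def address_similarity(a, b, alpha=0.6):
    """Blend of structural and semantic similarity for short addresses.

    Structural part: shared leading-token prefix, since addresses are
    hierarchical (country -> city -> street -> number). Semantic part:
    token-set Jaccard overlap, order-insensitive.
    """
    ta, tb = a.lower().split(), b.lower().split()
    # structural: how far the two addresses agree from the top of the hierarchy
    prefix = 0
    for x, y in zip(ta, tb):
        if x != y:
            break
        prefix += 1
    structural = prefix / max(len(ta), len(tb))
    # semantic: overall token overlap regardless of position
    sa, sb = set(ta), set(tb)
    semantic = len(sa & sb) / len(sa | sb)
    return alpha * structural + (1 - alpha) * semantic
```

Two addresses in the same district but on different streets thus score higher than addresses in different cities, even when their token overlap alone would be similar.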
4. Factors Analysis for a Computational Model of Emergent Simple Syntax
Yu Hao, Wang Xiaojie
Computer Science and Technology, 07 November 2010
Abstract: This paper proposes several factors for computational models of early child language acquisition, giving a better explanation of how external language input and intrinsic parameters affect the learning, comprehension and production of simple syntax. Taking a model simulating the transition from the one-word stage to the two-word stage (O2T) as a starting point, the paper presents quantitative, simulation-based investigations of how the language input and parameters affect the volume of the system (i.e., how much is learned) and the evaluation output (i.e., how well what is learned can be used by the system to comprehend or produce simple syntax). Factors including the contributing word, the related string/concept and the critical abstract factor are identified to uncover the underlying reasons. Contributing words bring syntax information from the language input into the system; related strings/concepts relate learned syntax to new syntax; and the abstract factor is crucial for the ability of generative learning. Experimental results show that the contributing word and the related string/concept have much greater influence on the volume of the system and the evaluation output, respectively, than other information the language input contains. Jointly with the related string/concept, the critical abstract factor controls the evaluation output, and there exist value ranges of the critical abstract factor for the occurrence of under-extension and over-extension. The paper then makes a similar investigation on MOSAIC (a mature and widely accepted computational model of syntax acquisition) and obtains similar results, which indicates some degree of generality of the factors. In light of discrepancies between the results, the paper also obtains a clearer image of MOSAIC by discussing its differences from the O2T model.
To cite this article: Yu Hao, Wang Xiaojie. Factors Analysis for a Computational Model of Emergent Simple Syntax [OL]. [7 November 2010]. http://en.paper.edu.cn/en_releasepaper/content/4390755
© 2003-2012 Sciencepaper Online, unless otherwise stated.