There are 35 papers published in this subject since this site started.
1. A cross-modal semantic model based on AlexNet for the security topic event identification in social network
WANG Cong, DU Junping
Computer Science and Technology, 20 November 2018
Abstract: With the rapid development of Internet technology, online social networks have become part of daily life, giving users a platform to share information anytime and anywhere. Because of the popularity of social media, topics related to national security may be widely shared on these platforms, so the study of emergencies in social networks is particularly important. The information carriers in social networks are mainly text and images, so this paper proposes a cross-modal semantic model based on deep convolutional neural networks for identifying security topic events in social networks. The main work is to study and improve the AlexNet model: depthwise convolution is introduced to deepen the network structure and reduce the model's parameter count. The modified AlexNet extracts features from image data, text data are transformed by a word2vec module, and the two feature vectors are fused using an attention mechanism. The newly proposed CSMBA framework is then used to classify security topic events on a Sina Microblog dataset; classification accuracy increases by 4 percent and training time is reduced by 14 percent.
To cite this article: WANG Cong, DU Junping. A cross-modal semantic model based on AlexNet for the security topic event identification in social network [OL]. [20 November 2018] http://en.paper.edu.cn/en_releasepaper/content/4746492
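The abstract describes fusing an image feature vector and a text feature vector with an attention mechanism. A minimal sketch of such attention-weighted fusion is below; the random projections and vector sizes are illustrative stand-ins, not the paper's trained CSMBA parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
img_feat = rng.normal(size=8)   # stand-in for the AlexNet image vector
txt_feat = rng.normal(size=8)   # stand-in for the word2vec text vector

def softmax(z):
    e = np.exp(z - z.max())     # subtract max for numerical stability
    return e / e.sum()

# Score each modality; random projections stand in for learned weights.
W = rng.normal(size=(2, 8))
scores = np.array([W[0] @ img_feat, W[1] @ txt_feat])
alpha = softmax(scores)         # attention weights, sum to 1
fused = alpha[0] * img_feat + alpha[1] * txt_feat
```

The fused vector would then feed a classifier head; in the paper's setting the weights would be learned end to end.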
2. Sequential Stock Trading with Continuous Deep Q Learning
SHI Hao, ZHANG Xiaohang
Computer Science and Technology, 30 November 2017
Abstract: This paper argues that the limit order is a more intelligent and profitable way to trade stocks. When a bad market order is executed, the trader incurs a loss because the bad decision leaves the trader stuck at an unfavorable price; a limit order is superior to a market order in that it always gives the trader a better price position. We use a customized deep continuous Q-learning algorithm to price limit orders and trade stocks in discrete time steps. Experiments on NSC market data show that our strategy outperforms a market-order strategy and that our algorithm is well suited to this problem.
To cite this article: SHI Hao, ZHANG Xiaohang. Sequential Stock Trading with Continuous Deep Q Learning [OL]. [30 November 2017] http://en.paper.edu.cn/en_releasepaper/content/4742490
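The paper customizes deep continuous Q learning for limit-order pricing. As a far simpler stand-in, the snippet below shows the tabular Q-learning update that the deep, continuous variant generalizes; the states, reward, and learning rate are toy values, not the paper's trading setup.

```python
import numpy as np

n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9          # learning rate and discount factor

# One observed transition: in state 0, action 1 yields reward 1.0
# and lands in state 2.
s, a, r, s_next = 0, 1, 1.0, 2
td_target = r + gamma * Q[s_next].max()   # bootstrapped target
Q[s, a] += alpha * (td_target - Q[s, a])  # move estimate toward target
```

In the continuous setting the table is replaced by a network that outputs the action maximizing Q directly, which is what makes fine-grained order pricing tractable.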
3. Nonlinear stochastic variance reduction gradient based neural networks and its convergence
WANG Jian, YANG Guo-Ling, ZHANG Hua-Qing, GAO Tao, SUN Zhan-Quan
Computer Science and Technology, 17 May 2017
Abstract: A large number of optimization problems are nonconvex. The stochastic variance reduced gradient (SVRG) algorithm provides a solution, solving nonconvex optimization problems through the training of neural networks. In this paper, a nonlinear stochastic variance reduced gradient method (NSVRG), in which the objective function is nonlinear, is proposed. Under mild conditions, the monotonicity of the error function is obtained. We then establish the weak convergence property with a constant learning rate; weak convergence means that the gradient of the error function goes to zero. Finally, a numerical example is given to substantiate the theoretical results.
To cite this article: WANG Jian, YANG Guo-Ling, ZHANG Hua-Qing, et al. Nonlinear stochastic variance reduction gradient based neural networks and its convergence [OL]. [17 May 2017] http://en.paper.edu.cn/en_releasepaper/content/4733411
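The variance-reduced update at the heart of SVRG can be sketched compactly. The example below runs plain SVRG on a small least-squares problem (the paper's NSVRG handles a nonlinear objective and proves weak convergence; this only illustrates the snapshot-corrected gradient step itself). All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true                      # noiseless targets
n = A.shape[0]

def grad_i(w, i):                   # gradient of the i-th sample's loss
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    return A.T @ (A @ w - b) / n

w_tilde = np.zeros(3)
lr = 0.01
for epoch in range(30):
    mu = full_grad(w_tilde)         # snapshot full gradient
    w = w_tilde.copy()
    for _ in range(2 * n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient: per-sample gradient,
        # corrected by its value at the snapshot plus the full gradient
        v = grad_i(w, i) - grad_i(w_tilde, i) + mu
        w -= lr * v
    w_tilde = w

err = np.linalg.norm(w_tilde - w_true)
```

Because the correction term vanishes at the optimum, the stochastic gradient's variance shrinks as the iterate approaches the solution, which is what allows a constant learning rate.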
4. A novel conjugate gradient method with generalized Armijo search for efficient training of feedforward neural networks
WANG Jian, ZHANG Bing-Jie, SUN Zhan-Quan, HAO Wen-Xue, Qingying Sun
Computer Science and Technology, 07 May 2017
Abstract: The conjugate gradient (CG) method requires little memory and converges quickly in practical applications. A novel conjugate gradient method with a generalized Armijo search is proposed for three-layer BP neural networks (BPNNs) in this paper. Theoretically, the deterministic convergence properties of this learning method, including both weak and strong convergence, are proved using a novel technique. Weak convergence means that the gradient of the error function tends to zero; strong convergence means that the sequence of weights tends to a fixed point. Additionally, compared with the existing literature, the restrictive assumptions required for deterministic convergence are relaxed. Numerical experiments are provided to check the theoretical results.
To cite this article: WANG Jian, ZHANG Bing-Jie, SUN Zhan-Quan, et al. A novel conjugate gradient method with generalized Armijo search for efficient training of feedforward neural networks [OL]. [7 May 2017] http://en.paper.edu.cn/en_releasepaper/content/4733405
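A hedged sketch of the two ingredients the abstract names, conjugate gradient directions plus Armijo backtracking, applied to a simple quadratic rather than a BP network; the Polak-Ribiere coefficient and all constants are illustrative choices, not necessarily the paper's generalized variant.

```python
import numpy as np

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
f = lambda w: 0.5 * w @ Q @ w - c @ w
grad = lambda w: Q @ w - c

w = np.zeros(2)
g = grad(w)
d = -g                                 # first direction: steepest descent
for _ in range(50):
    if g @ d >= 0:                     # safeguard: restart if d is not
        d = -g                         # a descent direction
    # Armijo backtracking: shrink step until sufficient decrease holds
    t, shrink, sigma = 1.0, 0.5, 1e-4
    while f(w + t * d) > f(w) + sigma * t * (g @ d):
        t *= shrink
    w_new = w + t * d
    g_new = grad(w_new)
    # Polak-Ribiere coefficient (clipped at 0) picks the next direction
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))
    d = -g_new + beta * d
    w, g = w_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
```

On this quadratic the minimizer solves Q w = c, so the final gradient norm doubles as a correctness check.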
5. A projection based recurrent neural network approach to nonconvex optimization
Shenshen Gu, Jiao Peng
Computer Science and Technology, 16 December 2015
Abstract: In this paper, we propose a projection-based recurrent neural network for solving nonconvex programming problems subject to nonlinear equality and bound constraints. The proposed neural network makes use of a gradient projection onto the tangent space of the constraints together with the well-known projection theorem. It is shown that the proposed neural network is stable and globally convergent to an optimal solution within finite time. A global convergence analysis is established for nonconvex problems, and numerical examples demonstrate the applicability, effectiveness, and efficiency of the proposed network.
To cite this article: Shenshen Gu, Jiao Peng. A projection based recurrent neural network approach to nonconvex optimization [OL]. [16 December 2015] http://en.paper.edu.cn/en_releasepaper/content/4671758
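The paper's network combines a tangent-space gradient projection with the projection theorem. The sketch below only illustrates the simpler ingredient, Euclidean projection onto bound constraints inside a discretized gradient flow; the objective and bounds are toy values.

```python
import numpy as np

# Minimize ||x - (2, -1)||^2 over the box [0, 1] x [0, 1]; the
# unconstrained minimizer (2, -1) lies outside, so the constrained
# optimum is its projection onto the box, (1, 0).
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
f_grad = lambda x: 2.0 * (x - np.array([2.0, -1.0]))

x = np.array([0.5, 0.5])
for _ in range(100):
    # gradient step, then project back onto the feasible box
    x = np.clip(x - 0.1 * f_grad(x), lo, hi)
```

The continuous-time analogue of this loop is exactly the kind of dynamics a projection-based recurrent network implements in hardware.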
6. FPGA Implementation of K-Winners-Take-All Neural Network Based on Linear Programming Formulation
Shenshen Gu, Jiarui Zhang
Computer Science and Technology, 01 December 2015
Abstract: k-winners-take-all ($k$WTA) is an operation that identifies the $k$ largest of multiple input signals. It has important applications in machine learning, statistics, filtering, sorting, etc. When the number of inputs becomes large and the selection must operate in real time, parallel algorithms are desirable, and many neural network algorithms have therefore been proposed for $k$WTA. Compared with software simulation, hardware implementation allows a high degree of parallelism. Many hardware implementations have been proposed, including digital, analog, hybrid, FPGA-based, and (non-electronic) optical chips. Among these, the FPGA provides an effective programmable resource together with fast prototyping and rapid system deployment. In this paper, a new hardware implementation of a typical $k$WTA neural network on a field-programmable gate array (FPGA) chip is proposed. Experimental results show that the proposed implementation achieves a high degree of parallelism and fast performance.
To cite this article: Shenshen Gu, Jiarui Zhang. FPGA Implementation of K-Winners-Take-All Neural Network Based on Linear Programming Formulation [OL]. [1 December 2015] http://en.paper.edu.cn/en_releasepaper/content/4667669
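The $k$WTA operation itself is easy to state in software, which helps make the hardware version concrete. Below is a serial numpy sketch of the operation the paper implements on FPGA; the FPGA version solves an equivalent linear program in parallel, which this sketch does not attempt to reproduce.

```python
import numpy as np

def k_winners_take_all(x, k):
    """Return a 0/1 vector marking the k largest entries of x."""
    x = np.asarray(x, dtype=float)
    winners = np.zeros(x.size, dtype=int)
    # argpartition locates the indices of the k largest inputs in O(n)
    winners[np.argpartition(-x, k - 1)[:k]] = 1
    return winners

out = k_winners_take_all([0.3, 1.2, -0.5, 0.9, 0.1], 2)
```

Here the two largest inputs (1.2 and 0.9) win, so `out` marks positions 1 and 3.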
7. Performance evaluation of wavelet scattering network in image texture classification in various color spaces
WU Jiasong, JIANG Longyu, HAN Xu, SENHADJI Lotfi, SHU Huazhong
Computer Science and Technology, 16 July 2014
Abstract: Texture plays an important role in many image analysis applications. In this paper, we evaluate the performance of color texture classification by applying the wavelet scattering network in various color spaces. Experimental results on the KTH_TIPS_COL database show that the opponent-RGB-based wavelet scattering network outperforms the other color spaces. Therefore, for color texture classification, the opponent-RGB-based wavelet scattering network is recommended.
To cite this article: WU Jiasong, JIANG Longyu, HAN Xu, et al. Performance evaluation of wavelet scattering network in image texture classification in various color spaces [OL]. [16 July 2014] http://en.paper.edu.cn/en_releasepaper/content/4603925
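The best-performing space in the paper is opponent RGB. One common opponent transform (used, for example, in color descriptor work) maps RGB to two chromatic opponent channels plus intensity; treat the exact variant used in the paper as an assumption here.

```python
import numpy as np

def rgb_to_opponent(img):
    """img: H x W x 3 float array in RGB order -> H x W x 3 opponent."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)             # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)   # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3.0)         # intensity
    return np.stack([o1, o2, o3], axis=-1)

gray = np.full((2, 2, 3), 0.5)   # gray pixels carry no chromatic signal
opp = rgb_to_opponent(gray)
```

The scattering network would then be applied channel-wise to the transformed image, exactly as it would be to raw RGB planes.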
8. Structure Learning of Bayesian Network Based on Dependency Analysis of Relational Database
Wang LiMin, Xia Huijie
Computer Science and Technology, 12 March 2012
Abstract: Many researchers have attempted to incorporate relational databases into probabilistic reasoning, and some achievements have been made. To reduce the computational complexity of structure learning of Bayesian networks (BN), which is NP-hard, this paper introduces an innovative data dependency, the local multivalued dependency (LMVD), that combines the advantages of MVDs and EMVDs, and proposes a method that uses FDs and LMVDs as expert knowledge to remove extraneous attributes and construct the initial network structure. The inner dependencies implicit in each sample are then found by association rule mining. A new classification model named FM-TAN is proposed by applying this method to NB and TAN networks. Experimental results show the effectiveness and efficiency of the proposed approach.
To cite this article: Wang LiMin, Xia Huijie. Structure Learning of Bayesian Network Based on Dependency Analysis of Relational Database [OL]. [12 March 2012] http://en.paper.edu.cn/en_releasepaper/content/4471237
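FM-TAN builds on naive Bayes and TAN classifiers. As background only, here is a minimal naive Bayes over binary attributes with Laplace smoothing; the tiny dataset is invented for illustration and this is not the paper's LMVD-based learner.

```python
import numpy as np

# Toy training data: two binary attributes, binary class label.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y = np.array([1, 1, 0, 0])

def nb_predict(x):
    scores = []
    for c in (0, 1):
        Xc = X[y == c]
        prior = len(Xc) / len(X)
        # Laplace-smoothed P(attr_j = x_j | class = c) for each attribute
        lik = np.prod((np.sum(Xc == x, axis=0) + 1) / (len(Xc) + 2))
        scores.append(prior * lik)
    return int(np.argmax(scores))

pred = nb_predict(np.array([1, 0]))
```

TAN (and FM-TAN) relax the independence assumption above by letting each attribute additionally depend on one other attribute, which is where the learned dependency structure enters.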
9. Calculating Eigenpairs of Real skew symmetric matrices by Neural Networks
Liu Yiguang
Computer Science and Technology, 09 February 2009
Abstract: In this paper, a novel approach is presented to extract all eigenpairs of real skew-symmetric matrices. First, a functional neural network model is designed to calculate the eigenvalue whose imaginary part is largest, together with the associated eigenvector. In subsequent runs, by building initial vectors from known eigenvectors, all eigenvalues with positive imaginary parts and their associated eigenvectors are obtained; by conjugacy, all eigenpairs are then known. Compared with other approaches, the proposed method does not need to construct a matrix series. Finally, two examples are employed to illustrate the effectiveness of the approach.
To cite this article: Liu Yiguang. Calculating Eigenpairs of Real skew symmetric matrices by Neural Networks [OL]. [9 February 2009] http://en.paper.edu.cn/en_releasepaper/content/28632
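The conjugacy argument in the abstract rests on two structural facts about real skew-symmetric matrices: their eigenvalues are purely imaginary and come in conjugate pairs. The check below is not the paper's neural network model, just a numpy verification of those facts on a random example.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
S = B - B.T                      # S.T == -S, so S is skew-symmetric
vals, vecs = np.linalg.eig(S)

# Eigenvalues of a real skew-symmetric matrix are purely imaginary,
# and their imaginary parts cancel in conjugate pairs.
max_real_part = np.abs(vals.real).max()
imag_sum = np.abs(vals.imag.sum())
```

This is why computing only the eigenpairs with positive imaginary part, as the paper does, is enough to recover the full spectrum.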
10. A Recurrent Neural Networks Based Method calculating All Eigenpairs of Real Skew Matrices
Liu Yiguang, Zhisheng You
Computer Science and Technology, 20 January 2009
Abstract: Efficiently calculating all eigenpairs of a real skew matrix is significant for real applications. Neural networks run asynchronously and can achieve high computational performance, so a method based on newly designed recurrent neural networks is presented to calculate the eigenpairs of real skew matrices. In the first computation, the neural networks obtain the eigenvalue of largest modulus and the corresponding eigenvector. By using known eigenpairs to correct the matrix and feeding the results back into the networks, all eigenpairs are calculated. Tests on three matrices (one with 7 distinct eigenvalues, one with repeated eigenvalues, and one of dimension 60) indicate that the calculated eigenvalues are very close to the true ones and that each extracted eigenvector is equivalent to the corresponding true one, both belonging to the same eigen-subspace. A comparison shows that in a serial computing environment, the convergence speed and accuracy of this method are close to those of the power method. Compared with numerical methods, the approach is expected to perform well on parallel platforms, especially when all network nodes run concurrently. Compared with other neural-network-based methods in this direction, this method is suitable for real skew matrices and can calculate all eigenpairs.
To cite this article: Liu Yiguang, Zhisheng You. A Recurrent Neural Networks Based Method calculating All Eigenpairs of Real Skew Matrices [OL]. [20 January 2009] http://en.paper.edu.cn/en_releasepaper/content/28099
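The abstract benchmarks against the power method, the classic serial algorithm for the eigenvalue of largest modulus. For reference, a minimal power iteration on a matrix with a known spectrum (symmetric here for simplicity; the paper's networks target skew matrices and parallel hardware):

```python
import numpy as np

rng = np.random.default_rng(2)
# Build a test matrix with known eigenvalues via a random orthogonal Q.
D = np.diag([3.0, -1.0, 0.5, 0.2, -0.1])
Qmat, _ = np.linalg.qr(rng.normal(size=(5, 5)))
M = Qmat @ D @ Qmat.T            # largest-modulus eigenvalue is 3.0

v = rng.normal(size=5)
for _ in range(300):
    v = M @ v                    # amplify the dominant eigendirection
    v /= np.linalg.norm(v)       # renormalize to avoid overflow
lam = v @ M @ v                  # Rayleigh quotient estimate
```

Like the paper's first computation stage, each pass extracts one dominant eigenpair; deflating (correcting the matrix with known eigenpairs) and repeating yields the rest.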
© 2003-2012 Sciencepaper Online, unless otherwise stated.