There are 952 papers published in this subject since this site started.
1. Classification of security vulnerability exploit codes using large language models
Huang Linhui, He Yongzhong, Yin Min, Li Chao, Hou Lu, Wang Xiaonan, Guo Yaoyao
Computer Science and Technology, 26 February 2024
Abstract: Software vulnerabilities are the root cause of security risks such as data breaches, system crashes, and network intrusions, and once attackers exploit them they can cause significant losses. According to reports from the National Vulnerability Database (NVD), the number of disclosed vulnerabilities is steadily increasing, giving attackers more opportunities for exploitation. Consequently, more attack scripts, known as exploit data, are being publicly disclosed. To make such data easier for penetration testers and researchers to use, practitioners have established exploit databases. However, these databases rely largely on manual collection and categorization, making them susceptible to human error. Automated classification methods are therefore needed to manage exploit programs targeting various software and systems, improving management efficiency and reducing the associated costs. This article introduces an automated exploit classifier that handles the text and code of an exploit entry separately: BERT, CodeBERT, and W2V models generate the corresponding feature vectors, and models such as BiLSTM then build the classifier, achieving effective exploit classification.
To cite this article: Huang Linhui, He Yongzhong, Yin Min, et al. Classification of security vulnerability exploit codes using large language models [OL]. [26 February 2024] http://en.paper.edu.cn/en_releasepaper/content/4762133
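As a hedged illustration of the dual-channel design described in the abstract, the sketch below splits an exploit record into its natural-language and code parts and builds a separate feature vector for each channel before concatenating them. Hand-rolled bag-of-words vectors and the tiny vocabularies stand in for the BERT/CodeBERT/W2V embeddings used in the paper; the sample exploit and both vocabularies are invented for illustration.

```python
# Dual-channel feature extraction: text (comments) and code are
# embedded separately, then concatenated for a downstream classifier.

def split_channels(source):
    """Separate '#'-comment lines (text channel) from code lines."""
    text_lines, code_lines = [], []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            text_lines.append(stripped.lstrip("#").strip())
        elif stripped:
            code_lines.append(stripped)
    return " ".join(text_lines), " ".join(code_lines)

def bow_vector(text, vocab):
    """Toy bag-of-words stand-in for a learned embedding."""
    tokens = text.lower().split()
    return [tokens.count(word) for word in vocab]

vocab_text = ["overflow", "remote", "local", "bypass"]   # assumed
vocab_code = ["socket", "connect", "payload", "exec"]    # assumed

sample = """# Remote buffer overflow exploit
import socket
payload = b"A" * 1024
s = socket.socket()
"""

text, code = split_channels(sample)
features = bow_vector(text, vocab_text) + bow_vector(code, vocab_code)
print(features)   # [1, 1, 0, 0, 1, 0, 1, 0]
```

In the paper the concatenated vectors feed a BiLSTM-based classifier; any sequence or linear classifier could consume this feature layout in the same way.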
2. Research on Option-Critic algorithm based Representation Erasure
Meng JunWei, Hu Zheng
Computer Science and Technology, 06 February 2024
Abstract: The Option-Critic (OC) framework can extract transferable abstract knowledge without requiring any environment-specific prior knowledge, learning options (a form of temporally abstract policy) end-to-end. However, the OC framework exhibits low data efficiency in transfer tasks: during learning, each option considers the entire state space of the task, which enlarges the policy search space. This paper proposes an option learning algorithm based on representation erasure, which introduces a representation-erasure method to explicitly quantify the influence of each state dimension on high-level and low-level policy learning. It identifies and erases dimensions that significantly interfere with training, effectively reducing the scale of the policy search space. Through theoretical derivation and experimental validation, the paper demonstrates the effectiveness of the representation-erasure-based option learning algorithm.
To cite this article: Meng JunWei, Hu Zheng. Research on Option-Critic algorithm based Representation Erasure [OL]. [6 February 2024] http://en.paper.edu.cn/en_releasepaper/content/4762077
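The erasure step described in the abstract can be sketched as follows, under stated assumptions: each state dimension carries an interference score (in the paper these come from the quantification method; here they are made-up numbers), and dimensions above a threshold are zero-masked before the state reaches the high-level and low-level policies, shrinking the effective search space.

```python
# Representation erasure sketch: mask out state dimensions that are
# judged to interfere with option learning. Scores and threshold are
# illustrative assumptions, not the paper's actual quantities.

def build_mask(interference, threshold):
    """Keep a dimension only if its interference score is below threshold."""
    return [1.0 if s < threshold else 0.0 for s in interference]

def erase(state, mask):
    """Zero-mask the erased dimensions of a state vector."""
    return [x * m for x, m in zip(state, mask)]

interference = [0.1, 0.9, 0.2, 0.8]   # assumed per-dimension scores
mask = build_mask(interference, threshold=0.5)
state = [3.0, 1.0, 2.0, 5.0]
print(erase(state, mask))             # [3.0, 0.0, 2.0, 0.0]
```

Both policies would then consume the masked state, so the erased dimensions can no longer influence gradient updates.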
3. End-to-end 3D Human Pose Estimation using Dual Decoders
WANG Zhang, SONG Mei, JIN Lei
Computer Science and Technology, 02 February 2024
Abstract: Existing methods for 3D human pose estimation mainly divide the task into two stages. The first stage identifies the 2D coordinates of the human joints in the input image; the second stage takes these 2D joint coordinates as input and recovers their depth information to obtain the 3D pose. However, the accuracy of this two-stage approach relies heavily on the first stage's results and involves many redundant processing steps, which reduces the network's inference efficiency. To address these issues, we propose EDD, a fully end-to-end 3D human pose estimation method based on a transformer architecture with dual decoders. By learning multiple human poses, the model directly infers all 3D poses in the image with a pose decoder and then further refines the results with a joint decoder based on the kinematic relations between joints. With the attention mechanism, the method adaptively focuses on the features most relevant to the target joint, effectively overcoming the feature-misalignment problem in human pose estimation and greatly improving model performance. Complex post-processing steps such as non-maximum suppression are eliminated, further improving the model's efficiency. The results show that this method achieves an accuracy of 87.4% on the MuPoTS-3D dataset, significantly improving on end-to-end 3D human pose estimation methods based on mixed training.
To cite this article: WANG Zhang, SONG Mei, JIN Lei. End-to-end 3D Human Pose Estimation using Dual Decoders [OL]. [2 February 2024] http://en.paper.edu.cn/en_releasepaper/content/4761949
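A toy illustration of the second-stage idea above: after a pose decoder produces coarse 3D joints, a refinement step exploits the kinematic relation between joints. The rule below (snapping each child joint to an expected bone length from its parent) is a hypothetical stand-in; the paper's joint decoder learns its refinement with attention rather than a fixed geometric rule.

```python
# Kinematic refinement sketch: rescale each parent->child offset so
# the child sits at the expected bone length from its parent.
import math

def refine(joints, parents, bone_len):
    out = dict(joints)
    for child, parent in parents.items():    # parents listed root-first
        offset = [c - p for c, p in zip(joints[child], out[parent])]
        norm = math.sqrt(sum(v * v for v in offset)) or 1.0
        out[child] = [p + v / norm * bone_len[child]
                      for p, v in zip(out[parent], offset)]
    return out

# Coarse decoder output (invented): knee predicted too far from hip.
coarse = {"hip": [0.0, 0.0, 0.0], "knee": [2.0, 0.0, 0.0]}
parents = {"knee": "hip"}
bone_len = {"knee": 1.0}
refined = refine(coarse, parents, bone_len)
print(refined["knee"])    # [1.0, 0.0, 0.0]
```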
4. Lightweight Deep Neural Network Model With Padding-free Downsampling
LIU Dengfeng, GUO Xiaohe, WANG Ning, WU Qin
Computer Science and Technology, 25 January 2024
Abstract: Deep neural networks have achieved impressive performance on image classification tasks. However, because of hardware limitations in computing units and storage capacity, deploying these networks directly on resource-constrained devices such as mobile and edge devices is challenging. While lightweight network models have advanced significantly, the downsampling stage has received little attention. Since the feature map is reused many times, shrinking it during downsampling not only reduces the computational cost of the downsampling module itself but also lowers the computational burden of subsequent stages. This paper addresses this gap by proposing a padding-free downsampling module that effectively reduces computational cost and integrates seamlessly into various deep learning models. Furthermore, we introduce a hybrid stem layer to obtain competitive accuracy. Extensive experiments were conducted on the CIFAR-100, Stanford Dogs, and ImageNet datasets. On CIFAR-100, the results show that the proposed module reduces computational cost by approximately 20% and improves inference speed on resource-constrained devices by around 10%.
To cite this article: LIU Dengfeng, GUO Xiaohe, WANG Ning, et al. Lightweight Deep Neural Network Model With Padding-free Downsampling [OL]. [25 January 2024] http://en.paper.edu.cn/en_releasepaper/content/4761964
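A minimal, framework-free sketch of the padding-free idea: stride-2 pooling that visits only fully covered 2x2 windows, so no border padding (and no compute wasted on padded zeros) is needed. The actual module in the paper is a learned convolution; this toy average pooling only illustrates the output-size arithmetic on an odd-sized map.

```python
# Padding-free stride-2 downsampling: partial border windows are
# simply skipped, so a 5x5 map yields a 2x2 output (floor(5/2)).

def downsample_no_pad(fmap):
    h, w = len(fmap), len(fmap[0])
    out = []
    for i in range(0, h - 1, 2):          # stop before a partial window
        row = []
        for j in range(0, w - 1, 2):
            window = (fmap[i][j] + fmap[i][j + 1]
                      + fmap[i + 1][j] + fmap[i + 1][j + 1])
            row.append(window / 4)
        out.append(row)
    return out

fmap = [[r * 5 + c for c in range(5)] for r in range(5)]  # 5x5 input
out = downsample_no_pad(fmap)
print(len(out), len(out[0]))   # 2 2
```

Dropping the padded border is exactly what removes the extra multiply-accumulates that a same-padded stride-2 layer would spend on zeros.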
5. End-to-End No-Reference Video Semantic Communication Quality Assessment via Deep Neural Networks
Zhang Baiquan, Que Xirong
Computer Science and Technology, 12 January 2024
Abstract: Video semantic communication is developing rapidly, but traditional video quality assessment methods are not fully compatible with it, and no no-reference video quality assessment method has been designed specifically for it. In this paper, we propose a new end-to-end no-reference video semantic communication quality assessment method using deep neural networks. Built on video semantic communication, our model creatively uses both the common and the individual features extracted from videos through semantic communication. It adopts a multi-task DNN framework that assesses the quality of the common and individual features separately and then combines the two to obtain the final video quality prediction score. Experimental results show that our model outperforms other no-reference video quality assessment methods and is better suited to semantic communication.
To cite this article: Zhang Baiquan, Que Xirong. End-to-End No-Reference Video Semantic Communication Quality Assessment via Deep Neural Networks [OL]. [12 January 2024] http://en.paper.edu.cn/en_releasepaper/content/4761748
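A minimal numeric sketch of the multi-task fusion described above: one head scores the common features, another scores the individual features, and a combination step yields the final quality score. All weights and feature values here are made-up illustrative numbers, and the linear heads stand in for the paper's DNN branches.

```python
# Two-head score fusion: score each feature branch, then combine.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

common_head = [0.5, 0.2]        # assumed head weights
individual_head = [0.3, 0.7]
fusion = [0.6, 0.4]             # assumed combination weights

common_feat = [1.0, 2.0]        # features from the common branch
individual_feat = [0.5, 1.0]    # features from the individual branch

q_common = dot(common_head, common_feat)
q_individual = dot(individual_head, individual_feat)
q_final = dot(fusion, [q_common, q_individual])
print(round(q_final, 2))        # 0.88
```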
6. Research on entity extraction of demand for industry-university-research projects based on BERT-BiLSTM-CRF
ZHANG Zhiqing, TAO Zekui
Computer Science and Technology, 04 January 2024
Abstract: To enable effective matching between universities and enterprises in industry-university-research cooperation projects and to accurately capture enterprises' technical needs, this paper proposes a Chinese named-entity extraction method based on a BERT+BiLSTM+CRF model, tailored to the concise and diverse texts of such projects. First, the BERT model encodes the input text; then the BiLSTM model models the context to capture more comprehensive contextual information; finally, label decoding is carried out by the CRF layer to obtain the optimal entity annotations. Experimental results show that the proposed method is effective and feasible and extracts entities better than traditional methods, offering a way to handle difficulties in technical-demand information extraction such as polysemy and language variants.
To cite this article: ZHANG Zhiqing, TAO Zekui. Research on entity extraction of demand for industry-university-research projects based on BERT-BiLSTM-CRF [OL]. [4 January 2024] http://en.paper.edu.cn/en_releasepaper/content/4761868
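The CRF decoding step in the pipeline above can be sketched with a plain Viterbi search: given per-token emission scores (in the paper these come from BERT+BiLSTM; here they are hard-coded) and tag-transition scores, it finds the highest-scoring tag sequence. The B/I/O tags and all score values are illustrative assumptions.

```python
# Viterbi decoding for a linear-chain CRF layer.

def viterbi(emissions, transitions):
    n_tags = len(emissions[0])
    score = list(emissions[0])           # best score ending in each tag
    back = []
    for emit in emissions[1:]:
        new_score, ptr = [], []
        for cur in range(n_tags):
            best_prev = max(range(n_tags),
                            key=lambda p: score[p] + transitions[p][cur])
            new_score.append(score[best_prev]
                             + transitions[best_prev][cur] + emit[cur])
            ptr.append(best_prev)
        back.append(ptr)
        score = new_score
    best_last = max(range(n_tags), key=lambda t: score[t])
    path = [best_last]
    for ptr in reversed(back):           # backtrack through pointers
        path.append(ptr[path[-1]])
    path.reverse()
    return path

TAGS = ["B", "I", "O"]
# transitions[p][c]: score of moving from tag p to tag c (assumed)
transitions = [[-1.0, 2.0, 0.0],     # B -> I is encouraged
               [-1.0, 1.0, 0.0],
               [1.0, -2.0, 1.0]]     # O -> I is discouraged
emissions = [[2.0, 0.0, 1.0],        # token 1 looks like B
             [0.0, 1.5, 1.0],        # token 2 looks like I
             [0.0, 0.0, 2.0]]        # token 3 looks like O
path = viterbi(emissions, transitions)
print([TAGS[t] for t in path])       # ['B', 'I', 'O']
```

The transition matrix is what lets the CRF veto locally plausible but globally invalid sequences such as an I tag directly after O.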
7. Calculation of Lightning Electromagnetic Pulse Based on Neural Network Boundary
Di Wang, Jianghai Wang
Computer Science and Technology, 02 January 2024
Abstract: In this paper, an LSTM-network-based boundary model is investigated to compute the propagation of lightning electromagnetic pulse (LEMP) fields in space. An LSTM neural network replaces the traditional perfectly matched layer (PML) absorbing boundary condition when solving for LEMP with the FDTD algorithm, with the data on the PML boundary used for model training. Compared with the traditional PML boundary model, the computational complexity and amount of computation are greatly reduced because the neural network needs only one cell layer as the boundary. Meanwhile, to enhance the generalization ability of the model, the Random Forest (RF) algorithm is used to screen the features of the data on the PML boundary. The experimental results show that the new model computes the electromagnetic field values at each position in space with good accuracy.
To cite this article: Di Wang, Jianghai Wang. Calculation of Lightning Electromagnetic Pulse Based on Neural Network Boundary [OL]. [2 January 2024] http://en.paper.edu.cn/en_releasepaper/content/4761839
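A toy 1D stand-in for the scheme above: a standard FDTD interior update, with each outermost cell supplied by a predictive boundary model occupying a single cell layer instead of a multi-cell PML stack. The `predict_boundary` function here is a hypothetical placeholder (it simply copies the neighboring interior value) standing in for the trained LSTM; grid sizes and the source are invented.

```python
# 1D FDTD step with a single-cell "learned" boundary in place of a PML.

def predict_boundary(prev_edge, prev_inner):
    # Hypothetical stand-in for the trained LSTM boundary model.
    return prev_inner

def fdtd_step(e, h, dt_over_dx=0.5):
    # Interior Yee-style leapfrog updates (normalized units).
    for i in range(len(h)):
        h[i] += dt_over_dx * (e[i + 1] - e[i])
    for i in range(1, len(e) - 1):
        e[i] += dt_over_dx * (h[i] - h[i - 1])
    # One boundary cell per side, filled by the predictive model.
    e[0] = predict_boundary(e[0], e[1])
    e[-1] = predict_boundary(e[-1], e[-2])
    return e, h

e = [0.0] * 8                    # E-field samples
h = [0.0] * 7                    # H-field samples, staggered grid
e[4] = 1.0                       # impulsive source
for _ in range(3):
    e, h = fdtd_step(e, h)
print(len(e), len(h))            # grid sizes unchanged: 8 7
```

The efficiency claim in the abstract corresponds to exactly this layout: the boundary costs one cell layer per side rather than the eight-or-more cells a PML typically occupies.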
8. An SDN-Based Flow Table Encoding Approach for Resource and Efficiency Optimization in Topic-based Pub/Sub Systems
Zhou Yu, Zhang Yang
Computer Science and Technology, 03 December 2023
Abstract: With the rapid development of software-defined networking (SDN), SDN-based multi-level flow table architectures are commonly employed to address issues such as QoS, security policies, and matching efficiency. The rise of semantic communication has also sparked researchers' interest in semantic information and its utilization, and many studies on semantic representation and semantic summarization have emerged. This study takes a new perspective: it reduces the number of flow table entries by exploiting the semantic relationships implied by the topic tree in topic-based pub/sub systems, introducing the concept of semantic aggregation. Semantic aggregation of table entries works with a multi-level flow table architecture to reduce the number of entries while ensuring correct delivery of streams. We propose a semantic-based table-entry encoding algorithm to implement this idea and conduct several experiments to examine its performance. The experimental results demonstrate that the algorithm achieves high space and efficiency optimization rates within a short encoding time.
To cite this article: Zhou Yu, Zhang Yang. An SDN-Based Flow Table Encoding Approach for Resource and Efficiency Optimization in Topic-based Pub/Sub Systems [OL]. [3 December 2023] http://en.paper.edu.cn/en_releasepaper/content/4761615
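An illustrative sketch of the semantic-aggregation idea: when every child of a topic-tree node is subscribed, the per-child flow entries collapse into a single wildcard entry for the parent, and the collapse repeats up the tree until no node qualifies. The topic tree, topic names, and the `parent/*` wildcard syntax are assumptions for illustration, not the paper's encoding.

```python
# Collapse per-child entries into one parent wildcard entry whenever
# all children of a topic-tree node are subscribed.

def aggregate(subscribed, tree):
    entries = set(subscribed)
    changed = True
    while changed:                       # repeat until fixpoint
        changed = False
        for parent, children in tree.items():
            if children and all(c in entries for c in children):
                entries -= set(children)
                entries.add(parent + "/*")
                changed = True
    return entries

tree = {
    "sensor": ["sensor/temp", "sensor/humidity"],
    "sensor/temp": [],
    "sensor/humidity": [],
}
subs = ["sensor/temp", "sensor/humidity", "alerts"]
print(sorted(aggregate(subs, tree)))     # ['alerts', 'sensor/*']
```

In a multi-level flow table this saves one entry per collapsed child while the wildcard match still delivers every stream the children would have matched.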
9. Exploring Developer Social Networks: Unveiling the Impact on New Commit Activity in GitHub
WAN Zhi-Jie, *WANG Yi
Computer Science and Technology, 01 December 2023
Abstract: Open-source software (OSS) development is a platform on which developers work collaboratively to finish a common project, giving birth to the developer social network (DSN). Such DSNs provide valuable insights into knowledge flow, coordination effectiveness, innovation, and the diffusion of practices and technologies. The network structure, characterized by size, density, bridges, and degree centrality, also influences team cohesion, coordination efficiency, and the emergence of specialized expertise. We visualize 80 DSNs constructed from the empirical data of 80 popular GitHub projects, identify these DSNs' characteristics, and employ a regression model to estimate the correlation between DSN properties and the average number of monthly new commits (NewC). Our analysis reveals three key findings: (1) DSN size and DSN bridge are positively correlated with NewC; (2) DSN density exhibits a weak negative correlation with NewC; and (3) no relationship exists between DSN average degree centrality and NewC. These results provide an integrated view of DSNs' structural characteristics and can inform software managers in enhancing project management, team collaboration, and software development outcomes.
To cite this article: WAN Zhi-Jie, *WANG Yi. Exploring Developer Social Networks: Unveiling the Impact on New Commit Activity in GitHub [OL]. [1 December 2023] http://en.paper.edu.cn/en_releasepaper/content/4761606
10. Research on Deep Learning Stock Selection Method Based on Trend Decomposition Algorithm
Wu Cheng-Hui, Zhang Hong-Jian
Computer Science and Technology, 20 June 2023
Abstract: Some studies using machine learning and deep learning models have not fully explored the indicator features that affect stock price trends, resulting in significant prediction errors and making it difficult to apply the predictions directly to obtain excess returns. It is therefore necessary to establish an efficient deep learning model based on stock characteristics and to fully explore the factors that affect stock price trends, so as to improve prediction accuracy. Addressing the shortcomings of traditional stock trend prediction, this paper constructs an LSTM-BP neural network tuned with Beetle Antennae Search ("tianniuxu") optimization for stock price prediction, and a PCA-BP neural network model tuned with particle swarm optimization, based on multi-angle correlation analysis, for trend prediction. Taking both the price-prediction and trend-prediction dimensions into account, a deep quantitative stock selection method is constructed. In this process, a stock trend decomposition judgment algorithm is proposed to identify and classify the stock data, improving data utilization and building a new dataset, and particle swarm optimization is used to tune the model parameters. Strategy backtesting shows that, compared with other deep learning strategies and benchmark strategies, the proposed stock selection method significantly improves the win rate and returns. This indicates that the model has good predictive ability and application potential and can provide effective decision support for stock investment.
To cite this article: Wu Cheng-Hui, Zhang Hong-Jian. Research on Deep Learning Stock Selection Method Based on Trend Decomposition Algorithm [OL]. [20 June 2023] http://en.paper.edu.cn/en_releasepaper/content/4760906
© 2003-2012 Sciencepaper Online, unless otherwise stated.