There are 322 papers published in this subject since this site started.
1. MLTN: Meta-Learning Tower Network for Cold-Start Recommendation
LOU Si-Yuan, WANG Yu-Long
Computer Science and Technology, 20 January 2021
Abstract: Cold-start recommendation refers to the task of recommending for new users and new items, and much work has been devoted to this problem. Model-agnostic meta-learning (MAML) is a popular recent paradigm for training models that learn quickly and generalize well. The key idea underlying MAML is to train the model's initial parameters such that the model reaches maximal performance on a new task after its parameters have been updated through one or more gradient steps computed from a small amount of data for that task. Inspired by this idea, we treat cold-start recommendation as a few-shot meta-learning problem and propose the meta-learning tower network (MLTN). We then formalize a task for each user and train the model's parameters with meta-learning optimization. Extensive experiments on both industrial and public datasets demonstrate the superiority of MLTN.
To cite this article: LOU Si-Yuan, WANG Yu-Long. MLTN: Meta-Learning Tower Network for Cold-Start Recommendation [OL]. [20 January 2021] http://en.paper.edu.cn/en_releasepaper/content/4753511
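The MAML training scheme the abstract describes — adapt the initialization with a gradient step per task, then move the initialization toward parameters that perform well after adaptation — can be sketched in miniature. This is a generic first-order MAML illustration on a one-parameter linear model, not code from the paper; `inner_update`, `maml_meta_step`, and all constants are illustrative.

```python
import numpy as np

def inner_update(w, xs, ys, lr=0.1):
    # One gradient step on squared error for a linear model y = w * x.
    grad = np.mean(2 * (w * xs - ys) * xs)
    return w - lr * grad

def maml_meta_step(w, tasks, inner_lr=0.1, meta_lr=0.01):
    # First-order MAML sketch: adapt on each task's support set, then move
    # the initialization toward post-adaptation query performance.
    meta_grad = 0.0
    for (xs, ys, xq, yq) in tasks:
        w_adapted = inner_update(w, xs, ys, inner_lr)
        # First-order approximation: the query-loss gradient at the adapted
        # parameters is used directly as the meta-gradient.
        meta_grad += np.mean(2 * (w_adapted * xq - yq) * xq)
    return w - meta_lr * meta_grad / len(tasks)
```

With tasks drawn from slopes 2 and 4, repeated meta-steps drive the initialization toward 3, the point from which one inner step reaches either task fastest.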
2. Adaptive Margin of Triplet-Center Loss for Deep Metric Learning
YAO Li, ZHANG Bin
Computer Science and Technology, 06 January 2021
Abstract: In the family of pair-based loss functions, most members require manually tuned, uniform thresholds between pairs to optimize the network parameters. However, fixing these hyper-parameters is unreasonable, since any two classes have a different degree of similarity. Moreover, tuning the hyper-parameters for each task to find suitable values costs considerable time and effort. This paper therefore proposes a novel loss, the adaptive margin of triplet-center loss (AMTCL), which learns a specific margin for each class center while keeping classes separated, enhancing the discriminative power of features, and lightening the tuning burden. The proposed AMTCL obtains state-of-the-art performance on four image retrieval benchmarks. Without bells and whistles, the proposed loss needs only a few lines of code and can be easily implemented in existing networks.
To cite this article: YAO Li, ZHANG Bin. Adaptive Margin of Triplet-Center Loss for Deep Metric Learning [OL]. [6 January 2021] http://en.paper.edu.cn/en_releasepaper/content/4753303
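The general shape of a triplet-center loss with per-class margins — pull each feature toward its own class center, push it at least a class-specific margin away from the nearest other center — might look like the numpy sketch below. This is a hedged reconstruction of the generic idea, not the paper's AMTCL: `amtcl_loss` and its arguments are illustrative, and the margins here are fixed inputs rather than learned parameters.

```python
import numpy as np

def amtcl_loss(features, labels, centers, margins):
    # Triplet-center-style hinge with a class-specific margin (sketch):
    # d_pos = squared distance to the sample's own class center,
    # d_neg = squared distance to the nearest other class center.
    total = 0.0
    for f, y in zip(features, labels):
        d_pos = np.sum((f - centers[y]) ** 2)
        d_neg = min(np.sum((f - centers[j]) ** 2)
                    for j in range(len(centers)) if j != y)
        total += max(0.0, d_pos + margins[y] - d_neg)
    return total / len(features)
```

The hinge is zero once a sample sits on its center and the nearest rival center is farther than that class's margin; enlarging a class's margin re-activates the loss for its samples.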
3. A Dual-Attentive and Hybrid Word-Character Model for Chinese Short Text Summarization
Li Yufeng, Xu Weiran
Computer Science and Technology, 24 December 2020
Abstract: Automatic text summarization is an important NLP task, comprising extractive and abstractive methods. Among many languages, Chinese has special properties, such as rich character-level semantics and flexible abbreviation; insufficient training samples are a further problem. In this paper, we propose a dual-attentive, hybrid word-character model for Chinese text summarization. The hybrid word-character approach (HWC) preserves the advantages of both word-based and character-based representations, which suits the Chinese language well. The extractive and abstractive methods are combined to accurately capture the key information and the essence of articles with fewer supervised samples. We evaluate our model with the ROUGE metric on the widely used Chinese dataset LCSTS2.0. The experimental results show that the model is very effective.
To cite this article: Li Yufeng, Xu Weiran. A Dual-Attentive and Hybrid Word-Character Model for Chinese Short Text Summarization [OL]. [24 December 2020] http://en.paper.edu.cn/en_releasepaper/content/4753279
4. An improved Faster R-CNN network for aeroengine fuse fracture detection
Liao Minjie, Bo Lin, Wu Xialing, Liu Qunyang, Wu Wenhong
Computer Science and Technology, 13 December 2020
Abstract: To meet the needs of aeroengine fuse fracture detection in practical applications, an improved Faster R-CNN small-target detection network is proposed. First, an FPN feature pyramid is added to improve the extraction of small-target features; then ROI Align replaces ROI pooling to reduce the loss of small-target feature information. Experiments on a fuse fracture dataset show that the improved network exceeds Faster R-CNN by 5.76% in mAP. The results indicate that the improved network is more capable and has practical prospects for computer-vision-based aeroengine fuse fracture detection.
To cite this article: Liao Minjie, Bo Lin, Wu Xialing, et al. An improved Faster R-CNN network for aeroengine fuse fracture detection [OL]. [13 December 2020] http://en.paper.edu.cn/en_releasepaper/content/4753217
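ROI Align's advantage over ROI pooling for small targets is that it samples the feature map at continuous coordinates instead of quantizing box boundaries to the integer grid. A minimal single-channel sketch of that mechanism follows (one sample at each output-bin centre; production implementations such as torchvision's take several samples per bin and average). All names here are illustrative, not the paper's code.

```python
import numpy as np

def bilinear(fm, y, x):
    # Bilinearly interpolate feature map fm at a continuous (y, x) location.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, fm.shape[0] - 1), min(x0 + 1, fm.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * fm[y0, x0] + (1 - wy) * wx * fm[y0, x1]
            + wy * (1 - wx) * fm[y1, x0] + wy * wx * fm[y1, x1])

def roi_align(fm, roi, out_size=2):
    # roi = (y1, x1, y2, x2) in continuous feature-map coordinates.
    # No coordinate quantization at any step, which is what preserves
    # the few feature cells a small target occupies.
    y1, x1, y2, x2 = roi
    bh, bw = (y2 - y1) / out_size, (x2 - x1) / out_size
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = bilinear(fm, y1 + (i + 0.5) * bh, x1 + (j + 0.5) * bw)
    return out
```

On a linear ramp feature map the interpolated values land exactly on the ramp, which makes the behaviour easy to check by hand.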
5. Relation Extraction with Domain Adversarial Neural Network and Graphical Model
MA Kuo, ZHANG Xi
Computer Science and Technology, 23 July 2020
Abstract: People read comments on the web to learn about a product or news item, and the adjectives and nouns in these comments convey important information. When we extract the adjectives and nouns from comments, determining whether a genuine relationship holds between an adjective and a noun greatly helps us understand the comment. This paper focuses on extracting such word pairs and on using transfer learning to extract them more quickly and accurately. The relationship within an adjective-noun pair may change across domains, so this paper takes the different domains into account when identifying whether a pair is related. We propose a domain adversarial neural network approach aided by a graphical model, DANN-G. The method models both the relationships between bags and those within bags, and thus reduces the noise that distant supervision introduces into common relation extraction methods. Our model improves results on five major Amazon datasets.
To cite this article: MA Kuo, ZHANG Xi. Relation Extraction with Domain Adversarial Neural Network and Graphical Model [OL]. [23 July 2020] http://en.paper.edu.cn/en_releasepaper/content/4752560
6. Medical Image Segmentation based on Octave Convolution
Zhang Qiong, Tan Guanghua
Computer Science and Technology, 10 June 2020
Abstract: Medical image segmentation is a vital research field in computer vision, in which instant and accurate segmentation is of great importance. Deep-learning-based image segmentation can be described as an encoder-decoder architecture, of which U-Net is the most classic existing model. However, U-Net cannot resolve the blurred-boundary problem when predicting segmentation results for high-resolution images. This paper therefore proposes a deep learning method based on boundary information. Octave convolution is adopted to decompose the features into a low-frequency and a high-frequency component: the low spatial frequency component yields the segmentation of the smoothly changing structures in the original image, while the high spatial frequency component yields the segmentation of the rapidly changing fine details, and the fine-detail segmentation then serves as a constraint. The constraint is realized by concatenating the smooth-structure segmentation with the fine-detail segmentation; the segmentation of the whole image is obtained by feeding the concatenation into a convolutional layer for class prediction. The paper also addresses class imbalance in multi-class segmentation by giving more weight to the rare classes. Because the approach combines Octave convolution with a U-Net-style encoder-decoder, it is called Oct-UNet. The proposed method not only achieves better results than U-Net but also has fewer parameters, and the experiments conducted verify its effectiveness.
To cite this article: Zhang Qiong, Tan Guanghua. Medical Image Segmentation based on Octave Convolution [OL]. [10 June 2020] http://en.paper.edu.cn/en_releasepaper/content/4752349
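The low/high-frequency split that Octave convolution builds on can be illustrated without the convolution weights themselves: pool to get a low-frequency map at half resolution, and keep the full-resolution residual as the high-frequency part. This is a sketch under those assumptions (2x2 average pooling down, nearest-neighbour up), not the paper's implementation; `octave_split` is an illustrative name.

```python
import numpy as np

def octave_split(fm):
    # Low-frequency component: half-resolution 2x2 block averages.
    h, w = fm.shape
    low = fm.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # High-frequency component: the full-resolution residual left after
    # the low-frequency part is upsampled back (nearest neighbour).
    low_up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    high = fm - low_up
    return low, high
```

By construction the decomposition is lossless: upsampling the low part and adding the high part recovers the input, so the two streams can be processed at different resolutions without discarding information.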
7. Residual Dilated Attention for Semantic Segmentation of Traffic Scene Understanding
Haibo Fan, Zulong Diao, Dafang Zhang
Computer Science and Technology, 21 May 2020
Abstract: In recent years, convolutional neural networks have achieved remarkable success in semantic segmentation for traffic scene understanding. The main open problems in the field are as follows: 1) repeated pooling and downsampling operations reduce the resolution of traffic images in convolutional networks, losing abundant spatial information and degrading segmentation performance; 2) traffic images contain many objects of different scales, and accurately recognizing and segmenting these multi-scale objects is another key problem. To handle these problems, this paper proposes an image semantic segmentation method based on Residual Dilated Attention. The method uses a spatial CNN to extract high-level semantic information, uses the proposed module to capture low-level semantic information, and follows designed sampling rules to set appropriate and effective sampling rates, aggregating multi-scale context information while maintaining high-resolution feature maps. Finally, a fusion module is designed to effectively fuse the results generated by the spatial CNN and the Residual Dilated Attention. A series of experiments on the CULane and CamVid traffic datasets achieves competitive results, proving the effectiveness of the proposed method.
To cite this article: Haibo Fan, Zulong Diao, Dafang Zhang. Residual Dilated Attention for Semantic Segmentation of Traffic Scene Understanding [OL]. [21 May 2020] http://en.paper.edu.cn/en_releasepaper/content/4752172
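The dilated sampling underlying modules like Residual Dilated Attention enlarges the receptive field without pooling: kernel taps are applied every `dilation` positions instead of at adjacent ones, so resolution is preserved while context grows. A 1-D numpy sketch of that sampling rule (illustrative only; the paper operates on 2-D feature maps with learned kernels):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    # Valid-mode 1-D convolution with gaps of `dilation` between taps.
    # Receptive field spans (k - 1) * dilation + 1 inputs per output,
    # yet the output keeps the input's sample spacing.
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])
```

With a two-tap kernel and dilation 2, each output mixes inputs three positions apart, which is how stacked dilated layers aggregate multi-scale context at full resolution.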
8. Research on Osteoporosis Risk Assessment Based on Semi-supervised Machine Learning
LEI Lu, LUO Tao
Computer Science and Technology, 12 May 2020
Abstract: This paper proposes a semi-supervised machine learning method for osteoporosis risk assessment. Existing assessment models suffer from low accuracy and cannot exploit large amounts of unlabeled data. To improve diagnostic accuracy, the method jointly considers osteoporosis-related questionnaire data and bone image data and fuses the multi-modal features extracted from them: feature engineering and Word2vec extract numerical and text features from the questionnaires, respectively, and a CNN extracts image features from BMD images. Given the difficulty of obtaining labeled medical data, the paper builds a self-training semi-supervised model based on XGBoost that classifies and evaluates osteoporosis using both labeled and unlabeled data for better generalization. In addition, because the questionnaire data contain many outliers and missing values, the paper removes outliers with the DBSCAN algorithm and proposes an improved PKNN algorithm to impute the missing data. Experimental results show that the proposed semi-supervised method achieves an accuracy of 0.78 in osteoporosis risk assessment and holds clear advantages over other methods.
To cite this article: LEI Lu, LUO Tao. Research on Osteoporosis Risk Assessment Based on Semi-supervised Machine Learning [OL]. [12 May 2020] http://en.paper.edu.cn/en_releasepaper/content/4752070
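The self-training loop the abstract describes — fit on labeled data, pseudo-label the unlabeled points the model is confident about, absorb them, and refit — can be sketched generically. XGBoost is the paper's base learner; to stay self-contained, this sketch substitutes a nearest-centroid classifier and a distance threshold for the confidence test, so every name and constant here is illustrative rather than the paper's.

```python
import numpy as np

def nearest_centroid_predict(X, centroids):
    # Predict the closest centroid for each row of X (labels assumed 0..k-1);
    # also return the distance used as a confidence proxy.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1), d.min(axis=1)

def self_train(Xl, yl, Xu, n_rounds=5, conf_radius=1.0):
    # Self-training sketch: each round fits centroids on the current
    # labeled pool, pseudo-labels confident unlabeled points, and
    # moves them into the pool.
    Xl, yl = Xl.copy(), yl.copy()
    for _ in range(n_rounds):
        centroids = np.array([Xl[yl == c].mean(axis=0) for c in np.unique(yl)])
        if len(Xu) == 0:
            break
        pred, dist = nearest_centroid_predict(Xu, centroids)
        confident = dist < conf_radius
        if not confident.any():
            break
        Xl = np.vstack([Xl, Xu[confident]])
        yl = np.concatenate([yl, pred[confident]])
        Xu = Xu[~confident]
    return centroids
```

Confident points close to a cluster are absorbed first, pulling the decision boundary toward the unlabeled data's structure, which is the generalization benefit the abstract claims for the XGBoost version.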
9. 2D to 3D Depth Map Prediction Based on Image Segmentation
QIAN Zhixuan, WANG Chensheng, YANG Guang, LI Yangguang, JING Xueliang, LI Yanjiang
Computer Science and Technology, 26 April 2020
Abstract: This paper proposes an algorithm to convert 2D road video into 3D video. In this kind of video the foreground is the part of most concern, and accurately extracting foreground objects from the background is the key to obtaining the depth map. A machine-learning-based graph-cut algorithm is used to extract the foreground, and a background depth model is constructed from the scene structure to obtain the background depth map. On top of the background depth map, foreground objects are assigned depths according to their distance from the camera. Finally, the background and foreground depth maps are combined into a complete depth map.
To cite this article: QIAN Zhixuan, WANG Chensheng, YANG Guang, et al. 2D to 3D Depth Map Prediction Based on Image Segmentation [OL]. [26 April 2020] http://en.paper.edu.cn/en_releasepaper/content/4751786
10. An attribute reduction algorithm based on particle swarm optimization
Liu Jingyu
Computer Science and Technology, 09 April 2020
Abstract: With the explosive growth of data, the attribute dimensions of datasets grow ever higher and their volumes larger, which increases the training overhead and lowers the prediction accuracy of machine learning algorithms. Moreover, most current attribute reduction algorithms reduce one attribute at a time, which makes the global optimum hard to reach and incurs heavy computation. This paper therefore proposes an attribute reduction algorithm based on particle swarm optimization (ARPSO). The algorithm designs an importance function for attribute sets based on the variable-precision rough set, uses particle swarm optimization to construct the search space, optimizes the attribute set globally, and removes redundant attributes so as to reduce the training overhead of machine learning algorithms and improve their prediction accuracy. Experimental results show that the attribute reduction performance of ARPSO is significantly better than that of common attribute reduction algorithms, verifying its effectiveness.
To cite this article: Liu Jingyu. An attribute reduction algorithm based on particle swarm optimization [OL]. [9 April 2020] http://en.paper.edu.cn/en_releasepaper/content/4751524
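PSO-based attribute reduction typically encodes each particle as a 0/1 mask over attributes and lets a fitness function score the induced subset. The sketch below is a generic binary PSO in that style, not the paper's ARPSO (which scores subsets with a variable-precision rough-set importance function); all constants and names are illustrative.

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=10, n_iters=50, seed=0):
    # Binary PSO sketch: particles are 0/1 attribute masks; velocities are
    # squashed through a sigmoid to give per-bit selection probabilities.
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (n_particles, n_bits))
    vel = rng.normal(0, 1, (n_particles, n_bits))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        # Inertia 0.7 plus cognitive/social pulls toward personal/global bests.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest
```

A fitness function for attribute reduction would reward subsets that preserve classification quality while penalizing subset size; here any callable mapping a mask to a score can be plugged in.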
© 2003-2012 Sciencepaper Online, unless otherwise stated.