There are 97 papers published in this subject since this site started.
1. Optimized LightGBM-Based Survival Prediction Model for ENKTL
Wenke Lian, Yu Song, Dong Dong, Wenxiang Yang, Huafeng Zeng, Shibiao Xu, Li Guo, Fengyang An, Xuemei Zhu
Computer Science and Technology, 08 March 2024
Abstract: With the rapid advancement of artificial intelligence and machine learning, particularly in fields such as image recognition, time series forecasting, disease diagnostics, and certain tumor prognostics, these technologies have demonstrated clear advantages over traditional statistical methods, offering vital support for developing more precise predictive models. However, challenges arise from frequent omissions in clinical datasets and the intricate relationships between data and survival outcomes. This study addresses the complex survival relationships in ENKTL by comparing discrete-time and continuous-time survival prediction methodologies. It employs an HSIC-Lasso-optimized LightGBM algorithm for discrete-time survival forecasting, successfully predicting ENKTL patient survival rates. By evaluating the impact of various interpolation techniques on the predictive accuracy of models dealing with missing values, this work enhances the precision of discrete-time survival forecasts. The findings not only offer fresh insights and strategies for modeling the complex survival dynamics of extranodal NK/T-cell lymphoma but also strengthen technical support in this medical domain, contributing to more accurate disease prognosis and equipping physicians with more targeted treatment options.
To cite this article: Wenke Lian, Yu Song, Dong Dong, et al. Optimized LightGBM-Based Survival Prediction Model for ENKTL [OL]. [8 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762512
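The discrete-time formulation this abstract relies on can be made concrete with a small sketch. The standard construction (illustrative, not code from the paper; function names are assumptions) expands each patient's follow-up into one training row per at-risk interval, fits a binary classifier such as LightGBM to the per-interval hazard, and recovers the survival curve as a running product of (1 - hazard):

```python
def expand_discrete_time(last_interval, event):
    """Expand one patient's (last observed interval, event flag) into
    per-interval rows (interval index, event label) for hazard training.
    The label is 1 only in the final interval, and only if the event occurred."""
    return [(j, 1 if (event and j == last_interval) else 0)
            for j in range(last_interval + 1)]

def survival_curve(hazards):
    """Discrete-time survival S(k) = prod_{j<=k} (1 - h_j) computed from
    per-interval hazard estimates (e.g. classifier probabilities)."""
    curve, s = [], 1.0
    for h in hazards:
        s *= 1.0 - h
        curve.append(s)
    return curve

# A patient observed through interval 2 who then experienced the event:
rows = expand_discrete_time(2, True)      # [(0, 0), (1, 0), (2, 1)]
# A constant per-interval hazard of 0.2 yields a geometric survival curve:
surv = survival_curve([0.2, 0.2, 0.2])
```

Censored patients simply get all-zero labels, which is what lets an off-the-shelf classifier stand in for a survival model.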
2. DI-CFS: A Multi-Phase Feature Selection Method for Dimensionality Reduction
Zhuo Liu, Chensheng Wang
Computer Science and Technology, 04 March 2024
Abstract: Feature selection is critical in deep learning, aiming to identify the most informative features for dimensionality reduction. In this paper, we propose a novel multi-phase feature selection method, Discrimination Improved Correlation-based Feature Selection (DI-CFS), which consists of three modules: a Discrimination Filtering Formula, the Isolation Forest (IF) algorithm, and the Correlation-based Feature Selection (CFS) method. The Discrimination Filtering Formula filters out invalid and insignificant features by calculating their discrimination values. The IF algorithm removes redundant features, which are more easily partitioned. The point-biserial correlation coefficient, rather than the Pearson correlation coefficient, is used to calculate feature weights, and the weights are then evaluated by the CFS method. Experimental results show that the DI-CFS method is effective.
To cite this article: Zhuo Liu, Chensheng Wang. DI-CFS: A Multi-Phase Feature Selection Method for Dimensionality Reduction [OL]. [4 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762431
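The point-biserial coefficient the abstract substitutes for Pearson's r has a simple closed form. A minimal sketch (not the authors' code; in DI-CFS the binary variable would be the class label):

```python
import math

def point_biserial(x, y):
    """Point-biserial correlation between a continuous feature x and a
    binary label y (0/1).  Numerically identical to Pearson's r on the
    same pair, but expressed through the two group means."""
    n = len(x)
    g1 = [xi for xi, yi in zip(x, y) if yi == 1]
    g0 = [xi for xi, yi in zip(x, y) if yi == 0]
    mean = sum(x) / n
    std = math.sqrt(sum((xi - mean) ** 2 for xi in x) / n)  # population std
    m1 = sum(g1) / len(g1)
    m0 = sum(g0) / len(g0)
    return (m1 - m0) / std * math.sqrt(len(g1) * len(g0) / n ** 2)

# A feature that separates the two classes fairly cleanly:
r = point_biserial([1.0, 2.0, 3.0, 4.0], [0, 0, 1, 1])  # ≈ 0.894
```

Because it reduces to Pearson's r for a dichotomous variable, it slots directly into a CFS merit score wherever a feature-class correlation is needed.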
3. Lightweight Deep Neural Network Model With Padding-free Downsampling
LIU Dengfeng, GUO Xiaohe, WANG Ning, WU Qin
Computer Science and Technology, 25 January 2024
Abstract: Deep neural networks have achieved impressive performance in image classification tasks. However, due to limitations in hardware resources, including computing units and storage capacity, deploying these networks directly on resource-constrained devices such as mobile and edge devices is challenging. While lightweight network models have made significant advances, the downsampling stage has received little attention. Because the feature map is reused multiple times, reducing its size during the downsampling stage not only reduces the computational cost of the downsampling module itself but also lightens the computational burden of subsequent stages. This paper addresses this gap by proposing a padding-free downsampling module that effectively reduces computational costs and can be seamlessly integrated into various deep learning models. Furthermore, we introduce a hybrid stem layer to obtain competitive accuracy. Extensive experiments were conducted on the CIFAR-100, Stanford Dogs, and ImageNet datasets. On CIFAR-100, the results show that the proposed module reduces computational costs by approximately 20% and improves inference speed on resource-constrained devices by around 10%.
To cite this article: LIU Dengfeng, GUO Xiaohe, WANG Ning, et al. Lightweight Deep Neural Network Model With Padding-free Downsampling [OL]. [25 January 2024] http://en.paper.edu.cn/en_releasepaper/content/4761964
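The abstract does not specify the module's internals, so as a hedged illustration of the general idea only (an assumption, not the paper's module): the simplest padding-free downsampling is a 2x2, stride-2 pooling that drops any odd trailing row or column instead of zero-padding it, so no synthetic border values ever enter the computation:

```python
def downsample_no_pad(fmap):
    """2x2, stride-2 max pooling with no padding on a 2-D feature map
    (list of rows).  An odd trailing row/column is discarded rather than
    zero-padded, so the output is floor(H/2) x floor(W/2)."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, w - 1, 2)]
            for i in range(0, h - 1, 2)]

pooled = downsample_no_pad([[1, 2, 9],
                            [3, 4, 9],
                            [9, 9, 9]])   # [[4]]: the odd edge is dropped
```

Skipping the padded border is exactly where the saving comes from: every output value is computed from real activations, and the halved map size propagates through all later stages.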
4. Face Image Animation Based on Detail Feature Restoration
ZHAO Runyuan, WANG Chun
Computer Science and Technology, 03 April 2023
Abstract: With the development of the mobile Internet and deep learning technology, short-video apps have become among the most common applications on people's phones, and secondary creation of videos based on deep learning algorithms has become a typical application scenario. For video-to-image motion transfer in the face scene, existing unsupervised algorithms suffer from problems such as over-distortion of the face and image. To deal with these problems, this paper designs a face motion transfer model that combines face reenactment with optimized motion modeling. It comprises three parts: the face module reconstructs the target face based on the 3DMM and FLAME algorithms and outputs the face reenactment and face motion field; the motion module outputs the predicted image optical flow and a multi-scale occlusion map through a 2D interpolation algorithm; and the generator extracts features from the original image and generates the face image in the target pose under the guidance of the preceding outputs. In video reconstruction tests on the VoxCeleb1 dataset, the proposed algorithm restores detailed facial features better and performs better on related metrics.
To cite this article: ZHAO Runyuan, WANG Chun. Face Image Animation Based on Detail Feature Restoration [OL]. [3 April 2023] http://en.paper.edu.cn/en_releasepaper/content/4759987
5. A Semantic Segmentation Model for Top-Down View Image Based on Images from Multiple Vehicle On-board Cameras
GE Mengcheng, SHI Yan
Computer Science and Technology, 10 March 2023
Abstract: Comprehensive environmental perception is crucial for autonomous driving. However, due to occlusion, current intelligent vehicle perception algorithms only recognize targets within the vehicle's perception area as far as possible, without predicting or annotating areas obscured by foreground objects. This limits the intelligent driving system's comprehensive perception and understanding of the driving environment. This paper proposes a semantic segmentation model that takes image data collected by cameras surrounding the intelligent vehicle as input. The model uses spatial transformer networks for perspective transformation and the DeepLabv3+ architecture as the backbone of the semantic segmentation network, outputting semantic segmentation results of the vehicle's driving environment from a bird's-eye view, including the obscured areas. In addition, this paper does not rely on manually labeled data but collects datasets through the Carla simulator and uses a designed ray-localization method for subsequent annotation. Trained on the collected dataset, the proposed method achieves an mIoU score of 71.49%, better than traditional methods based on inverse perspective transformation and fully connected network models.
To cite this article: GE Mengcheng, SHI Yan. A Semantic Segmentation Model for Top-Down View Image Based on Images from Multiple Vehicle On-board Cameras [OL]. [10 March 2023] http://en.paper.edu.cn/en_releasepaper/content/4759540
6. WSN: Weighted Segmentation Network for Scene Text Detection
Wei Wenhu, Wang Yulong
Computer Science and Technology, 10 November 2022
Abstract: Detecting arbitrarily shaped scene text is an extremely challenging task. Existing segmentation-based methods have difficulty distinguishing adjacent text instances and achieve unsatisfactory results. To solve this problem, this study proposes a novel weighted segmentation network (WSN) that accurately detects text centers and distinguishes adjacent text instances. In the WSN, a weighted segmentation map is generated by assigning different weights to each pixel in the text area. Features are then extracted from the weighted segmentation map, and text centers are detected accurately to separate adjacent text instances. To train the detector on the weighted segmentation map, we designed an annotation generation method based on polygon scaling. Extensive experiments on two benchmark datasets, Total-Text and CTW-1500, which comprise highly curved text in natural scene images, demonstrate that WSN achieves strong performance among segmentation-based methods.
To cite this article: Wei Wenhu, Wang Yulong. WSN: Weighted Segmentation Network for Scene Text Detection [OL]. [10 November 2022] http://en.paper.edu.cn/en_releasepaper/content/4758375
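The polygon-scaling step used to generate labels is not detailed in the abstract; one simple stand-in (an illustrative assumption, not the authors' exact formula, which segmentation-based detectors often base on an area/perimeter offset instead) is to shrink each text polygon toward its vertex centroid, producing a smaller center region that keeps adjacent instances apart:

```python
def shrink_polygon(points, ratio):
    """Scale a polygon's vertices toward their centroid by `ratio`
    (0 < ratio < 1), producing a smaller 'text center' label region
    that keeps adjacent text instances separated."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [(cx + (x - cx) * ratio, cy + (y - cy) * ratio) for x, y in points]

# A 2x2 square shrunk to half size around its center (1, 1):
inner = shrink_polygon([(0, 0), (2, 0), (2, 2), (0, 2)], 0.5)
```

Pixels inside the shrunken polygon can then receive higher weights than the outer ring, which is the intuition behind a weighted segmentation map.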
7. Style Transfer Based on Enhanced Cycle Conditional GAN
ZHANG Ya-Meng, LI Li-Xiang
Computer Science and Technology, 28 March 2022
Abstract: Generative Adversarial Networks (GANs) have achieved great performance in several image generation and manipulation tasks, and image-to-image transfer is a trend in the field of computer vision. However, it remains challenging due to major drawbacks such as time- and computation-consuming training, mode collapse, and the lack of paired training data. To handle these limitations, we propose a novel model called Enhanced Cycle Conditional GAN (Enhanced CCGAN). Our model alleviates the lack of aligned paired data by computing a cycle consistency loss. It achieves representation disentanglement by using a content encoder and a style encoder with different architectures. In the content encoder, we use ResNet blocks to extract content features and realize multi-level feature fusion. We propose a semantic latent style loss function to ensure precise semantic consistency of style vectors. Furthermore, we use the $3\times3$ convolutional kernel popularized by VGG16, which greatly reduces computation while still performing well. Experimental results show that our model can produce images with high quality and diversity across several data domains and significantly outperforms state-of-the-art models.
To cite this article: ZHANG Ya-Meng, LI Li-Xiang. Style Transfer Based on Enhanced Cycle Conditional GAN [OL]. [28 March 2022] http://en.paper.edu.cn/en_releasepaper/content/4757077
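The cycle consistency loss the abstract invokes has the standard CycleGAN form: map an input to the other domain and back, then penalize the L1 distance to the original. A minimal sketch on flat feature vectors (the real loss operates on image tensors; this is a simplification for illustration):

```python
def cycle_consistency_loss(x, x_roundtrip):
    """Mean absolute (L1) difference between an input x and its
    round-trip reconstruction G(F(x)).  Driving this loss toward zero
    lets unpaired data substitute for aligned image pairs."""
    return sum(abs(a - b) for a, b in zip(x, x_roundtrip)) / len(x)

loss = cycle_consistency_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.5])
```

Because the penalty only compares an image with its own reconstruction, no aligned pair from the target domain is ever required, which is how the model sidesteps the paired-data drawback.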
8. A Neo Adversarial Examples Defense Method Through Spatial Transformer Networks
Li PengBo, Zhang DongMei
Computer Science and Technology, 16 March 2022
Abstract: In recent years, deep neural networks (DNNs) have achieved high accuracy in image recognition tasks. However, they have been shown to be vulnerable to adversarial examples. This work proposes a spatial transformation method to defend against adversarial examples: spatial transformer networks (STNs) are added before the classification model. The STNs use an attention mechanism to extract the classification model's area of interest and transform it into another vector space. The spatial transformation preserves the basic structural information of the original images while mitigating the effect of adversarial perturbations. Experiments show that the proposed method is effective at defending against both single-step and iterative attacks. Combining the proposed method with an adversarially trained model achieves a better defense against single-step attacks, while combining it with the randomization defense method achieves a better defense in a fully white-box scenario.
To cite this article: Li PengBo, Zhang DongMei. A Neo Adversarial Examples Defense Method Through Spatial Transformer Networks [OL]. [16 March 2022] http://en.paper.edu.cn/en_releasepaper/content/4757007
9. Wheat Kernel Quality Testing Based on Improved YOLOv5
Liu Shiyuan, Yang Huihua
Computer Science and Technology, 15 March 2022
Abstract: As the health and safety of wheat kernels is an important part of food security, rapid and accurate detection of wheat kernel quality has long been a focus of attention. Several detection methods have been proposed in recent years, but these algorithms cannot meet the requirements of speed and accuracy simultaneously. To meet both, we propose YOLOv5s_BR2, a model based on an improved YOLOv5. A dataset of 7844 wheat kernels is constructed, including mildewed, gibberella-infected, germinated, and normal kernels. Using this dataset, we analyze object detection algorithms for wheat kernel quality detection. Through optimizations such as decoupling and de-branching the Neck structure of the YOLOv5 model, we obtain the YOLOv5s_BR2 model. Experimental results show that YOLOv5s_BR2 achieves 95.5% accuracy on the four kinds of wheat kernels. The detection speed on a GTX 1050 graphics card reaches 32.8 FPS, a 17% improvement over YOLOv5s, and detecting 100 grams of wheat (2500 kernels) takes 19 seconds. YOLOv5s_BR2 meets the efficiency, accuracy, and reliability requirements of a wheat kernel quality detection system.
To cite this article: Liu Shiyuan, Yang Huihua. Wheat Kernel Quality Testing Based on Improved YOLOv5 [OL]. [15 March 2022] http://en.paper.edu.cn/en_releasepaper/content/4756731
10. 3D object detection-based vehicle localization system for bus stations
Huang Xingbin, Wen Zhigang
Computer Science and Technology, 03 March 2022
Abstract: Vehicle localization technology is used ever more widely in daily life. 3D object detection methods based on LiDAR sensors have achieved success in detection accuracy, but LiDAR sensors are too expensive for wide deployment. Image-based 3D object detection methods reduce the cost but often perform poorly due to the lack of depth information. In this paper, we propose a 3D object detection framework named Depth-Guided and Depth-Aware (DGDA), which simultaneously utilizes the perspective information of RGB images and the depth information of depth maps for 3D detection. Experiments on the KITTI dataset show that DGDA outperforms most existing image-based 3D object detection algorithms. It is worth noting that traditional image-based 3D object detection techniques are only applied to images captured from a driving perspective. To apply 3D detection to vehicle localization in bus station surveillance video, we also propose an angle-conversion localization algorithm and combine it with the DGDA framework to design an end-to-end vehicle localization system for bus stations.
To cite this article: Huang Xingbin, Wen Zhigang. 3D object detection-based vehicle localization system for bus stations [OL]. [3 March 2022] http://en.paper.edu.cn/en_releasepaper/content/4756382
© 2003-2012 Sciencepaper Online, unless otherwise stated.