There are 939 papers published in this subject since this site started.
1. Intensity-driven bounding box supervised brain white matter hyperintensities segmentation algorithm
Cheng Ao
Computer Science and Technology, 08 March 2024
Abstract: White matter hyperintensities (WMHs) serve as a crucial imaging feature for assessing cerebral white matter abnormalities, and accurate segmentation of WMHs is of significant importance for tracking disease progression, evaluating treatment effects, and studying various neurological and geriatric disorders. Presently, deep learning-based methods for WMHs segmentation rely heavily on extensive pixel-level annotated training data. However, the irregular shapes, random distribution, and fuzzy boundaries characteristic of WMHs make acquiring precise pixel-level labels prohibitively costly. To mitigate the reliance on pixel-level annotations, this paper introduces an intensity-driven bounding box supervised brain white matter hyperintensities segmentation algorithm (IDBB), which substitutes weak bounding box labels for precise labels during model training. IDBB employs an intensity-based adaptive thresholding method to generate pixel-level pseudo-labels from bounding box labels and trains the segmentation network using both Dice loss and cross-entropy loss. Additionally, this paper introduces a WMHs segmentation dataset containing bounding box labels of various sizes, serving as a benchmark for bounding box supervised WMHs segmentation tasks. Results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) comparable to 90% of that of fully supervised methods, surpassing other weakly supervised approaches. Experimental validation illustrates the effectiveness of the proposed method in reducing annotation costs while achieving satisfactory segmentation performance.
To cite this article: Cheng Ao. Intensity-driven bounding box supervised brain white matter hyperintensities segmentation algorithm [OL]. [8 March 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762582
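As a rough illustration of the training recipe summarized in the abstract above, the sketch below generates pixel-level pseudo-labels inside each bounding box with a percentile-based intensity threshold and supervises the network with a combination of Dice loss and cross-entropy loss. This is a minimal PyTorch sketch under assumed conventions (2D slices, boxes given as (y0, x0, y1, x1), equal loss weights); the paper's exact adaptive thresholding rule and loss weighting are not reproduced here.

```python
import torch
import torch.nn.functional as F

def generate_pseudo_labels(intensity, boxes, percentile=90.0):
    """Inside each (y0, x0, y1, x1) box, mark voxels brighter than a
    percentile-based threshold as WMH foreground (pseudo-label)."""
    pseudo = torch.zeros_like(intensity)
    for (y0, x0, y1, x1) in boxes:
        patch = intensity[y0:y1, x0:x1]
        thr = torch.quantile(patch.flatten(), percentile / 100.0)
        pseudo[y0:y1, x0:x1] = (patch >= thr).float()
    return pseudo

def dice_loss(logits, target, eps=1e-6):
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def combined_loss(logits, pseudo_target, w_dice=0.5, w_ce=0.5):
    # Dice + binary cross-entropy against the intensity-derived pseudo-labels.
    ce = F.binary_cross_entropy_with_logits(logits, pseudo_target)
    return w_dice * dice_loss(logits, pseudo_target) + w_ce * ce
```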
2. Cold Start Mitigation Approach for Serverless Applications Based on Adaptive Container Pool
LI Zhuo, WANG Chun
Computer Science and Technology, 08 March 2024
Abstract: With the development of cloud computing technology, Serverless is becoming increasingly popular among developers as an emerging paradigm for building applications in the cloud. Serverless allows developers to focus on application logic without worrying about underlying server management. However, the performance of serverless applications is easily degraded by cold start issues. To mitigate the impact of cold starts, this paper proposes a cold start mitigation approach for serverless applications based on an adaptive container pool, which reduces the overall latency caused by cold starts. The approach optimizes cold start latency based on the internal structure of serverless applications, through analysis of the container pool mechanism and request history. Experiments show that this approach significantly improves the performance of serverless applications and the efficiency of cloud resource utilization.
To cite this article: LI Zhuo, WANG Chun. Cold Start Mitigation Approach for Serverless Applications Based on Adaptive Container Pool [OL]. [8 March 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762581
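To make the adaptive container pool idea concrete, here is a minimal sketch: warm containers are kept in a pool whose target size follows a moving window of recent request arrivals, so most requests avoid a cold start. The class name, the sizing heuristic and the placeholder container objects are illustrative assumptions, not the paper's actual design.

```python
import time
from collections import deque

class AdaptiveContainerPool:
    """Keeps warm containers; the target pool size follows a moving
    window of recent request arrivals (illustrative heuristic only)."""

    def __init__(self, window_s=60.0, headroom=1.2):
        self.window_s = window_s
        self.headroom = headroom
        self.arrivals = deque()   # timestamps of recent requests
        self.warm = []            # idle, pre-initialized containers

    def _cold_start(self):
        # Stands in for pulling an image and initializing a real container.
        return {"initialized_at": time.time()}

    def _target_size(self):
        now = time.time()
        while self.arrivals and now - self.arrivals[0] > self.window_s:
            self.arrivals.popleft()
        rate = len(self.arrivals) / self.window_s   # requests per second
        return max(1, int(rate * self.headroom))

    def _resize(self):
        target = self._target_size()
        while len(self.warm) > target:
            self.warm.pop()                          # let excess containers expire
        while len(self.warm) < target:
            self.warm.append(self._cold_start())     # pre-warm up to the target size

    def handle_request(self, invoke):
        self.arrivals.append(time.time())
        container = self.warm.pop() if self.warm else self._cold_start()
        result = invoke(container)                   # run the function in the container
        self.warm.append(container)
        self._resize()
        return result
```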
3. Optimized LightGBM-Based Survival Prediction Model for ENKTL
Wenke Lian, Yu Song, Dong Dong, Wenxiang Yang, Huafeng Zeng, Shibiao Xu, Li Guo, Fengyang An, Xuemei Zhu
Computer Science and Technology, 08 March 2024
Abstract: With the rapid advancements in artificial intelligence and machine learning, particularly in fields like image recognition, time series forecasting, disease diagnostics, and certain tumor prognostics, these technologies have demonstrated clear advantages over traditional statistical methods, offering vital support for developing more precise predictive models. However, challenges arise from frequent omissions in clinical datasets and the intricate relationships between data and survival outcomes. This study addresses the complex survival relationships in extranodal natural killer/T-cell lymphoma (ENKTL) by comparing discrete-time and continuous-time survival prediction methodologies. It employs an HSIC-Lasso optimized LightGBM algorithm for discrete-time survival forecasting, successfully predicting ENKTL patient survival rates. By evaluating the impact of various interpolation techniques on the predictive accuracy of models dealing with missing values, this work enhances the precision of discrete-time survival forecasts. The findings offer fresh insights and strategies for navigating the complex survival dynamics of extranodal nasal NK/T lymphoma and provide technical support in this medical domain, contributing to more accurate disease prognostics and equipping physicians with more targeted treatment options.
To cite this article: Wenke Lian, Yu Song, Dong Dong, et al. Optimized LightGBM-Based Survival Prediction Model for ENKTL [OL]. [8 March 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762512
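A common way to realize discrete-time survival prediction with a gradient-boosted classifier, in the spirit of the approach described above, is to expand each patient into one row per time interval and train LightGBM to predict the per-interval hazard; the survival curve is then the cumulative product of (1 - hazard). The sketch below shows this person-period formulation on toy data; the HSIC-Lasso feature selection step and the paper's exact setup are not reproduced.

```python
import numpy as np
import lightgbm as lgb

def to_person_period(X, times, events, n_intervals):
    """Expand (features, survival time, event flag) into one row per patient
    per interval; the label is 1 only in the interval where the event occurs."""
    rows, labels = [], []
    for x, t, e in zip(X, times, events):
        for k in range(min(t, n_intervals)):
            rows.append(np.concatenate([x, [k]]))        # interval index as a feature
            labels.append(1 if (k == t - 1 and e) else 0)
    return np.array(rows), np.array(labels)

# Hypothetical toy data: 4 covariates, survival time in intervals, event indicator.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
times = rng.integers(1, 9, size=200)
events = rng.integers(0, 2, size=200)

Xpp, ypp = to_person_period(X, times, events, n_intervals=8)
hazard_model = lgb.LGBMClassifier(n_estimators=200)
hazard_model.fit(Xpp, ypp)

def survival_curve(x, n_intervals=8):
    """S(k) = prod_{j <= k} (1 - hazard_j) from the predicted per-interval hazards."""
    feats = np.array([np.concatenate([x, [k]]) for k in range(n_intervals)])
    hazards = hazard_model.predict_proba(feats)[:, 1]
    return np.cumprod(1.0 - hazards)

print(survival_curve(X[0]))
```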
4. DI-CFS: A Multi-Phase Feature Selection Method for Dimensionality Reduction
Zhuo Liu, Chensheng Wang
Computer Science and Technology, 04 March 2024
Abstract: Feature selection is critical in deep learning, aiming to identify the most informative features for dimensionality reduction. In this paper, we propose a novel multi-phase feature selection method, Discrimination Improved Correlation-based Feature Selection (DI-CFS), which consists of three modules: a Discrimination Filtering Formula, the Isolation Forest (IF) algorithm, and the Correlation-based Feature Selection (CFS) method. In our method, the Discrimination Filtering Formula filters out invalid and insignificant features by calculating their discrimination values. The IF algorithm removes redundant features, which are more easily partitioned. The point-biserial correlation coefficient, rather than the Pearson correlation coefficient, is used to calculate the weights of different features, and the weights are evaluated by the CFS method. Experimental results show that the DI-CFS method is effective.
To cite this article: Zhuo Liu, Chensheng Wang. DI-CFS: A Multi-Phase Feature Selection Method for Dimensionality Reduction [OL]. [4 March 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762431
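The substitution of the point-biserial correlation for the Pearson correlation when weighting continuous features against a binary class label can be sketched with SciPy as follows; the toy data and the top-k selection rule are illustrative assumptions, not the full DI-CFS pipeline.

```python
import numpy as np
from scipy.stats import pointbiserialr

def feature_weights(X, y):
    """Weight each continuous feature by the absolute point-biserial
    correlation with the binary class label y (0/1)."""
    return np.array([abs(pointbiserialr(y, X[:, j])[0]) for j in range(X.shape[1])])

# Toy data: 100 samples, 5 continuous features, binary labels (all hypothetical).
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = rng.integers(0, 2, size=100)

w = feature_weights(X, y)
selected = np.argsort(w)[::-1][:3]   # keep the 3 most correlated features
print(w, selected)
```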
5. DTAME: A Unified Approach for ABAC Policy Mining and Efficient Evaluation Using Decision Trees
LAN Ze-Jun, GUAN Jian-Feng
Computer Science and Technology, 28 February 2024
Abstract: Attribute-Based Access Control (ABAC) has increasingly been chosen to replace traditional access control models due to its dynamism, flexibility and scalability. However, during the migration and deployment of ABAC policies, the key issue is how to mine an accurate and concise collection of access control policies and evaluate the policies quickly when an access request arrives. Previous studies have typically treated policy mining and policy evaluation separately: policy mining primarily focuses on the compactness of the policies themselves, while policy evaluation concentrates on the performance of policy matching. This lack of coordination means that the concise policies obtained through mining cannot maximize evaluation performance. To tackle this issue, this paper proposes a decision tree based ABAC policy mining and policy evaluation (DTAME) scheme that addresses both problems concurrently. Moreover, because some hotspot policy rules are accessed far more frequently than others in certain scenarios, the paper also optimizes the algorithm based on access control logs to maximize evaluation performance. Experimental results show that DTAME can enhance the performance of policy evaluation while ensuring that the mined policies remain compact and effective.
To cite this article: LAN Ze-Jun, GUAN Jian-Feng. DTAME: A Unified Approach for ABAC Policy Mining and Efficient Evaluation Using Decision Trees [OL]. [28 February 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762310
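A minimal sketch of the underlying idea, mining ABAC rules from access logs with a decision tree so that each root-to-leaf path acts as a candidate rule and evaluation becomes a single tree traversal, is shown below using scikit-learn. The attribute encoding and the toy log are assumptions for illustration, not DTAME's actual pipeline.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical, already-encoded access-control log:
# columns = [role_id, department_id, resource_sensitivity, working_hours]
X = np.array([[1, 0, 2, 1],
              [1, 0, 1, 1],
              [2, 1, 2, 0],
              [2, 1, 0, 1],
              [0, 0, 0, 1]])
y = np.array([1, 1, 0, 1, 0])   # 1 = permit, 0 = deny

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Each root-to-leaf path is a candidate ABAC rule, and evaluating a request
# is a single traversal of the tree instead of a scan over a flat rule list.
print(export_text(tree, feature_names=["role", "dept", "sensitivity", "working_hours"]))
```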
6. CPBNet: Concentrate, Parallel and Bimodal Network for Logistics Scene Text Detection and Recognition
MA Yu-Chen
Computer Science and Technology, 28 February 2024
Abstract: In the logistics industry, express sorting has always been an important link in ensuring the smooth transportation of goods. The recognition of logistics forms directly determines sorting efficiency, but effectively improving the accuracy of form text recognition in complex sorting environments remains a research challenge. In this article, we argue that the limitations of existing text models in logistics sorting scenarios mainly come from: 1) interference caused by complex environments; 2) the trade-off between recognition accuracy and speed; and 3) single-modal limitations. Accordingly, this paper proposes CPBNet, based on the principles of concentration, parallelism and bimodality. First, the form is corrected angularly, geometrically, and photometrically to handle complex scenes. Then, in a parallel design, an attention mechanism is added to the visual model to guide the CTC training strategy: the more accurate attention branch is used to train the backbone network and obtain better convolutional features, while the CTC branch is used for prediction, ensuring speed at inference. Finally, a language model is added after the visual model for semantic correction; it fully learns the contextual information of the input to compensate for visual semantic deficiencies. Existing general text datasets contain very few images of sorting scenes, and this lack of data has become a bottleneck for applying deep learning to sorting scenarios. Therefore, this article simulates real form data to build its own sorting-scene dataset, and a large number of experiments show that CPBNet has advantages on this dataset and achieves state-of-the-art results.
To cite this article: MA Yu-Chen. CPBNet: Concentrate, Parallel and Bimodal Network for Logistics Scene Text Detection and Recognition [OL]. [28 February 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762301
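The parallel training strategy described above, where an attention branch supervises the shared visual backbone during training while the faster CTC branch is used at inference, is commonly realized as a weighted joint loss. The sketch below shows one such formulation in PyTorch; the weighting factor and tensor layouts are assumptions, not CPBNet's exact configuration.

```python
import torch
import torch.nn as nn

ctc_criterion = nn.CTCLoss(blank=0, zero_infinity=True)
att_criterion = nn.CrossEntropyLoss(ignore_index=-100)

def joint_loss(ctc_logits, ctc_targets, input_lens, target_lens,
               att_logits, att_targets, alpha=0.5):
    """Weighted sum of CTC loss and attention-decoder cross-entropy.
    ctc_logits: (T, N, C) raw scores; att_logits: (N, L, C); att_targets: (N, L)."""
    l_ctc = ctc_criterion(ctc_logits.log_softmax(-1), ctc_targets,
                          input_lens, target_lens)
    l_att = att_criterion(att_logits.reshape(-1, att_logits.size(-1)),
                          att_targets.reshape(-1))
    return alpha * l_ctc + (1.0 - alpha) * l_att
```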
7. Relation Extraction Method for Chinese Public Opinion Based on Transferring Pre-trained Models and Merging Multiple Features
ZHANG Yunkai, CHENG Bo
Computer Science and Technology, 27 February 2024
Abstract: Public opinion has a profound and extensive impact on society, making research on relation extraction (RE) in this field crucial. Many existing relation extraction models either focus solely on basic Chinese character information or fail to fully leverage pre-trained models for extraction. Therefore, this paper proposes a relation extraction model, CwTransRE, which incorporates basic Chinese character information, glyph information, pinyin information and Chinese word information by transferring pre-trained models. CwTransRE enhances the effectiveness of relation extraction in two key aspects: first, in addition to basic Chinese character information, the integrated glyph, pinyin and word information enriches the semantic features of the embeddings; second, the introduction of pre-trained models helps obtain more accurate embeddings, especially when training datasets are relatively small. Experimental results on an open-source public opinion dataset demonstrate that the model achieves an F1 score of 0.703, outperforming NovelTagging, GraphRel(1p), GraphRel(2p), TAG-JE and CasRel.
To cite this article: ZHANG Yunkai, CHENG Bo. Relation Extraction Method for Chinese Public Opinion Based on Transferring Pre-trained Models and Merging Multiple Features [OL]. [27 February 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762184
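One straightforward way to merge character, glyph, pinyin and word information, in the spirit of CwTransRE's multi-feature embeddings, is to concatenate the individual embeddings and project them to a common dimension. The sketch below illustrates this fusion step only; vocabulary sizes, dimensions and the use of randomly initialized embeddings (rather than transferred pre-trained models) are simplifying assumptions.

```python
import torch
import torch.nn as nn

class FusedEmbedding(nn.Module):
    """Concatenate character, glyph, pinyin and word embeddings, then project
    to a common dimension (all sizes are placeholder values)."""

    def __init__(self, vocab=6000, d_char=128, d_glyph=64, d_pinyin=64,
                 d_word=128, d_out=256):
        super().__init__()
        self.char = nn.Embedding(vocab, d_char)
        self.glyph = nn.Embedding(vocab, d_glyph)
        self.pinyin = nn.Embedding(vocab, d_pinyin)
        self.word = nn.Embedding(vocab, d_word)
        self.proj = nn.Linear(d_char + d_glyph + d_pinyin + d_word, d_out)

    def forward(self, char_ids, glyph_ids, pinyin_ids, word_ids):
        fused = torch.cat([self.char(char_ids), self.glyph(glyph_ids),
                           self.pinyin(pinyin_ids), self.word(word_ids)], dim=-1)
        return self.proj(fused)   # (batch, seq_len, d_out) fused token features
```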
8. Classification of security vulnerability exploit codes using large language models
Huang Linhui, He Yongzhong, Yin Min, Li Chao, Hou Lu, Wang Xiaonan, Guo Yaoyao
Computer Science and Technology, 26 February 2024
Abstract: Software vulnerabilities are the root cause of security risks such as data breaches, system crashes, and network intrusions. Once malicious attackers exploit these vulnerabilities, they can cause significant losses. According to reports from the National Vulnerability Database (NVD), the number of disclosed vulnerabilities is steadily increasing, giving attackers more opportunities to exploit them; consequently, more attack scripts, known as exploit data, are being publicly disclosed. To facilitate the use of such data by penetration testers and researchers, exploit databases have been established. However, these databases largely rely on manual collection and categorization, making them susceptible to human error. There is therefore a need for automated classification methods to effectively manage exploit programs targeting various software and systems, improving management efficiency and reducing associated costs. This article introduces an automated exploit classifier that processes the text and code of exploit entries separately. It combines BERT and CodeBERT models along with W2V models to generate corresponding feature vectors, and then uses models such as BiLSTM to construct an automated exploit classifier, achieving effective exploit classification.
To cite this article: Huang Linhui, He Yongzhong, Yin Min, et al. Classification of security vulnerability exploit codes using large language models [OL]. [26 February 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762133
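The fusion and classification step described above can be sketched as a BiLSTM head over concatenated token-level features, where the text features would come from a model such as BERT and the code features from CodeBERT. Dimensions, the number of classes and the pooling choice are assumptions for illustration, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class ExploitClassifier(nn.Module):
    """BiLSTM head over concatenated token-level features of the exploit's
    description text and its code (dimensions are placeholders)."""

    def __init__(self, d_feat=768, d_hidden=256, n_classes=10):
        super().__init__()
        self.bilstm = nn.LSTM(d_feat, d_hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * d_hidden, n_classes)

    def forward(self, text_feats, code_feats):
        # text_feats: (N, Lt, d_feat), e.g. from BERT; code_feats: (N, Lc, d_feat), e.g. from CodeBERT.
        seq = torch.cat([text_feats, code_feats], dim=1)
        _, (h, _) = self.bilstm(seq)
        pooled = torch.cat([h[0], h[1]], dim=-1)   # final forward and backward states
        return self.head(pooled)                   # (N, n_classes) logits
```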
9. Research on Option-Critic algorithm based Representation Erasure
Meng JunWei, Hu Zheng
Computer Science and Technology, 06 February 2024
Abstract: The Option-Critic (OC) framework can extract transferable abstract knowledge without requiring any environment-specific prior knowledge, learning options (a form of temporally abstract policy) end-to-end. However, the OC framework exhibits low data efficiency in transfer tasks: during learning, each option considers the entire state space of the task, which increases the scale of the policy space search. This paper proposes an Option Learning algorithm based on Representation Erasure, which introduces a representation erasure method to explicitly quantify the influence of each state dimension on high-level and low-level policy learning. It identifies and erases dimensions that significantly interfere with training, effectively reducing the scale of the policy space search. Through theoretical derivation and experimental validation, this paper demonstrates the effectiveness of the Representation Erasure-based Option Learning algorithm.
To cite this article: Meng JunWei, Hu Zheng. Research on Option-Critic algorithm based Representation Erasure [OL]. [6 February 2024]. http://en.paper.edu.cn/en_releasepaper/content/4762077
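The erasure step itself amounts to masking out state dimensions judged to interfere with option learning. The sketch below masks dimensions ranked by a simple correlation-with-return proxy; this scoring rule is an assumption chosen only for illustration and is not the quantification criterion derived in the paper.

```python
import numpy as np

def interference_scores(states, returns):
    """Proxy score per state dimension: absolute correlation with returns
    (an illustrative assumption, not the paper's derived criterion)."""
    scores = []
    for j in range(states.shape[1]):
        c = np.corrcoef(states[:, j], returns)[0, 1]
        scores.append(abs(c) if np.isfinite(c) else 0.0)
    return np.array(scores)

def erase_dimensions(states, keep_mask):
    """Zero out the dimensions flagged as interfering before option learning."""
    return states * keep_mask

# Toy rollout data: 500 observations of an 8-dimensional state, plus returns.
rng = np.random.default_rng(0)
states = rng.random((500, 8))
returns = rng.random(500)

scores = interference_scores(states, returns)
keep_mask = scores >= np.median(scores)        # erase the lower-scoring half
masked_states = erase_dimensions(states, keep_mask)
```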
10. End-to-end 3D Human Pose Estimation using Dual Decoders
WANG Zhang, SONG Mei, JIN Lei
Computer Science and Technology, 02 February 2024
Abstract: Existing methods for 3D human pose estimation mainly divide the task into two stages. The first stage identifies the 2D coordinates of the human joints in the input image. The second stage uses these 2D joint coordinates as input and recovers the depth information of the joints to achieve 3D human pose estimation. However, the accuracy of the two-stage approach relies heavily on the results of the first stage and includes many redundant processing steps, which reduces the inference efficiency of the network. To address these issues, we propose EDD, a fully end-to-end 3D human pose estimation method based on a transformer architecture with dual decoders. By learning multiple human poses, the model directly infers all 3D human poses in the image using a pose decoder, and then further refines the result using a joint decoder based on the kinematic relations between joints. With the attention mechanism, the method can adaptively focus on the features most relevant to the target joint, effectively overcoming the feature misalignment problem in human pose estimation and greatly improving model performance. Complex post-processing steps, such as non-maximum suppression, are eliminated, further improving the efficiency of the model. The results show that this method achieves an accuracy of 87.4% on the MuPoTS-3D dataset, significantly improving on end-to-end 3D human pose estimation methods based on mixed training.
To cite this article: WANG Zhang, SONG Mei, JIN Lei. End-to-end 3D Human Pose Estimation using Dual Decoders [OL]. [2 February 2024]. http://en.paper.edu.cn/en_releasepaper/content/4761949
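A bare-bones skeleton of the dual-decoder idea, a pose decoder that attends to image features with per-person queries followed by a joint decoder that refines per-joint predictions conditioned on the pose decoder's output, might look as follows in PyTorch. Layer counts, query numbers and the way the two decoders are connected are assumptions, not EDD's actual architecture.

```python
import torch
import torch.nn as nn

class DualDecoderPose(nn.Module):
    """Pose decoder attends to image memory with per-person queries; a joint
    decoder then refines per-joint features conditioned on both the image
    memory and the pose decoder output (all sizes are placeholders)."""

    def __init__(self, d=256, n_people=10, n_joints=15):
        super().__init__()
        self.n_people, self.n_joints = n_people, n_joints
        layer = nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True)
        self.pose_decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.joint_decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.pose_queries = nn.Parameter(torch.randn(n_people, d))
        self.joint_queries = nn.Parameter(torch.randn(n_people * n_joints, d))
        self.head = nn.Linear(d, 3)   # (x, y, z) per joint

    def forward(self, memory):        # memory: (N, HW, d) flattened image features
        n = memory.size(0)
        poses = self.pose_decoder(self.pose_queries.expand(n, -1, -1), memory)
        joints = self.joint_decoder(self.joint_queries.expand(n, -1, -1),
                                    torch.cat([memory, poses], dim=1))
        return self.head(joints).view(n, self.n_people, self.n_joints, 3)
```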