There are 954 papers published in this subject since this site started.
1. Research on the hippocampus medical imaging segmentation method for small samples
QI Shu-Wen, Jiang Zhu-qing
Computer Science and Technology, 16 March 2024
Abstract: The hippocampus is located between the thalamus and the medial temporal lobe. It is mainly responsible for cognition, learning, and long- and short-term memory, and is closely related to diseases such as Alzheimer's disease and temporal lobe epilepsy. Accurate segmentation of the hippocampal structure in magnetic resonance (MR) imaging is therefore of great significance for the diagnosis of brain injury and the prediction of brain disease in clinical medicine. In recent years, the rapid development of deep learning has brought fundamental changes to hippocampal segmentation. Deep learning is data-driven, and the quantity and quality of data directly affect segmentation accuracy. However, because MR imaging is difficult to acquire and manual annotation is expensive, hippocampus MR imaging data are relatively scarce, which limits the performance of deep learning models on hippocampal segmentation tasks. To overcome the challenges of small-sample scenarios and improve segmentation accuracy, this paper proposes a data augmentation method that expands the data (brain MR images) and the labels (hippocampus masks) simultaneously, alleviating both data scarcity and annotation scarcity. Experiments show that the proposed method effectively improves the accuracy of hippocampal segmentation.
To cite this article: QI Shu-Wen, Jiang Zhu-qing. Research on the hippocampus medical imaging segmentation method for small samples [OL]. [16 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762832
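The paper's exact augmentation scheme is not given in the abstract; the key idea, though, is that every transform applied to a brain image must be applied identically to its hippocampus mask. A minimal sketch of such paired augmentation, assuming simple geometric transforms (rotations and flips) as the augmentation family:

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the SAME random geometric transform to an image and its label mask."""
    k = rng.integers(0, 4)                      # 0-3 quarter rotations
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:                      # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)             # toy "MR slice"
msk = (img > 7).astype(np.uint8)                # toy "hippocampus mask"
aug_img, aug_msk = augment_pair(img, msk, rng)
```

Because image and mask undergo the same permutation of pixels, the voxel-to-label correspondence is preserved, which is what makes the augmented pair usable as a new training sample.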
2. TransFuseNet: A Novel Multi-task Model for Community-Acquired Pneumonia Segmentation and Classification
CHE PeiShuai, YIN Si-Xing, LI Shu-Fang
Computer Science and Technology, 13 March 2024
Abstract: Community-acquired pneumonia (CAP) poses a global public health challenge, and in the current pandemic environment, timely and accurate diagnosis of different types of pneumonia is particularly crucial. Computed tomography (CT) is an effective means of diagnosing pneumonia, and the use of artificial intelligence (AI) for diagnostic assistance can enhance clinical efficiency. This paper therefore introduces a 3D multi-task deep learning model called TransFuseNet to achieve real-time and accurate segmentation and classification of CAP. Specifically, the proposed network consists of two sub-networks: a 3D scSEU-Net sub-network for pneumonia lesion segmentation and a classification sub-network based on a fully convolutional Transformer. Both sub-networks share the same encoder; the segmentation branch captures local features and spatial relationships, while the classification branch performs long-range modeling to capture global context. A loss function is also introduced to enhance the interaction between the two sub-networks and balance the importance of the two tasks. The retrospective dataset includes 180 patients who underwent thin-slice chest CT scans at a medical center in China. Extensive experiments demonstrate that the model achieved AUC: 0.989, DSC: 0.723, average accuracy: 0.927, precision: 0.889, sensitivity: 0.866, and specificity: 0.835 on the test set. The model shows no significant difference in pneumonia detection accuracy compared with radiologists.
To cite this article: CHE PeiShuai, YIN Si-Xing, LI Shu-Fang. TransFuseNet: A Novel Multi-task Model for Community-Acquired Pneumonia Segmentation and Classification [OL]. [13 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762404
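The abstract mentions a loss that balances the segmentation and classification tasks but does not give its form. A common pattern for such multi-task training is a weighted sum of a Dice loss (segmentation) and a cross-entropy loss (classification); the sketch below assumes that pattern, with a hypothetical weight `alpha`, and is not the paper's actual loss:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a predicted probability map and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def cross_entropy(probs, label, eps=1e-12):
    """Negative log-likelihood of the true class."""
    return -np.log(probs[label] + eps)

def multitask_loss(seg_pred, seg_gt, cls_probs, cls_label, alpha=0.5):
    # alpha is an assumed weighting between the two tasks
    return alpha * dice_loss(seg_pred, seg_gt) + (1 - alpha) * cross_entropy(cls_probs, cls_label)

# perfect predictions on both tasks give (near-)zero loss
perfect = multitask_loss(np.ones((2, 2)), np.ones((2, 2)), np.array([0.0, 1.0]), 1)
bad = multitask_loss(np.zeros((2, 2)), np.ones((2, 2)), np.array([0.9, 0.1]), 1)
```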
3. A Multi-Document Inference Method Based on TR-BERT and Attention Networks
ZHAO Jiaqi, LIN Rongheng
Computer Science and Technology, 13 March 2024
Abstract: In the task of multi-document inference, information related to the answer may reside across multiple relevant texts, and sometimes this information is not directly associated with the question. To address the challenge of balancing accuracy and efficiency in multi-document inference for the electric utility customer service scenario, this paper proposes a multi-level inference method based on pre-trained models and attention networks. The model utilizes pre-trained models to preserve the rich semantics extracted from paragraph texts and questions, and then evaluates the relevance of candidates through an attention mechanism. Furthermore, due to the real-time requirements of the question-answering scenario, we employ the TR-BERT pre-trained model based on dynamic token reduction and simplify the attention network. Experimental results on the WikiHop dataset demonstrate that the model overall exhibits advantages in both computational speed and accuracy, providing effective methodological support for the multi-turn question-answering functionality in intelligent question-answering systems.
To cite this article: ZHAO Jiaqi, LIN Rongheng. A Multi-Document Inference Method Based on TR-BERT and Attention Networks [OL]. [13 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762558
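The abstract's "evaluates the relevance of candidates through an attention mechanism" can be illustrated with the simplest form of attention: dot-product scores between a question encoding and candidate encodings, normalized by softmax. This is a generic sketch (the vectors here are toy stand-ins for pre-trained-model encodings), not the paper's specific network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())          # subtract max for numerical stability
    return e / e.sum()

def score_candidates(question_vec, candidate_vecs):
    """Attention-style relevance: dot products, softmax-normalized to a distribution."""
    logits = candidate_vecs @ question_vec
    return softmax(logits)

q = np.array([1.0, 0.0])             # toy question encoding
cands = np.array([[1.0, 0.0],        # candidate aligned with the question
                  [0.0, 1.0],        # orthogonal candidate
                  [0.5, 0.5]])
probs = score_candidates(q, cands)
best = int(np.argmax(probs))
```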
4. Named Entity Recognition in the Perovskite Field Based on Convolutional Neural Networks and MatBERT
ZHANG Jiaxin, ZHANG Lingxue, SUN Yuxuan, LI Wei, QUHE Ruge
Computer Science and Technology, 13 March 2024
Abstract: Due to the significant increase in publications in materials science, organizing materials knowledge and discovering new materials has become a bottleneck. The literature in the emerging field of perovskite materials has grown to a massive scale, making it necessary to compile information on the structure, properties, synthesis methods, characterization techniques, and applications of perovskite materials. To address this issue, we employ named entity recognition, a natural language processing technique, to extract important entities from perovskite material texts. In this paper, we propose a method based on convolutional neural networks (CNN) and MatBERT. First, we utilize MatBERT, pre-trained on a large amount of materials science text, to generate contextualized word embeddings. Next, we extract feature information using a CNN model. Finally, a conditional random field (CRF) layer is used to decode sequences and to compute the training and validation loss. Experimental results demonstrate that the performance of our model on the perovskite material dataset improves by 1%~6% over the BERT, SciBERT and MatBERT models. Using this model, we extract entities from 2389 abstracts to obtain knowledge of perovskite materials.
To cite this article: ZHANG Jiaxin, ZHANG Lingxue, SUN Yuxuan, et al. Named Entity Recognition in the Perovskite Field Based on Convolutional Neural Networks and MatBERT [OL]. [13 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762696
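The middle step of the pipeline, extracting local features from contextualized word embeddings with a convolution, can be sketched in plain numpy. This toy version shows only a width-3 convolution with same-padding over an embedding sequence; the real model would stack such filters inside a trained CNN and hand the features to a CRF layer, which is omitted here:

```python
import numpy as np

def extract_features(emb, kernels):
    """Width-3 convolution over word embeddings, zero-padded so output length = input length.
    emb: (T, D) sequence of embeddings; kernels: list of (3, D) filters."""
    T, D = emb.shape
    p = np.pad(emb, ((1, 1), (0, 0)))                       # pad one token each side
    return np.array([[(p[t:t + 3] * k).sum() for k in kernels] for t in range(T)])

emb = np.ones((4, 2))            # 4 tokens, embedding dim 2 (toy values)
kernels = [np.ones((3, 2))]      # one width-3 filter
feats = extract_features(emb, kernels)
```

Interior positions see three full tokens (feature value 6 here), while boundary positions see zero-padding, which is why their values are smaller.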
5. Enhancing Persona Consistency in Dialogue Generation Algorithm with Retrieval Augmentation
SHI Haozhe
Computer Science and Technology, 12 March 2024
Abstract: Open-domain dialogue systems are designed to fulfill people's daily communicative and emotional requirements, with the goal of cultivating long-term relationships with users. Yet these systems struggle to sustain persona consistency: generated responses are at times not logically aligned with the established character persona or the preceding dialogue. This discrepancy undermines dialogue coherence and emotional engagement, impeding the development of deep connections with users. To address this issue, this study introduces a dialogue generation algorithm that incorporates retrieval-augmentation techniques. By building a database of character information, the algorithm helps large language models retrieve persona-relevant data during interactions, ensuring that responses consistently align with the character's defined persona. This method significantly mitigates the generation of inconsistent responses, a "hallucination" effect. The study demonstrates the substantial impact of information optimization and filtering mechanisms on persona consistency, as evidenced by comprehensive evaluation across three performance metrics: information relevance, faithfulness, and response relevance, using an integration of various retrieval strategies and information optimization techniques.
To cite this article: SHI Haozhe. Enhancing Persona Consistency in Dialogue Generation Algorithm with Retrieval Augmentation [OL]. [12 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762669
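The core retrieval step, looking up persona entries relevant to the current turn from a character-information database, is commonly implemented as nearest-neighbor search over embeddings. The sketch below assumes cosine similarity over precomputed vectors (the entries, vectors, and `top_k` value are all illustrative, not from the paper):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def retrieve_persona(query_vec, persona_db, top_k=2):
    """Return the top-k persona entries most similar to the query embedding."""
    scored = sorted(persona_db, key=lambda e: cosine(query_vec, e["vec"]), reverse=True)
    return [e["text"] for e in scored[:top_k]]

# toy persona database with hypothetical 2-d embeddings
db = [
    {"text": "loves hiking",         "vec": np.array([1.0, 0.0])},
    {"text": "plays piano",          "vec": np.array([0.0, 1.0])},
    {"text": "enjoys trail running", "vec": np.array([0.9, 0.1])},
]
hits = retrieve_persona(np.array([1.0, 0.2]), db)
```

The retrieved entries would then be prepended to the language model's prompt so its response stays grounded in the defined persona.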
6. Intensity-driven bounding box supervised brain white matter hyperintensities segmentation algorithm
Cheng Ao
Computer Science and Technology, 08 March 2024
Abstract: White matter hyperintensities (WMHs) serve as a crucial imaging feature for assessing cerebral white matter abnormalities, and accurate segmentation of WMHs is important for tracking disease progression, evaluating treatment effects, and understanding various neurological and geriatric disorders. Presently, deep learning methods for WMHs segmentation rely heavily on training data annotated at the pixel level. However, the irregular shapes, random distribution, and fuzzy boundaries characteristic of WMHs make acquiring precise pixel-level labels prohibitively costly. To reduce the reliance on pixel-level annotations, this paper introduces an intensity-driven bounding box supervised brain white matter hyperintensities segmentation algorithm (IDBB), which substitutes weak bounding box labels for precise labels during model training. IDBB employs an intensity-based adaptive thresholding method to generate pixel-level pseudo-labels from bounding box labels and trains the segmentation network using both Dice loss and cross-entropy loss. Additionally, this paper introduces a WMHs segmentation dataset containing bounding box labels of various sizes, serving as a benchmark for bounding box supervised WMHs segmentation. Results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) comparable to 90% of that of fully supervised methods, surpassing other weakly supervised approaches, illustrating its effectiveness in reducing annotation costs while achieving satisfactory segmentation performance.
To cite this article: Cheng Ao. Intensity-driven bounding box supervised brain white matter hyperintensities segmentation algorithm [OL]. [8 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762582
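The abstract's key mechanism, turning a bounding box into a pixel-level pseudo-label by adaptive intensity thresholding, can be sketched concretely. The threshold rule used below (region mean plus a multiple of the region standard deviation) is an assumption for illustration; the paper's actual thresholding formula is not given in the abstract:

```python
import numpy as np

def box_pseudo_label(image, box, k=0.5):
    """Pseudo-label: inside the box, mark pixels brighter than an adaptive threshold.
    box = (r0, r1, c0, c1); threshold = mean + k * std of the box region (assumed rule)."""
    r0, r1, c0, c1 = box
    region = image[r0:r1, c0:c1]
    thr = region.mean() + k * region.std()
    label = np.zeros_like(image, dtype=np.uint8)
    label[r0:r1, c0:c1] = (region > thr).astype(np.uint8)
    return label

img = np.zeros((6, 6))
img[2:4, 2:4] = 10.0                      # bright "hyperintensity" inside the box
lbl = box_pseudo_label(img, (1, 5, 1, 5))  # loose bounding box around it
```

The resulting `lbl` keeps only the bright pixels inside the box, so a loose box annotation still yields a tight training mask.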
7. Cold Start Mitigation Approach for Serverless Applications Based on Adaptive Container Pool
LI Zhuo, WANG Chun
Computer Science and Technology, 08 March 2024
Abstract: With the development of cloud computing technology, Serverless is becoming increasingly popular among developers as an emerging paradigm for building applications in the cloud. Serverless allows developers to focus on application logic without worrying about underlying server management. However, the performance of serverless applications is easily degraded by cold start issues. To mitigate their impact, this paper proposes a cold start mitigation approach for serverless applications based on adaptive container pools, which reduces the overall latency caused by cold starts. The approach optimizes cold start latency based on the internal structure of serverless applications, through analysis of container pool mechanisms and request history. Experiments show that this approach significantly improves the performance of serverless applications and the efficiency of cloud resource utilization.
To cite this article: LI Zhuo, WANG Chun. Cold Start Mitigation Approach for Serverless Applications Based on Adaptive Container Pool [OL]. [8 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762581
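An adaptive container pool driven by request history can be sketched as follows. The sizing policy here (keep as many warm containers as the moving average of recent per-interval request counts) is an assumed, simplified stand-in for the paper's adaptive mechanism:

```python
from collections import deque

class AdaptivePool:
    """Size the warm-container pool from a moving average of recent request counts
    (assumed policy, for illustration only)."""
    def __init__(self, window=3, min_warm=1):
        self.history = deque(maxlen=window)   # last `window` interval loads
        self.min_warm = min_warm

    def record(self, requests_this_interval):
        self.history.append(requests_this_interval)

    def target_warm(self):
        """Warm containers to keep ready for the next interval."""
        if not self.history:
            return self.min_warm
        avg = sum(self.history) / len(self.history)
        return max(self.min_warm, round(avg))

pool = AdaptivePool()
for load in (2, 4, 6):                        # observed requests per interval
    pool.record(load)
target = pool.target_warm()
```

Requests that find a warm container avoid the cold start path entirely, while the moving average keeps the pool from holding idle containers after load drops.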
8. Optimized LightGBM-Based Survival Prediction Model for ENKTL
Wenke Lian, Yu Song, Dong Dong, Wenxiang Yang, Huafeng Zeng, Shibiao Xu, Li Guo, Fengyang An, Xuemei Zhu
Computer Science and Technology, 08 March 2024
Abstract: With the rapid advancements in artificial intelligence and machine learning, particularly in fields like image recognition, time series forecasting, disease diagnostics, and certain tumor prognostics, these technologies have demonstrated clear advantages over traditional statistical methods, offering vital support for developing more precise predictive models. However, challenges arise due to frequent data omissions in clinical datasets and the intricate relationships between data and survival outcomes. This study specifically addresses the complex survival relationships in ENKTL by comparing discrete and continuous time survival prediction methodologies. It employs an HSIC-Lasso optimized LightGBM algorithm for discrete-time survival forecasting, successfully predicting ENKTL patient survival rates. By evaluating the impact of various interpolation techniques on the predictive accuracy of models dealing with missing values, this work enhances the precision of discrete-time survival forecasts. The findings not only offer fresh insights and strategies for navigating the complex survival dynamics in extranodal nasal NK/T lymphoma but also bolster technical support in this medical domain. This contributes to enhancing the accuracy of disease prognostics and equipping physicians with more targeted treatment options.
To cite this article: Wenke Lian, Yu Song, Dong Dong, et al. Optimized LightGBM-Based Survival Prediction Model for ENKTL [OL]. [8 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762512
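Discrete-time survival forecasting with a classifier such as LightGBM standardly relies on the person-period expansion: each patient's (follow-up time, event) record becomes one row per interval at risk, with a binary target marking whether the event occurred in that interval. The sketch below shows that data transformation (the transformation is the standard technique, not a detail taken from this paper):

```python
def to_person_period(patients, n_intervals):
    """Expand (time, event) survival records into one row per patient per interval at risk.
    Each row: (patient_id, interval, event_in_interval)."""
    rows = []
    for pid, (time, event) in enumerate(patients):
        for t in range(1, min(time, n_intervals) + 1):
            died = 1 if (event == 1 and t == time) else 0
            rows.append((pid, t, died))
    return rows

# (observed interval, event flag): patient 0 has the event in interval 2,
# patient 1 is censored at interval 3
data = [(2, 1), (3, 0)]
rows = to_person_period(data, n_intervals=4)
```

A binary classifier trained on these rows then estimates the per-interval hazard, and survival probabilities are obtained by multiplying the complements of the predicted hazards across intervals.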
9. DI-CFS: A Multi-Phase Feature Selection Method for Dimensionality Reduction
Zhuo Liu, Chensheng Wang
Computer Science and Technology, 04 March 2024
Abstract: Feature selection is critical in deep learning, aiming to identify the most informative features for dimensionality reduction. In this paper, we propose a novel multi-phase feature selection method, Discrimination Improved Correlation-based Feature Selection (DI-CFS), which consists of three modules: a discrimination filtering formula, the Isolation Forest (IF) algorithm, and the Correlation-based Feature Selection (CFS) method. The discrimination filtering formula filters out invalid and insignificant features by calculating their discrimination values. The IF algorithm removes redundant features, which are more easily partitioned. The point-biserial correlation coefficient is used instead of the Pearson correlation coefficient to calculate feature weights, which are then evaluated by the CFS method. Experimental results show that the DI-CFS method is effective.
To cite this article: Zhuo Liu, Chensheng Wang. DI-CFS: A Multi-Phase Feature Selection Method for Dimensionality Reduction [OL]. [4 March 2024] http://en.paper.edu.cn/en_releasepaper/content/4762431
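The point-biserial correlation the method substitutes for Pearson's coefficient measures how strongly a continuous feature separates the two classes of a binary label. A minimal implementation (for a binary label it is numerically identical to the Pearson correlation between the feature and the 0/1 labels):

```python
import numpy as np

def point_biserial(x, y):
    """Point-biserial correlation between a continuous feature x and binary labels y."""
    x, y = np.asarray(x, float), np.asarray(y, int)
    m1, m0 = x[y == 1].mean(), x[y == 0].mean()   # class-conditional means
    p = y.mean()                                  # proportion of positives
    return (m1 - m0) / x.std() * np.sqrt(p * (1 - p))

x = [1.0, 2.0, 3.0, 4.0]
y = [0, 0, 1, 1]
r = point_biserial(x, y)
```

A feature whose values barely differ between the two classes gets a weight near zero and is a candidate for removal in the CFS stage.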
10. DTAME: A Unified Approach for ABAC Policy Mining and Efficient Evaluation Using Decision Trees
LAN Ze-Jun, GUAN Jian-Feng
Computer Science and Technology, 28 February 2024
Abstract: Attribute-Based Access Control (ABAC) has recently been adopted to replace traditional access control models due to its dynamism, flexibility and scalability. However, during the migration and deployment of ABAC policies, the key issues are how to mine an accurate and concise access control policy collection and how to evaluate policies quickly when an access request arrives. Previous studies have typically treated policy mining and policy evaluation separately: policy mining focuses primarily on the compactness of the policy itself, while policy evaluation concentrates on the performance of policy matching. This lack of coordination means that the concise policies obtained through mining cannot maximize evaluation performance. To tackle this issue, this paper proposes a decision tree based ABAC policy mining and policy evaluation (DTAME) scheme that addresses both issues concurrently by introducing a mining and evaluation method based on the decision tree algorithm. Moreover, because some hotspot policy rules are frequently accessed in certain scenarios, this paper also optimizes the algorithm based on access control logs to maximize evaluation performance. Experimental results show that DTAME can enhance the performance of policy evaluation while ensuring that the mined policies remain compact and effective.
To cite this article: LAN Ze-Jun, GUAN Jian-Feng. DTAME: A Unified Approach for ABAC Policy Mining and Efficient Evaluation Using Decision Trees [OL]. [28 February 2024] http://en.paper.edu.cn/en_releasepaper/content/4762310
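The two coupled problems, mining rules from access logs and then evaluating incoming requests against them quickly, can be illustrated with a deliberately tiny stand-in. The version below mines exact attribute-tuple rules and evaluates by set lookup; DTAME itself builds a decision tree over attributes, which generalizes beyond exact matches, so this is only a conceptual sketch of the mine-then-evaluate pipeline:

```python
def mine_rules(log):
    """Toy policy mining: a rule is any attribute tuple the log shows as permitted."""
    return {attrs for attrs, decision in log if decision == "permit"}

def evaluate(rules, attrs):
    """Toy policy evaluation: constant-time lookup, default deny."""
    return "permit" if attrs in rules else "deny"

# hypothetical access log entries: (subject role, resource, action) -> decision
log = [
    (("doctor", "record", "read"), "permit"),
    (("nurse", "record", "write"), "deny"),
]
rules = mine_rules(log)
```

The point of mining and evaluation jointly, as the abstract argues, is that the mined structure (here a set, in DTAME a decision tree) is itself the data structure requests are evaluated against, so compactness and matching speed are optimized together.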
© 2003-2012 Sciencepaper Online, unless otherwise stated.