Earlier Works
Parkinson’s disease (PD) is a neurodegenerative disease with a high incidence rate. Effective early diagnosis of PD is critical to prevent further deterioration of a patient’s condition, and gait abnormalities are an important factor doctors use to diagnose PD. Deep learning (DL)-based methods for PD detection using gait information recorded by non-invasive sensors have emerged to assist doctors in accurate and efficient disease diagnosis. However, most existing DL-based PD detection models neglect information in the frequency domain and do not adaptively model the correlation of signals among sensors. Moreover, different people have different gait patterns, so the ability of PD detection models to generalize across individuals with diverse gaits is essential. This work proposes a novel robust frequency-domain-based graph adaptive network (RFdGAD) for PD detection from gait information (i.e., vertical ground reaction force signals recorded by foot sensors). Specifically, the RFdGAD first learns the frequency-domain features of signals from each foot sensor with a frequency representation learning block. Then, the RFdGAD uses a graph adaptive network block, taking the frequency-domain features as input, to adaptively learn and exploit the interconnections between different sensor signals for accurate PD detection. Moreover, the RFdGAD is trained by minimizing the proposed Jensen-Shannon divergence-based localized generalization error to improve its generalization performance on unseen subjects. Experimental results show that the RFdGAD outperforms existing DL-based models for PD detection on three widely used datasets in terms of three metrics: accuracy, F1-score, and geometric mean.
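The two RFdGAD building blocks described above can be sketched roughly as follows. This is an illustrative NumPy approximation under my own assumptions, not the paper's implementation: `frequency_features` stands in for the frequency representation learning block (here, simple band energies of the magnitude spectrum), and `adaptive_adjacency` stands in for the graph adaptive network block (here, a softmax-normalized similarity graph over sensors); the real model learns both with trainable parameters.

```python
import numpy as np

def frequency_features(signals, n_bins=8):
    """Per-sensor frequency-domain features (illustrative stand-in for the
    frequency representation learning block).

    signals: (n_sensors, n_samples) array of vertical ground reaction forces.
    Returns (n_sensors, n_bins) mean band energies of the magnitude spectrum.
    """
    spectra = np.abs(np.fft.rfft(signals, axis=-1))
    bands = np.array_split(spectra, n_bins, axis=-1)
    return np.stack([b.mean(axis=-1) for b in bands], axis=-1)

def adaptive_adjacency(features):
    """Data-driven sensor graph from feature similarity (illustrative stand-in
    for the graph adaptive block): softmax-normalized inner products."""
    sim = features @ features.T
    sim = sim - sim.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(sim)
    return exp / exp.sum(axis=-1, keepdims=True)

# Toy example: 4 foot sensors, 256 samples of synthetic signal.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 256))
f = frequency_features(x)        # (4, 8) frequency-domain features
a = adaptive_adjacency(f)        # (4, 4) learned sensor graph
propagated = a @ f               # one round of graph message passing
```

In the sketch, each row of `a` sums to 1, so `a @ f` mixes each sensor's frequency features with those of its most similar sensors, which is the intuition behind letting the graph structure adapt to the data rather than fixing it by foot anatomy.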
Key Points
- Question: Can a deep learning system provide reliable prediction of retinopathy of prematurity (ROP) using retinal photographs and clinical characteristics?
- Findings: In this prognostic study including data from 815 infants, the mean areas under the receiver operating characteristic curve (AUCs) of the system were 0.90 and 0.87 in predicting the occurrence and severity of ROP, respectively. For the external validation set, the AUCs were 0.94 and 0.88, respectively.
- Meaning: These findings suggest the feasibility of using deep learning approaches to predict ROP with high accuracy and generalizability.
The number of patients suffering from complex congenital heart diseases (CHDs) increases gradually each year. Perioperative parameter assessment of patients with complex CHDs is critical in choosing a suitable surgical method, but there is still a lack of an accurate and interpretable approach to preoperatively assess surgical risk and prognosis. The vascular patterns in retinal images of patients with complex CHDs reflect the severity of heart disease, so retinal images can be used to predict perioperative risk. Perioperative parameter classification from retinal images is challenging due to the limited retinal image data available for patients with CHDs and the interference caused by poor-quality retinal images. In this work, a method called the deep learning based perioperative parameter classifier is proposed to classify perioperative parameter risk from retinal images of patients with complex CHDs. To evaluate its effectiveness, the method is verified on 6 perioperative parameters. Experimental results show that the proposed method is superior to several popular classification networks on this task. Saliency maps are also provided to enhance the interpretability of the model and may be of great use for future medical research.
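The saliency maps mentioned above are, generically, maps of how much each pixel influences the model's output score. A minimal sketch of that idea, under my own assumptions (a finite-difference gradient approximation on a toy linear scorer, not the paper's actual attribution method), is:

```python
import numpy as np

def saliency_map(predict, image, eps=1e-3):
    """Finite-difference saliency: |d score / d pixel| for every pixel.

    `predict` maps an image array to a scalar class score. This is a
    generic gradient-style attribution sketch; real saliency maps are
    usually computed with backpropagated gradients instead.
    """
    base = predict(image)
    sal = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        bumped = image.astype(float).copy()
        bumped[idx] += eps                       # perturb one pixel
        sal[idx] = abs(predict(bumped) - base) / eps
    return sal

# Toy "model": a fixed linear scorer, so saliency equals |weights|.
w = np.array([[1.0, -2.0, 0.5],
              [0.0,  3.0, -1.0]])
predict = lambda img: float((w * img).sum())
img = np.ones((2, 3))
sal = saliency_map(predict, img)
```

For a linear scorer the saliency recovers the absolute weights exactly; for a CNN the same per-pixel sensitivity idea highlights the retinal regions (e.g., vascular patterns) that drive the risk prediction.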
Multi-Occupancy Fall Detection Using a Non-Invasive Thermal Vision Sensor (2020-10-21)
Falling is a common issue within the aging population. Immediate detection of a fall is key to guaranteeing early attention, avoiding further immobility risks, and reducing recovery time. Video-based approaches to fall detection, although highly accurate, are largely perceived as intrusive when deployed within living environments. As an alternative, thermal vision-based methods can be deployed to offer a more acceptable level of privacy. To date, thermal vision-based fall detection methods have largely focused on single-occupancy scenarios, which are not fully representative of real living environments with multiple occupants. This work proposes a non-invasive thermal vision-based approach to multi-occupancy fall detection (MoT-LoGNN) which discriminates between fall and no-fall events. The approach consists of four major components: i) a multi-occupancy decomposer, ii) a sensitivity-based sample selector, iii) the T-LoGNN for single-occupancy fall detection, and iv) a fine-tuning mechanism. The T-LoGNN combines thermal image features extracted by a Convolutional Neural Network (CNN) with a robust neural network trained by minimizing a Localized Generalization Error (L-GEM). Compared with other methods, the MoT-LoGNN achieved the highest average accuracy of 98.39% in a multi-occupancy fall detection experiment.
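Both the L-GEM objective used here and the Jensen-Shannon variant in the RFdGAD abstract share the same intuition: penalize a model whose predictions change sharply within a small neighborhood of each training sample. A minimal sketch of that neighborhood-smoothness penalty, under my own assumptions (JS divergence between predictions on a sample and on uniformly perturbed copies; the names `localized_generalization_penalty` and the toy softmax model are illustrative, not from the papers):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between rows of two probability arrays."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b), axis=-1)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def localized_generalization_penalty(predict, x, radius=0.1, n_perturb=8, seed=0):
    """Average JS divergence between predictions on x and on points drawn
    uniformly from a small neighborhood of x (the 'Q-neighborhood' idea
    behind localized generalization error)."""
    rng = np.random.default_rng(seed)
    p = predict(x)
    total = 0.0
    for _ in range(n_perturb):
        x_q = x + rng.uniform(-radius, radius, size=x.shape)
        total += js_divergence(p, predict(x_q)).mean()
    return total / n_perturb

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy 2-class model: softmax of a fixed linear map.
W = np.array([[1.0, -1.0], [0.5, 2.0]])
predict = lambda x: softmax(x @ W)
x = np.array([[0.2, -0.1], [1.0, 0.3]])
penalty = localized_generalization_penalty(predict, x)
```

Adding such a penalty to the training loss pushes the classifier toward decisions that are stable under small input perturbations, which is why these models are reported to generalize better to unseen subjects and occupants.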