Journal Papers

2020

51. [IEEE TMI] Erkun Yang, Mingxia Liu*, Dongren Yao, Bing Cao, Chunfeng Lian, Pew-Thian Yap, Dinggang Shen. Deep Bayesian Hashing with Center Prior for Multi-modal Neuroimage Retrieval. IEEE Transactions on Medical Imaging, DOI: 10.1109/TMI.2020.3030752, 2020.

A deep Bayesian hash learning framework, called CenterHash, is proposed to map multi-modal data into a shared Hamming space and learn discriminative hash codes from imbalanced multi-modal neuroimages. The key idea for tackling the small inter-class variation and large inter-modal discrepancy is to learn a common center representation for similar neuroimages from different modalities and to encourage hash codes to be explicitly close to their corresponding center representations.

50. [IEEE TMI] Shuai Wang#, Mingxia Liu#, Jun Lian, Dinggang Shen. Boundary Coding Representation for Organ Segmentation in Prostate Cancer Radiotherapy. IEEE Transactions on Medical Imaging, DOI: 10.1109/TMI.2020.3025517, 2020.

A novel boundary coding network is developed to learn a discriminative representation of the organ boundary and use it as context information to guide segmentation. Specifically, we design a two-stage learning strategy: 1) Boundary coding representation learning. Two sub-networks, supervised by the dilation and erosion masks transformed from the manually delineated organ mask, are first trained separately to learn the spatial-semantic context near the organ boundary. We then encode the organ boundary based on the predictions of these two sub-networks and a multi-atlas based refinement strategy. 2) Organ segmentation. The boundary coding representation, serving as context information, together with image patches, is used to train the segmentation network.

49. [FNS] Li Zhang, Mingliang Wang, Mingxia Liu*, Daoqiang Zhang. A Survey on Deep Learning for Neuroimaging-based Brain Disorder Analysis. Frontiers in Neuroscience, 2020. [pdf]

This paper reviews the applications of deep learning methods for neuroimaging-based brain disorder analysis. We first provide a comprehensive overview of deep learning techniques and popular network architectures, by introducing various types of deep neural networks and recent developments. We then review deep learning methods for computer-aided analysis of four typical brain disorders, including Alzheimer’s disease, Parkinson’s disease, autism spectrum disorder, and schizophrenia. We further discuss the limitations of existing studies and present possible future directions.

48. [IEEE TCYB] Chunfeng Lian#, Mingxia Liu#, Yongsheng Pan, Dinggang Shen. Attention-Guided Hybrid Network for Dementia Diagnosis with Structural MR Images. IEEE Transactions on Cybernetics, DOI: 10.1109/TCYB.2020.3005859, 2020.

An attention-guided deep learning framework is proposed to extract multi-level discriminative MRI features for dementia diagnosis. Specifically, we first design a backbone fully convolutional network to automatically localize the discriminative regions in brain sMRI scans in a weakly-supervised manner. Using the identified disease-related regions as spatial attention guidance, we further develop a hybrid network to jointly learn and fuse multi-level (i.e., subject-specific & inter-subject-consistent, and local & global) sMRI features for constructing the CAD model.

47. [MIA] Biao Jie#, Mingxia Liu#, Chunfeng Lian, Feng Shi, Dinggang Shen. Designing Weighted Correlation Kernels in Convolutional Neural Networks for Functional Connectivity based Brain Disease Diagnosis. Medical Image Analysis, 63: 101709, 2020. [pdf]

A novel weighted correlation kernel is designed to measure the correlation of brain regions, in which weighting factors are learned in a data-driven manner to characterize the contributions of different time points, thus conveying richer interaction information among brain regions than the conventional Pearson correlation based method. Furthermore, we build a unique kernel-based convolutional neural network for learning hierarchical (i.e., from local to global and from low-level to high-level) features for disease diagnosis using fMRI data. Specifically, we first define a layer to build dynamic FCNs using the proposed kernel. Then, we define another three layers to sequentially extract local (region-specific), global (network-specific), and temporal features from the constructed dynamic FCNs for classification.
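As a loose illustration of the kernel's core ingredient, the sketch below computes a weighted Pearson correlation between two regional BOLD time series; the weights are fixed here for simplicity, whereas in the paper they are learned in a data-driven manner (all names and values are illustrative, not the paper's implementation):

```python
import numpy as np

def weighted_correlation(x, y, w):
    """Weighted Pearson correlation between two regional time series.

    x, y: 1-D arrays of BOLD signals (one value per time point).
    w:    non-negative weights over time points (learned in the paper,
          fixed here for illustration).
    """
    w = w / w.sum()                          # normalize weights to sum to 1
    mx, my = np.sum(w * x), np.sum(w * y)    # weighted means
    cov = np.sum(w * (x - mx) * (y - my))    # weighted covariance
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))  # weighted standard deviations
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)
y = x + 0.1 * rng.standard_normal(100)
uniform = np.ones(100)
# With uniform weights this reduces to the ordinary Pearson correlation.
print(np.isclose(weighted_correlation(x, y, uniform), np.corrcoef(x, y)[0, 1]))  # → True
```

With non-uniform weights, time points deemed more informative contribute more to the estimated regional correlation.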

46. [IEEE TMI] Yongsheng Pan, Mingxia Liu*, Chunfeng Lian, Yong Xia, Dinggang Shen. Spatially-Constrained Fisher Representation for Brain Disease Identification with Incomplete Multi-Modal Neuroimages. IEEE Transactions on Medical Imaging, 39(9): 2965-2975, 2020. [pdf]

The incomplete-data problem is unavoidable in multi-modal neuroimaging studies due to patient dropout and/or poor data quality. Conventional methods usually discard data-missing subjects, thus significantly reducing the number of training samples. To address this, we propose a spatially-constrained Fisher representation framework for brain disease diagnosis with incomplete multi-modal neuroimages. We first impute missing PET images based on their corresponding MRI scans using a hybrid generative adversarial network. With the complete (after imputation) MRI and PET data, we then develop a spatially-constrained Fisher representation network to extract statistical descriptors of neuroimages for disease diagnosis, assuming that these descriptors follow a Gaussian mixture model with a strong spatial constraint (i.e., images from different subjects have similar anatomical structures). Experimental results on 2,317 subjects suggest that our method can synthesize reasonable neuroimages and achieve promising results in brain disease identification, compared with several state-of-the-art methods.

45. [MIA] Tao Zhou, Kim-Han Thung, Mingxia Liu, Feng Shi, Changqing Zhang, Dinggang Shen. Multi-modal Latent Space Inducing Ensemble SVM Classifier for Early Dementia Diagnosis with Neuroimaging Data. Medical Image Analysis, 60: 101630, 2020. [pdf]

We propose an early AD diagnosis framework via a novel multi-modality latent space inducing ensemble SVM classifier. Specifically, we first project the neuroimaging data from different modalities into a latent space and then map the learned latent representations into the label space to learn multiple diversified classifiers. Finally, we obtain more reliable classification results by using an ensemble strategy. More importantly, we present a Complete Multi-modality Latent Space (CMLS) learning model for complete multi-modality data and an Incomplete Multi-modality Latent Space (IMLS) learning model for incomplete multi-modality data. Extensive experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset demonstrate that the proposed models outperform other state-of-the-art methods.

44. [IEEE TBME] Mingliang Wang, Chunfeng Lian, Dongren Yao, Daoqiang Zhang, Mingxia Liu*, Dinggang Shen. Spatial-Temporal Dependency Modeling and Network Hub Detection for Functional MRI Analysis via Convolutional-Recurrent Network. IEEE Transactions on Biomedical Engineering, 67(8): 2241-2252, 2020. [pdf]

A unique Spatial-Temporal convolutional-recurrent neural Network (STNet) is proposed for automated prediction of AD progression and network hub detection from rs-fMRI time series. STNet incorporates spatial-temporal information mining and AD-related hub detection into an end-to-end deep learning model. Specifically, we first partition the rs-fMRI time series into a sequence of overlapping sliding windows. A sequence of convolutional components is then designed to capture the local-to-global spatially-dependent patterns within each sliding window, based on which we can identify discriminative hubs and characterize their unique contributions to disease diagnosis. A recurrent component with long short-term memory (LSTM) units is further employed to model whole-brain temporal dependency from the spatially-dependent pattern sequences, thus capturing the temporal dynamics over time. Experimental results suggest the effectiveness of the proposed method in both disease progression prediction and AD-related hub detection.
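The first step above, partitioning an rs-fMRI time series into overlapping sliding windows, can be sketched as follows (the window length and stride are illustrative, not the paper's settings):

```python
import numpy as np

def sliding_windows(ts, win_len, stride):
    """Partition a (time x region) rs-fMRI series into overlapping windows.

    Returns an array of shape (n_windows, win_len, n_regions); each window
    would then be fed to the convolutional components.
    """
    n_t = ts.shape[0]
    starts = range(0, n_t - win_len + 1, stride)
    return np.stack([ts[s:s + win_len] for s in starts])

# Toy series: 120 time points, 3 brain regions.
ts = np.arange(120 * 3).reshape(120, 3).astype(float)
wins = sliding_windows(ts, win_len=40, stride=20)
print(wins.shape)  # → (5, 40, 3): five overlapping windows
```

A stride smaller than the window length makes consecutive windows overlap, so slow temporal dynamics are preserved across the window sequence consumed by the LSTM.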

43. [MIA] Jun Zhang#, Mingxia Liu#, Li Wang, Si Chen, Peng Yuan, Jianfu Li, Steve Guo-Fang Shen, Zhen Tang, Ken-Chung Chen, James J. Xia, Dinggang Shen. Context-Guided Fully Convolutional Networks for Joint Craniomaxillofacial Bone Segmentation and Landmark Digitization. Medical Image Analysis, 60: 101621, 2020. [pdf]

We propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method is superior to the state-of-the-art approaches in both tasks of bone segmentation and landmark digitization.

42. [IEEE TMI] Mingliang Wang, Daoqiang Zhang*, Jiashuang Huang, Pew-Thian Yap, Dinggang Shen*, Mingxia Liu*. Identifying Autism Spectrum Disorder with Multi-Site fMRI via Low-Rank Domain Adaptation. IEEE Transactions on Medical Imaging, 39(3): 644-655, 2020. [pdf]

A multi-site adaptation framework based on low-rank representation decomposition was proposed for autism identification with functional MRI (fMRI). The main idea is to determine a common low-rank representation for data from multiple sites, aiming to reduce differences in data distributions. Treating one site as a target domain and the remaining sites as source domains, data from these domains are transformed (i.e., adapted) to a common space using low-rank representation. To reduce data heterogeneity between the target and source domains, data from the source domains are linearly represented in the common space by those from the target domain.

41. [IEEE TPAMI] Chunfeng Lian#, Mingxia Liu#, Jun Zhang, Dinggang Shen. Hierarchical Fully Convolutional Network for Joint Atrophy Localization and Alzheimer’s Disease Diagnosis using Structural MRI. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4): 880-893, 2020. [pdf]

A hierarchical fully convolutional network was proposed to automatically identify discriminative local patches and regions in the whole brain sMRI, where multi-scale feature representations are then jointly learned and fused to construct hierarchical classification models.

40. [IEEE TCYB] Mingxia Liu, Jun Zhang, Chunfeng Lian, Dinggang Shen. Weakly-supervised Deep Learning for Brain Disease Prognosis using MRI and Incomplete Clinical Scores. IEEE Transactions on Cybernetics, 50(7): 3381-3392, 2020. [pdf]

A weakly-supervised densely connected neural network (wiseDNN) was developed for brain disease prognosis using baseline MRI data and incomplete clinical scores. Multiscale image patches (located by anatomical landmarks) from MRI were employed to extract local-to-global structural information of images and a weakly-supervised network with a weighted loss function was developed for task-oriented feature extraction and joint prediction of multiple clinical measures.
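A simple way to picture learning from incomplete clinical scores is a masked regression loss that ignores missing entries; a minimal sketch, not the paper's exact weighted loss function:

```python
import numpy as np

def masked_mse(pred, target, mask):
    """Mean squared error computed only over available clinical scores.

    mask[i] = 1 where the i-th score exists, 0 where it is missing, so
    missing measures contribute nothing to the loss or its gradient.
    """
    return float(np.sum(mask * (pred - target) ** 2) / np.sum(mask))

pred   = np.array([1.0, 2.0, 3.0])
target = np.array([1.5, 0.0, 3.0])  # the second score is missing
mask   = np.array([1.0, 0.0, 1.0])
print(masked_mse(pred, target, mask))  # → 0.125
```

Masking in this way lets all subjects contribute to training, rather than discarding those with any missing clinical measure.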

39. [Neuroinformatics] Mingliang Wang, Xiaoke Hao, Jiashuang Huang, Kangcheng Wang, Li Shen, Xijia Xu, Daoqiang Zhang, Mingxia Liu. Hierarchical Structured Sparse Learning for Schizophrenia Identification. Neuroinformatics, 18(1): 43-57, 2020. [pdf]

A hierarchical structured sparse learning method was proposed to fully utilize the specific and complementary structure information across four frequency bands (from 0.01 Hz to 0.25 Hz) for schizophrenia diagnosis. The proposed method helps preserve the partial group structures among multiple frequency bands as well as the specific characteristics of each frequency band.

2019

38. [MIA] Mingliang Wang, Daoqiang Zhang, Dinggang Shen, Mingxia Liu*. Multi-task Exclusive Relationship Learning for Alzheimer’s Disease Progression Prediction with Longitudinal Data. Medical Image Analysis, 53: 111-122, 2019. [pdf]

A multi-task exclusive relationship learning model was designed to automatically capture the intrinsic relationship among tasks at different time points for estimating clinical measures based on longitudinal imaging data. The proposed method can select the most discriminative features for different tasks and also model the intrinsic relatedness among different time points, by utilizing an exclusive lasso regularization and a relationship-inducing regularization.

37. [IEEE TMI] Tao Zhou#, Mingxia Liu#, Kim-Han Thung, Dinggang Shen. Latent Representation Learning for Alzheimer’s Disease Diagnosis with Incomplete Multi-modal Neuroimaging and Genetic Data. IEEE Transactions on Medical Imaging, 38(10): 2411-2422, 2019. [pdf]

A latent representation learning method was developed for multi-modality based AD diagnosis. Specifically, we use all the available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation but also use samples with incomplete multi-modality data to learn an independent modality-specific latent representation. We then project the latent representations to the label space for AD diagnosis.

36. [IEEE TMI] Zhenghan Fang, Yong Chen, Mingxia Liu, Lei Xiang, Qian Zhang, Qian Wang, Weili Lin, Dinggang Shen. Deep Learning for Fast and Spatially-Constrained Tissue Quantification from Highly-Accelerated Data in Magnetic Resonance Fingerprinting. IEEE Transactions on Medical Imaging, 38(10): 2364-2374, 2019. [pdf]

A two-step deep learning model was designed to learn the mapping from observed signals to the desired tissue properties for quantification, i.e., 1) a feature extraction module that reduces the dimensionality of the signals by extracting a low-dimensional feature vector from the high-dimensional signal evolution, and 2) a spatially-constrained quantification module that exploits spatial information in the extracted feature maps to generate the final tissue property map.

35. [IEEE TBME] Tao Zhou, Kim-Han Thung, Mingxia Liu, Dinggang Shen. Brain-wide Genome-wide Association Study for Alzheimer’s Disease via Joint Projection Learning and Sparse Regression Model. IEEE Transactions on Biomedical Engineering, 66(1): 165-175, 2019. [pdf]

A joint projection and sparse regression model was developed to discover the associations between the phenotypes and genotypes. Specifically, to alleviate the negative influence of data heterogeneity, we first map the genotypes into an intermediate imaging-phenotype-like space. Then, to better reveal the complex phenotype–genotype associations, we project both the mapped genotypes and the original imaging phenotypes into a diagnostic-label-guided joint feature space, where the intraclass projected points are constrained to be close to each other.

34. [IEEE TBME] Mingxia Liu, Jun Zhang, Ehsan Adeli, Dinggang Shen. Joint Classification and Regression via Deep Multi-task Multi-channel Learning for Alzheimer’s Disease Diagnosis. IEEE Transactions on Biomedical Engineering, 66(5): 1195-1206, 2019. [pdf]

A deep multi-task multi-channel learning framework was proposed for simultaneous brain disease classification and clinical score regression, using MRI data and demographic information of subjects. Specifically, we first identify discriminative anatomical landmarks from MR images in a data-driven manner and then extract multiple image patches around these detected landmarks. We then propose a deep multi-task multi-channel convolutional neural network for joint classification and regression. The proposed framework can not only automatically learn discriminative features from MR images but also explicitly incorporate the demographic information of subjects into the learning process.

33. [NeuroImage] Xuyun Wen, Han Zhang, Gang Li, Mingxia Liu, Weiyan Yin, Weili Lin, Jun Zhang, Dinggang Shen. First-year Development of Modules and Hubs in Infant Brain Functional Networks. NeuroImage, 185: 222-235, 2019. [pdf]

A novel algorithm was designed to construct a robust, temporally consistent, and modular-structure-augmented group-level network, based on which functional modules were detected at each age. Our study reveals that the brain functional network is gradually subdivided into an increasing number of functional modules, accompanied by strengthened intra- and inter-modular connectivity.

32. [PR] Yu Zhang, Han Zhang, Xiaobo Chen, Mingxia Liu, Xiaofeng Zhu, Seong-Whan Lee, Dinggang Shen. Strength and Similarity Guided Group-level Brain Functional Network Construction for MCI Diagnosis. Pattern Recognition, 88: 421-430, 2019. [pdf]

A strength and similarity guided group sparse representation method was developed to exploit both BOLD signal temporal correlation-based “low-order” FC (LOFC) and intersubject LOFC-profile similarity-based “high-order” FC (HOFC) as two priors to jointly guide the GSR-based network modeling. Extensive experimental comparisons are carried out, with the rs-fMRI data from mild cognitive impairment (MCI) subjects and healthy controls, between the proposed algorithm and other state-of-the-art brain network modeling approaches.

31. [BIB] Bo Cheng#, Mingxia Liu#, Daoqiang Zhang, Dinggang Shen. Robust Multi-label Transfer Feature Learning for Early Diagnosis of Alzheimer’s Disease. Brain Imaging and Behavior, 13(1): 138-153, 2019. [pdf]

This work proposed to transform the original binary class label of a particular subject into a multi-bit label coding vector with the aid of multiple source domains. A robust multi-label transfer feature learning (rMLTFL) model was further developed to simultaneously capture a common set of features from different domains (including the target domain and all source domains) and to identify the unrelated source domains. Evaluation was performed on 406 subjects from Alzheimer’s Disease Neuroimaging Initiative (ADNI) database with baseline magnetic resonance imaging (MRI) and cerebrospinal fluid (CSF) data. The experimental results suggested the efficacy of the proposed method, in comparison to state-of-the-art methods. 

2018

30. [IEEE TMI] Daoqiang Zhang, Jiashuang Huang, Biao Jie, Junqiang Du, Liyang Tu, Mingxia Liu*. Ordinal Pattern: A New Network Descriptor for Brain Connectivity Networks. IEEE Transactions on Medical Imaging, 37(7): 1711-1722, 2018. [pdf]

A new network descriptor (i.e., ordinal pattern that contains a sequence of weighted edges) was proposed for brain connectivity network analysis. Compared with previous network properties, the proposed ordinal patterns can not only take advantage of the weight information of edges but also explicitly model the ordinal relationship of weighted edges in brain connectivity networks. An ordinal pattern based learning framework was further designed for brain disease diagnosis using resting-state fMRI data.

29. [MIA] Mingxia Liu, Jun Zhang, Ehsan Adeli, Dinggang Shen. Landmark-based Deep Multi-instance Learning for Brain Disease Diagnosis. Medical Image Analysis, 43: 157-168, 2018. [pdf]

In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit diagnostic performance, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this work, a landmark-based deep multi-instance learning (LDMIL) framework was proposed for brain disease diagnosis. A data-driven learning approach was used to discover disease-related anatomical landmarks in brain MR images, along with their nearby image patches. LDMIL then learns an end-to-end MR image classifier capturing both the local structural information conveyed by image patches and the global structural information derived from all detected landmarks.

28. [IEEE JBHI] Mingxia Liu, Jun Zhang, Dong Nie, Pew-Thian Yap, Dinggang Shen. Anatomical Landmark-based Deep Feature Representation for MR Images in Brain Disease Diagnosis. IEEE Journal of Biomedical and Health Informatics, 22(5): 1476-1485, 2018. [pdf]

An anatomical landmark based deep feature learning framework was designed to extract patch-based representation from MRI for automatic diagnosis of Alzheimer’s disease (AD). We first identify discriminative anatomical landmarks from MR images in a data-driven manner, and then propose a convolutional neural network for patch-based deep feature learning.

27. [IEEE JBHI] Mingxia Liu, Yue Gao, Pew-Thian Yap, Dinggang Shen. Multi-hypergraph Learning for Incomplete Multi-modality Data. IEEE Journal of Biomedical and Health Informatics, 22(4): 1197-1208, 2018. [pdf] (Featured Article)

A multi-hypergraph learning method was proposed for dealing with incomplete multi-modality data. Specifically, we first construct multiple hypergraphs to represent the high-order relationships among subjects by dividing them into several groups according to the availability of their data modalities. A hypergraph-regularized transductive learning method is then applied to these groups for automatic diagnosis of brain diseases.

26. [IEEE TIP] Biao Jie#, Mingxia Liu#, Daoqiang Zhang, Dinggang Shen. Sub-network Kernels for Connectivity Networks in Brain Disease Classification. IEEE Transactions on Image Processing, 27(5): 2340-2353, 2018. [pdf]

A unique sub-network kernel was proposed for measuring the similarity between a pair of brain networks and then applied to brain disease classification. Different from current graph kernels, the proposed sub-network kernel not only takes into account the inherent characteristics of brain networks but also captures multi-level (from local to global) topological properties of nodes in brain networks, which are essential for defining the similarity measure between brain networks. We perform extensive experiments on subjects with baseline functional magnetic resonance imaging (fMRI) data obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. Experimental results demonstrate that the proposed method outperforms several state-of-the-art graph-based methods in MCI classification.

25. [MIA] Biao Jie, Mingxia Liu, Dinggang Shen. Integration of Temporal and Spatial Properties of Dynamic Connectivity Networks for Automatic Diagnosis of Brain Disease. Medical Image Analysis, 47: 81-94, 2018. [pdf]

This work defined a new measure to characterize the spatial variability of dynamic connectivity networks (DCNs) and proposed a novel learning framework that integrates both the temporal and spatial variabilities of DCNs for automatic brain disease diagnosis. Specifically, we first construct DCNs from rs-fMRI time series at successive non-overlapping time windows. Then, the spatial variability of a specific brain region is characterized by computing the correlation of the functional sequences (i.e., the changing profile of FC between a pair of brain regions across all time windows) associated with this region.

24. [IEEE TCBB] Wei Shao, Mingxia Liu, Ying-Ying Xu, Hong-Bin Shen, Daoqiang Zhang. An Organelle Correlation-guided Feature Selection Approach for Classifying Multi-label Subcellular Bio-Images. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 15(3): 828-838, 2018. [pdf]

This work proposed to capture structural correlation among different cellular compartments and designed an organelle structural correlation regularized feature selection method. We formulated the multi-label classification problem by adopting a group-sparsity regularizer to select common subsets of relevant features from different cellular compartments. We also added a cell structural correlation regularized Laplacian term, which utilizes the prior biological structural information to capture the intrinsic dependency among different cellular compartments.

23. [MIA] Chunfeng Lian, Jun Zhang, Mingxia Liu, Xiaopeng Zong, Weili Lin, Dinggang Shen. Multi-channel Multi-scale Fully Convolutional Network for 3D Perivascular Spaces Segmentation in 7T MR Images. Medical Image Analysis, 46: 106-117, 2018. [pdf]

A novel fully convolutional network (FCN), requiring no specified hand-crafted features or ROIs, is proposed for efficient segmentation of perivascular spaces (PVSs). Particularly, the original T2-weighted 7T magnetic resonance (MR) images are first filtered via a non-local Haar-transform-based line singularity representation method to enhance the thin tubular structures. Both the original and enhanced MR images are used as multi-channel inputs to complementarily provide detailed image information and enhanced tubular structural information for the localization of PVSs. Multi-scale features are then automatically learned to characterize the spatial associations between PVSs and adjacent brain tissues.

22. [HBM] Li Wang, Gang Li, Ehsan Adeli, Mingxia Liu, Zhengwang Wu, Yu Meng, Weili Lin, Dinggang Shen. Anatomy-guided Joint Tissue Segmentation and Topological Correction for 6-month Infant Brain MRI with Risk of Autism. Human Brain Mapping, 39(6): 2609-2623, 2018. [pdf]

An anatomy-guided joint tissue segmentation and topological correction framework was designed for isointense infant MRI. Particularly, we adopt a signed distance map with respect to the outer cortical surface as anatomical prior knowledge, and incorporate such prior information into the proposed framework to guide segmentation in ambiguous regions. Experimental results on subjects from the National Database for Autism Research demonstrate the method's effectiveness in correcting topological errors and some level of robustness to motion.

2017

21. [MIA] Mingxia Liu, Jun Zhang, Pew-Thian Yap, Dinggang Shen. View-aligned Hypergraph Learning for Alzheimer’s Disease Diagnosis with Incomplete Multi-modality Data. Medical Image Analysis, 36(2): 123-134, 2017. [pdf]

A view-aligned hypergraph learning method was created to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on the availability of different modalities and then construct a hypergraph in each view space based on sparse representation. A view-aligned hypergraph classification (VAHC) model is then proposed, by using a view-aligned regularizer to capture coherence among views, followed by a multi-view label fusion method for making a final decision.

20. [IEEE TIP] Jun Zhang#, Mingxia Liu#, Dinggang Shen. Detecting Anatomical Landmarks from Limited Medical Imaging Data using Two-stage Task-oriented Deep Neural Networks. IEEE Transactions on Image Processing, 26(10): 4753-4764, 2017. [pdf]

This work proposed a CNN based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, another CNN model was developed, which includes a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage and also b) several extra layers to jointly predict coordinates of multiple anatomical landmarks. 

19. [IEEE TBME] Biao Jie, Mingxia Liu, Jun Liu, Daoqiang Zhang, Dinggang Shen. Temporally-constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease. IEEE Transactions on Biomedical Engineering, 64(1): 238-249, 2017. [pdf]

A novel temporally-constrained group sparse learning method was developed for longitudinal analysis with data at multiple time points. Specifically, we learn a sparse linear regression model using imaging data from multiple time points, where a group regularization term is employed to group the weights for the same brain region across different time points together. To reflect the smooth changes between data from adjacent time points, two smoothness regularization terms were incorporated into the proposed objective function.

18. [IEEE JBHI] Jun Zhang, Mingxia Liu, Le An, Yaozong Gao, Dinggang Shen. Alzheimer’s Disease Diagnosis using Landmark-based Features from Longitudinal Structural MR Images. IEEE Journal of Biomedical and Health Informatics. 21(6): 1607-1616, 2017. [pdf]

An anatomical landmark-based feature extraction method was developed for AD diagnosis using longitudinal structural MR images, which does not require nonlinear registration or tissue segmentation in the application stage and is also robust to inconsistencies among longitudinal scans. Discriminative anatomical landmarks were automatically discovered from the whole brain using training images, followed by a fast landmark detection method for testing images, without the involvement of any nonlinear registration and tissue segmentation. Then, high-level statistical spatial features and contextual longitudinal features were extracted based on those detected landmarks, followed by a linear support vector machine.

17. [Neurocomputing] Mingxia Liu, Jun Zhang, Xiaochun Guo, Liujuan Cao. Hypergraph Regularized Sparse Feature Learning. Neurocomputing, 237(10): 185-192, 2017. [pdf]

A hypergraph regularized sparse feature learning method is proposed, where the high-order relationships among samples are modeled and incorporated into the learning process. Specifically, we first construct a hypergraph with multiple hyperedges to capture the high-order relationships among samples, followed by the computation of a hypergraph Laplacian matrix. Then, we propose a hypergraph regularization term, and a hypergraph regularized Lasso model.
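The hypergraph Laplacian referred to above is commonly computed from the vertex-hyperedge incidence matrix; below is a minimal sketch of one standard normalized formulation (Zhou et al.'s definition, which may differ in detail from the paper's; the toy hypergraph is illustrative):

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.

    H: incidence matrix, H[i, e] = 1 if sample i belongs to hyperedge e.
    w: hyperedge weights (uniform if None).
    """
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, float)
    dv = H @ w                                # vertex degrees
    de = H.sum(axis=0)                        # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - theta

# Three samples, two hyperedges: {0, 1} and {1, 2}.
H = np.array([[1, 0],
              [1, 1],
              [0, 1]], float)
L = hypergraph_laplacian(H)
print(np.allclose(L, L.T))  # → True: the Laplacian is symmetric
```

Adding a regularization term of the form x^T L x to a Lasso objective then encourages samples sharing hyperedges to receive similar predicted values.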

16. [Neuroinformatics] Bo Cheng, Mingxia Liu, Dinggang Shen, Daoqiang Zhang. Multi-domain Transfer Learning for Early Diagnosis of Alzheimer’s Disease. Neuroinformatics, 15(2): 115-132, 2017. [pdf]

This work considered the joint learning of tasks in multi-auxiliary domains and the target domain, and proposed a novel Multi-Domain Transfer Learning (MDTL) framework for early diagnosis of AD. Specifically, the proposed MDTL framework consists of two key components: 1) a multi-domain transfer feature selection (MDTFS) model that selects the most informative feature subset from multi-domain data, and 2) a multi-domain transfer classification (MDTC) model that can identify disease status for early AD detection.

15. [Medical Physics] Yulian Zhu, Li Wang, Mingxia Liu, Chunjun Qian, Ambereen Yousuf, Aytekin Oto, Dinggang Shen. MRI-based Prostate Cancer Detection with High-level Representation and Hierarchical Classification. Medical Physics, 44(3): 1028-1039, 2017. [pdf]

High-level feature representation was first learned by a deep learning network, where multiparametric MR images were used as the input data. Then, based on the learned high-level features, a hierarchical classification method was developed, where multiple random forest classifiers are iteratively constructed to refine the detection results of prostate cancer.

14. [SCI REP-UK] Le An, Ehsan Adeli, Mingxia Liu, Jun Zhang, Seong-Whan Lee, Dinggang Shen. A Hierarchical Feature and Sample Selection Framework and Its Application for Alzheimer’s Disease Diagnosis. Scientific Reports, 7: 45269, 2017. [pdf]

This work formulated a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilized both labeled and unlabeled data during training, making our method semi-supervised.

2016

13. [IEEE TPAMI] Mingxia Liu, Daoqiang Zhang, Songcan Chen, Hui Xue. Joint Binary Classifier Learning for ECOC-based Multi-class Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(11): 2335-2341, 2016. [pdf]

Error-correcting output coding (ECOC) is one of the most widely used strategies for multi-class problems: it decomposes the original multi-class problem into a series of binary sub-problems. Conventional ECOC methods train these binary classifiers independently. This paper mined and utilized the relationships among binary classifiers through a joint classifier learning method, integrating the training of the binary classifiers and the learning of the relationships among them into a unified objective function, solved by an efficient alternating optimization algorithm.
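For intuition only (this is the standard ECOC decoding step, not the paper's joint learning objective), a minimal sketch with an illustrative codeword matrix:

```python
import numpy as np

# Hypothetical 3-class problem with a 5-bit ECOC codeword matrix
# (one row per class, entries in {-1, +1}); the matrix values are illustrative.
M = np.array([
    [+1, -1, +1, -1, +1],   # class 0
    [-1, +1, +1, -1, -1],   # class 1
    [-1, -1, -1, +1, +1],   # class 2
])

def ecoc_decode(bit_predictions, codebook):
    """Assign the class whose codeword has minimal Hamming distance
    to the vector of binary classifier outputs."""
    dists = np.sum(bit_predictions != codebook, axis=1)
    return int(np.argmin(dists))

# One bit is flipped relative to class 1's codeword, yet the
# error-correcting property still recovers class 1.
preds = np.array([-1, +1, +1, +1, -1])
print(ecoc_decode(preds, M))  # → 1
```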

12. [IEEE TMI] Mingxia Liu, Daoqiang Zhang, Dinggang Shen. Relationship Induced Multi-template Learning for Diagnosis of Alzheimer’s Disease and Mild Cognitive Impairment. IEEE Transactions on Medical Imaging, 35(6): 1463-1474, 2016. [pdf]

A novel relationship induced multi-template learning method was proposed for automatic brain disease diagnosis, by explicitly modeling structural information in the multi-template data. Specifically, we first nonlinearly register each brain’s magnetic resonance (MR) image separately onto multiple pre-selected templates, and then extract multiple sets of features for this MR image. A novel feature selection algorithm was designed to model the relationships among templates and among individual subjects, followed by an ensemble classification strategy for automated disease diagnosis. 

11. [IEEE TCYB] Mingxia Liu, Daoqiang Zhang. Pairwise Constraint-guided Sparse Learning for Feature Selection. IEEE Transactions on Cybernetics, 46(1): 298-310, 2016. [pdf]

A pairwise constraint-guided sparse learning (CGS) method was developed for feature selection, where must-link and cannot-link constraints serve as discriminative regularization terms that directly capture the local discriminative structure of the data. Two variants of CGS were further developed: 1) semi-supervised CGS, which utilizes labeled data, pairwise constraints, and unlabeled data, and 2) ensemble CGS, which uses an ensemble of pairwise constraint sets. An efficient optimization algorithm was also developed for solving the proposed problem.
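A minimal sketch of the underlying idea, assuming a simple ratio-style constraint score (the paper's CGS formulation is a sparse learning model and differs in detail):

```python
import numpy as np

def constraint_score(X, must_link, cannot_link):
    """Per-feature ratio of must-link scatter to cannot-link scatter.
    A lower score means the feature keeps must-link pairs close and
    cannot-link pairs far apart."""
    ml = sum((X[i] - X[j]) ** 2 for i, j in must_link)
    cl = sum((X[i] - X[j]) ** 2 for i, j in cannot_link)
    return ml / cl

# Toy data: feature 0 separates the two implied classes, feature 1 is noise.
X = np.array([[0.0, 3.0], [0.1, 0.0], [5.0, 3.1], [5.1, 0.2]])
scores = constraint_score(X, must_link=[(0, 1), (2, 3)],
                          cannot_link=[(0, 2), (1, 3)])
```

Here `scores[0] < scores[1]`, i.e., the discriminative feature is preferred.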

10. [IEEE TBME] Mingxia Liu, Ehsan Adeli-Mosabbeb, Daoqiang Zhang, Dinggang Shen. Inherent Structure Based Multi-view Learning with Multi-template Feature Representation for Alzheimer’s Disease Diagnosis. IEEE Transactions on Biomedical Engineering, 63(7): 1473-1482, 2016. [pdf]

An inherent structure-based multi-view learning method was proposed using multiple templates for AD/MCI classification. Multi-view feature representations were first extracted using multiple selected templates, and subjects within each class were then clustered into several subclasses (i.e., clusters) in each view space. These subclasses were encoded by considering both their original class information and their own distribution information, followed by a multi-task feature selection model and a multi-view classification model.

9. [Neurocomputing] Mingxia Liu, Daoqiang Zhang. Feature Selection with Effective Distance. Neurocomputing, 215: 100-109, 2016. [pdf]

To reflect the dynamic structure of data, this paper proposed a set of effective distance-based feature selection methods, where a probabilistically motivated effective distance is used to measure the similarity between samples. Specifically, a sparse representation-based algorithm was first developed to compute the effective distance. Three new filter-type unsupervised feature selection methods were then built on this distance: an effective distance-based Laplacian Score and two effective distance-based Sparsity Scores.
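A minimal sketch of the scoring side, assuming the classic Laplacian Score formula with a pluggable pairwise distance matrix (the sparse-representation-based effective distance would simply be substituted for the usual Euclidean distance):

```python
import numpy as np

def laplacian_score(X, dists, t=10.0):
    """Filter-type feature scoring: a lower score means the feature better
    preserves the local graph structure. `dists` is any pairwise distance
    matrix, so an effective distance can replace the Euclidean one."""
    S = np.exp(-(dists ** 2) / t)        # affinity matrix from distances
    d = S.sum(axis=1)                    # degree vector
    L = np.diag(d) - S                   # graph Laplacian
    scores = np.empty(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r] - (X[:, r] @ d) / d.sum()   # remove weighted mean
        scores[r] = (f @ L @ f) / (f @ (d * f))
    return scores
```

On data with two clusters separated along one feature, that feature receives a lower (better) score than a pure-noise feature.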

8. [Bioinformatics] Wei Shao, Mingxia Liu, Daoqiang Zhang. Human Cell Structure-driven Model Construction for Predicting Protein Subcellular Location from Biological Images. Bioinformatics, 32(1): 114-121, 2016. [pdf]

A cell structure-driven classifier construction method was created by employing the prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error-correcting output coding (ECOC) framework. Then, multiple classifiers were constructed corresponding to the columns of the ECOC codeword matrix using a multi-kernel support vector machine (SVM) classification approach.

7. [BIB] Chen Zu, Biao Jie, Mingxia Liu, Songcan Chen, Dinggang Shen, Daoqiang Zhang. Label-aligned Multi-task Feature Learning for Multimodal Classification of Alzheimer’s Disease and Mild Cognitive Impairment. Brain Imaging and Behavior, 10(4): 1148-1159, 2016. [pdf]

A multimodal learning method was developed for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. The proposed method includes two subsequent components, i.e., label-aligned multi-task feature selection and multimodal classification.

2015

6. [HBM] Mingxia Liu, Daoqiang Zhang, Dinggang Shen. View-centralized Multi-atlas Classification for Alzheimer’s Disease Diagnosis. Human Brain Mapping, 36(5): 1847-1865, 2015. [pdf]

A view-centralized multi-atlas classification method was proposed to better exploit useful information in multiple feature representations derived from different atlases. All brain images were first registered onto multiple atlases individually to extract feature representations in each atlas space. The proposed view-centralized multi-atlas feature selection method was then used to select the most discriminative features from each atlas with extra guidance from other atlases, followed by an SVM classifier in each atlas space.

5. [BIB] Bo Cheng, Mingxia Liu, Heung-Il Suk, Dinggang Shen, Daoqiang Zhang. Multimodal Manifold-regularized Transfer Learning for MCI Conversion Prediction. Brain Imaging and Behavior, 1-14, 2015. [pdf]

A multimodal manifold-regularized transfer learning (M2TL) method was designed to jointly utilize samples from an auxiliary domain as well as unlabeled samples to boost the performance of MCI conversion prediction in the target domain. Specifically, the proposed M2TL method includes two components: 1) a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain and the target domain; and 2) a semi-supervised multimodal manifold-regularized least-squares classification method, where target-domain samples, auxiliary-domain samples, and unlabeled samples can be jointly used for model learning.
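The kernel-based maximum mean discrepancy criterion in component 1) can be illustrated with the standard biased MMD estimate under an RBF kernel (a generic sketch, not the paper's full model):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between sample sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared maximum mean discrepancy between
    samples X and Y: near zero when the two samples come from the same
    distribution, and growing with distributional mismatch."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())
```

Minimizing such a term while training encourages the auxiliary-domain and target-domain representations to match in distribution.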

4. [IEEE TBME] Bo Cheng#, Mingxia Liu#, Daoqiang Zhang, BC Munsell, Dinggang Shen. Domain Transfer Learning for MCI Conversion Prediction. IEEE Transactions on Biomedical Engineering, 62(7): 1805-1817, 2015. [pdf]

A domain transfer learning method was proposed for MCI conversion prediction, which can use data from both the target domain (i.e., MCI) and auxiliary domains (i.e., AD and NC). Three key components were included: 1) a domain transfer feature selection component that selects the most informative feature-subset from both target domain and auxiliary domains from different imaging modalities; 2) a domain transfer sample selection component that selects the most informative sample-subset from the same target and auxiliary domains from different data modalities; and 3) a domain transfer support vector machine classification component that fuses the selected features and samples to separate MCI-C and MCI-NC patients.

2014

3. [IEEE TR] Mingxia Liu, Linsong Miao, Daoqiang Zhang. Two-stage Cost-sensitive Learning for Software Defect Prediction. IEEE Transactions on Reliability, 63(2): 676-686, 2014. [pdf]

A two-stage cost-sensitive learning (TSCS) framework was developed for defect prediction, utilizing cost information not only in the classification stage but also in the feature selection stage. For the feature selection stage, three novel cost-sensitive feature selection algorithms were developed, namely Cost-Sensitive Variance Score, Cost-Sensitive Laplacian Score, and Cost-Sensitive Constraint Score, by incorporating cost information into traditional feature selection algorithms. Experiments suggested that the proposed cost-sensitive feature selection methods outperform traditional cost-blind methods, validating the efficacy of using cost information for feature selection.
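As a hedged illustration of the idea behind such scores (the exact weighting in the paper may differ), a cost-weighted variance score could look like:

```python
import numpy as np

def cost_sensitive_variance_score(X, y, cost):
    """Cost-weighted variance score: each sample is weighted by the cost
    of misclassifying its class, so features that vary on costly samples
    (e.g., defective modules) rank higher. Higher score = more useful."""
    w = np.array([cost[c] for c in y], dtype=float)
    w /= w.sum()
    mu = w @ X                       # cost-weighted feature means
    return w @ (X - mu) ** 2         # cost-weighted variance per feature
```

With a higher misclassification cost on the defective class, a feature that varies on defective samples outranks one that only varies on non-defective samples.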

2. [Neurocomputing] Mingxia Liu, Daoqiang Zhang, Songcan Chen. Attribute Relation Learning for Zero-shot Classification. Neurocomputing, 139: 34-46, 2014. [pdf]

This work proposed a unified framework that learns the attribute–attribute relation and the attribute classifiers jointly to boost the performances of attribute predictors. Specifically, we unify the attribute relation learning and the attribute classifier design into a common objective function, through which we can not only predict attributes, but also automatically discover the relation between attributes from data. Furthermore, based on the afore-learned attribute relation and classifiers, we develop two types of learning schemes for zero-shot classification. 

1. [IJPRAI] Mingxia Liu, Daoqiang Zhang. Sparsity Score: A Novel Graph-preserving Feature Selection Method. International Journal of Pattern Recognition and Artificial Intelligence, 28(4): 1450009, 2014. [pdf]

A general graph-preserving feature selection framework was developed in this work, where the graphs to be preserved vary in their specific definitions; a number of existing filter-type feature selection algorithms can be unified within this framework. Based on the framework, a new filter-type feature selection method called Sparsity Score (SS) was proposed, aiming to preserve the structure of a pre-defined l1 graph that is proven robust to data noise. Here, a modified sparse representation based on an l1-norm minimization problem was used to determine the graph adjacency structure and the corresponding affinity weight matrix simultaneously. Furthermore, a variant of SS called supervised SS (SuSS) was also proposed, where the l1 graph to be preserved is constructed using only data points from the same class.
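A minimal sketch of l1-graph construction via sparse representation, assuming a simple ISTA Lasso solver (illustrative only; the paper's modified sparse representation may differ):

```python
import numpy as np

def lasso_ista(D, y, lam=0.1, n_iter=500):
    """Minimal ISTA solver for min_w 0.5*||y - D w||^2 + lam*||w||_1,
    used to obtain the sparse codes of one sample over the others."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        w = w - D.T @ (D @ w - y) / L        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-threshold
    return w

def l1_graph(X, lam=0.1):
    """Affinity matrix S where row i holds the sparse-representation
    coefficients of sample i over all remaining samples."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        S[i, idx] = lasso_ista(X[idx].T, X[i], lam)
    return S
```

If one sample nearly duplicates another, its sparse code concentrates on that duplicate, so the l1 graph adapts its adjacency structure to the data automatically.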

COPYRIGHT NOTICE: These materials are presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author’s copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.

*Corresponding Author; #Co-First Author.