Detection of Autism using Magnetic Resonance Imaging data and Graph Convolutional Neural Networks

Autism, or Autism Spectrum Disorder (ASD), is a developmental disability which generally begins during childhood and may last throughout an individual's lifetime. It is generally associated with difficulty in communication and social interaction along with repetitive behavior. One out of every 59 children in the United States is diagnosed with ASD [11], and almost 1% of the world population has ASD [12]. ASD can be difficult to diagnose as there is no definitive medical test for this disorder. The aim of this thesis is to extract features from resting-state functional Magnetic Resonance Imaging (rs-fMRI) data, as well as some personal information provided about each subject, to train variations of a Graph Convolutional Neural Network to detect whether a subject is autistic or neurotypical. The time series information as well as the connectivity information of specific parts of the brain are the features used for analysis. The thesis converts fMRI data into a graph representation in which each vertex represents a part of the brain and each edge represents the connectivity between two parts of the brain. New adjacency matrix filters were added to the Graph CNN model, and the model was altered to add a time dimension.
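The graph construction described above can be sketched in a few lines. This is a hypothetical illustration, not code from the thesis: the toy ROI time series, the Pearson-correlation connectivity measure, and the 0.5 threshold are all assumptions.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length series (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def adjacency_from_timeseries(series, threshold=0.5):
    """Binary adjacency: edge i-j where |correlation| exceeds the threshold."""
    n = len(series)
    return [[1 if i != j and abs(pearson(series[i], series[j])) > threshold else 0
             for j in range(n)]
            for i in range(n)]

rois = [
    [0.1, 0.5, 0.3, 0.9, 0.2],   # ROI 0
    [0.2, 0.6, 0.4, 1.0, 0.3],   # ROI 1 (tracks ROI 0 closely)
    [0.9, 0.1, 0.8, 0.0, 0.7],   # ROI 2 (anti-correlated with ROI 0)
]
A = adjacency_from_timeseries(rois)
```

Each row/column of `A` is then a graph vertex (a brain region), and the time series themselves can serve as vertex features for the Graph CNN.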

Two-phase multi-model automatic brain tumour diagnosis system from magnetic resonance images using convolutional neural networks

CNN is a deep learning model that has improved over the past two decades. CNNs can learn features automatically from input data, especially images, as is the case in this research [37]. The convolutional layers convolve the input image with kernels (weights) to obtain a feature map. The weights of the kernels connect the feature map units to the previous layer. AlexNet and two other CNN models, namely VGG-16 and VGG-19, were used and evaluated in the tumour detection phase. However, the focus of the theoretical foundation section is on the AlexNet model. The AlexNet model was selected due to its flexibility to be modified, its ability to reduce overfitting using a dropout layer, and its capability to train faster through using a rectified linear unit (ReLU).
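The convolution-plus-ReLU step described above can be sketched as follows. This is an illustrative toy, not AlexNet itself: the 3x3 image and the hand-picked 2x2 kernel are assumptions, whereas in a CNN the kernel weights are learned.

```python
def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (cross-correlation form) followed by ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            s = 0.0
            for i in range(kh):
                for j in range(kw):
                    s += image[y + i][x + j] * kernel[i][j]
            out[y][x] = max(0.0, s)   # ReLU activation, as in AlexNet
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
diag = [[1, 0], [0, 1]]   # toy kernel; a learned weight matrix in practice
fmap = conv2d_valid(img, diag)
```

Sliding the kernel over the image produces one feature map per kernel; stacking many kernels gives the multi-channel feature maps of a convolutional layer.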

A Review on Brain Tumour Detection using Magnetic Resonance Imaging

The studies reviewed above demonstrate that many strategies are used for brain tumour segmentation. The objective of every technique is an accurate and efficient framework that can find a tumour in minimal time with maximum accuracy. Clustering algorithms are frequently used for segmentation in these studies. Comparing Fuzzy c-means and K-means clustering shows that both give approximately the same result, but Fuzzy c-means needs more computational time than K-means. Fuzzy c-means can separate different tissue types using a small number of clusters, whereas K-means uses a large number of clusters for discrete tissue types, so the segmentation accuracy of both is image independent. Fuzzy c-means detects malignant tumours more precisely than K-means by retaining more information from the original image. Convolutional neural networks are another deep neural approach to segmentation. This technique is computationally more efficient compared with other existing strategies. A CNN is trained directly on image modalities, so it learns complex features/representations directly from the data.
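The K-means intensity clustering that the review compares can be sketched for the two-cluster case (Lloyd's algorithm on scalar intensities). This is a minimal illustration on assumed toy data; a real segmentation pipeline would run it over all image voxels, and Fuzzy c-means would additionally keep soft cluster memberships.

```python
def kmeans_1d(values, iters=20):
    """Two-cluster K-means on scalar intensities; returns the two centers."""
    c0, c1 = min(values), max(values)   # simple initialization (assumption)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        if not g0 or not g1:            # degenerate split: stop early
            break
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1
```

A voxel is then labeled with whichever center its intensity is closer to, giving a two-tissue segmentation.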

Disease Detection of Plants using Deep Learning and Convolutional Neural Networks

Image classification and object detection have been handled by Convolutional Neural Networks, a type of Deep Neural Network developed to resemble the human visual system, and many CNN models have been proposed for object recognition. LeNet [10] is used in the proposed model. The model is trained on a dataset downloaded from an open-source repository [11]. The paper is organized as follows: Section II describes previous work in this area; Section III presents the steps and materials used to perform the experiment; the results obtained are reported in Section IV, which concludes with how this model can advance its methods for future improvement.

Object detection and classification in aquatic environment using convolutional neural networks

Girshick et al. [18] are credited with a major advance in convolutional neural networks for detection problems when, in 2014, they presented the region-based Convolutional Neural Network (R-CNN) method. At its introduction it improved on the then-best accuracy of 40.9% on the PASCAL VOC 2012 dataset [19], reaching 53.3%. A year later the author proposed an improvement of the original method and presented Fast R-CNN [20]. Ren et al. [21] introduced Faster R-CNN, currently considered the best implementation of convolutional neural networks based on the idea of region proposals. In 2016, Redmon et al. [22] presented the YOLO (You Only Look Once) method, which, unlike the previously presented methods, frames detection as a regression problem and thereby achieves a substantial speed-up. The authors later presented the YOLO9000 model [23], which improves the accuracy and speed of the original model. In 2016, Liu et al. [24] presented the SSD (Single Shot MultiBox Detector) method, which follows YOLO and proposes a single convolutional neural network for detection.

Localization Using Convolutional Neural Networks

Three CNNs were tested in unison while varying other parameters: VGG16, VGG19, and InceptionV3 (Figures 9, 10, 11). InceptionV3 achieved 77% testing accuracy while VGG19 achieved 87.8%. VGG16, with its slightly smaller convolutional network, came in at a testing accuracy of ~85.7%. The higher performance of the middle-sized convolutional network suggests that VGG16 was too simple and InceptionV3 too complex; InceptionV3 may have performed more memorization than generalization. Observing an even simpler network such as AlexNet might have yielded interesting results, since VGG16 and VGG19 were comparable despite VGG19's deeper structure. Locations 7, 8, and 9 were often the failed test frames. Locations 5, 6, 7, and 8 were often mistaken for one another; similarly, locations 1, 2, 3, and 4 were often confused, as were locations 8 and 9. These results are to be expected due to the symmetry of the recorded building: locations 1-4 and 5-8 are in separate courtyards, and 9-10 are both hallways, so the errors line up with expectations for this environment. Location 3 images seem to err exclusively toward location 4, which points to a possible lack of data for a certain angle of location 3. In fact, a majority of the error probabilities are greater than 80%. This suggests that each of these locations has a section with too little data, so another location's video supersedes it based on the features the network learned.

Automated cardiovascular magnetic resonance image analysis with fully convolutional networks

An estimated 17.7 million people died from cardiovascular disease (CVD) in 2015, representing 31% of all global deaths [1]. More people die annually from CVD than any other cause. Technological advances in medical imaging have led to a number of options for non-invasive investigation of CVD, including echocardiography, computed tomography (CT), cardiovascular magnetic resonance (CMR) etc., each having its own advantages and disadvantages. Due to its good image quality, excellent soft tissue contrast and absence of ionising radiation, CMR has established itself as the non-invasive gold standard for assessing cardiac chamber volume and mass for a wide range of CVD [2–4]. To derive quantitative measures such as volume and mass, clinicians have been relying on manual approaches to trace the cardiac chamber contours. It typically takes a trained expert 20 minutes to analyse images of a single subject at two time points of the cardiac cycle, end-diastole (ED) and end-systole (ES). This is time consuming, tedious and prone to subjective errors.

Motion Detection and Correction in Magnetic Resonance Imaging

The analytical Shepp-Logan head phantom (Fig. 4.1) is useful for simulation purposes. The phantom was originally designed to be ‘a reasonable model of a section of the skull’ [96]. The bright region on the outer edge, for example, is analogous to bone, while the small circular feature at the bottom centre represents a tumor. The grey levels in the original version, shown in Fig. 4.1(a), were chosen to represent the radiographic density variations in the human brain, as the phantom was originally developed for X-ray imaging. In MRI, radiographic density is not the measured parameter and therefore the original grey levels are inappropriate. For this research, the modifications made by Pan and Kak [97] are useful: these provide improved contrast for viewing purposes. The phantom shown in Fig. 4.1(b) is generated using the parameters in Table 4.1 which are taken directly from [97]. Unless stated otherwise, the analytical phantom used throughout this thesis is generated using these parameters, although in some cases the dimensions are scaled.
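The "sum of ellipses" construction behind an analytical phantom like Shepp-Logan can be sketched as below: the value at any point is the sum of the intensities of the ellipses containing it. The two-ellipse parameter set here is purely illustrative; the actual geometry and grey levels come from Table 4.1 / [97].

```python
import math

def phantom_value(x, y, ellipses):
    """Analytical phantom: ellipses is a list of (cx, cy, a, b, theta_deg, rho)."""
    total = 0.0
    for cx, cy, a, b, theta, rho in ellipses:
        t = math.radians(theta)
        # rotate the query point into the ellipse's own axes
        xr = (x - cx) * math.cos(t) + (y - cy) * math.sin(t)
        yr = -(x - cx) * math.sin(t) + (y - cy) * math.cos(t)
        if (xr / a) ** 2 + (yr / b) ** 2 <= 1.0:
            total += rho
    return total

ellipses = [
    (0.0, 0.0, 0.69, 0.92, 0.0, 2.0),     # outer "skull" (illustrative values)
    (0.0, -0.02, 0.66, 0.87, 0.0, -1.0),  # interior, lowers the grey level
]
```

Because the phantom is defined analytically, it can be evaluated at any point (or in k-space) without discretization, which is what makes it useful for simulation.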

Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks

We apply GCNs to the semantic dependency graphs and experiment on the English–German language pair (WMT16). We observe an improvement over the semantics-agnostic baseline (a BiRNN encoder; 23.3 vs 24.5 BLEU). As we use exactly the same modeling approach as in the syntactic method of Bastings et al. (2017), we can easily compare the influence of the types of linguistic structures (i.e., syntax vs. semantics). We observe that when using full WMT data we obtain better results with semantics than with syntax (23.9 BLEU for syntactic GCN). Using syntactic and semantic GCNs together, we obtain a further gain (24.9 BLEU), which suggests the complementarity of information encoded by the syntactic and semantic representations.

Geometric Hawkes Processes with Graph Convolutional Recurrent Neural Networks

Variations of Architecture Setting. We compare different architecture settings of our model, and the results are presented in fig. 1(c). First of all, an RNN structure such as LSTM or GRU is essential to learn the diffusion process of coefficients. The LSTM is more effective compared to GRU (Chung et al. 2014) because LSTM can remember more historical information. Besides, the results show that adding more GCN layers enhances the performance of modeling Hawkes processes, which indicates that a deeper network may extract more useful features. As data size increases, it is necessary to build deeper architectures. More extensive studies on the architecture of GCN in different applications can be found in (Kipf and Welling 2017; Monti, Bronstein, and Bresson 2017). In our experiment, we found the structure of two GCN layers plus one LSTM layer works best.
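A single graph-convolution layer of the kind stacked here can be sketched as H' = ReLU(Â H W), with Â a normalized adjacency with self-loops, in the spirit of Kipf and Welling (2017). The tiny two-node graph, identity features/weights, and simple row normalization below are assumptions for illustration only.

```python
def matmul(A, B):
    """Dense matrix product on nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(A_norm @ H @ W), A_norm = row-normalized (A + I)."""
    n = len(A)
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_hat]
    A_norm = [[A_hat[i][j] / deg[i] for j in range(n)] for i in range(n)]
    Z = matmul(matmul(A_norm, H), W)
    return [[max(0.0, v) for v in row] for row in Z]

# two connected nodes, identity features and weights (toy example)
H_out = gcn_layer([[0, 1], [1, 0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Stacking two such layers and feeding the node states into an LSTM over time would mirror the "two GCN layers plus one LSTM layer" structure reported above.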

Bayesian Graph Convolutional Neural Networks for Semi-Supervised Classification

Recently, techniques for applying convolutional neural networks to graph-structured data have emerged. Graph convolutional neural networks (GCNNs) have been used to address node and graph classification and matrix completion. Although the performance has been impressive, the current implementations have limited capability to incorporate uncertainty in the graph structure. Almost all GCNNs process a graph as though it is a ground-truth depiction of the relationship between nodes, but often the graphs employed in applications are themselves derived from noisy data or modelling assumptions. Spurious edges may be included; other edges may be missing between nodes that have very strong relationships. In this paper we adopt a Bayesian approach, viewing the observed graph as a realization from a parametric family of random graphs. We then target inference of the joint posterior of the random graph parameters and the node (or graph) labels. We present the Bayesian GCNN framework and develop an iterative learning procedure for the case of assortative mixed-membership stochastic block models. We present the results of experiments that demonstrate that the Bayesian formulation can provide better performance when there are very few labels available during the training process.

Deep Convolutional Neural Networks for pedestrian detection

plied pixel-by-pixel or window-by-window. Second, the features for any given spatial window are fed to a classifier that assesses whether such a region depicts a human. Furthermore, a scale-space is typically used in order to detect pedestrians at different scales, that is, distances with respect to the sensing device. In 2003, Viola and Jones [26] proposed a pedestrian detection system based on box-shaped filters, which can be applied efficiently by resorting to integral images. The features, i.e. the result of the convolution of a window with a given box-shaped filter, are then fed to a classifier based on AdaBoost [10]. Dalal and Triggs refined the process, proposing Histogram of Gradients (HOG) [3] as local image features, to be fed to a linear Support Vector Machine aimed at identifying windows containing humans. Such features proved to be quite effective for the task at hand, representing the basis for more complex algorithms. Felzenszwalb et al. [9] further improved the detection accuracy by combining the Histogram of Gradients with a Deformable Part Model. In particular, such an approach aims at identifying a human shape as a deformable combination of its parts, such as the trunk, the head, etc. Each body part has peculiar characteristics in terms of its appearance and can be effectively recognized by resorting to the HOG features and a properly trained classifier. Such a model proved to be more robust with respect to body shape and pose and to partial occlusions. Dollár et al. [6] propose to use features extracted from multiple different channels. Each channel is defined as a linear or non-linear transformation of the input pixel-level representation. Channels can capture different local properties of the image such as corners, edges, intensity, color.
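The integral-image trick that makes the Viola-Jones box-shaped filters efficient can be sketched directly: after one pass over the image, the sum over any axis-aligned box (and hence any box-filter response) costs four lookups. The 2x2 test image is an assumption for illustration.

```python
def integral_image(img):
    """Summed-area table with a zero border row/column for easy indexing."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using four integral-image lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

A box-shaped (Haar-like) filter response is then just a difference of two or three such box sums, which is why these features can be evaluated densely at every window and scale.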

Data Augmentation for Time Series Classification using Convolutional Neural Networks

Model. Our CNN model, denoted t-leNet in the following, is a time-series-specific version of the leNet model [11]. leNet has proved successful for image classification. It is made of two convolution layers, each followed by a sub-sampling step performed through max pooling. Finally, fully connected layers enable the extracted features to be matched with the class labels to be predicted. The convolutional part of our model is presented in Fig. 1: a first convolution with 5 filters of temporal span equal to 5 is used, followed by a max pooling of size 2. Then, a second convolution layer is made of 20 filters with the same time span as the previous ones, and a final max pooling of size 4 is used.
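Given the layer sizes above, the temporal length of the feature map can be tracked layer by layer. The sketch below assumes 'valid' convolutions and non-overlapping pooling, which the excerpt does not state explicitly.

```python
def t_lenet_feature_len(n, conv_span=5, pools=(2, 4)):
    """Temporal length after conv(span 5) -> pool 2 -> conv(span 5) -> pool 4.

    Assumes 'valid' convolutions (no padding, stride 1) and non-overlapping
    max pooling; both are assumptions, not stated in the paper.
    """
    n = n - conv_span + 1   # first temporal convolution (5 filters)
    n = n // pools[0]       # max pooling of size 2
    n = n - conv_span + 1   # second convolution (20 filters, same span)
    n = n // pools[1]       # final max pooling of size 4
    return n
```

For a series of length 128 this gives 128 → 124 → 62 → 58 → 14 time steps (times 20 channels) entering the fully connected layers.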

End-to-end Convolutional Neural Networks for Intent Detection

Spoken dialogue systems are agents intended to help users access information efficiently through speech interactions. Creating such a system has been a challenge for both academic investigations and commercial applications for decades. Spoken language understanding (SLU) is one of the essential components in spoken dialogue systems [1]. SLU aims to form a semantic frame that captures the semantics of user utterances or queries. The three major tasks in an SLU system are domain classification, intent detection, and slot filling. Intent detection can be treated as a semantic utterance classification problem [5,10]. Intent detection solutions classify speakers' intent and extract semantic concepts as constraints for natural language. Take a weath-

Event Detection and Domain Adaptation with Convolutional Neural Networks

We study the event detection problem using convolutional neural networks (CNNs) that overcome the two fundamental limitations of the traditional feature-based approaches to this task: complicated feature engineering for rich feature sets and error propagation from the preceding stages which generate these features. The experimental results show that the CNNs outperform the best reported feature-based systems in the general setting as well as the domain adaptation setting without resorting to extensive external resources.


Deep Convolutional Neural Networks for Fire Detection in Images

Abstract. Detecting fire in images using image processing and computer vision techniques has gained a lot of attention from researchers during the past few years. Indeed, with sufficient accuracy, such systems may outperform traditional fire detection equipment. One of the most promising techniques used in this area is Convolutional Neural Networks (CNNs). However, previous research on fire detection with CNNs has only been evaluated on balanced datasets, which may give misleading information on real-world performance, where fire is a rare event. Indeed, as demonstrated in this paper, a traditional CNN performs relatively poorly when evaluated on the more realistically balanced benchmark dataset provided in this paper. We therefore propose to use even deeper Convolutional Neural Networks for fire detection in images, and to enhance these with fine-tuning based on a fully connected layer. We use two pretrained state-of-the-art Deep CNNs, VGG16 and Resnet50, to develop our fire detection system. The Deep CNNs are tested on our imbalanced dataset, which we have assembled to replicate real-world scenarios. It includes images that are particularly difficult to classify and is deliberately unbalanced by including significantly more non-fire images than fire images. The dataset has been made available online. Our results show that adding fully connected layers for fine-tuning indeed does increase accuracy; however, this also increases training time. Overall, we found that our deeper CNNs give good performance on a more challenging dataset, with Resnet50 slightly outperforming VGG16. These results may thus lead to more successful fire detection systems in practice.

Detection and Recognition of License Plates by Convolutional Neural Networks

After detecting the vehicles, the output image from positive detections is resized before being fed to the WPOD-NET license plate detector [4]. We used this network as a semi black box, performing only a small change and refinement to the threshold value and the bounding-box size of the license plate. To understand this network, note that license plates mostly have rectangular, planar shapes and are attached to vehicles for identification purposes. S. M. Silva and C. R. Jung proposed a novel CNN called the Warped Planar Object Detection Network. As they put it: "This network has learned to detect license plates in different distortions and regresses coefficients of an affine transformation that 'unwarps' the distorted license plate into a rectangular shape resembling a frontal view" [4]. WPOD-NET was built using insights from YOLO, the Single Shot MultiBox Detector (SSD) [50], and Spatial Transformer Networks (STN) [49]. Fast multiple-object detection and recognition can be done by utilizing YOLO and SSD; however, they are unable to perform spatial transformations and only generate rectangular bounding boxes for each detection. On the other hand, STN can detect non-rectangular areas, but it is not able to handle multiple transformations at the same time and can perform only a single spatial transformation over the entire input [4]. The detection process using WPOD-NET is illustrated in Fig. 3.8.
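The affine "unwarp" idea can be sketched as sampling each pixel of the rectified plate from the distorted input through six affine coefficients. This is an illustration only: the nearest-neighbour sampling and the example coefficients are assumptions; WPOD-NET predicts the coefficients per detection and typically uses smoother interpolation.

```python
def affine_unwarp(src, coeffs, out_h, out_w):
    """Sample a rectified out_h x out_w image from src via an affine map.

    coeffs = (a, b, tx, c, d, ty): output pixel (x, y) is read from source
    position (a*x + b*y + tx, c*x + d*y + ty); out-of-range samples become 0.
    """
    a, b, tx, c, d, ty = coeffs
    out = []
    for y in range(out_h):
        row = []
        for x in range(out_w):
            sx = int(round(a * x + b * y + tx))
            sy = int(round(c * x + d * y + ty))
            if 0 <= sy < len(src) and 0 <= sx < len(src[0]):
                row.append(src[sy][sx])   # nearest-neighbour sampling
            else:
                row.append(0)
        out.append(row)
    return out
```

With the identity coefficients (1, 0, 0, 0, 1, 0) the output reproduces the input; shear and translation terms model the plate's perspective distortion.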

Deep Learning on Graphs using Graph Convolutional Networks

Graphs are a powerful way to model network data, with the objects as nodes and the relationships between the various objects as links. Such graphs contain a plethora of valuable information about the underlying data which can be extracted, analyzed, and visualized using Machine Learning (ML). The challenge to this task is that graphs are non-Euclidean structures, which means that they cannot be directly used with ML techniques because ML techniques only work with Euclidean structures like grids or sequences. In order to overcome this challenge, the graph structure first needs to be encoded into an equivalent Euclidean representation in the form of a low-dimensional vector. This low-dimensional vector is called an embedding vector, and the encoding process is called node embedding. Traditionally, user-defined heuristics and matrix-factorization-based methods were used for node embedding. However, these methods are slow and perform poorly on large and complex graphs. In recent years, various ML techniques have been developed that learn the encoding of the graph automatically, and in a faster and more efficient way. A few of these techniques, called Graph Convolutional Networks (GCNs), use variants of convolutional neural networks adapted for graphs, and are implemented using deep neural networks. The aim of this project is two-fold. Firstly, to develop a unified framework focusing on three major GCN techniques in order to analyze, evaluate, and compare their performance on select benchmark datasets for the task of node classification. And secondly, to implement a new aggregator for one of the techniques — GraphSAGE, and compare the performance of the aggregator with the existing GCN methods as well as the other aggregators provided by GraphSAGE.
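A mean aggregator of the kind implemented for GraphSAGE can be sketched as follows: a node's embedding is built from its own features concatenated with the mean of its neighbours' features. The learned weight matrix and nonlinearity are omitted, and the tiny feature/neighbour dictionaries in the test are assumptions.

```python
def mean_aggregate(node, neighbours, features):
    """One GraphSAGE-style mean-aggregation step (weights/activation omitted).

    neighbours: dict node -> list of neighbour ids
    features:   dict node -> feature vector (list of floats)
    Returns the node's own features concatenated with the neighbourhood mean.
    """
    feats = [features[n] for n in neighbours[node]]
    dim = len(features[node])
    mean = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
    return features[node] + mean   # concatenation, as in GraphSAGE
```

In the full algorithm this concatenated vector is multiplied by a learned weight matrix and passed through a nonlinearity, and the step is repeated for K hops; swapping the mean for another pooling function yields the alternative aggregators the project compares.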

Glioma Grading Using Structural Magnetic Resonance Imaging and Molecular Data

between our study and previous studies are not possible without having access to advanced MRI data of a similar patient cohort, as reported in these studies. Nevertheless, we show in Table 4 the state-of-the-art studies on tumor grading that have used advanced MRI modalities along with conventional structural MRIs and/or pathology images, unlike our study. There is a large variability in the reported accuracy due to small sample sizes and differences in grading classes, datasets, imaging modalities, and performance metrics. Based on the results obtained in this study, we summarize the limitations of our work below. First, DP images and molecular alteration status are not available for the BRATS-2017 dataset, which does not allow direct comparisons within the same patient cohort. Second, our smaller sample size for DP grading and molecular information, and the unbalanced data in the BRATS 2017 dataset, may have compromised sensitivity, specificity, and consistency in classifier performance. Limited samples in the dataset may have overfitted with nonlinear classifier models, which are usually known to be superior to linear models. Hence, a linear classifier model is recommended for smaller datasets, since such a model is not flexible enough to overfit the data. The availability of more patient samples with both DP and MR images would help harness the benefits of nonlinear models more efficiently, as is seen with the BRATS dataset. However, the AUC result reported above is a reliable metric that takes both sensitivity and specificity into account and reports an unbiased result when compared to a simple accuracy metric.

Dynamic Spatial-Temporal Graph Convolutional Neural Networks for Traffic Forecasting

Fusing the spatial-temporal information with the dynamic Laplacian matrix, our model is shown to be more fault tolerant, with on average 10%–25% accuracy improvement compared with two state-of-the-art models based on graph CNNs. Even when the fault ratio reaches 0.9, DGCNN still has a strong forecast capability. With the same amount of noise contamination, other models' performance drops dramatically without exception. Comparing the results over three PeMS datasets, the performance gain of our model becomes larger with the increase of road network scale. Our DGCNN model can detect the changes of spatial dependencies hidden in "contaminated" traffic samples and adjust the receptive field of graph convolution operations. The right of Figure 3 shows the learning curves of three models with roughly the same number of parameters. With the increase of training epochs, DGCNN achieves the lowest validation error compared with GCNN and STGCN, which shows its training effectiveness. The intuition is that the dynamic Laplacian matrix estimator gives the model the ability and flexibility to capture the influence of various factors in the road network.
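The dynamic Laplacian estimator itself is learned, but it builds on the standard combinatorial graph Laplacian L = D − A, which can be computed directly from a road-network adjacency matrix; the example graphs in the test are assumptions.

```python
def laplacian(A):
    """Combinatorial graph Laplacian L = D - A for an adjacency matrix A
    (D is the diagonal degree matrix). Rows of L sum to zero."""
    n = len(A)
    return [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(n)]
            for i in range(n)]
```

Graph convolutions are defined through (a normalized variant of) this matrix; the "dynamic" part of DGCNN replaces the fixed L with one estimated per time window from the traffic samples.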
