In this paper, we propose and evaluate a deep neural network architecture inspired by DroNet for short-term path planning, which predicts a sequence of steering angles directly from an image obtained from a forward-facing camera, hence an end-to-end model. The main difference between  and our work is the statement of the steering angle prediction problem. In fact, as mentioned in , it can be transformed from a regression problem over continuous values into a classification problem in which the steering angle range is tessellated into discrete spans 0.01 radians wide. This choice of span width is justified by the jitter of the steering angle applied by a human driver on a straight road. Moreover, the computed steering sequence is mapped to a non-parameterized path and, once the path is output, any motion planning algorithm can be applied to timestamp the path.
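The regression-to-classification transformation can be sketched as follows. This is a minimal illustration, assuming a steering range of ±1.0 rad (the actual range is not stated in the text); only the 0.01 rad span width comes from the paper.

```python
import numpy as np

SPAN = 0.01                        # bin width from the text: 0.01 rad
ANGLE_MIN, ANGLE_MAX = -1.0, 1.0   # assumed steering range in radians

def angle_to_class(angle_rad):
    """Map a continuous steering angle to a discrete class index."""
    angle = np.clip(angle_rad, ANGLE_MIN, ANGLE_MAX)
    return int(round((angle - ANGLE_MIN) / SPAN))

def class_to_angle(idx):
    """Map a class index back to its bin-centre steering angle."""
    return ANGLE_MIN + idx * SPAN

n_classes = int((ANGLE_MAX - ANGLE_MIN) / SPAN) + 1  # 201 bins for this range
```

A network then outputs a softmax over `n_classes` bins per step in the predicted sequence, instead of a single continuous angle.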
While DNN acoustic models have been successfully applied to noisy speech recognition tasks with both configurations, little has been revealed about which element of the DNN acoustic model is susceptible to environmental distortion and which robustness techniques still matter in DNN-based systems. The DNN-HMM hybrid approach was first applied to a noisy ASR task by Seltzer et al. (2013). They achieved the best published result on the Aurora 4 data set using a multi-condition hybrid acoustic model trained with the dropout technique. They also showed that performing speech enhancement on both training and test sets with the method described in Yu et al. (2008) degraded the recognition performance. Geiger et al. (2014) showed that a non-negative matrix factorisation-based enhancement method improved the recognition performance of a heterogeneous acoustic model consisting of GMM-HMMs and a long short-term memory network in the CHiME2 medium-vocabulary task. Li and Sim (2013) attempted to exploit a vector Taylor series (VTS) adaptation to improve the noisy digit recognition performance of a hybrid acoustic model. A few papers presented at ICASSP 2014 proposed front-end processing schemes using a clean/noisy stereo corpus (Narayanan and Wang, 2014; Li and Sim, 2014; Weninger et al., 2014). On the other hand, a large body of work applied MLP tandem acoustic models, with both shallow and deep configurations, to tasks related to environmental robustness such as meeting and lecture transcription based on distant microphones (Hain et al., 2012; Stolcke, 2011; Chang et al., 2013). These previous efforts showed that, while DNN acoustic models greatly outperform conventional GMM-based models in acoustically adverse environments, they still suffer from acoustic degradation.
However, since the previous work made little use of existing robustness techniques developed for GMM-based models, the performance gain that can be obtained from the existing techniques when DNNs are being used is unknown. Furthermore,
Different from traditional identification of power components , the images from live working robots have complex backgrounds, high part density, and high timeliness requirements, and contain many interference objects. In this sense, traditional power component identification cannot be applied well to the patrol images from live working robots, because such methods mainly use manually designed features and segmentation algorithms: classical features include SIFT (scale-invariant feature transform) , edge detectors , and HOG (histogram of oriented gradients) , while the segmentation algorithms are mainly based on peripheral contour skeletons  and adaptive thresholds . However, applying these methods to automatic detection is not practical due to the following drawbacks: (1) they are often designed around specific categories, so their accuracy is lower and their scalability is weaker; and (2) these methods always have a loose structure and lack comprehensive utilization of low-level features to achieve the goal of optimal global identification.
Previously, magnitude spectrograms have been interpreted as probability density functions of random variables with varying characteristics. This has led to the use of divergence-based cost functions for single-channel source separation. Examples include mean squared error (MSE) , KL divergence [26, 39], and IS divergence . However, these are often used as proxies for the performance metrics we ultimately measure, which are almost always waveform based. For end-to-end architectures, statistical metrics like mean squared error [40, 41] and L1 loss [42, 43] have been tried. Cross-entropy and its modified versions have also been tried . In source separation, however, we most often evaluate the performance of source separation algorithms using the BSS Eval metrics, viz. SDR, SIR, and SAR , and the Short-Time Objective Intelligibility (STOI) metric. Thus, a logical step is to interpret these metrics as cost functions themselves. In these cost functions, we denote the network output waveform as x. This output should ideally match the source waveform y and suppress the interference z. Thus, y and z are fixed constants with respect to the optimization.
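As a concrete illustration of this idea, here is a minimal NumPy sketch of a negative-SDR cost over the output waveform x and the fixed source y. Note this is a simplified ratio form; the full BSS Eval SDR additionally projects x onto allowed distortions of y, which is omitted here.

```python
import numpy as np

def neg_sdr_loss(x, y, eps=1e-8):
    """Simplified negative SDR: y is the target source, x - y the distortion.
    Lower (more negative) values mean better reconstruction."""
    num = np.sum(y ** 2)
    den = np.sum((x - y) ** 2) + eps
    return -10.0 * np.log10(num / den + eps)

# Toy check: a lightly corrupted output scores a much lower (better) loss
# than a heavily corrupted one.
y = np.sin(np.linspace(0, 10, 1000))
light = neg_sdr_loss(y + 0.01 * np.random.randn(1000), y)
heavy = neg_sdr_loss(y + 0.50 * np.random.randn(1000), y)
```

Since the loss is differentiable in x, the same expression can be ported to an autodiff framework and minimized directly, which is the step the text motivates.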
In this paper, we address the context reconstruction problem, which includes referring expression detection and coreference resolution in the dialogue domain. We present our part-of-speech (POS) tagging based deep neural network, including both step-by-step models and an end-to-end model, for the detection and resolution of coreference and ellipsis. Our coreference and ellipsis detection model reasons over the input sequence to detect the positions of coreference and ellipsis in the sentence. Our resolution model ranks the candidate entities given the input sentence in which coreference and/or ellipsis are annotated. We also present an end-to-end detection-resolution network which consumes only the non-annotated input sentence and candidate entities. Our models utilize both syntactic and semantic information by employing word embeddings, convolution layers, and long short-term memory (LSTM) units. Due to the lack of large well-annotated data, we also propose a novel approach to construct annotated data in the dialogue domain.
We use the Long Short-Term Memory (LSTM) architecture  for sequence characterization. LSTMs implement a recurrent neural network in which the activations of neurons are learned with respect not just to their current inputs, but to previous inputs in a sequence. Unlike regular recurrent networks, in which the strength of learning decreases over time (a symptom of the vanishing gradients problem ), LSTMs employ a forget gate with a linear activation function, allowing them to retain activations for arbitrary durations. This makes them effective at learning complex relationships over long sequences , an especially important capability for modeling program code, as dependencies in sequences frequently occur over long ranges (for example, a variable may be declared as an argument to a function and used throughout).
We use a two-layer LSTM network. The network receives a sequence of embedding vectors and returns a single output vector characterizing the entire sequence.
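The two-layer sequence encoder described above can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation; the embedding and hidden dimensions are placeholders.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Two-layer LSTM that consumes a sequence of embedding vectors and
    returns a single vector characterizing the whole sequence (here, the
    final hidden state of the top layer)."""
    def __init__(self, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            num_layers=2, batch_first=True)

    def forward(self, x):           # x: (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)  # h_n: (num_layers, batch, hidden_dim)
        return h_n[-1]              # (batch, hidden_dim)

enc = SeqEncoder()
out = enc(torch.randn(4, 20, 64))  # 4 sequences of 20 embedding vectors
```

Taking the top layer's final hidden state is one common way to collapse a sequence to a single vector; mean- or max-pooling over time are alternatives.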
Most of the empirical work to date has examined the impact of director purchases in conventional companies, on the assumption that such purchases help alleviate information asymmetry. The positive price reactions recorded by previous studies are assumed to result from directors knowing more about the prospects of the company than outside shareholders. This study examines the impact of director purchases in closed-end funds, the majority of which are simple and transparent entities where information asymmetry is not an issue. Despite this, director purchases are accompanied by significant positive price returns. This price reaction is attributed to the activities of retail, attention-driven investors. The results provide support for the theory of Barber and Odean (2008) and Barber et al. (2009). In line with their theory, director sales are not associated with abnormal price returns around the date of the transaction. Most of the empirical results are similar to those for director purchases in conventional companies: the magnitude of the price reaction is positively related to the size of the purchase; purchases in smaller funds are associated with higher price returns; and returns are more positive for purchases in those funds holding assets that are likely to have higher informational asymmetries. In most cases the price impact begins to dissipate 15 days after the purchase, a result broadly similar to those reported by Barber et al. (2009).
While different research has addressed different subsets of the AM problem (see below), the ultimate goal is to solve all of them, starting from unannotated plain text. Two recent approaches to this end-to-end learning scenario are Persing and Ng (2016) and Stab and Gurevych (2017). Both solve the end-to-end task by first training independent models for each subtask and then defining an integer linear programming (ILP) model that encodes global constraints, such as that each premise has a parent. Besides their pipeline architecture, the approaches also have in common that they heavily rely on hand-crafted features.
We roughly follow the method presented in Xu et al. (2017) with extensions. Under the TAG analysis, VP coordination involves a VP-recursive auxiliary tree headed by the coordinator that includes a VP substitution node (for the second conjunct) with label 1. In order to allow the first clause's subject argument (as well as modal verbs and negations) to be shared by the second verb, we add the relevant relations to the second verb. In addition, we analyze sentential coordination cases. Sentence coordination in our TAG grammar usually happens between two complete sentences where no modifiers or arguments are shared, and therefore it can be analyzed via substituting a sentence into the coordinator with label 1. However, when sentential coordination happens between two relative clause modifiers, our TAG grammar analyzes the second clause as a complete sentence, meaning that we need to recover the extracted argument by consulting the property of the first clause. Furthermore, the deep syntactic role of the extracted argument can be different in the two relative clauses. For instance, in the sentence, "... the same stump which had impaled the car of many a guest in the past thirty years and which he refused to have removed," we need to recover an arc from removed to stump with label 1, whereas the arc from impaled to stump has label 0. To resolve this issue, when there is coordination of two relative clause modifiers, we add an edge from the head of the second clause to the modified noun with the same label as that under which the relative pronoun is attached to the head.
Slot filling can be formulated as a sequence labelling task [2,3]. Joint training of intent detection and slot filling models has been investigated [5,6]. The slot-gated SLU model, which incorporates attention and gating mechanisms into the language understanding (LU) network, was proposed by . Moreover, the conditional random field (CRF), introduced in , provides a framework for building probabilistic models to segment and label sequences and has been applied to various natural language processing (NLP) tasks (e.g., part-of-speech tagging, sentence classification, grapheme-to-phoneme conversion). Jointly modelling intent labels and slot sequences, thus exploiting their dependencies through the combination of convolutional neural networks (CNN) and the triangular CRF model (TriCRF), can be beneficial . With this approach, the intent detection error on the Airline Travel Information System (ATIS) dataset was 5.91%, and the slot-filling F1-score was 95.42%. Bidirectional Gated Recurrent Units (GRUs) can also be used to learn sequence representations shared by the intent detection and slot filling tasks . This approach employs a max-pooling layer to capture global features of a sentence for intent detection.
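The shared BiGRU-with-max-pooling idea can be sketched as follows. This is an illustrative PyTorch model, not the cited system; vocabulary size, dimensions, and label counts are placeholders.

```python
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    """Joint intent detection and slot filling over a shared bidirectional
    GRU: per-token states feed a slot classifier, and a max-pooled sentence
    vector feeds the intent classifier."""
    def __init__(self, vocab=1000, emb=50, hid=64, n_slots=10, n_intents=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hid, bidirectional=True, batch_first=True)
        self.slot_head = nn.Linear(2 * hid, n_slots)
        self.intent_head = nn.Linear(2 * hid, n_intents)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.gru(self.emb(tokens))      # (batch, seq_len, 2*hid)
        slots = self.slot_head(h)              # per-token slot logits
        sent = h.max(dim=1).values             # max-pooling over time
        intent = self.intent_head(sent)        # sentence-level intent logits
        return slots, intent

model = JointSLU()
slots, intent = model(torch.randint(0, 1000, (2, 12)))
```

Training would sum a per-token cross-entropy loss on `slots` and a per-sentence cross-entropy loss on `intent`, which is how the joint dependency is exploited.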
Given the current trends in designing a "clean-slate" future Internet, our findings motivate the need for a secure next-generation Internet. We argue that any next-generation Internet design must provide robust functionality to support secure network measurements . Since network measurements are gaining paramount importance in monitoring the performance of the Internet, secure infrastructural support for network measurements becomes a necessity. We can identify a number of reasons why it might be beneficial to embed parts of the measurement functions "inside the network": 1) by performing measurements at an intermediate point, end-users can avoid the cost and overhead of generating unwelcome traffic across the network; 2) by pushing functionality from end-hosts to dedicated and trusted network components, several security threats can be eliminated. For instance, edge routers could securely timestamp incoming and outgoing probes and identify whether queuing has occurred . This would facilitate the detection of delay attacks. Performance "awareness" is another desirable design property for a next-generation Internet. Dedicated network components could in the future construct and store bandwidth and latency maps of Internet hosts by monitoring incoming and outgoing traffic.
Miwa and Bansal (2016) were among the first to use neural networks for end-to-end relation extraction, showing highly promising results. In particular, they used bidirectional LSTM (Graves et al., 2013) to learn hidden word representations under a sentential context, and further leveraged tree-structured LSTM (Tai et al., 2015) to encode syntactic information, given the output of a parser. The resulting representations are then used for making local decisions for entity and relation extraction incrementally, leading to much improved results compared with the best statistical model (Li and Ji, 2014). This demonstrates the strength of neural representation learning for end-to-end relation extraction.
Task graphs in Wield simplify the factorisation of tasks into different sub-tasks, which may train and act jointly or at different time-scales. Task objects primarily encapsulate distinct RLgraph agents or any other optimisation implementing gym-style interfaces. Hierarchical tasks often require transforming the output of one task before inputting it to a subsequent task, e.g. by enriching it with additional environment information or preparing a specific input format. Nodes in a task graph hence further encapsulate pre- and post-processing for each sub-task. Edges in the graph are created implicitly by registering one task as a sub-task of another task in the same task graph. When performing inference, task outputs are routed through the task graph based on user-defined directed edges between tasks, and the results of all tasks during execution are returned. In Chapter 6, I illustrate how task graphs can be used in practice to design hierarchical tasks.
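The structure described above can be sketched as a small Python illustration. The class and method names here are illustrative, not Wield's actual API; agents are stood in for by plain callables.

```python
class Task:
    """A node in a task graph: wraps an agent plus optional pre- and
    post-processing; edges arise implicitly from sub-task registration."""
    def __init__(self, agent, pre=None, post=None):
        self.agent = agent
        self.pre = pre or (lambda x: x)
        self.post = post or (lambda x: x)
        self.subtasks = []

    def add_subtask(self, task):
        self.subtasks.append(task)      # creates an implicit directed edge

    def act(self, obs):
        """Route this task's output to its sub-tasks; collect all results."""
        out = self.post(self.agent(self.pre(obs)))
        results = [out]
        for sub in self.subtasks:
            results.extend(sub.act(out))
        return results

root = Task(agent=lambda x: x * 2)
root.add_subtask(Task(agent=lambda x: x + 1))
all_results = root.act(3)   # results of every task during execution
```

Here the root task's output is fed to its sub-task, and the results of both tasks are returned, mirroring the routing behaviour the text describes.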
This paper provides three contributions to the study of autonomic intrusion detection systems. First, we evaluate the feasibility of an unsupervised/semi-supervised approach for web attack detection based on the Robust Software Modeling Tool (RSMT), which autonomically monitors and characterizes the runtime behavior of web applications. Second, we describe how RSMT trains a stacked denoising autoencoder to encode and reconstruct the call graph for end-to-end deep learning, where a low-dimensional representation of the raw features with unlabeled request data is used to recognize anomalies by computing the reconstruction error of the request data. Third, we analyze the results of empirically testing RSMT on both synthetic datasets and production applications with intentional vulnerabilities. Our results show that the proposed approach can efficiently and accurately detect attacks, including SQL injection, cross-site scripting, and deserialization, with minimal domain knowledge and little labeled training data.
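The reconstruction-error detection rule can be illustrated with a toy NumPy sketch. A rank-5 linear projection stands in for the stacked denoising autoencoder here, and the synthetic "call-graph features" are assumptions; only the principle (threshold the reconstruction error learned from normal traffic) comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy normal requests: feature vectors lying in a 5-dim subspace of R^20.
normal = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 20))

# Low-dimensional encoder/decoder learned from normal data (SVD stands in
# for autoencoder training).
_, _, vt = np.linalg.svd(normal, full_matrices=False)
basis = vt[:5]

def recon_error(x):
    """L2 distance between a request and its low-dim reconstruction."""
    return np.linalg.norm(x - (x @ basis.T) @ basis, axis=-1)

# Threshold set from normal traffic; requests above it are flagged.
threshold = np.percentile(recon_error(normal), 99)
attack = rng.normal(size=20) * 5          # off-subspace request
is_attack = recon_error(attack) > threshold
```

Because anomalous requests do not fit the structure learned from normal traffic, their reconstruction error is large, so no labeled attack data is needed to set the rule up.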
Recent advances in deep learning have inspired many applications of neural models to dialogue systems. Wen et al. (2017) and Bordes et al. (2017) introduced network-based end-to-end trainable task-oriented dialogue systems, which treated dialogue system learning as the problem of learning a mapping from dialogue histories to system responses, and applied an encoder-decoder model to train the whole system. However, such a system is trained in a supervised fashion: not only does it require a lot of training data, but it may also fail to find a good policy robustly due to the lack of exploration of dialogue control in the training data. Zhao and Eskenazi (2016) first presented an end-to-end reinforcement learning (RL) approach to dialogue state tracking and policy learning in the DM. This approach is shown to be promising when applied to the task-oriented dialogue problem of guessing the famous person a user thinks of. In the conversation, the agent asks the user a series of Yes/No questions to find the correct answer. However, this simplified task may not generalize to practical problems due to the following: 1. Inflexible question types — asking request questions is more natural and efficient than Yes/No questions. For example, it is more natural and efficient for the system to ask "Where are you located?" instead of "Are you located in Palo Alto?" when there are a large number of possible values for the location slot.
Because IBM employees frequently download software in the process of developing, engineering, testing and marketing products, IBM must ensure that its employees utilize the most efficient methods for downloading large files. Many employees use the IBM Standard Software Installer to download and install software using the file transfer protocol and dedicated servers throughout the IBM worldwide intranet. However, download times for software installations, demos, videos and other types of large files take up hundreds of thousands of employee hours each year. To reduce this time, improve productivity and use existing resources more efficiently, IBM IntraGrid, a testbed for Grid services and solutions, is developing solutions to download large digital files faster than before. The Netmapper system developed as part of this thesis can be used to monitor the network, and hence manage network delays, for IBM IntraGrid. Netmapper builds a map of the IP network from the perspective of a set of hosts using the network, and annotates each link with available bandwidth using end-to-end techniques. The idea is that a smart scheduler could use this annotated network map to maximize its performance and minimize network load. We are planning to integrate Netmapper into server selection for the download Grid.
Closed-end funds are publicly quoted companies that typically invest in the equity of other companies; these other companies are normally also quoted. The market value of the closed-end fund's assets (the Net Asset Value, NAV) is, in the UK, publicly available on a daily basis. In most cases, the asset management function is delegated to a fund management firm. The fund board comprises a majority of independent members and may include a representative of the fund management firm. The board determines the investment mandate and receives and reviews regular reports on the performance of the fund from the manager. Details of the mandate and the individual investments held by a fund are normally detailed in the annual financial statements, while an abbreviated list is usually provided in the half-year statement.
The Sunjammer mission is led by industry manufacturer L'Garde Inc. of Tustin, California, and includes participation by the National Oceanic and Atmospheric Administration (NOAA). Its aim is to demonstrate the propellantless propulsion potential of solar sails and to boost the Technology Readiness Level (TRL) of the L'Garde solar sail from ~6 to ~9. It will build on successful ground-deployment experiments led by L'Garde in 2004-2005 and the successful in-space deployment of the NanoSail-D2 mission in 2011 [5, 6]. The Sunjammer solar sail was designed to be 38 x 38 m² in size. In prior mass estimates, the Sunjammer sailcraft was about 45 kg, with a 135 kg disposable support module attached. The final configuration of the spacecraft bus is still in development; for the purposes of this analysis, the original design is used. It will be launched as a secondary payload and boosted to L1
IP-over-WDM networks consider the correlation between the physical and logical topologies. Work on minimizing the impact of fiber and logical link failures  showed that topology mapping is strongly affected by the reliability of the IP layer. Moreover, our approach is based on a cross-layer design. Those approaches aim at finding reliable backup paths, while our objective is to minimize routing disruption. Our paper also considers topology mapping, but differs in two aspects. First, the CPF model considers both independent and correlated logical link failures. Second, multiple backup paths