Molecular Computing Viability for Solving Computational Problems (Future and Challenges)

In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips. The computer could perform 330 trillion operations per second, more than 100,000 times the speed of the fastest PC [27]. On April 28, 2004, Ehud Shapiro, Yaakov Benenson, Binyamin Gil, Uri Ben-Dor, and Rivka Adar at the Weizmann Institute announced in the journal Nature that they had constructed a DNA computer. It was coupled with an input and output module and is capable of diagnosing cancerous activity within a cell and then releasing an anti-cancer drug upon diagnosis [28]. Donald Beaver in [2] designed a molecular Turing machine based on interactions among DNA molecules. Unlike the non-universal methods described in [1], his methods specify a universal computing device capable of maintaining a state and memory and performing an indefinite number of transitions. Each computing device consists of a single DNA molecule. This means that many different molecules, encoding many different machines in arbitrarily different configurations, can be located in the same mixture and induced to undergo state transitions simultaneously. Moreover, the chemical mechanisms for state transitions permit parallel, heterogeneous, synchronized computation.
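In behaviour, the Benenson-Shapiro device acts like a two-state finite automaton whose transitions are carried out by enzymatic cleavage of the DNA input. A minimal software analogue is sketched below; the transition table is assumed for illustration, not taken from the cited papers.

```python
# Illustrative sketch only: a two-state finite automaton, the abstract
# machine the DNA device realizes chemically. The table below is assumed.
TRANSITIONS = {
    ("S0", "a"): "S1",
    ("S0", "b"): "S0",
    ("S1", "a"): "S0",
    ("S1", "b"): "S1",
}

def run_automaton(symbols, state="S0"):
    """Process the input symbol by symbol, as each enzymatic cleavage
    step exposes the next symbol and selects the next state."""
    for s in symbols:
        state = TRANSITIONS[(state, s)]
    return state

# Many such 'machines' coexist in one mixture and step simultaneously;
# in software terms, the same function mapped over many inputs.
if __name__ == "__main__":
    inputs = ["abba", "aab", "bbb"]
    print([run_automaton(seq) for seq in inputs])
```

Because each machine is a separate molecule, trillions of such automata can run on different inputs in the same test tube, which is the source of the massive parallelism cited above.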

Solving computational problems in the theory of word-representable graphs

It is also interesting to identify minimal non-word-representable graphs of each size, i.e. graphs containing no non-word-representable strict induced subgraphs. To do this, we stored all non-word-representable graphs of each size. After computing with geng all possible graphs with one more vertex, we eliminate graphs containing one of the stored graphs as an induced subgraph. We did this with a simple constraint model which tries to find a mapping from the vertices of the induced subgraph to the vertices of the larger graph, and if successful discards the larger graph from consideration. This enabled us to count all minimal non-word-representable graphs of each size up to 9, as shown in Table 2. The filtering process we used was too inefficient to complete the cases n ≥ 10.
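A brute-force version of that induced-subgraph check can be written directly. This is an illustrative sketch, not the authors' constraint model; real instances need constraint solvers precisely because this enumeration explodes, consistent with the n ≥ 10 remark.

```python
from itertools import combinations, permutations

# Graphs are given as edge lists over vertices 0..n-1 (assumed encoding).
def has_induced_subgraph(big_edges, n_big, small_edges, n_small):
    """True if some subset of the big graph's vertices, under some
    bijection, induces exactly the small graph."""
    big = {frozenset(e) for e in big_edges}
    small = {frozenset(e) for e in small_edges}
    for subset in combinations(range(n_big), n_small):
        for mapping in permutations(subset):
            # mapping[i] is the big-graph vertex assigned to small vertex i;
            # an induced copy must match edges AND non-edges.
            if all((frozenset((mapping[u], mapping[v])) in big)
                   == (frozenset((u, v)) in small)
                   for u, v in combinations(range(n_small), 2)):
                return True
    return False

# A candidate graph survives the filter only if it contains none of the
# stored non-word-representable graphs as an induced subgraph.
def is_minimal_candidate(big_edges, n_big, stored):
    return not any(has_induced_subgraph(big_edges, n_big, e, n)
                   for e, n in stored)
```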

High performance computing and communication models for solving the complex interdisciplinary problems on DPCS

The paper presents advanced high performance computing (HPC) and parallel computing (PC) methodologies for solving large, complex problems that integrate several different research areas. About eight interdisciplinary problems are accurately solved on multiple computers communicating over a local area network. The mathematical modelling and large sparse simulation of this interdisciplinary effort involve the areas of science, engineering, biomedicine, nanotechnology, software engineering, agriculture, image processing and urban planning. The specific PC software methodologies under consideration include PVM, MPI, LUNA, MDC, OpenMP, CUDA and LINDA integrated with COMSOL and C++/C. Because there are different communication models of parallel programming, some definitions of parallel processing, distributed processing and memory types are explained for understanding the main contribution of this paper. The matching between a PC methodology and a large sparse application depends on the domain of solution, the dimension of the targeted area, the computational and communication pattern, the architecture of the distributed parallel computing system (DPCS), the structure of the computational complexity and the communication cost. The originality of this paper lies in obtaining a complex numerical model dealing with a large-scale partial differential equation (PDE), discretization by finite difference (FDM) or finite element (FEM) methods, numerical simulation, high-performance simulation and performance measurement. The simulation of the PDE is performed by sequential and parallel algorithms to visualize the complex model in high-resolution quality. In the context of a mathematical model, various independent and dependent parameters represent the complex, real phenomena of the interdisciplinary application. As a model executes, these parameters can be manipulated and changed; as a result, some chemical or mechanical properties can be predicted based on the observation of parameter changes. The methodologies of parallel programs build on the client-server, master-slave and fragmented models. The HPC communication models for solving the interdisciplinary problems above are analysed using the flow of the algorithm, numerical analysis and a comparison of parallel performance evaluations. In conclusion, the integration of HPC, communication models, PC software, performance and numerical analysis is an important approach to fulfil the matching requirement and optimize the solution of complex interdisciplinary problems.
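As a concrete miniature of the FDM pipeline mentioned above, here is a sequential sketch (assumed parameters, purely illustrative, not the paper's model) of an explicit finite-difference solver for the 1D heat equation; a parallel version would partition the grid across processes and exchange boundary values via MPI.

```python
import numpy as np

# Explicit FDM for the 1D heat equation u_t = alpha * u_xx (assumed setup).
alpha, L, T = 1.0, 1.0, 0.05
nx, nt = 51, 2000
dx = L / (nx - 1)
dt = T / nt
assert alpha * dt / dx**2 <= 0.5, "stability condition for the explicit scheme"

u = np.zeros(nx)
u[nx // 2] = 1.0  # initial heat spike in the middle of the rod

for _ in range(nt):
    # Interior update; boundaries held at zero (Dirichlet conditions).
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max())  # peak temperature after diffusing for time T
```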

METHODS INVOLVED IN SOLVING PROBLEMS OF FLUID MECHANICS USING COMPUTATIONAL FLUID DYNAMICS - A STUDY

The vortex method is a grid-free technique for the simulation of turbulent flows. It uses vortices as the computational elements, mimicking the physical structures in turbulence. Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). A breakthrough came in the late 1980s with the development of the fast multipole method (FMM), an algorithm by V. Rokhlin (Yale) and L. Greengard (Courant Institute). This breakthrough paved the way to practical computation of the velocities from the vortex elements and is the basis of successful algorithms. They are especially well-suited to simulating filamentary motion, such as wisps of smoke, in real-time simulations such as video games, because of the fine detail achieved using minimal computation. [15]
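The N-body computation in question is easy to state: every vortex induces a velocity at every other vortex, giving a direct O(N²) summation that the FMM reduces to roughly O(N). A minimal sketch for 2D point vortices follows, with assumed illustrative values, not taken from the cited work.

```python
import numpy as np

def vortex_velocities(pos, gamma):
    """Direct O(N^2) evaluation of velocities induced by 2D point
    vortices. pos: (N, 2) positions; gamma: (N,) circulations."""
    n = len(pos)
    vel = np.zeros_like(pos)
    for i in range(n):
        d = pos[i] - pos                    # displacement from every vortex j
        r2 = np.einsum("ij,ij->i", d, d)    # squared distances
        r2[i] = np.inf                      # no self-induced velocity
        # 2D Biot-Savart kernel: induced velocity is perpendicular
        # to the displacement, scaled by circulation / (2*pi*r^2).
        vel[i, 0] = np.sum(-gamma * d[:, 1] / (2 * np.pi * r2))
        vel[i, 1] = np.sum(gamma * d[:, 0] / (2 * np.pi * r2))
    return vel

rng = np.random.default_rng(0)
pos = rng.random((100, 2))
gamma = rng.normal(size=100)
print(vortex_velocities(pos, gamma)[:3])
```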

Numeric treatment of nonlinear second order multi-point boundary value problems using ANN, GAs and sequential quadratic programming technique

In this paper, computational intelligence techniques are presented for solving multi-point nonlinear boundary value problems based on artificial neural networks, an evolutionary computing approach, and an active-set technique. The neural network provides a convenient method for obtaining a useful model based on the unsupervised error of the differential equations. The motivation for this work comes from the aim of introducing a reliable framework that combines the powerful features of ANNs optimized with soft computing frameworks to cope with such challenging systems. The applicability and reliability of such methods have been monitored thoroughly for various boundary value problems arising in science, engineering and biotechnology. Comprehensive numerical experiments have been performed to validate the accuracy, convergence, and robustness of the designed scheme. Comparative studies have also been made with available standard solutions to analyze the correctness of the proposed scheme.
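The core idea of training a network on the unsupervised residual of a differential equation can be shown on a simple two-point problem. The following sketch is an assumed simplification (one tiny hidden layer, finite-difference derivatives, and SciPy's SLSQP standing in for the paper's full GA + SQP pipeline):

```python
import numpy as np
from scipy.optimize import minimize

# Solve u'' + u = 0, u(0)=0, u(1)=sin(1) (exact solution: sin(x))
# by minimizing the unsupervised ODE residual of a neural trial solution.
x = np.linspace(0.0, 1.0, 41)
h = x[1] - x[0]

def trial(w, x):
    # u(x) = x*sin(1) + x*(1-x)*N(x; w): the first term enforces the
    # boundary conditions; N is a one-hidden-layer tanh network.
    a, b, c = w[:5], w[5:10], w[10:15]
    net = np.tanh(np.outer(x, a) + b) @ c
    return x * np.sin(1.0) + x * (1.0 - x) * net

def residual(w):
    u = trial(w, x)
    upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2   # finite-difference u''
    return np.sum((upp + u[1:-1]) ** 2)           # squared ODE residual

w0 = 0.1 * np.random.default_rng(1).normal(size=15)
res = minimize(residual, w0, method="SLSQP")      # SQP-style optimizer
print("residual:", res.fun)
print("max error vs sin(x):", np.abs(trial(res.x, x) - np.sin(x)).max())
```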

Computational Thinking Concepts for Grade School

The popular term encompassing the use of digital computing as an aid to thinking and problem solving is “computational thinking” (http://www.cs.cmu.edu/~CompThink/). Today, technology appears at the earliest level of public education in the U.S. This paper points out a way in which this new technology can be employed as a ubiquitous partner to learning science and mathematics. An early introduction of “computational thinking” is required, as such thinking is critical in today’s world. Jeannette Wing (2006) states that computational thinking is a fundamental skill for everyone, not just for computer scientists. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Computational thinking is conceptualizing, i.e., thinking at multiple levels of abstraction, and it complements and combines mathematical and engineering thinking (Wing, 2006).

Using Cloud Computing for Solving Constraint Programming Problems

There are different ways of improving the resolution of a problem. We can change the model or improve the internal algorithms. We can also use a more efficient processor. In this paper, we are interested in accelerating the resolution by using more processors. More precisely, we would like to know the number of cores we should use to improve the resolution time by a factor of p. This is not an easy task because usually increasing the number of cores by a factor of k does not mean that we increase the computational power by a factor of k. There are several reasons: the parallelization must scale up and the communication must be reduced.
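Amdahl's law gives the classic answer to this question; whether the paper uses exactly this model is not stated, so treat the following as a standard illustrative sketch. With serial fraction f, the speedup on k cores is S(k) = 1/(f + (1-f)/k), so the smallest k achieving a target speedup p (possible only when p < 1/f) is k = ceil(p(1-f)/(1-pf)).

```python
import math

def speedup(k, f):
    """Amdahl's law: speedup on k cores with serial fraction f."""
    return 1.0 / (f + (1.0 - f) / k)

def cores_for_speedup(p, f):
    """Smallest k with S(k) >= p, or None if p exceeds the 1/f limit."""
    if p >= 1.0 / f:
        return None  # unreachable: the serial part alone caps the speedup
    return math.ceil((1.0 - f) * p / (1.0 - p * f))

# Example: with 5% serial work, a 10x speedup needs far more than 10 cores.
print(cores_for_speedup(10, 0.05))   # 19
print(speedup(19, 0.05))             # 10.0
```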

Solving Hard Graph Problems with Combinatorial Computing and Optimization

Many problems arising in graph theory are difficult by nature, and finding solutions to large or complex instances of them often requires the use of computers. As some such problems are NP-hard or lie even higher in the polynomial hierarchy, it is unlikely that efficient, exact algorithms will solve them. Therefore, alternative computational methods are used. Combinatorial computing is a branch of mathematics and computer science concerned with these methods, where algorithms are developed to generate and search through combinatorial structures in order to determine certain properties of them. In this thesis, we explore a number of such techniques, in the hope of solving specific problem instances of interest.

DNA Computation Based Approach for Enhanced Computing Power

Abstract. DNA computing is a discipline that aims at harnessing individual molecules at the nanoscopic level for computational purposes. Computation with DNA molecules possesses an inherent interest for researchers in computers and biology. Given its vast parallelism and high-density storage, DNA computing approaches are employed to solve many problems. DNA has also been explored as an excellent material and a fundamental building block for building large-scale nanostructures, constructing individual nanomechanical devices, and performing computations. Molecular-scale autonomous programmable computers have been demonstrated that allow both input and output information to be in molecular form. This paper presents a review of recent advancements in DNA computing and presents major achievements and challenges for researchers in the future.

The Computational and Educational Viability of Deploying Intelligent Tutoring Systems

The standard teaching strategy of offering drill exercises in problem solving is one that takes a considerable amount of effort on the part of the instructor, and preparing online materials is often very time-consuming. Martin has added an automated problem generator to the SQL tutor. The new component takes advantage of the Constraint Based Modeling of the student (see below) that is used in all tutors from the University of Canterbury. The list of violated constraints (concepts not understood by the student) is used as the basis for generating new problems. Martin found that the use of the generated problem set improved students’ learning speed by a factor of two. This is attributed to two possible causes, or a combination of both: either more practice as a result of more exercises being available, or better selection of the exercises generated appropriately for each student (Martin and Mitrovic 2002).
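The selection logic is straightforward to sketch: pick the exercise whose target constraints best cover the student's recent violations. The following is a hypothetical illustration; the constraint names and data structures are invented, not taken from Martin's tutor.

```python
# Hypothetical sketch of constraint-based exercise selection: choose the
# exercise covering the most constraints the student recently violated.

student_violations = {"join-syntax", "group-by", "null-handling"}

exercise_bank = {
    "ex1": {"join-syntax", "aliasing"},
    "ex2": {"group-by", "null-handling"},
    "ex3": {"subqueries"},
}

def next_exercise(violations, bank):
    """Pick the exercise targeting the most violated constraints."""
    return max(bank, key=lambda ex: len(bank[ex] & violations))

print(next_exercise(student_violations, exercise_bank))  # ex2
```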


Computational Shedding in Stream Computing

A stream computing application which becomes overloaded must change and adapt to the new input message rate in one form or another, such that it can process the data stream at the higher temporary input rate while continuing to produce timely and accurate application results. This can be rationalised as a resource optimisation problem in which the current CPU capacity of the application, or of a specific PE, is not enough to process the current data stream workload. As mentioned in section 1.5, we scope this adaptation to computational and CPU-based resources. Two approaches follow from this assertion [116]. First, we can increase the available CPU capacity so that it matches the new workload processing requirements, and thus increase the ingestion rate of the application to match that of the bursty data stream. Second, we can reduce the cost of processing the workload so that the existing available processing resources can increase their ingestion rate to match the new input rate of the stream [16]. Additionally, the suggested application adaptation should occur quickly, such that in the transition from one application state to another, minimal or no input or intermediate data is lost. Finally, the adaptation activities should have low overheads: in an already overloaded system, further resource consumption for non-functional purposes should be kept low, so as to prevent further CPU saturation.
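These two options can be captured in a small model. The following sketch is schematic (names and numbers are assumed) and simply makes the trade-off explicit: either capacity grows to meet the rate, or per-message cost shrinks until the existing capacity can keep up.

```python
from dataclasses import dataclass

@dataclass
class AppState:
    input_rate: float        # messages arriving per second
    cpu_capacity: float      # CPU units available per second
    cost_per_message: float  # CPU units consumed per message

    @property
    def ingestion_rate(self):
        return self.cpu_capacity / self.cost_per_message

def adapt(state, can_scale_cpu):
    if state.ingestion_rate >= state.input_rate:
        return state  # keeping up; no adaptation needed
    if can_scale_cpu:
        # Option 1: increase CPU capacity to match the workload.
        state.cpu_capacity = state.input_rate * state.cost_per_message
    else:
        # Option 2: computational shedding -- reduce the work done per
        # message so the existing capacity matches the new input rate.
        state.cost_per_message = state.cpu_capacity / state.input_rate
    return state

s = adapt(AppState(input_rate=2000, cpu_capacity=100, cost_per_message=0.1),
          can_scale_cpu=False)
print(s.cost_per_message)  # 0.05: each message must be processed more cheaply
```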

Computational Linguistics and Its Role in Mechanized or Man Machine Cognitive Problem Solving

M K Chytil C[.]


MATHEMATICS IS FOR SOLVING PROBLEMS

Around him a small group, of which Julian was a part, began in 1948 to work seriously on general analytical methods for the systematic approximation to solutions of PDEs, including wha[r]


A Reasoning Method on Knowledge Base of Computational Objects and Designing a System for Automatically Solving Plane Geometry Problems

Abstract— In artificial intelligence, there are many methods for knowledge representation. Nonetheless, these methods are not efficient for executing and reasoning on complex knowledge. Ontology, a modern approach, has been studied deeply due to its ability to represent knowledge. The Knowledge Base of Computational Objects (KBCO) model is an ontology which can be used efficiently for designing complex knowledge base systems in areas such as geometry, analysis, and physics. Besides representation, inference methods also play an important role in knowledge base systems. However, the current methods are too general to imitate the human way of thinking. In reality, when solving a problem, we often start by finding problems which, in some sense, are relevant to the current problem. In this paper, we present an extended version of the KBCO model in which sample problems are used as available knowledge, in a way that more closely imitates human knowledge. The KBCO model using sample problems can be applied to construct complex intelligent systems that simulate some knowledge domains of humans.

Solving the Challenges of Pervasive Computing

Pervasive Computing has gained prominence in diverse domains, in both local and worldwide settings. It is critical for analysts to recognize the challenges, objectives, and methods for deploying these technologies in diverse areas in order to be fully aware of their potential. Pervasive Computing would benefit the entire society and remove the limits of computing. In general, pervasive technology advancements will be expressed through an advanced environment that is mindful of their presence. Natural interaction is pervasively available by means of systems that are adaptive, sensitive and receptive to users’ needs, habits and feelings. Progressively, a significant number of the chips around us will sense their surroundings in simple but effective ways [25]. In Pervasive Computing, a number of procedures and challenges need to be addressed in order to adequately create smart spaces and accomplish miniaturization. Tremendous development and research efforts are under way towards Mark Weiser’s vision [2] of building a framework that can sense, compute and interconnect in a manner that makes human life simple, with intelligent smart objects providing support from the surroundings. In this process of emergence towards a smart environment, the actual challenges to contemplate are performance, information management, programming support, energy efficiency, trust, security and privacy of the processing devices to be designed [26]-[31]. The name alone implies pervasive systems everywhere, yet for their goal to be attained, they must fade out of focus. To do this, Pervasive Computing systems must overcome the following challenges. Security design must consider standards of time and place, as Pervasive Computing is deployed transparently in various environments [32]. Protection from unauthenticated users (security), avoidance of access by an attacker through unverified techniques (integrity), giving full availability to users (accessibility) and preventing an entity from denying previous activities (non-repudiation) are essential factors of the security model. By recognizing the kind of information exchanged, possible distortion or misuse, shortcomings and features, the security issues in the wireless network infrastructure can be represented [7].

Artificial Neural Network Based Hybrid Algorithmic Structure for Solving Linear Programming Problems

Linear programming problems are mathematical models used to represent real-life situations in the form of a linear objective function and constraints; various methods are available to solve linear programming problems. When formulating an LP model, systems analysts and researchers often include all possible constraints, although some of them may not be binding at the optimal solution. The presence of redundant constraints does not alter the optimal solution(s), but may consume extra computational effort. Redundant constraint identification methods are applied to reduce computational effort in LP problems, but the accuracy of the LP solutions goes down due to this reduction of loops and constraints. To achieve optimality in accuracy and also in computational effort, we propose a hybrid algorithm that trains the constraints and parameters before applying the formal methodology.
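For context, the standard linear-programming redundancy test (illustrative background, not the paper's hybrid ANN algorithm) checks each constraint by maximizing its left-hand side subject to all the other constraints; if the optimum still cannot exceed the right-hand side, the constraint is redundant. A sketch:

```python
import numpy as np
from scipy.optimize import linprog

def redundant_constraints(A, b):
    """Indices i where a_i x <= b_i is implied by the other constraints."""
    redundant = []
    for i in range(len(b)):
        keep = [j for j in range(len(b)) if j != i]
        # linprog minimizes, so maximize a_i x by minimizing -a_i x.
        res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep],
                      bounds=[(None, None)] * A.shape[1])
        if res.status == 0 and -res.fun <= b[i] + 1e-9:
            redundant.append(i)
    return redundant

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])          # x + y <= 3 is implied by the others
print(redundant_constraints(A, b))      # [2]
```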

Volume 35: Automated Verification of Critical Systems 2010

The correct-by-construction approach can be supported by a progressive and incremental process controlled by the refinement of models for distributed algorithms. The Event-B modelling language supports our methodological proposal suggesting proof-based guidelines. The main objective is to facilitate the correct-by-construction approach for designing distributed algorithms [Reh09, BM09, Mér09] by combining local computing models [CM10, CM07c] and Event-B models [Abr10a] to get the benefits of both models. In fact, local computation models provide an abstraction of distributed computations, which can be expressed in Event-B; they provide a graphical complement for Event-B models, and Event-B models provide a framework for expressing correctness with respect to safety properties. More precisely, we introduce several methodological steps identified during the development of case studies. A general structure characterizes the relationship between the contract, the Event-B models, and the developed algorithm using a specific application of Event-B models and refinement. Distributed algorithms are considered with respect to the local computation model [CGM08] based on a relabelling relation over graphs representing distributed systems. The ViSiDiA toolbox [Mos09] provides facilities for simulating local computation models, which can be easily modelled using Event-B with refinement.

PP 2012 29: Complexity of Judgment Aggregation

judgment set is replaced by the goal of finding a procedure under which returning an inconsistent set is highly unlikely. The (negative) result obtained for this framework is that this, however, does not extend the range of available procedures in a significant way. Second, Slavkovik and Jamroga (2011) extend the standard JA framework with weights (to model differences in influence between individuals) and provide an upper bound on the complexity of the winner determination problem for a family of distance-based aggregation procedures. Third, Baumeister et al. (2011) provide the first study of the computational complexity of the bribery problem in JA, asking whether it is possible to obtain a desired outcome if up to k individual agents can be bribed so as to change their judgment set. Finally, Baumeister et al. (2012) discuss the complexity of various forms of controlling judgment aggregation processes, e.g., influencing the outcome by adding or removing judges.

The Process of Solving Complex Problems

When it comes to gathering information (e.g., when the structural knowledge about the problem proves to be insufficient), some strategies may be especially useful for generating viable structural knowledge about the system. As Vollmeyer et al. (1996) pointed out, systematicity in strategy use allows a problem solver to coherently infer the consequences of single interactions, i.e., to build viable structural knowledge about parts of the system structure. For example, following Tschirgi (1980), to “vary one thing at a time” (while setting the other variables to a constant value like zero), commonly referred to as the VOTAT strategy, may be useful for systematically identifying the effects of independent (exogenous) variables on dependent (endogenous) variables in certain scenarios, especially when each exogenous variable is contrasted with the others at least once. (Setting the increments of all input variables to a value of zero from time to time may facilitate the detection of eigendynamics and indirect effects.) Systematic strategy use and generating (as well as using) structural knowledge might be especially important in complex systems when there is no (or even cannot be) sufficient implicit knowledge about a correct solution of the problem. But as human cognitive resources are limited, even detailed and extensive structural knowledge about all the aspects of a complex system may not foster CPS per se, as it may overload the human working memory. Based on this crucial aspect of complex problems, the following section proposes the most influential theories on how and why information reduction is an essential aspect of CPS.
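For a noise-free linear system, VOTAT identifies every direct effect exactly, which the following toy simulation illustrates (the linear setup is assumed for illustration; real CPS microworlds add eigendynamics and noise):

```python
import numpy as np

# Toy VOTAT illustration: vary one input at a time, hold the rest at
# zero, and read off each input's effect on the outputs.
rng = np.random.default_rng(42)
true_effects = rng.integers(-3, 4, size=(3, 2)).astype(float)  # 3 inputs, 2 outputs

def system_step(inputs):
    """Hidden system: outputs respond linearly to the inputs."""
    return inputs @ true_effects

def votat():
    estimated = np.zeros_like(true_effects)
    for i in range(3):
        probe = np.zeros(3)
        probe[i] = 1.0          # vary only variable i; others stay at zero
        estimated[i] = system_step(probe)
    return estimated

print(np.allclose(votat(), true_effects))  # True: every direct effect identified
```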
