A Software Framework for Data Based Analysis

By using a programming language for statistics, one has much more flexibility to program algorithms. But this approach requires familiarity with the respective language, and the resulting programs are usually script-based. This makes these algorithms less convenient and more troublesome to use than software with a GUI for interactive modelling. Often even the programmer herself has problems getting a script running that she has not touched for a while. Furthermore, model building in econometrics is typically a multi-step procedure involving a number of different algorithms. With a script-based approach, combining these procedures can become quite a complex undertaking: it always requires text editing of sometimes lengthy source code. Furthermore, documentation is often quite sloppy, which requires investigating the algorithms themselves to know exactly how parameters need to be prepared and what the results contain. Another problem is that the authors of these algorithms usually see themselves as scientists rather than programmers, and they often do not reflect much on software engineering techniques. The result is that software reuse is often limited to reusing single procedures written in some script language for statistics. More complex interactions or object-oriented design are only applied by experienced developers and still cannot be considered mainstream techniques in that area.

The Tulip 3 Framework: A Scalable Software Library for Information Visualization Applications Based on Relational Data

In this approach, the contents of metanodes, derived either from topological structures or from attribute information, were constructed and/or drawn on demand as the user explored the data. Grouse [5] took a large graph and hierarchy as input and was able to draw parts of it on demand as users opened metanodes. Appropriate graph drawing algorithms were used to draw the subgraphs based on their topological structure; for example, if a node contains a tree, a tree drawing algorithm is used. GrouseFlocks [7] was created to construct graph hierarchies based on attribute data and to draw them progressively. Search strings selected or categorized nodes, and induced subgraphs computed from attribute values were placed inside connected metanodes; these metanodes could then be drawn on demand with Grouse. However, often the parts of a graph near certain nodes or metanodes are of interest, and certain metanodes can be too large to draw on demand. TugGraph [8] was created for these situations, where the topology near a node or metanode is interesting. It can also summarize specific sets of paths in the graph.

An Eclipse-Based Tool Framework for Software Model Management

specify the whole system, the relations between these parts must be expressed, and then the parts must be merged into a single UML model in a way that correctly reflects these relations. The area of metadata management faces similar challenges due to the need to relate many schemas (i.e., models) in scenarios such as database integration, message mapping, data migration, etc. There, the field of Model Management [3] has emerged as a way to address these complexities by proposing that model relations be expressed as first-class objects called model mappings, and that generic operators be defined to manipulate models and mappings in a sound way to achieve various modeling goals. A key strength of this approach is its solid mathematical foundation [5].
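The mapping-and-operator idea can be sketched in a few lines. This is a hypothetical illustration, not the API of any Model Management tool: models are reduced to sets of element names, a mapping is a first-class object holding correspondences, and `merge` is one example of a generic operator over it.

```python
# Illustrative sketch only: models as sets of elements, mappings as
# first-class objects, and a generic merge operator consuming a mapping.

class Model:
    def __init__(self, name, elements):
        self.name = name
        self.elements = set(elements)

class Mapping:
    """First-class object relating elements of two models."""
    def __init__(self, left, right, pairs):
        self.left, self.right = left, right
        self.pairs = set(pairs)  # (left_element, right_element) correspondences

def merge(mapping):
    """Generic merge operator: unify corresponding elements, keep the rest."""
    unified = {f"{a}={b}" for a, b in mapping.pairs}
    mapped_left = {a for a, _ in mapping.pairs}
    mapped_right = {b for _, b in mapping.pairs}
    rest = (mapping.left.elements - mapped_left) | (mapping.right.elements - mapped_right)
    return Model(f"{mapping.left.name}+{mapping.right.name}", unified | rest)

m1 = Model("ClassDiagram", {"Customer", "Order"})
m2 = Model("Schema", {"customer_tbl", "invoice_tbl"})
m = merge(Mapping(m1, m2, {("Customer", "customer_tbl")}))
print(sorted(m.elements))  # unified element plus the unmatched ones
```

Because the mapping is an object in its own right, other operators (compose, diff, match) can be defined over the same representation, which is the soundness argument the passage alludes to.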

SOA Based Mobile Application Software Test Framework

• Service: Services adhere to a communications agreement, as defined collectively by one or more service description documents. Mobile-test-oriented service design needs to summarize functions in the form of services in accordance with the mobile software requirements, and to define specific services according to their implementation and invoking behavior. After analysis, the main test platform services are divided into two categories: user-interaction services and testing-related services. A user-interaction service is responsible for providing the GUI, but testing services differ in how they interact with users, which makes it difficult to propose a generic model for them. Therefore we need to provide a service that manages all GUI elements of the testing services. The design of testing-related services depends on the features of mobile software testing. These services include test management, test execution, process control, testing implementation, communications, results analysis, data management and some other services.

SDIoT: A Software Defined based Internet of Things framework

These challenges, and others that exist in classical storage solutions, were studied by the authors in Cecchinel et al. (2014) and motivated them to propose a new software-based architecture to handle the big data generated by the sensors and other objects in an IoT network. This architecture is based on cloud computing, storing the data in the cloud instead of in physical appliances. Before building their solution, they set out four design requirements that must be met by any storage solution architecture for an IoT-based network. The solution must support different types and platforms of sensors, data and protocols, and heterogeneous hardware. Building a scalable solution, either vertically to add extra storage space or horizontally to provide good load balancing, is also a mandatory requirement. In addition, the solution should provide remote reconfiguration of the underlying devices. Finally, it should offer fine-grained user applications that let end users access and query the gathered data in a smooth way.

Suitability assessment framework of agent-based software architectures

Context: A common distributed intelligent system architecture is the Multi Agent System (MAS). Creating systems with this architecture has recently been supported by Agent Oriented Software Engineering (AOSE) methodologies. But two questions remain: how do we determine the suitability of a MAS implementation for a particular problem? And can this be determined without AOSE expertise? Objective: Given the relatively small number of software engineers who are AOSE experts, many problems that could be better solved with a MAS are solved using more commonly known but not necessarily as suitable development approaches (e.g. object-oriented). The paper aims to empower software engineers, who are not necessarily AOSE experts, in deciding whether or not to advocate the use of MAS technology for a given project. Method: The paper constructs a systematic framework that identifies key criteria in a problem requirement definition to assess the suitability of a MAS solution. The criteria are identified using an iterative process: features are initially identified from MAS implementations and then validated against related work. This is followed by a statistical analysis of 25 problems characterising previously developed agent-oriented solutions, to group features into key criteria. Results: Factor analysis showed the key criteria to be sufficiently prominent to construct a framework providing a process that identifies the discovered criteria within the requirements. The framework is then evaluated for assessing the suitability of a MAS architecture, by non-AOSE experts, on two real-world problems: an electricity market simulation and a financial accounting system. Conclusion: In place of a software engineer's personal inclination to use (or not to use) a MAS, our framework provides an objective mechanism. It can supplant current practices, where the decision to use a MAS architecture for a given problem remains an informal process. It was successfully illustrated on two real-world problems to assess the suitability of a MAS implementation, and will potentially facilitate the take-up of MAS technology. © 2012 Elsevier B.V. All rights reserved.

A Context Aware Framework for Product Based Software Certification

Context awareness is a property of a system that uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task. Context-aware systems can be classified into three main categories: device context, user context and physical context. The user context category deals with user-driven actions and is the most appropriate type of context awareness for the proposed framework. Efficient management of the context is supported and driven by the context model and its structure. The philosophy behind context modeling follows two main objectives: a flexible structure that enables knowledge sharing, and logical reasoning over static data. The success of context-aware systems directly depends on their ability to maintain these key objectives. A multi-level ontological approach was selected to model the context for the framework. The upper layer within the ontological hierarchy models generic concepts and relations for product-based software certification, while the lower levels of the ontology model domain-specific concepts and relations. This allows criteria that commonly occur in the lower levels to be gathered in one location and moved to the upper levels of the ontological hierarchy without being redefined multiple times. This approach eliminates issues in which concepts or properties could be defined or evaluated differently in different domains [33].

Comparative Analysis of Forensic Software on Android-based Blackberry Messenger using NIJ Framework

V. FUTURE WORK
For future work, many comparative studies can be conducted using forensic tools such as Oxygen Forensic Suite [11], Andriller [12], Cellebrite UFED Physical Pro and XRY [13], to get an overview of which forensic tool is best for digital forensic investigations. The comparison can also be conducted on forensic frameworks and parameters, such as those of the National Institute of Standards and Technology (NIST) [14], [15] and the Integrated Digital Forensic Investigation Framework (IDFIF) [16].


Selecting an open-source framework: a practical case based on software development for sensory analysis

Community activity indicates how intensively a project is used, and strong user activity will usually lead to stronger development. One possible way to measure user activity is to look at the number of monthly questions on StackOverflow. Search trends indicate developers' interest in, and the relevance of, the open-source projects; interest tendencies give some insight into how developers' focus and preferences are evolving. Regarding these two characteristics, Figure 1 (left side) illustrates the StackOverflow tagged monthly questions and Figure 1 (right side) shows the Google search trends (search statistics were collected from Google for the period from January 2004 to June 2014). As the charts in Figure 1 show, Django and Ruby on Rails have the highest activity, although Grails and Play are slowly growing. Older frameworks tend to have larger user bases even when newer, better ones exist: developers tend to adopt a framework they like and stick with it for some time, and projects that have been completed need maintenance or further development, meaning the framework chosen at the start will continue to be used.

A Framework to Detect and Analyze Software Vulnerabilities: Analysis

Most methods for teaching software development begin with an introductory programming class, where little time is given to security issues. Subsequent courses instead cover computer networks, data communication, database management, and analysis and design. Security methods are reserved for higher-level classes and are considered an add-on to the original software. The habits formed in initial programming courses can persist for a long time: having students focus repeatedly on issues of syntax and on the primitive details of data structures, control structures, etc. forms habits associated with that level of concern. Higher-level issues such as testability, requirements, security and maintainability may be covered late in the coursework, but never to the degree needed to form strong work habits. Efforts to change programmer behavior will continually run up against the habits formed early in their educational experience [5].

Data Warehouse Requirements Analysis Framework: Business-Object Based Approach

d) Identification of Materialized Views: For user-oriented DW requirements engineering, it is also important to analyze how users will efficiently interact with the DW system to perform the necessary analysis activities. Materialized views are the central issue for the usability of the DW system. DW data are organized multidimensionally to support OLAP, and a DW can be seen as a set of materialized views defined over the source relations. These views are frequently evaluated by user queries, and they need to be updated when the source relations change. During DW analysis and design, the initial materialized views need to be selected so as to make the user's interactions simple and efficient in terms of accomplishing the user's analysis objectives. In the proposed requirements engineering framework, the domain boundary was drawn in the first phase through the identification of Fact BOs, Dimension BOs, Actor BOs, and the interactions between them, and these have been further refined in this phase. The analysis activities that may be performed by Actor BOs based on their roles, as well as the Event BOs, were identified in the same phase. Moreover, the feature tree concept explores the constraint requirements of interest to the domain. Based on these identifications, the different materialized views can be identified in this step. Each materialized view is represented semantically in the context of some Fact BO and in terms of the actors and their roles, the analysis activities that may be performed, the events that may occur, the related Dimension BOs involved, and the related constraints. For one Fact BO, several materialized views may exist, in order to minimize view-level dependency and to meet the analytical evaluation requirements of the stakeholders. Semantically, a materialized view is represented using a View Template. 
The Interface Template will contain the view name, identification, analysis objectives, target Fact BO, Actor BO, roles, related activities, related Dimension BOs realizing the source relations, related Event BOs and related constraints. Any view template is reusable and modifiable through an iterative process to accommodate the updatable materialized view.
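As a minimal illustration of the materialized-view idea referred to above (hypothetical data and names, not the paper's notation), a view can be treated as a precomputed aggregate over a source relation that must be refreshed when the source changes:

```python
# Illustrative sketch only: a materialized view as a precomputed
# aggregate over a source relation, refreshed when the source changes.

sales = [  # source relation: (region, amount)
    ("north", 100), ("south", 250), ("north", 50),
]

def compute_view(rows):
    """The view definition: total amount per region."""
    totals = {}
    for region, amount in rows:
        totals[region] = totals.get(region, 0) + amount
    return totals

view = compute_view(sales)   # materialize once; queries read the view
print(view)

sales.append(("south", 75))  # the source relation changes ...
view = compute_view(sales)   # ... so the materialized view must be updated
print(view)
```

A real DW would refresh incrementally rather than recompute, but the dependency the passage describes, views selected at design time and kept consistent with changing source relations, is the same.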

Implementing Software Requirements and Modeling Using Mapping Rules Based on B-SCP Framework Analysis

The first research aspect mainly concerns the linkage between organizational strategy and software requirements analysis and definition. Research [1] presents the B-SCP requirements framework for validating the alignment between software requirements and organizational strategy based on strategy, context and process. Another work [2] presents a framework for domain requirements analysis and architecture modelling in software product lines. Further research [3], [4] presents requirements analysis methods that use Role Activity Diagrams (RADs) to represent the business process and Jackson context diagrams to represent requirements analysis in a domain of interest; both present RADs and Jackson context diagrams in detail. The proposed requirements analysis methods, which cover business strategy and software requirements, are used to validate and verify the alignment of organizational IT in support of the business strategy, and are applied to a Seven-Eleven Japan case study. Another work [5] presents a requirements analysis method called PALM (Pedigreed Attribute eLicitation Method), which analyzes requirements from business strategy from various points of view and then interprets the business strategy into the Non-Functional Requirements (NFRs) that matter to the software architecture; this research proposes a goal-driven quality requirements analysis method. Another work [6] presents a requirements analysis method from business strategy: first, goal orientation and the i* model are used for requirements analysis; then problem frames are used to observe and capture the domain of interest in the process. The method is applied to an appointment system case study that shows the architecture and relations of making an appointment; this research proposes requirements analysis techniques that can be applied together. 
Another work [7] presents a requirements analysis method that starts from business strategy and scenarios and then interprets them into functional requirements represented by use case diagrams. The method and tools are applied to a Home Integration System (HIS), used to identify and give meaning to software requirements in a product line based on the business goal and product marketing plan. This research proposes requirements analysis steps from goal and key

Managing software evolution through midleware and policy-based software adaptation framework

In a typical enterprise system environment, three main layers exist: the Device layer, the Delivery Channels layer, and the Back-end Systems layer. This architecture is observed in large organizations in Malaysia such as financial institutions and public services. The observation is backed up by a guided interview conducted with IT personnel of specific organizations and IT personnel from a System Integrator (SI) company. Please refer to Appendix A for the questionnaire used in the guided interview and the analysis of the results. The description of each layer is as follows:

PolyLens: Software for Map-based Visualization and Analysis of Genome-scale Polymorphism Data

1) GeneID Manager, Location Manager, Sample Manager, and Stop List Manager: Each of these manager classes is simply tasked with maintaining an in-memory mapping of data from each of the four data file types. The GeneIDManager class maintains a mapping from gene IDs to their corresponding RADTags. The LocationManager class maintains a mapping from organism IDs to their corresponding latitude and longitude pairs. The SampleManager class maintains both a mapping from loci to their corresponding RADTags and a mapping from RADTags to their corresponding loci. Finally, the StopListManager class maintains a map of all loci that are in the stop list. Each of these manager classes also maintains a list of files from which their data is collected. At parse time, each manager class also validates the data, excluding erroneous entries and generating informative error messages for the user. Each of these manager classes also maintains an update status that the master model class can poll to discover whether or not the data has changed.
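The manager-class pattern described above can be sketched as follows. This is an illustrative reconstruction, not PolyLens source code; the field names and validation rule are assumptions:

```python
# Hypothetical sketch of one manager class in the style described above:
# an in-memory mapping, per-entry validation at parse time, and an
# update status that a master model can poll.

class LocationManager:
    def __init__(self):
        self.locations = {}   # organism ID -> (latitude, longitude)
        self.files = []       # files the data was collected from
        self.errors = []      # informative messages for rejected entries
        self.changed = False  # update status polled by the master model

    def parse(self, filename, rows):
        self.files.append(filename)
        for organism_id, lat, lon in rows:
            if not (-90 <= lat <= 90 and -180 <= lon <= 180):
                self.errors.append(f"{filename}: bad coordinates for {organism_id}")
                continue  # exclude the erroneous entry
            self.locations[organism_id] = (lat, lon)
            self.changed = True

    def poll(self):
        """Return the update status and clear it for the next poll."""
        changed, self.changed = self.changed, False
        return changed

mgr = LocationManager()
mgr.parse("samples.tsv", [("org1", 47.6, -122.3), ("org2", 123.0, 0.0)])
print(mgr.locations)  # only the valid entry survives
print(mgr.poll())     # True once, then False until the next change
```

The same skeleton, with a different mapping and validation rule, would cover the GeneID, Sample, and Stop List managers.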

A Cloud-based System Framework for Storage and Analysis on Big Data of Massive BIMs

Our team has proposed and developed a cloud-based BIM system called CloudBIM [1] that can store and view the data of massive BIMs. As shown in Figure 1, it uses cloud computing technology to store massive BIM data and adopts IFC as the BIM file upload format of the CloudBIM system. Based on the IFC format, we developed a commonly used BIM upload interface and then a Web interface for BIM viewing using WebGL, such that the BIM can be reached on any device through a standard web browser as an online viewing system. This system solves the problems caused by the project management mode of existing commercial BIM software based on specific file formats, as well as file compatibility between manufacturers. However, since its main function is only to apply cloud computing technology to data storage and three-dimensional visualization of massive BIMs, the data of massive BIMs stored in the CloudBIM system still offers many possibilities for analysis. Examples include using cloud computing technology to conduct statistics and analysis on the property data of massive BIMs, or adding the dynamic data of BIM inside the buildings' space for joint operations, among other possibilities yet to be realized. If these functions can be added to CloudBIM, its functionality will be more complete.

A Framework for Incremental Quality Analysis of Large Software Systems

Frameworks: A framework integrating a plethora of analyses in order to support continuous quality monitoring is SQUANER [24]. Like our approach, it proceeds incrementally, and updates are triggered by commits to a version control system. The goal, like ours, is also to provide rapid developer feedback. In addition, SQUANER presents advice for improving the analyzed code base, based on the findings of its analyses. However, the types of analyses supported differ: SQUANER, unlike our approach, focuses exclusively on object-oriented systems. Furthermore, the metrics calculated by SQUANER are file-based and thus limited to local analyses. Our approach, in contrast, supports local as well as global analyses. Additionally, we provide the quality history of each file in the system at per-commit granularity. The type of continuous quality control data provided by SQUANER could not be determined, as the corresponding web site was unreachable. As far as performance is concerned, we cannot compare our approach to SQUANER, as no empirical data was available.

EDGAR: a software framework for the comparative analysis of prokaryotic genomes

Understanding the taxonomic relation of Xanthomonas strains has become an awkward endeavor. In the early days of microbiology, each bacterial isolate identified from a host plant for which no member of this bacterial genus had been described previously was classified as a new species [42]. Later many of these species were merged on the basis of in vitro tests, but the original name identifying the main host plant was conserved in the term "pathovar" [43]. Incorporation of information derived from partial knowledge of DNA sequences, such as 16S rDNA sequences or RFLP patterns, led then to a reassessment of the Xanthomonas taxonomy [44], which is still in progress [45,46]. This phylogenetic analysis provides not only the basis for a systematic order of the Xanthomonas bacteria, but also a deeper understanding of the evolution of the Xanthomonas strains. However, all attempts so far to reconstruct the true evolutionary relationships between the Xanthomonads did not lead to a taxonomy that is generally applied within the community. Instead, the differing classifications of the strains resulted in inconsistent naming in the literature. Thus, exploiting the emerging genome data may now open the door to obtain a well-established Xanthomonas taxonomy on a definite basis. We have used EDGAR to assess this approach.

Assessment of a Framework for Comparing Software Architecture Analysis Methods

For example, in ATAM, quality attributes are specified by building a utility tree, and results can be presented using a result tree; other methods, however, do not use these techniques. Thus, any tool that claims to sufficiently support ATAM is expected to help build utility and result trees. Furthermore, if an architecture analysis method requires that the architecture be described in a certain description language (such as UniCon, Acme, Wright, or Rapide), that method should have a supporting tool technology to help create, maintain, evolve, and analyze the architectures specified in the required architecture description language. Moreover, a recent effort to assess three architecture analysis methods using the features analysis approach considered the tool support provided by each method as an assessment criterion (Griman et al., 2006). FOCSAAM compares architecture analysis methods based on the level of support provided by a tool; such support may vary from non-existent to full support. However, there is a need for further research on the criticality of using tool support as a differentiation point for architecture analysis methods.

SSketch: An Automated Framework for Streaming Sketch-based Analysis of Big Data on FPGA

A. Light Field Experiments: A light field image is a multi-dimensional array of images that are simultaneously captured from slightly different viewpoints. Promising capabilities of light field imaging include the ability to define the field's depth, to focus or refocus on a part of the image, and to reconstruct a 3D model of the scene [25]. To evaluate the accuracy of the SSketch algorithm, we run our experiments on light field data consisting of 2500 samples, each constructed of 25 patches of size 8 × 8. The light field data results in a data matrix with 4 million non-zero elements. We choose this moderate input matrix size to accommodate the SVD algorithm for comparison purposes and to enable exact error measurement, especially for the approximation of the correlation matrix (a.k.a. the Gram matrix). The Gram matrix of a data collection consists of the Hamiltonian inner products of the data vectors, and the core of several important data analysis algorithms is iterative computation on the data Gram matrix. Examples of Gram matrix usage include, but are not limited to, kernel-based learning and classification methods, as well as several regression and regularized least squares routines [31].
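For reference, for a real data matrix A whose columns are the data vectors, the Gram matrix of pairwise inner products is simply G = AᵀA. A small NumPy sketch (the sizes here are illustrative, not those of the light field experiment):

```python
# Illustrative only: computing the Gram matrix of a small random data
# matrix. Real sketch-based methods approximate G for much larger A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 5))  # 5 data vectors of dimension 64

G = A.T @ A                       # Gram matrix: G[i, j] = <a_i, a_j>

print(G.shape)                    # one row/column per data vector
print(np.allclose(G, G.T))        # inner products commute, so G is symmetric
```

Sketching methods such as the one evaluated here aim to approximate G from a compressed representation of A, trading exactness for streaming, memory-bounded computation.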

A Software Framework for Spam Campaign Detection and Analysis

FP-Growth is a data-mining algorithm used for discovering association rules between items in large datasets. It includes two main steps: FP-Tree construction and frequent itemset generation. In the first step, the algorithm builds a compact data structure, called the FP-Tree, using two passes over the dataset. In the first pass, the algorithm scans the dataset and counts the number of occurrences of each item. In the second pass, the FP-Tree is constructed by inserting instances from the dataset. Items in each instance are sorted in decreasing order of their frequency in the dataset, while infrequent items are discarded so that the tree can be processed quickly. In the second step, the FP-Growth algorithm extracts frequent itemsets from the FP-Tree. It starts from the bottom of the tree by finding all instances matching a given condition; then each prefix-path sub-tree is processed recursively to extract the frequent itemsets. This construction method allows itemsets that have several common features to arise naturally within the FP-Tree, instead of generating candidate itemsets and testing them against the entire dataset. Indeed, using this technique, items that have the most features in common share the same path in the tree. The root of the tree is the only empty node, separating items that have no features in common. The FP-Tree usually has a smaller size than the uncompressed dataset, since items that share similar features are grouped together. Hence, this compressed version significantly reduces the amount of data that must be analyzed, while maintaining the characteristics of the items.
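The two passes of FP-Tree construction described above can be sketched as follows. This is a simplified illustration with made-up transactions: the tree is reduced to nested dictionaries, and the node counts and header links a real FP-Tree keeps are omitted.

```python
# Sketch of FP-Tree construction: count item frequencies (pass 1),
# then drop infrequent items, sort each transaction by decreasing
# frequency, and insert it so shared prefixes share a path (pass 2).
from collections import Counter

transactions = [["a", "b", "c"], ["b", "c"], ["b", "d"], ["a", "b", "c"]]
min_support = 2

# Pass 1: scan the dataset and count occurrences of each item.
counts = Counter(item for t in transactions for item in t)

def ordered(t):
    """Discard infrequent items, sort the rest by decreasing frequency."""
    kept = [i for i in t if counts[i] >= min_support]
    return sorted(kept, key=lambda i: (-counts[i], i))

# Pass 2: insert each reordered transaction; common prefixes merge.
tree = {}
for t in transactions:
    node = tree
    for item in ordered(t):
        node = node.setdefault(item, {})

print(tree)  # all four transactions collapse onto one shared path
```

With `min_support = 2` the item "d" is discarded, and every transaction becomes a prefix of `b, c, a`, so the whole dataset compresses into a single path, which is exactly the compression effect the paragraph describes.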
