By using a programming language for statistics, one has much more flexibility to program algorithms. But this approach requires familiarity with the respective language, and the resulting programs are usually script-based. This makes these algorithms less convenient and more troublesome to use than software with a GUI for interactive modelling. Often even the programmer herself has problems getting a script running that she has not touched for a while. Furthermore, model building in econometrics is typically a multi-step procedure involving a number of different algorithms. With a script-based approach, combining these procedures can become quite a complex undertaking; it always requires editing of sometimes lengthy source code. Moreover, documentation is often quite sloppy, so one must investigate the algorithms themselves to know exactly how parameters need to be prepared and what the results contain. Another problem is that the authors of these algorithms usually see themselves as scientists rather than programmers, and they often do not reflect much on software engineering techniques. As a result, software reuse is often limited to reusing single procedures written in some statistical scripting language. More complex interactions or object-oriented design are applied only by experienced developers and still cannot be considered mainstream techniques in that area.
In this approach, the contents of metanodes, derived either from topological structure or from attribute information, were constructed and/or drawn on demand as the user explored the data. Grouse took a large graph and hierarchy as input and was able to draw parts of it on demand as users opened metanodes. Appropriate graph drawing algorithms were used to draw the subgraphs based on their topological structure; for example, if a metanode contained a tree, a tree drawing algorithm was used. GrouseFlocks was created to construct graph hierarchies based on attribute data and to draw them progressively. Search strings selected or categorized nodes based on attribute values, and the induced subgraphs computed from them were placed inside connected metanodes. These metanodes could be drawn on demand with Grouse. However, often the parts of a graph near certain nodes or metanodes are of interest, and certain metanodes can be too large to draw on demand. TugGraph was created for these situations, when the topology near a node or metanode is interesting. It can also summarize specific sets of paths in the graph.
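The GrouseFlocks-style construction described above, selecting nodes by an attribute value, taking the induced subgraph, and wrapping each connected component in a metanode, can be sketched as follows. This is a simplified illustration under assumed data shapes, not the tools' actual API:

```python
from collections import defaultdict, deque

def attribute_metanodes(edges, attrs, key, value):
    """Select nodes whose attribute `key` equals `value`, take the induced
    subgraph, and return its connected components (one metanode each)."""
    selected = {n for n, a in attrs.items() if a.get(key) == value}
    adj = defaultdict(set)
    for u, v in edges:
        if u in selected and v in selected:   # keep induced edges only
            adj[u].add(v)
            adj[v].add(u)
    seen, metanodes = set(), []
    for start in selected:                    # BFS per unseen component
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            n = queue.popleft()
            comp.add(n)
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        metanodes.append(comp)
    return metanodes
```

Each returned component would then become one metanode, drawable on demand by a Grouse-style layout step.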
To specify the whole system, the relations between these parts must be expressed, and then the parts must be merged into a single UML model in a way that correctly reflects these relations. The area of metadata management has similar challenges due to the need to relate many schemas (i.e., models) in scenarios such as database integration, message mapping, data migration, etc. There, the field of Model Management has emerged as a way to address these complexities by proposing that model relations be expressed as first-class objects called model mappings, and that generic operators be defined that can be used to manipulate models and mappings in a sound way to achieve various modeling goals. A key strength of this approach is a solid mathematical foundation.
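The core Model Management idea above, mappings as first-class objects plus generic operators over them, can be illustrated with a toy sketch. The class names and the set-of-pairs representation are hypothetical simplifications, not the literature's formalism:

```python
class Mapping:
    """A model mapping as a first-class object: a set of
    (source_element, target_element) correspondences."""
    def __init__(self, source, target, pairs):
        self.source = source        # name of the source model
        self.target = target        # name of the target model
        self.pairs = set(pairs)

def compose(m1, m2):
    """Generic Compose operator: relate m1.source to m2.target
    through the elements the two mappings share."""
    assert m1.target == m2.source, "mappings must share the middle model"
    pairs = {(a, c)
             for (a, b) in m1.pairs
             for (b2, c) in m2.pairs
             if b == b2}
    return Mapping(m1.source, m2.target, pairs)
```

Other generic operators (Match, Merge, Diff) would manipulate models and mappings in the same uniform, object-level way.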
• Service: Services adhere to a communications agreement, as defined collectively by one or more service description documents. Mobile-test-oriented service design needs to summarize the functions in the form of services in accordance with the mobile software requirements, and to define specific services according to their implementation and invocation behavior. After analysis, the main test platform services are divided into two categories: user-interaction services and testing-related services. The user-interaction services are responsible for providing the GUI; however, testing services differ in their ways of interacting with users, which makes it difficult to propose a generic model for them. Therefore we need to provide a service that manages all GUI elements of the testing services. The design of the testing-related services depends on the features of mobile software testing. These services include test management, test execution, process control, testing implementation, communications, results analysis, data management, and other services.
These challenges, along with others that exist in classical storage solutions, have been studied by the authors in Cecchinel et al. (2014) and motivated them to propose a new software-based architecture to handle the Big Data generated by the sensors and other objects in an IoT network. This architecture is based on cloud computing, storing the data in the cloud instead of in physical appliances. Before building their solution, the authors set out four design requirements that must be satisfied by any storage architecture for an IoT-based network. The solution must support different types and platforms of sensors, data and protocols, and heterogeneous hardware. Building a scalable solution, either vertically to add extra storage space or horizontally to provide good load balancing, is also a mandatory requirement for any solution. In addition, the solution should provide remote reconfiguration of the underlying devices. Finally, it should offer fine-grained user applications that let end users access and query the gathered data in a smooth way.
Context awareness is a property of a system that uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task.
There are three main categories by which context-aware systems can be classified: device context, user context, and physical context. The user context category deals with user-driven actions and is the most appropriate type of context awareness for the proposed framework. Efficient management of the context is supported and driven by the context model and its structure. The philosophy behind modeling context follows two main objectives: namely, a flexible structure in which knowledge sharing is enabled, and logical reasoning in which reasoning over static data can occur. The success of context-aware systems depends directly on their ability to maintain these key objectives. A multi-level ontological approach was selected to model the context for the framework. The upper layer within the ontological hierarchy models generic concepts and relations for product-based software certification. The lower levels within the ontology are used for modeling domain-specific concepts and relations. This allows criteria that commonly occur in the lower levels to be gathered in one location by moving them to the upper levels of the ontological hierarchy, without being redefined multiple times. This approach eliminates issues in which concepts or properties could be defined or evaluated differently in different domains.
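The layering idea above, common criteria defined once in the upper layer and inherited by domain layers rather than redefined per domain, can be illustrated with a simple lookup chain. This is a toy sketch; the concept names are hypothetical, and a real system would use an ontology language rather than dictionaries:

```python
from collections import ChainMap

# Upper layer: generic certification concepts shared by all domains.
upper_layer = {
    "Traceability": "generic criterion, defined once",
    "Documentation": "generic criterion, defined once",
}

# Lower layer: domain-specific concepts for one certification domain.
medical_domain = {
    "PatientSafety": "domain-specific criterion",
}

# Lookup falls back from the domain layer to the upper layer, so a
# common criterion is found without being redefined in every domain.
medical_context = ChainMap(medical_domain, upper_layer)
```

Adding a second domain layer over the same `upper_layer` reuses the shared criteria unchanged, which is the point of hoisting them upward in the hierarchy.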
V. FUTURE WORK
For future work, many comparative studies can be conducted using forensic tools such as Oxygen Forensic Suite, Andriller, Cellebrite UFED Physical Pro, and XRY, to get an overview of which forensic tool is best for digital forensic investigations. The comparison can also be conducted on forensic frameworks and parameters such as the National Institute of Standards and Technology (NIST) framework and the Integrated Digital Forensic Investigation Framework (IDFIF).
Community activity indicates how intensively a project is used. Strong user activity will usually lead to stronger development. One way to measure user activity is, for example, to look at the number of monthly questions on StackOverflow. Search trends indicate developers' interest in, and the relevance of, the open-source projects. Interest tendencies give some insight into how developers' focus and preferences are evolving. Regarding these two characteristics, Figure 1 (left side) illustrates the monthly StackOverflow tagged questions and Figure 1 (right side) shows the Google search trends (search statistics were collected from Google for the period from January 2004 to June 2014). As the charts in Figure 1 show, Django and Ruby on Rails have the highest activity, although Grails and Play are slowly growing. Older frameworks tend to have larger user bases even if newer, better ones exist. Developers tend to adopt a framework they like and stick with it for some time. Projects that have been completed need maintenance or further development, meaning the framework chosen at the start will continue to be used.
Different methods for teaching software development begin with an initial programming class. No time is given there to security issues. Instead, other courses cover computer networks, data communication, database management, and analysis and design. Security methods are taught in higher-level classes and are treated as an add-on to the original software. The habits formed in initial programming can last a long time. Having students focus repeatedly on issues of syntax and primitive details of data structures, control structures, etc. forms habits associated with this level of concern. Higher-level issues, such as testability, requirements, security, and maintainability, may be covered late in the coursework, but never to the degree needed to form strong work habits. Changing programmer behavior will continually run up against the habits formed early in their educational experience.
d) Identification of Materialized Views: For user-oriented DW requirements engineering, it is also important to analyze how the user will efficiently interact with the DW system to perform the necessary analysis activities. Materialized views are the central issue for the usability of the DW system. DW data are organized multidimensionally to support OLAP, and a DW can be seen as a set of materialized views defined over the source relations. User queries are frequently evaluated against those views, and the materialized views need to be updated when the source relations change. During DW analysis and design, the initial materialized views need to be selected so as to make the user's interactions simple and efficient in terms of accomplishing the user's analysis objectives. In the proposed requirements engineering framework, the domain boundary was drawn in the first phase through the identification of Fact BOs, Dimension BOs, Actor BOs, and the interactions between them, and these have been further refined in this phase. The list of analysis activities that may be performed by Actor BOs based on their roles, as well as the Event BOs, was identified in the same phase. Moreover, the feature-tree concept explores the constraint requirements of the domain of interest. Based on these identifications, the different materialized views can be identified in this step. Each materialized view is represented semantically in the context of some Fact BO and in terms of the actors along with their roles, the analysis activities that may be performed, the events that may occur, the related Dimension BOs involved, and the related constraints. Related to one Fact BO, there may exist several materialized views, in order to minimize view-level dependency and to meet the analytical evaluation requirements of the stakeholders. Semantically, a materialized view will be represented using a View Template.
The View Template will contain the view name, identification, analysis objectives, the target Fact BO, the Actor BO with its roles, the related activities, the related Dimension BOs that realize the source relations, the related Event BOs, and the related constraints. Any view template is reusable and modifiable through an iterative process to accommodate updates to the materialized view.
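The template fields listed above can be sketched as a simple record type. The field names below are our own rendering of the listed items, not a schema from the framework itself:

```python
from dataclasses import dataclass, field

@dataclass
class ViewTemplate:
    """Sketch of a View Template: one semantic description of a
    materialized view, anchored on a target Fact BO."""
    name: str                      # view name
    identification: str            # view identifier
    analysis_objectives: list      # objectives this view serves
    fact_bo: str                   # target Fact BO
    actor_bo: str                  # Actor BO that uses the view
    roles: list = field(default_factory=list)
    activities: list = field(default_factory=list)     # related analysis activities
    dimension_bos: list = field(default_factory=list)  # realize source relations
    event_bos: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
```

Because the template is a plain record, iterating on it (adding a role, a dimension, or a constraint) is a local edit, which matches the reusable, iteratively modifiable intent described above.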
The first research aspect mainly concerns the linkage between organizational strategy and software requirements analysis and definition. One study presents the B-SCP requirements framework for validating the alignment between software requirements and organizational strategy based on strategy, context, and process. Another presents a framework for domain requirements analysis and architecture modelling in software product lines. Two further studies present a requirements analysis method that uses Role Activity Diagrams (RADs) to represent the business process and Jackson context diagrams to represent requirements analysis in a domain of interest; both describe RADs and Jackson context diagrams in detail. The proposed method, which covers business strategy and software requirements, is used to validate and verify the alignment of organizational IT with the business strategy; the authors apply it to a case study of Seven-Eleven Japan. Another study presents a requirements analysis method called PALM (Pedigreed Attribute eLicitation Method). PALM analyzes requirements from business strategy from various points of view and then interprets the business strategy into the Non-Functional Requirements (NFRs) that matter to software architecture, thereby proposing a goal-driven quality-requirements analysis method. Yet another study presents a requirements analysis method starting from business strategy: first, goal-oriented analysis and the i* model are used for requirements analysis; then problem frames are used to observe and capture the domain of interest. The method is applied to an appointment-system case study that shows the architecture and relations involved in making an appointment. This work proposes requirements analysis through several techniques that can be applied together.
Another study presents a requirements analysis method based on business strategy and scenarios, which are then interpreted into functional requirements represented by use case diagrams. The method and tools are applied to a Home Integration System (HIS), used to identify and give meaning to software requirements in a product line based on the business goal and the product marketing plan. This research proposes requirements analysis steps from goal and key
In a typical enterprise system environment, three main layers exist: the Device layer, the Delivery Channels layer, and the Back-end Systems layer. This architecture is observed in large organizations in Malaysia, such as financial institutions and public services. The observation is backed up by a guided interview conducted with the IT personnel of specific organizations and with IT personnel from a System Integrator (SI) company. Please refer to Appendix A for the questionnaire used in the guided interview and the analysis of its results. The description of each layer is as follows:
1) GeneID Manager, Location Manager, Sample Manager, and Stop List Manager: Each of these manager classes is simply tasked with maintaining an in-memory mapping of data from each of the four data file types. The GeneIDManager class maintains a mapping from gene IDs to their corresponding RADTags. The LocationManager class maintains a mapping from organism IDs to their corresponding latitude and longitude pairs. The SampleManager class maintains both a mapping from loci to their corresponding RADTags and a mapping from RADTags to their corresponding loci. Finally, the StopListManager class maintains a map of all loci that are in the stop list. Each of these manager classes also maintains a list of files from which their data is collected. At parse time, each manager class also validates the data, excluding erroneous entries and generating informative error messages for the user. Each of these manager classes also maintains an update status that the master model class can poll to discover whether or not the data has changed.
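The manager pattern described above, in-memory mappings, source-file tracking, parse-time validation, and a pollable update status, can be sketched for the SampleManager case. This is a hypothetical simplification; the actual classes' method names and file formats are not shown in the text:

```python
class SampleManager:
    """Sketch of the manager pattern: bidirectional locus/RADTag
    mappings, validation on parse, and an update flag for polling."""
    def __init__(self):
        self.locus_to_radtag = {}
        self.radtag_to_locus = {}
        self.files = []           # files this manager's data came from
        self.errors = []          # informative messages for the user
        self._updated = False

    def parse(self, path, rows):
        """`rows`: iterable of (locus, radtag) pairs already read from `path`.
        Erroneous entries are excluded and reported, not inserted."""
        self.files.append(path)
        for locus, radtag in rows:
            if not locus or not radtag:
                self.errors.append(f"{path}: bad entry {(locus, radtag)!r}")
                continue
            self.locus_to_radtag[locus] = radtag
            self.radtag_to_locus[radtag] = locus
        self._updated = True

    def poll_updated(self):
        """Master model polls this; reading the flag clears it."""
        changed, self._updated = self._updated, False
        return changed
```

The other managers differ only in what they map (gene IDs to RADTags, organism IDs to coordinate pairs, stop-list membership), so the same skeleton applies.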
Our team has proposed and developed a cloud-based BIM system called CloudBIM that can store and display the data of massive BIMs. As shown in Figure 1, it uses cloud computing technology to store massive BIM data and adopts IFC as the BIM file upload format of the CloudBIM system. Based on the IFC format, we developed a commonly used BIM upload interface and then developed a Web interface for BIM viewing using WebGL, such that the BIM can be reached on any device through a standard web browser for online viewing. This system solves the problems caused by the project-management mode of existing commercial BIM software based on specific file formats, and by the compatibility of files between manufacturers. However, its main function is only to apply cloud computing technology to the storage and three-dimensional visualization of massive BIMs; the massive BIM data stored in the CloudBIM system still offers many possibilities for analysis. Examples include using cloud computing technology to compute statistics and analyses over the property data of massive BIMs, or adding the dynamic data of the buildings' interior spaces for joint operations, among other possibilities yet to be realized. If these functions can be added to CloudBIM, its functionality will be more complete.
Frameworks: A framework integrating a plethora of analyses in order to support continuous quality monitoring is SQUANER. Like our approach, it proceeds incrementally, and updates are triggered by commits to a version control system. The goal, like ours, is also to provide rapid developer feedback. In addition, SQUANER presents advice for improving the analyzed code base, based on the findings of its analyses. However, the types of analyses supported differ: SQUANER, unlike our approach, focuses exclusively on object-oriented systems. Furthermore, the metrics calculated by SQUANER are file-based and thus limited to local analyses. Our approach, in contrast, supports local as well as global analyses. Additionally, we provide the quality history of each file in the system at per-commit granularity. The type of continuous quality control data provided by SQUANER could not be determined, as the corresponding web site was unreachable. As far as performance is concerned, we cannot compare our approach to SQUANER, as no empirical data was available.
Understanding the taxonomic relations of Xanthomonas strains has become an awkward endeavor. In the early days of microbiology, each bacterial isolate identified from a host plant for which no member of this bacterial genus had been described previously was classified as a new species. Later, many of these species were merged on the basis of in vitro tests, but the original name identifying the main host plant was conserved in the term "pathovar". Incorporation of information derived from partial knowledge of DNA sequences, such as 16S rDNA sequences or RFLP patterns, then led to a reassessment of the Xanthomonas taxonomy, which is still in progress [45,46]. This phylogenetic analysis provides not only the basis for a systematic ordering of the Xanthomonas bacteria, but also a deeper understanding of the evolution of the Xanthomonas strains. However, none of the attempts so far to reconstruct the true evolutionary relationships between the Xanthomonads has led to a taxonomy that is generally applied within the community. Instead, the differing classifications of the strains have resulted in inconsistent naming in the literature. Thus, exploiting the emerging genome data may now open the door to establishing a Xanthomonas taxonomy on a definite basis. We have used EDGAR to assess this approach.
For example, in ATAM, quality attributes are specified by building a utility tree, and results can be presented using a result tree; however, other methods do not use these techniques. Thus, any tool that claims to sufficiently support ATAM is expected to help build utility and result trees. Furthermore, if an architecture analysis method requires that the architecture be described in a certain description language (such as UniCon, Acme, Wright, or Rapide), that method should have a supporting tool to help create, maintain, evolve, and analyze the architectures specified in that architecture description language. Moreover, a recent effort to assess three architecture analysis methods using the features analysis approach considered the tool support provided by each method as an assessment criterion (Griman et al., 2006). FOCSAAM compares architecture analysis methods based on the level of support provided by a tool. Such support may vary from non-existent to full support. However, there is a need for further research on the criticality of using tool support as a differentiation point for architecture analysis methods.
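To make the utility-tree shape concrete, a sketch follows: in ATAM the root "Utility" refines into quality attributes, then attribute refinements, with leaf scenarios typically ranked by (importance, difficulty). The scenario texts and rankings here are invented examples, not from any real evaluation:

```python
# Nested-dict sketch of an ATAM utility tree.
# Leaves are scenarios prefixed with an (importance, difficulty) ranking.
utility_tree = {
    "Utility": {
        "Performance": {
            "Latency": ["(H, M) Deliver a result page within 2 s under peak load"],
        },
        "Modifiability": {
            "New features": ["(M, H) Add a new payment channel in under 3 weeks"],
        },
    },
}
```

A tool supporting ATAM would let evaluators build, rank, and revise such a tree interactively rather than by hand-editing a structure like this.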
A. Light Field Experiments
A light field image is a set of images, arranged as a multi-dimensional array, that are captured simultaneously from slightly different viewpoints. Promising capabilities of light field imaging include the ability to control depth of field, to focus or refocus on a part of the image, and to reconstruct a 3D model of the scene. To evaluate the accuracy of the SSketch algorithm, we run our experiments on light field data consisting of 2500 samples, each of which is constructed of 25 8 × 8 patches. The light field data results in a data matrix with 4 million non-zero elements. We choose this moderate input matrix size to accommodate the SVD algorithm for comparison purposes and to enable exact error measurement, especially for the correlation matrix (a.k.a. Gram matrix) approximation. The Gram matrix of a data collection consists of the pairwise inner products of the data vectors. The core of several important data analysis algorithms is iterative computation on the data Gram matrix. Examples of Gram matrix usage include, but are not limited to, kernel-based learning and classification methods, as well as several regression and regularized least squares routines.
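As a concrete reminder of the Gram-matrix definition used above, entry (i, j) is the inner product of data vectors i and j. A minimal computation, in plain Python for clarity (a real implementation would use optimized linear algebra):

```python
def gram_matrix(X):
    """Gram matrix of a data collection X (one data vector per row):
    entry (i, j) is the inner product of X[i] and X[j].
    The result is symmetric by construction."""
    return [[sum(a * b for a, b in zip(x, y)) for y in X] for x in X]
```

For an n x d data matrix this is the n x n matrix X X^T, which is what kernel methods and regularized least squares routines iterate over.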
FP-Growth is a data-mining algorithm used for discovering association rules between items in large datasets. It comprises two main steps: FP-Tree construction and frequent-itemset generation. In the first step, the algorithm builds a compact data structure, called the FP-Tree, using two passes over the dataset. In the first pass, the algorithm scans the dataset and counts the number of occurrences of each item. In the second pass, the FP-Tree is constructed by inserting instances from the dataset. Items in each instance are sorted in decreasing order of their frequency in the dataset, while infrequent items are discarded so that the tree can be processed quickly. In the second step, the FP-Growth algorithm extracts frequent itemsets from the FP-Tree. It starts from the bottom of the tree by finding all instances matching a given condition; each prefix-path subtree is then processed recursively to extract the frequent itemsets. This construction allows itemsets that have several common features to arise naturally within the FP-Tree, instead of generating candidate itemsets and testing them against the entire dataset. Indeed, with this technique, items that have the most features in common share the same path in the tree. The root of the tree is the only empty node, separating items that have no features in common. The FP-Tree is usually smaller than the uncompressed dataset, since items that share similar features are grouped together. Hence, this compressed representation significantly reduces the amount of data to be analyzed, while maintaining the characteristics of the items.
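The two-pass FP-Tree construction described above can be sketched as follows. This is a minimal illustration (header links and the recursive mining step are omitted), not an optimized implementation:

```python
from collections import Counter

class Node:
    """One FP-Tree node: an item, its count, and its children by item."""
    def __init__(self, item, parent):
        self.item = item
        self.parent = parent
        self.count = 0
        self.children = {}

def build_fp_tree(transactions, min_support):
    # Pass 1: scan the dataset and count item occurrences.
    freq = Counter(item for t in transactions for item in t)
    frequent = {item for item, c in freq.items() if c >= min_support}
    root = Node(None, None)  # the only empty node
    # Pass 2: insert each transaction with its items sorted by
    # decreasing frequency; infrequent items are discarded.
    for t in transactions:
        items = sorted((i for i in t if i in frequent),
                       key=lambda i: (-freq[i], i))
        node = root
        for item in items:
            node = node.children.setdefault(item, Node(item, node))
            node.count += 1
    return root, freq
```

Because every transaction is inserted in the same frequency order, transactions sharing their most frequent items share a path, which is exactly what compresses the tree relative to the raw dataset.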