Process scheduling is the activity of the process manager that removes the running process from the CPU and selects another process on the basis of a particular strategy. It is essential in multiprogramming operating systems, which allow more than one process to be loaded into executable memory at a time and share CPU time by multiplexing among them. Scheduling queues are queues of processes or devices. When a process enters the system, it is placed into the job queue, which contains all processes in the system. The operating system also maintains other queues, such as device queues. A device queue holds the processes waiting for a particular I/O device; every device has its own device queue.
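The queueing behaviour described above can be sketched with FIFO queues, one for the CPU and one per device. This is a minimal illustrative model (process names and devices are hypothetical), not a real scheduler:

```python
from collections import deque

# Toy sketch: a FIFO ready queue plus per-device queues, mirroring the
# scheduling queues described above. Names are illustrative only.
ready_queue = deque()                                   # processes waiting for the CPU
device_queues = {"disk": deque(), "printer": deque()}   # one queue per I/O device

def admit(pid):
    """A new process enters the system and joins the ready queue."""
    ready_queue.append(pid)

def dispatch():
    """Remove the process at the head of the ready queue to run on the CPU."""
    return ready_queue.popleft() if ready_queue else None

def request_io(pid, device):
    """A running process blocks on an I/O device: it joins that device's queue."""
    device_queues[device].append(pid)

def io_complete(device):
    """The device finishes; the waiting process returns to the ready queue."""
    pid = device_queues[device].popleft()
    ready_queue.append(pid)
    return pid

admit("P1"); admit("P2")
running = dispatch()          # P1 gets the CPU
request_io(running, "disk")   # P1 blocks in the disk queue
io_complete("disk")           # disk interrupt: P1 rejoins the ready queue
```

After this sequence, P2 sits at the head of the ready queue and P1 has rejoined behind it, exactly the movement between queues that the text describes.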
For the environmental dimension, the PPIAF framework requires gas emissions to be measured (e.g., hydrogen chloride, chlorine, hydrogen sulfide, carbon monoxide, carbon dioxide), since the emission rate considerably affects the environment. Liquid and gaseous fuel handling and consumption covers the process parameters of handling liquid and gaseous fuel (e.g., fuel temperature, pH value of the fuel, consumed grease and oil). Waste disposal is the process of disposing of post-production waste. Other measures are not specified because they depend on the surroundings of the power plant. For PT. XYZ, the most suitable measure is the material and packaging used for the genset. Table 1 shows the adjustment from the original PPIAF framework to the product benchmarking framework of PT. XYZ. The safety dimension is used to assess safety performance in a power plant. The lost-time accident rate is the frequency of occurrences that result in fatality, permanent disability, or time lost from work. Near misses are similar to the lost-time injury frequency rate (LTIFR) but cover incidents in which the loss, disability, or lost time did not actually occur but nearly happened. The vehicle accident rate is the frequency of accidents involving vehicles used around the power plant. The safety measure is excluded because the PT. XYZ framework covers an individual genset rather than an entire power plant.
members of the PSD who are internally involved in the system development process. Three sets of questionnaires were distributed to the targeted group of 10 business personnel, 10 IT management personnel, and 10 IT technical personnel. Although COBIT is often used to measure maturity models, most users focus too much on "the magical numbers". Hence, to measure IT process maturity effectively, it is imperative to first determine the purpose of the measurement, namely what needs to be measured and what should be done with the measurements obtained. As it is not an end goal in itself, maturity measurement can be used to support other objectives, such as raising awareness, identifying weaknesses, and identifying priority improvements. The best measurement method is the one that best supports the set of identified goals or objectives. The ideal consensus should reflect where an organisation should be, and the results must be reviewed and ratified by management in order for improvements to be planned and implemented. To support this approach, the results can be compared to the results of ISACA's Maturity Survey and plotted into spider-web charts.
Oil palm plantations in the tropics are rarely subject to extreme weather changes, so the use of satellite imagery is deemed relevant. The use of local features gives fairly good results [11, 13, 3]. The local binary pattern (LBP) feature extraction method is an efficient descriptor for texture analysis. Oil palm is planted in a regular pattern that characterizes the area of an oil palm plantation. This regular planting pattern forms a fractal, so fractal-based methods are considered suitable for recognizing oil palm plantations. This study is designed to use a fractal-based method combined with local feature extraction and the local binary pattern. It therefore aims to identify the age of oil palm trees with a fractal-based method and a multilayer perceptron classifier. Thus, the questions of this study are: can the age classification of oil palm trees be done using fractal-based methods? Does the addition of texture-based methods such as the local binary pattern (LBP) and local feature extraction improve the accuracy? This research takes advantage of the panchromatic band to recognize the age of oil palm trees by extracting the texture information and classifying it using a multilayer perceptron.
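The LBP descriptor mentioned above thresholds each pixel's neighbours against the centre value and histograms the resulting codes. A minimal pure-Python sketch of the basic 8-neighbour variant (the paper's actual radius, sampling, and pipeline are not specified here, so this is illustrative only):

```python
# Minimal 8-neighbour LBP sketch (illustrative; not the paper's exact setup).
def lbp_code(img, r, c):
    """LBP code of pixel (r, c): threshold the 8 neighbours at the centre value."""
    center = img[r][c]
    # neighbours enumerated clockwise starting from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """Histogram of LBP codes over the interior pixels -- the texture feature
    vector that would feed a classifier such as a multilayer perceptron."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Toy 3x3 "image": only the centre pixel is an interior pixel.
tile = [[10, 20, 10],
        [20, 15, 20],
        [10, 20, 10]]
hist = lbp_histogram(tile)
```

The 256-bin histogram (or a rotation-invariant reduction of it) is what would be concatenated with the fractal features before classification.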
This research proposes Ev-LCS (Evolution of Long-term Composed Services), a novel approach to the change-management problem in long-term composed services (LCSs). An LCS is a dynamic collaboration of autonomous web services that together provide a value-added service. First, a formal model is presented to provide the semantic grounding needed to automate change management. The work then presents Ev-LCS, a framework that specifies end-to-end changes and verifies them in a top-down manner over the LCS schema and the underlying web service ontology. The change-management module confirms changes in two phases: the schema phase and the instance phase. A set of change operators is then defined to drive the top-down changes from the formal model. A further novelty is that changes can be applied automatically in the middle of the change-enactment process. Finally, the proposed work is evaluated in practice and produces the expected results.
The main theoretical conclusion in  is that the blending width required to ensure coercivity of the linearized B-QCF operator is surprisingly small. For both 1D and 2D uniform expansion, the computational results for the linearized operators match the analytic predictions perfectly. In addition, with a very small blending region, the stability of the 2D B-QCF operator for a general class of homogeneous deformations becomes almost the same as that of the atomistic model, in contrast to the force-based quasicontinuum (QCF) method, i.e., the B-QCF method without a blending region, whose stability region is only a proper subset of that of the fully atomistic model. However, the critical strain error of the B-QCF operator applied to shear deformation seems to depend only linearly on the system size and to be insensitive to the blending width.
The system represents the transactional workload as a graph. The graph partitioning algorithm divides the nodes of the graph into k balanced partitions such that the number of edges crossing between partitions is minimized. Graph partitioning has many applications in fields such as VLSI design, parallel scientific computing, and sparse matrix reordering. However, the problem is NP-complete . In recent decades, many multilevel schemes have been introduced to solve it. The recursive bisection method has been used to find solutions to the k-way partitioning problem. First, it coarsens the graph by combining random vertices and edges using a matching technique to form a smaller graph. This smaller graph is given an initial bisection, to which a refinement algorithm is applied to form two balanced partitions of the original graph with a minimum cut. These steps are repeated until the k partitions of the original graph are formed. The whole process is composed of three phases, as shown in Figure 3. Different methods for these phases are described in many papers [8, 9, 10]. We have selected some of these methods with minor changes. The phases are illustrated in detail as follows:
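The recursive bisection idea can be sketched in a few dozen lines. This is a deliberately plain, illustrative version with a greedy swap refinement, not the multilevel coarsen/partition/refine pipeline of [8, 9, 10]:

```python
# Illustrative sketch only: plain recursive bisection with greedy swap
# refinement (no coarsening phase). k is assumed to be a power of two.
def cut_size(adj, part_a):
    """Number of edges crossing between part_a and the rest of the graph."""
    a = set(part_a)
    return sum(1 for u in adj for v in adj[u] if u < v and (u in a) != (v in a))

def refine(adj, a, b):
    """Greedily swap one node from each side while a swap reduces the cut."""
    improved = True
    while improved:
        improved = False
        for u, v in [(u, v) for u in a for v in b]:
            a2, b2 = (a - {u}) | {v}, (b - {v}) | {u}
            if cut_size(adj, a2) < cut_size(adj, a):
                a, b = a2, b2
                improved = True
                break  # restart the scan with the updated partition
    return a, b

def bisect(adj, nodes):
    """Initial bisection (naive split in half) followed by refinement."""
    nodes = sorted(nodes)
    half = len(nodes) // 2
    return refine(adj, set(nodes[:half]), set(nodes[half:]))

def partition(adj, nodes, k):
    """Recursive bisection: split into halves until k parts remain."""
    if k == 1:
        return [sorted(nodes)]
    a, b = bisect(adj, nodes)
    return partition(adj, a, k // 2) + partition(adj, b, k // 2)

# Two triangles joined by one bridge edge: the minimum cut is that bridge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
parts = partition(adj, set(adj), 2)
```

On this toy graph the two balanced partitions are the two triangles, with an edge cut of 1 (the bridge), which is the min-cut property the text describes.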
Business Intelligence (BI) is an innovative technology that facilitates the analysis of big data. Deploying BI is a complex, expensive, and time-consuming undertaking, as these software applications are high-risk/high-return projects. Improper implementation may lead to failure and in turn leave organizations data-rich but information-poor. This study examines BI through the lens of innovation, in which the traits of the innovation itself influence its successful deployment in organizations. Rooted in the Diffusion of Innovation (DOI) theory, a model was developed and validated by decision makers and executives involved in various levels of BI deployment in the telecommunications industry. The primary data, collected through a quantitative method, were analyzed via structural equation modelling. The findings suggest that DOI offers valuable insights into the characteristics of BI that influence its successful adoption. In line with the DOI literature on the success of other types of information systems, the BI characteristics of relative advantage, complexity, compatibility, and observability are also found to be determinants of BI success. This study contributes significantly to the existing literature and will assist future BI researchers, especially on information system success. Practically, the model serves as a guideline for BI implementers to invest in the relevant skills and resources for fulfilling the requirements of successful BI deployment.
Although personalization frameworks in the domain of m-learning are limited, the available frameworks for mobile devices and mobile users are reviewed here to serve as references for the development of our framework. Zhang  proposed a generic framework for delivering personalized and adaptive content to mobile users. It relies on a user profile that is used for content personalization. The user profile may include (a) user information, including user ID, background information, personal interests represented by either keywords or information/service categories, and preferences (e.g., media preference, summarization method, and priorities among data items); (b) target device information, such as screen size, screen resolution, network, battery, and memory; (c) a service profile, including service restrictions and user availability; and (d) wireless network information, such as network ID, topology, and configuration.
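Items (a)-(d) above amount to a nested data structure. A hypothetical sketch of how such a profile could be modelled (all field names and defaults are illustrative assumptions, not taken from Zhang's framework):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical data-structure sketch of the user profile described above;
# field names and example values are illustrative only.
@dataclass
class DeviceInfo:                         # (b) target device information
    screen_size: str = "6.1in"
    resolution: str = "1170x2532"
    network: str = "wifi"
    battery_pct: int = 100
    memory_mb: int = 4096

@dataclass
class UserProfile:
    user_id: str                          # (a) user information
    interests: List[str] = field(default_factory=list)
    preferences: Dict[str, str] = field(default_factory=dict)
    device: DeviceInfo = field(default_factory=DeviceInfo)
    service_restrictions: List[str] = field(default_factory=list)  # (c) service profile
    network_id: str = ""                  # (d) wireless network information

profile = UserProfile(
    user_id="u42",
    interests=["algebra", "mobile learning"],
    preferences={"media": "text", "summarization": "short"},
)
```

A personalization engine would read such a profile to adapt content, e.g. choosing a text summary because `preferences["media"]` is `"text"` and the screen is small.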
According to NQC (2009), one of the most accepted methods of framework validation is the summative review method, which involves discussing the model details with experts in the same field of research and updating the model based on the reviewers' feedback and recommendations. We use summative expert panel reviews to ensure the validity of the proposed MaRSMF. The validation panel consists of 22 experts from Malaysian, Singaporean, and Jordanian universities. The profiles of the expert panel are attached in Appendix D1. The experts were selected based on their experience, research supervision skills, and ICT background. All members of the expert panel have at least five years' background in the ICT domain and are involved in supervision activities for Master's and PhD students.
and distribute them over the cloud for reliability, and clients do not know the number or even the existence of these backup copies), and finally have the information disclosed if the encryption keys are unexpectedly obtained, whether by accident or through malicious attack. FADE is a secure overlay cloud storage system that ensures file assured deletion and works seamlessly atop today's cloud storage services. FADE decouples the management of encrypted data and encryption keys, so that encrypted data remains with third-party (untrusted) cloud storage providers, while encryption keys are independently maintained by a key manager service whose trustworthiness can be enforced using a quorum scheme . FADE generalizes time-based file assured deletion (i.e., files are deleted upon time expiration) into a more fine-grained approach called policy-based file assured deletion, in which files are associated with more flexible file access policies (e.g., time expiration, read/write permissions of legitimate users) and are assuredly deleted when the associated file access policies are revoked and become obsolete. Data confidentiality is not the only security requirement. Flexible and fine-grained access control is also strongly desired in enterprise-oriented cloud computing. A health-care information system on a cloud may need to restrict access to protected medical records to eligible doctors, and a customer relationship management system running on a cloud may allow access to customer data only to high-level executives of the company. In these cases, access control over sensitive data is required either by regulation (e.g., HIPAA) or by organizational policy.
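The core of policy-based assured deletion is that each file key is wrapped with a policy key held only by the key manager; revoking the policy destroys the wrapping key, so every file under that policy becomes unrecoverable. A toy illustration of that idea (this is NOT FADE's real protocol: it uses XOR as a stand-in for proper key wrapping, and the policy name is made up):

```python
import os

# Toy illustration of policy-based assured deletion (not FADE's protocol):
# XOR stands in for real cryptographic key wrapping.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class KeyManager:
    """Holds one key per access policy; revocation destroys the key."""
    def __init__(self):
        self._policy_keys = {}

    def policy_key(self, policy):
        if policy not in self._policy_keys:
            self._policy_keys[policy] = os.urandom(32)
        return self._policy_keys[policy]

    def revoke(self, policy):
        # Assured deletion: once the policy key is gone, file keys wrapped
        # under this policy can never be unwrapped again.
        self._policy_keys.pop(policy, None)

km = KeyManager()
file_key = os.urandom(32)                                  # per-file data key
wrapped = xor_bytes(file_key, km.policy_key("time<2025"))  # stored in the cloud

recovered = xor_bytes(wrapped, km.policy_key("time<2025")) # works while policy active
km.revoke("time<2025")                                     # policy expires: key destroyed
```

The encrypted file and the wrapped key may stay on the untrusted cloud forever; without the revoked policy key they are cryptographically inaccessible, which is the "assured deletion" property.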
Pilloni et al. proposed a Distributed Lifetime Maximization Algorithm (DLMA) for distributed WSNs. The algorithm used an iterative gossip and asynchronous local task allocation scheme. DLMA did not address issues such as scalability and robustness. The algorithm achieved better network lifetime than methods such as the centralized solution , although its computation complexity was higher than that of TAN . The asynchronous local task allocation method facilitated finding faulty nodes, but the communication overhead caused by iterative gossiping was high. Chen et al. proposed an asynchronous distributed task allocation algorithm based on the Contract Net Protocol (CNP). Tasks generated by the task node are communicated to a manager node. On receipt of a task, the manager node invites bids from ordinary nodes. Each bid comprises the node's residual energy, the waiting time of the task in its queue, etc. The contract net uses this bidding process to complete task negotiation. The use of the C-MEANS clustering algorithm enhanced the contract
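The CNP bidding step described above can be sketched as follows. The scoring function and its weights are illustrative assumptions, not taken from Chen et al.; the point is only the announce/bid/award pattern:

```python
from dataclasses import dataclass

# Hypothetical sketch of the CNP bidding step: ordinary nodes submit bids
# (residual energy, queue waiting time) and the manager awards the task to
# the best-scoring bidder. Weights below are illustrative assumptions.
@dataclass
class Bid:
    node_id: str
    residual_energy: float   # joules remaining on the bidding node
    queue_wait: float        # seconds the task would wait in its queue

def score(bid, w_energy=1.0, w_wait=0.5):
    """Higher residual energy and lower waiting time give a better score."""
    return w_energy * bid.residual_energy - w_wait * bid.queue_wait

def award(bids):
    """Manager node: pick the winning bidder for the announced task."""
    return max(bids, key=score).node_id

bids = [Bid("n1", 8.0, 2.0), Bid("n2", 9.5, 6.0), Bid("n3", 7.0, 0.5)]
winner = award(bids)
```

Here n2 has the most energy but a long queue, so the trade-off favours n1; real CNP implementations negotiate asynchronously over radio messages rather than a shared list.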
The size and scale of information systems are increasing day by day. Computer networks have become ever more pervasive and have made life easy and fast, but along with this they give rise to numerous threats to information systems. A system containing information assets, when connected to the outside world, is exposed and is liable to attacks that may cause loss of crucial data and resources. Attacks on assets are caused by threats that have the potential to exploit the vulnerabilities associated with an asset. In general, assets serve the business needs of an organization, and any damage to these assets in any form creates risk and is of great concern to that organization. This requires a systematic approach to evaluate information security risks and develop an appropriate security strategy. Formally, risk can be defined as the potential harm incurred if a particular threat exploits a particular vulnerability to cause damage to an asset. Risk assessment is defined as the process of identifying security risks and determining their magnitude and impact on an organization [9, 10].
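A common way to operationalize the risk definition above is a qualitative likelihood-impact matrix. The scales and thresholds below are illustrative assumptions, not prescribed by [9, 10]:

```python
# Illustrative sketch: qualitative risk scoring as likelihood x impact.
# The scales and thresholds are assumptions, not taken from [9, 10].
LIKELIHOOD = {"rare": 1, "possible": 3, "likely": 5}
IMPACT = {"minor": 1, "moderate": 3, "severe": 5}

def risk_score(likelihood, impact):
    """Risk of a threat exploiting a vulnerability to damage an asset."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score):
    """Map a numeric score to a treatment category."""
    if score >= 15:
        return "high"      # treat immediately
    if score >= 5:
        return "medium"    # plan mitigation
    return "low"           # accept / monitor

# e.g. malware (threat) exploiting an unpatched server (vulnerability):
score = risk_score("likely", "severe")   # 5 * 5 = 25
level = risk_level(score)
```

Scoring every asset-threat pair this way produces the ranked list of risks that a security strategy would then address in priority order.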
Since the conversion efficiency of PV arrays is very low, maximum power point tracking (MPPT) control techniques are required. The purpose of this paper is to study and compare three MPPT methods in a photovoltaic simulation system: the perturb and observe method, the incremental conductance method, and the fuzzy logic control method. MATLAB/Simulink is used to implement the MPPT algorithm for a grid-connected PV module.
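Of the three methods, perturb and observe is the simplest to sketch: step the operating voltage, observe the power, and reverse direction when power falls. The quadratic power curve below is a toy stand-in for a real PV characteristic, and all constants are illustrative:

```python
# Perturb-and-observe sketch on a toy concave P-V curve (the quadratic
# model and all constants below are illustrative, not a real PV module).
V_MPP, P_MAX = 30.0, 200.0

def pv_power(v):
    """Toy power curve with its maximum power point at V_MPP."""
    return max(0.0, P_MAX - 0.5 * (v - V_MPP) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iterations=60):
    direction = +1
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step      # perturb the operating voltage
        p = pv_power(v)            # observe the resulting power
        if p < p_prev:             # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_op = perturb_and_observe()       # settles near V_MPP, oscillating by one step
```

The residual oscillation around the maximum power point (within one `step`) is the well-known drawback of P&O that incremental conductance and fuzzy logic controllers aim to reduce.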
Two methods are used for data collection and analysis. First, a literature review is conducted to investigate the factors that encourage academics to share knowledge by means of social media. The review is based on previous research on the implications of social media for knowledge sharing. Then, to show how Malaysian academics use social media for knowledge sharing, a content analysis is performed. We chose this method because content analysis lets us observe the social interaction between academics by looking directly at their communication on Facebook, and it allows both qualitative and quantitative analysis. The content analysis is performed on a selected Facebook page administered by academics. In the selection process, we first identified several Facebook pages created for knowledge-sharing purposes, and the page was then chosen at random using the fishbowl technique. The content analysis covers the academics' posts from March to June 2015. Data were collected over four months so that the consistency of posting could be analyzed. Based on sampling rules, a sample larger than 30 is acceptable; in total, 75 posts were analyzed in this study. First, we characterized the posts by date, content, and their respective numbers of likes, comments, and shares. Second, we derived themes for the topics discussed by the academics by analyzing the post content. The posts were then divided into four topics, and entries were recorded for each month. Finally, by analyzing the numbers of likes, comments, and shares, we studied which topics the academics favored discussing. Excel was used to analyze all the acquired data.
Automated workflow management tools and specialized tools for organizing benchmarks provide sophisticated options for setting up benchmarks and creating a reproducible record, including software environments, package versions, and parameter values. Examples include SummarizedBenchmark, DataPackageR, workflowr, and Dynamic Statistical Comparisons. Some tools (e.g., workflowr) also provide streamlined options for publishing results online. In machine learning, OpenML provides a platform to organize and share benchmarks. More general tools for managing computational workflows, including Snakemake, Make, Bioconda, and conda, can be customized to capture setup information. Containerization tools such as Docker and Singularity may be used to encapsulate a software environment for each method, preserving the package version as well as dependency packages and the operating system, and facilitating distribution of methods to end users (e.g., in our study). Best practices from software development are also useful, including unit testing and continuous integration.
Python plus Tweepy is a method for extracting real-time original data with a great ability to handle large-scale data, so it could be applied to the cyber security industry. Hypothetically, high-end users such as the Department of Defense (DoD) could apply this method to extract keywords posted on Twitter in real time. As soon as terrorists publish a tweet with dangerous messages, the DoD could monitor the account activity and take appropriate action. Moreover, this method satisfies the business need for high-end news-based trading in high-frequency trading (HFT) on Wall Street. HFT traders could use Python plus Tweepy to track company names, keywords, and trading news on Twitter at any given time. For example, suppose the Wall Street Journal posts a message that Google's profit is up this year. The trader would use the Python plus Tweepy method to track the Wall Street Journal Twitter account and the "Google's profit goes up" information. Tweepy can then extract the information in real time and pass it to another program, which identifies the keywords and semantics and further processes the information to output a command to buy Google stock. All of these steps are carried out automatically on computers within microseconds or even nanoseconds. Therefore, for the business
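The keyword-tracking step in that pipeline can be sketched without network access. The matcher below is pure Python so it stays runnable; in a real deployment a Tweepy streaming callback (which requires Twitter API credentials) would feed tweets into it. The watched account, keywords, and trade action are all illustrative assumptions:

```python
# Hedged sketch of the tracking step described above. The handle, keywords,
# and trade signal are illustrative; a real system would receive tweets via
# Tweepy's streaming API (credentials required, so it is not shown here).
WATCH = {"account": "WSJ", "keywords": {"google", "profit", "up"}}

def tweet_matches(author, text, watch=WATCH):
    """True when the tweet comes from the tracked account and contains
    every tracked keyword (case-insensitive, whole words)."""
    words = set(text.lower().split())
    return author == watch["account"] and watch["keywords"] <= words

def on_tweet(author, text):
    """Stand-in for a streaming callback: emit a (hypothetical) trade signal."""
    if tweet_matches(author, text):
        return {"action": "BUY", "symbol": "GOOG"}
    return None

signal = on_tweet("WSJ", "Google profit goes up this year")
```

The latency-critical part of a real HFT pipeline is everything after this match; the sketch only shows the keyword-extraction stage that the text describes.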
ICT projects at the diffusion stage focused on the development of ICT and telecommunication infrastructure, which includes telecommunication towers, fibre-optic cabling, Wi-Fi towers, earth-station satellites, public internet centres, broadband, computers, and operating systems. The adoption stage focused on using the devices and equipment made available by that infrastructure. At this stage there is a positive reaction to the use of ICT, up to the point where users feel the necessity of using it, much like the now ubiquitous use of social media over the cell phone infrastructure. Once this occurs, a "technology pull" has been reached: there is now a demand for ICT and ICT services, a shift from the "technology push" phenomenon that marked the diffusion stage. ICT-related spending will increase, thus contributing more significant value to the economy. The intensity of the ICT adoption stage, coupled with better and more efficient infrastructure and high-speed broadband, will lead to more efficient service delivery and increased production and job performance, which will contribute to higher income and value creation. This is the stage targeted by the government transformation lab and the Digital Malaysia programme, so that the use of ICT and ICT services can be fully optimized to give maximum impact to the socio-economic development of the country.
Children are usually associated with play; through playing, children are seen to reach their first cultural and psychological achievements . Children's play may take place indoors or outdoors , and playing can occur alone, in pairs, or in groups . Playing activities can take many forms, including sports, creative or imaginative activities such as arts and crafts, games, and social-relational activities such as socializing with friends . Nowadays, children tend to play outdoors far less than they used to , when they typically played with physical toys and other objects to support their play . Their play has become influenced by television and computers, and children often choose to play indoors, engaging with technology instead . Children often have favorite playing partners, (e.g.