Today’s situation with respect to workflow management software is comparable to the situation of database management software in the early seventies. At the beginning of the seventies, most of the pioneers in the field of DBMSs were using their own ad-hoc concepts. This disorder and lack of consensus resulted in an incompatible set of DBMSs. However, emerging standards such as the Relational Data Model [Cod70] and the Entity-Relationship Model [Che76] led to a common formal basis for many DBMSs. As a result, the use of these DBMSs surged. There are many similarities between today’s workflow management systems and the DBMSs of the early seventies. Despite the efforts of the Workflow Management Coalition, a real conceptual standard is missing. As a result, many organizations are reluctant to use existing workflow management software. In our opinion, Petri nets constitute a good basis for standardization. We have just given three solid reasons for using a Petri-net-based workflow management system. Inspired by practical experiences, we have come to realize that many features of the Petri net formalism are useful in the context of workflow management.
Unlike business WfMSs, scientific ones lack a recognized standard; as a consequence, several workflow languages exist. Apart from the syntax, these languages differ in the formalism used to express the workflow model. Most graphical workflow languages are based on DAGs, where the control flow can be described in terms of sequence, parallelism, and choice. More powerful than DAGs, formalisms such as Petri Nets and π-Calculus allow the definition of iteration (also known as loop or cycle). As a consequence of this variety of languages and formalisms, WfMSs are incompatible. Furthermore, a WfMS usually addresses a small set of computational resources, and without interoperability scientific workflows cannot fully take advantage of the distributed, heterogeneous nature of the Grid. The Workflow Management Coalition (WfMC) encourages WfMS standardization in its reference model, which defines a set of APIs (called WAPI) and interfaces numbered from 1 to 5 in order to achieve interoperability. In particular, interface 4 describes different levels of workflow coordination/cooperation. Unfortunately, the WfMC has so far failed in its standardization goal, and no WfMS formally follows its reference model.
The discussion of workflow standardization has up to now focused on workflows across different enterprises, which in general also means the interaction of different workflow engines. The efforts of different interest groups, mainly the Workflow Management Coalition, have led to some theoretical models and a reference framework. It is far from a general solution, however. Meanwhile, the situation has become more complicated, as workflow systems now have to interact not only with their counterparts in other organizations (and possibly with systems made by competing suppliers), but also with other collaborative solutions such as knowledge management, distance learning, and web application platforms.
In this paper we provide an investigation of the problems involved in Grid workflow management and provide a solution to them, namely the Grid Workflow Infrastructure. This includes some background research on workflow management in general and a detailed investigation of Grid workflow requirements. Furthermore, we describe how the proposed infrastructure matches these requirements. The result is the definition of an open infrastructure for Grid workflow management that is coherent with the standards of the Globus Alliance, the W3C, and the Workflow Management Coalition. Finally, we discuss an implementation of this infrastructure. The two main building blocks are the specification of the GWEL notation and the implementation of a workflow engine for the GT3 environment. With the help of a case study we demonstrate the feasibility of the claims of the Grid Workflow Infrastructure.
OnSite Companion, by Companion Systems, is industry-leading workflow management software. For over 10 years, we have been providing our clients with a simple yet powerful software solution that allows them to customise their workflows to suit their business and its people.
CMS is implemented in PHP and runs in a standard 3-tier architecture with multiple application servers connected to a high-performance database server. In Fall 2004, CMS is being used by more than 1900 students in 40 courses in computer science, engineering, and economics. Some of these courses are large, with more than 400 students. Although CMS was originally designed with large courses in mind, many small courses have also chosen to use CMS because of its comprehensive workflow management. A demonstration version of CMS is available for public use at http://www3.csuglab.cornell.edu/cmsdemo. Secure authentication is disabled in this demonstration installation.
There is another tendency which may complicate the problem enormously in the years to come. This conference is focused on automation in the translating business. However, there are neighbouring trades which work with the same documents as the translators, making use of software environments and tools some of which are shared with the translators and some of which are specific to their work. These are the technical writers and editors, the documentation engineers, the information managers, and a number of other new and not yet too precisely defined professions. A double tendency can be observed in this respect, which will have to be reflected in the automation solutions of the years to come. On the one hand, there is an integrating tendency: the distinctions between the professions of translators, technical writers, documentation engineers, information managers, etc., are becoming increasingly blurred, and the professional profiles are overlapping more and more. On the other hand, within this common realm of multilingual business and technical communication, jobs and task profiles diversify. Technical writers engage in writing, updating, and maintaining documentation in several languages in parallel, which takes them quite far afield from the classical profile of a monolingual text producer. Professional translators become more and more involved in handling the technical medium of the source and target documents, most obviously in the software localization business. And once one has embarked on managing resources and workflows in the translation business with tailor-made software environments, one will easily encounter the need to link up to the workflows preceding and following the translation work proper, ending up in full-fledged multilingual information management.
The Workflow Reengineering Methodology (WRM) is a proposed methodology that uses workflow management automation to enable Business Process Reengineering (BPR). Unlike published BPR methodologies that use historical and estimated process data gathered from workflow participants, WRM uses the more accurate, real-time process measurements gathered by the workflow tool to improve the efficiency, effectiveness, and flexibility of the workflow. The methodology consists of five phases and 32 component steps, together with associated data collection forms to facilitate its implementation. Using the proposed methodology, a case study was conducted to improve the processing of on-line equipment manuals and electronic discrepancy reports for a Naval organization. Preliminary results indicate a significant reduction in cycle time and costs, as well as in the personnel required to manage the process.
The contractor should design an appropriate updating system for the Lease Management System as well as the Workflow Management System. A possible scenario would be for the Clerk in charge of Leases to furnish, on a weekly basis, data on transfers, assignments, mortgages, or any other changes pertaining to leases that are brought to his/her attention. A card system may provide such information to the System Manager, who will in turn make the necessary entries in the system, after requisite approval. A similar card system has to be designed to feed the Workflow Management System. In this case, a number of officers may have to furnish information on important actions taken in regard to a particular application.
First, a disclaimer: I will be mentioning some companies and sources of information as examples. This is not intended to be a comprehensive list of all options, as some very good companies will not be mentioned specifically, even though they may be very good options for your particular organization. There are hundreds of software applications that offer some form of automated workflow solution. You only need to do some basic web searching to find examples. These can be broadly categorized as:
Workflow management is defined as the management of processes through the execution of software whose order of execution is controlled by a computerized representation of the process. The primary reason for the popularity of workflow technology is its support for management trends such as reinventing and revitalizing corporations through rightsizing and business-process reengineering. Current implementations of workflow systems, such as Enterprise Resource Planning (ERP) systems, automate core corporate activities and let companies share common data and practices across the enterprise. Workflow systems are designed to assist groups of people in carrying out work procedures, and contain organizational knowledge of where work flows. Workflow systems are defined as “systems that help organizations to specify, execute, monitor, and coordinate the flow of work items in a distributed environment”. A WFMS provides the software tools to define, manage, and execute workflows. A WFMS has two main functions: a build-time function and a run-time function. Build-time functions enable businesses to model their business procedures and activities, using a scripting language. Run-time functions help administer and run workflow processes in an operational environment.
In the WEP generation layer, the mapper reduces the abstract workflow by checking for available intermediate data at the available computing nodes. The intermediate data can come from a previous execution of the same workflow or from the execution of other workflows that contain several common activities. In addition, Pegasus inserts the data transfer activities, e.g. data stage-in, into the DAG for workflow execution. The mapper component can realize workflow partitioning through three methods [22, 23, 42]. As discussed in Section 2.2.3, Chen and Deelman propose a workflow partitioning method under storage constraints at each site. This workflow partitioning method is used in a multisite environment with dynamic computing provisioning as explained in . Another method is balanced task clustering: the workflow is partitioned into several workflow fragments that have almost the same workload. This method can realize load balancing for homogeneous computing resources. The last method is to cluster tasks with the same label; to use this method, the tasks must be labeled by users. In the WEP execution layer, the job scheduler may perform site selection based on standard algorithms (random, round-robin, and min-min), data location, and the relative significance of computation and data in the workflow execution. For example, the job scheduler moves computation to the data site where a large volume of data is located, and it sends data to the compute site if the computation is significant. At this point, Pegasus schedules the execution of tasks within a workflow engine such as DAGMan. In Pegasus, DAGMan sends the concrete executable tasks to Condor-G, a client tool that can manage the execution of a bag of related tasks on grid-accessible computation nodes in the selected sites. Condor-G has a queue of tasks and schedules a task from this queue to a computing node in the selected site once that computing node is idle [54, 87].
Pegasus handles task failures by retrying the corresponding part of the workflow or by transferring the data again with a safer data transfer method. Through these mechanisms, Pegasus hides the complex scheduling, optimization, and data transmission of workflows from SWfMS users.
Approaches addressing an automated selection and determination of appropriate response activities (R1 and R3) are manifold. A first type relies on so-called process repositories, i.e. alternative process models specified in advance and selected in the case of an emergency. The selection is made at run-time and based on available context data (see, e.g., Fahland and Woith, 2009; Lin and Jun, 2008). A second type of modeling approach is based on the automation of process modeling: Heinrich, Klier and Zimmermann (2011) propose automated modeling using predefined process fragments, an ontology, and several new algorithms. However, these approaches have not yet been integrated into WfMS and, therefore, do not provide the required run-time functionality. Furthermore, the consideration and analysis of interdependencies (R2) is at best mentioned parenthetically. Approaches analyzing interdependencies between activities, resources, and time (R2) are usually model-based and do not deal with an automated consideration in WfMS either. Nevertheless, they offer various methods addressing at least subproblems associated with DRWfMS, e.g. a formal representation and modeling of temporal and resource restrictions as a prerequisite for automated processing (see, e.g., Hofmann et al., 2013). However, to the best of our knowledge, there are no approaches addressing spatial interdependencies in processes (R2). The interdependency between place and activities, resources, or time is not explicitly formalized. Modern DRM strongly relies on spatial information, e.g. provided by geo-information systems (GIS), in order to gain access to geo data from the place of the disaster. For instance, deNIS II plus and WebEOC are well-known examples of DRM systems offering comprehensive functions to improve disaster response planning in general.
Although spatial information is of particular importance for efficient and effective DRP, current approaches do not provide functions or methodical support for the design, execution, and management of DRP that take these interdependencies and the resulting restrictions adequately into consideration.
Finally, the decision hierarchy describes whether an activity can be passed on from a workflow performer to another delegate performer. This act of substitution is common during the absence of a resource (when a deputy takes over some or all of the functions of the assignee). From a security perspective, delegation is a potentially harmful function. If the assignee is free to choose the ultimate recipient of an activity, this might endanger workflow constraints such as the separation of duty. For example, if an accounting process requires two separate members of the accounting department to authorize a purchase, the workflow enactment service would assign the second authorization activity to a member of the department who has not performed the first activity. However, if this member is allowed to delegate the activity, and chooses the performer of the first activity (since he or she is a qualified member of the accounting department), the process constraint “separation of duty” is violated. For this reason, some workflow systems provide flags within their activity specification which allow excluding activities from being delegated to performers who are not the original assignees. Wainer et al. have discussed the security problems of delegation and revocation in workflow systems in a recent technical report. In a related paper, Ahn et al. have proposed the use of existing role-based access control mechanisms to secure a web-based workflow management system.
The OKBQA controller is a dedicated workflow manager that constructs an OKBQA pipeline by linking OKBQA modules, as shown in Figure 1. The controller makes the pipeline work by transferring the I/O of each module sequentially. The controller realizes and provides the key functions described in Section 1, which are detailed in the following sections.
Dynamic epistemic logics which model abilities of agents to make various announcements and influence each other’s knowledge have been studied extensively in recent years. Two notable examples of such logics are Group Announcement Logic and Coalition Announcement Logic. They allow us to reason about what groups of agents can achieve through joint announcements in non-competitive and competitive environments. In this paper, we consider a combination of these logics – Coalition and Group Announcement Logic – and provide its complete axiomatisation. Moreover, we partially answer the question of how group and coalition announcement operators interact, and settle some other open problems.
To introduce the logics we will be working with in this paper, we start with an example loosely based on the one from . Let us imagine that Ann, Bob, and Cath are travelling by train from Nottingham to Liverpool through Manchester. Cath was sound asleep all the way, and she has just woken up. She does not know whether the train passed Manchester, but Ann and Bob know that it has not. Now, if the train driver announces that the train is approaching Manchester, then Cath, as well as Ann and Bob, knows that they have not passed the city yet. To reason about changes in agents’ knowledge after public announcements, we can use Public Announcement Logic (PAL) . Returning to the example, let us assume that the train driver does not announce anything, so that Cath is not aware of her whereabouts. Ann and Bob may tell her whether they passed Manchester. In other words, Ann and Bob have an announcement that can influence Cath’s knowledge. An extension of PAL, Group Announcement Logic (GAL) , deals with the existence of announcements by groups of agents that can achieve certain results. Now, let us assume that Ann does not want to disclose to Cath their whereabouts and Bob does, i.e. Ann and Bob have different goals. Then, it is clear that no matter what Ann says, the coalition of Bob and Cath can achieve the goal of Cath knowing that the train has not passed Manchester, that is, Bob can communicate this information to Cath. On the other hand, if Ann and Bob work together, then they have an announcement (for example, a tautology ‘It either rains in Liverpool or it doesn’t’), such that whatever Cath says, she remains unaware of her whereabouts. For this type of strategic behaviour, another extension of PAL – Coalition Announcement Logic (CAL) – has been introduced in .
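The informal contrast drawn above (Ann and Bob acting jointly versus against Cath) is reflected in the truth clauses standardly given for the GAL and CAL modalities. The following is a sketch from memory of those clauses, where $\mathcal{L}_{el}$ denotes purely epistemic formulas and $A$ the set of all agents:

```latex
% GAL: group G has some joint truthful announcement achieving \varphi
\[
M, s \models \langle G \rangle \varphi
\;\text{ iff }\;
\exists \{\psi_i\}_{i \in G} \subseteq \mathcal{L}_{el} :\;
M, s \models \Big\langle \bigwedge_{i \in G} K_i \psi_i \Big\rangle \varphi
\]
% CAL: G has an announcement achieving \varphi whatever the
% anti-coalition A \setminus G announces simultaneously
\[
M, s \models \langle [G] \rangle \varphi
\;\text{ iff }\;
\exists \{\psi_i\}_{i \in G} \;\forall \{\chi_j\}_{j \in A \setminus G} :\;
M, s \models \Big\langle \bigwedge_{i \in G} K_i \psi_i
          \wedge \bigwedge_{j \in A \setminus G} K_j \chi_j \Big\rangle \varphi
\]
```

In the train example, the tautology ‘It either rains in Liverpool or it doesn’t’ plays the role of a $\psi_i$ in the CAL clause for the coalition of Ann and Bob: it succeeds against every $\chi$ Cath might announce.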
However, with the rapid increase in the complexity, volume, and dimensionality of heterogeneous data, big data analytic workflows often go beyond the scope of an individual for a successful data analysis process and hence require the collaboration of multiple scientists [199, 201, 165, 168]. As none of the existing SWfMSs supports real-time collaboration [201, 130, 165, 164], researchers often manually send (e.g., via e-mail) or upload the workflows to social shared spaces such as myExperiment for collaboration [201, 199]. For example, around 3910 such scientific workflows have been shared among 10665 members (as last noted in August 2018) for collaboration in myExperiment. Realizing the compelling need for such scientific artifact collaboration, researchers have proposed several methods for collaborative SWfMSs in recent years [130, 164, 199, 201, 165, 167]. However, although the existing techniques show promising results (e.g., for consistency management) in several computer-generated simulated studies [199, 56, 201] or theoretical use-cases [165, 164, 167], none of the studies considered human factors, such as adapted work patterns, data analysis problem solving, and challenges, for scientific experiments from a collaborative SWfMS perspective. Unlike collaborative text or graphics editing systems, scientific workflows are often more structured, where one module can be highly dependent on another in its execution, forming a hierarchical relation among them [201, 66, 130, 111]. Even minor changes in one part of a workflow can significantly impact other parts of the collaborative workflow in execution and data manipulation [56, 55], which often makes the problem notably different from that of unstructured-document collaborative systems such as text or graphics editing [130, 201, 165].
Studying and understanding human engagement and work patterns in collaborative data analysis is hence important for accelerating the emerging data analysis process (e.g., through the application of machine learning, data mining, and so on) with the aid of CSCW [68, 201, 91, 58].