NAMD is a powerful parallel Molecular Mechanics (MM)/Molecular Dynamics (MD) code particularly suited to the study of large biomolecules. It is compatible with several force fields, making it possible to simulate systems with quite different characteristics. NAMD runs efficiently on large multi-core platforms and clusters. The NAMD real-life use case was provided by the CNR-ISOF group located in Bologna; it consists of the simulation of a lipid bilayer in a water box of about 36,000 atoms, to be run for 70 ns of simulated time. Using 16 cores on a dual AMD 6238 machine, the simulation requires 85 days of wall-clock time. The application was ported to exploit two sites, one located in Naples and the other in Bologna. The Naples site is equipped with an InfiniBand interconnect, while the Bologna site has a Gigabit interconnect. NAMD was ported to the Grid by rebuilding the binaries on Scientific Linux 5 and linking the OpenMPI v4.3 libraries. We performed a series of preliminary scalability tests to choose the number of computational nodes for the full simulation. Results of these preliminary tests are shown in Figure 2.
of research cited above. Some relevant contributions, which exploit Mobile Agent technology, are [5, 6]. We aim to exploit the flexibility of Mobile Agent programming to manage legacy applications without changing the original code, rewriting the application in another language, or adopting a new programming paradigm. Some approaches that should be compared with the one presented in this paper are cited below. One of them presents an approach to support mobility of applications in a Grid environment. A mobile agent is described by a UML-like language called Blueprint, and the Blueprint description is transferred together with its status. The receiving platform translates the agent's Blueprint description into a Java or a Python application that implements the original functionality. Blueprint is a language for the high-level specification of an agent's functionality and its operational semantics: a specification describes the behavior of an agent in terms of its basic building blocks, namely components, control flow and data flow. Agent factories interpret the agent's Blueprint description and generate executable code composed of, for example, Java, Python, or C components. To support migration of common user applications, in another cited work the authors provide the possibility of inserting statements such as go() or hop() between blocks of computation, such as for/while loops. A pre-compiler, such as ANTLR for C/C++ or JavaCC for Java source code, substitutes these statements with code that implements mobility. A user program may be entirely coded in C/C++ and executed in native mode; a Java agent then starts the native code through a wrapper implemented with the JNI technology. In our approach, programmers who want to support migration do not need to deal with any of these models: they just have to handle the events that ask them to save or resume the application status.
Metadata and knowledge pervade the Grid, and some of them already exist on and within it. For instance, a Grid resource published as a Grid/Web service exposes its metadata in its WSDL file, which includes information about the service location, signature, input/output argument types and formats, interaction and invocation methods, etc. Given that the emphasis of the Grid is to share and reuse distributed resources in a VO for coordinated problem solving, it is reasonable to assume that domain-dependent, application-specific scientific knowledge in a Grid application is also available. For example, in an engineering design search and optimization Grid application, all design optimization algorithms and the knowledge regarding their usage should already be there, though they might exist in a diversity of formats. While metadata and knowledge that are currently unavailable could be required to help realize the Grid vision, the key issues for using metadata, semantics and knowledge on the Grid are how (1) to acquire, formally model, explicitly represent, store, maintain and update them; and (2) to use them to support seamless resource sharing and interoperability, so as to achieve a high degree of automation.
The usefulness of grid portal technologies for computational science has been established by the number of portals being developed in the US, Europe, Asia and the Pacific Rim. More importantly, many of these portals are based on frameworks relying on reusable, sharable portlets and add-on technology provided by our consortium. Together these technologies comprise a component architecture that allows the scientific portal developer to build a portal from well-tested and highly configurable pieces. This architecture rests on several major foundational elements: standardized portlet containers based on the JSR-168 standard; Grid standards for security, file transfer and remote execution, based on the Globus Toolkit and accessed through the CoG programming interfaces; and the emerging web-service architecture, which allows science applications hosted on remote resources to be easily and securely accessed through the portal.
This section gives an overview of the related work and the state of the art. As already mentioned, modeling the Smart Grid and simulating its components during runtime is not unknown territory: similar approaches have been the focus of several research projects during the last decade. For example, the first steps towards utilizing MBSE to model a power system from an SoS perspective, in order to manage its complexity and address the concerns of different stakeholders, were taken by Lopes et al. (2011). Although in an early stage of research, some of the proposed concepts are still in use today, such as using planes for structuring a Smart Grid or applying SysML for modeling the system. Additionally, Lampropoulos et al. (2010) introduce a methodology specially focused on modeling the behavior of prosumers in the Smart Grid. By simulating Electric Vehicles (EVs), first effects of their individual behaviors on the complete power system could be investigated. A few years later, it was recognized that standards defining a Smart Grid needed to be introduced in order to build a common foundation, as exemplified by the IEC Smart Grid Standards Map. Thus, a framework for Model-Driven Engineering (MDE) in the Smart Grid implementing the Common Information Model (CIM), IEC 61850 as well as IEC 61499 was developed by Andrén et al. (2013). However, a more complete approach, comprising an underlying architectural framework and introducing an adequate development process, has already been proposed by the authors of this paper (Neureiter et al. 2016). Moreover, this methodology has been in constant development, resulting in the proposal of DSSE, as can be observed in the continuously updated SGAM Toolbox.
Since this paper deals with further enhancing its functionality by developing an interface to Mosaik, the technologies used, such as the SGAM, and the particularities of this approach itself are described in more detail in the following, together with a short introduction to Co-Simulation in the Smart Grid.
ABSTRACT: The paper proposes a method for modelling and simulating Solar Photovoltaic (SPV) arrays, an interface between two dissimilar toolboxes in MATLAB, and different control schemes for the converter control. The main objective is to perform the analysis of a renewable-based standalone system for smart mini grid applications by developing a hybrid model. The method finds the I-V output characteristic of the single-diode photovoltaic model, including the effect of the series and parallel resistances. Simulation of the standalone system has been carried out using two different DC-DC converters, one with a push-pull topology and the other with a full-bridge topology. A PWM inverter model has been used for the complete model simulation. Finally, a hybrid model has been developed so as to perform the analysis of the renewable-based standalone system for smart mini grid applications.
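The single-diode equation with series and parallel resistances is implicit in the current, so it must be solved numerically at each voltage point. A minimal sketch of one way to do this (all parameter values below are illustrative defaults, not values from the paper) solves for the current by bisection:

```python
import math

def single_diode_current(v, i_ph=8.21, i_0=1e-7, n=1.3,
                         r_s=0.2, r_p=300.0, n_s=36, t=298.15):
    """Terminal current at voltage v for the single-diode model,
    I = Iph - I0*(exp((v + I*Rs)/(n*Ns*Vt)) - 1) - (v + I*Rs)/Rp,
    solved for I by bisection (the residual is monotone in I)."""
    vt = 1.380649e-23 * t / 1.602176634e-19  # thermal voltage kT/q [V]
    def residual(i):
        vd = v + i * r_s  # diode junction voltage
        return i_ph - i_0 * math.expm1(vd / (n * n_s * vt)) - vd / r_p - i
    lo, hi = 0.0, i_ph  # current is bracketed by 0 and the photocurrent
    if residual(lo) < 0:  # operating point beyond open-circuit voltage
        return 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# sweep an I-V curve from 0 V up to beyond open circuit
curve = [(v / 10, single_diode_current(v / 10)) for v in range(0, 230)]
```

Near short circuit the current stays close to the photocurrent (slightly reduced by the shunt branch), and it collapses as the exponential diode term takes over near open circuit, reproducing the characteristic knee of the I-V curve.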
Abstract. Distance visualization of large datasets often takes the direction of remote viewing and zooming over stored static images. However, the continuous increase in the size of datasets and of visualization operations causes insufficient performance on traditional desktop computers. Additionally, visualization techniques such as isosurface extraction depend on the available resources of the running machine and the size of the datasets. Moreover, the continuous demand for greater computing power and the continuous increase in the size of datasets result in an urgent need for a grid computing infrastructure. However, some issues arise in current grids, such as the resources available at client machines not being sufficient to process large datasets. On top of that, different output devices and different network bandwidths between the visualization pipeline components often produce output suitable for one machine and not for another. In this paper we investigate how grid services could be used to support remote visualization of large datasets and to break the constraint of physical co-location of the resources by applying grid computing technologies. We present our grid-enabled architecture for visualizing large medical datasets (circa 5 million polygons) for remote interactive visualization on clients with modest resources.
Abstract: The demand for minute-scale forecasts of wind power is continuously increasing with the growing penetration of renewable energy into the power grid, as grid operators need to ensure grid stability in the presence of variable power generation. For this reason, IEA Wind Tasks 32 and 36 together organized a workshop on “Very Short-Term Forecasting of Wind Power” in 2018 to discuss different approaches for the implementation of minute-scale forecasts in the power industry. IEA Wind is an international platform for the research community and industry; Task 32 tries to identify and mitigate barriers to the use of lidars in wind energy applications, while Task 36 focuses on improving the value of wind energy forecasts to the wind energy industry. The workshop identified three applications that need minute-scale forecasts: (1) wind turbine and wind farm control, (2) power grid balancing, and (3) energy trading and ancillary services. The forecasting horizons for these applications range from around 1 s for turbine control to 60 min for energy market and grid control applications. The methods that can generate minute-scale forecasts either rely on upstream data from remote sensing devices such as scanning lidars or radars, or are based on point measurements from met masts, turbines or profiling remote sensing devices. Upstream data needs to be propagated with advection models, while point measurements can either be used in statistical time series models or assimilated into physical models. All methods have advantages but also shortcomings. The workshop's main conclusions were that further investigation into minute-scale forecasting methods for different use cases is needed, and that a cross-disciplinary exchange among experts in the different methods should be established. Additionally, more effort should be directed towards enhancing the quality and reliability of the input measurement data.
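The advection-based family of methods can be illustrated with a deliberately simple sketch: under the frozen-turbulence assumption, wind measured at an upstream point arrives at the target site after a lag equal to distance divided by the advection speed. The sampling interval, distance and speed below are made-up values, and operational advection models are considerably more sophisticated:

```python
def advection_forecast(series, dt_s, distance_m, adv_speed_mps):
    """Minute-scale forecast by pure advection: shift the upstream
    measurement series by lag = distance / advection speed, rounded
    here to whole samples. The first lag samples have no forecast."""
    lag_samples = round(distance_m / (adv_speed_mps * dt_s))
    return [None] * lag_samples + series[:len(series) - lag_samples]

# lidar 1.2 km upstream, 1-min samples, 10 m/s mean wind -> 2-sample lag
f = advection_forecast([8.0, 8.5, 9.0, 9.5],
                       dt_s=60, distance_m=1200, adv_speed_mps=10.0)
```

In practice the advection speed itself varies in time and direction, which is one reason the workshop called for further investigation of these methods.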
On the other hand, if the applications (A and B) are coscheduled, then initially the applications execute with thread allocation AB-AB (CPU sharing). At the time of checking for adaptation, if the adaptation controller realizes that the applications do not coschedule well (using slowdown feedback from the monitor), then the number of threads per application is reduced to 1 and the thread allocation is changed to A-B (node sharing). The adaptation framework also provides the option of self-coscheduling, i.e. AA-AA. In general, however, threads from the same application tend to require similar types of resources, which can lead to resource conflicts and thus degrade performance rather than improve it.
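The adaptation decision described above can be sketched as follows (the function name, threshold and return encoding are hypothetical, not the framework's actual interface):

```python
def adapt_allocation(slowdowns, threshold=1.25):
    """If the monitor reports that any coscheduled application is slowed
    beyond a tolerated threshold, fall back from CPU sharing (AB-AB) to
    node sharing (A-B) with one thread per application; otherwise keep
    CPU sharing. Thread counts here are illustrative."""
    if max(slowdowns.values()) > threshold:
        return {app: ("A-B node sharing", 1) for app in slowdowns}
    return {app: ("AB-AB cpu sharing", 2) for app in slowdowns}

# monitor reports application A slowed down 1.4x under coscheduling
plan = adapt_allocation({"A": 1.4, "B": 1.1})
```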
The second challenge we faced in implementing the above application is the use of the ROOT data analysis framework to process data. This is also a common requirement in scientific analysis, as many data analysis functions are written using specific analysis software such as ROOT, R, Matlab, etc. To use such software at DryadLINQ vertices, it needs to be installed on each and every compute node of the cluster. Some of these applications only require copying a collection of libraries to the compute nodes, while some require complete installations. Clusrun is a possible solution for both types of installation; however, providing another simple tool to perform the first type of installation would benefit users. (Note: we could ship a few shared libraries or other necessary resources using the DryadLINQ.Resources.Add(resource_name) method. However, this does not allow the user to add a folder of libraries or a collection of folders, and the ROOT installation requires copying a few folders to every compute node.)
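A minimal sketch of the kind of folder-deployment helper suggested above (a hypothetical tool, not part of DryadLINQ; compute-node shares are modelled here as local paths, whereas in practice they would be network shares on the cluster nodes):

```python
import pathlib
import shutil

def deploy_folder(local_dir, node_roots):
    """Replicate a whole folder of analysis libraries (e.g. part of a
    ROOT install) into each compute node's application directory,
    something that shipping individual files via
    DryadLINQ.Resources.Add cannot do. Returns the destination paths."""
    deployed = []
    for root in node_roots:
        dest = pathlib.Path(root) / pathlib.Path(local_dir).name
        shutil.copytree(local_dir, dest, dirs_exist_ok=True)
        deployed.append(dest)
    return deployed
```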
To maintain operation in island mode and reconnection through the UPQC, the communication process between the UPQC micro grid and the micro grid system is described in the cited work. In the present work, the control technique of that UPQC micro grid is enhanced by implementing an islanding and novel reconnection technique with a reduced number of switches that ensures smooth operation of the micro grid without interruption; hence it is termed UPQC micro grid-IR. The objectives of this paper are: to study the various types of power quality problems and their effects in a distribution-generation-based grid-connected microgrid system; to investigate whether the mitigation techniques are suitable for voltage sag/swell and interruptions in the event of a fault in such a system; to observe the effect of the techniques on the characteristics of voltage sag/swell and interruption; and to construct the micro grid control for all modes of operation, with emphasis on voltage sag/swell, harmonic and reactive power compensation, and active power transfer to the load in the interconnected mode, and on how these problems can be mitigated with the Unified Power Quality Conditioner (UPQC), one of the so-called custom power devices.
To represent contexts, we have found it useful to borrow an established method from traditional business processes: “our-ref” and “your-ref”. This is a simple but effective means of allowing two communicating parties to keep track of a particular subject. The key point is that in each bilateral communication there are two references, one owned by each side. Each side owns, uses and manages one reference ID (our-ref), and quotes the other (your-ref) back to its owner. Because each side is responsible for its own reference, it can ensure that the reference is unique within its own domain. Thus we do not need a global scope or schema for representing context. Of course, this tried and tested method has worked for many years in paper-based business processes.
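The scheme can be sketched as follows (class and method names are hypothetical illustrations, not an interface from the paper): each party mints its own reference for a conversation, and records the peer's reference so it can be quoted back.

```python
import itertools

class Party:
    """Each party owns and manages its own reference IDs (our-ref) and
    stores the peer's reference (your-ref) to quote back to its owner."""
    def __init__(self, name):
        self.name = name
        self._counter = itertools.count(1)
        self.conversations = {}  # our_ref -> peer's your_ref (or None)

    def open(self):
        """Start a conversation: mint our-ref; peer's ref still unknown.
        Uniqueness only needs to hold within this party's own domain."""
        our_ref = f"{self.name}-{next(self._counter)}"
        self.conversations[our_ref] = None
        return our_ref

    def learn_peer_ref(self, our_ref, your_ref):
        """Record the reference the peer owns for the same conversation."""
        self.conversations[our_ref] = your_ref

# Alice and Bob each assign their own reference to the same conversation
alice, bob = Party("alice"), Party("bob")
a_ref = alice.open()   # unique within Alice's domain, e.g. a counter
b_ref = bob.open()
alice.learn_peer_ref(a_ref, b_ref)
bob.learn_peer_ref(b_ref, a_ref)
```

Because each side generates only its own IDs, no global registry or shared naming schema is needed, which is the point made above.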
Abstract. There have been a number of e-Science projects which address the issues of collaboration within and between scientific communities. Most effort to date has focused on building the Grid infrastructure to enable the sharing of huge volumes of computational and data resources. The ‘portal’ approach has been used by some to bring the power of grid computing to the desktop of individual researchers. However, collaborative activities within a scientific community are not confined to the sharing of data or computationally intensive resources; there are other forms of sharing which can be better supported by other architectures. In order to provide more holistic support to a scientific community, this paper proposes a hybrid architecture which integrates Grid and peer-to-peer technologies using a Service-Oriented Architecture. This platform is then used as the basis for a semantic architecture which captures characteristics of the data, functional and process requirements for a range of collaborative activities. A combustion chemistry research community is used as a case study.
How best to operate electrical networks is one of the most important concerns of electrical engineers, and network designers and operators constantly strive to design and run the network so as to provide better service and ensure customer satisfaction. Subscribers' low power factor is one of the concerns of distribution network operators, as it increases network losses, reduces the limited operating capacity of lines and transformers, and also reduces the quality of the delivered power. Local reactive power generation can likewise reduce power losses and free up grid capacity, and can increase power quality where the required level of reactive power compensation is not otherwise met.
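The standard sizing rule for such local compensation follows directly from the power triangle: the reactive power a capacitor bank must supply to raise a load's power factor is Qc = P(tan(arccos(pf1)) - tan(arccos(pf2))). A minimal sketch (the 100 kW load and the power factor values are illustrative):

```python
import math

def compensation_kvar(p_kw, pf_initial, pf_target):
    """Shunt compensation needed to raise the power factor of a load
    of real power p_kw from pf_initial to pf_target:
    Qc = P * (tan(acos(pf_initial)) - tan(acos(pf_target)))."""
    return p_kw * (math.tan(math.acos(pf_initial)) -
                   math.tan(math.acos(pf_target)))

# kvar needed to bring a 100 kW load from pf 0.7 up to 0.95
q = compensation_kvar(100.0, 0.7, 0.95)
```

The freed apparent-power capacity on the feeder is the difference P/pf_initial - P/pf_target, which is what "freeing up grid capacity" refers to above.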
During the last decade there has been a growing interest in Grid computing, influencing the way large-scale scientific and engineering research is conducted: from a tightly coupled computing environment towards a geographically distributed resource-sharing approach. Scientists continue to use High Performance Computing (HPC) systems such as clusters of shared-memory multiprocessors, parallel vector machines and clusters of workstations connected by a high-speed communication stack (low-latency and high-bandwidth networks, OS-bypass protocols and message-passing software libraries). With scientific projects becoming increasingly collaborative, and people and resources drawn from multiple organizations, Grid computing has become the next step in the evolution of HPC. Collaborative efforts at the national and international level have resulted in Grid infrastructures such as NSF's TeraGrid in the USA, the e-Science Grid in the UK, the European Union DataGrid, the Large Hadron Collider (LHC) Computing Grid at CERN in Geneva and the Asia Pacific Grids. Grid infrastructures are being utilized by many application areas, ranging from high-energy physics, life sciences and geosciences to astronomy, earthquake engineering experiments and aerospace engineering. The characteristics of these applications are also wide-ranging: some are data-intensive, some are compute-intensive and some require remote steering of experiments. This makes the development of Grid middleware services supporting data movement, data access, processing and workflow support for a given application domain challenging.
Abstract. Cloud Computing has become an important technology for supplying resources that can be used to execute large-scale scientific applications. It also provides lower-cost access to computing resources with personalized configurations. The user does not have to invest much in the acquisition and management of hardware such as storage, computing, databases, and networking, which are usually provided by the cloud provider; users typically pay only for the cloud services they use. Scientific applications, usually represented as Directed Acyclic Graphs (DAGs), are an important class of applications that lead to challenging resource management problems in distributed computing. With the advent of Cloud Computing, and in particular Infrastructure as a Service (IaaS), on-demand virtual machines execute multiple tasks; these tasks form large DAGs requiring elaborate scheduling and resource provisioning policies. With the goals of optimization and fault tolerance, DAG applications are generally partitioned into multiple parallel DAGs using a clustering algorithm and assigned to Virtual Machines (VMs). In this work, we investigate through simulation the impact of clustering, for both provisioning and scheduling policies, on the total makespan and financial cost of executing a user's application. We implemented four scheduling policies well known in distributed computing systems, and adapted a clustering algorithm to our resource management policy, which leases and destroys VMs dynamically. We show that a dynamic resource management policy can achieve better performance, in terms of makespan and budget cost, than static management policies when workflow applications are partitioned into grouped tasks before mapping them onto VMs. The execution time and budget cost can be considerably reduced by managing VMs efficiently in order to maximise resource utilization and reduce the number of under-loaded VMs.
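The interplay between task dependencies, VM count and makespan can be illustrated with a toy list scheduler (a sketch of the general idea only, not one of the four policies implemented in the paper; task names and durations are invented):

```python
from collections import deque

def schedule_makespan(durations, deps, n_vms):
    """List-schedule a DAG of tasks on n_vms identical VMs: each ready
    task goes to the VM that frees up earliest, but cannot start before
    all of its parent tasks have finished. Returns the makespan."""
    children = {t: [] for t in durations}
    indeg = {t: 0 for t in durations}
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)
            indeg[t] += 1
    finish = {}               # task -> finish time
    vm_free = [0.0] * n_vms   # earliest free time per VM
    ready = deque(sorted(t for t in durations if indeg[t] == 0))
    while ready:
        t = ready.popleft()
        vm = min(range(n_vms), key=vm_free.__getitem__)
        start = max(vm_free[vm],
                    max((finish[p] for p in deps.get(t, [])), default=0.0))
        finish[t] = start + durations[t]
        vm_free[vm] = finish[t]
        for c in children[t]:
            indeg[c] -= 1
            if indeg[c] == 0:   # all parents done: task becomes ready
                ready.append(c)
    return max(finish.values())

# diamond-shaped workflow: c waits for a and b, d waits for c
span = schedule_makespan({"a": 2, "b": 3, "c": 1, "d": 2},
                         {"c": ["a", "b"], "d": ["c"]}, n_vms=2)
```

Running the same workflow on one VM versus two shows the makespan shrinking until the DAG's critical path, not the VM count, becomes the bottleneck, which is exactly the trade-off that clustering and dynamic VM leasing policies try to optimize.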
Today and in the future, increasing fuel prices, deregulation and environmental constraints create more opportunities for the use of renewable energy sources (RES) such as wind and solar in power systems. The integration of RES into a micro grid can, however, pose challenges for micro grid operation. The photovoltaic system considered here consists of PV panels connected to the grid through a DC-DC converter and a DC-AC inverter, with maximum power point tracking obtained using an MPPT algorithm. A control strategy for a 100 kW photovoltaic grid-connected inverter, comprising MPPT, current control and voltage control, is proposed. The precision and effectiveness of this control strategy are verified by computer simulations using MATLAB/SIMULINK. The system is shown to be effective, with reduced harmonics while injecting power from the PV array into the grid, and with the DC bus voltage held constant by a PI controller.
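One common choice for the MPPT block mentioned above is the classic perturb-and-observe algorithm; a minimal sketch (the step size, starting voltage and toy power curve are illustrative assumptions, not values from the paper):

```python
def perturb_and_observe(measure, v_start=300.0, dv=2.0, steps=200):
    """Perturb-and-observe MPPT: step the operating voltage, keep the
    step direction while measured power rises, reverse it when power
    falls. The operating point ends up oscillating around the maximum
    power point."""
    v, direction = v_start, 1.0
    p_prev = measure(v)
    for _ in range(steps):
        v += direction * dv
        p = measure(v)
        if p < p_prev:          # power dropped: we stepped past the peak
            direction = -direction
        p_prev = p
    return v

# toy concave power curve with its maximum power point at 360 V
v_mpp = perturb_and_observe(lambda v: -(v - 360.0) ** 2 + 1e5)
```

The residual oscillation around the peak (here within one or two voltage steps of 360 V) is the well-known drawback of P&O that smaller step sizes or adaptive variants mitigate.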
loads do not contribute to the energy powering smart meters or spent during bi-directional communication with utility servers; such energy is therefore not billable. It is thus compelling for utilities and hardware manufacturers to ensure that the energy budget for communication activities is very low. While efficient electronic design is one way of achieving that goal, ultra-low-power Digital Signal Processing (DSP) and energy-efficient communication protocols are also of immense value. The foundation has been laid with LoWPAN devices (IEEE 802.15.4); considering the scale envisaged, in terms of the number of hardware units and processes that would make up a functional smart grid, every unit of energy saved is a tangible achievement that will eventually have far-reaching effects. Therefore, a sustained campaign for low-power smart grid communication is an absolute necessity. Different authors [10-12] have written in favour of 802.11 (WLAN), 802.15.4 (ZigBee), 802.16 (WiMAX), etc., for various segments of the smart grid, but we argue here that, bearing in mind the scores or hundreds of indoor meters required to connect to each other in a mesh topology, the odds are against low-power wireless connectivity. In the power distribution network, electricity meters are installed per household, and many of them may be separated by concrete building walls and fences. The implication is that the wireless signal will experience building penetration losses (due to attenuation and absorption) as it propagates through the walls, compounded by the very low antenna gain and low transmit power typically found in low-power wireless devices. Secondary losses such as signal reflection, refraction and diffraction can also occur; combined, these losses can result in network black holes, which are unfavourable for applications. This may be even worse in smart grid applications with strict QoS requirements.
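The argument can be made concrete with a back-of-the-envelope link budget (all numeric values below are illustrative assumptions, not measurements): a log-distance path-loss model plus a fixed penetration loss per concrete wall.

```python
import math

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m,
                       n_walls, wall_loss_db=10.0, freq_mhz=2400.0,
                       path_loss_exp=2.0):
    """Simple indoor link budget:
    Pr = Pt + Gt + Gr - PL(d) - n_walls * L_wall, where PL(d) is the
    free-space loss at 1 m plus a log-distance term."""
    fspl_1m = 20 * math.log10(freq_mhz) - 27.55   # FSPL at 1 m, d in m
    path_loss = fspl_1m + 10 * path_loss_exp * math.log10(distance_m)
    return (tx_dbm + tx_gain_dbi + rx_gain_dbi
            - path_loss - n_walls * wall_loss_db)

# a 0 dBm ZigBee-class radio, meters 20 m apart, two concrete walls
p_rx = received_power_dbm(0.0, 0.0, 0.0, 20.0, n_walls=2)
```

With a typical 802.15.4 receiver sensitivity on the order of -95 dBm, the result leaves only a few dB of margin; a third wall, an indoor path-loss exponent above 2, or fading readily pushes the link below sensitivity, producing the black holes described above.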
Spectrum licensing is another concern; we therefore propose 6LoPLC, based on Narrowband Power Line Communication (NBPLC), as an alternative.