geographically distributed software development

Top PDFs on geographically distributed software development:

Communication Networks in Geographically Distributed Software Development

The development tasks were identified by modification requests (MRs), which represent defects in the software or functionality enhancement requests. Software developers communicated and coordinated their development tasks in several ways. Opportunities for interaction exist when working in the same formal team or in the same location. For instance, all the development teams had periodic meetings, as frequent as once or more times a week. Developers also used a range of communication tools to interact and coordinate their work, such as email, an online chat system (Internet Relay Chat, IRC), video conferencing, and a development task tracking system. One of the authors interviewed several developers, who identified the online chat system (IRC) as the primary communication means for development and debugging work. The second most commonly used tool was the MR tracking system, which not only tracked requests as they were opened, assigned, and resolved, but also provided a text chat capability for each request. In addition, developers indicated that they used email and video conferences, but primarily for design and architectural definition activities. Given those patterns of communication tool usage, we collected communication and coordination information from the online chat and the MR tracking system. In the rest of the document, we refer to activities carried out using the online chat system as IRC communication or coordination, and to activities that took place using the MR tracking system as MR communication or coordination.
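As a rough illustration of how such MR-based coordination data can be turned into a communication network, the sketch below links developers who commented on the same MR; the record fields and developer names are hypothetical, not taken from the study's data set.

```python
# A minimal sketch of building a communication network from MR-tracking
# records. The record fields (mr_id, commenters) are hypothetical.
from itertools import combinations
import networkx as nx

# Each record lists the developers who commented on one MR.
mr_records = [
    {"mr_id": 101, "commenters": ["alice", "bob", "carol"]},
    {"mr_id": 102, "commenters": ["bob", "dave"]},
    {"mr_id": 103, "commenters": ["alice", "dave"]},
]

graph = nx.Graph()
for record in mr_records:
    # Developers who discussed the same MR are linked; edge weights count
    # how many MRs they discussed together.
    for dev_a, dev_b in combinations(sorted(set(record["commenters"])), 2):
        weight = graph.get_edge_data(dev_a, dev_b, {"weight": 0})["weight"]
        graph.add_edge(dev_a, dev_b, weight=weight + 1)

print(graph.edges(data=True))
```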

Team Knowledge and Coordination in Geographically Distributed Software Development

at a single organization with one large, relatively stable software development team. Some of our findings may not readily generalize to other kinds of organizations or to software teams that have a high turnover during the execution of a project, although we speculate that problems stemming from lack of team knowledge will be exacerbated when turnover is high. In addition, although we selected two sites that had only one hour of time zone difference and fluency in English, cultural differences exist between developers in Germany and the United Kingdom, so it is possible that some of the observed effects were due to cultural differences. However, our on-site experience suggests that the German developers were very skilled in their use of the English language and that most developers at both sites were knowledgeable of each other's cultures and traveled frequently to each other's sites. Nevertheless, our study provided much needed insights into coordination issues relevant to the global software development context, and our propositions can be validated in studies of other organizations. Also, while the high reliability of coding with an independent coder is reassuring, the interpretation of results is always subject to construction by the researcher. This problem was mitigated by the fact that four researchers discussed these interpretations and reached the same conclusions. Finally, our study is limited to a few types of team knowledge, but others such as "collective mind," "mutual knowledge," and "environmental awareness" may also have an effect on coordination. Despite these possible limitations, we believe that this research makes significant contributions to both the research literature and the practitioner community, as we discuss below.

DEPENDENCIES IN GEOGRAPHICALLY DISTRIBUTED SOFTWARE DEVELOPMENT: OVERCOMING THE LIMITS OF MODULARITY. Marcelo Cataldo, CMU-ISRI

The traditional view of software dependency, syntactic dependencies, has its origins in compiler optimizations and focuses on control and data-flow relationships (Horwitz et al., 1990). This approach extracts relational information between specific units of analysis such as statements, functions or methods, as well as modules, typically from the source code of a system or from intermediate representations of software code such as bytecodes or abstract syntax trees. These relationships can represent either a data-related dependency (e.g., a particular data structure modified by one function and used in another function) or a functional dependency (e.g., method A calls method B). The pioneering work by Basili and colleagues (Hutchins & Basili, 1985; Selby & Basili, 1991) represents the first attempt to use this type of data in the context of failure proneness of a system. Building on the concepts of coupling and cohesion proposed by Stevens, Myers and Constantine (1974), Hutchins and Basili (1985) presented metrics to assess the structure of a system in terms of data and functional relationships, which were called bindings. The authors used clustering methods to evaluate the modularization of a particular system. Selby and Basili (1991) used the data binding measure to relate system structure to errors and failures in a software system. Using a comparison of means
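As an illustration of the kind of syntactic-dependency extraction described here, the sketch below walks an abstract syntax tree and records which function calls which; it is only a toy example, not the tooling used in the cited studies.

```python
# A minimal sketch of extracting functional dependencies (which function
# calls which) from an abstract syntax tree.
import ast

source = """
def parse(data):
    return validate(data)

def validate(data):
    return data is not None
"""

tree = ast.parse(source)
calls = []
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # Record every simple call made inside this function body.
        for inner in ast.walk(node):
            if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                calls.append((node.name, inner.func.id))

print(calls)  # [('parse', 'validate')]
```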

Guest Editors' Introduction: James D. Herbsleb and Deependra Moitra, Lucent Technologies

In our experience, training programmers to think and behave like software engineers is an uphill task. Many educational programs now include team-oriented work, and with globalization so pervasive, they also need to train their students with geographically distributed development in mind. Jesús Favela and Feniosky Peña-Mora, in "Geographically Distributed Collaborative Software Development," describe a project-oriented software engineering course and show how students in two different countries collaborated using an Internet-based groupware environment. While their objective in designing the project was educational, their experiences are significant to the business community.

FPGA Based Binary Heap Implementation: With an Application to Web Based Anomaly Prioritization

develop a deeper understanding of the other’s perspective and to work toward developing a system that exploited technology, while maintaining a clear focus on customer needs. The frequency of communication facilitated by the use of the prototype also helped to mitigate the tensions involved in achieving effective communication. In particular, agile planning helped to mitigate control tensions by providing the flexibility and structure essential to Project MAGENTA. It helped the development team maintain some control over the critical aspects of development, but without unduly slowing down the development process. Moreover, constant communication helped to mitigate team cohesion tensions by minimizing the impact of the development team’s distribution and cognitive distance, thereby facilitating the maintenance of a unified team. Specifically, constant

ENSEMBLE SELECTION AND OPTIMIZATION BASED ON SOFT SET THEORY FOR CUSTOMER CHURN CLASSIFICATION

understand architectural challenges that arise in the adoption of agile approaches and industrial practices, in order to be able to develop large and architecturally challenging systems. We always come back to the question: are we building the right product? This is the moment where validation takes place, and it requires the participation of all stakeholders in requirement specification and development. Validation is the process of running real tests against the source code. Through validation, we ensure that our product is designed to cover the needs of the client, monitoring those needs through a checklist by means of inspection meetings, comments, and documents, to obtain a product that complies with the initial objectives set by the stakeholders.

Architecture of a Software Configuration Management System for Globally Distributed Software Development Teams

Several techniques (e.g., branching, merging, etc.) are used to manage the work products of a software project in a configuration management repository. Configuration items consist of the deliverables and non-deliverables of a project. The deliverable work products are important configuration items; hence, they require extra consideration during their development. A delay in completion or any conflict in such work products puts the project in a riskier state. Scott and Nisse (2001) [14] argued that all items on which multiple practitioners are expected to work should be maintained under configuration control so that changes are managed in a controlled manner. A software project developed by collocated teams usually relies on a single configuration management server. Multiple, collocated members of the software development team interact with the configuration management server to accomplish their day-to-day tasks. Access rights are implemented on the repositories of the configuration management system by granting specific rights to its users. These rights are managed by a configuration engineer or configuration administrator. A simplified example of a configuration management system is shown in Figure 2.
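A minimal sketch of the access-rights idea described above, assuming a simple table of (user, repository) permissions maintained by the configuration administrator; the role names and the permission model are illustrative, not taken from the paper.

```python
# A minimal sketch of repository access control: the configuration
# administrator grants rights, and every operation is checked against them.
from enum import Flag, auto

class Right(Flag):
    READ = auto()
    COMMIT = auto()
    TAG = auto()
    BRANCH = auto()

# Rights per (user, repository), maintained by the configuration administrator.
access_table = {
    ("alice", "payments-repo"): Right.READ | Right.COMMIT,
    ("bob", "payments-repo"): Right.READ,
}

def is_allowed(user: str, repo: str, needed: Right) -> bool:
    granted = access_table.get((user, repo), Right(0))
    return (granted & needed) == needed

print(is_allowed("alice", "payments-repo", Right.COMMIT))  # True
print(is_allowed("bob", "payments-repo", Right.COMMIT))    # False
```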

Evaluation of Aspect Oriented Software Development for Distributed Systems

Concerns are the reason for organising software into modules that are understandable and manageable. Many different kinds of concerns can be relevant to different developers in different roles, at different stages of the software lifecycle. Data and classes can be concerns, as can features such as persistence and aspects such as concurrency control. In MDSOC terminology, a kind of concern is referred to as a dimension of concern. Hence, separation of concerns involves decomposition of software according to one or more dimensions of concern. Separation along one dimension of concern may promote some goals whilst impeding others. It is difficult to discover the relevant set of concerns to separate, as they are context sensitive and vary over time. This means that any set of criteria for decomposition and integration will be appropriate for some set of requirements but not for all. Additionally, multiple dimensions of concern may be relevant simultaneously, and they may overlap and interact, as features and classes do. Thus, modularisation according to different dimensions of concern is needed for different purposes: sometimes by class, sometimes by feature, sometimes by aspect or another criterion. These considerations imply that developers must be able to identify, encapsulate, modularise and manipulate multiple dimensions of concern simultaneously, and to introduce new concerns and dimensions at any point during the software lifecycle, without suffering the effects of invasive modification and re-architecture. As mentioned earlier, modern languages suffer from a problem termed the "tyranny of the dominant decomposition" [23]: they permit separation and encapsulation of only one kind of concern at a time. It is therefore impossible to obtain the benefits of different decomposition dimensions throughout the software lifecycle. Changing the unit of modularisation during the evolution of a system can have disastrous effects, and some languages do not even let you change your dominant decomposition, e.g., you must decompose by class in object-oriented programming.
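To make the discussion concrete, the sketch below separates one cross-cutting concern, concurrency control, from the dominant class-based decomposition using a decorator; this is only an illustrative workaround in a mainstream language, not the MDSOC tooling the text refers to.

```python
# A minimal sketch of encapsulating the concurrency-control concern in one
# reusable place, keeping the domain class free of locking code.
import threading
from functools import wraps

def synchronized(lock):
    """Wrap a method so that it runs while holding the given lock."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with lock:
                return func(*args, **kwargs)
        return wrapper
    return decorator

class Account:
    """The dominant decomposition: a plain domain class."""
    _lock = threading.Lock()

    def __init__(self, balance=0):
        self.balance = balance

    @synchronized(_lock)
    def deposit(self, amount):
        # Business logic stays free of locking code.
        self.balance += amount

account = Account()
account.deposit(10)
print(account.balance)  # 10
```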

Challenges to Practice Agile Methods in Global Software Development – A Review of Literature

There are no distinct roles such as Quality Assurance (QA), project manager (PM) or developer in agile methods. Maintaining software quality is the whole team's responsibility. It is important that the entire team contributes to the definition and execution of acceptance tests; building this mindset in the team is considered a challenge [24] [19] [26]. In order to complete testing within a sprint, developer contribution to testing plays an important role. Testing should start in parallel with development. Developers should practice Test Driven Development (TDD) and conduct unit tests to verify the functionality. This saves time for testers to focus on acceptance tests and regression tests [24, 26, 27, 28]. Developers need to build and deploy their code early in order for testers to start, which otherwise delays the testing process [19, 24, 29]. Requirements should be clearly defined in order to define the acceptance criteria of tests. This can be challenging, especially in offshore software development, in the absence of continuous customer engagement [12] [19] [17]. At the same time, testing personnel should be technically competent enough to help the developers with integration testing and API testing [24] [30]. Implementing test automation is essential to complete testing on time [21] [19] [26].
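A minimal sketch of the TDD practice referred to above: the developer writes a unit test that the production code must satisfy. The function under test is a hypothetical example, not drawn from the reviewed papers.

```python
# A developer-written unit test alongside the code it specifies.
import unittest

def discounted_price(price: float, percent: float) -> float:
    """Production code kept deliberately small; it exists to make the tests pass."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountedPriceTest(unittest.TestCase):
    def test_applies_percentage_discount(self):
        self.assertEqual(discounted_price(200.0, 25), 150.0)

    def test_rejects_invalid_percentage(self):
        with self.assertRaises(ValueError):
            discounted_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```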

Requirement Engineering Process in Agile Software Development: Review

Non-Functional Requirements: Non-functional requirements give rise to serious issues in agile development. This is because customers want good results fast and tend to ignore performance, workload, efficiency, and safety issues. Requirements are elicited in agile approaches without attention to portability, maintainability, and many other non-functional issues. Most non-functional requirements (NFRs) are closely related and equally important to functional requirements, yet in many cases they are hard to express. For example, a customer in an agile development environment asks you to develop a gaming system, which must be a third-person shooter. Although the customer will provide requirements such as gaming modes, strategies, rules, processors, graphics, and so on, he will tend to overlook requirements that are ethical or organizational in nature, for example that the game should not copy the idea of another game, which gaming standard it should follow, and its usability, flexibility, serviceability, quality, and many other factors. Therefore, NFRs must never be ignored, in traditional development or in agile development. We do not mean that all NFRs must be considered, but at least those that affect the system's performance in any way should be included, especially when the final product is released.
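One way to keep an NFR from being ignored is to express it as an executable check alongside the functional acceptance tests. The sketch below encodes a hypothetical response-time budget; the handler and the 50 ms figure are invented for illustration.

```python
# A non-functional requirement made testable: a response-time budget
# checked like any other acceptance test.
import time

RESPONSE_TIME_BUDGET_SECONDS = 0.050  # the agreed NFR: respond within 50 ms

def handle_request(payload: dict) -> dict:
    # Stand-in for the real request handler.
    return {"echo": payload}

def test_response_time_budget():
    start = time.perf_counter()
    handle_request({"player": "p1", "action": "shoot"})
    elapsed = time.perf_counter() - start
    assert elapsed <= RESPONSE_TIME_BUDGET_SECONDS, (
        f"NFR violated: {elapsed:.3f}s exceeds {RESPONSE_TIME_BUDGET_SECONDS}s"
    )

if __name__ == "__main__":
    test_response_time_budget()
    print("response-time NFR satisfied")
```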

THERMAL MANAGEMENT CONSIDERATIONS FOR GEOGRAPHICALLY DISTRIBUTED COMPUTING INFRASTRUCTURES

data centers spent most of the year running at 10% to 20% of their maximum capacity. By contrast, in just the same manner that distributed power generating plants pool resources to deliver power more efficiently to end users through an interconnected grid (rather than having a power plant dedicated to supporting each neighborhood), service providers are now pooling distributed resources from different environments to support user needs on demand. Thus, an online retailer can now simply 'rent' additional capacity for peak shopping periods and build a much smaller data center for year-round use to save on depreciation costs (or even eliminate the need for its own data center entirely). This elasticity, the ability to bring additional computing capacity online or offline whenever demand is forecast to spike or diminish, can be quite beneficial. From a thermal management perspective, it implies a larger number of discrete systems in the control volume, which in turn suggests a higher number of degrees of freedom in the system (i.e., a higher value of i, but also a greater dependence on the individual terms indexed by i).

Information Systems Methodologies. Assessment 4. An Essay on Extreme Programming F21IF. Boris Mocialov. Assem Madikenova.

dissatisfaction of software development members who had been working with waterfall methods before. Waterfall methods did not allow them to produce the deliverables that the software development team considered significant. They faced a number of drawbacks stemming from a heavy accent on process and documentation. Those former users of waterfall methodologies therefore decided to establish the Agile Alliance, a new approach to problem solving in software development which gives team members more flexibility in their work and lets them avoid enormous documents that are sometimes even useless. In 2001, in Utah, USA, seventeen members of the alliance negotiated four core values that they felt all agile projects should have in common. They called the collection of these values the Agile Manifesto (Highsmith, 2001). These core values are the basis for agile methods:

Study of the Implementation of Cloud Computing: Applications and Challenges

In the current IT/ITES market, most organizations have started to implement a service-based integration methodology for their clients and for the internal structure of the organization (especially for its dedicated IS team). The cloud concept is built upon the three pillars of the current computing system: infrastructure, platform, and software (applications). The business benefit of the cloud is that it provides on-demand service, which helps to fulfil the demands of chain execution and can reduce the expense of implementing multiple processing units. The cloud allows resources to be updated without affecting the underlying infrastructure, which reduces the need for backup systems and encourages continuous execution of applications. The cloud provides potential reliability and scalability for applications deployed or running on it. Since the cloud is expected to assure the utmost security for any business application, it provides a private cluster for each application

Modelling Software Reliability Growth Phenomenon In Distributed Development Environment

Software testing-effort expenditure is measured by resources such as man power spent during testing, CPU hours, number of test cases, etc. Musa [2] indicated that the testing-effort index or the execution time is a better time domain for software reliability modelling than calendar time. About half of the resources consumed during the software development cycle are testing resources, and the testing resources spent appreciably affect software reliability. The consumption curve of these resources over the testing period can be thought of as a testing-effort curve. In other words, the function that describes how testing resources are distributed is usually referred to as the testing-effort function, and it has been incorporated into software reliability modelling. Various forms of testing-effort functions have been used in the literature, viz., exponential, Rayleigh, Weibull, logistic, etc., to represent effort consumption [16], [17], [18], [19], [20], and [21]. The rest of the paper is organized as follows: Section 2 reviews some of the well-documented and established non-homogeneous Poisson process (NHPP) based software reliability models for software quality/reliability measurement and assessment in a distributed development environment. Section 3 proposes a newly developed quantitative technique for software quality/reliability measurement and assessment. Section 4 defines the technique that has been employed for parameter estimation and software reliability data analyses, and provides the comparison criteria used for validation and evaluation. Section 5 presents the applications of the proposed integrated modelling approach to actual software reliability data through data analyses and model comparisons. Section 6 concludes and identifies possible avenues for future research.
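For reference, the testing-effort functions named above are commonly written in the software reliability literature in forms similar to the following, where W(t) is the cumulative testing effort consumed by time t, \alpha is the total expected effort, and \beta, m and A are scale and shape parameters (the exact parameterizations vary between papers):

\[
\begin{aligned}
\text{Exponential:} \quad & W(t) = \alpha\bigl(1 - e^{-\beta t}\bigr) \\
\text{Rayleigh:} \quad & W(t) = \alpha\bigl(1 - e^{-\beta t^{2}}\bigr) \\
\text{Weibull:} \quad & W(t) = \alpha\bigl(1 - e^{-\beta t^{m}}\bigr) \\
\text{Logistic:} \quad & W(t) = \frac{\alpha}{1 + A e^{-\beta t}}
\end{aligned}
\]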

Synchronized Encrypted Database Structure Management Method for Remote Storage Environment

Abstract - Cloud computing enables highly scalable services to be consumed over the Internet. Cloud services are provided on a per-request basis. The Database as a Service (DBaaS) model is used to manage databases in the cloud environment. The Secure DBaaS model provides data confidentiality for cloud databases. Secure DBaaS is designed to allow multiple independent clients to connect to the cloud without an intermediate server. Data, data structures and metadata are encrypted before being uploaded to the cloud. Multiple cryptography techniques are used to convert plain text into encrypted data. Table names and their column names are also encrypted in the cloud database security scheme. The system supports geographically distributed clients connecting directly to an encrypted cloud database, and clients can perform concurrent query processing on encrypted databases. RSA and the Advanced Encryption Standard (AES) are used in the system. The Secure DBaaS framework is enhanced to support a concurrent database structure modification scheme with minimum overhead. A digital-signature-based data integrity verification mechanism is integrated with the system. An encrypted query submission model is used to secure the query values. An access control mechanism is used to allow users to grant permissions to other users.
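A minimal sketch of the client-side metadata encryption described above, using Fernet (AES with an HMAC) from the `cryptography` package as a stand-in; it is not the actual Secure DBaaS implementation, and the table and column names are invented.

```python
# Metadata such as table and column names is encrypted on the client before
# it reaches the cloud database; the cloud only ever sees ciphertext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key stays with the clients
cipher = Fernet(key)

table_name = "customers"
column_names = ["name", "email", "balance"]

enc_table = cipher.encrypt(table_name.encode())
enc_columns = [cipher.encrypt(c.encode()) for c in column_names]

# Any authorised client holding the key can recover the plaintext names.
print(cipher.decrypt(enc_table).decode())          # customers
print([cipher.decrypt(c).decode() for c in enc_columns])
```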

Contents. Bibliographic information digitized by

1.1 Introduction 3
1.1.1 Distributed Software Development 3
1.1.2 Agile Software Development 4
1.2 Merging Agility with Distribution 4
1.2.1 Potential Issues 5
1.2.2 All or Nothing versu[r]

A modified genetic algorithm for time and cost optimization of an additive manufacturing single-machine scheduling

The first source is Li et al. (2017), a work about production planning of distributed AM machines to fulfil demands received from individual customers in low quantities. The aim of the paper is to understand how to group the given parts from different customers and how to allocate them to various machines in order to minimize the average production cost per volume of raw material. The authors recognized that the problem is not solvable in acceptable time by a normal CPU, so they preferred to create two different heuristics. The heuristics take into account the fact that the AM machines are different and located in several parts of the globe, and that two main data items are available for the products to be realized, i.e., the maximum part height and the production area of the machine. It is worth noting that this is a good way to optimize the problem, but it neglects the important fact that, sometimes with support structures, the machine chamber allows parts to be placed on top of each other. Moreover, the aim of the present paper is to investigate the scheduling of a single machine in a specific production system, not in a geographically distributed environment, so while Li et al. (2017) offer some guidance on the problem, such as its complexity and the mathematical model, they address a different problem from the one discussed in this paper.

White Paper. Cloud Performance Testing

Cloud environments can be shared with the development team for debugging purposes. With the Cloud, the testing team can say: "We have tested this software in a real environment in the Cloud. Here is the defect, and here is a link to the environment that was used for testing." Developers can then access that URL to see the defects and fix them.

Distributed empirical modelling and its application to software system development

8.3.1 Possible Applications of DEM
The framework for DEM proposed in this thesis promises to provide a distributed environment of human interaction for facilitating mutual knowledge expl[r]

Development Process Patterns for Distributed Onshore/Offshore Software Projects

and whether this occurs before or after assembly test (indicated by the red transition points) needs to be carefully considered. Transition prior to assembly test means a change in team and ownership, but may be required due to technical testing constraints (e.g., cross-platform environments) or contractual obligations (e.g., only delivering one part of the application). However, where possible, execution of assembly test is more effectively performed by the development team prior to any significant handover or transition to another organization (e.g., the formal onshore test team). There are circumstances where even the most basic distribution models cannot be executed and all tasks are required to execute at the customer sites. For instance, if the customer is uncomfortable with or unwilling to see part of the effort executed at a delivery/development centre, or has a particular environment, the delivery centre personnel can work at the customer site.