Analytical Survey for Assuring and Maintaining Quality of Mobile Applications



Kashmala Ilyas, Software Engineering, Fatima Jinnah Women University, Rawalpindi, Pakistan (Kashmalah.ilyas@gmail.com)

Alefya Fida Ali, Software Engineering, Fatima Jinnah Women University, Rawalpindi, Pakistan (alefya-fida@hotmail.com)

Mehreen Sirshar, Software Engineering, Fatima Jinnah Women University, Rawalpindi, Pakistan (msirshar@gmail.com)

Abstract— Smartphone applications are the most widely developed and used applications today. They range from entertainment applications to real-time applications, so it is essential to ensure that quality applications are being developed and deployed. This paper summarizes a survey of different techniques, methodologies and frameworks devised for quality application development in the past few years. These techniques are mapped onto quality evaluation criteria to evaluate their effectiveness in controlling quality during smartphone application development, and the attributes that these methodologies miss, yet must be kept in mind during development, are identified. A conclusion is drawn from the strengths and weaknesses of the techniques, and general improvement suggestions are provided based on their limitations.

Index Terms—Smartphones, quality assurance, performance, development

I. INTRODUCTION

Software products play an increasingly important role in our lives, and every software product faces a set of quality issues that affect us in different ways. Over the past few years, software quality has become more essential in software engineering; it is therefore important for each software project to define its specific meaning of quality during the planning phase [11]. Cellular technology has evolved drastically in the past few years since the emergence of the iPhone. The ability of smartphones to offer features that in the past could be accessed only through personal computers is the major reason behind their popularity [1].

The quality of a smartphone application depends on the quality of experience a user has while using it and on how well it is accepted by the target user. Measuring software usability is a major factor in judging the quality of any software product. The usability metric is a well-known metric for estimating the attractiveness, understandability, learnability and operability of software products, and it is a part of software metrics. A software metric is a measure of some property of a piece of software or its specifications [12].

With the growth of different companies introducing their own platforms for smartphone applications, it is becoming easier to develop applications but harder to maintain the standards that applications should follow. As HTML5 emerges as a viable option for building cross-platform applications, experts are debating their quality and cost effectiveness according to ISO 9126 [1].

With more applications of similar functionality emerging (e.g., various web browsers), performance and user experience have gradually become dominant factors affecting user loyalty in application selection. Many smartphone applications suffer from bugs that cause significant performance degradation, thereby losing their competitive edge [2]. Research has shown that work on bug avoidance, testing, debugging and analysis for smartphone applications should be enhanced to overcome this issue.

A survey aimed at better understanding mobile application development practices has shown that, although developers adhere to the given best practices for development, they tend not to follow a specific standard process. It was found that they did very little organized tracking of their development efforts and gathered few metrics [3], whereas maintaining usability and quantifying the testability of smartphone applications is in high demand in our rapidly changing environment.

With the emergence of wireless networks, the unique requirements and constraints of mobile systems bring new challenges to software development [5]. Developing and deploying applications in such environments requires a high degree of creativity, and testing the quality of these applications differs from conventional software testing. Mobile application testing has raised new challenges associated with mobile technologies and device characteristics [9]. To address all of these issues, this paper surveys various techniques proposed for assuring the quality of mobile applications. The surveyed techniques are then analyzed on different parameters, and a conclusion is drawn at the end.

The rest of the paper is organized as follows: Section II discusses the existing techniques and methodologies for assuring quality and performance of smartphone applications in detail. A thorough analysis of trends found in the reported techniques is presented in Section III. Section IV concludes the paper and suggests aspects that can be researched further.

II. TECHNIQUES FOR DEVELOPING QUALITY SMART-PHONE APPLICATIONS

Smartphones have a huge impact on our society. Due to the dramatic growth of the smartphone market, mobile application development has also seen a huge surge [1]. As the mobile application industry grows to provide critical applications, it will be essential to apply software engineering processes to assure the development of secure, high-quality mobile applications [3]. The application markets are well aware of the high risks associated with installing and running mobile apps, especially malicious ones [10], which raises challenges for maintaining the security of mobile applications. This survey provides insight into how quality smartphone applications can be developed. A critical analysis has been carried out under the evaluation criteria defined in Table 1 to compare the different techniques discussed. In our analysis we have identified the limitations and strengths of each technique. The following presents the review of the surveyed techniques.

A. Smart Phone Application Development using HTML5 and Related Technologies: A Tradeoff between Cost and Quality (Hasan et al., 2012)

According to ISO 9126, there are six major quality attributes of HTML5-based smartphone applications: functionality, usability, reliability, efficiency, maintainability and portability. These attributes were tested and evaluated on an Employee Management application, which used questionnaire forms for customer feedback to evaluate navigation between apps, controls and touch responsiveness in order to judge usability and reliability. Portability was tested by deploying the application on multiple platforms. Efficiency was evaluated using performance tests measuring response and processing time. Different companies have introduced good debugging and code-inspection tools that help in maintaining such applications, which makes them cost effective. This approach is good because the customer is involved, but it may not be feasible in many situations, since taking customer feedback to ensure quality can be expensive and inefficient.

B. Characterizing and Detecting Performance Bugs for Smartphone Applications (Liu et al., 2014)

This paper focuses on performance bugs in real-world smartphone applications, caused by either poor implementation or a lack of dedicated quality assurance. Via an empirical study, it answers questions about bug types and impacts, bug manifestations, and debugging and bug-fixing efforts. By identifying common bug patterns, guidelines can be formed to avoid performance bugs in application development, and tools can be built to exploit optimization opportunities in testing and maintenance. A static code analyzer, PerfChecker, was developed on Soot to detect bug patterns in applications. GUI lagging, energy leaks and memory bloat were the common performance bugs noted. For performance testing and analysis, automated oracles are desired, along with device variety, as performance bugs can be platform dependent. For debugging, profile analysis using profiling and performance measurement tools is suggested, but it can be time-consuming and complex. This research can be error prone because it relied on manual inspection, and it was carried out only on Android yet generalized to all smartphones.
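To illustrate the idea of static bug-pattern detection described above, the following is a deliberately simplified, hypothetical sketch (PerfChecker itself analyzes Java bytecode via Soot): it flags potentially blocking calls inside Android UI-thread lifecycle methods, one crude proxy for the "GUI lagging" pattern. The method and call lists are illustrative assumptions, not PerfChecker's actual rules.

```python
import re

# Hypothetical pattern lists: UI-thread entry points and calls that may block.
UI_METHODS = ("onCreate", "onResume", "onClick")
BLOCKING_CALLS = ("HttpURLConnection", "openConnection", "query(", "readFully")

def find_gui_lagging(source: str):
    """Return (ui_method, blocking_call) pairs found in the source text."""
    findings = []
    # Crude method-body extraction; a real analyzer works on an AST or bytecode.
    for match in re.finditer(r"void\s+(\w+)\s*\([^)]*\)\s*\{([^}]*)\}", source):
        name, body = match.group(1), match.group(2)
        if name in UI_METHODS:
            for call in BLOCKING_CALLS:
                if call in body:
                    findings.append((name, call))
    return findings

snippet = """
void onCreate(Bundle b) { URL u = new URL(s); u.openConnection(); }
void helper() { log.write(data); }
"""
print(find_gui_lagging(snippet))  # [('onCreate', 'openConnection')]
```

A production analyzer would also track calls made indirectly through helper methods, which this lexical sketch cannot do.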

C. Software Engineering Issues for Mobile Application Development (Anthony I. Wasserman, 2010)

There are many issues in mobile application development, and to overcome them it is important to understand current development practices. System development processes considered best practice are moving from a process-intensive approach toward an agile approach such as Scrum. Developers should know the best practices, which are provided by the World Wide Web Consortium, Apple and Android for mobile web and other mobile application development. Mobile applications have unique qualities and should have their own software engineering process that caters to project management issues and provides a management framework. Developers should follow platform standards and keep in mind non-functional requirements such as performance, reliability and security. Testing of the applications should be done not only on emulators but also on real devices running different operating systems and on several networks.

D. Factors Influencing Quality of Experience of Commonly Used Mobile Applications (Ickin et al., 2012)

This paper discusses related work, the factors influencing Quality of Experience (QoE), and the role of QoS in mobile applications. The method employed includes quantitative and qualitative aspects. An online survey was used to recruit participants after collecting information about their mobile usage. Context Sensing Software was used to gather continuous context data on each user's mobile phone, and QoE data was captured through the Experience Sampling Method. Additionally, a weekly interview was conducted with each user following the Day Reconstruction Method. Using this data, the factors influencing a user's QoE were found to be application interface design, application performance, battery, phone features, application and data connectivity cost, and user routine and lifestyle. The factors influencing QoS are the wireless access technology in use (WiFi versus 3G/4G, which differ in battery drain) and the phone-charging behavior of the participants. The limitations of this study are that the participants were self-selected, the analysis was done over a short time, and extreme conditions could not be evaluated.

E. Mobile Application Testing: A Tutorial (Jerry Gao, 2014)

This paper is a tutorial on mobile application testing, covering testing requirements and current approaches for both native and web-based applications. It first summarizes existing work in mobile application testing and then compares four popular testing approaches against the required goals and activities of a mobile application (such as functionality and behavior, interoperability, usability and QoS). The first approach is emulation-based testing, which creates a virtual mobile environment; it is inexpensive and easy but limits the features that can be tested (e.g., device-specific functions). The second is device-based testing, which allows testing of underlying device functions among other functionality, but coping with rapid changes in mobile technology requires the purchase of many devices. The third is cloud testing, which builds a mobile device cloud for large-scale tests; it is rentable, making it cost effective and diverse. Lastly, crowd-based testing involves contracting out to testing teams; it does not require an expensive lab but may suffer from poor testing quality and validation schedules due to a lack of automation. Native and web applications have differences that should be kept in mind while testing: the test process should include component testing, function-component testing, QoS component testing, feature testing and service testing for native applications, and component, system integration, function, system and feature testing for web-based applications. The paper also summarizes popular mobile testing tools. Issues and challenges remain regarding test environments and test automation.

F. Performance Evaluation of Mobile Software Systems: Challenges for a Software Engineer (V. Rahimian and J. Habibi, 2008)

This paper evaluates existing performance evaluation approaches for mobile applications and the difficulties in applying them, and devises performance measures and evaluation techniques for mobile systems. The existing approaches operate at the design and architecture levels, for example extending UML for performance analysis of software architectures, performance analysis of application adaptation, evaluation of the effect of mobile agents, evaluation of application productivity and usability, and scenario-oriented performance evaluation based on performance metrics. In such evaluations, the end-user's perspective should be taken into account, and the evaluation should be arranged in a semi-real or completely real environment. A disadvantage of traditional performance evaluation metrics is that they weigh all applications equally rather than giving event-based applications more attention. Considering the special characteristics of mobility, the measures devised for evaluation are structural, responsiveness, productivity, resource utilization and dependability measures of mobile software, aimed at developing high-quality mobile software.

G. Usability Metric Framework for Mobile Phone Application (Azham Hussain and Maria Kutar, 2009)

This paper develops a conceptual model for the evaluation of mobile phone applications from an existing desktop computing model. The research was done in two phases. First, a systematic literature review found that efficiency, effectiveness and satisfaction were employed as usability quality goals, just as in the ISO 9241-11 standard. Second, a usability metric was developed using the Goal Question Metric (GQM) approach, which is used for quality improvement and project planning. Several measurable questions were created from the goals and answered, moving the research from a qualitative to a quantitative level. After this refinement, metrics were defined to assess mobile applications. The limitation of this metric is that it still has to be evaluated and validated for mobile applications.
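The goal-to-question-to-metric refinement described above can be sketched as follows. This is an illustrative example with hypothetical question wordings, metric names and sample data, not the authors' actual framework; it only shows how each usability goal is decomposed into a measurable quantity.

```python
# Illustrative Goal-Question-Metric structure for a usability goal:
# each question is answered by one computable metric over session data.
gqm = {
    "goal": "Evaluate usability of a mobile phone application",
    "questions": {
        "How effective is the app?": {
            "metric": "task_completion_rate",
            "value": lambda d: d["tasks_done"] / d["tasks_total"]},
        "How efficient is the app?": {
            "metric": "mean_task_time_s",
            "value": lambda d: sum(d["task_times"]) / len(d["task_times"])},
        "How satisfied are users?": {
            "metric": "mean_satisfaction_1_5",
            "value": lambda d: sum(d["ratings"]) / len(d["ratings"])},
    },
}

# Hypothetical data from one evaluation session.
session = {"tasks_done": 9, "tasks_total": 10,
           "task_times": [30.0, 42.0, 24.0], "ratings": [4, 5, 3, 4]}

results = {q["metric"]: round(q["value"](session), 2)
           for q in gqm["questions"].values()}
print(results)
# {'task_completion_rate': 0.9, 'mean_task_time_s': 32.0, 'mean_satisfaction_1_5': 4.0}
```

Answering the questions numerically is exactly the qualitative-to-quantitative move the paper describes.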


H. Where Has My Battery Gone? Finding Sensor Related Energy Black Holes in Smartphone Applications (Liu et al., 2013)

The authors discuss the penetration of various sensor-based smartphone applications in today's market and raise the issue of huge energy consumption. Some applications suffer seriously from energy inefficiency because they use Android sensors and their data ineffectively. To resolve this, the paper presents a tool, GreenDroid, that locates energy inefficiency problems in Android applications for energy consumption optimization. The study shows that when Android application developers misuse sensor listeners or underutilize sensory data, energy is wasted. The authors derive an application execution model (AEM) from the Android specifications; this model analyzes previous and current events and thus predicts future events, allowing the tool to feed mock sensory data to the application under test and check the registering and unregistering of sensors. The second step is to analyze the utilization of sensory data by executing the application a finite number of times, generating a distinct sequence of user interaction events for each execution, which helps to explore the different states of the application systematically. The tool is grounded in mathematical formulations for assessing the data. It was tested on six real Android applications and proved able to find energy leaks while costing little time and memory. Its limitations are that it cannot simulate complex inputs, and hence cannot create mock sensory data for them, and that it does not guarantee the diagnosis of unknown types of energy waste.
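The register/unregister check at the heart of the approach can be illustrated with a toy event-trace replay. This is a minimal sketch under assumed event names ("register", "unregister", "onPause"), not GreenDroid itself: it reports sensors still registered when the app leaves the foreground, i.e. candidate energy black holes.

```python
def find_leaked_sensors(events):
    """Replay (event, sensor) pairs; report sensors still active at each onPause."""
    active = set()
    leaks = []
    for event, arg in events:
        if event == "register":
            active.add(arg)
        elif event == "unregister":
            active.discard(arg)
        elif event == "onPause" and active:
            # Sensors registered while the app is backgrounded keep draining power.
            leaks.append(sorted(active))
    return leaks

# Hypothetical trace: the accelerometer is released, but GPS is not.
trace = [("register", "GPS"), ("register", "accelerometer"),
         ("unregister", "accelerometer"), ("onPause", None)]
print(find_leaked_sensors(trace))  # [['GPS']]
```

GreenDroid explores many such traces automatically via its application execution model; this sketch checks only a single given trace.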

I. Towards Scalable Evaluation of Mobile Applications through Crowdsourcing and Automation (T. Schweighofer, 2013)

With the increasing use of smartphones in our lives, the threat of security risks has become very high. This paper defines a way to analyze mobile applications via an automated cloud-based application, AppScanner, which helps users obtain applications that provide their desired features while meeting their privacy expectations. It has four subsystems: App Mapper gathers the sensitive information used by an application; CrowdScanner then takes that information and applies crowdsourcing techniques to collect users' privacy concerns; the Privacy Evaluator estimates what can be inferred from the information; and finally the Privacy Summarizer, after collecting people's perceptions, summarizes a mobile application's privacy behavior. The system was evaluated by consulting experts. The limitation, however, is that the information-gathering techniques used in AppScanner are complex and need to be improved. Another challenge with crowdsourcing people's perceptions of mobile application behavior is the irregularity in distinguishing normal from abnormal application behavior.

J. Mobile Device and Technology Characteristics’ Impact on Mobile Application Testing (Amini et al., 2012)

As smartphone users are nowadays increasingly aware, they expect high-quality mobile applications that do not crash or lose data. In this paper a testing procedure is devised for reliable, quality mobile applications. Mobile application testing differs from traditional software testing due to different engineering procedures and the characteristics of mobile devices, including connectivity, convenience, user interface, supported devices, touch screens, new programming languages, resource constraints, context awareness and data persistence. The devised procedure is that, after an application is released for testing, the quality assurance team tests it based on recorded test scenarios that exercise the different mobile characteristics. If a bug is found, it is reported to a web-based tracking system and fixed by the development team. If the characteristics of mobile devices are kept in mind, potential users can be offered quality products.

K. A Quality Evaluation of an Android Smartphone Application (Aida Niknejad, 2011)

This paper aims to assess the quality of a prototype of a time management application developed on Android. The research covers two issues: the development and the quality of the prototype. Literature reviews and interviews helped in developing the prototype; later, the quality problems were identified, the ISO/IEC 9126 quality model was selected to evaluate usability, and proper metrics were defined to measure the quality of the prototype. It was evaluated by observing users from different backgrounds using the prototype and comparing the results with the desired results. Only a limited number of quality attributes, addressing the quality of general software, were chosen in the paper, but those chosen are described properly with their metrics and calculation methods. The prototype can assist users in managing their time according to the Pomodoro technique on their personal smartphones, and the quality metrics discussed can be applied to assure the quality of other smartphone-based applications and prototypes.


L. Usability Analyzer Tool: A Usability Evaluation Tool for Android Based Mobile Application (Babita Shivade and Meena Sharma, 2014)

This paper presents a software usability evaluation tool that helps software development organizations calculate usability factors. User responses are gathered using an Android-based questionnaire application; these readings are then used to calculate the usability factors, graphs are generated from the results, and the tool can easily predict which application is likely to have its quality assured. The method devised to calculate usability is based on mathematical formulas that effectively and accurately calculate the usability factors from the provided customer feedback. The authors have only taken into account application attributes based on customer feedback, whereas a customer's smartphone might itself have problems that make the quality of the application appear lower. By facilitating the early detection of usability defects in Android apps, this tool helps assure software quality.
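The questionnaire-driven calculation described above can be sketched with an assumed scoring rule (the paper's actual formulas are not reproduced here): average 1-5 Likert responses per usability factor and flag factors falling below an acceptance threshold, which is the kind of result the tool's graphs would visualize.

```python
def usability_report(responses, threshold=3.5):
    """responses: {factor: [1-5 ratings from all users]} -> (scores, weak factors)."""
    scores = {f: round(sum(r) / len(r), 2) for f, r in responses.items()}
    weak = sorted(f for f, s in scores.items() if s < threshold)
    return scores, weak

# Hypothetical questionnaire results for three usability factors.
answers = {"learnability": [4, 5, 4, 4],
           "operability": [3, 2, 3, 3],
           "attractiveness": [5, 4, 4, 5]}
scores, weak = usability_report(answers)
print(scores)  # {'learnability': 4.25, 'operability': 2.75, 'attractiveness': 4.5}
print(weak)    # ['operability']
```

A weak-factor list like this is what enables the early detection of usability defects the paper aims for.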

M. Usability-Improving Mobile Application Development Patterns (Bettina Biel and Volker Gruhn, 2010)

This paper introduces usability-improving mobile application development patterns for software designers of mobile applications that run on mobile devices without accessing remote logic or data storage. The Client-Side Multi-Screen Support pattern describes how efforts to design a usable application can be optimized. The Mobile Application Usability Test Suite pattern shows how to test an application designed using Client-Side Multi-Screen Support, during development and afterwards automatically. The End User Test pattern describes how to conduct a test with end users and what can be learned from the data. Two different scenarios relating to the above patterns are considered, and a solution for each problem is devised along with the positive and negative consequences of the solution.

N. Providing a Software Quality Framework for Testing of Mobile Applications (Dominik Franke and Carsten Weise, 2011)

The goal of this research is to evaluate the usage of existing common source code metrics, used in desktop software development, for the development of mobile software, and based on this evaluation to define quality attributes for mobile applications. The Eclipse Metrics plug-in 1.3.8 is used to extract values for source code metrics; it includes 23 different metrics based on the object-oriented metrics of Brian Henderson-Sellers and the agile software development metrics of Robert C. Martin. The research also seeks the design pattern most suitable for mobile application development and testing. The paper presents well how methods and tools can be used to improve and test a key quality of mobile applications such as data persistence, but everything explained is limited to the literature, and the practical work is not sufficiently explained. The framework provided can be used to realize an Eclipse plug-in that evaluates quality attributes of Android software in Eclipse.

O. A Mobile Software Quality Model (Dominik Franke and Stefan Kowalewski, 2012)

Existing models are so generic that they cannot cover the special needs of specific software systems, e.g. mobile devices; they also target only the quality of the final product. Therefore a new software quality model for mobile applications is proposed in this paper, focusing on the most important quality attributes in the area of current mobile software. Physical limitations and the current status quo of mobile device development are taken into consideration. During the analysis of the quality model, two Android applications with significantly different rankings were selected from the Android market to show where the differences in quality lie with respect to the proposed model. Qualities that contribute directly to the end-user experience were given more focus than code-related qualities. Finally, the model is not restricted to the given software qualities but is easily extensible for any application-specific needs.

P. On the Use of Software Quality Standard ISO/IEC9126 in Mobile Environments (Ali Idri and Karima Moumane, 2013)

This paper presents the application of the ISO 9126 software quality standard, especially its External Quality model, to the limitations of the mobile environment, such as frequent disconnection, lower bandwidth and limited energy autonomy. Recommendations of ISO 25010 were also considered during the analysis. The proposed analysis process consists of three steps: analyze the external metrics; check the influence of mobile limitations on the external metrics; and calculate the degrees of influence using the coverage rate of the external metrics of each external quality characteristic that is influenced by some limitation. Additional external metrics, especially for the stability and testability attributes, have been suggested for inclusion in the ISO 9126 standard, although their need has not been well justified. This framework may help developers, evaluators and quality managers deal with the limitations of mobile devices and achieve a high level of quality in software that operates in mobile environments.
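The coverage-rate step can be sketched as follows. This is a hedged illustration of the computation outlined above, under the assumption that the degree of influence of a quality characteristic is the fraction of its external metrics affected by some mobile limitation; the metric names are hypothetical, not the standard's actual metric set.

```python
def degrees_of_influence(characteristics, influenced_metrics):
    """characteristics: {characteristic: [metric names]};
    influenced_metrics: set of metrics affected by mobile limitations
    (e.g. frequent disconnection, lower bandwidth, limited energy autonomy)."""
    rates = {}
    for characteristic, metrics in characteristics.items():
        hit = sum(1 for m in metrics if m in influenced_metrics)
        rates[characteristic] = round(hit / len(metrics), 2)
    return rates

# Hypothetical external metrics per characteristic.
chars = {"Reliability": ["mean_down_time", "restart_time", "failure_density"],
         "Efficiency": ["response_time", "throughput"]}
influenced = {"mean_down_time", "restart_time", "response_time"}
print(degrees_of_influence(chars, influenced))
# {'Reliability': 0.67, 'Efficiency': 0.5}
```

A high rate signals a characteristic whose assessment is strongly shaped by mobile constraints and therefore deserves mobile-specific metrics.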


Q. Using Software Quality Standards to Assure the Quality of the Mobile Software Product (Luis Corral, 2012)

The purpose of the processes defined for developing mobile phone applications is to produce software that performs satisfactorily in delivering value to end users within a resource-limited environment, but no mobile software lifecycle exists that can establish an environment-specific measurement plan linked to the overall quality goals of the mobile software market. Therefore, a standard-based strategy is proposed to develop the capability to associate the market requirements and success factors of a mobile product with quality characteristics that can be measured, and to feed this information back to the development processes for continuous improvement. The approach consists of three steps: identify the most important quality requirements relating to mobile apps; conduct an analysis that relates mobile-specific characteristics to their corresponding measurable quality requirements; and use the Goal-Question-Metric methodology to devise an effective strategy for measuring product quality using the information from the previous stages.

R. Standard-based Strategy to Assure the Quality of the Mobile Software Product (Luis Corral, 2012)

This paper proposes a strategy to assess the quality of mobile applications based on metrics derived from the demands set by application stores and execution environments. Development strategies should be put into practice that consider several quality drivers, such as the mobile environment, end-user expectations and application markets' policies. For this research, an experiment was conducted consisting of the retrieval of a sample of different mobile applications from Google Play (the Android OS market) and the calculation of the mobile software quality metrics defined via the Goal-Question-Metric approach. The source code was analyzed automatically to compute user-defined metrics. This allowed a multi-dimensional evaluation of the code without introducing human-operator bias.

S. Addressing mobile applications risk: A software quality focus (Amman et al., 2014)

This paper provides a perspective on the specific software quality attributes to consider when an organization formulates a risk-based approach to mobile application software testing. It proposes that mobile applications should be tested using a risk-based approach to ensure their success. For such testing, quality characteristics such as interoperability, recoverability, efficiency, security, fault tolerance and usability should be focused on. The need for different testing techniques and tools is discussed, which may help companies quickly develop and redesign secure, stable, functional mobile applications and lessen the business and operational risks specific to each application. The theoretical aspect of risk-based analysis is covered fully, whereas the practical aspect is missing entirely.
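A risk-based test ordering of the kind implied above can be sketched with the common likelihood-times-impact scoring scheme. This is an assumed formulation with hypothetical scores, not the paper's own model: each quality characteristic gets a 1-5 likelihood and impact, and testing effort goes to the highest products first.

```python
def prioritize(risks):
    """risks: {characteristic: (likelihood, impact)} -> names sorted by risk score."""
    return sorted(risks, key=lambda c: risks[c][0] * risks[c][1], reverse=True)

# Hypothetical 1-5 (likelihood, impact) scores for one application.
risks = {"security": (4, 5),
         "usability": (3, 3),
         "recoverability": (2, 4),
         "efficiency": (3, 4)}
print(prioritize(risks))  # ['security', 'efficiency', 'usability', 'recoverability']
```

Ordering characteristics this way lets a team spend limited testing time where business and operational exposure is greatest, which is the core of the risk-based approach.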

T. An Efficient and Effective New Generation Objective Quality Model for Mobile Applications (Zahra et al., 2013)

It is a big challenge for smartphone application developers to build self-adaptive applications because of differing phone characteristics such as screen size and battery life. For this purpose, a general quality model is proposed to help mobile development organizations develop high-quality mobile applications. The model defines some quality attributes extracted from the ISO 9126 quality model. Some tasks for the QA team are also pointed out, such as validating end-user requirements, monitoring mobile application production, validating the performance of the mobile application, and tracking the quality of the mobile application after deployment. The paper consists only of brief definitions of quality attributes and related issues in smartphone applications, whereas their metrics and solutions are not properly defined and presented.

III. ANALYSIS

The analysis results according to the evaluation criteria of Table 1 are shown in Table 2 and Table 3. Twenty techniques have been evaluated using twelve evaluation parameters. Analysis of Table 2 and Table 3 reveals that although most of the papers pursue ease of evaluation by integrating tests within Android applications, Schweighofer and Hericko [9] present the evaluation of a smartphone application through crowdsourcing via an automated cloud-based application, and Franke et al. [14] use the Eclipse Metrics plug-in 1.3.8 to extract values for source code metrics. These may solve the compatibility issue but can exceed budgets, as such tools are neither easily available nor cheap.

The quality of an application can mainly be defined by user satisfaction and quality of experience (QoE); therefore, in testing the quality of an application, user feedback can play a significant role. Only Ickin et al. [4], Rahimian et al. [6], Schweighofer et al. [9], Shivade et al. [12] and Franke et al. [15] have focused on user feedback, whereas the other techniques have missed this important factor. As the smartphone business depends wholly on the quality of service provided to customers, this key element should be considered at all times, alongside other factors such as reliability and interoperability.

The performance of an application strongly determines its quality. All the techniques except Schweighofer et al. [9] and Biel et al. [13] have discussed the performance attributes that should be evaluated among others.

Biel et al. [13] discuss resource optimization in terms of screen size. Others, such as Hasan et al. [1], Wasserman [3], Hussain et al. [7], Schweighofer et al. [9], Niknejad [11], Shivade et al. [12] and Ammann et al. [19], do not discuss the important factor of how battery consumption and energy leaks within a mobile application affect its performance. It has also been noted that none of the techniques discuss the limitations faced in ensuring application quality due to device hardware such as the processor and RAM.

Maintainability ensures the future success of an application. In today's cyber world, changing technology is an important aspect that should be considered for the survival of any application; thus, the quality of an application calls for the technology or application to be maintainable. Liu et al. [2], Wasserman [3], Ickin et al. [4], Liu et al. [8], Schweighofer et al. [9], Niknejad [11], Shivade et al. [12], Biel et al. [13] and Franke et al. [14] have not discussed the maintainability aspect of mobile applications, which we consider a limitation of their research papers.

Some techniques, such as those of Hasan et al. [1] and Liu et al. [2], are based on elaborate metric- and formula-based evaluation, which provides good grounds for testability. Testability ensures that a proposed technique is verifiable for an application.
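Metric- and formula-based evaluation of this kind often reduces to a weighted combination of normalized attribute scores. The sketch below makes that calculation concrete; the attributes and weights are hypothetical values chosen for illustration, not figures from [1] or [2].

```python
def quality_score(scores, weights):
    """Combine normalized attribute scores (0.0-1.0) into a single quality
    value via a weighted average. Both arguments map attribute name -> value."""
    if set(scores) != set(weights):
        raise ValueError("every scored attribute needs a weight and vice versa")
    total_weight = sum(weights.values())
    if total_weight <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(scores[a] * weights[a] for a in scores) / total_weight

# Hypothetical attribute scores and weights for one application.
scores = {"performance": 0.8, "usability": 0.9, "maintainability": 0.5}
weights = {"performance": 3, "usability": 2, "maintainability": 1}
print(round(quality_score(scores, weights), 3))  # → 0.783
```

Because every term is explicit, a formula like this is directly re-checkable against a new application, which is what gives such techniques their testability.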

Schweighofer et al. [9] discuss in detail the security risks that a mobile application can encounter and how important it is to consider them while developing a mobile application. Niknejad [11], Biel et al. [13], and Franke et al. [14][15] explain their techniques through case studies, which help the reader understand how quality is to be evaluated and maintained.

IV. CONCLUSION

Smartphone application development is a rapidly emerging field owing to the embedding of smartphones in our daily lives. With the introduction of well-equipped development tools, application development has become easy and cheap, which also attracts inexperienced people who tend to overlook the technical aspects that ensure the quality of a smartphone application. This calls for the establishment of standardized frameworks for smartphone application development.

Our survey provides an analysis of a number of frameworks that can be used to produce quality applications. Aspects such as security, reliability, efficiency, and usability should be kept in mind to provide quality services to customers. So far, it is noted that the frameworks devised for quality application production are not mobile-specific but are derived from desktop applications. This practice can give rise to a number of issues, such as overconsumption of resources and battery drainage.

More work should be done on maintaining application performance while keeping in mind the hardware constraints of the smartphone device. It is noted that a major reason a user uninstalls an application, among many others, is battery consumption beyond a certain limit. Work should be done on defining that limit, and applications should be built with all of the above aspects in mind. Public awareness programs can also be organized to explain the shortcomings of using low-quality applications and how to ensure that a quality product is both built by the developer and installed by the user.
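A first step toward defining such a limit would be to express per-application battery drain as a rate and flag applications that exceed a chosen threshold. The 5%-per-hour cut-off in the sketch below is a purely hypothetical placeholder, since, as argued above, no accepted limit has yet been defined.

```python
def battery_drain_rate(level_start, level_end, hours):
    """Battery percentage consumed per hour over an observation window."""
    if hours <= 0:
        raise ValueError("observation window must be positive")
    return (level_start - level_end) / hours

def exceeds_limit(rate_pct_per_hour, limit_pct_per_hour=5.0):
    """Flag an application whose drain rate is above the (assumed) limit."""
    return rate_pct_per_hour > limit_pct_per_hour

# Example: battery fell from 90% to 78% over a two-hour session.
rate = battery_drain_rate(level_start=90, level_end=78, hours=2)
print(rate, exceeds_limit(rate))  # → 6.0 True
```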

REFERENCES

[1] Yousuf Hasan, Mustafa Zaidi, Najmi Haider, W. U. Hasan, and I. Amin, "Smart Phones Application Development Using HTML5 and Related Technologies: A Tradeoff Between Cost and Quality," IJCSI International Journal of Computer Science Issues, vol. 9, issue 3, no. 3, May 2012.

[2] Yepang Liu, Chang Xu, and Shing-Chi Cheung, "Characterizing and Detecting Performance Bugs for Smartphone Applications," in Proc. ICSE 2014, 36th International Conference on Software Engineering, pp. 1013-1024.

[3] Anthony I. Wasserman, "Software Engineering Issues for Mobile Application Development," FoSER 2010.

[4] Selim Ickin, Katarzyna Wac, Markus Fiedler, Lucjan Janowski, Jin-Hyuk Hong, and Anind K. Dey, "Factors Influencing Quality of Experience of Commonly Used Mobile Applications," IEEE Communications Magazine, April 2012.

[5] Jerry Gao, Xiaoying Bai, Wei-Tek Tsai, and Tadahiro Uehara, "Mobile Application Testing: A Tutorial," Computer, vol. 47, no. 2, pp. 46-55, Feb. 2014, doi:10.1109/MC.2013.445.

[6] V. Rahimian and J. Habibi, "Performance Evaluation of Mobile Software Systems: Challenges for a Software Engineer," in 5th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2008).

[7] Azham Hussain and Maria Kutar, "Usability Metric Framework for Mobile Phone Application," in Proc. 10th Annual Conference on the Convergence of Telecommunications, Networking & Broadcasting, Liverpool, UK, 2009.

[8] Yepang Liu, Chang Xu, S. C. Cheung, and Wenhua Yang, "Where Has My Battery Gone? Finding Sensor Related Energy Black Holes in Smartphone Applications," in IEEE International Conference on Pervasive Computing and Communications (PerCom), San Diego, 18-22 March 2013.

[9] T. Schweighofer, "Mobile Device and Technology Characteristics' Impact on Mobile Application," in Z. Budimac (ed.), Proceedings of the 2nd Workshop on Software Quality Analysis, Monitoring, Improvement, and Applications (SQAMIA), Novi Sad, Serbia, 15-17 September 2013, published at http://ceur-ws.org.

[10] Shahriyar Amini, Jialiu Lin, Jason Hong, Janne Lindqvist, and Joy Zhang, "Towards Scalable Evaluation of Mobile Applications through Crowdsourcing and Automation," CyLab, Carnegie Mellon University, Pittsburgh, PA 15213, February 29, 2012.

[11] Aida Niknejad, "A Quality Evaluation of an Android Smartphone Application," Master Thesis in Software Engineering and Management, University of Gothenburg, Sweden, June 2011.

[12] Babita Shivade and Meena Sharma, "Usability Analyzer Tool: A Usability Evaluation Tool for Android Based Mobile Application," International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), vol. 3, May-June 2014.

[13] Bettina Biel and Volker Gruhn, "Usability-Improving Mobile Application Development Patterns," in Proc. EuroPLoP '10, European Conference on Pattern Languages of Programs, Bavaria, Germany, July 2010.

[14] Dominik Franke and Carsten Weise, "Providing a Software Quality Framework for Testing of Mobile Applications," in Fourth IEEE International Conference on Software Testing, Verification and Validation, 2011, pp. 431-434.

[15] Dominik Franke and Stefan Kowalewski, "A Mobile Software Quality Model," in 12th International Conference on Quality Software, 2012, pp. 154-157.

[16] Ali Idri and Karima Moumane, "On the Use of Software Quality Standard ISO/IEC 9126 in Mobile Environments," in 20th Asia-Pacific Software Engineering Conference, 2013.

[17] Luis Corral, "Using Software Quality Standards to Assure the Quality of the Mobile Software Product," SPLASH '12, October 19-26, 2012.

[18] Luis Corral, "Standard-based Strategy to Assure the Quality of the Mobile Software Product," SPLASH '12, October 19-26, 2012.

[19] Christopher Ammann, Kirsten Hill, and Ryan Burns, "Addressing Mobile Applications Risk: A Software Quality Focus," KPMG LLP, 2014.

[20] Sobia Zahra, Asra Khalid, and Ali Javed, "An Efficient and Effective New Generation Objective Quality Model for Mobile Applications," I.J. Modern Education and Computer Science, April 5, 2013, pp. 36-42.

[21] Mehreen Sirshar, "Embedded System Design through UML," International Conference on Intelligence and Information Technology, Lahore, Pakistan, 2010.

TABLE I. PARAMETERS FOR EVALUATING SURVEYED TECHNIQUES

| # | Evaluation Parameter | Meaning | Possible Values |
|---|----------------------|---------|-----------------|
| 1 | Development ease | Availability of development tools and the developer's familiarity with the programming language | Yes, No |
| 2 | Compatibility and portability | Whether different platforms are supported | Yes, No |
| 3 | Case Study | Technique explained with an example | Yes, No |
| 4 | Data connectivity | Use of removable media, web, email, etc. | Yes, No |
| 5 | Performance | Approach's responsiveness and stability | Yes, No |
| 6 | Resource optimization | Checks battery consumption and does not drain repositories other than the official repository | Yes, Partial, No |
| 7 | Testability | Whether the proposed system is tested | Yes, No |
| 8 | Security | Effort towards integrity, confidentiality, and authentication of service | Trusted, Untrusted, Deceptive |
| 9 | Maintainability | Product can isolate and correct defects without disturbing non-faulty components | Yes, No |
| 10 | Cost Effectiveness | Whether it is cheap and under budget | Yes, No |
| 11 | Interface Design | Whether the user is considered in developing the design | Yes, No |


TABLE II. ANALYSIS OF EXISTING TECHNIQUES FOR DEVELOPMENT OF QUALITY SMARTPHONE APPLICATIONS

| S# | Technique | Development ease | Compatibility and portability | User Feedback | Case Study | Performance | Resource optimization |
|----|-----------|------------------|-------------------------------|---------------|------------|-------------|-----------------------|
| 1 | Hasan et al. 2012 | No | Trusted | No | No | Yes | No |
| 2 | Liu et al. 2014 | Yes | Untrusted | No | No | Yes | Yes |
| 3 | Wasserman 2010 | Yes | Trusted | No | No | Yes | No |
| 4 | Ickin et al. 2012 | Yes | Trusted | Yes | No | Yes | Yes |
| 5 | Gao et al. 2014 | N/A | Trusted | No | No | Yes | Yes |
| 6 | Rahimian and Habibi 2008 | N/A | Trusted | Yes | No | Yes | Yes |
| 7 | Hussain and Kutar 2008 | No | Trusted | No | No | Yes | No |
| 8 | Liu et al. 2013 | Yes | Untrusted | No | No | Yes | Yes |
| 9 | Schweighofer and Hericko 2013 | No | Trusted | Yes | No | No | No |
| 10 | Amini et al. 2012 | Yes | Trusted | No | No | Yes | Yes |
| 11 | Niknejad 2011 | Yes | No | N/A | Pomodoro prototype | Yes | No |
| 12 | Shivade et al. 2014 | Yes | No | Yes | No | Yes | No |
| 13 | Biel et al. 2010 | N/A | N/A | No | Different examples mapped onto different patterns | No | Yes |
| 14 | Franke et al. 2011 | No | Yes | No | Analysis of mobile application lifecycles | Yes | Yes |
| 15 | Franke et al. 2012 | N/A | Yes | Yes | Mobile quality model applied to two Android applications | Yes | Yes |
| 16 | Idri et al. 2013 | N/A | N/A | No | No | Yes | Yes |
| 17 | Corral 2012 | N/A | N/A | No | No | Yes | Yes |
| 18 | Corral 2012 | N/A | N/A | No | No | Yes | Yes |
| 19 | Ammann et al. 2014 | N/A | Yes | No | No | Yes | No |
| 20 | Zahra et al. 2013 | | | | | | |


TABLE III. ANALYSIS OF EXISTING TECHNIQUES FOR DEVELOPMENT OF QUALITY SMARTPHONE APPLICATIONS

| S# | Technique | Testability | Security | Maintainability | Cost Effectiveness | Interface Design | Reliability |
|----|-----------|-------------|----------|-----------------|--------------------|------------------|-------------|
| 1 | Hasan et al. 2012 | Yes | Untrusted | Yes | Yes | Yes | Yes |
| 2 | Liu et al. 2014 | Yes | Trusted | No | No | Yes | Yes |
| 3 | Wasserman 2010 | No | Trusted | No | Yes | Yes | No |
| 4 | Ickin et al. 2012 | Yes | Trusted | No | Yes | Yes | No |
| 5 | Gao et al. 2014 | No | Deceptive | Yes | N/A | Yes | No |
| 6 | Rahimian and Habibi 2008 | Yes | Trusted | Yes | No | No | No |
| 7 | Hussain and Kutar 2008 | No | Deceptive | Yes | Yes | No | Partial |
| 8 | Liu et al. 2013 | No | Deceptive | No | Yes | Yes | Yes |
| 9 | Schweighofer and Hericko 2013 | No | Untrusted | No | No | No | No |
| 10 | Amini et al. 2012 | No | Untrusted | Yes | Yes | Yes | Partial |
| 11 | Niknejad 2011 | Yes | Trusted | No | Yes | Yes | Partial |
| 12 | Shivade et al. 2014 | Yes | Untrusted | No | Yes | Yes | No |
| 13 | Biel et al. 2010 | No | Untrusted | No | No | Yes | Yes |
| 14 | Franke et al. 2011 | Yes | Trusted | No | No | No | No |
| 15 | Franke et al. 2012 | Yes | Untrusted | Yes | No | Yes | Partial |
| 16 | Idri et al. 2013 | N/A | Trusted | Yes | No | Yes | Yes |
| 17 | Corral 2012 | Yes | N/A | Yes | No | N/A | Partial |
| 18 | Corral 2012 | Yes | N/A | Yes | No | N/A | Partial |
| 19 | Ammann et al. 2014 | Yes | Trusted | Yes | No | Yes | Yes |
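The qualitative entries of Tables II and III can also be tallied programmatically to see how often each evaluation parameter is satisfied across the surveyed techniques. The sketch below hard-codes only a small excerpt of Table III's Testability and Maintainability columns; the values are transcribed from the table, and the subset is chosen purely for brevity.

```python
from collections import Counter

# Excerpt of Table III (technique -> {parameter: value}); values
# transcribed from the table above, subset chosen for brevity.
table3_excerpt = {
    "Hasan et al. 2012": {"Testability": "Yes", "Maintainability": "Yes"},
    "Liu et al. 2014":   {"Testability": "Yes", "Maintainability": "No"},
    "Wasserman 2010":    {"Testability": "No",  "Maintainability": "No"},
    "Ickin et al. 2012": {"Testability": "Yes", "Maintainability": "No"},
    "Gao et al. 2014":   {"Testability": "No",  "Maintainability": "Yes"},
}

def parameter_coverage(table, parameter):
    """Count how the values of one evaluation parameter are distributed
    across the surveyed techniques."""
    return Counter(row[parameter] for row in table.values())

print(parameter_coverage(table3_excerpt, "Testability"))      # → Counter({'Yes': 3, 'No': 2})
print(parameter_coverage(table3_excerpt, "Maintainability"))  # → Counter({'No': 3, 'Yes': 2})
```

Extending the dictionary to all twenty techniques and twelve parameters would reproduce the coverage observations made in the analysis section.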
