Mobile Application Testing


Fachhochschule Köln

University of Applied Sciences Cologne

Fakultät 07 für Informations-, Medien- und Elektrotechnik

Studiengang: Master Technische Informatik

Mobile Application Testing

Seminar Informatik

Student: Melissa D. Mulikita Matr.Nr. : 11047303

Betreuer: Prof. Dr. Hans W. Nissen 21.06.2012


1 Introduction ... 3

2 Challenges of mobile application testing ... 3

2.1 Device Challenges ... 4

2.1.1 Device simulator/emulator vs. Real target device ... 4

2.2 Software Complexity/ Network Challenges ... 5

3 Testing methods and Guidelines for testing ... 6

3.1 Methods of mobile application testing ... 6

3.2 Testing recommendation ... 8

3.2.1 Weighted Device Platform Matrix ... 8

4 Test Automation Tools & Frameworks ... 10

4.1.1 Android Testing Framework ... 11

4.1.1.1 Android Instrumentation Framework ... 11

4.1.1.2 Positron Framework ... 13

4.1.1.3 Android Instrumentation vs. Positron Framework... 14

4.1.2 Sikuli ... 14

4.2 JaBUTi/ME: White Box Testing ... 16

4.3 MobileTest: Black Box Testing ... 18

4.4 List of available testing tools ... 21

5 Conclusion ... 23

6 List of Literature ... 24


1 Introduction

Mobile devices are evolving and becoming more complex with a variety of features and functionalities. Many applications that were originally deployed as desktop applications or web applications are now being ported to mobile devices.

In this thesis, a mobile application is defined as an application running on mobile devices and taking contextual information as input. Mobile applications are either pre-installed on phones during manufacture or downloaded from an application store or through other mobile software distribution platforms. According to Keane, an IT services firm, mobile applications can be categorized into standalone and enterprise applications. “Standalone applications reside in the device and do not interface with external systems”[11]. Enterprise applications must meet the standards for business. They are developed to perform transactions that are resource-intensive and that must meet requirements for maintenance, administration and security. “Enterprise applications interface with external systems through Wireless Application Protocol (WAP) or Hyper Text Transfer Protocol (HTTP)”[11].

Although mobile applications have limited computing resources, they are expected to be as agile and reliable as traditional applications. One of the best ways to determine whether a mobile application meets these expectations is mobile application testing.

2 Challenges of mobile application testing

Unlike traditional testing, mobile application testing requires special test cases and techniques. The wide variety of mobile technologies, platforms, networks and devices presents a challenge when developing efficient strategies to test mobile software.

This section discusses the challenges that have to be considered when testing mobile applications, in comparison with traditional applications such as desktop applications. Although many traditional software testing practices can be applied to the testing of mobile applications, there are numerous technical issues specific to mobile applications that need to be considered. Traditional guidelines and methods used in testing traditional applications may not be directly applicable to a mobile environment.

Traditional applications such as desktop applications run on Personal Computers and workstations. Desktop application testing is focused on a specific environment. Complete applications are tested in categories like GUI, functionality, load, and backend. In client-server applications two different components are tested: the application is loaded on the server machine, while a client component executes on every client machine. Client-server applications are tested in categories like GUI on both sides, functionality, load, client-server interaction, and backend. This environment is mostly used in intranet networks; the number of clients and servers, as well as their locations, is known in the test scenario.

When testing mobile applications additional test cases should be considered. The test phase should be able to answer these questions:

- How much battery life does the application use? What good is a mobile device that has to be supplied with electricity just to power the application?

- How does the application function with limited or no network connectivity? Minimally the application should not crash; ideally the user should not even notice a difference.

- How fast is the application? Even with slower processors and networks, users still expect desktop speeds out of their mobile devices.

- How quickly can users navigate the application? With limited attention spans, mobile applications need to be highly intuitive.

- How much data will the application need? Will users without unlimited data plans or devices without large internal storage be able to use the application?

- Will peripheral devices affect the application? Whether or not the application uses peripheral devices, these devices affect the processes running in the background, in turn affecting the application.

2.1 Device Challenges

Fig. 1: Available mobile manufacturers

In the desktop application testing environment, there is one central processing unit platform on which applications are tested. Hardware components of a Personal Computer, “such as the disk drives, graphics processor and network adapters have usually been thoroughly tested for compatibility with those operating systems”[6]. Display formats and input devices of desktop applications fall within a narrow range of choices and are well known.

Mobile devices on the other hand consist of “a wide range of handsets, each with unique configurations and form factors that can have unpredictable effects on the performance, security and usability of applications”[6].

With the boom of smartphones, a mobile device usually contains hardware components such as Wi-Fi and Bluetooth network capabilities in addition to cellular connectivity, a GPS receiver, and multiple input devices such as a touch-screen and a keypad.

“Each combination of components interacts in different ways with each other, and with the operating system, to create potential compatibility and performance issues that must be addressed in testing”[6]. Mobile application testing has to ensure that the application delivers optimum performance for all configurations of hardware.

2.1.1 Device simulator/emulator vs. Real target device

Ideally, to reproduce the production environment, mobile application testing has to be performed on real target devices so that every possible interaction among hardware components, software components and the wireless carrier's network is tested in the most accurate and reliable environment.

A device simulator/emulator is software that simulates/emulates the performance and behavior of real devices. Simulators/emulators are easier to obtain and less expensive than samples of real devices. Simulators/emulators can be beneficial for testing features of the application that are device independent. However, real devices should be used for validating the results.

Unfortunately, acquiring all target devices to perform manual testing is complex and costly at every stage of testing. As an alternative to both emulators and buying many physical devices, a service like DeviceAnywhere.com, which gives testers online access to numerous real devices on various networks, could be considered. DeviceAnywhere's phone bank enables access to 2000 different handset models across all major global network operators.

2.2 Software Complexity/ Network Challenges

Fig. 2: Available mobile operating systems

In addition to hardware-based challenges, testers must handle the complexity of the software environment of mobile devices. To make certain that an application will work properly across the full range of mobile devices, all current versions of iOS 4, Windows Mobile, Windows Phone 7, Symbian and Android, as well as RIM BlackBerry, must be addressed.

In traditional application testing, testing on the current versions of the Windows, Apple Macintosh and Linux operating systems is adequate to make certain that a desktop application will work properly on most common Personal Computers.

The rapid changes of the handset market require that testing methods for the changing cast of operating systems are maintained. Many mobile applications are developed using RAD (rapid application development) in which multiple versions of the software are quickly developed and assessed by end users. “This rapid-fire cycle of coding and re-coding makes it impossible to assess how each change affects the application's performance, stability or security”[6]. Just as mobile operating systems are constantly changing, so are the networks, protocols and other key elements of the infrastructures used by network providers. “Carriers worldwide are upgrading their networks from 2-G to 3-G, and even to 4-G with LTE (Long Term Evolution) networks. ”[6]. Internet traffic will be upgraded from IPv4 to IPv6 as well.

Mobile network carriers provide various levels of bandwidth. “Carriers use different methods to tunnel their own traffic into the TCP IP protocol used by the Web, changing how applications receive, transmit and receive data”[6]. Different web proxies are used by the carriers to define which Web sites users can access, as well as how the sites should be displayed on the devices.

“All of these differences can affect the stability, performance or security of a mobile application, and must be tested to assure the end-user experience”[6]. Tests must be built and scripts executed in order to check the interactions within the handset and between the application and its components. In addition, applications must be tested for their compatibility with the networks of the devices they might run on.


3 Testing methods and Guidelines for testing

Although the mobile application testing process is based on traditional testing, mobile devices have different testing characteristics that must be kept in mind when deciding which testing methods to use for validation. In this chapter, testing methods used in mobile application testing are briefly listed and recommendations for optimum testing are given.

3.1 Methods of mobile application testing

Unit Testing:

Unit testing consists of functional and reliability testing in an engineering environment. Test cases are written after coding. The purpose of unit testing is to find (and remove) as many errors in the mobile software as possible. Unit testing is also referred to as component testing.

Integration Testing:

Integration testing is testing where modules are combined and tested as a group. It is any type of software testing that seeks to verify the interfaces between components (modules) against a software design. Integration testing follows unit testing and precedes system testing.

System Testing:

System testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. During system testing the entire system of the mobile application is tested against all the specifications of the application. System testing falls within the scope of black box testing, and does not require any knowledge of the inner design of the code or logic.

Regression testing:

Regression testing resembles functional testing. A regression test allows a consistent, repeatable validation of each new release of a mobile application. Regression testing ensures that reported product defects have been corrected for each new release and that no new quality problems were introduced in the maintenance process. Although regression testing can be performed manually the required testing is often automated to reduce time and resources.

Compatibility Testing:

Compatibility testing ensures compatibility of an application with different native device features. Compatibility testing can be performed manually or can be driven by an automated test suite.

Performance Testing & Stress Testing:

Performance testing can be applied to understand mobile application scalability. This sort of testing is particularly useful to identify performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions.

Examples of the focus of performance testing are the behavior of a mobile application under low resources such as memory, and of a mobile website when many mobile users simultaneously access it.
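As an illustration of the kind of load simulation such an automated suite performs, the following plain-Python sketch (the handler, user counts and delay are invented for illustration, not taken from the paper) drives concurrent simulated users against a stubbed backend and collects per-request latencies:

```python
import threading
import time

def handle_request(payload):
    # Stub for a mobile backend endpoint: simulate a small amount of work
    time.sleep(0.01)
    return {"status": 200, "echo": payload}

def run_load_test(num_users, requests_per_user):
    """Drive concurrent simulated users and collect (status, latency) pairs."""
    latencies = []
    lock = threading.Lock()

    def user_session(user_id):
        for i in range(requests_per_user):
            start = time.perf_counter()
            response = handle_request({"user": user_id, "seq": i})
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append((response["status"], elapsed))

    threads = [threading.Thread(target=user_session, args=(u,))
               for u in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

# Simulate a peak-load condition: 20 users, 5 requests each
results = run_load_test(num_users=20, requests_per_user=5)
errors = [status for status, _ in results if status != 200]
worst = max(elapsed for _, elapsed in results)
```

In a real suite the stubbed handler would be replaced by calls against the application under test, and `worst` would be checked against a latency budget.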

Black Box Testing/ Functional Testing:

Functional testing is testing the core functionality of a mobile application as per specification and verifying correct performance. This can involve testing of the application's user interface, APIs, database management, security, installation, and networking. Black box testing, or functional testing, is testing without knowledge of the internal workings of the item being tested. Most tests are functional in this sense; white box testing is the exception.

White Box Testing/ Structural Testing:

White box testing is testing based on an analysis of internal workings and structure of a piece of software. White box testing includes techniques such as branch testing and path testing. It is also known as structural testing and glass box testing.

UI (User Interface) Testing:

UI testing is the process of testing an application with a graphical user interface to ensure correct behavior and state of the UI. This includes verification of data handling, control flows, states and display of windows and dialogs. An important aspect in mobile application testing is to ensure consistency of GUI over various devices.


3.2 Testing recommendation

For the types of testing mentioned it is a good idea to use some combination of real device and emulator testing, as recommended in the table below:

Fig. 3: Keane’s Recommended Strategy for Testing

In its white paper "Testing Mobile Business Applications", Keane presents a good approach to testing mobile applications.

Fig. 3 groups the testing methods. Standard functionality testing should always be considered in mobile application testing. Because of device challenges, GUI compatibility testing has to be incorporated in the application testing process in addition to standard testing. The use of test automation, emulators and real devices determines the success of mobile application testing.

Enterprise applications must meet requirements for maintenance, administration and security. They “are more complex in functionality and architecture”, and it is therefore important to test enterprise applications for performance, security, and synchronization in addition to standard functionality testing.

3.2.1 Weighted Device Platform Matrix

In order to test mobile applications effectively, various test combinations must be conducted. “Repeating the test cases over many hardware and software combinations increases the tedium of test execution”[11]. A strategy to optimize tests for the various combinations is to adopt the Weighted Device Platform Matrix method.

According to Keane the matrix is prepared in two steps:

Defining parameters of importance

With information gained from business requirements, factors that influence the importance of a specific hardware and software combination should be identified. These factors include:

- The total number of users for a device and operating system

- Recommendation of business to conduct test for a particular device or operating system

The “factors are weighed and then relative weights are assigned to each of the devices and OS”[11].

Preparing matrix for all possible combinations

After defining the parameters of importance, a matrix representing the result for each combination is prepared. The result is the “product of relative weights of devices and operating systems”[11]. The criticality of a combination is proportional to the result: a high result indicates high criticality. “Based on the criticality of the combination, the required degree of coverage can be determined”[11].
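The two steps can be sketched as follows; the device names and weights below are invented for illustration, whereas in practice the relative weights would be derived from the business requirements:

```python
# Step 1: relative weights assigned to each device and each operating system
# (illustrative values, not from the paper)
device_weights = {"Handset A": 5, "Handset B": 3, "Handset C": 1}
os_weights = {"Android": 4, "Symbian": 2}

def build_matrix(devices, systems):
    """Step 2: criticality of each device/OS combination as the product
    of the two relative weights."""
    return {(d, o): dw * ow
            for d, dw in devices.items()
            for o, ow in systems.items()}

matrix = build_matrix(device_weights, os_weights)

# Rank combinations from most to least critical; the top entries are the
# combinations that need the deepest test coverage
ranked = sorted(matrix.items(), key=lambda kv: kv[1], reverse=True)
```

Here the most critical combination is ("Handset A", "Android") with a result of 20, so it would receive the highest degree of test coverage.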


4 Test Automation Tools & Frameworks

Testing mobile applications is traditionally done by manual execution of test cases and visual verification of the results. However, this is very time-consuming. Using automation tools and testing frameworks yields quantifiable benefits and is recommended.

Test automation is done by using software to control the execution of tests and to compare actual results to expected results. With test automation, testers have the possibility to set up test preconditions and to automate test control and test reporting functions.

“Automated testing offers fast, repeatable and comprehensive test execution, including overnight or over-weekend test runs”[6]. However, the “investment in tools, test-script development and data collection is required and is not amenable to fast-changing requirements”[6].

Because mobile applications are focused on user interaction, testing mobile GUI applications raises special challenges. “The event-driven nature of GUIs makes GUI applications non-deterministic; the user can click anywhere on the screen”[3].

Mobile applications are more focused on a good user experience than traditional applications. Today's high-resolution displays of mobile devices offer increasing capabilities in user interaction experiences.

The development platform for a mobile application, a Personal Computer, is different from the target platform, a handheld device. It is impossible to test all the possible states a GUI can have. Because of these unlimited testing scenarios, mobile GUI testing is more difficult than functional testing of desktop applications. “Manual GUI testing is very error prone and hardly reproducible, and causes very high effort”[3].

A solution to the mentioned problems is automated GUI testing for mobile applications. “The idea of automated GUI testing is to develop testing scripts which simulate user interactions with the GUI application and verify the correct behavior, state and control flow in the GUI to discover possible deviations from the expected behavior”[3]. GUI testing is conducted to clarify required functionalities of the application from the user perspective.


4.1.1 Android Testing Framework

The Android testing framework provides powerful tools for testing mobile applications. The following diagram summarizes the testing framework of Android:

Fig. 5: Testing framework of Android

The Android SDK (Software Development Kit) consists of tools for developing and testing Android-based mobile applications. The Android SDK tools are available as a plug-in for Eclipse with ADT (Android Development Tools), and “in command-line form for use with other IDEs (integrated development environments)”. The SDK “tools get information from the project of the application under test and use this information to automatically create the build files, manifest file, and directory structure for the test package”. The SDK also provides monkeyrunner, an API for testing devices with Python programs, and UI/Application Exerciser Monkey, a command-line tool for stress-testing UIs by sending pseudo-random events to a device. The next sections introduce two Android testing frameworks and list their differences, advantages and disadvantages.

4.1.1.1 Android Instrumentation Framework

The Android Instrumentation Framework is a powerful testing tool and is integrated in the Android SDK. “Instrumentation refers to the ability to monitor and diagnose an application by inserting tracking code, debugging techniques, performance counters, and event logs into the code, which also allows measuring the application's performance and controlling its behavior”[3].

The Android Instrumentation Framework provides supporting test classes which allow “starting, running, controlling and terminating an application in test mode”[3].

Fig. 6: Android Instrumentation Framework Class Diagram

As one can see in Fig. 6, the Android Instrumentation Framework extends the JUnit framework. “The ActivityInstrumentationTestCase2 extends the JUnit core TestCase class”[3]. Using the instrumentation framework is easy for experienced JUnit developers because it is based on the JUnit framework. “This allows using the standard JUnit assert-functionality for verifying expected and actual behavior in the GUI caused by user interactions or fired events”[3].

In Android, an Activity represents a screen of the application. Each activity consists of different groups of UI elements and has an independent lifecycle. Because an application consists of many such units, each activity can be tested separately. “Android instrumentation provides the special class ActivityInstrumentationTestCase2 for this testing level”[3].

A GUI test is important for testing processes related to the UI elements in an activity or a specific expected user behavior. Listing 1 shows the general test structure for an instrumentation test class. The extracted source code in Listing 1 displays a test where a new person is added to a list of contacts and the correct behavior and state of the GUI are verified. With the Android Instrumentation Framework, sending keystrokes to the selected control is provided by the sendKeys() method, which simulates specific user interactions. In Listing 1, sendKeys() is used to enter the surname of a person.

The test simulates pressing the “Save” button of a real device and executes the actual test as a Runnable. “The Runnable class includes event call and the assertions”[3]. The run() method is passed to the UI thread, which runs in parallel with the activity being tested.

public class TestDemo extends ActivityInstrumentationTestCase2<Demo> {

    private TextView surname;
    private Button save;
    private Person p;

    public TestDemo() {
        super("org.demo", Demo.class); // Bind the activity under test
    }

    protected void setUp() throws Exception {
        super.setUp();
        final Demo a = getActivity(); // Instantiate the activity
        // Bind widgets by id from the generated R file
        surname = (TextView) a.findViewById(R.id.surname);
        save = (Button) a.findViewById(R.id.save);
    }

    public void test1AddPerson() throws Throwable {
        sendKeys(KeyEvent.KEYCODE_T); // Type a letter
        sendKeys(KeyEvent.KEYCODE_O); // Type a letter
        sendKeys(KeyEvent.KEYCODE_O); // Type a letter

        // Runnable for the save button action event;
        // it contains the event call and the assertions
        Runnable saveRun = new Runnable() {
            public void run() {
                save.performClick();           // Simulate click event
                p = getActivity().getPerson(); // Get data from the UI
                save.setEnabled(false);        // Set UI element status
                Assert.assertFalse(save.isEnabled());
            }
        };
        runTestOnUiThread(saveRun); // Run in parallel with the activity under test
    }
}

Listing 1: Test Class Structure with Android Instrumentation Framework

Finally the tests are compiled and bundled as an independent instance of the application.

4.1.1.2 Positron Framework

“The Positron framework is a client-server model built on top of the Android Instrumentation Framework to handle the activity’s resources, offering a high-level approach for writing and running test cases”[3].

Each test case is a client, which connects to a server component that runs the activity. The framework provides infrastructure for communication as well as services for the server. “The communication network services use the Android Debug Bridge (adb) to establish the connection between the server and the clients”[3]. For each client test method an instance of an activity is created to communicate with.

“The Positron framework is thread-safe, which is important to control the UI elements and their events running in a separate thread”[3].

In order to access activity resources the framework uses dot-separated paths, as shown in Listing 2. The activity is a container of UI elements, which are arranged in a hierarchy. The test class extends the class TestCase. “The structure of the test class is similar to the structure of the standard JUnit test class”[3]. Verifications about behavior and data are made by "asserts", just as in JUnit. Listing 2 shows a test class written with Positron and methods such as pause(), press() and click() used to simulate user interactions with the GUI.

public class AddRecord extends TestCase {

    private String field1;

    @Before
    public void runBeforeEveryTest() {
        // Start the activity in test mode
        startActivity("org.demo.Demo", "org.demo.demo2.Demo2");
        pause();             // Wait for the requested answer
        press("Name", DOWN); // Simulate typing in the "Name" field
        click();             // Simulate a click event on the focused UI element
    }

    @Test
    public void addPerson() throws InterruptedException {
        // Assert the state of the save button
        assertTrue("not clicked", booleanAt("save.isPressed"));
        field1 = stringAt("listView.1.0.text"); // Read a string via a dot-separated path
    }
}

Listing 2: Test Class Structure with Positron

Positron provides a set of test methods representing user tasks for each test class. This ensures that tests can be run independently of each other. Positron also provides a synchronization mechanism between the application under test and the tests for concurrent access of GUI elements during testing.

4.1.1.3 Android Instrumentation vs. Positron Framework

Comparing both Android Frameworks, the Android Instrumentation Framework is a low-level API that simulates user interactions while “Positron provides an abstracted comfortable high-level interface for writing GUI tests”[3].

The Android Instrumentation Framework accesses context and widget of the activity to validate directly during testing which ensures efficient runtime and fast response. The Positron Framework however has to connect to the application under test each time the test class needs to use activity resources. This slows down the test runtime execution.

Because of its low-level API, the Android Instrumentation Framework requires writing more test code, which increases the error rate and also causes higher maintenance effort. The Positron framework provides a high-level interface for writing automated GUI tests, which significantly reduces the effort for both writing and maintaining test code.

Strengths of both frameworks are:

- use of instrumentation for handling UI resources through the activity

- user interactions simulation by sending key events

- execution on the target platform

- usage of standard JUnit assertions to verify GUI behavior and states

Weaknesses of both frameworks are:

- testers must have detailed knowledge of the source code under test to find the UI resources in the code

4.1.2 Sikuli

Sikuli Test is a GUI testing framework that enables automation of testing tasks. “It allows testers to write visual scripts to automate tests, to refer to GUI objects by their visual representation directly, and to provide robustness to changes in spatial arrangements of GUI components”[7]. A script uses action statements to simulate the interactions and assertion statements to visually verify the outcomes of the interactions.

Test scripts under Sikuli Test can be written to test traditional desktop GUI applications on Windows and Mac OS X, as well as mobile applications in an Android emulator and iOS simulators.

Fig. 7: Sikuli Test Interface

To simulate the interactions involved in a test case, action statements can be written using the API defined in Sikuli Script. “Sikuli Script is a visual automation system that provides a library of functions to automate user inputs such as mouse clicks and keystrokes”[7]. These library functions take screenshots of GUI components as arguments. Given the image of a component, Sikuli Script searches the whole screen for the component to deliver the actions.

“Since Sikuli Script is based on a full scripting language, Python, it is possible for QA testers to programmatically simulate a large variety of user interactions, simple or complex”[7]. Sikuli Test provides two visual assertion functions. Outcomes of a test are verified by using these Visual Assertion Statements.

These two assertion functions are:

- assertExist(image or string [, region]) asserts that an image or string should appear on screen or in a specific screen region

- assertNotExist(image or string [, region]) asserts that an image or string should not appear on screen or in a specific screen region

Sikuli Test also provides a record-playback utility that enables automation of GUI testing. “The operation of a GUI can be described as a cycle consisting of actions and feedback”[7].

While executing testing actions that operate a GUI, it is important to verify that the visual feedback matches the expected outcome. “With the record-playback mechanism, the testers can demonstrate the interactions involved in the test case”[7]. During recording, the actions and the “screen are recorded and translated into a sequence of action and assertion statements automatically”[7]. When the script is executed, the action statements replicate the actions, and the assertion statements verify that the automated interactions bring the desired visual feedback.

Sikuli Script supports testing by minimizing the effort needed to write test scripts. Sikuli Test currently has two limitations:

- Sikuli Test is unable to detect unexpected visual feedback

- Sikuli Test is unable to test the GUI’s internal functionalities.
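The action/assertion structure of such scripts can be sketched in plain Python. The stubbed screen model, element names and feedback below are invented for illustration only; real Sikuli Script operates on actual screenshots and image matching rather than a set of names:

```python
# Stub of the currently visible GUI elements; real Sikuli works on
# screenshots and template matching, not on a set of names.
screen = set()

def click(target):
    """Action statement: simulate clicking an element and its visual feedback."""
    if target == "compose_button.png":
        # Clicking the compose button opens the editor (stubbed feedback)
        screen.add("editor_window.png")

def assertExist(image):
    """Assertion statement: the element must be visible on screen."""
    assert image in screen, image + " not found on screen"

def assertNotExist(image):
    """Assertion statement: the element must not be visible on screen."""
    assert image not in screen, image + " unexpectedly on screen"

# A test case is a sequence of actions followed by visual assertions
screen.add("compose_button.png")
assertNotExist("editor_window.png")
click("compose_button.png")
assertExist("editor_window.png")
```

The record-playback utility produces exactly this shape of script automatically: each demonstrated action becomes an action statement, and the observed screen change becomes the following assertion.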

4.2 JaBUTi/ME: White Box Testing

White box testing is a technique based on the internal structure of a given implementation, from which the test requirements are derived. In general, “white box testing criteria use a representation known as Control Flow Graph (CFG) to abstract the structure of the program or of part of the program, as a procedure or method”[4].

JaBUTi (Java Bytecode Understanding and Testing Tool) is a complete tool suite for understanding and testing Java programs and Java-based components.

JaBUTi differs “from other testing tools because it performs the static analysis directly on Java bytecode, not on the Java source code”[4].

In order to apply structural testing techniques, a tool is necessary to perform static analysis, code instrumentation, requirement computation and coverage analysis.

JaBUTi performs the following tasks:

- Static analysis – the program is parsed and the control- and data-flow information is abstracted in the form of def-use graphs. Other information, for instance call graphs, inter-method control-flow graphs and data-flow data, can also be gathered.

- Requirement computation – based on the information collected in the task above, JaBUTi computes the set of testing requirements. Such requirements can be a set of nodes, edges or def-use associations.

- Instrumentation – in order to measure coverage, i.e., to know which requirements have been exercised by the test cases, it is necessary to know which pieces of the code have been executed. The most common way to do this is by instrumenting the original code. Instrumentation consists of inserting extra code into the original program in such a way that the instrumented program produces the same results as the original program and, in addition, produces a list of “traces” reporting the program execution.

- Execution of the instrumented code – a test is performed using the instrumented code instead of the original program. If it behaves inappropriately, a fault is detected. Otherwise, a trace report is generated and the quality of the test set can be assessed based on such a report.

- Coverage analysis – confronting the testing requirements and the paths executed by the program, the tool can compute how many of the requirements have been covered. It can also give an indication of how to improve the test set by showing which requirements have not been satisfied.
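The coverage-analysis task can be sketched as follows. The requirement set (edges of a control-flow graph) and the traces are invented for illustration; JaBUTi derives the requirements from the def-use graphs and obtains the traces from the instrumented runs:

```python
# Testing requirements: edges of a control-flow graph that must be exercised
# (illustrative, not computed by JaBUTi itself)
requirements = {("n1", "n2"), ("n1", "n3"), ("n2", "n4"), ("n3", "n4")}

# Each trace is the sequence of CFG nodes one test execution passed through
traces = [["n1", "n2", "n4"], ["n1", "n2", "n4"]]

def covered_edges(trace):
    """Edges exercised by one execution: consecutive node pairs in the trace."""
    return set(zip(trace, trace[1:]))

executed = set().union(*(covered_edges(t) for t in traces))
covered = requirements & executed
uncovered = requirements - executed     # shows how to improve the test set
coverage = len(covered) / len(requirements)
```

Here both traces take the same path, so coverage is 50% and the uncovered edges through n3 point directly at the test cases that are still missing.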

The extension of JaBUTi to deal with mobile environment is named JaBUTi/ME. “With JaBUTi/ME it is possible to execute the test case on the real environment and still apply structural testing criteria with the supporting tool”[4].

Fig. 8: Server based testing JaBUTi/ME

When instrumenting code, some parameters are defined in JaBUTi that can be chosen when testing a mobile application. These parameters are: the address of the test server, the identification name of the program being tested, the name of the file used for temporary storage of trace data, the minimum amount of available memory, and whether or not to keep the connection open.

- The address of the test server determines the IP address and the port to which the connection for sending trace data should be established.


- The identification name of the program being tested allows the instrumented code to identify itself to the test server when sending trace data.

- The name of the file used for temporary storage of trace data is optional. If provided, it instructs the code inserted into the instrumented program to store all the trace data until the end of execution and then send it at once to the test server. The data is stored in a temporary file in the mobile device’s file system.

- The minimum amount of available memory is a parameter that helps the tester determine the amount of memory the instrumented code may use to store trace data before deciding to send it to the test server.

- Keeping or not keeping the connection is a parameter that controls the connection behavior. By default, the instrumented code creates a single connection to the test server at the beginning of the test case execution and keeps it open until the program ends. If the cost of keeping the connection open is prohibitive, this parameter can be used to instruct the instrumented code to create a new connection each time one is needed.
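The interplay of the memory threshold and the batched sending of trace data can be sketched as follows. `TraceBuffer` and its member names are hypothetical, not the JaBUTi/ME API, and the actual network send to the test server is stubbed out:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the buffering policy: trace entries are kept on the
// device until a size threshold is reached, then flushed to the test server
// in one batch, so a connection is only needed at flush time.
public class TraceBuffer {
    private final int maxBuffered;            // stands in for the memory limit
    private final List<String> buffer = new ArrayList<>();
    public int flushes = 0;                   // how often we "connected"

    public TraceBuffer(int maxBuffered) { this.maxBuffered = maxBuffered; }

    public void record(String traceEntry) {
        buffer.add(traceEntry);
        if (buffer.size() >= maxBuffered) flush();
    }

    public void flush() {                     // would open a socket to the server
        flushes++;
        buffer.clear();
    }
}
```

A small threshold trades memory pressure on the device against more frequent (and possibly costly) connections, which is exactly the trade-off the two parameters above expose.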

4.3 MobileTest: Black Box Testing

The automatic black box testing tool for mobile devices introduced in this chapter is MobileTest.

With MobileTest it is possible to build maintainable and reusable test cases for testing system-level and application-level software on various mobile devices. MobileTest observes the input and output information of an application. “From the input perspective, a mobile application receives two kinds of inputs“[6]. The first kind of input comes from the user GUI, such as keyboard events and touch events. The second kind consists of environmental context events.

“MobileTest enables support to test features such as interactive operations, volume, multiple states, boundary test and multiple task”[2].

MobileTest architecture

The architecture of MobileTest subdivides the system environment into layers to reduce the complexity of the system. Each layer provides services to upper layers with the support of lower layers. In this way, the test control layer can be separated from the characteristics of the underlying devices.

Fig. 9: MobileTest architecture

According to Bo, Xiang and Xiaopeng, “the system is composed of four layers” [6]:

1) The User Interface Layer interacts with testers. It runs on the workstation.

2) The Test Control Layer executes the test scripts. It sends simulated operations to target devices, receives screenshots and sensitive events from the target devices and further controls the test process according to sensitive events. This layer also runs on the workstation.

3) The Communication Layer connects the Test Control Layer and the Device Agent Layer. It runs on the workstation and target devices.

4) The Device Agent Layer receives commands from the upper layer, executes them and sends the status of the target device back.


The basic usage scenario of MobileTest is shown in Figure 10:

Fig. 10: Basic usage of MobileTest

With reference to Figure 10, the basic usage is as follows:

1) Configure the target device and the test environment.

2) Make test plans and write test scripts in the script editor manually or generate test scripts using the virtual devices.

3) Scripts are scheduled to run.

4) “When scripts are interpreted, the script interpreter uses the uniform interfaces provided by communication layer to send simulated keys or other input information to the Device Agent running in the target device and then the script interpreter suspends itself”[6].

5) The Device Agent simulates the keys on the target device and returns the screenshots and results to the test process monitor module with the help of the communication module. “The Device Agent may also notify the test process monitor about the sensitive events automatically”[6].

6) After receiving feedback, the test process monitor activates the script interpreter to continue with the next statements; the results are saved in the test resource library.
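Steps 4) to 6) can be sketched as a synchronous loop: one simulated input is sent per script statement, and the interpreter blocks until feedback arrives. `ScriptInterpreter` and `DeviceAgent` are illustrative stand-ins, not the MobileTest API; a real Device Agent would run on the handset and answer over the communication layer.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the interpreter/agent interaction described above.
public class ScriptInterpreter {
    // Stand-in for the Device Agent reachable through the communication layer.
    public interface DeviceAgent { String execute(String simulatedKey); }

    public static List<String> run(List<String> script, DeviceAgent agent) {
        List<String> results = new ArrayList<>();
        for (String statement : script) {
            // Send the simulated key, then "suspend" until the agent reports back.
            results.add(agent.execute(statement));
        }
        return results;                       // stored in the test resource library
    }
}
```

The synchronous call models the interpreter suspending itself; in MobileTest the resume is triggered asynchronously by the test process monitor.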


4.4 List of available testing tools

The table below lists available testing frameworks together with the platforms on which they can be used.

Automation Tools - Platforms

Sikuli - iOS/Android OS
Robotium - Android OS
Perfecto Mobile - Blackberry OS
UISpec - iOS/Windows Mobile
eggPlant - Symbian
MonkeyTalk - iOS/WebOS/Android OS
Positron - Android OS
Fledge - RIM
QTP - iOS/WebOS/Android OS
CTS - Android OS
MObilePBDB - J2ME
PJUnit - J2ME
Hermes - J2ME
J2MEUnit - J2ME
GlassjarToolkit - J2ME
JaBUTi - J2ME
MobileTest - Java based components

Fig. 11: Available mobile automation testing tools and the platforms they run on

Fledge: BlackBerry device simulator that enables mobile application testing on a Personal Computer. Various connectivity and state changes can be simulated during tests.

MonkeyTalk (formerly known as FoneMonkey): offers the possibility to save, load, read and modify scripts. FoneMonkey is designed to support developers and quality control testers. FoneMonkey tests can easily be incorporated into different integration environments. FoneMonkey automates testing on iOS simulators, Android emulators or real devices.

QTP (QuickTest Professional): an automated testing tool provided by HP/Mercury Interactive. QTP uses the VB scripting language to build its flows. It provides automated and regression testing and also generates test scripts that can be executed on local or remote mobile devices. QTP utilizes an add-in architecture for compactness and extensibility:

- SeeTestTM developed by Experitest is a mobile test automation tool for iPhone, Android, Blackberry, and Windows Mobile that plugs into QTP.


- DeviceAnywhere has integrated its mobile test automation platform with QTP to enable testers to access DeviceAnywhere through their QTP environment.

- MobileCloud developed by Perfecto Mobile plugs into QTP to offer native QTP scripting and flow control together with cloud-based automated mobile testing technology.

CTS (Compatibility Test Suite): runs on a Personal Computer and manages test execution. Individual test cases are executed on attached mobile devices or on an emulator. The test cases are written in Java as JUnit tests and packaged as Android .apk files to run on the actual device target.

UISpec: Behaviour Driven Development framework for the iPhone that provides a fully automated testing solution driving the actual iPhone UI. It is modelled after the very popular RSpec for Ruby.

MObilePBDB (Mobile Performance Benchmark Database): a method for performance unit testing in an emulator-based test environment after data has been collected from benchmark testing on a real target device.

PJUnit: a tool for performance testing at the unit test level. It measures the time taken between methods and finds the methods that are frequently called in a specified program, which helps enhance performance and identify bottleneck sections by removing unnecessary code and preventing object creation [1].
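The basic mechanism behind such unit-level performance measurement, timing a single method call, can be sketched in plain Java. `MethodTimer` is illustrative only and not part of PJUnit:

```java
// Minimal sketch: measure how long one method invocation takes. A real
// performance-testing tool would aggregate such measurements per method
// and per call site to locate bottlenecks.
public class MethodTimer {
    public static long timeNanos(Runnable method) {
        long start = System.nanoTime();
        method.run();
        return System.nanoTime() - start;     // elapsed wall-clock time in ns
    }
}
```

Repeating the measurement and averaging is advisable in practice, since single-shot timings on a mobile VM are noisy.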

White box automation testing tools

Other testing tools supporting functional control-flow and data-flow testing are J2MEUnit and GlassjarToolkit. However, J2MEUnit and GlassjarToolkit only allow testing on a desktop computer through emulators and not, like JaBUTi, directly on the handheld device.

Black box automation testing tools

Hermes: black box automation tool for testing J2ME applications. Hermes supports application independence and open interfaces for extensibility. It manages testing of an application on physical devices. “In Hermes tests are represented according to an XML schema that is generic to allow multi-faceted tests concerning an application’s function, aesthetics and operating environment to be described” [5].

Eggplant: black box automation tool that runs on Mac OS X and Linux. It can test applications on a vast range of other platforms because it connects to and controls them using VNC (Virtual Network Computing).

Robotium: black box automation tool for Android mobile applications. Robotium is a UI testing tool. The framework provides APIs to test the various kinds of widgets/UI elements present in mobile applications developed with the Android SDK.


5 Conclusion

Application development for mobile devices is evolving. The strategies presented in this thesis discuss the particularities of mobile device applications and how important it is to plan a test strategy that is mobile-specific. Unique challenges of mobile devices, such as device challenges and software challenges, need to be considered because traditional testing does not cover all characteristics important for mobile applications. Adequate automation test tools and test methods should be employed to achieve a successful testing result.

Below is a final guideline for testing mobile applications:

1) Network landscape and device landscape should be understood before testing to identify bottlenecks.

2) Testing in a real environment should be conducted.

3) An adequate automation test tool should be used.

Rules for an ideal tool are:

- One tool should support all desired platforms. The tool should support testing for various screen types, resolutions, and input mechanisms, such as touchpad and keypad.

- The tool should be connected to the external system to carry out end-to-end testing.

4) The Weighted Device Platform Matrix method should be used to identify the most critical hardware/platform combinations to test. This method is especially useful when the number of hardware/platform combinations is high and the time to test is short.

5) End-to-end functional flow on all possible platforms should be checked at least once.

6) Performance testing, GUI testing, and compatibility testing using actual devices should be conducted. Even if the tests can be done using emulators, testing with real devices is recommended.

7) Performance should be measured only in realistic conditions of wireless traffic and user load.


6 List of Literature

[1] Heejin Kim, Byoungju Choi, W. Eric Wong, ‘Performance Testing of Mobile Applications at the Unit Test Level’, Third IEEE International Conference on Secure Software Integration and Reliability Improvement, 2009

[2] Jiang Bo, Long Xiang, Gao Xiaopeng, ‘MobileTest: A Tool Supporting Automatic Black Box Test for Software on Smart Mobile Devices’, AST ’07, Proceedings of the Second International Workshop on Automation of Software Test, 2007

[3] Martin Kropp, Pamela Morales, ‘Automated GUI Testing on the Android Platform’, ICTSS, Proceedings of the International Conference on Testing Software and Systems, 2010

[4] M. E. Delamaro, A. M. R. Vincenzi, J. C. Maldonado, ‘A strategy to perform coverage testing of mobile applications’, AST ’06, Proceedings of the 2006 International Workshop on Automation of Software Test, 2006

[5] Sakura She, Sasindran Sivapalan, Ian Warren, ‘Hermes: A Tool for Testing Mobile Device Applications’, Australian Software Engineering Conference, 2009

[6] Selvam R, Dr. Karthikeyani V, ‘Mobile Software Testing – Automated Test Case Design Strategies’, ISSN: 0975-3397, Vol. 3, No. 4, Apr 2011

[7] Tanya Dumaresq, Matt Villeneuve, ‘Test Strategies for Smartphones and Mobile Devices’, Macadamian Technologies Inc., 2010

[8] Tsung-Hsiang Chang, Tom Yeh, Robert C. Miller, ‘GUI Testing Using Computer Vision’, CHI 2010, April 10-15, 2010, Atlanta, Georgia, USA

[9] Zhang, D. and Adipat, B., ‘Challenges, Methodologies, and Issues in the Usability Testing of Mobile Applications’, International Journal of Human-Computer Interaction (IJHCI), vol. 18, no. 3, 2005, pp. 293-308

[10] LogiGear Magazine, Mobile Application Testing, November 2011, Volume V, Issue 7, www.logigearmagazin.com

[11] White Paper: ‘Testing Mobile Business Applications’. Available at <www.keane.com>, accessed 20 May 2012

[12] http://developer.android.com/guide/topics/testing/testing_android.html

[13] http://www.mobileappstesting.com/tag/list-of-mobile-application-automation-tools/

[14] eBook: A Guide to Mobile App Testing, uTest, Inc. Available at <www.utest.com>


7 List of Figures

Fig. 1: Available mobile manufacturers

Fig. 2: Available mobile operation systems

Fig. 3: Keane’s Recommended Strategy for Testing

Fig. 4: Weighted Device Platform Matrix

Fig. 5: Testing framework of Android

Fig. 6: Android Instrumentation Framework Class Diagram

Fig. 7: Sikuli Test Interface

Fig. 8: Server based testing JaBUTi/ME

Fig. 9: MobileTest architecture

Fig. 10: Basic usage of MobileTest
