
Author: Stefan Fredriksson
Supervisor: Jesper Andersson
Semester: VT 2020
Subject: Computer Science

Bachelor Degree Project

WebAssembly vs. its predecessors

A comparison of technologies


Abstract

For many years it has only been HTML, CSS, and JavaScript that have been native to the Web. In December 2019, WebAssembly joined them as the fourth language to run natively on the Web. This thesis compared WebAssembly to the technologies ActiveX, Java applets, Asm.js, and Portable Native Client (PNaCl) in terms of their performance, security, and browser support. The reason why this was an interesting topic to investigate was to determine in what areas WebAssembly is an improvement over previous similar technologies. Another goal was to provide companies that still use older technologies with an indication as to whether or not it is worth upgrading their system with newer technology. To answer the problem, the thesis mainly focused on comparing the performance of the technologies through a controlled experiment. The thesis also aimed at getting a glimpse of the security differences between the technologies by the use of a literature study. The thesis showed that PNaCl was the technology with the best performance. However, WebAssembly had better browser support. Also, PNaCl is deprecated while WebAssembly is heavily supported and could potentially be further optimized.

Keywords: WebAssembly, wasm, ActiveX, Java applet, applet, Asm.js, Portable Native Client, PNaCl, Performance, Security, Browser support, Dynamic Web


Preface

I would like to thank my supervisor during this thesis, Jesper Andersson, for guiding me and coming up with ideas I would not have had without him. I would also like to thank my peers who reviewed the thesis during its different stages.


Contents

List of Figures
List of Tables
List of Listings
1 Introduction
  1.1 Background
  1.2 Related work
  1.3 Problem formulation
  1.4 Objectives
  1.5 Scope/Limitation
  1.6 Target group
  1.7 Outline
2 Method
  2.1 Scientific Approach
    2.1.1 Literature study
    2.1.2 Controlled Experiment
  2.2 Reliability and Validity
3 Dynamic web performance
  3.1 Dynamic web
  3.2 Technologies server-side
  3.3 Technology overview
    3.3.1 Java Applets
    3.3.2 ActiveX
    3.3.3 Asm.js
    3.3.4 Portable Native Client
    3.3.5 WebAssembly
  3.4 Performance Testing
4 Data Collection
  4.1 Design
    4.1.1 Design for collecting performance data
    4.1.2 Design for collecting data on security
  4.2 Performance experiment preparations
    4.2.1 Browser settings
    4.2.2 Compilation tools
  4.3 Performance test execution
    4.3.1 Readying machine for testing
    4.3.2 Running the tests
    4.3.3 Hardware and Software
5 Results
  5.1 Performance experiment
    5.1.1 Result explanation
    5.1.3 Array application
    5.1.4 Numeric application
  5.2 Qualitative results
    5.2.1 Security
    5.2.2 Browser support
6 Analysis
  6.1 Performance experiment
    6.1.1 Execution time
    6.1.2 Load time
    6.1.3 CPU usage
    6.1.4 Memory usage
  6.2 Qualitative results
    6.2.1 Security
    6.2.2 Browser support
7 Discussion
  7.1 Execution time & Load time
  7.2 CPU & RAM Usage
  7.3 Security
  7.4 Browser support
  7.5 Summary
8 Conclusions & Future work
References
A Appendix 1
B Appendix 2


List of Figures

3.1 High-level flow of a server-side scripting application.
3.2 High-level flow of a client-side scripting application.
3.3 High-level flow of a Java applet application.
3.4 High-level flow of an ActiveX application.
3.5 High-level flow of an Asm.js application.
3.6 High-level flow of a Portable Native Client application.
3.7 High-level flow of a WebAssembly application.
4.1 Design of the test process flow.
5.1 Execution times of the Fibonacci application.
5.2 Load times of the Fibonacci application.
5.3 CPU usage of the Fibonacci application running on the desktop.
5.4 CPU usage of the Fibonacci application running on the laptop.
5.5 Memory usage of the Fibonacci application running on the desktop.
5.6 Memory usage of the Fibonacci application running on the laptop.
5.7 Execution times of the Array application.
5.8 Load times of the Array application.
5.9 CPU usage of the Array application running on the desktop.
5.10 CPU usage of the Array application running on the laptop.
5.11 Memory usage of the Array application running on the desktop.
5.12 Memory usage of the Array application running on the laptop.
5.13 Execution times of the Numeric application.
5.14 Load times of the Numeric application.
5.15 CPU usage of the Numeric application running on the desktop.
5.16 CPU usage of the Numeric application running on the laptop.
5.17 Memory usage of the Numeric application running on the desktop.
5.18 Memory usage of the Numeric application running on the laptop.
6.1 Box plot showing the variance of execution times for the different technologies running the Fibonacci application.
6.2 Box plot showing the variance of execution times for the different technologies running the array application.
6.3 Box plot showing the variance of execution times for the different technologies running the numeric application.
6.4 Box plot showing the variance of load times for the different technologies running the Fibonacci application.
6.5 Box plot showing the variance of load times for the different technologies running the array application.
6.6 Box plot showing the variance of load times for the different technologies running the numeric application.
6.7 Box plot showing the variance of CPU usage for the different technologies running the Fibonacci application.
6.8 Box plot showing the variance of CPU usage for the different technologies running the array application.
6.9 Box plot showing the variance of CPU usage for the different technologies running the numeric application.
6.10 Box plot showing the variance of memory usage for the different technologies running the Fibonacci application.
6.11 Box plot showing the variance of memory usage for the different technologies running the array application.
6.12 Box plot showing the variance of memory usage for the different technologies running the numeric application.
C.1 Execution times of the Fibonacci application showing the execution time per browser.
C.2 Load times of the Fibonacci application showing the load time per browser.
C.3 Execution times of the Array application showing the execution time per browser.
C.4 Load times of the Array application showing the load time per browser.
C.5 Execution times of the Numeric application showing the execution time per browser.
C.6 Load times of the Numeric application showing the load time per browser.


List of Tables

4.1 Keywords used during the literature study.
4.2 Specification of the machines that the performance tests were executed on.
4.3 The versions of the software used in this study.
5.1 Browser support for each technology.


Listings

1 Installing the Emscripten SDK
2 WebAssembly compilation command for the array application
3 Makefile of the array PNaCl application
4 Installing the NaCl SDK
5 Make.bat file content of a PNaCl application
6 PNaCl compilation commands
7 Installing curl for Visual Studio 2017


This thesis is the final product of a bachelor's degree in computer science, consisting of 15 credits, at Linnaeus University.

1 Introduction

The Web is the largest platform in most of today's industries, and it is still growing; 95% of web sites use JavaScript as their front-end language[1]. The web sites of today are more demanding than those of the past, coming ever closer to native applications: they can include 3D visualization, audio and video editing, and 3D video games. However, JavaScript was not meant to handle such applications, and developers have therefore made many attempts to develop technologies for the Web that can. Recently, one of these technologies, WebAssembly, was declared a web standard as it runs natively in web browsers.

This thesis researched what WebAssembly does differently from some of the earlier attempts at implementing native-speed applications on the Web. The technologies compared to WebAssembly were Java applets, ActiveX, asm.js, and Portable Native Client.

1.1 Background

Before JavaScript existed, the Web was a much more static place, where web pages did little more than display information. In 1995, the Java programming language was released and, with it, the Java applets, which made it possible to embed small applications on a web page. These small applications made it possible to run visualizations and games on a web page[2].

The Java applets were a massive success, and others wanted to share that success. In the same year as the release of the Java applets, Brendan Eich created the JavaScript language in ten days[3]. In 1996, Microsoft joined in by releasing ActiveX for their Internet Explorer web browser. Like the Java applets, ActiveX allowed developers to embed small applications on a web page that could perform the same functionality as native applications[4].

Java applets ruled the web industry for over a decade, only challenged by Microsoft, but were eventually replaced with JavaScript as web browsers started using compilers aimed at JavaScript[5]. While JavaScript has improved in performance over the years, it still comes up short when put under heavy pressure. Mozilla tried to improve the performance of JavaScript in 2013 by making it a compile target through asm.js. Making JavaScript a compile target allowed developers to write their applications in lower-level languages such as C and C++ and then compile the code into optimized JavaScript code through a tool such as Emscripten[6].

Google attempted to increase the performance of applications running on web pages through the use of their Native Client (NaCl) and later the Portable Native Client (PNaCl)[7].

In 2015, the World Wide Web Consortium (W3C) announced they were working on WebAssembly with help from all major browser vendors. It was released in 2017, and in December 2019 it was declared a web standard, becoming the fourth language to run natively in web browsers, HTML, CSS, and JavaScript being the other three[8], [9], [10].

Another technology released in 1995 that will not be part of this study but is still worth mentioning is FutureSplash, which was renamed Flash in 1996. Flash has been a significant part of the Web for just as long as the Java applet; however, in 2017 Adobe announced that support for Flash would end on December 31st, 2020. The main reason for this is that newer technologies such as HTML5, WebGL, and WebAssembly are better options[11], [12].

While Java applets and ActiveX were originally created to produce a more interactive web, they remained popular because of their large performance advantage over the JavaScript of the time. Asm.js, PNaCl, and WebAssembly all try to achieve native performance on the Web; it is therefore interesting to compare the different technologies to one another to determine which one offers the best performance on the Web. While Java applets and ActiveX are very old technologies and are no longer recommended for use, there are still websites out there that use them. One reason some companies might not have changed technology could be that they already have a working product and do not see a reason to spend time redeveloping it if the other technologies offer the same performance. This study aims at highlighting the differences in performance between the technologies so that legacy applications can be updated using modern technologies.

1.2 Related work

Finding research that compares the technologies to one another was difficult. Most performance tests were between JavaScript, WebAssembly, and native code, since WebAssembly's main goal is to rival native performance on the Web. One article that mainly focuses on comparing WebAssembly to native performance does include a short performance comparison between WebAssembly and asm.js.

Abhinav Jangda et al. [13] give a brief performance comparison between WebAssembly and asm.js in the Google Chrome and Firefox web browsers. While the majority of their paper focuses on comparing WebAssembly to native code, which is outside this thesis's scope, their comparison between WebAssembly and asm.js is relevant to this thesis's research. In their paper, they concluded that WebAssembly is faster than asm.js, with a 1.54x mean speedup in Chrome and 1.39x in Firefox. At the end of their paper, they include a short description of all but one (Java applets) of the technologies covered in this thesis and why WebAssembly is the preferred choice. For ActiveX, they mention that its unrestricted access to the user's system was one of its downfalls. For PNaCl, they mention that, while it is an improvement over NaCl, it still "exposes compiler and/or platform-specific details such as the call stack layout." For asm.js, they mention that adding new features, such as 64-bit integers, would require extending JavaScript as a whole. Some improvements WebAssembly provides compared to asm.js that they mention are "(i) WebAssembly binaries are compact due to its lightweight representation compared to JavaScript source, (ii) WebAssembly is more straightforward to validate, (iii) WebAssembly provides formal guarantees of type safety and isolation, and (iv) WebAssembly has been shown to provide better performance than asm.js."

In a non-scientific article, David Tippett [14] compares the performance of WebAssembly and PNaCl using multi-threading when rendering PDF documents in Chrome. They concluded that WebAssembly is faster than PNaCl in certain situations but slower in others. WebAssembly proved faster the first time a user viewed a document: it loaded their viewer library faster before the viewer was cached on the client, and it also performed basic math operations faster than PNaCl. WebAssembly proved to be slower when rendering the PDF documents, however, and by quite a margin. For their simple and moderately complex documents, PNaCl was 62% and 23% faster, respectively, and for their large and complex documents, PNaCl proved to be 122% faster than WebAssembly.

It is fascinating that PNaCl appears to be much faster than WebAssembly during heavy work such as rendering PDF documents. If PNaCl is indeed as fast as the article claims, the question becomes whether WebAssembly compensates by outperforming PNaCl in other areas by an equal or even more significant margin.

1.3 Problem formulation

Since the main objective of the technologies covered in this study is to increase performance on the Web, it seems strange that WebAssembly is heavily recommended yet outperformed by PNaCl by a significant margin. If WebAssembly is indeed slower than PNaCl, there is a possibility that WebAssembly is an improvement in other areas. One important area for technologies that can be used on the Web is security. If WebAssembly is slower but more secure, that could be enough to recommend it over other technologies.

No studies were found that compare WebAssembly to older technologies such as Java applets and ActiveX. It is therefore of interest for companies who still use those technologies to get a view of how they compare to newer technologies. If it turns out that Java applets or ActiveX outperform WebAssembly as well, then what is it that makes WebAssembly so great compared to the older technologies? Also, if WebAssembly does not have greater performance than some of the other technologies, or at least does not heavily outperform them, the cost of migrating to WebAssembly might be too large to warrant the migration. This could be especially true for systems that are only used locally by companies and are not accessible from the open Web; if a system is only used locally, security might not be a considerable concern. If WebAssembly is the quickest technology, but only by a small margin, the migration to WebAssembly might take longer than the time it would save in execution time. Since no performance comparisons were found between ActiveX or Java applets and WebAssembly, it would be difficult for companies that use these old technologies to determine the value of migrating to WebAssembly. One could argue that ActiveX and Java applets have been dead for years, should not be in use any longer, and that companies using them should have started migrating years ago. However, for some reason, some companies have decided not to migrate yet, which could potentially be attributed to one of the reasons mentioned above. The issue that this thesis therefore tries to solve is to provide these companies with the information needed to decide whether or not it is worth migrating to newer technology, more specifically to WebAssembly.

The problems this thesis aimed at solving were:

• Determine if PNaCl, or any of the other technologies, have better performance than WebAssembly and by what margin.

• If any of the other technologies outperform WebAssembly, determine in what other areas WebAssembly might be an improvement, or whether it also lacks compared to the older technologies in those areas. The other areas included in this study are security and browser support.


1.4 Objectives

The main objective of this thesis is to perform a controlled experiment that compares the performance of the technologies mentioned earlier. The goal of the controlled experiment is to establish a hierarchy between the technologies in terms of performance. To perform the experiment, a test environment was set up, and applications were developed using the technologies.

While the performance experiment was the main focus, there was also a comparison in terms of security and browser support.

The results of the performance and security comparisons are used to provide insight into whether or not it is worth migrating a legacy system to newer technology.

1.5 Scope/Limitation

The technologies covered in this research are not all the technologies aimed at improving performance on the Web. Technologies such as Flash and Silverlight were left out because of time restrictions but would have been suitable candidates to include in this research.

1.6 Target group

The main target group of this thesis is companies or developers that have existing implementations of systems using either ActiveX, Java applets, Asm.js, or PNaCl. These systems should be focused on performance since that is what this thesis studied. The companies or developers would have an interest in updating their current system by migrating to WebAssembly.

1.7 Outline

The next chapter covers the methods used during the research. Chapter 3 provides an overview of the different technologies, more in-depth than the background, and mentions some key aspects of performance testing. Chapter 4 covers how the data was collected. Chapter 5 covers the results of the performance tests and also explains in more depth the differences between the technologies. Chapter 6 gives an analysis of the results, while Chapter 7 discusses the results in a more personal way. Chapter 8 ends the research with a conclusion.


2 Method

This chapter explains the methods used to solve the research questions from the Introduction chapter.

2.1 Scientific Approach

To answer the question of what WebAssembly does differently compared to the older technologies in terms of security, a case study was performed by gathering data through a qualitative literature study. The literature study was performed by searching for articles on Google Scholar, OneSearch, and when they came up short, through regular Google.

To answer the question of which technology has the best performance, a quantitative controlled experiment was performed. Small applications were developed for each technology and then compared to one another through a series of performance benchmarks.

2.1.1 Literature study

A literature study is performed by reading and synthesizing data from published articles within the area of the subject. By looking at what others have researched and comparing the results of that research to what others have concluded, it allows the one performing the literature study to draw conclusions on their own. Since the only data gathered in a literature study comes from what others have published, no "firsthand" data is collected.

Articles are often collected from popular digital libraries, and the decision of which articles to include in the literature study is made by defining inclusion and exclusion criteria. Once the criteria are defined, the abstract section of the articles is put through the inclusion and exclusion criteria to determine if they are worth reading in their full text.

In the case of this thesis, the literature study was used to determine differences between the technologies in terms of security. The literature study was performed by searching for articles on Google Scholar and OneSearch and, as a last resort when no articles were found, regular Google. If regular Google was used, the only sources of information included were articles written by people in the field of computer science and information gathered from the technologies' official homepages and specifications.

To determine whether an article seemed to contain information about the security of one of the technologies, the article's abstract was read. If the abstract mentioned one of the technologies, the article was briefly read in its full text. The brief reading of an article was used to determine if it, indeed, contained information about the security of one of the technologies. If the brief reading proved successful, the article was read more thoroughly. The literature study does not give rise to any direct or indirect ethical considerations.

2.1.2 Controlled Experiment

A controlled experiment is generally when a system is tested inside a controlled environment. The test produces quantitative data that is used to answer some research question. A controlled experiment consists of one or more dependent and independent variables. The independent variables are things that can be modified manually, and the dependent variables are the results of the test, which will differ depending on the independent variables.

The controlled experiment of this study was used to compare the performance of the technologies covered in this study. The controlled experiment consists of four independent variables: technology, browser, application, and hardware. The dependent variables are response time, throughput, and capacity. The response time is how long it takes for the web page to load, from the request to the server to the embedded application being finished. Throughput will be measured by the time it takes for the embedded application to finish performing its functions; this does not include the response time from the server. Capacity will be measured by including the CPU and RAM usage of the computer while the embedded application is performing its functions.
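The throughput measurement described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not the thesis's actual harness; runWorkload() is a hypothetical stand-in for an embedded application's computation.

```javascript
// Hypothetical stand-in for the embedded application's work.
function runWorkload() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
  return sum;
}

// Time a workload using the high-resolution timer available both in
// browsers and in modern Node.js (performance.now, milliseconds).
function measureExecutionTime(workload) {
  const start = performance.now();
  workload();
  return performance.now() - start; // elapsed milliseconds
}
```

In a real run, the same pattern would wrap the call into the applet, ActiveX control, asm.js, PNaCl, or WebAssembly module instead of a plain JavaScript loop.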

In this thesis’s study, the results were calculated using the median value; this was to help eliminate any outlying values that could arise if some background process on the computer starts while the tests are running.
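The median calculation used to dampen outliers can be expressed directly. This is a generic sketch of the statistic, not code taken from the thesis.

```javascript
// Median of repeated measurements: sort a copy, then take the middle
// value (odd count) or the mean of the two middle values (even count).
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}
```

A single slow run caused by a background process barely moves the median, whereas it would visibly shift the mean.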

2.2 Reliability and Validity

There are certain factors that could be considered to lessen the validity of the study. One such factor is that only devices running the Windows operating system were used when performing the tests. The reason for this is that ActiveX does not work on other operating systems and because there was no access to other devices at the time of this study. Also, the applications developed in this thesis are all small and not representative of real-life applications. Therefore, there is a possibility that the results produced by the experiment included in this thesis will not be the same as one would get when running a large-scale real-life application.

Another possible validity factor to consider is that background processes could affect the results of the performance tests. To decrease the risk of this happening, the choice of calculation (the median) helps to remove the impact of outlying results. Measures were also taken to prevent background processes from running while performing the tests. Running the tests multiple times also reduces the impact background processes could have on the results.

To help others reproduce the results from this study, the relevant hardware information is provided as well as the versions of any software used.

To verify that the results of the experiment are different enough to be worth considering, the results were put through tests to see if they show a statistically significant difference.

The programmer that developed the applications did not have any prior experience in developing applications using the technologies included in this thesis. Therefore, there is a possibility that the implementation of the applications could be faulty. However, to reduce the risk of any implementation faults occurring, official tutorials were used when developing the applications.


3 Dynamic web performance

This chapter gives a brief explanation of how web applications have evolved through the years and also explores whether the technologies covered in this thesis can be used on the server-side of web development as well. This chapter also explains how the technologies function, without going into too much detail; exploring each technology in depth would have consumed too much time. In the end, there is also a general look at how performance testing can be done.

3.1 Dynamic web

As mentioned earlier in Chapter 1, Java applets were the first successful attempt at making the Web more dynamic instead of static. Before Java applets, dynamic web pages were achieved by using server-side scripting. Server-side scripting entailed the browser sending a request to the server which modified the HTML page based on the request and then sent the HTML page back to the browser in the response. The communication between the browser and the server was most commonly achieved through the use of the Common Gateway Interface (CGI)[15]. Figure 3.1 aims at providing a visual representation of the flow of server-side scripting.

Figure 3.1: High-level flow of a server-side scripting application.

The success of the applets resulted in JavaScript being created, and the rise of JavaScript allowed dynamic web pages to include client-side scripting. In client-side scripting it is the browser that updates the HTML DOM of the web page; the server is mainly used to save and fetch data. Client-side scripting gave rise to a popular architecture in the shape of Single Page Applications (SPA). A web page that is a SPA never leaves the initial page. To simulate the changing of web pages, the browser updates the DOM through the use of JavaScript. This gives the web page a better user experience and also makes it feel more similar to a desktop application[16]. Figures 3.1 and 3.2 together provide a visual comparison of client-side and server-side scripting.


Figure 3.2: High-level flow of a client-side scripting application.
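The SPA idea from the previous paragraph can be reduced to a route table and a render function. This is an illustrative sketch, not from the thesis; the routes and renderRoute() are invented names.

```javascript
// A SPA never leaves the initial page: navigation is simulated by
// mapping a route to markup and swapping it into the DOM.
const routes = {
  '/': '<h1>Home</h1>',
  '/about': '<h1>About</h1>',
};

function renderRoute(path) {
  // In a browser this string would be assigned to a container
  // element's innerHTML; here we simply return it.
  return routes[path] || '<h1>Not found</h1>';
}
```

Because only the DOM changes, the browser skips the full page reload of server-side scripting, which is what makes a SPA feel closer to a desktop application.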

While the Java applets and ActiveX include their own graphical interfaces, the newer technologies do not. The reason is that JavaScript has evolved to the point of being close to unbeatable when it comes to graphical interfaces on the Web. The focus of the newer technologies is instead on performance, which is an area where JavaScript still falls short. Therefore, the newer technologies rely on JavaScript to handle most graphical components while the embedded technologies handle the heavy computational tasks.

3.2 Technologies server-side

While this thesis only used ActiveX, Java applets, asm.js, PNaCl, and WebAssembly on the client-side, some of the technologies can also be used on the server-side.

One example of an ActiveX control that is used as an HTTP/S server is "PowerTCP WebServer for ActiveX"; it also has support for the SOAP protocol.

Java applets do not natively run on the server-side, although it is possible to invoke an applet method from a server framework. The server-side equivalent of the Java applet is the Java servlet. Although servlets cannot handle a server on their own, they are used on top of an already existing server implementation; similar to how applets enhance the HTML page they are embedded on[17].

No uses of asm.js or PNaCl on the server-side could be found.

While WebAssembly’s main use case is within a web browser, it can also be used outside of the browser[18]. WebAssembly can be used server-side with tools such as Wasmer and WASI.
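That WebAssembly is usable outside a web page can be seen with the standard WebAssembly JavaScript API, which is available in Node.js as well as in browsers. The sketch below validates the smallest possible module (just the magic number and version); it is a demonstration of the API, not part of the thesis's experiment.

```javascript
// The smallest valid WebAssembly module: the "\0asm" magic number
// followed by binary format version 1, and nothing else.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm" magic number
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

// WebAssembly.validate checks the bytes without instantiating them.
const isValid = WebAssembly.validate(emptyModule);
```

Real modules produced by Emscripten carry code and export sections after this 8-byte header, but they pass through the same validate/compile/instantiate pipeline.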

3.3 Technology overview

The following section gives an overview of the different technologies covered in this study.

3.3.1 Java Applets

Java applets work by pointing an HTML applet element (since HTML 4 it is recommended to use the object element instead[19]) to the applet's Java class file. The class file contains bytecode which is executed by the Java Virtual Machine (JVM) on the user's machine, as can be seen in Figure 3.3. The JVM allows the Java applet to be accessible on any operating system that supports a JVM, which fulfils the "Write Once, Run Anywhere" goal of Java[20].

The JVM that runs the applet is separated from the browser on the operating system level. The applet loads in the background so that the web page remains responsive during loading. When the applet is finished loading, it appears on the web page and is ready to be interacted with. Unlike the JavaScript interpreter of the web browsers, which is single-threaded, the Java applet is multi-threaded. This is something that should be taken into consideration when developing applets that communicate with the JavaScript located on the web page[21].

Java applets are not widely used anymore, and with the release of Java 9 in 2017, the applets were deprecated[22]. Most websites that used Java applets are now switching to other alternatives. For example, the Massive Multiplayer Online Role-Playing Game (MMORPG) RuneScape removed their Java applet version of the game in December 2019[23].

Figure 3.3: High-level flow of a Java applet application.

3.3.2 ActiveX

ActiveX controls are built using the Component Object Model (COM) specification. COM was Microsoft’s attempt at making applications platform-independent. While it is possible to use languages such as C++ inside of an ActiveX control, its structure is built using COM. Data types defined in COM are meant to be interpreted the same way no matter what machine or platform runs it[24, p. 319-324].

When developing ActiveX controls, there are two libraries available: the Microsoft Foundation Classes (MFC) and the ActiveX Template Library (ATL). As of writing this thesis, both libraries are still available in the newer versions of Visual Studio[24, p. 320]. ActiveX controls support asynchronous loading, which avoids blocking the single thread of the web browser. They can also be embedded without a user interface and can instead be used for faster calculations by invoking methods of the ActiveX control using the JavaScript code in the browser[25].

Microsoft stopped supporting ActiveX development in 2015 and is instead supporting the use of more modern technologies such as HTML5, JavaScript, and WebAssembly modules. A clearer showcase of this is that ActiveX cannot run in Microsoft’s newer browser, Microsoft Edge[26].

ActiveX controls are embedded on a web page through the use of the HTML object element. The object element has a classid attribute that contains the id of the ActiveX control installed on the computer. If the control is not already installed, the object element also contains a codebase attribute which points to the location from which to download the control.
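A sketch of such an embedding is shown below. The GUID and the codebase URL are placeholders, not values from any real control.

```html
<!-- Hypothetical ActiveX embedding; the classid GUID and the codebase URL
     are placeholders for a real control's values. -->
<object id="myControl"
        classid="CLSID:00000000-0000-0000-0000-000000000000"
        codebase="http://example.com/MyControl.cab#version=1,0,0,0"
        width="0" height="0">
</object>
```

With width and height set to 0, the control has no visible user interface and can be invoked purely from JavaScript, as described above.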

The control gets installed on the user’s system and is executed with access to any part of the machine. A comparison between ActiveX and Java applet can be seen in Figures 3.3 and 3.4, which shows that the applet is contained within the JVM while the ActiveX control has full access to the machine.

Figure 3.4: High-level flow of an ActiveX application.

3.3.3 Asm.js

The specification of Asm.js describes the language as "a strict subset of the JavaScript language, providing a low-level, efficient target-language for compilers. Similarly to the C/C++ virtual machine, asm.js provides an abstraction through the use of a large binary heap with efficient loads and stores, integer and floating-point arithmetic, first-order function definitions, and function pointers"[6].

Unlike regular JavaScript, which uses Just-in-Time (JIT) compilers, asm.js can be compiled Ahead-of-Time (AOT). Compiling AOT provides performance benefits such as "unboxed representations of integers and floating-point numbers, absence of runtime type checks, absence of garbage collection, and efficient heap load and stores."[6] Code that fails to validate during the AOT compilation falls back to JIT compilation[6].

Asm.js code is not meant to be written by hand. Instead, it is meant to be written in other languages such as C++ and, through tools such as Emscripten, compiled into asm.js. Emscripten generates a JavaScript file containing all the "glue-code" necessary to use the asm.js module with regular JavaScript.

Since asm.js is a subset of JavaScript, it is possible to run asm.js in any web browser. However, not all browsers have implemented the optimizations for asm.js, which means the code would be executed with the same performance as regular JavaScript.
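For illustration, a minimal hand-written module in the asm.js style is shown below. Real asm.js is compiler-generated, as noted above; the module and function names here are made up for the example.

```javascript
// A minimal hand-written asm.js-style module, for illustration only;
// real asm.js code is normally generated by a compiler such as Emscripten.
function MiniModule(stdlib, foreign, heap) {
  "use asm"; // signals to supporting engines that this module can be AOT-compiled
  function add(a, b) {
    // The |0 coercions declare a and b as 32-bit integers,
    // which lets the engine avoid runtime type checks.
    a = a | 0;
    b = b | 0;
    return (a + b) | 0;
  }
  return { add: add };
}

// Engines without asm.js optimizations simply run this as ordinary JavaScript.
const mod = MiniModule(globalThis, null, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

Because the module is valid JavaScript either way, a browser that fails to validate it for AOT compilation falls back to running it with the regular JIT, exactly as the specification describes.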

Asm.js code is included on a web page by pointing a script element to the JavaScript file containing the "glue-code" and the asm.js code. The asm.js functionality can be accessed outside of the asm.js file through the use of the Module object. Figure 3.5 shows that, unlike ActiveX and the applet, asm.js is executed within the Web browser.


Figure 3.5: High-level flow of an Asm.js application.

3.3.4 Portable Native Client

Portable Native Client (PNaCl, pronounced "pinnacle") is an extension of Google’s Native Client (NaCl). The main difference between PNaCl and NaCl is that PNaCl is architecture-independent. NaCl requires different implementations depending on which system architecture accesses the website containing the application. PNaCl solves the architecture dependency by compiling native code to a portable executable (pexe). The pexe file is served to the browser via a server, and before the code executes in the browser, the pexe file is translated to the appropriate native executable (nexe), as can be seen in Figure 3.6[27].

Compiling native code, such as C++, to a pexe requires the NaCl SDK, which includes the Pepper Plug-in API (PPAPI). PPAPI makes it possible for C/C++ modules to communicate with the hosting browser, and it also grants access to system-level functions in a safe and portable fashion. For instance, PPAPI lets the NaCl module read and write files; however, it can only access files stored in Chrome’s sandboxed local disk[27].

PNaCl makes use of the HTML embed element when added to a website. The embed element points to a manifest file (.nmf) which contains different options as well as the location of the pexe file[27].
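A minimal manifest along the lines described above might look as follows. The file name app.pexe is a placeholder; this is a sketch based on the documented manifest format, not a file from the thesis itself.

```json
{
  "program": {
    "portable": {
      "pnacl-translate": {
        "url": "app.pexe"
      }
    }
  }
}
```

The "pnacl-translate" entry is what tells the browser to translate the portable executable into a native executable before running it.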

Currently, the pexe-to-nexe translator is only available in Google Chrome. PNaCl is also no longer recommended by Google; instead, Google recommends the use of WebAssembly[27].


3.3.5 WebAssembly

WebAssembly is a "low-level assembly-like language with a compact binary format"[8]. Similarly to asm.js, WebAssembly is also a compile-target for other languages such as C, C++, Rust, and many more. Since WebAssembly is a binary format, it is not meant to be written by hand. However, WebAssembly does provide a textual format of the language called WebAssembly text format (WAT)[28].

As mentioned in Chapter 1, WebAssembly was recently made into a web standard and runs natively in web browsers. This means that the same virtual machine inside the web browsers that loads JavaScript code can also load WebAssembly code, which can be seen in Figure 3.7.

WebAssembly is based on a stack machine where sequences of instructions are executed in order. There are only four value types in WebAssembly: i32, i64, f32, and f64, which are 32- and 64-bit integer and float values. WebAssembly also has a linear memory, which is a large array of bytes. The linear memory has an initial size, which can be increased dynamically as needed. There are also functions, tables, and modules in the WebAssembly specification. A function is as one would expect: it takes some values, performs some operation, and then returns values. The table is an array of function pointers that can be used to call functions indirectly. The module contains the definitions of the functions, tables, linear memories, and variables[29].

While the main motivation for creating WebAssembly was to run it on the Web, there is nothing that prevents it from running on other platforms[30]. As mentioned earlier in this chapter, WebAssembly can also be used server-side. Some other use-cases for WebAssembly could be "Game distribution service (portable and secure), server-side compute of untrusted code, hybrid native apps on mobile devices, and symmetric computations across multiple nodes"[18].

WebAssembly is used on the Web by instantiating the module. If a start function was defined, the function executes once the module is instantiated. If no start function was defined, it is possible to use the Module object with JavaScript to call exposed functions of the WebAssembly module.
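As an illustration of calling exposed functions from JavaScript, the following instantiates a tiny hand-assembled module that exports one add function. Real modules are produced by compilers such as Emscripten; the bytes here are written out by hand purely to keep the example self-contained.

```javascript
// Bytes of a tiny hand-assembled WebAssembly module that exports
// one function, "add", taking two i32 values and returning their sum.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one function of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" -> function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

// Synchronous instantiation; on the Web, asynchronously loading a served
// .wasm file with WebAssembly.instantiateStreaming(fetch(url)) is more common.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

The comments alongside the byte array also show the stack-machine instruction sequence described above: the two parameters are pushed onto the stack and i32.add consumes them.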

Figure 3.7: High-level flow of a WebAssembly application.

3.4 Performance Testing

Performance testing can be described as the "comparison of one or more products to an industrial standard product over a series of performance metrics"[31]. Defining when an application is performing well can be a challenging task because what counts as good performance differs between types of applications. A web-based application might consider good performance to be quick response times from the server, while a single-player game might consider good performance to be lower-end machines achieving high frame rates while playing the game. Molyneaux, in their book, suggests that performance is a matter of perception and that "a well-performing application is one that lets the end user carry out a given task without undue perceived delay or irritation"[32, p. 1].

When measuring the performance of an application, there are certain key performance indicators (KPIs) that can be considered. KPIs can be divided into two types: service-oriented and efficiency-oriented. Service-oriented indicators include availability and response time, and they measure how well the application is providing its services to the end-user. Efficiency-oriented indicators include throughput and capacity, and they measure how well the application makes use of the hosting infrastructure[32, p. 2-3].

Once the performance tests have been completed, the results need to be summarized. There are multiple ways to do this; the first is to calculate the arithmetic mean or the median. The median has an advantage over the mean in that a few outlying values (e.g. 2, 3, 3, 19) can skew the mean, while the median reflects the typical value more accurately. Another option is the standard deviation, the average deviation from the mean value. It is based on the assumption that most data exhibit a normal distribution. If the standard deviation is a large value, it could indicate an erratic end-user experience where the results vary by a significant magnitude. A final option is to use the nth percentile when selecting which values to include in the calculation. Using only the 80th percentile of the previously mentioned values would include 2, 3, 3, which could then be used to calculate the arithmetic mean[32, p. 94-95].
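The summary statistics above can be sketched as follows; the helper names are illustrative, and the percentile helper implements the "keep the lowest n percent of values" selection used in the example.

```javascript
// Result-summary helpers for the metrics discussed above.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function median(xs) {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Population standard deviation: square root of the mean squared deviation.
function stdDev(xs) {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map(x => (x - m) ** 2)));
}

// Keep only the lowest n percent of the sorted values, e.g. the 80th percentile.
function percentile(xs, n) {
  const s = [...xs].sort((a, b) => a - b);
  return s.slice(0, Math.floor((n / 100) * s.length));
}

const values = [2, 3, 3, 19];
console.log(mean(values));                 // 6.75 — skewed by the outlier 19
console.log(median(values));               // 3
console.log(mean(percentile(values, 80))); // mean of [2, 3, 3]
```

Running the helpers on the example values from the text shows the effect the outlier 19 has on the mean but not on the median or the 80th-percentile mean.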


4 Data Collection

This chapter explains how the data used in the comparison of the technologies was collected. It aims to provide the reader with the information required to replicate the data collection as closely as possible.

4.1 Design

The design phase of the data collection consisted of choosing which applications to use, what to use when collecting performance data, how the performance data collection process would work, as well as how to collect data on qualities outside of performance.

4.1.1 Design for collecting performance data

The design of the performance data collection consisted of learning how to implement each technology on a web page. Once each technology was implemented, deciding on which applications to develop for the technologies came next. Deciding on what software to use to collect the performance data was done once the applications had been selected. After the data collection software had been established, the actual test process was decided.

Application selection

Before considering which applications to develop for the performance tests, simple "Hello World" applications were developed for each technology to figure out how to embed each technology on a web page. All technologies except for ActiveX went without much trouble. The COM code of the ActiveX control was very confusing to someone who had never worked with COM code previously. There was no time to learn how to work with COM on an advanced level, which made it clear early on that the applications developed for the tests would not be very complex or include graphical components. While testing the graphical components would have been interesting, WebAssembly, asm.js, and PNaCl are all meant to be used alongside JavaScript, where JavaScript handles the graphical components. Therefore, to save time, it was considered an acceptable loss to not include graphical components in the tests.

During the development of the Hello World applications, a simple server was also created to serve the files to the browser. Once the Hello World applications were up and running, the process of choosing what applications to develop for the tests began. Some criteria, listed below, were established to determine if an application was suitable or not.

• Does the application perform heavy computational tasks?

• Is the application of an acceptable scale so that it can be developed within the given time-frame?

• Is the application possible to replicate for the technologies included in this study?

The first application selected was an algorithm that calculates the first 43 numbers of the Fibonacci sequence by using recursion. A recursive Fibonacci algorithm is a good way of testing the technologies’ function call effectiveness. The second application selected was filling an array of length 30 million with random integers and then randomly shuffling it, to test the memory management of the technologies. The third application selected was comparing 100 million pairs of random integers to one another, to test numeric computations.
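Assuming implementations along the following lines (the actual code, written for each technology, is in the repository referenced in Appendix 2), the three kernels can be sketched in JavaScript. The function names, the shuffle variant, and the random-integer ranges are illustrative; the sizes used in the tests were 43, 30 million, and 100 million.

```javascript
// Illustrative sketches of the three benchmark kernels; the actual
// implementations for each technology are in the Appendix 2 repository.

// 1. Function-call effectiveness: naive recursive Fibonacci.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// 2. Memory management: fill an array with random integers, then shuffle it
//    (a Fisher-Yates shuffle is shown; the exact shuffle used may differ).
function fillAndShuffle(length) {
  const arr = new Array(length);
  for (let i = 0; i < length; i++) arr[i] = Math.floor(Math.random() * 1000);
  for (let i = length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
  return arr;
}

// 3. Numeric computation: compare pairs of random integers, count equal pairs.
function comparePairs(pairs) {
  let equal = 0;
  for (let i = 0; i < pairs; i++) {
    const a = Math.floor(Math.random() * 100);
    const b = Math.floor(Math.random() * 100);
    if (a === b) equal++;
  }
  return equal;
}
```

Each kernel deliberately stresses one area: the recursion makes many function calls, the shuffle makes many loads and stores, and the pair comparison is pure arithmetic with almost no memory traffic.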

While these applications are not ones that would be used in actual systems, their aim is to provide insight into whether some of the technologies are especially efficient in one or more specific areas of execution, for example, memory management. Because the applications are small, each application could be implemented very similarly across the different technologies, which is both time-efficient and could potentially give more reliable results since the technologies execute very similar code.

As mentioned earlier, the applications do not include any graphical components; this does stray from one of the primary use cases of ActiveX and Java applets, since they became popular because they included their own graphical interface. While it would have been a valuable addition to the study to include an application with real-life use that makes use of the built-in graphical interface of ActiveX and Java applets, the time constraint of this study prevented that possibility. Also, it would have been challenging to implement the same application using asm.js, PNaCl, and WebAssembly because they do not have any built-in graphical interfaces. This could potentially reduce the reliability of the results if it turns out that, for example, the Java applet is quicker than WebAssembly when executing the applications included in this study, but rendering and manipulating the graphical components of the applet is slower than rendering and manipulating the HTML elements that WebAssembly makes use of.

While all of the technologies, except for asm.js, support multi-threading, the applications did not make use of multi-threading. One reason for this is that handling multiple threads is more complex and takes longer to implement; another reason is that these applications are very small and do not require the use of multiple threads.

The implementation of the applications can be found in the GitHub repository available in Appendix 2.

Selecting performance data collection software

When searching for software to collect the performance data, it was determined that no publicly available performance testing software fit this study’s requirements. It was, therefore, decided that an application would be developed which could measure the execution times and hardware usage of the different technologies. To accomplish this, a timestamp is collected the moment before the browser sends a request to the server for the application. When the application has finished its function, another timestamp is collected and compared to the earlier one to get the load time of the page.

The embedded application measures the execution time of its function internally, and the web page it is embedded in retrieves that data after the application has finished its function. ActiveX handles it slightly differently: instead of the web page retrieving the execution time from the embedded application, the ActiveX applications send the load time and execution time directly to the server via HTTP.
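A simplified sketch of this timing scheme follows. The function names are hypothetical; the point is the relationship between the two measurements, with the outer timestamps bracketing the internally measured execution time.

```javascript
// Simplified sketch of the timing approach described above: one timestamp
// before the request is sent, one when the application finishes its function.
function timeRun(task) {
  const start = Date.now();            // collected just before requesting the page
  const executionTime = task();        // the embedded application times itself internally
  const loadTime = Date.now() - start; // includes start-up/loading plus the execution
  return { loadTime, executionTime };
}

// Hypothetical usage with a task that measures its own execution time:
const result = timeRun(() => {
  const t0 = Date.now();
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
  return Date.now() - t0;
});
console.log(result.loadTime >= result.executionTime); // true
```

Because the outer timestamps always bracket the inner ones, the load time is by construction at least as large as the execution time.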

Because some of the technologies cannot automatically access the file system of the machine running them, the already created server was selected to help with that process. The server is a local Node.js server using the Express framework. When the browser asks the server for the application, the server starts measuring the CPU and RAM usage of the system and then opens the application as a new process. The usage is measured using the npm package systeminformation, and the applications are started using the npm package open. The server measures the computer’s total CPU and RAM usage, not just the application’s. The reason why it includes all other processes running on the system is that finding the application’s process and then measuring that specific process uses a lot of the computer’s processing power, which could obfuscate the results. The usage is measured every half second so that the server does not use too much of the system’s hardware when measuring the usage. Because the machine’s total CPU and RAM usage is measured, there has to be some way to make sure that the machine running the tests has the same baseline every time the tests are run. To achieve this, all apps that are not essential are shut down before running the tests. Examples of apps that should be turned off are Spotify, Slack, and Steam.

Design of the test process

To be able to automate the execution of multiple sequential test runs of the applications, a base application was needed. The base application should handle the selection of application, technology, and browser, and then execute the tests multiple times without the user needing to interact with the machine.

Deciding how the flow of the performance tests was going to work was mainly driven by the base application needing to know when the application containing the technology finished its function. For the base application to know this, the server has to tell the client web page when a different web page has finished its function, which is not doable with a regular HTTP request/response flow. It was, therefore, decided that the WebSocket protocol was a good option to handle such functionality.

The flow of the test process starts with the user pressing a button. The client collects a UNIX timestamp and sends the selected application, technology, browser, and timestamp to the server via WebSocket, so that the server will later know which client to notify when the test finishes. The server saves the timestamp in a JSON file (each technology has a dedicated JSON file) and starts measuring the CPU and RAM usage of the machine. The server then uses the open package to start the selected browser at the proper URL for the selected application and technology. The newly created browser instance then runs the web page which contains the selected technology version of the application.

Once the application finishes its function, the client collects another UNIX timestamp and sends the execution time, the timestamp, and which application and technology were used to the server via WebSocket. The server then stops measuring the machine’s usage and saves the timestamp, execution time, and usage in the same JSON file as mentioned earlier. Once the file is saved, the server tells the base application via WebSocket that the test finished, and the server also sends data that the client can use to render graphs. If the tests have run the selected amount of times, the base application renders the graphs and stops running any tests. If the tests have not run the selected amount of times, the test process starts over.
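The two WebSocket payloads exchanged in this flow might look as follows. The field names are hypothetical reconstructions of the data described above, not taken from the actual implementation.

```javascript
// Hypothetical message shapes for the WebSocket flow described above;
// field names are illustrative, not from the actual implementation.
const startMessage = {
  application: "fibonacci", // which benchmark to run
  technology: "wasm",       // which technology variant to load
  browser: "firefox",       // which browser the server should open
  timestamp: Date.now()     // UNIX timestamp collected before the request
};

const finishMessage = {
  application: "fibonacci",
  technology: "wasm",
  executionTime: 2150,      // ms, measured internally by the application
  timestamp: Date.now()     // collected when the application finished
};

// The server subtracts the two timestamps to obtain the load time.
const loadTime = finishMessage.timestamp - startMessage.timestamp;
console.log(loadTime >= 0); // true
```

Keeping the application and technology fields in both messages lets the server match a finish message to the JSON file of the correct technology.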

The entire test process flow described above can also be seen in Figure 4.1.



Figure 4.1: Design of the test process flow.

4.1.2 Design for collecting data on security

To gather information for the case study, a literature study was used to synthesize information from published articles. There were no criteria defined for articles collected from either Google Scholar or OneSearch. The original plan had been to only include peer-reviewed articles; however, not many useful articles were found using those criteria, which meant that regular Google was used more than Scholar and OneSearch. Therefore, it was decided that even articles that were not peer-reviewed would be included. There were, however, criteria set for regular Google, which are mentioned in Chapter 2. Only articles written by people in computer science would be included, or information gathered from official specifications of the technologies. This meant that information from sources such as forum comments would not be included.

The literature study was performed during April and May 2020.

The keywords shown in Table 4.1 were used when gathering information on the security of the technologies.


Keywords                   Search engine
activex AND security       Google Scholar & OneSearch
applet AND security        Google Scholar & OneSearch
asm.js AND security        Google Scholar & OneSearch
pnacl AND security         Google Scholar & OneSearch
webassembly AND security   Google Scholar & OneSearch
+asm.js                    Google
javascript AND security    Google Scholar & OneSearch
+webassembly               Google
+java +applet              Google

Table 4.1: Keywords used during the literature study.

4.2 Performance experiment preparations

To help others replicate this experiment, this section discusses how to configure the browsers properly and how to compile the technologies.

4.2.1 Browser settings

To run ActiveX controls and Java applets, the security settings in Internet Explorer need some modification. The easiest way of making sure that ActiveX controls and Java applets are allowed to run is by setting the browser’s security settings to at most Medium for the Local intranet zone. The browser’s security settings are accessed by clicking on the cog in the upper right corner of the screen and then selecting Internet Options. In the window that opens, select the Security tab. Next, select the Local intranet zone and then drag the slider so that the text next to it says Medium.

One other measure that needs to be taken is to allow the applet to run. Allowing this is done by going into the Java settings and adding http://localhost:4000 to the Exception site list. To remove the popup message that appears when running a Java applet, set the Mixed code (sandboxed vs trusted) security verification setting under the Advanced tab of the Java settings to Disable verification. If this is not done, the user will manually have to allow the applet to run every time, which will obfuscate the load time results.

NaCl is disabled by default in Chrome. To enable it, navigate to chrome://flags, search for Native Client, and enable it.

Another browser setting that needs to be changed is allowing the applications to close themselves upon completion. By default this functionality is disabled in Firefox. To enable this functionality enter about:config in the URL search bar. On the config page, search for dom.allow_scripts_to_close_windows and set it to true.

4.2.2 Compilation tools

All of the Listings mentioned in this section can be found in Appendix 1.

Compiling C++ code to WebAssembly, asm.js, and PNaCl related files requires third-party compilation tools. Both WebAssembly and asm.js were compiled using Emscripten. The tests are run on Windows machines, and to use Emscripten on Windows, Python needs to be installed. Next, the Emscripten SDK needs to be installed, which is done by using the commands shown in Listing 1 with a command prompt.



Once the Emscripten SDK is installed, the C++ files are ready to be compiled into WebAssembly and asm.js files. To compile a C++ file, navigate to the folder containing the C++ file with the command prompt that installed the SDK. The commands for WebAssembly and asm.js compilation are almost identical; the only difference is the -s WASM flag, which is 1 when compiling to WebAssembly and 0 when compiling to asm.js. The -O3 flag specifies the amount of optimization that should be made; -O3 is the most heavily optimized. The more optimization, the longer the compilation time but the faster the execution time. Listing 2 shows the command for compiling the array C++ file to WebAssembly.
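The two invocations might look as follows; the source and output file names are placeholders, and the exact commands used in the thesis are in Listing 2 of Appendix 1.

```shell
# Illustrative Emscripten invocations (file names are placeholders).
emcc array.cpp -O3 -s WASM=1 -o array.js   # compile to WebAssembly plus JS glue-code
emcc array.cpp -O3 -s WASM=0 -o array.js   # compile to asm.js
```

In the WebAssembly case, Emscripten emits both the .wasm binary and the accompanying JavaScript glue file named by -o.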

To use PNaCl, the NaCl SDK needs to be installed. The NaCl SDK requires Python as well; however, it needs to be Python version 2.7. Once Python is installed, the PATH environment variable needs to have a pointer to the Python location. To accomplish this, search for environment variables in the Windows search bar and click on the option that appears. Click on the Environment Variables... button. Under User variables, double-click on the variable Path. Click the New button and enter the path to the Python version 2.7 folder. Next, download the NaCl SDK and unzip it in a chosen location. Once the NaCl SDK is unzipped, run the commands in Listing 4 with a command prompt.

The PNaCl files required for compilation consist of a C++ file, a Makefile, and a Make.bat file. The Makefile used for the tests in this thesis uses the same structure as the one used in Google’s "getting started" tutorial. The Make.bat file should contain the location of the make.exe file inside the tools folder of the NaCl SDK. The content of the Makefile is shown in Listing 3, and the Make.bat content is shown in Listing 5.

Once the PNaCl files are set up correctly, the compilation process is ready to begin. In a command prompt, navigate to the folder containing the PNaCl files and run the commands shown in Listing 6.

To install the ActiveX controls, open the solution files located in the GitHub repository found in Appendix 2. Visual Studio 2017 was used to develop the ActiveX controls, and it is assumed that any attempts at replicating this experiment use the same software. The solution will not be able to build successfully without knowing where to find the curl library. The easiest way of letting Visual Studio know where to find curl is to download Microsoft’s C++ library manager, vcpkg. Unzip vcpkg into a chosen folder, open the Developer Command Prompt for VS 2017, and run the commands shown in Listing 7. The ActiveX control solutions should now be able to build; Visual Studio should also be set to release mode and not debug mode.

4.3 Performance test execution

How to execute the performance tests is described below.

4.3.1 Readying machine for testing

Before starting any tests, all non-essential applications should be turned off. This was accomplished by opening the task manager and ending any task that was currently running that did not need to be running. Identifying which applications to turn off was done by restarting the machine, looking at which applications were running on boot, and then turning off any non-essential ones that had fluctuating CPU and memory usage. Once all the non-essential applications were turned off, a screenshot of the task manager was used as a comparison before each test run.

4.3.2 Running the tests

The application that starts the tests was always executed from the Firefox browser. The main reason why Firefox was used instead of another browser was that when running the tests from Chrome, PNaCl consistently caused the browser to crash after 4-6 test runs, and the error could not be identified.

Launching the test application is done by starting the local server and navigating to http://localhost:4000. The application’s user interface consists of one input, three selections, and one button. The input determines how many test runs to perform; the default is 30. The selections determine which application, technology, and browser to use. The button starts the tests, and while the tests are running, the button is disabled. While the tests are running, the machine should not be interacted with in any way.

Once the tests are finished running, the results are displayed visually through multiple graphs. To view the graphs without running more tests, navigate to http://localhost:4000/graph, which brings up an interface similar to the test interface. On the graph page, the selections decide which application and which metrics to compare. The graph page also contains an option to download the displayed graph as a PNG image file.

4.3.3 Hardware and Software

The hardware used during the execution of the performance tests is described in Table 4.2.

                   Desktop                   Laptop
Model              Home-built                Acer Aspire V3-772G
Operating system   Windows 10 Professional   Windows 8.1
Processor          Intel Core i7-8700k       Intel Core i7-4702MQ
Memory             16GB                      8GB
Graphics           GeForce GTX 1080          GeForce GTX 760M

Table 4.2: Specification of the machines that the performance tests were executed on.

The software versions used during implementation and execution of the performance tests are described in Table 4.3.


Software            Version
Node.js             13.11.0
Express             4.17.1
Systeminformation   4.23.0
Open                7.0.2
Pepper API          49
Emscripten          1.39.6
Clang               10.0.0
Google Chrome       81.0.4044.92
Firefox             75.0
Microsoft Edge      81.0.416.58
Internet Explorer   11.719.18362.0

Table 4.3: Software versions used during implementation and execution of the performance tests.


5 Results

The data presented in this chapter was collected during the performance experiment and the literature study. The data from the performance experiment is presented through graphs, while the data from the literature study is presented through text.

5.1 Performance experiment

This section contains the graphs obtained by running the performance tests 30 times per supported browser and then taking the median value of the results. Only WebAssembly (wasm) and asm.js were executed on multiple browsers since the other technologies only have support on one browser each.

5.1.1 Result explanation

This section gives a short overview of what the different types of results imply.

The execution time only measures the algorithm performed by the application; no browser or module loading is included in the execution time. The load time measures the time it takes for the browser to start and the module to load, and ends when the application finishes its function. The CPU usage measures the total CPU usage of the machine, not just the application’s usage; the reasoning for this can be found in Chapter 4. The memory usage measures, just like the CPU usage, the machine’s total memory usage and not just the application’s.

Since WebAssembly and asm.js were executed on multiple browsers (Chrome, Firefox, and Edge), their load time and execution time were measured as the median of all the browsers combined. However, Figures C.1-C.6, located in Appendix 3, do show the differences in execution time and load time of each technology per supported browser.

The CPU and memory usage were calculated using the median value for each point in time during the loading of the technology. The usage is measured every 500ms; this means that the median is calculated by looking at the first 500ms and calculating the median of that per technology, then moving to the next 500ms and calculating the median of that. This process continued until the median of each point in time had been calculated. The duration of the usage series also had its median calculated, instead of including the usage of every point in time. This was to prevent the longest test run from being the only run with values in the final 500ms points, with no other values to compare against.

5.1.2 Fibonacci application

Here follow the results of the Fibonacci application, which calculated the first 43 numbers in the Fibonacci sequence.


Figure 5.1: Execution times of the Fibonacci application.

Figure 5.2: Load times of the Fibonacci application.

Figure 5.3: CPU usage of the Fibonacci application running on the desktop.


Figure 5.5: Memory usage of the Fibonacci application running on the desktop.

Figure 5.6: Memory usage of the Fibonacci application running on the laptop.

5.1.3 Array application

The Array application filled an array of length 30 million with random integers and then performed a random shuffle on the array.


Figure 5.7: Execution times of the Array application.

Figure 5.8: Load times of the Array application.

Figure 5.9: CPU usage of the Array application running on the desktop.


Figure 5.11: Memory usage of the Array application running on the desktop.

Figure 5.12: Memory usage of the Array application running on the laptop.

5.1.4 Numeric application

The Numeric application compared 100 million pairs of random integers to one another.

Figure 5.13: Execution times of the Numeric application.


Figure 5.15: CPU usage of the Numeric application running on the desktop.


Figure 5.17: Memory usage of the Numeric application running on the desktop.

Figure 5.18: Memory usage of the Numeric application running on the laptop.

5.2 Qualitative results

This section describes differences between the technologies that are not performance-based. The two areas identified in which the technologies can differ are security and browser support.

5.2.1 Security

Security is a crucial aspect of any web page. If one of the technologies is embedded on a public web page, the users who visit that page expect it to be safe to use. Therefore, a literature study was performed in an attempt to discern the differences in security between the technologies.


ActiveX

ActiveX does not have a security model like the other technologies. Instead, ActiveX has a trust model, where the whole security responsibility is placed on the user. When the user accesses a website containing an ActiveX control, they are presented with the option of letting the control run its code or blocking it. If the user agrees to let the control execute on their machine, the control gains full access to the machine. To prevent harmful controls from accidentally being run, Microsoft introduced digital certificates to assure users that controls are safe. These certificates are not foolproof, as they can be stolen and used on harmful controls. Another attempt at preventing harmful controls from running is the security settings of Internet Explorer: at the highest security setting, no control without a certificate will be downloaded. The National Institute of Standards and Technology (NIST) assigned ActiveX the risk level of high [33], [34], [35].

Java applets

Applets that are accessed from a website are run in a sandbox. Certain applets that are signed by a trusted certificate can run outside of the sandbox. The sandbox restricts the access that an applet has to the client's computer. Some of the things a sandboxed applet cannot do are:

• Access the client's local file system, executable files, system clipboard, or printers, unless launched through the Java Network Launch Protocol (JNLP).

• Connect to or retrieve resources from any third-party server.

• Load native libraries.

• Change the SecurityManager.

• Create a ClassLoader.

• Read certain system properties.

Applets with a trusted certificate do not suffer from any of the restrictions mentioned above; they can run outside of the sandbox. If a trusted applet is accessed through JavaScript code, it is treated like an untrusted applet and will run inside the sandbox. It should be noted that every applet requires the user's permission to run [36].

Asm.js

Since asm.js is a subset of JavaScript and is executed as regular JavaScript using a script tag, it benefits from the same security principles as JavaScript does. JavaScript code executed in a web browser is, like a Java applet, run inside a sandbox. The sandbox restricts the access that the JavaScript code has to the user's system. The JavaScript code cannot read or write any files without the user's permission, nor can it load native code or libraries. Also, since JavaScript does not use pointers, determining the virtual address of JavaScript variables is impossible [37].


Portable Native Client

It is not actually PNaCl that is executed in the browser; it is instead NaCl. NaCl can run on three different architectures, x86-32, ARM, and x86-64, which have slightly different implementations of security.

The x86-32 architecture has both an inner sandbox and an outer sandbox. The outer sandbox is not described in detail other than that it works as an interceptor at the operating-system system-call level. The inner sandbox works on the binary level, validating untrusted x86 code using static analysis. Practices such as self-modifying code and overlapping instructions are not allowed in NaCl. Such practices are identified by reliably disassembling the code so that all reachable instructions are identified. The identified instructions can then be run through the inner sandbox's validator to verify that only legal instructions are present within the code. The inner sandbox is meant to verify that any code that runs will not harm the user in any way [38].

The ARM architecture also makes use of the inner sandbox to verify that no forbidden instructions can run. In addition to the security measures of the x86-32 architecture, ARM also verifies that untrusted code cannot store to, or jump to, memory locations above 1 GB [39].

The x86-64 architecture is similar to the ARM architecture, except that x86-64 uses 4 GB memory locations instead of 1 GB. x86-64 also designates a register within the 4 GB memory location that is read-only to untrusted code [39].

The PNaCl sandboxing is similar to the JavaScript sandbox; it enforces the same-origin policy and keeps PNaCl separated from the local file system of the machine. PNaCl is, however, allowed a sandboxed file system that is non-persistent between application instances[40].

One security flaw of PNaCl found in the literature study was the possibility of a Prime+Probe attack. Since PNaCl uses the CPU of the machine it is executing on, it has access to the data cache. It is, therefore, possible to use memory contention on the data cache to retrieve information about other processes running on the same CPU. Since PNaCl supports arrays, performing memory contention is especially simple [40].

WebAssembly

Just like Java applets, asm.js, and PNaCl, WebAssembly is executed within a sandbox which contains the application so that it cannot affect the rest of the machine. The only way for the application to access functionality outside of the sandbox is through safe and appropriate APIs. WebAssembly applications also execute deterministically, with very few exceptions, which means that the application will behave in the same way each time it is executed [41].

During the development of a WebAssembly application, the developer can decide which functions to expose to the JavaScript code on the web page. Since only the selected functions can be called from JavaScript, potential attackers have limited access to the application. WebAssembly code is also immutable and impossible to observe at runtime. WebAssembly applications are, therefore, protected from control-flow hijacking attacks, but not immune: it is still possible to hijack the control flow of a WebAssembly application through code-reuse attacks against indirect calls [41].


WebAssembly applications are also resistant to buffer overflow attacks, where adjacent memory regions are accessed by exceeding the boundaries of an object. Local and global variables in WebAssembly applications are fixed-size and accessed by index and are, therefore, safe from buffer overflows [41]. However, it is still possible to create buffer overflows within WebAssembly's linear memory, which can give attackers access to local variables [42].

If a WebAssembly function that takes a parameter is called from JavaScript, there is a possibility of an integer overflow. If the WebAssembly function expects a 32-bit integer but receives a number larger than a 32-bit integer can hold, an integer overflow occurs [42].

5.2.2 Browser support

Knowing which browsers each technology can run in is a major decision point when selecting a technology. Table 5.1 presents the browser support for the technologies; the cells marked with an x indicate that the technology is supported in the corresponding browser [43], [44], [45], [46].

                    ActiveX   Asm.js   Java applet   PNaCl   WebAssembly
Google Chrome                   x                      x          x
Firefox                         x                                 x
Safari                                                            x
Microsoft Edge                  x                                 x
Internet Explorer      x                    x

Table 5.1: Browser support for the technologies.

References
