Article (PDF available)

Analysis and Testing of Ajax-based Single-page Web Applications

Abstract

This dissertation has focused on better understanding the shifting web paradigm and the consequences of moving from the classical multi-page model to an Ajax-based single-page style. To that end, this work has examined this new class of software from three main software engineering perspectives. Software Architecture: to gain an abstract understanding of the key architectural properties of Ajax applications. Software Reengineering: to understand the implications of migrating from classical multi-page web systems to single-page Ajax variants. Software Analysis and Testing: to explore strategies for analyzing and testing this new breed of web application. The work presented in this dissertation aims to advance the state of the art in comprehending, analyzing, and testing standards-based single-page web applications by means of a new architectural style, a significant set of techniques and tools, and case study reports. These contributions are intended to help software and web engineers better comprehend and deal with the complexity of highly dynamic and interactive web systems.
... Web applications have evolved over the last decade to satisfy the requirements of different users. Their evolution started from a simple static page-sequence client/server system [9] and progressed to a dynamic medium of user-created content and rich interaction. The complete evolution steps are discussed in [9]. ...
... Their evolution started from a simple static page-sequence client/server system [9] and progressed to a dynamic medium of user-created content and rich interaction. The complete evolution steps are discussed in [9]. In this paper we classify web applications into two general groups, i.e., traditional and modern, and review the existing automatic test case generation methods for each class. ...
... Traditional web applications are based on a multi-page user interface model, in which interactions follow a synchronous page-sequence pattern [9]. This class of web applications typically involves complex, multi-page, multi-tiered architectures containing web sites, applications, database servers, and clients [13], and heterogeneous execution environments [7]. ...
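To make the contrast between the two models concrete, here is a minimal Python sketch; all function names and payloads are hypothetical. A multi-page server answers every user action with a complete document, while an Ajax endpoint returns only a fragment for the client to merge into the live page.

```python
# Minimal sketch (hypothetical names and payloads throughout).

def multi_page_server(action: str) -> str:
    """Classical model: every action yields a full page, replacing the old one."""
    return f"<html><body><h1>Result of {action}</h1></body></html>"

def ajax_endpoint(action: str) -> dict:
    """Single-page model: the server returns a delta; the DOM is updated in place."""
    return {"target": "#result", "fragment": f"<h1>Result of {action}</h1>"}

# A multi-page client discards its page on navigation; a single-page client
# keeps its DOM and applies the delta, which is why crawling and testing must
# track client-side state rather than URLs alone.
page = multi_page_server("search")   # full page reload
delta = ajax_endpoint("search")      # partial update, same page
print(len(page), delta["target"])
```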
Article
Full-text available
With the growing complexity of web applications, testing is essential to ensure that they behave reliably. An essential task in software testing is the generation of test cases, which is, in general, a costly and labor-intensive process. As web applications have evolved over recent decades, various methods of generating test cases have been proposed according to their features and complexity. In this paper an all-around classification framework for existing automatic test case generation approaches for web applications is introduced. Different techniques for both traditional and modern web applications are compared by defining general evaluation criteria, and the results are analyzed.
... Raj et al. performed crawls on Ajax-driven RIAs and detailed a method that uses state transitions defined by JavaScript events and establishes state equivalence by comparing DOM trees [24]. Using a similar approach, Mesbah et al. performed several experiments on crawling and indexing representations of web pages that rely on JavaScript [17,18,22], focusing mainly on search engine indexing and automatic testing [20,21]. ...
... Li et al. defined state transitions in the same way as Raj et al., but used a different technical approach: injecting JavaScript into crawled pages via a proxy [17]. Fejfar crawled RIA DOM elements using a human-in-the-loop approach, allowing the human to direct the user interactions that should be crawled [13] (similar to pa11y [28]). ...
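For illustration, a minimal Python sketch of this style of crawl (my own simplification, not any of the cited tools' implementations): states are equivalence classes of DOM snapshots identified by a hash, and firing events yields transitions in a state-flow graph. The `events_of` and `fire` hooks are hypothetical stand-ins for real browser automation.

```python
# Illustrative sketch only: event-driven crawl with DOM-hash state equivalence.
import hashlib

def dom_hash(dom: str) -> str:
    """Two DOM snapshots with the same hash are treated as the same state."""
    return hashlib.sha1(dom.encode()).hexdigest()

def crawl(initial_dom, events_of, fire, max_states=100):
    graph = {}                               # state -> {event: next_state}
    seen = {dom_hash(initial_dom): initial_dom}
    frontier = [initial_dom]
    while frontier and len(seen) < max_states:
        dom = frontier.pop()
        src = dom_hash(dom)
        graph.setdefault(src, {})
        for event in events_of(dom):         # e.g., candidate clickables
            new_dom = fire(dom, event)       # execute the event, get new DOM
            dst = dom_hash(new_dom)
            graph[src][event] = dst
            if dst not in seen:              # unseen state: explore it later
                seen[dst] = new_dom
                frontier.append(new_dom)
    return graph
```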
Preprint
Full-text available
Achieving accessibility compliance is extremely important for many government agencies and businesses who wish to improve services for their consumers. With the growing reliance on dynamic web applications, many organizations are finding it difficult to implement accessibility standards, often due to the inability of current automated testing tools to test the stateful environments created by dynamic web applications. In this paper, we present mathematical foundations and theory for the Demodocus framework and prototype, and outline its approach to using web science, web crawling, and accessibility testing to automatically navigate and test interactive content for accessibility. Our approach simulates the page interactions of users with and without disabilities, and compares graphs of reachable states from these simulations to determine both the accessibility and the difficulty of content access for these different users.
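The comparison of reachable-state graphs can be pictured in a few lines of Python (a toy rendering of the idea, not the Demodocus code): each simulated user is reduced to a predicate over events, and the difference between reachable sets exposes content some users cannot get to. The graph and event names are invented.

```python
# Toy sketch: compare what different simulated users can reach.

def reachable(graph, start, can_perform):
    """BFS over a state-flow graph, following only edges the user can trigger."""
    seen, stack = {start}, [start]
    while stack:
        state = stack.pop()
        for event, nxt in graph.get(state, {}).items():
            if can_perform(event) and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

graph = {"s0": {"click": "s1", "keypress": "s2"}, "s1": {"keypress": "s2"}}
mouse_and_keys = reachable(graph, "s0", lambda e: True)
keyboard_only = reachable(graph, "s0", lambda e: e == "keypress")
# States in the first set but not the second are unreachable for keyboard-only
# users: exactly the kind of gap such a framework is meant to surface.
print(mouse_and_keys - keyboard_only)
```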
... The use of AJAX technology is the result of developments in the API (Application Programming Interface) technology known as the DOM (Document Object Model). The DOM API itself was introduced in 1995, when version 2 of the Netscape browser was launched to make web pages more interactive for their users [13]. ...
Article
Full-text available
The shift in internet users' behavior from computers or laptops to mobile devices has changed the way browsers and web pages display information. Internet users generally want fast access times when visiting a website to get the information they need. In this study, the researchers identify and explain several important factors that influence the access speed of a website page, analyzed from a technical standpoint. The main discussion focuses on the evaluation of technical factors, from the programming side (server-side and client-side programming) to the design of the user interface, using minified CSS together with AJAX technology. The goal of this study is to identify how much influence the technical factors mentioned above have on the speed of visitor access to a web page, apart from other factors such as internet network speed, devices, and the areas from which users access the page.
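As a rough illustration of the minification factor the study evaluates, the following Python sketch strips comments and collapses whitespace to shrink the payload; production minifiers do considerably more, and the example CSS is invented.

```python
# Hedged sketch of basic CSS minification: smaller payload, faster transfer.
import re

def minify_css(css: str) -> str:
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # drop comments
    css = re.sub(r"\s+", " ", css)                    # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)      # trim around punctuation
    return css.strip()

css = """
/* header styles */
h1 {
    color : #333333 ;
    margin : 0 ;
}
"""
small = minify_css(css)
print(len(css), "->", len(small), small)
```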
... Several efforts have studied client-side state with attention toward testing RIAs. Mesbah et al. performed several experiments regarding crawling and indexing representations of web pages that rely on JavaScript [69,70,74], focusing mainly on search engine indexing and automatic testing [72,73]. Singer et al. developed a method for predicting how users interact with pages to navigate within and between web resources [92]. ...
Preprint
Full-text available
The web is the prominent way information is exchanged in the 21st century. However, ensuring web-based information is accessible is complicated, particularly with web applications that rely on JavaScript and other technologies to deliver and build representations; representations are often the HTML, images, or other code a server delivers for a web resource. Static representations are becoming rarer, and assessing the accessibility of web-based information to ensure it is available to all users is increasingly difficult given the dynamic nature of representations. In this work, we survey three ongoing research threads that can inform web accessibility solutions: assessing web accessibility, modeling web user activity, and web application crawling. Current web accessibility research is continually focused on increasing the percentage of automatically testable standards, but still relies heavily upon manual testing for complex interactive applications. Alongside web accessibility research, there are mechanisms developed by researchers that replicate user interactions with web pages based on usage patterns. Crawling web applications is a broad research domain; exposing content in web applications is difficult because of incompatibilities in web crawlers and the technologies used to create the applications. We describe research on crawling the deep web by exercising user forms. We close with a thought exercise regarding the convergence of these three threads and the future of automated, web-based accessibility evaluation and assurance through a use case in web archiving. These research efforts provide insight into how users interact with websites, how to automate and simulate user interactions, how to record the results of user interactions, and how to analyze, evaluate, and map resulting website content to determine its relative accessibility.
Chapter
In this paper, we survey several ongoing research threads that can be applied to web accessibility solutions. We focus on the challenges with automatically evaluating the accessibility violations in websites that are built primarily with JavaScript. There are several research efforts that – in aggregate – provide insight into how users interact with websites; how to automate and simulate user interactions; how to record the results of user interactions; and how to analyze, evaluate, and map resulting website content to determine the relative accessibility. We close with a discussion on the convergence of these threads and the future of automated, web-based accessibility evaluation, and assurance.
Thesis
Full-text available
This thesis is an exploration of the subject of historical record linkage. The general goal of historical record linkage is to discover relations between historical entities in a database, for any specific definition of relation, entity and database. Although this task originates from historical research, multiple disciplines are involved. Increasing volumes of data necessitate the use of automated or semi-automated linkage procedures, which is in the domain of computer science. Linkage methodologies depend heavily on the nature of the data itself, often requiring analysis based on onomastics (i.e., the study of person names) or general linguistics. To understand the dynamics of natural language one could be tempted to look at the source of language, i.e., humans, either on the individual cognitive level or as group behaviour. This further increases the multidisciplinarity of the subject by including cognitive psychology. Every discipline addresses a subset of problem aspects, all of which can contribute either to practical solutions for linkage problems or to further insights into the subject matter.
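The core linkage step can be illustrated with a small Python sketch (my own toy example, not the thesis's method): candidate pairs of historical person records are scored by name similarity and accepted above a threshold. The names and the threshold here are invented.

```python
# Toy record linkage: link records whose person names are similar enough.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]; historical spellings vary widely."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records_a = [("r1", "Jan Jansen"), ("r2", "Pieter de Vries")]
records_b = [("s1", "Jan Janssen"), ("s2", "Maria Smit")]

links = [(ia, ib) for ia, na in records_a for ib, nb in records_b
         if name_similarity(na, nb) >= 0.85]
print(links)   # [('r1', 's1')]: the spelling variant still links
```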
Thesis
Full-text available
Nowadays, 90 percent of the innovation in vehicles is enabled by software. Over the past thirty years different methods have been developed to tackle the increasing complexity and to decrease the development costs of automotive software systems. In the scope of this thesis, automotive architectural modeling and quality evaluation methods have been addressed. According to the ISO 42010 standard, an Architecture Description Language (ADL) and an Architecture Framework (AF) are the key mechanisms used in architecture descriptions. ADLs can exist without respective AFs. However, the successful application of an ADL can depend on the proper definition of an AF, since an AF enables better organization and application of an ADL with clear separation of concerns. Although automotive ADLs have been developed over the last decade, only in recent years have automotive companies started to take the initiative in defining an architecture framework for automotive systems, e.g., the Architecture Design Framework by Renault. The first draft of the Automotive Architecture Framework (AAF) was already proposed half a decade ago by Broy. The first contribution of this thesis is the definition of an Architecture Framework for Automotive Systems (AFAS), which fills a major gap between existing automotive AFs and ADLs that was identified during the literature review and the evaluation of automotive ADLs. During the evaluation of automotive ADLs, we identified a lack of capability to ensure architectural quality. Even though quality models based on the ISO/IEC SQuaRE quality standard have been specified for MATLAB Simulink design models, a quality framework for automotive architectural models had not been defined. Based on a series of structured interviews with architects (from one automotive company) responsible for modeling automotive software at different architectural viewpoints, we identified consistency, modularity, and complexity as the three main pillars of quality for automotive architectures. Modeling hierarchical elements consistently across different architectural viewpoints and handling data and control complexity are the key needs of automotive architecture modeling. Therefore, the second contribution of this thesis is the definition and development of a quality evaluation framework for automotive software systems. Ensuring consistency between the different architectural viewpoints is one of the key issues regarding the architectural quality of automotive systems. Correspondence rules between architectural viewpoints are not formally defined in the scope of the automotive architecture description mechanisms. Therefore, we propose a consistency detection mechanism based on correspondence rules between automotive architectural viewpoints and develop a prototype tool to perform this consistency checking between different architectural viewpoints. The consistency checking approach and the prototype tool were evaluated in the scope of an Adaptive Cruise Control modeling exercise between two separate teams emulating an OEM and an automotive supplier. To evaluate modularity and complexity, we follow the Goal-Question-Metric (GQM) approach. By conducting a series of interviews with automotive architects and reviewing relevant standards, we identified complexity and modularity aspects serving as goals in GQM. Then, based on academic and industrial publications, we identified a series of questions that need to be answered to achieve the aforementioned goals. Automotive architects then reviewed these questions. Finally, we defined metrics required to answer the questions, and identified or implemented tools capable of measuring and presenting these metrics. The quality framework has been applied to industrial automotive architectural and design models. Results of the framework application have been evaluated by means of qualitative and quantitative analyses. By applying the framework to three subsequent releases of an architectural model and the corresponding design models, we observed, for example, that the addition of new functionality or bug fixing in design models often comes at the price of increased complexity at the design level, and sometimes compromises the modularity of the architectural model. To facilitate the quality evaluation process, the framework applies a visual analytics approach for the visualization of modularity and complexity with the help of the SQuAVisiT toolset. This approach enables early feedback about software quality, making it cheaper and easier to reuse and maintain than traditional techniques. In addition to the visualizations, a mechanism for clone management based on the Variant Configuration Language (VCL) is developed to manage model clones and variants. The benefits of using VCL as the variability technique include separating the variability concern from the functionality concern. The variability mechanism has been validated by converting a number of clone pairs with a varied set of differences into generic VCL representations. To summarize, we defined an architecture framework for automotive software systems with a coherent set of viewpoints and views for automotive ADLs. Having a coherent set of architecture viewpoints and views and analyzing automotive-specific needs for architecture description mechanisms, we identified consistency, modularity, and complexity as the three main quality attributes for automotive software systems. We developed a correspondence-rule-based method for ensuring consistency between different architectural viewpoints and defined metric sets for assessing modularity and complexity as part of the quality framework. The quality framework is also extended by quality visualization and clone detection mechanisms to improve software quality.
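A correspondence rule of the kind described can be sketched in a few lines of Python; the viewpoint models and component names below are hypothetical, and real automotive models are far richer than flat name sets.

```python
# Sketch of one correspondence rule between two architectural viewpoints:
# every logical component in the functional view must map onto a software
# component in the implementation view.

functional_view = {"ACC_Controller", "Radar_Input", "Brake_Actuation"}
implementation_view = {
    "ACC_Controller": "acc_sw_component",
    "Radar_Input": "radar_driver",
}

def check_correspondence(functional, implementation):
    """Report functional elements with no counterpart in the other viewpoint."""
    return sorted(functional - implementation.keys())

missing = check_correspondence(functional_view, implementation_view)
print("inconsistent elements:", missing)   # ['Brake_Actuation']
```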
Article
As Web 2.0 becomes one of the important architectural styles, more web applications adopt a single-page structure instead of multiple web pages and navigation between pages. A single-page web application client, called a mashup client in this paper, interfaces with more than one service and allows users to navigate within the page. A mashup client page includes complicated functions and has to handle various styles of services and user requirements, and is therefore usually developed manually. In this paper, we propose a model-driven code generation approach for in-page navigation. We propose a page model and a view navigation design approach, applying REST service architecture patterns. Then, we consider type conditions for each view to have service calls or navigation controls. We also developed an XForms page code generation system to demonstrate the efficiency of the proposed method. The developed system generates mashup client pages including navigation controls between services and views. This system can generate ready-to-use code from service specifications, which helps reduce development overhead. Moreover, our approach is based on a formal model and navigation patterns, so the generated code is simple, easy to understand, and includes only the necessary controls. Therefore, the proposed approach can be more effective when a large number of services is involved.
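The model-driven idea can be illustrated with a toy Python generator; the service-specification format and the emitted HTML below are invented stand-ins for the paper's formal page model and XForms output.

```python
# Toy generator: derive in-page navigation controls from declarative specs.

services = [
    {"name": "orders",    "url": "/api/orders",    "view": "order_list"},
    {"name": "customers", "url": "/api/customers", "view": "customer_list"},
]

def generate_nav(specs) -> str:
    """Emit one navigation control per service; each switches the active view."""
    items = [f'  <button data-url="{s["url"]}" data-view="{s["view"]}">'
             f'{s["name"]}</button>' for s in specs]
    return "<nav>\n" + "\n".join(items) + "\n</nav>"

print(generate_nav(services))
```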
Article
In this paper we review five years of research in the field of automated crawling and testing of web applications. We describe the open source Crawljax tool and the various extensions that have been proposed to address issues such as cross-browser compatibility testing, web application regression testing, and style sheet usage analysis. Based on that, we identify the main challenges and future directions of crawl-based testing of web applications. In particular, we explore ways to reduce the exponential growth of the state space, as well as ways to involve the human tester in the loop, thus reconciling manual exploratory testing and automated test input generation. Finally, we sketch the future of crawl-based testing in the light of upcoming developments, such as the pervasive use of touch devices and mobile computing, and the increasing importance of cyber-security.
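One way to picture crawl-based test generation and the state-explosion problem (a hedged sketch of my own, not Crawljax's algorithm): enumerate bounded event paths through a recorded state-flow graph and replay each as a test scenario. Bounding the path length is one crude answer to the exponential growth mentioned above; the graph below is invented.

```python
# Sketch: derive replayable test scenarios from a crawled state-flow graph.

def event_paths(graph, start, max_depth=3):
    """Enumerate event sequences up to max_depth; each is a test scenario."""
    paths = []
    def walk(state, path, depth):
        if path:
            paths.append(path)
        if depth == 0:
            return
        for event, nxt in graph.get(state, {}).items():
            walk(nxt, path + [event], depth - 1)
    walk(start, [], max_depth)
    return paths

graph = {"s0": {"login": "s1"}, "s1": {"add_item": "s1", "checkout": "s2"}}
for p in event_paths(graph, "s0"):
    print(" -> ".join(p))   # replay each sequence and assert on the DOM
```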
Thesis
Full-text available
The evolution of computer technology follows a trajectory of miniaturization and diversification. The technology has developed from mainframes (large computers used by many people) to personal computers (one computer per person) and, recently, embedded computers (many computers per person). One of the smallest embedded computers is a wireless sensor node, which is a battery-powered miniaturized device equipped with processing capabilities, memory, wireless communication and sensors that can sense the physical parameters of the environment. A collection of sensor nodes that communicate through the wireless interface form a Wireless Sensor Network (WSN), which is an ad-hoc, self-organizing network that can function unattended for long periods of time. Although traditionally WSNs have been regarded as static sensor arrays used mainly for environmental monitoring, recently, WSN applications have undergone a paradigm shift from static to more dynamic environments, where nodes are attached to moving objects, people or animals. Applications that use WSNs in motion are broad, ranging from transport and logistics to animal monitoring, health care and military, just to mention a few. These application domains have a number of characteristics that challenge the algorithmic design of WSNs. Firstly, mobility has a negative effect on the quality of the wireless communication and the performance of networking protocols. Nevertheless, it has been shown that mobility can enhance the functionality of the network by exploiting the movement patterns of mobile objects. Secondly, the heterogeneity of devices in a WSN has to be taken into account for increasing the network performance and lifetime. Thirdly, the WSN services should ideally assist the user in an unobtrusive and transparent way. Fourthly, energy-efficiency and scalability are of primary importance to prevent network performance degradation. This thesis focuses on the problems and enhancements brought in by network mobility, while also accounting for heterogeneity, transparency, energy efficiency and scalability. We propose a set of algorithms that enable WSNs to self-organize efficiently in the presence of mobility, and to adapt to and even exploit dynamics to increase the functionality of the network. Our contributions include an algorithm for motion detection, a set of clustering algorithms that can be used to handle mobility efficiently, and a service discovery protocol that enables dynamic user access to the WSN functionality.
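As a flavor of the clustering problem under mobility (an invented toy heuristic, not one of the thesis's algorithms), the Python sketch below elects each node's best-connected neighbor as cluster head from the current, possibly changing, adjacency.

```python
# Toy one-round clustering pass: each node joins the highest-degree neighbor
# (including itself), a common heuristic when nodes move and links change.

neighbors = {          # adjacency from current radio contacts (invented)
    "a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"},
}

def elect_heads(nbrs):
    degree = {n: len(v) for n, v in nbrs.items()}
    heads = {}
    for node in nbrs:
        candidates = nbrs[node] | {node}
        # deterministic tie-break on (degree, name)
        heads[node] = max(candidates, key=lambda n: (degree[n], n))
    return heads

print(elect_heads(neighbors))   # every node joins the best-connected neighbor
```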
Article
Full-text available
Legacy systems constitute valuable assets to the organizations that own them, and today there is an increased demand to make them accessible through the World Wide Web to support e-commerce activities. As a result, the problem of legacy-interface migration is becoming very important. In the context of the CELLEST project, we have developed a new process for migrating legacy user interfaces to web-accessible platforms. Instead of analyzing the application code to extract a model of its structure, the CELLEST process analyzes traces of the system-user interaction to model the behavior of the application's user interface. The produced state-transition model specifies the unique legacy-interface screens (as states) and the possible commands leading from one screen to another (as transitions between the states). The interface screens are identified as clusters of similar-in-appearance snapshots in the recorded trace. Next, the syntax of each transition command is extracted as the pattern shared by all the transition instances found in the trace. This user-interface model is used as the basis for constructing models of the tasks performed by the legacy-application users; these task models are subsequently used to develop new web-accessible interface front ends for executing these tasks. In this paper, we discuss the CELLEST method for reverse engineering a state-transition model of the legacy interface, illustrate it with examples, report the results of our experimentation with it, and discuss how this model can be used to support the development of new interface front ends.
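The two reverse-engineering steps, clustering snapshots into screens and reading transitions off the interaction trace, can be sketched as follows; the similarity measure and the trace are invented placeholders for CELLEST's actual techniques.

```python
# Illustrative sketch: cluster similar screen snapshots into states, then
# record (from-screen, command, to-screen) transitions from the trace.
from difflib import SequenceMatcher

def screen_of(snapshot, screens, threshold=0.8):
    """Assign a snapshot to an existing screen cluster, or start a new one."""
    for sid, representative in screens.items():
        if SequenceMatcher(None, snapshot, representative).ratio() >= threshold:
            return sid
    sid = f"screen{len(screens)}"
    screens[sid] = snapshot
    return sid

trace = [("MENU  1.Accounts 2.Exit", "1"), ("ACCOUNT LIST ...", "PF3"),
         ("MENU  1.Accounts 2.Exit", "2")]
screens, transitions = {}, set()
prev = None
for snapshot, command in trace:
    state = screen_of(snapshot, screens)
    if prev is not None:
        transitions.add((prev[0], prev[1], state))   # (from, command, to)
    prev = (state, command)
print(transitions)   # the menu screen recurs, so it maps to one state
```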
Article
Doctoral dissertation, Eindhoven University of Technology. With index and bibliography. With a summary in Dutch.
Article
This thesis is concerned with two research areas in natural computing: the computational nature of gene assembly and membrane computing. Gene assembly is a process occurring in unicellular organisms called ciliates. During this process genes are transformed through cut-and-paste operations. We study this process from a theoretical point of view. More specifically, we relate the theory of gene assembly to sorting by reversal, which is another well-known theory of DNA transformation. In this way we obtain a novel graph-theoretical representation that provides new insights into the nature of gene assembly. Membrane computing is a computational model inspired by the functioning of membranes in cells. Membrane systems compute in a parallel fashion by moving objects, through membranes, between compartments. We study the computational power of various classes of membrane systems, and also relate them to other well-known models of computation.
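The reversal operation at the heart of sorting by reversal is easy to state concretely; the short Python sketch below applies one signed reversal and is only a worked example, not the thesis's formalism.

```python
# A reversal flips a segment of a signed permutation and negates its signs;
# sorting means reaching the identity (+1 ... +n).

def reversal(perm, i, j):
    """Reverse perm[i..j] inclusive, flipping the sign of each element."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

p = [+1, -3, -2, +4]           # a signed permutation of 1..4
p = reversal(p, 1, 2)          # flip the segment (-3, -2)
print(p)                       # [1, 2, 3, 4]: sorted in one reversal
```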