# 2012 technical reports

CS-2012-01
Title "Looks cool, I'll try this later!": Understanding the Faces and Uses of Online Tutorials
Authors Ben Lafreniere, Andrea Bunt, Matthew Lount and Michael Terry
Abstract Despite their prevalence, little is known about how users make use of web tutorials for feature-rich applications, or why authors take the time to create them. In this paper, we analyze comments posted to web tutorials to understand the purposes that tutorials serve for users, and examine the tutorials themselves to understand their structure and their authors' motivations for creating them. Our results indicate that users come to tutorials for help performing techniques with applications, and to achieve complex end results beyond their current expertise. As for tutorial authors, we found that most tutorials had clear extrinsic reasons for existing, including earning ad or referral revenue, promoting premium products or services, or acting as portfolios for their authors. Based on these results, we suggest a number of improvements to current tutorial interfaces.
Date January 5, 2012
Report "Looks cool, I'll try this later!": Understanding the Faces and Uses of Online Tutorials (PDF)
CS-2012-02
Title A Web-based Framework for Collaborative Innovation
Authors Donald Cowan, Paulo Alencar, Fred McGarry and Carlos Lucena
Abstract Twenty-first-century global change in most sectors is challenging us and our communities in the management and use of resources. The World Economic Forum has recognized the need for new approaches to enable collective action among both the leadership and concerned members of communities. We call this approach of working together to produce societal solutions collaborative innovation (CI). CI is part of web science, as it brings together information networks of people and communities who use and augment the digital records related to common problems, mediated by the web. This paper outlines an approach to CI based on dynamic asset-mapping and how it has been supported through a web-based framework that requires both communication and operations on a knowledge base or asset map. Based on our experience using the framework in designing and deploying over 70 systems that incorporate dynamic asset-mapping as a foundation for CI, it is clear that CI takes many forms. We illustrate some of these forms through specific examples in environment, socio-economic development and planning. We conclude that it is not possible to build a single set of tools to support CI, and that users need access to meta-tools and frameworks to implement tailored systems supporting CI directly, rather than relying on people with in-depth knowledge of the technologies. We then outline some of the properties of a set of software meta-tools that have been used to implement these systems.
Date February 07, 2012
Report A Web-based Framework for Collaborative Innovation (PDF)
CS-2012-03
Title A Study on Justifications for Choices: Explanation Patterns and Guidelines
Authors Ingrid Oliveira de Nunes, Simon Miles, Michael Luck and Carlos José Pereira de Lucena
Abstract Many different forms of explanation have been proposed for justifying decisions made by automated systems. However, there is no consensus on what constitutes a good explanation, or what information these explanations should include. In this paper, we present the results of a study into how people justify their decisions. Analysis of our results allowed us to extract the forms of explanation adopted by users to justify choices, and the situations in which these forms are used. The analysis led to the development of guidelines and patterns for explanations to be generated by automated decision systems. This paper presents the study, its results, and the guidelines and patterns we derived.
Date February 16, 2012
Report A Study on Justifications for Choices: Explanation Patterns and Guidelines (PDF)
CS-2012-04
Title Assertion Absorption in Object Queries over Knowledge Bases
Authors Jiewen Wu, Alexander Hudek, David Toman and Grant Weddell
Abstract We present a novel optimization for evaluating object queries, especially instance queries, over description logic knowledge bases. The method is designed to perform well for large ABoxes and expressive TBoxes, e.g., where background knowledge requires the use of disjunction and negation. The method significantly improves the performance of answering object queries, particularly in cases where a large number of concrete feature values are included. We also report on the results of an experimental evaluation that validates the efficacy of the optimization.
Date August 21, 2012
Report Assertion Absorption in Object Queries over Knowledge Bases (PDF)
CS-2012-05
Title A Feature-Oriented Requirements Modelling Language
Authors Pourya Shaker and Joanne M. Atlee
Abstract In this report, we present FORML, a language for modelling the requirements of features in a software product line. FORML aims to support feature modularity and precise requirements modelling, and to ease the task of adding new features to a set of existing requirements. In particular, FORML decomposes a product line’s requirements into feature modules, and provides language support for specifying tightly-coupled features as model fragments that extend and override existing feature modules. We discuss how decisions in the design of FORML affect the succinctness of feature modules, the evolvability of requirements models, and the specification of intended interactions among related features. We applied FORML to the specification of a set of telephony features, the models of which are included in the report.
Date March 9, 2012
Report A Feature-Oriented Requirements Modelling Language (PDF)
CS-2012-06
Title A Framework for Adaptive Workflow (Model Human Behaviour - Don’t Constrain It!)
Authors H. Dominic Covvey, Donald D. Cowan, Paulo Alencar, William Malyk, Joel So, D. Henriques and Shirley L. Fenton
Abstract Healthcare processes are complex and highly variable from day to day. Healthcare process execution can be affected by any participant in a process, including clinicians, the patient, and the patient’s family, as well as by environmental factors such as clinician, staff, facility and equipment availability, and patient clinical status. However, there are no solutions that enable computer support for a process to address the full complexity and variability of healthcare processes. We have re-conceptualized workflow and developed an innovative process representation and execution framework based on concepts from software engineering, machine learning, complexity, and database management. This new framework frees processes to track human behaviour, thereby releasing us from the constraints of past methods.
Date March 28, 2012
Report A Framework for Adaptive Workflow (Model Human Behaviour - Don’t Constrain It!) (PDF)
CS-2012-07
Title Pathwidth and small-height drawings of 2-connected outer-planar graphs
Authors Therese Biedl
Abstract In this paper, we study planar drawings of 2-connected outer-planar graphs. In an earlier paper we showed that every such graph has a visibility representation with height O(log n).  In this paper, we show that with a different construction, the height is 4pw(G)-3, where pw(G) denotes the pathwidth of graph G.  Since for any planar graph G, any planar drawing has height >= pw(G), this is a 4-approximation algorithm for the height. We also show that our visibility representations can be converted into straight-line drawings of the same height.
Date April 17, 2012
Report Pathwidth and small-height drawings of 2-connected outer-planar graphs (PDF)
CS-2012-08
Title On the Impact of Storage in Residential Power Distribution Systems
Authors Omid Ardakanian, Catherine Rosenberg, and S. Keshav
Abstract It is anticipated that energy storage will be incorporated into the distribution network component of the future smart grid to allow desirable features such as distributed generation integration and reduction in the peak demand. There is, therefore, an urgent need to understand the impact of storage on distribution system planning. In this paper, we focus on the effect of storage on the loading of neighbourhood pole-top transformers. We apply a probabilistic sizing technique originally developed for sizing buffers and communication links in telecommunications networks to jointly size storage and transformers in the distribution network. This allows us to compute the potential gains from transformer upgrade deferral due to the addition of storage. We validate our results through numerical simulation using measurements of home load in a testbed of 20 homes and demonstrate that our guidelines allow local distribution companies to defer transformer upgrades without reducing reliability.
Date May 2, 2012
Report On the Impact of Storage in Residential Power Distribution Systems (PDF)
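The joint transformer-and-storage sizing idea can be pictured with a toy peak-shaving model. The sketch below is purely illustrative (the greedy policy and all names are my own, not the paper's probabilistic buffer-sizing technique): a battery discharges to shave demand peaks above transformer capacity and recharges only from spare headroom, and we measure how often the transformer is overloaded.

```python
def overload_fraction(loads, capacity, battery):
    """Fraction of intervals in which net transformer load exceeds
    `capacity`, with a battery of energy `battery` (in load units per
    interval) used greedily for peak shaving. Recharging is capped at
    the spare headroom so it never causes an overload itself."""
    charge, overloads = battery, 0
    for load in loads:
        if load > capacity:
            shave = min(load - capacity, charge)  # discharge to shave peak
            charge -= shave
            load -= shave
        else:
            refill = min(battery - charge, capacity - load)  # recharge
            charge += refill
            load += refill
        if load > capacity:
            overloads += 1
    return overloads / len(loads)
```

Even a small battery removes the single peak in this toy trace, which is the intuition behind deferring a transformer upgrade.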
CS-2012-09
Title In Silico Spectral Investigation of Methemoglobin and Sulfhemoglobin Effects on Human Skin Reflectance
Authors Gladimir V. G. Baranoski, Tenn F. Chen, Bradley W. Kimmel, Erik Miranda, and Daniel Yim
Abstract There are several pathologies whose study and diagnosis are impaired by a relatively small number of documented cases. A practical approach to overcome this obstacle and advance research in this area consists in employing computer simulations to perform controlled in silico experiments. The results of these experiments, in turn, may be incorporated in the design of differential protocols for these pathologies. Accordingly, in this paper, we investigate the spectral responses of human skin affected by the presence of abnormal amounts of two dysfunctional hemoglobins, methemoglobin and sulfhemoglobin, which are associated with two life-threatening medical conditions, methemoglobinemia and sulfhemoglobinemia, respectively. We analyse the results of our in silico experiments, and discuss their potential applications to the development of more effective noninvasive differentiation and monitoring procedures for these medical conditions.
Date June 4, 2012
Report In Silico Spectral Investigation of Methemoglobin and Sulfhemoglobin Effects on Human Skin Reflectance (PDF)
CS-2012-10
Title A Qualitative Study of Mozilla's Process Management Practices
Authors Olga Baysal and Reid Holmes
Abstract The Mozilla Anthropology project was started in late 2011 to examine how various Mozilla community stakeholders make use of Bugzilla in practice, and to gain a sense of how Bugzilla could be improved to better support the community. In this report, we present the results of a qualitative study of interview data from community members about their engagement and experience with the Bugzilla bug tracking system. We performed an open card sort to gain insight into high-level themes about Bugzilla and to identify strengths, weaknesses, and ideas for future enhancement of the platform.
Date July 5, 2012
Report A Qualitative Study of Mozilla's Process Management Practices (PDF)
CS-2012-11
Title Sharing of Fire Fighting Resources
Authors Alan Tsang, Kate Larson, Rob McAlpine
Abstract The Canadian Interagency Forest Fire Centre (CIFFC) was created in 1982 with the primary mandate to facilitate the exchange of wildland fire fighting resources, including personnel, equipment and aircraft, between member agencies. Since its establishment, resource sharing across agencies has increased dramatically. However, there has been a growing recognition by the CIFFC Council of Directors and the Wildland Fire Management Working Group that the current levels of resource sharing will not be adequate for future activity. There have been several incidents in the recent past where there were insufficient resources to meet the national demand, and trends in changing climates, fire occurrence and the expansion of the wildland-urban interface, coupled with government fiscal trends of cost containment and reduction, all point to increased resource shortages. There is, thus, interest in investigating new solutions to the problem.
Although in Canada we have been sharing resources between agencies for more than 25 years, to the best of our knowledge little is known about the strategic process that goes into the borrow/lend decision. "When does an agency request resources?" is, perhaps, an easy question; the more difficult, and perhaps more interesting, question is "When and how do we lend?". Understanding the parameters that guide this decision among the lending agencies could provide a profound understanding of the nature of lending in the country and some guidance for mitigating barriers to resource sharing. To this end, we have commenced a study of the factors that influence decision-making with respect to resource sharing across Canada. By conducting interviews with agencies across the country as well as with CIFFC, and by building a game-theoretic model to capture strategic aspects of resource sharing, we aim to provide insights into resource sharing across the country.
Date July 16, 2012
Report Sharing of Fire Fighting Resources (PDF)
CS-2012-12
Title Translating the Feature-Oriented Requirements Modelling Language to Alloy
Authors David Dietrich, Pourya Shaker, Jan Gorzny, Joanne Atlee and Derek Rayside
Abstract The Feature-Oriented Requirements Modelling Language (FORML) provides explicit support for modelling software product lines and systems that are composed of features. However, a problem that developers must be aware of when developing feature-oriented software is interactions between features. The purpose of this project is to provide a method for analyzing FORML models to detect interactions that may occur. To this end, a translator has been created that translates FORML models into Alloy models so that they can be analyzed to show a lack of unwanted interactions. This report discusses the implementation of this translator, gives several examples of how various structures in FORML are translated, and discusses the methods of analysis that have been implemented. A small evaluation is also performed to demonstrate how the translator can be used in practice.
Date August 24, 2012
Report Translating the Feature-Oriented Requirements Modelling Language to Alloy (PDF)
CS-2012-13
Title Delay Differentiation without Rate Tinkering
Authors Martin Karsten, Jens Schmitt, and Michael Beck
Abstract Link buffering is a key element in packet-switched networks to absorb temporary traffic bursts without excessive dropping. The resulting queuing delay is a critical performance factor. Research on buffer management typically studies differentiated buffer allocation to control forwarding rates or buffer size management to control the overall queuing delay. On the other hand, delay differentiation is typically accomplished by packet scheduling and thus implies rate differentiation. In this paper a radically simpler approach to delay differentiation through buffer management is studied. Instead of actively managing throughput, Delay Differentiated FIFO (DDF) uses a simple drop-based mechanism to offer multiple delay classes, but closely tracks the per-class throughput of the corresponding single-class FIFO queuing system. End systems choose which delay class best balances their loss and delay requirements. Architecturally, DDF does not interfere with management and policy decisions at end and edge systems and does not add another control loop to the existing mix of traffic management techniques. The forwarding characteristics of DDF are analyzed using stochastic models. Refinements to the basic algorithm are presented and it is shown how DDF can be implemented with very little execution overhead compared to FIFO. Packet-level simulation results are presented to complement the analytical findings.
Date August 23, 2012
Report Delay Differentiation without Rate Tinkering (PDF)
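The drop-based class separation described in the abstract can be sketched in a few lines. This is a hypothetical toy, not the DDF algorithm itself: one shared FIFO whose admission threshold depends on the packet's delay class, so packets of a low-delay class are dropped rather than queued behind a long backlog, while forwarding order and rate remain plain FIFO.

```python
from collections import deque

class ClassedFIFO:
    """Toy delay-differentiated FIFO: a single shared queue, but a
    packet of class c is admitted only while the current backlog is
    below that class's threshold. Lower thresholds bound queueing
    delay for that class at the price of a higher drop probability."""
    def __init__(self, thresholds):
        self.thresholds = thresholds  # class -> max backlog at admission
        self.q = deque()

    def enqueue(self, pkt, cls):
        if len(self.q) < self.thresholds[cls]:
            self.q.append((pkt, cls))
            return True
        return False  # dropped: backlog too long for this delay class

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Because all admitted packets share one FIFO, per-class throughput tracks the single-class FIFO, which is the property the paper emphasizes.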
CS-2012-14
Title On Minimum- and Maximum-Weight Minimum Spanning Trees with Neighbourhoods
Authors Reza Dorrigiv, Robert Fraser, Meng He, Shahin Kamali, Akitoshi Kawamura, Alejandro Lopez-Ortiz and Diego Seco
Abstract We study optimization problems for the Euclidean minimum spanning tree (MST) on imprecise data. To model imprecision, we accept a set of disjoint disks in the plane as input. From each member of the set, one point must be selected, and the MST is computed over the set of selected points. We consider both minimizing and maximizing the weight of the MST over the input. The minimum weight version of the problem is known as the minimum spanning tree with neighbourhoods (MSTN) problem, and the maximum weight version (max-MSTN) has not been studied previously to our knowledge. We provide deterministic and parameterized approximation algorithms for the max-MSTN problem, and a parameterized algorithm for the MSTN problem. Additionally, we present hardness of approximation proofs for both settings.
Date August 27, 2012
Report On Minimum- and Maximum-Weight Minimum Spanning Trees with Neighbourhoods (PDF)
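To make the objective concrete, the sketch below computes the weight of a Euclidean MST over a chosen set of representative points, using plain Prim's algorithm (an illustration of the quantity being optimized, not the paper's approximation algorithms). Choosing different representatives from the same disks changes the MST weight, which is exactly what MSTN minimizes and max-MSTN maximizes.

```python
import math

def mst_weight(points):
    """Weight of the Euclidean minimum spanning tree over a point set,
    via Prim's algorithm on the implicit complete graph."""
    n = len(points)
    if n <= 1:
        return 0.0
    # best[j] = cheapest edge connecting point j to the growing tree
    best = {i: math.dist(points[0], points[i]) for i in range(1, n)}
    total = 0.0
    while best:
        j = min(best, key=best.get)   # attach the closest outside point
        total += best.pop(j)
        for k in best:
            best[k] = min(best[k], math.dist(points[j], points[k]))
    return total
```

For two disks of radius 1 centred at (0,0) and (4,0), picking the facing boundary points gives MST weight 2, while picking the far boundary points gives 6: the min and max versions of the problem select very different representatives.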
CS-2012-15
Title Minimizing Cache Usage in Paging
Authors Alejandro López-Ortiz and Alejandro Salinger
Abstract Traditional paging models seek algorithms that maximize their performance while using the maximum amount of cache resources available. However, in many applications this resource is shared or its usage involves a cost. In this work we introduce the Minimum Cache Usage problem, an extension of the classic paging problem that accounts for the efficient use of cache resources by paging algorithms. In this problem, the cost of a paging algorithm is a combination of both its number of faults and the amount of cache it uses, where the relative cost of faults and cache usage can vary with the application. We present a simple family of online paging algorithms that adapt to the ratio between cache and fault costs, achieving competitive ratios that vary with this ratio and lie between 2 and the cache size k. Furthermore, for sequences with high locality of reference, we show that the competitive ratio is at most 2, and provide evidence of the competitiveness of our algorithms on real-world traces. Finally, we show that the offline problem admits a polynomial-time algorithm. In doing so, we define a reduction of paging with cache usage to weighted interval scheduling on identical machines.
Date August 30, 2012
Report Minimizing Cache Usage in Paging (PDF)
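The combined fault-plus-usage cost can be illustrated with a toy LRU simulation. This is a sketch of the cost measure only (the charging scheme and names are my own simplifications, not the authors' adaptive algorithms): each fault costs `fault_cost`, and every occupied cache cell costs one unit per request served.

```python
from collections import OrderedDict

def lru_cost(trace, capacity, fault_cost):
    """Run LRU with a fixed cache capacity over `trace` and return the
    combined cost: fault_cost per fault, plus one unit per occupied
    cache cell per request (a simplified cache-usage charge)."""
    cache = OrderedDict()
    faults = usage = 0
    for page in trace:
        if page in cache:
            cache.move_to_end(page)           # hit: refresh recency
        else:
            faults += 1
            if len(cache) >= capacity:
                cache.popitem(last=False)     # evict least recently used
            cache[page] = True
        usage += len(cache)                   # charge for cells in use
    return fault_cost * faults + usage
```

On a cyclic trace over three pages, a large fault cost favours holding all three pages while a small fault cost makes the extra cache cells the dominant expense, so the cost-minimizing cache size shifts with the fault/usage ratio, which is the trade-off the problem formalizes.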
CS-2012-16
Title αRoute: A Name Based Routing Scheme for Information Centric Networks
Authors Reaz Ahmed, Md. Faizul Bari, Shihabur Rahman Chowdhury, Md. Golam Rabbani, Raouf Boutaba and Bertrand Mathieu
Abstract One of the crucial building blocks for Information Centric Networking (ICN) is a name-based routing scheme that can route directly on content names instead of IP addresses. However, moving the address space from IP addresses to content names brings scalability issues to a whole new level, for two reasons. First, name aggregation is not as trivial a task as IP address aggregation in BGP routing. Second, the number of addressable contents in the Internet is several orders of magnitude higher than the number of IP addresses. At the current size of the Internet, name-based anycast routing is very challenging, especially when routing efficiency is of prime importance. We propose a novel name-based routing scheme (αRoute) for ICN that offers efficient bandwidth usage, guaranteed content lookup and scalable routing table size. αRoute consists of two components: an alphanumeric Distributed Hash Table (DHT) and an overlay-to-underlay (Internet topology) mapping algorithm. Simulation results show that αRoute performs significantly better than Content Centric Network (CCN) [1] in terms of network bandwidth usage, lookup latency and load balancing.
Date September 4, 2012
Report αRoute: A Name Based Routing Scheme for Information Centric Networks (PDF)
CS-2012-17
Title DEWS: A Decentralized Engine for Web Search
Authors Reaz Ahmed, Rakibul Haque, Md. Faizul Bari, Raouf Boutaba and Bertrand Mathieu
Abstract The way we explore the web is largely governed by centrally controlled, clustered search engines, which is not healthy for the freedom of the Internet. A better solution is to enable the web to index itself in a decentralized manner. In this work we propose a decentralized web search mechanism, named DEWS, which enables existing web servers to collaborate with each other to form a distributed index of the web. DEWS can rank search results based on query keyword relevance and the relative importance of websites. DEWS also supports approximate matching of query keywords and incremental retrieval of search results in a decentralized manner. We use the standard LETOR 3.0 dataset to validate the DEWS protocol. Simulation results show that the ranking accuracy of DEWS is very close to the centralized case, while the network overhead for collaborative search and indexing is logarithmic in the network size. Simulation results also show that DEWS is resilient to changes in the available pool of indexing web servers and works efficiently even in the presence of heavy query load.
Date September 4, 2012
Report DEWS: A Decentralized Engine for Web Search (PDF)
CS-2012-18
Title Persistence Service for Non-Persistent P2P Systems
Authors Reaz Ahmed, Nashid Shahriar, Mahfuza Sharmin, Raouf Boutaba and Bertrand Mathieu
Abstract Ensuring content persistence with minimal replication overhead is a prerequisite for providing any consistent service over a peer-to-peer (P2P) overlay. This paper introduces S-DATA, a bandwidth-efficient protocol for achieving highly available P2P systems with minimal replication overhead. In a global P2P system, the cyclic uptime behaviour of peers in different time zones can be complementary. In S-DATA, peers with complementary diurnal availability patterns collaborate in small replication groups and host each other's content in turn to ensure 24/7 availability. In this work we present a mathematical model for measuring time-based availability with (β - 1) redundancy as a function of replication group size and peer uptime behaviour. We also simulate the S-DATA protocol in the PeerSim simulator and compare its performance against a few other time-based replication protocols.
Date September 4, 2012
Report Persistence Service for Non-Persistent P2P Systems (PDF)
CS-2012-19
Title EdgeCloud: A New Hybrid Platform for On-Demand Gaming
Authors Sharon Choy, Bernard Wong, Gwendal Simon and Catherine Rosenberg
Abstract Cloud computing has been a revolutionary force in changing the way organizations deploy web applications and services. However, many of cloud computing's core design tenets, such as consolidating resources into a small number of datacenters and fine-grain partitioning of general purpose computing resources, conflict with an emerging class of multimedia applications that is highly latency-sensitive and requires specialized hardware, such as graphics processing units (GPUs) and fast memory. In this paper, we look closely at one such application, namely, on-demand gaming (also known as cloud gaming), that has the potential to radically change the multi-billion dollar video game industry. We demonstrate through a large-scale measurement study that the current cloud computing infrastructure is unable to meet the strict latency requirements necessary for acceptable game play for many end-users. In response to these results, we propose augmenting the existing cloud infrastructure with an EdgeCloud, consisting of end-hosts that are more geographically diverse than datacenters and also more likely to have specialized resources. We detail the challenges introduced by such a hybrid design and our solutions to these challenges, and demonstrate through a large-scale simulation that the addition of even a small number of end-hosts significantly reduces latency and improves client coverage for on-demand gaming.
Date September 20, 2012
Report EdgeCloud: A New Hybrid Platform for On-Demand Gaming (PDF)
CS-2012-20
Title On the Sublinear Processor Gap for Multi-Core Architectures
Authors Alejandro Lopez-Ortiz and Alejandro Salinger
Abstract In the past, parallel algorithms were developed, for the most part, under the assumption that the number of processors is $\Theta(n)$, and that if in practice the actual number was smaller, this could be resolved using Brent's Lemma to simulate the highly parallel solution on a lower-degree parallel architecture. In this paper, however, we argue that design and implementation issues of algorithms and architectures are significantly different, both in theory and in practice, between computational models with high and low degrees of parallelism. We report an observed gap in the behaviour of a CMP/parallel architecture depending on the number of processors. This gap appears repeatedly, both in empirical cases, when studying practical aspects of architecture design and program implementation, and in theoretical instances, when studying the behaviour of various parallel algorithms. It separates the performance, design and analysis of systems with a sublinear number of processors from those of systems with linearly many processors. More specifically, we observe that systems with either logarithmically many cores or with $O(n^\alpha)$ cores (with $\alpha<1$) exhibit qualitatively different behaviour from a system whose number of cores is linear in the input size, i.e., $\Theta(n)$. The evidence we present suggests the existence of a sharp theoretical gap between the classes of problems that can be efficiently parallelized with $o(n)$ processors and with $\Theta(n)$ processors, unless $NC=P$.
Date October 30, 2012
Report On the Sublinear Processor Gap for Multi-Core Architectures (PDF)
CS-2012-21
Title Algorithms in the Ultra-Wide Word Model
Authors Arash Farzan, Alejandro Lopez-Ortiz, Patrick K. Nicholson and Alejandro Salinger
Abstract The effective use of parallel computing resources to speed up algorithms in current multicore and other parallel architectures remains a difficult challenge, with ease of programming playing a key role in the eventual success of these architectures. In this paper we consider an alternative view of parallelism in the form of an ultra-wide word processor. We introduce the Ultra-Wide Word architecture and model, an extension of the word-RAM model, that allows for constant time operations on thousands of bits in parallel. Word parallelism as exploited by the word-RAM model does not suffer from the more difficult aspects of parallel programming, namely synchronization and concurrency. In practice, the speedups obtained by word-RAM algorithms are moderate, as they are limited by the word size. We argue that a large class of word-RAM algorithms can be implemented in the Ultra-Wide Word model, obtaining speedups comparable to multi-threaded computations while keeping the simplicity of programming of the sequential RAM model. We show that this is the case by describing implementations of Ultra-Wide Word algorithms for dynamic programming and string searching. In addition, we show that the Ultra-Wide Word model can be used to implement a non-standard memory architecture, which enables the sidestepping of lower bounds of important data structure problems such as priority queues and dynamic prefix sums.
Date October 30, 2012
Report Algorithms in the Ultra-Wide Word Model (PDF)
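A classic instance of the word parallelism this abstract builds on is bit-parallel string matching. The sketch below is illustrative rather than taken from the report: it implements the textbook Shift-And algorithm, with Python's unbounded integers standing in for a (potentially ultra-wide) machine word, so one "word operation" updates the state of all partial matches at once.

```python
def shift_and_search(text, pattern):
    """Bit-parallel Shift-And string matching. Bit i of `state` is set
    iff pattern[0..i] matches the text ending at the current position.
    Returns the starting indices of all occurrences of `pattern`."""
    m = len(pattern)
    # Per-character bit mask: bit i set iff pattern[i] == c
    mask = {c: 0 for c in set(text) | set(pattern)}
    for i, c in enumerate(pattern):
        mask[c] |= 1 << i
    state, accept, found = 0, 1 << (m - 1), []
    for j, c in enumerate(text):
        # Extend every partial match by one character in O(1) word ops
        state = ((state << 1) | 1) & mask[c]
        if state & accept:
            found.append(j - m + 1)
    return found
```

On a w-bit word this runs in O(n ceil(m/w)) time; the Ultra-Wide Word model's point is that making w very large turns many such word-RAM algorithms into constant-overhead parallel ones.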
CS-2012-22
Title Broadcasting in Conflict-Aware Multi-Channel Networks
Authors Francisco Claude, Reza Dorrigiv, Shahin Kamali, Alejandro Lopez-Ortiz, Pawel Pralat, Jazmin Romero, Alejandro Salinger and Diego Seco
Abstract The broadcasting problem asks for the fastest way of transmitting a message to all nodes of a communication network. We consider the problem in conflict-aware multi-channel networks. These networks can be modeled as undirected graphs in which each edge is labeled with a set of channels available for transmitting data between its endpoints. Each node can send and receive data through any channel on its incident edges, with the restriction that it cannot successfully receive through a channel when multiple neighbours send data via that channel simultaneously. We present efficient algorithms as well as hardness results for the broadcasting problem on various network topologies. We propose polynomial-time algorithms for optimal broadcasting in grids, and also for trees when there is only one channel on each edge. Nevertheless, we show that the problem is NP-hard for trees in general, as well as for complete graphs. In addition, we consider balanced complete graphs and propose a policy for assigning channels to them. This policy, together with its embedded broadcasting schemes, results in fault-tolerant networks with optimal broadcasting time.
Date November 20, 2012
Report Broadcasting in Conflict-Aware Multi-Channel Networks (PDF)
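For intuition, the single-channel tree case is closely related to the classical single-port broadcast problem, which has a well-known greedy solution: an informed node should serve the child whose subtree needs the most rounds first. The sketch below is that textbook algorithm, not necessarily the scheme in the report, and it ignores the multi-channel conflict model entirely.

```python
def broadcast_time(tree, root):
    """Minimum number of rounds to broadcast from `root` in a tree under
    the single-port model, where each informed node can inform one
    neighbour per round. `tree` is an adjacency dict. Greedy rule:
    sort children by subtree broadcast time, serve the slowest first."""
    def bt(v, parent):
        times = sorted((bt(c, v) for c in tree[v] if c != parent),
                       reverse=True)
        # Child served in round i still needs its own subtree time after
        return max((t + i for i, t in enumerate(times, 1)), default=0)
    return bt(root, None)
```

A star with three leaves needs three rounds (the root informs one leaf per round), while a three-node path needs two, matching the formula.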
CS-2012-23
Title A Web-based Toolkit for Collaborative Innovation
Authors Donald Cowan, Terry Wilkinson, Douglas Mulholland, Paulo Alencar and Fred McGarry
Abstract In a previous report we outlined the need for a web-based framework for collaborative innovation. In this report we describe the toolkit, meta-components and meta-processes that are needed to support such activity.
Date November 28, 2012
Report A Web-based Toolkit for Collaborative Innovation (PDF)
CS-2012-24
Title smv-morph: Working with Cadence SMV models in Moscow ML
Authors Alma L. Juarez-Dominguez and Nancy A. Day
Abstract We describe smv-morph: a toolkit of functions to manipulate and print SMV models (both Cadence and NuSMV) in Moscow ML. Our toolkit includes the ability to read Cadence SMV models and counterexamples produced by both SMV model checkers. In this report, we concentrate our explanations on functionality related to Cadence SMV.
Date December 10, 2012
Report smv-morph: Working with Cadence SMV models in Moscow ML (PDF)
CS-2012-25
Title An Automated Verification of Property TP2 for Concurrent Collaborative Text Buffers
Abstract Using the Coq proof assistant, we present a mechanically verified proof that the operation transforms defined on text-buffer insertions and deletions, as outlined in Ressel et al.'s adOPTed algorithm, do not satisfy Ressel's transformation property TP2. We then apply similar techniques to Cormack's Calculus for Concurrent Update (CCU) and show that the CCU transformations also do not satisfy TP2.
Date December 17, 2012
Report An Automated Verification of Property TP2 for Concurrent Collaborative Text Buffers (PDF)
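The flavour of a TP2 check can be shown with textbook Ellis/Gibbs-style inclusion transforms for inserts and deletes (a stand-in for, not a copy of, Ressel's or Cormack's definitions; site ids break insert ties). TP2 requires that transforming a third concurrent operation along the two orders of two other concurrent operations gives the same result; the sketch below exhibits a triple where it fails.

```python
# Operations: ("ins", pos, site), ("del", pos), or "nop".
def it(a, b):
    """Inclusion transform of operation a against concurrent b
    (textbook-style rules, assumed for illustration)."""
    if a == "nop" or b == "nop":
        return a
    if a[0] == "ins" and b[0] == "ins":
        if a[1] < b[1] or (a[1] == b[1] and a[2] < b[2]):
            return a
        return ("ins", a[1] + 1, a[2])
    if a[0] == "ins":                      # insert vs. delete
        return a if a[1] <= b[1] else ("ins", a[1] - 1, a[2])
    if b[0] == "ins":                      # delete vs. insert
        return a if a[1] < b[1] else ("del", a[1] + 1)
    if a[1] == b[1]:                       # both delete the same char
        return "nop"
    return a if a[1] < b[1] else ("del", a[1] - 1)

def tp2_holds(o1, o2, o3):
    """TP2: transforming o3 past o1 then IT(o2,o1) must equal
    transforming o3 past o2 then IT(o1,o2)."""
    return it(it(o3, o1), it(o2, o1)) == it(it(o3, o2), it(o1, o2))
```

Three same-position inserts pass, but an insert at position 2, a delete at position 1, and an insert at position 1 diverge: along one order the two inserts never collide, along the other the delete shifts the first insert onto the second and the tie-break moves it. Brute-force search of this kind is how such counterexamples are found before formalizing them in a proof assistant.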
CS-2012-26
Title Materialized Views for Eventually Consistent Record Stores
Authors Changjiu Jin, Rui Liu, Kenneth Salem
Abstract Distributed, replicated keyed-record stores are often used by applications that place a premium on high availability and scalability. Such systems provide fast access to stored records given a primary key value, but access without the primary key may be very slow and expensive. This problem can be addressed using materialized views. Materialized views redundantly store records, or parts of records, and the redundant copies can be organized and distributed differently than the originals, e.g., according to the value of a secondary key. In this paper, we consider the problem of supporting materialized views in multi-master, eventually consistent keyed-record stores. Incremental maintenance of materialized views is challenging in such systems because there is no single master server responsible for serializing the updates to each record. We present a decentralized technique for incrementally maintaining materialized views in multi-master systems. We have implemented a prototype of our technique using Cassandra, a widely used system of this type. Using the prototype, we show that secondary-key-based access is much faster using materialized views than using Cassandra's native secondary indexes, but maintaining the views in the face of updates may be more expensive than maintaining indexes.
Date December 31, 2012
Report Materialized Views for Eventually Consistent Record Stores (PDF)
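The core idea of an incrementally maintained view can be sketched with a single-node toy store (purely illustrative; the paper's contribution is maintaining such views without a single master to serialize updates, which this sketch sidesteps entirely). Every write updates both the primary-key records and a secondary-key view, so lookups by the secondary attribute avoid a full scan.

```python
class ToyRecordStore:
    """Toy keyed-record store with one materialized view on a secondary
    attribute, maintained incrementally on every write."""
    def __init__(self, view_attr):
        self.view_attr = view_attr
        self.records = {}   # primary key -> record dict
        self.view = {}      # secondary value -> set of primary keys

    def put(self, pk, record):
        old = self.records.get(pk)
        if old is not None:
            # Incremental maintenance: retract the stale view entry
            self.view[old[self.view_attr]].discard(pk)
        self.records[pk] = record
        self.view.setdefault(record[self.view_attr], set()).add(pk)

    def by_secondary(self, value):
        """Secondary-key lookup served entirely from the view."""
        return {pk: self.records[pk] for pk in self.view.get(value, ())}
```

Note that `put` must read the old record to retract its view entry; in a multi-master, eventually consistent setting that read is not guaranteed to see the latest version, which is exactly the difficulty the paper addresses.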