CERAS Research Program

The Centre of Excellence for Research in Advanced Systems (CERAS) is an innovative, collaborative virtual organization that brings together researchers from IBM and from universities and research centres in Canada, the US and other countries. Its aim is to investigate technologies, techniques and methods for web-service-centric software, commonly known as Web 2.0. This next generation of web service technologies will enable new types of web applications, introduce new ways of interacting and collaborating over the Internet and create new business models.
CERAS’s objectives are to
  • create and test seed ideas for distributed virtual enterprises,
  • demonstrate how emerging applications can be developed, deployed and run more effectively on a virtual infrastructure, and
  • explore the concept of an academic virtual campus.
CERAS will investigate two technological aspects: virtualization of computing resources and model-driven engineering, with the goal of making a web-service-based IT infrastructure easier to develop, deploy, reconfigure, maintain and adapt. Of particular interest is a perpetual beta environment in which applications evolve continuously. In this environment, a cycle of application design and development, deployment, operations, and run-time analysis is repeated, with the analysis results feeding back into application design.
Included in our investigation are applications that rely on dynamically discovering and composing application services. Using this approach, an application may be built from components provided by different parties and may require access to data available at remote locations. CERAS will initially focus on applications in the life science and automotive areas; other application areas will be added in the future. The types of applications to be deployed on a virtual infrastructure may range from batch to highly interactive.
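To make the discover-and-compose idea concrete, the following is a minimal sketch of a service registry with run-time composition. The registry class, capability names, and sample services are illustrative assumptions, not part of any CERAS infrastructure.

```python
# Sketch of dynamic service discovery and composition.
# ServiceRegistry and the capability strings are hypothetical.
from typing import Callable, Dict, List


class ServiceRegistry:
    """Publishes services under capability names and finds them on demand."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable] = {}

    def publish(self, capability: str, impl: Callable) -> None:
        self._services[capability] = impl

    def discover(self, capability: str) -> Callable:
        return self._services[capability]


def compose(registry: ServiceRegistry, capabilities: List[str]) -> Callable:
    """Chain discovered services into a single pipeline at run time."""
    stages = [registry.discover(c) for c in capabilities]

    def pipeline(data):
        for stage in stages:
            data = stage(data)
        return data

    return pipeline


# Two independent parties publish services; an application composes them.
registry = ServiceRegistry()
registry.publish("normalize", lambda xs: [x / max(xs) for x in xs])
registry.publish("threshold", lambda xs: [x for x in xs if x > 0.5])

app = compose(registry, ["normalize", "threshold"])
print(app([2, 5, 10]))  # → [1.0]
```

Because composition happens at run time, the same application can be rebound to a different provider of `normalize` without any change to the pipeline code.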
Future IT infrastructures are expected to support a much more diverse and a much greater number of applications. This will lead to new challenges in managing hardware and software resources. Moreover, different types of computing resources may be available, e.g., clusters, grids, and individual systems, and these resources may be scattered across many geographical regions and time zones. Effective resource management is therefore an important issue. A key objective of CERAS is to conduct research on making application services more autonomic by providing automatic capabilities for their configuration, management, tuning, and repair.

CERAS’s Research Infrastructure

A research infrastructure for CERAS will be developed. This infrastructure is composed of computing resources located at the University of Waterloo (UW), the Ontario Cancer Institute (OCI) and North Carolina State University (NCSU). The primary uses of the computing resources at these locations are:
  • UW – investigation of technological issues related to virtualization of computing resources;
  • OCI – production environment for scientific and highly interactive applications in the life science area; and
  • NCSU – a virtual campus that matches students and instructors with computing resources, enables remote education, and lets researchers reserve computing resources.
A workload portal will be developed to enable several types of workloads to access the computing resources at the three locations. These resources are heterogeneous in nature and fall under different administrative domains. Usage policies may therefore be imposed at some locations, and an application may be restricted to a particular location if the computing platform it requires is only available there. The possibility of connecting the resources at the three locations by a high-speed network (e.g., CANARIE) will be explored. The overall infrastructure, seen as one large virtual data centre, will be shared by CERAS researchers and others. It also provides an environment for virtual research collaboration, where a distributed research team may run common experiments and share and exchange research settings.

Research Program

The proposed research aims at devising an integrated approach in which existing and new applications are deployed and evaluated on a virtual infrastructure. The overall research program can be organized into two layers:
  • Application services layer: This layer is concerned with application service discovery and composition; service provisioning negotiation and contracting that result in service-level agreements; and finally service invocation, monitoring, and profiling.
  • Computing resource provisioning layer: This layer realizes the virtualization of server, storage, and networking resources through a host of services such as authentication, authorization, resource discovery, resource reservation and scheduling, monitoring, analysis, and data replication.
In terms of autonomic computing, autonomic management at the resource provisioning layer is achieved through a collection of autonomic managers, each of which can monitor specific resources, analyze the results, plan any changes if necessary, and enact those changes in the operating environment. Workloads may also be reconfigured in case of failures. At the application services layer, autonomic managers could re-negotiate service-level agreements (SLAs). To perform their planning tasks, these managers rely on the availability of appropriate models of the computing infrastructure, the high-level application goals, user preferences, and various decision models.
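The monitor–analyze–plan–execute loop described above can be sketched as follows. This is a minimal illustration, not the CERAS design: the resource model (a load/capacity pair) and the utilization goal are assumptions made for the example.

```python
# Minimal sketch of one autonomic manager's monitor/analyze/plan/execute loop.
# The resource representation and the target utilization are illustrative.


class AutonomicManager:
    def __init__(self, target_util: float = 0.7) -> None:
        self.target_util = target_util

    def monitor(self, resource: dict) -> float:
        # Monitor: observe current utilization of the managed resource.
        return resource["load"] / resource["capacity"]

    def analyze(self, utilization: float) -> bool:
        # Analyze: decide whether the utilization goal is violated.
        return utilization > self.target_util

    def plan(self, resource: dict) -> dict:
        # Plan: add just enough capacity to bring utilization to the goal.
        needed = resource["load"] / self.target_util
        return {"add_capacity": needed - resource["capacity"]}

    def execute(self, resource: dict, change: dict) -> None:
        # Execute: enact the planned change in the operating environment.
        resource["capacity"] += change["add_capacity"]

    def step(self, resource: dict) -> dict:
        util = self.monitor(resource)
        if self.analyze(util):
            self.execute(resource, self.plan(resource))
        return resource


mgr = AutonomicManager()
node = {"load": 90.0, "capacity": 100.0}
mgr.step(node)  # utilization 0.9 > 0.7, so capacity grows to 90 / 0.7 ≈ 128.6
```

A real manager would draw the analyze and plan steps from the performance and decision models discussed above rather than from a fixed threshold.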
Model-Driven Engineering (MDE) techniques will be used in application design and development. MDE refers to the systematic use of models as primary engineering artifacts throughout the engineering lifecycle. Models are abstractions of a system and its environment, and they play an important role in the proposed research program. Autonomic resource provisioning relies on the availability of various models that are utilized by the autonomic managers and the system users, including specifications of high-level application goals, specifications of application services, workflow models of service composition and orchestration, specifications of SLAs, decision models and goal models for achieving SLAs, policy models (e.g., business rules), user models, models of computing infrastructure (computing nodes, storage nodes, network links, etc.), performance models, and service deployment models. These models will be utilized throughout the development cycle, including at design time and at runtime.

Two or more applications in the life science and automotive areas will be selected for our investigation. Possible work could include techniques for dynamically discovering and composing application services and tools for application development. These applications will be used as examples in our work on application design and on self-configuration and optimization of IT infrastructures. CERAS's research program consists of a number of projects. An overview of these projects follows.

Modeling, Evolution, and Automated Configuration of Software Services
Web service (WS) technologies provide means to publish, discover, and invoke services over the World Wide Web. Model-driven engineering (MDE) refers to the systematic use of models as primary engineering artifacts throughout the engineering lifecycle. There are significant overlaps between the MDE technology space and the WS technology space. For example, UML class models are the MDE counterpart of WSDL/XSD for modeling syntactic aspects of interfaces in WS. At the same time, the two spaces also differ in significant ways. They are driven by related but different sets of requirements, and each space is rooted in a different paradigm: the object-oriented paradigm for MDE and the XML paradigm for WS. Consequently, each technology space has its strengths and weaknesses, and bridging between the spaces can offer the best of both worlds. Our aim is to advance MDE technologies to support web services in the context of adaptable systems.
Model Management for Continuously Evolving Systems
Software development today takes place in the context of a complex system-of-systems that includes a broad technological infrastructure along with a wide set of human activities. The system context evolves continually, and can only ever be partially understood. Existing approaches to software development assume that we can write complete and consistent specifications, based on well-defined sets of features and interfaces. While this allows us to build components that conform (in a narrow sense) to their specifications, it does not help with the analysis of whether such components will be any use in any of the many different systems-of-systems in which they may be deployed. To address this challenge, we propose to develop a model management framework that supports the development and evolution of collections of partial models of the system and its environment, from different perspectives.
Elaborating and Evaluating UML’s 3-Layer Semantics Architecture
UML is the de facto standard for software modeling. To be able to maximize the utility of UML models, a generally agreed-upon, formal semantics of at least certain core parts of UML is necessary. The definition of UML takes a first step towards providing such a semantics by providing a 3-layer architecture which identifies key semantic areas and how they relate to each other. However, much more work is necessary to elaborate the architecture into a formal semantics and to evaluate its utility as an implementation architecture.
Semantically Configurable Modeling Notations and Tools
We are interested in supporting the rapid creation of new modeling notations (e.g., domain-specific languages, problem-specific semantics, UML variants) for exploring and analyzing software specifications and designs. The goals are to ease the definition of notations, such that the semantics are coherent and precise, and to generate supporting tools (e.g., editors, analyzers, verifiers, simulators).
Intelligent Autonomic Computing for Computational Biology
To significantly impact cancer research, we must discover novel therapeutic approaches for targeting metastatic disease, as well as diagnostic markers that reflect changes associated with disease onset and can detect early-stage disease. Better drugs must be rationally designed, and current drugs must be made more efficacious, either by re-engineering or by information-based combination therapy.
Automated Management of Virtual Database Appliances
Virtualization is a powerful emerging tool for deploying and managing applications, services and computing resources. A virtual database appliance is a database management system running in a virtual machine. The goal of our work is to automatically configure and tune database appliances for particular workloads and underlying physical computing environments. By doing so, we aim to simplify the deployment and management of appliance-based database services.
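As a concrete illustration of automatic configuration, the sketch below tunes a single database-appliance knob (buffer pool size) for a workload. The closed-form cost model is a stand-in assumption; a real tuner would measure the running appliance rather than evaluate a formula.

```python
# Hedged sketch: auto-tuning one database-appliance knob for a workload.
# The miss-rate cost model is an illustrative assumption, not a real DBMS model.


def miss_rate(buffer_mb: float, working_set_mb: float) -> float:
    """Toy cost model: misses fall linearly as the buffer covers the working set."""
    return max(0.0, 1.0 - buffer_mb / working_set_mb)


def tune_buffer(working_set_mb: int, vm_memory_mb: int, step_mb: int = 64) -> int:
    """Grow the buffer pool while misses keep improving, within the VM's memory."""
    best = step_mb
    for size in range(step_mb, vm_memory_mb + 1, step_mb):
        if miss_rate(size, working_set_mb) < miss_rate(best, working_set_mb):
            best = size
    return best


# A 512 MB working set inside a 1 GB virtual machine:
print(tune_buffer(512, 1024))  # → 512 (no benefit beyond the working set)
```

The design point this illustrates is that the tuner stops spending memory once additional buffer no longer improves the workload, leaving the remainder of the virtual machine's memory for other uses.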
Fine-grained Resource Management and Problem Detection in Dynamic Content Servers
We target fine-grained problem diagnosis, visualization and adaptive resource management in complex multi-tier Internet cluster servers. Autonomic management of large-scale Internet servers through self-optimization and self-healing techniques has recently received growing attention, due to the excessive personnel costs involved in managing these complex systems. Current approaches to automatic management of Internet servers usually monitor a few performance metrics, such as response time and throughput at the application level, and report alarms or react only when these metrics exceed specific safe thresholds. These approaches are too coarse-grained to precisely locate the cause of the perceived problems. Since the problem is not properly diagnosed, the reaction to it may be inaccurate. The lack of precise problem diagnosis, visualization and targeted reaction undermines user trust, which is at the core of wide acceptance of any current or future self-optimization and self-healing technique.
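The difference between coarse threshold alarms and fine-grained diagnosis can be sketched as follows: instead of alarming on end-to-end response time alone, the diagnosis attributes latency to individual tiers and names the likely culprit. The tier names, baselines and SLO value are illustrative assumptions.

```python
# Sketch of fine-grained problem diagnosis in a multi-tier server.
# Tier names, baseline latencies, and the SLO are hypothetical.


def diagnose(per_tier_latency_ms: dict, baseline_ms: dict, slo_ms: float = 200.0):
    """Return the tier most likely responsible for an SLO violation, or None."""
    total = sum(per_tier_latency_ms.values())
    if total <= slo_ms:
        return None  # end-to-end response time is within the SLO
    # Blame the tier with the largest deviation from its normal behaviour.
    return max(per_tier_latency_ms,
               key=lambda t: per_tier_latency_ms[t] - baseline_ms[t])


baseline = {"web": 20.0, "app": 50.0, "db": 40.0}
sample = {"web": 25.0, "app": 60.0, "db": 180.0}
print(diagnose(sample, baseline))  # → 'db'
```

A coarse-grained monitor would only report that the 265 ms total exceeds the 200 ms threshold; the per-tier attribution additionally points the reaction at the database tier.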
Performance-Model-Assisted Creation and Management of Service Systems
This project aims to manage and improve the performance of advanced service systems that include multiple layers and internal concurrency, linking the autonomic control of performance with the assembly of systems from components and platforms. Performance here signifies capacity and responsiveness.
Performance Management of IT Infrastructure
We consider an IT infrastructure that consists of a variety of computing resources, e.g., clusters, individual servers, and grid, which are accessed via a virtualization layer. An important issue is the development of autonomic capabilities for managing these resources. We investigate such capabilities in the context of resource allocation to a diverse set of applications such that their performance requirements are met while minimizing the cost of resource usage.
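A minimal sketch of the allocation problem just described: assign each application the cheapest resource class that still meets its performance requirement. The resource classes, costs and capacity demands are assumptions made for the example; real allocation would draw on the performance models discussed earlier.

```python
# Hedged sketch of cost-minimizing resource allocation.
# Resource classes, costs, and capacity demands are illustrative.
from typing import Dict, List, Tuple


def allocate(apps: Dict[str, int],
             resources: List[Tuple[str, int, float]]) -> Dict[str, str]:
    """apps: {name: required capacity}; resources: [(name, capacity, hourly cost)].

    Greedily pick, for each application, the cheapest resource class whose
    capacity meets the application's requirement.
    """
    by_cost = sorted(resources, key=lambda r: r[2])
    plan: Dict[str, str] = {}
    for app, need in apps.items():
        for name, capacity, _cost in by_cost:
            if capacity >= need:
                plan[app] = name
                break
    return plan


apps = {"batch": 4, "interactive": 16}
resources = [("small-vm", 4, 1.0), ("large-vm", 16, 3.0), ("cluster", 64, 10.0)]
print(allocate(apps, resources))
# → {'batch': 'small-vm', 'interactive': 'large-vm'}
```

The greedy rule captures the stated goal, meeting each application's requirement at minimal resource cost, though a production allocator would also handle contention between applications and changing demand over time.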

Participants:

CERAS includes researchers from the following institutions: University of Waterloo, University of Toronto, Carleton University, Queen's University, University of Western Ontario, North Carolina State University (USA), Ontario Cancer Institute, IBM Toronto Lab, IBM Ottawa Lab, and IBM Raleigh (USA). Research funds come from the above institutions as well as from the governments of the U.S. and Canada.

Management:

CERAS will be managed by a Research Steering Committee made up of researchers from the above institutions.

Topic revision: r11 - 2007-05-15 - CherylMorris