On Developing New Models, with Paging as a Case Study
Abstract
As computer science has progressed, numerous models and measures have
been developed over the years. Among the most commonly used in theoretical computer science are
the RAM model, the I/O model, worst-case analysis,
space (memory) usage, average-case analysis, amortized analysis, adaptive analysis, and
the competitive ratio. New models are added to this list every few years to reflect
varying constraints imposed by novel applications or by advances in computer architectures.
Examples of alternative models are the transdichotomous RAM or word-RAM,
the data stream model, the MapReduce model, the cache-oblivious model and the smoothed
analysis model. New models and measures, when successful, expand our
understanding of computation and open new avenues
of inquiry. As is to be expected, relatively few models and paradigms
are introduced each year, and even fewer eventually prove successful.
In this paper we first discuss certain shortcomings of online competitive analysis,
particularly as it concerns paging, survey existing solutions in the literature,
and present recent progress in developing models and measures that better
reflect actual practice in the case of paging. From there we proceed to a more general
discussion of how to measure and evaluate new models within theoretical computer
science and how to contrast them, when appropriate, with existing models. Lastly,
we highlight certain ``natural''
choices and assumptions of the standard worst-case model that are often unstated and
rarely explicitly justified. We contrast these choices with those made in the
formalization of probability theory.