Paper Title: Conflict-driven Load Control for the Avoidance of Data-Contention Thrashing
Author(s): Axel Moenkeberg and Gerhard Weikum

1) Is the paper technically correct?
[ ] Yes
[X] Mostly (minor flaws, but mostly solid)
[ ] No

2) Originality
[ ] Very good (very novel, trailblazing work)
[X] Good
[ ] Marginal (very incremental)
[ ] Poor (little or nothing that is new)

3) Technical Depth
[ ] Very good (comparable to best conference papers)
[X] Good (comparable to typical conference papers)
[ ] Marginal depth
[ ] Little or no depth

4) Impact/Significance
[ ] Very significant
[X] Significant
[ ] Marginal significance
[ ] Little or no significance

5) Presentation
[ ] Very well written
[X] Generally well written
[ ] Readable
[ ] Needs considerable work
[ ] Unacceptably bad

6) Overall Rating
[ ] Strong accept (very high quality)
[X] Accept (high quality - would argue for acceptance)
[ ] Weak Accept (marginal, willing to accept but wouldn't argue for it)
[ ] Weak Reject (marginal, probably reject)
[ ] Reject (would argue for rejection)

7) Summary of the paper's main contribution and rationale for your recommendation. (1-2 paragraphs)

This paper presents a load-control approach for DBMSs that uses data contention as the throttling measure for transaction admission. The goal is to avoid thrashing caused by lock conflicts by monitoring a metric that characterizes data contention. The paper argues that the metric of choice is the "conflict rate", which represents data contention independently of the number and types of transactions. The conflict rate is defined as the total number of locks held divided by the number of locks held by non-blocked transactions. The paper presents an algorithm that admits transactions (throttled by the conflict rate) and aborts running transactions in the case of an overload (which is assumed whenever data contention exceeds the critical conflict rate).
Victims for abort are chosen by considering the number of locks they hold and the number of restarts they have had. In addition, the paper argues for a theoretical constant, the "universal critical conflict rate"; the experiments suggest its value is around 1.3. However, the experiments all use a uniform distribution when transactions select which resources to lock. It is possible that under a skewed distribution (where some resources are more likely to be locked than others), both the total number of locks and the number held by non-blocked transactions would be smaller than in the uniform case. While the conflict rate would likely behave the same qualitatively, it is not clear that the critical value would still be 1.3. I rate this paper as "Accept" based on the practicality and value of the feedback control mechanism; however, the argument for the universal critical conflict rate is not very convincing when it rests solely on a limited number of synthetic experiments.

8) List 1-3 strengths of the paper. (1-2 sentences each, identified as S1, S2, S3.)

S1. The paper presents an interesting way to characterize data contention in a manner useful for predicting overloads.

9) List 1-3 weaknesses of the paper (1-2 sentences each, identified as W1, W2, W3.)

W1. The argument for a "universal critical conflict rate" is not very convincing. The uniform distribution used when transactions select which resources to lock will likely affect both the total number of locks held and the number held by non-blocked transactions. Even though the behaviour of the conflict rate may be the same under other distributions, it is not clear that the exact value of 1.3 will generalize.

W2. Multiplying the number of locks held by the number of restarts for deadlock victim selection gives every new transaction (with 0 restarts) the same score of zero, regardless of how many locks it holds. This works against the initial argument for using the number of locks held in victim selection.
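The W2 concern can be illustrated with a small sketch. The transaction names and numbers below are invented for the illustration, and the paper's exact victim-selection rule and tie-breaking may differ; the point is only that the multiplicative score collapses all never-restarted transactions to zero:

```python
# Illustration of the W2 concern: scoring victim candidates by
# (locks held) * (restart count). All candidate data here is made up.

def victim_score(locks_held: int, restarts: int) -> int:
    """Higher score = preferred victim under the multiplicative criterion."""
    return locks_held * restarts

# (locks_held, restarts) per candidate transaction
candidates = {
    "fresh_many_locks": (50, 0),
    "fresh_few_locks": (1, 0),
    "restarted_once": (5, 1),
}

scores = {name: victim_score(*stats) for name, stats in candidates.items()}
print(scores)
# Both never-restarted transactions score 0 no matter how many locks
# they hold, so the lock count no longer discriminates between them.
```

Under this scoring, the transaction holding 50 locks and the one holding a single lock are indistinguishable until each has been restarted at least once.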
10) Detailed comments for authors.

The approach used in the system is interesting, as is the idea of a metric for characterizing data contention. However, estimating a "natural constant" from synthetic data and synthetic workloads alone is not very convincing; experiments with skewed access distributions or real workloads would strengthen the claim.
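For concreteness, here is a minimal sketch of the conflict-rate metric and the admission test as I read them from the paper. The data structures, function names, and threshold handling are my own assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of the conflict-rate metric and admission test.
# Data structures and edge-case handling are assumptions for illustration.

CRITICAL_CONFLICT_RATE = 1.3  # critical value suggested by the paper's experiments

def conflict_rate(locks_held: dict, blocked: set) -> float:
    """Total locks held, divided by locks held by non-blocked transactions."""
    total = sum(locks_held.values())
    non_blocked = sum(n for txn, n in locks_held.items() if txn not in blocked)
    if non_blocked == 0:
        return float("inf")  # every lock holder is blocked: maximal contention
    return total / non_blocked

def admit(locks_held: dict, blocked: set) -> bool:
    """Admit a waiting transaction only while contention is below critical."""
    return conflict_rate(locks_held, blocked) < CRITICAL_CONFLICT_RATE

# Example: 10 locks in total, 7 of them held by non-blocked transactions.
locks = {"t1": 4, "t2": 3, "t3": 3}
print(conflict_rate(locks, blocked={"t3"}))  # 10/7, above the critical value
print(admit(locks, blocked={"t3"}))          # defer new admissions
```

With no blocked transactions the rate is exactly 1.0 (its minimum), and it grows as a larger share of the held locks belongs to blocked transactions, which matches the intuition that the metric tracks contention independently of transaction count.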