Abstract: Many European countries are devising systems for measuring academic research output and allocating university funding according to the results. The Danish Ministry of Science, Technology and Innovation has been working on a model for allocating 5 per cent of the budget for universities competitively on the basis of a ‘quality barometer', but has repeatedly delayed its implementation. Without waiting, some universities already have their own schemes, and debates have explored different models. According to a report for the OECD, there is no shortage of such models for ‘steering by numbers'. Among them, the British RAE continues to rouse interest, even though it is itself undergoing change. The paper reviews the many official reports and academic studies of the RAE to draw out the strengths and problems of this peer review system and of its bibliometric and other alternatives. It examines the ways that, according to the Funding Council for England, ‘Any assessment process, particularly one as important to its subjects as the RAE, will distort the very thing it intends to measure'. The paper identifies three ways that such assessment systems skew university activities far more fundamentally than has so far been recorded in the literature: distorting the conditions for critical dialogue among academics; creating ‘regional deserts' where certain core disciplines are unavailable, with consequences for social inequalities; and promoting the inefficient use of public funds, when competition to improve rankings results in universities' subsidising the profits of the four big commercial journal publishers.