A typical example of such a measure is the number of publications. Anybody who works in research knows that the number of papers produced is a bad predictor of the quality of a scientist's research. Not only does the frequency of publication depend strongly on the research area, it also depends on personal style, as well as on the number of collaborators and students. Some people write a long paper every couple of years because they want to have everything in place. Others write a paper about every small step. Yet others are authors of papers they haven't even read.
You'd think this is so obvious that nobody could plausibly try to measure scientific success by counting publications. Then read the following, via Nature News Blog:
One hundred academics at the University of Sydney, Australia, have this week been told they will lose their jobs for not publishing frequently enough. The move is part of wider cost-cutting plans designed to pay for new buildings and refurbishment to the university.
Letters were posted to researchers on Monday, 20 February, informing them their positions were being terminated because they hadn’t published at least four “research outputs” over the past three years, Michael Thomson, branch president of the Australian National Tertiary Education Union, told Nature.
Read the full article here.