In fact, there is a "science" being developed, called "bibliometrics" (see here, here, here and here), aiming at producing such indices. The most (in)famous of these indices is the h factor (not the x factor--that's a TV programme--see below). The h factor is defined as follows: if a researcher has n papers each cited at least n times, then his or her h factor is at least n. The h factor is the largest such n.

For example, if researcher A has written one paper which is cited 1000 times and 9 other papers which are cited once each, then his h factor is 1. If researcher B has written 3 papers, each cited twice, then her h factor is 2. Hence B is better than A (an administrator would conclude).
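The definition above is easy to turn into a small computation. Here is a minimal sketch (the function name `h_factor` and the example citation lists are mine, chosen to match the A and B examples):

```python
def h_factor(citations):
    """Return the h factor: the largest n such that n papers
    each have at least n citations."""
    # Sort citation counts from highest to lowest.
    ranked = sorted(citations, reverse=True)
    h = 0
    # The i-th best paper (1-indexed) must have at least i citations.
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Researcher A: one paper cited 1000 times, nine cited once each.
print(h_factor([1000] + [1] * 9))  # prints 1

# Researcher B: three papers, each cited twice.
print(h_factor([2, 2, 2]))  # prints 2
```

As the output shows, A's single highly cited paper contributes no more to the h factor than a paper cited exactly once would.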

A recent report by Robert Adler (Probabilist, Technion–Israel Institute of Technology), John Ewing (Executive Director, American Mathematical Society), and Peter Taylor (Probabilist, University of Melbourne) shows that

*"citation data provide only a limited and incomplete view of research quality, and the statistics derived from citation data are sometimes poorly understood and misused; [r]esearch is too important to measure its value with only a single coarse tool".*

It's an interesting read.

However, I admit that the report may not be as much fun as an x factor episode:

Just to make things clear, the guy in this video is not looking for an h factor, but an x factor. (Some colleagues tell me it's more or less the same thing though.)
