Journal editors and publishers, authors of scientific papers, research directors, university and research council administrators, and even government officials increasingly make use of so-called ‘Impact Factors’ to evaluate the quality of journals, authors and research groups. These figures are used in decision-making processes about the (dis)continuation of journal subscriptions, the selection of journals for submission of papers, the ranking of authors and groups of authors, and even the increase or decrease of funding to research groups. All data are based on counting citations of the scientific papers of authors. Very few users appear to realize that these figures can be seriously wrong, biased and even manipulated, as a result of: (i) differing citation habits of authors in different fields, (ii) selectivity in (non)citation by authors, (iii) errors made by authors in the citation lists at the end of papers, (iv) errors made by ISI in entering publications and citations into databases, and in classifying citations and accrediting them to journals and authors, and (v) incomplete and misleading impact figures published by ISI. Although quite a few bona fide and competent analysts and organisations specialized in citation analysis exist, the incompetence of many analysts who use crude ISI data to discuss rankings of journals and/or authors is an additional factor that often makes such analyses unreliable.
This paper reviews some current practices in publication and citation for (bio)chemists and (bio)chemistry journals; critical comments are made with regard to the use and consequences of erroneous, incomplete or overly detailed data. A few recent examples of the use and misuse of such data are given to illustrate and evaluate the (non)sense of current practice.