I got a top cited article! What does that mean?!?

Yesterday the Research Excellence Framework results were published, so it was a nice coincidence to be notified by Springer on the same day that my paper is one of the top cited papers of 2013/2014 in the Journal of Archaeological Method and Theory. You can see it in this picture:

[image: Springer notification listing the paper among the journal’s top cited articles of 2013/2014]

I am really happy and grateful about this. However, it did make me wonder what it means in numbers to have a top cited article. The answer is rather sobering: not much! In this blog post I will have a little look around citation land, and share some take-home messages about citation and impact in archaeology with you. Read on until the end, and you might find a call for revolution in the academic publishing world! 🙂

The source mentioned is the ISI/Thomson Reuters database, and luckily I can access their metrics through Web of Science. A quick search revealed that this paper has 8 citations on Web of Science (all databases), see the figure below:

[image: Web of Science citation report showing 8 citations]

That’s a sobering eye-opener! Especially considering that one of these 8 citations is from a paper I wrote myself. This tells me quite a lot about the impact of the Journal of Archaeological Method and Theory, about the corner of archaeology I am specialised in, and about the part of archaeologists’ citation behaviour that Web of Science captures.

Let’s start with that last one. Web of Science only indexes publications (mainly journals) with a long and consistent editorial board and publication history, focusing almost exclusively on English as the language of science. It defends this policy by pointing out that the majority of all citations (about 60% or so) go to papers in a minority of journals (I believe about 20%, but don’t cite me on this). So there is a clear tendency to include only high-impact publications. Archaeology does not have many high-impact journals with a long tradition and a stable editorial history, and English is definitely NOT the only language of academic archaeology, mainly because excavation reports need to be published in the local language. From my citation network analysis work I get the impression that less than half of all citations are included in Web of Science.

How do I know that? Well, let’s compare my 8 citations in Web of Science with the number this paper got according to Google Scholar:

[image: Google Scholar record showing 16 citations]

So according to Google Scholar this paper was cited 16 times. Google Scholar does not care so much about the language or format of publication, so it indexes a much larger number of publications. But these citations also include ones that are usually excluded from impact scores, such as citations on presentation slides or posters uploaded to the internet.

Take-home message number 1: check the citations to your paper on multiple citation databases before bragging about your impact (Web of Science, Google Scholar, Scopus).

What about the Journal of Archaeological Method and Theory? It is not the highest rated journal in archaeology, but I do think it’s up there in the top ten or so. But the top ten of what? Journals are usually ranked by their impact factor, a measure introduced by the Institute for Scientific Information (ISI) using the data you can access through Web of Science. It represents the average number of citations per paper in a journal over the last few years. Here are some Impact Factor results for the Journal of Archaeological Method and Theory:

[image: Journal Citation Reports Impact Factor results for the Journal of Archaeological Method and Theory]

In 2013 ISI gave it an impact factor of 1.389 which ranked it 18th in Anthropology, just below Antiquity and just above American Antiquity. These rankings are published yearly by ISI as the Journal Citation Reports. But there are more measures than just the Impact Factor. Google Scholar uses the h5 index to rank journals in disciplines: “the h5 index is the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2009-2013 have at least h citations each”. In the category of Archaeology the Journal of Archaeological Method and Theory has an h5 index of 13 and ranks 15th: lower than American Antiquity and dwarfed by the scores of Journal of Archaeological Science (38) and Antiquity (21).
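The two definitions above fit in a few lines of code. Here is a minimal sketch with hypothetical citation counts; the impact factor inputs are made-up numbers chosen to reproduce the 1.389 figure, not taken from the actual report:

```python
# A hedged sketch of the two journal metrics discussed above.

def h_index(citations):
    """Largest h such that h articles have at least h citations each.
    The h5 index is the same calculation restricted to articles
    published in the last 5 complete years."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this article still clears the threshold
        else:
            break
    return h

def impact_factor(cites_to_prev_two_years, papers_in_prev_two_years):
    """Citations received in year Y to items published in years Y-1
    and Y-2, divided by the number of citable items in those years."""
    return cites_to_prev_two_years / papers_in_prev_two_years

# Hypothetical per-article citation counts over five years:
counts = [38, 21, 15, 13, 13, 9, 7, 4, 2, 0]
print(h_index(counts))                    # → 7

# Hypothetical inputs that happen to yield the 1.389 quoted above:
print(round(impact_factor(50, 36), 3))    # → 1.389
```

The point of the sketch is that both measures are simple arithmetic on citation counts; everything interesting lies in which citations the database decides to count.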

These measures of impact give you an idea of the number of citations a paper in a journal receives on average. This is not solely a result of a paper’s own merit or infamy. It should at least in part be seen as an effect of the journal itself being widely read: papers published in well-known journals attract more citations because they inherit the visibility of the journal they appear in.

But citation practices differ greatly between disciplines. A quantitative measure of impact might therefore not be equally relevant for all of them. For the humanities a more qualitative interpretation of impact is available: the European Reference Index for the Humanities. The site was down when I wrote this blog post, but the idea is simple. It gives a journal one of three ratings: of importance to a subdiscipline, of national importance for a discipline, or of international importance for a discipline. But essentially this is just a coarse classification based on a quantification of who publishes in, cites, and reads each journal.

Take-home message number 2: impact is relative. Compare multiple measures as presented by multiple institutions. Visibility to your subdiscipline is more important than overall visibility/impact.

So my paper might not be cited by many, and it might not be published in the highest impact journal, but it is a piece of work I am pretty pleased with, and it seems to reach the few people around the world who share my niche interests. Having many citations according to ISI in my discipline really does not mean much. Far more impressive is the number of views and downloads this paper gets on sites like Academia.edu. We publish our work because we want to share it with those who are interested, and we want to provoke discussion with the final aim of advancing human knowledge. Who cares about high citation counts? Just make sure your paper is out there, freely available, actively promote it, and send it to those who might be interested in discussing it with you. That’s what you want, not a high impact factor. All these numbers, and especially the Research Excellence Framework, sometimes make us forget that it is science we are doing.

(PS: as a young academic I realise my own career will be enhanced by playing this numbers game. I am sure it will, for now. But I also think things are changing with resources like Academia.edu, which will hopefully push entities with empty prestige like Science and Nature off their pedestals. Scientific quality control is not guaranteed by prestigious publishers, and there are other models of publishing that allow us to debunk bullshit science and keep the good bits.)

Citation analysis: winner takes all

A small group of papers (1%) often gets a disproportionate amount of attention and citations (17%). This pattern was identified a long time ago (have a look at the Web of Science selection procedure as an example of this trend). A short correspondence by Barabási, Song and Wang, recently published in Nature, revealed that this pattern only emerges after some time, and that the top 1% of papers are not necessarily cited a lot immediately after they appear. The authors argue that this pattern might be a result of our changing reading habits now that academic publications are so abundant, easily searchable and, as a result, easily accessible: “Researchers increasingly rely on crowd sourcing to discover relevant work, a process that favours the leading papers at the expense of the remaining 99%”.
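To make the winner-takes-all idea concrete, here is a minimal sketch (with entirely made-up numbers, not data from the correspondence) of how one might measure citation concentration:

```python
# How concentrated are citations? Given per-paper citation counts,
# compute the share of all citations going to the top fraction of papers.

def top_share(citations, fraction=0.01):
    """Share of all citations received by the top `fraction` of papers."""
    ranked = sorted(citations, reverse=True)
    top_k = max(1, int(len(ranked) * fraction))  # at least one paper
    return sum(ranked[:top_k]) / sum(ranked)

# 100 hypothetical papers: one star paper and 99 barely-cited ones.
counts = [100] + [1] * 99
print(round(top_share(counts), 3))  # → 0.503: the top 1% takes half
```

With an even spread of citations the top 1% would take roughly 1% of them; any value well above that is the skew the correspondence describes.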

Read the full correspondence on the Nature website.

Maps of citations

These days it is easy to track down heaps of literature on a specific topic. But how can you manage those mountains of scholarly information? I just read a cool article on citation network analysis, a set of metrics and visualisation tools that helps you do just that.

Imagine a Google Maps of scholarship, a set of tools sophisticated enough to help researchers locate hot research, spot hidden connections to other fields, and even identify new disciplines as they emerge in the sprawling terrain of scholarly communication.

The article discusses Bergstrom and West’s Eigenfactor metric. Rather than just counting the citations an article receives, the Eigenfactor metric weights each citation by its source. So an article published in Nature that was cited 20 times will be more prominent than an article published in ‘The Hampshire Journal of Late Medieval Pottery’ cited an equal number of times. Citation networks are full of stories about how researchers think, build on ideas and elaborate on them.
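The recursive idea behind this kind of metric can be sketched in a few lines. This is not the actual Eigenfactor algorithm (which involves its own normalisations and a five-year citation window), just a PageRank-style power iteration on a tiny, entirely hypothetical journal citation graph:

```python
# Sketch of a source-weighted citation metric: a citation from a
# well-cited venue counts for more than one from an obscure venue.
# Computed by power iteration on a toy journal citation graph.

DAMPING = 0.85  # standard PageRank-style damping factor

def rank(graph, iterations=100):
    """graph maps each journal to the list of journals it cites."""
    nodes = list(graph)
    score = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - DAMPING) / len(nodes) for n in nodes}
        for source, targets in graph.items():
            for target in targets:
                # each journal passes its score to the journals it cites
                new[target] += DAMPING * score[source] / len(targets)
        score = new
    return score

# Hypothetical citation links between journals:
graph = {
    "Nature": ["Antiquity"],
    "Antiquity": ["JAMT", "JAS"],
    "JAS": ["JAMT", "Antiquity"],
    "JAMT": ["JAS"],
}
scores = rank(graph)
for journal, value in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(journal, round(value, 3))
```

In this toy graph nothing cites the "Nature" node, so despite its name it ends up with the lowest score: what matters to a metric like this is who cites you, not raw citation counts or reputation.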

And I think this is extremely cool! Some of my own work on citation networks of archaeological papers will follow soon.

For now, do have a look at the awesome motion graphs on the Eigenfactor website. You can explore the evolution of the number of articles and their influence through time. Check out Anthropology, for example, under which all the archaeology journals are grouped. You will see that journals like Antiquity, Journal of Archaeological Science and American Antiquity have a relatively lower impact (according to the Eigenfactor metric) compared to, for example, Current Anthropology and Journal of Human Evolution.
