Movies for marketing the idea of Open Access

The Faculty of Health at UiT The Arctic University of Norway has just launched four short movies to market Open Access to its scientific staff. But the movies may – and should – be used by anyone needing to market OA to staff or students. (They carry a CC BY-NC license, by the way.)

See the vice-dean of research speak about why the Tromsø Study wants to become open

Go open – serve society! from UiT Helsefak on Vimeo.

And see one of the few researchers getting a second grant from the ERC tell you why he wants to be open

Go open – get funding! from UiT Helsefak on Vimeo.

Or see one of our leading pharmacy professors tell you how you can increase your visibility – and how one of her students got a job offer – by being open

Go open – get a job! from UiT Helsefak on Vimeo.

And, finally, see one of our top linguists tell you why going open is the right thing to do

Go open – it’s the right thing to do! from UiT Helsefak on Vimeo.

After seeing these movies: How could you possibly consider not going for open?


The Norwegian accreditation system for scientific/scholarly journals

The Norwegian financing system for Higher Education institutions includes a component consisting of a fixed sum that is allocated to institutions according to their publications in journals (and book series) that have been found to be of the necessary quality – I call this an accreditation system. (The sums allocated aren't that big, but no matter how big your budget is, most of it is tied down to operating infrastructure and paying salaries. So the minor amounts you actually can decide over are important. The system works so that the only way to get more money is to increase your publication volume more than the average of the other institutions – a zero-sum game.)

An article in an accredited journal on level 1 (more on levels further down) earns the authors a total of 1 publication point, which is divided amongst the authors. One point earns the author's institution about NOK 33,000 today, i.e. a bit more than USD 4,000 – some of which will end up in a budget near the author.

To be accredited, a journal has to have an ISSN and a system of quality assurance, generally double peer review, that is described on the journal website. It also has to have non-local authorship, i.e. no more than 70 % of authors may come from a single institution. An editorial board should also be visible. At the outset, data were imported from databases like WoS, Scopus and Ulrichs; later additions have come through suggestions from various interested parties – anyone can register at the site and suggest new additions.

The database is publicly available at the site Scientific journals, series and publishers, administered by the Norwegian Social Science Data Services (NSD), which acts as a secretariat. They vet all journals, deciding the obvious cases. There is also a publication committee of the Norwegian Association of Higher Education Institutions (UHR) that makes all final and difficult decisions – and 53 subject-based advisory boards under that committee, curating about 100 field-specific journal lists.

A twist to the system is that journals of high standing, publishing no more than a total of 20 per cent of all publications within their fields, are elevated to level 2 (level 1 being the standard level), earning their authors triple publication points. The infighting over which journals to promote to this level occupies much of the subject-based advisory boards' time, I am told …
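To make the arithmetic above concrete, here is a minimal sketch in Python. It only restates the rules described in the text; the NOK rate is the approximate figure quoted above (and changes over time), and the four-author article is a made-up example.

```python
# Minimal sketch of the Norwegian publication-point arithmetic described
# above. The NOK rate is the approximate figure quoted in the text;
# the example article is hypothetical.
NOK_PER_POINT = 33_000           # approx. value of one publication point
LEVEL_POINTS = {1: 1.0, 2: 3.0}  # level 2 earns triple points

def points_per_author(level: int, n_authors: int) -> float:
    """An article's points are split equally among its authors."""
    return LEVEL_POINTS[level] / n_authors

# A level-1 article with four authors:
share = points_per_author(level=1, n_authors=4)
print(f"points per author: {share:.2f}")                              # 0.25
print(f"NOK per author's institution: {share * NOK_PER_POINT:,.0f}")  # 8,250
```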

Does it work? The level 1 and 2 system has its flaws and problems. For us OA patriots, the fact that only 1 per cent of accredited OA journals are on level 2, against 10 per cent of TA journals, makes the system a major obstacle to a speedy transition to OA.

The accreditation – i.e. accepting some journals and turning others down for insufficient quality (or on some technicality, soon to be remedied) – seems to work well. Some journals have been let in that shouldn't have been, but some of these have since been weeded out, inspired by Beall's list without following it slavishly. And no such list will be flawless. A listing on level 1 or 2 in the Norwegian system is a clear indication that a journal has been vetted for a minimum level of quality. Some 22,000 journals have been accredited so far, and an unknown but large number have been turned down for various reasons, not always to do with quality.

We know that the register is used in other countries, and last September I heard a South African speaker say that they were considering submitting suggestions to the register in order to have their journals accredited, with a view to using this accreditation in the marketing of their journals towards authors.


The funding committee's ideas on rewarding publication – for some …

Curt Rice has written insightfully about this on his blog, where you will find, among other things, an amusing illustration of how the number of authors on an article can have odd consequences. Today an article on level 1 earns 1 publication point, which is divided among the authors. In the new model the number of points depends on the number of authors, somewhere between 1 and 0.05 times an arbitrarily large number of authors – but then only Norwegian authors are credited for it. The system is designed so that the total payoff should, in theory, increase with the number of authors, yet the committee proposes a "simplified" model under which the total payoff sometimes falls as the number of authors grows. You must, for instance, avoid at all costs an author count in the 21–30 range, and beyond 30 authors the payoff gets even worse. Only at 81 authors can you breathe a sigh of relief and look forward to an increasing payoff again.

The mechanism of increasing payoff with increasing author count is reportedly meant to compensate for today's problem with foreign authors, which means that an article can yield far less than a shared 1 point in total to its Norwegian authors – the foreign authors' shares are simply struck out. Curt Rice has argued that the points could instead be divided among the Norwegian authors only; the committee does not mention this proposal. And both the institute sector and the health sector already have an arrangement with extra payoff for articles with authors from several institutions, which encourages exactly this kind of collaboration and compensates for the reward to foreign authors "disappearing" – see CRIStin's guidelines.

The committee's proposal rewards more authors, regardless of where they come from. The temptation to "tack" peripheral contributors onto the author list will therefore be strong. Do we really need more incentives to move into ethical grey zones? Guest and gift authorship and salami slicing are already well-known problems. Should we reinforce them through our incentive system?

The report states explicitly that fields with a tradition of multiple authorship come out too poorly in the existing system, and that their position is to be strengthened by this. It is the hard sciences, biology and the medical fields that primarily operate with multiple authorship, while law, the humanities and the social sciences largely stick to single authors. The system is thus designed to move money from law, the humanities and the social sciences to the other fields. But the report says nothing about how large this effect might be.

I have tried to run the numbers for UiT The Arctic University of Norway for 2013, with every reservation for possible errors – this was done after hours. I have calculated the distribution of the funds received for 2015 (based on the 2013 publications) under the old model, and then under the funding committee's proposal with division by the square root of the number of authors (the "simplified" variant) and 4 points for level 2. The new, hidden level 3 is not taken into account, nor is the fact that the system will also change the distribution between institutions – those changes could well turn out to be large too, in favour of the "old" universities with their technology and health programmes and their traditions of co-authorship.
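A minimal sketch of the two models as I have applied them, assuming the old model splits an article's points equally among all authors and credits only the Norwegian shares, while the committee's proposal divides by the square root of the author count instead. The "simplified" bracket variant that produces the 21–30-author anomaly described above is not reproduced here.

```python
# Sketch of the old and proposed point models, under the assumptions
# stated above. The example article at the bottom is hypothetical.
import math

OLD_LEVEL_POINTS = {1: 1.0, 2: 3.0}
NEW_LEVEL_POINTS = {1: 1.0, 2: 4.0}  # the proposal raises level 2 to 4 points

def old_model(level: int, n_authors: int, n_norwegian: int) -> float:
    """Equal split among all authors; foreign shares are struck out."""
    return OLD_LEVEL_POINTS[level] * n_norwegian / n_authors

def new_model(level: int, n_authors: int, n_norwegian: int) -> float:
    """Each Norwegian author is credited level points / sqrt(author count)."""
    return NEW_LEVEL_POINTS[level] * n_norwegian / math.sqrt(n_authors)

# A level-1 article with 9 authors, 3 of them Norwegian:
print(f"{old_model(1, 9, 3):.2f}")  # 0.33 points end up in Norway
print(f"{new_model(1, 9, 3):.2f}")  # 1.00 points end up in Norway
```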

The total amount distributed internally at UiT was NOK 27.75 million. (This is about 75 % of what the institution received; part of it is not passed on to the units.) Under the old scheme, the Faculty of Health Sciences and the Faculty of Science and Technology received about 47 % of the total, roughly NOK 13.1 million. Under the new model this rises to about 60 % (an increase of 27 %), or NOK 16.6 million. And how is this financed? By the Faculty of Humanities, Social Sciences and Education (HSL) and the Faculty of Law having their share cut from 33 % to 20 % – a reduction of 38 %. In kroner they drop by almost NOK 3.5 million, from just over NOK 9.1 million to almost NOK 5.7 million. Of the two faculties, the Faculty of Law takes the largest relative cut, about 43 %, amounting to almost NOK 0.5 million.
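A quick sanity check of the shares reported above, using only the figures from this post (small differences against the text come from rounding):

```python
# Recomputing the reported shares from the post's own figures (MNOK).
total = 27.75                    # distributed internally at UiT
old_hard, new_hard = 13.1, 16.6  # Health + Science and Technology
old_soft, new_soft = 9.1, 5.7    # HSL + Law

print(f"hard sciences: {old_hard/total:.0%} -> {new_hard/total:.0%}, "
      f"change {new_hard/old_hard - 1:+.0%}")  # 47% -> 60%, +27%
print(f"HSS and law:   {old_soft/total:.0%} -> {new_soft/total:.0%}, "
      f"change {new_soft/old_soft - 1:+.0%}")  # 33% -> 21%, -37%
```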

Wanting to starve the humanities, the social sciences and law in order to fatten up health, science and technology is of course a legitimate position – even though I cannot see any sensible justification for it.

But hiding this inside a calculation model that is technically opaque to most people is unethical. A redistribution of this kind is a debate that must be had openly, not hidden from the eyes of the world. In my opinion.

Some links:


Why we should avoid using the Impact Factor to assess research and researchers

After hearing Björn Brembs' speech (see http://dx.doi.org/10.7557/5.3226) at the 9th Munin Conference, and re-reading some of P. O. Seglen's articles on the Impact Factor (IF), I decided to write out some of my reflections on the use, or abuse, of the IF on my blog. My interest in the IF lies primarily in the fact that it is a major obstacle to a stronger and quicker transition from toll access (TA) to open access (OA) publishing. This point was also made forcefully by Claudio Aspesi at the 6th Conference on Open Access Scholarly Publishing (COASP) in Paris in 2014. It is not in his PowerPoint, but an important point he made was that as long as the IF matters to researchers and research evaluators, OA will have problems taking a major market share.

What are the problems with the Impact Factor?

There are numerous problems with the IF; they can be grouped thematically. A major one is that the IF is owned by a private company with strong financial interests in keeping it working as it is: Thomson Reuters owns it and publishes it in the ISI Web of Science Journal Citation Reports.

The data the IF is based on

  • The IF is based on counting citations, and takes for granted that a citation is a sign of scientific quality. This is a highly debatable position, and it is untenable at the micro level (i.e. article or author level). There are various reasons for citing, some of them negative. And Seglen (1991) notes that "a citation is primarily a measure of utility rather than of quality".
  • Using citations as the only basis means you only look at an article's importance for research itself (only research cites). Other impacts, e.g. on society, are completely ignored. And citations only give credit to authors, not to other participants in the process of creating an article, like peer reviewers (who can greatly influence the final article) or editors.
  • The data the IF is based on are only a small portion of the data available. The IF only counts citations from journals (leaving out e.g. monographs), and only a fraction of all journals is taken into account. The sample of journals is heavily skewed towards STM (science, technology and medicine), while HSS (humanities and social sciences) is only superficially covered. There is also a strong bias towards English-language journals, leaving other major world languages out of it.
  • It is at best unclear what may be cited and what may cite. There are also strong signs that what counts and what is counted is negotiable; Brembs shows an example of this in his presentation (see above).
  • Different kinds of content invariably receive different rates of citation; it is well known that method articles and review articles are cited much more than other content, and the IF is heavily influenced by this. Articles describing negative results are generally rarely cited.
  • Citation patterns differ greatly between fields, as do co-authorship patterns. Both influence the IF, the former more than the latter. There is, for example, a marked difference in the number of references per article between fields; some of this may be inherent in how science and writing are performed, but it may also be due to different citation norms – what counts as common knowledge, and what needs a citation, may differ. And research shows that citing is a very imprecise activity: most articles contain citations that should not be there and lack others. The IF has to be "field normalized" to take this into account; the published IF is not. And field normalization is not without problems – how do you define a field, and to which field does a journal belong?

The way the IF is calculated

  • A general problem with the formula for calculating the IF is that few researchers – even the believers – know how it is done. Björn Brembs' presentation contains a very nice illustration of how it is calculated; a compact version of the formula is given just after this list. The math is simple, but it can still confuse many researchers.
  • The IF only counts citations in one year to articles from the two preceding years. The time span over which an article keeps being cited varies widely between fields: in some fast-moving fields a two-year window may be appropriate, in others it is wholly inappropriate. In scholarly fields a citing article may take two years from being written to being published, and hence cites too late to count even if the cited article was fresh off the press when the citing article was written. The IF thus measures the velocity of scientific advance rather than the quality of the research published.
  • The IF is an average (the number of citations divided by the number of items that could be cited). Averages are a wonderful instrument, but only under certain circumstances – one being that the numbers are centred around the average, in a distribution approaching the normal distribution.

    Figure 4 from Lundberg (2006:16). © Jonas Lundberg; reprinted with permission.

    Lundberg (2006:16) shows in Figure 4 a typical distribution of citations. His numbers are for 424,480 life science articles published in 2000, with citations counted 2000–2006. You need little statistical knowledge to see that this distribution bears no resemblance to a normal distribution – it is extremely skewed. The average or mean (corresponding to the IF) is 16, the median (the middle value) is 8, and the mode (the most typical value) is 0. The mean is thus not even remotely representative of the underlying data. Even my high school textbook on descriptive statistics (Hamre 1973:28) clearly advises against using the mean when the data are skewed. We see from Lundberg's figure that the vast majority of articles receive significantly fewer citations than the average indicates, while a minority receive more – some extremely many more.
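To make the calculation explicit, here is the standard two-year formula the list above describes, in my own notation (not Thomson Reuters'):

```latex
\mathrm{IF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
```

where C_Y(X) is the number of citations received in year Y by items the journal published in year X, and N_X is the number of citable items published in year X. And a minimal Python sketch of the skewness point – the data are synthetic and hypothetical, a lognormal sample that merely mimics the shape of Lundberg's figure, not his actual counts:

```python
# Why the mean (the role played by the IF) misleads for skewed citation
# counts. Synthetic, hypothetical data: a right-skewed lognormal sample.
import random
import statistics

random.seed(42)
citations = [int(random.lognormvariate(mu=1.5, sigma=1.3))
             for _ in range(100_000)]

print("mean:  ", round(statistics.mean(citations), 1))  # the "IF"
print("median:", statistics.median(citations))          # the typical article
print("mode:  ", statistics.mode(citations))            # the most common count
# Expected pattern: mean >> median >= mode, i.e. the journal-level
# average overstates the citation count of most individual articles.
```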

The way the IF is manipulated

We know that the IF depends on many decisions made along the way, both by authors and by editors. And we know that editors need citations in the branding and marketing of their journal to prospective authors. Practices – some quite respectable, some not so – that influence the IF include:

  • Editors using non-scientific citing items – e.g. editorial content – to promote articles in their own journals, thus earning a citation and increasing their own IF.
  • Citations on demand: editors or peer reviewers asking, or demanding, that the author put in one or more citations to their articles or journals.
  • Editors weeding out articles, irrespective of scientific quality, that seem to have little potential for being cited. This makes it nearly impossible to get negative results published in a high-ranking journal. Paradoxically, since little-cited articles also cite, publishing them is important to high-impact journals' IF: high-IF journals need low-impact journals.
  • Editors soliciting highly cited article types, most commonly review articles and method articles. These are useful to authors: review articles sum up recent research in a field and are often cited instead of the original papers reviewed there; method articles are cited by authors using the specific method described and validated in them. One could debate whether review articles are themselves research articles, as their function is one of evaluation.
  • Editors sorting rarely cited content out of the scientific-article category into other categories, so that it is not counted among the cited items in the denominator of the IF formula. Such content will, however, often still be citing items.
  • As the time span for citations counting towards the IF is the two years following the year of publication, it pays to publish an article early in the year, so that as many readers as possible get a chance to cite it within the window. An article published in December will gather few countable citations, as it takes time for citing articles to be written, reviewed and published. Some authors have noticed a very skewed distribution of articles over the year. I suspect this is one reason for the increasingly popular phenomenon of "online before print", under various names, which makes an article available for reading – and citing – before it is formally published, even though paper is immaterial (sorry about the pun) in today's dissemination of scientific knowledge. Articles may now "risk" being cited before they are published …

The way the IF is (ab)used

The IF is to some extent used for what it was intended for: evaluating the usefulness of journals for library collections. But it is increasingly used to evaluate individual research and researchers when hiring, promoting or funding. It was not intended for this, and from the above we should agree that it is not suited for the purpose either – even if we were to accept the shaky premises it is based upon. When the IF of the journal in which an article is published is used as a measure of the quality of that research, instead of the research itself being evaluated, we err in one of two ways: since the IF is an average over an extremely skewed distribution, we assign too high a quality to most articles, and hence over-employ, over-promote, over-pay and over-finance the more mediocre and less interesting research. At the same time we overlook, under-employ, under-promote, under-pay and under-finance the few outstanding pieces of research hidden behind the IF.

Conclusion

The IF does not work as an instrument for evaluating individual research. To quote Seglen (1991) again: "Clearly, journal impact factors cannot be used even for an approximate estimate of real article impact, and would be grossly misleading as evaluation parameters." Fortunately, some institutions have realized this and vow not to use the IF for evaluation in this way. The San Francisco Declaration on Research Assessment (DORA) (http://am.ascb.org/dora/) is an initiative to reduce the (ab)use of the Impact Factor in research evaluation, and also in the marketing and branding of journals. And the RCUK Policy on Open Access and Supporting Guidance (RCUK = Research Councils UK) contains a promise to evaluate the research itself and not where it is published: "When assessing proposals for research funding RCUK considers that it is the quality of the research proposed, and not where an author has or is intending to publish, that is of paramount importance; […]" The Wellcome Trust has a similar promise.

There is hope. One hopes …

References

Brembs, B. (2014). When decade-old functionality would be progress – the desolate state of our scholarly infrastructure. Keynote speech at the 9th Munin Conference, Tromsø, November 26th, 2014. http://dx.doi.org/10.7557/5.3226

Hamre, A. (1973). Beskrivende statistikk. Del 3. Aschehoug, Oslo.

Lundberg, J. (2006). Bibliometrics as a research assessment tool: impact beyond the impact factor. Ph.D. dissertation, Karolinska Institutet. http://hdl.handle.net/10616/39489

Seglen, P. O. (1991). Citation frequency and journal impact: valid indicators of scientific quality? Journal of Internal Medicine 229, 109–111.


A note on The footnote

Anthony Grafton's The Footnote – A Curious History starts out very interestingly. The first chapter gives an entertaining introduction to the "life and times" of the footnote. In the following chapters, however, one gets bogged down in a swamp of German/Italian/European historiography, with no hope of getting back on track. Highly erudite, this book needs a tough reader!

A side note: the title is "The footnote*", and at the bottom of the front page comes the subtitle: "*A curious history". That was the first time I ever saw a front page with a footnote! The rest of the book is crammed with them.


New insight into the world of the PDF

In the article Refurbishing the Camelot of Scholarship: How to Improve the Digital Contribution of the PDF Research Article, Willinsky, Garnett and Wong point to two things about the current use of PDF (Portable Document Format).

One is that all-electronic publications, like OA journals, still use layout, design and typography from the days of the printed journal: margins are scarce, text is set in columns to avoid long lines, line spacing is tight, and so on – all adding up to a less than optimal reading experience, both on screen and on paper. That design was created to save space and paper, enabling journals to cram a maximum of content into a minimum of pages to keep distribution costs down. Today distribution is free, and only interested readers produce paper copies – copies that should be as readable as possible, not paper-saving. Typography and design should acknowledge the need for documents that are easy to read on a screen.

The other is to point out the possibilities of embedding structured metadata in the PDF, using existing PDF elements and additional software – and, of course, the need to open PDFs to annotation before publishing them. Embedded metadata will let harvesters and services extract the necessary information far more reliably than the current state of affairs, which is guesswork. And enabling annotation will make PDFs much more useful to end users.
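As a present-day illustration of the metadata point – not the specific tooling the article proposes – here is a minimal sketch using the pikepdf library to write Dublin Core fields into a PDF's XMP metadata stream; the file names and field values are hypothetical:

```python
# Sketch: embedding structured (XMP/Dublin Core) metadata in a PDF
# with pikepdf. File names and values below are examples only.
import pikepdf

with pikepdf.open("article.pdf") as pdf:
    with pdf.open_metadata() as meta:  # the PDF's XMP metadata stream
        meta["dc:title"] = "Refurbishing the Camelot of Scholarship"
        meta["dc:creator"] = ["Willinsky", "Garnett", "Wong"]
        meta["dc:identifier"] = "doi:10.xxxx/example"  # hypothetical DOI
    pdf.save("article-with-metadata.pdf")

# A harvester can now read these fields directly instead of guessing
# them from the rendered text.
```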


Many of us have been missing a short, introductory textbook on Open Access: something to read ourselves, and to hand to those in need of education on the subject.

Now the book is here! Peter Suber, one of the "grand old men" of Open Access, known for his non-partisan writings on most aspects of OA, has written a coherent overview of Open Access. The book is aptly titled Open Access.

The book takes the reader through the major questions of why and how, varieties of OA, copyright and possible casualties of OA. Suber writes well and it is easy to follow him in his arguments.

Included in the book are a large number of links and references to supplementary material that an interested reader can follow. And the whole book will be released as Open Access next summer.

In addition to the book itself, Suber has created a companion web page with additional information, including new notes and links, and updates to them.

Disclaimer: yes, I am quoted on Suber's companion web page, and yes, I work as an OA advocate, so I am not impartial.
