Tuesday, 17 January 2017

Impact of Social Sciences – Mendeley reader counts offer early evidence of the scholarly impact of academic articles

 Source: http://blogs.lse.ac.uk/impactofsocialsciences/2017/01/17/mendeley-reader-counts-offer-early-evidence-of-the-scholarly-impact-of-academic-articles/







Mendeley reader counts offer early evidence of the scholarly impact of academic articles

Although
the use of citation counts as indicators of scholarly impact has
well-documented limitations, it does offer insight into which articles
are read and valued. However, one major disadvantage of citation counts
is that they are slow to accumulate. Mike Thelwall has
examined reader counts from Mendeley, the academic reference manager,
and found them to be a useful source of early impact information.
Mendeley reader data can be recorded from the moment an article appears
online and so avoids the publication cycle delays that slow down the
visibility of citations.
Counts of citations to academic articles
are widely used as evidence to inform estimates of the impact of
academic publications. This is based on the belief that scientists often
cite works that have influenced their thinking and therefore that
citation counts are indicators of influence on future scholarship. In
the UK’s REF2014 research assessment exercise,
11 of the 36 subject panels drew upon citation counts to inform their
judgements of the quality of academic publications, for example by
arbitrating when two expert reviewers gave conflicting judgements.
Citation counts are also widely used internationally for hiring,
promotion, and grant applications and aggregated citation-based
statistics are used to assess the impact of the work of large groups of
scholars in departments, universities and even entire countries. On top
of this, there are many informal uses of citation counts by individual
scholars looking to assess whether their work is having an impact or to
decide which of their outputs is having the most impact.
Image credit: Mendeley Desktop and iOS by Team Mendeley. This work is licensed under a CC BY 2.0 license.
Despite their many limitations, such as
obvious cases where they are misleading and entire fields for which they
are almost meaningless, citation counts can support the onerous task of
peer review and even substitute for it in certain cases where the
volume of outputs is such that peer review judgements are impractical.
At the level of the individual scholar, citation counts can be useful to
indicate whether papers are read and valued. This gives outputs a
visible afterlife once they have been published and helps to identify
avenues of research that have been unexpectedly successful, motivating
future similar work. It also gives scholars a sometimes-needed incentive
to look outwards at the wider community when writing an article and
consider how it might attract an audience that might cite it. Of course,
uncited does not equate to irrelevant and James Hartley has recently listed his rarely cited articles that he values,
which is a useful reminder of this. Nevertheless, even though I have
little idea why my most cited article has attracted interest, the
knowledge that it has found an audience has motivated me to conduct
follow-up studies and to fund PhDs on the subject, whilst dropping lines
of research that have disappointingly flown under the radar and (so
far) avoided notice.
One major disadvantage of citation counts
is that they are slow to accumulate. Once an article has been published,
even if someone reads it on the first day that it appears and
immediately uses it to inform a new study, it is likely to be 18 months
(depending on the discipline) before that study is complete, written up,
submitted to a journal, peer reviewed, revised, accepted and published
so that its citations appear in Google Scholar, Web of Science or
Scopus. Uses of citation counts in formal or informal research
evaluations may therefore lag by several years. This delay is a major
disadvantage for most applications of citation counts. There is a simple
solution that is effective in some contexts: Mendeley reader counts
(Figure 1).
Figure 1: Mendeley readers typically appear at least a year before citations due to delays between other researchers reading a paper and their new study being published.
Mendeley
is a social reference sharing website that is free to join and acts as a
reference manager and sharer for academics and students. Those using it
can enter reference information for articles that they are reading or
intend to read (and this is what most users do, as shown by Ehsan Mohammadi,
whose PhD focused on Mendeley) and then Mendeley will help them to
build reference lists for their papers. As spotted by York University
(Toronto) librarian Xuemei Li, it is then possible to count the number of registered Mendeley readers for any given article and use it as impact evidence for that article. This reader count acts like a citation count in that it gives evidence of (primarily academic) interest in articles but readers accrue about a year in advance of citation counts, as shown by a recent article (Figure 2 – see also: Maflahi and Thelwall, 2016; Thelwall and Sud, 2016).
Mendeley data is available earlier as scholars can register details of
an article they are reading in Mendeley whilst they are reading it, and
so this information bypasses the publication cycle delays (Figure 1). An
article may even start to accumulate evidence of interest in Mendeley
in the week it is published if people recognise it as important and
immediately record it in Mendeley for current or future use.
Figure 2: A comparison between average Scopus citations and Mendeley readers for articles from journals in the Scopus Transportation category, as recorded in November/December 2014. Mendeley reader counts are much higher than Scopus citations for more recent articles, with Scopus citations lagging by at least 18 months. Citation counts are higher than reader counts for older articles, probably due to citations from older articles that were written before Mendeley was widely used. Geometric means are used because citation counts are highly skewed (data from Maflahi and Thelwall, 2016).
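As an aside on the "geometric means" mentioned in the caption, here is a minimal sketch of why they are preferred for skewed count data. The ln(1 + x) offset is a common convention for handling zero counts and is an assumption here, not necessarily the exact variant used in the cited study; the numbers are invented.

    import math

    # Toy citation counts with one heavily cited outlier
    citations = [0, 0, 1, 1, 2, 3, 5, 8, 120]

    arithmetic_mean = sum(citations) / len(citations)
    geometric_mean = math.exp(
        sum(math.log(1 + c) for c in citations) / len(citations)
    ) - 1

    print(round(arithmetic_mean, 1))  # 15.6, pulled up by the single outlier
    print(round(geometric_mean, 1))   # 3.1, closer to the typical article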
Mendeley is by far the best general source of early scholarly impact information. Download counts are not widely available, counts of tweets are very unreliable as an impact indicator, and other early impact indicators are much scarcer.
The main drawback is that, at present, anyone can set up multiple
accounts and register as a reader of selected articles, making it
possible to spam Mendeley. For this reason, Mendeley reader counts
cannot be used in the UK REF or any other research evaluation that
includes stakeholders with time to manipulate the outcomes. An
additional limitation is that Mendeley reader counts are biased towards
articles that attract the Mendeley user demographic, which has
international and seniority/age imbalances. It is therefore tricky to use Mendeley for international impact comparisons.
It is not hard to obtain evidence of
Mendeley readers for an article – just search for it by title in
Mendeley (e.g. try the query ‘Mendeley readership altmetrics for the
social sciences and humanities: Research evaluation and knowledge
flows’) or look for the Mendeley segment within the Altmetric.com donut
for the article (as in this example;
to find a page like this, Google the article and add
‘site:altmetric.com’ to the end of your query). For large groups of
articles, the free Mendeley API can also be used to automatically
download reader counts for large sets of articles via the (also free)
software Webometric Analyst.
If you already have a set of articles with citation counts, then it is
simple to add Mendeley reader count data to it using this software.
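For readers who prefer to script this themselves rather than use Webometric Analyst, the sketch below shows the general idea using the free Mendeley catalogue API. It assumes you have registered an application with Mendeley and obtained an OAuth access token; the endpoint, parameters, and field names follow Mendeley's public API documentation at the time of writing, so check the current documentation before relying on them.

    import requests

    ACCESS_TOKEN = "YOUR_MENDELEY_ACCESS_TOKEN"  # from Mendeley's OAuth flow

    def mendeley_reader_count(doi):
        """Look up a DOI in the Mendeley catalogue and return its reader count."""
        response = requests.get(
            "https://api.mendeley.com/catalog",
            params={"doi": doi, "view": "stats"},
            headers={
                "Authorization": "Bearer " + ACCESS_TOKEN,
                "Accept": "application/vnd.mendeley-document.1+json",
            },
            timeout=30,
        )
        response.raise_for_status()
        records = response.json()  # list of matching catalogue records
        if not records:
            return None            # article not (yet) in Mendeley's catalogue
        return records[0].get("reader_count")

    # Example: the article this blog post is based on
    print(mendeley_reader_count("10.1002/asi.23559"))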
This blog post is based on the author’s article, co-written with Pardeep Sud, ‘Mendeley readership counts: An investigation of temporal and disciplinary differences’, published in the Journal of the Association for Information Science and Technology (DOI: 10.1002/asi.23559).
Note: This article gives the views of
the author, and not the position of the LSE Impact Blog, nor of the
London School of Economics. Please review our 
comments policy if you have any concerns on posting a comment below.
About the author
Mike Thelwall
is Professor of Information Science at the School of Mathematics and
Computing, University of Wolverhampton. His research interests include
big data: webometrics, social media metrics, and sentiment analysis;
developing quantitative web methods for Twitter, social networks,
YouTube, and various types of link and impact metrics; conducting impact
assessments for organisations, such as the UNDP. His ORCID iD is:
0000-0001-6065-205X.



Assessing Scholarly Productivity: The Numbers Game

 Source: https://library.ithaca.edu/sp/subjects/scholprod


Assessing Scholarly Productivity

To evaluate the work of scholars
objectively, funding agencies and tenure committees may attempt to
quantify both its quality and impact. Quantifying scholarly work is
fraught with danger, but the current emphasis on assessment in academe
suggests that such measures can only become more important. There are a
number of descriptive statistics associated with scholarly productivity.
These fall broadly into two categories: those that describe individual
researchers and those that describe journals.



Rating Researchers

Raw Citation Counts

One way to measure the impact of a paper is to simply count how many
times it has been cited by others. This can be accomplished by finding
the paper in Google Scholar
and noting the "Cited by" value beneath the citation. Such numbers may
be added together, or perhaps averaged over a period of years, to
provide an informal assessment of scholarly productivity. Better yet,
use Google Scholar Citations
to keep a running list of your publications and their "cited by"
numbers. For more information on determining where, by whom, and how
often an article has been cited, see IC Library's guide on Cited Reference Searching.







Article comparing Google Scholar, Scopus and Web of Science and their pros and cons.



Google Scholar as a new data source for citation analysis.



H-index

The h-index, created by Jorge E. Hirsch of the University of California, San Diego, is described by its creator as follows:



A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np - h) papers have no more than h citations each.1
In other words, if I have an h-index of 5, that means that my five
most-cited papers have each been cited five or more times. This can be
visualized by a graph on which each point represents a paper. The
scholar's papers are ranked along the x-axis by decreasing
number of citing papers, while the actual number of citing papers is
shown by the point's position along the y-axis. The grey line
represents the equality of paper rank and number of citing articles.
The h-index is equal to the number of points on or above the grey line.
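A minimal sketch of that calculation in code: sort the citation counts in decreasing order and find the largest rank h for which the paper at rank h has at least h citations.

    def h_index(citation_counts):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4
    print(h_index([10, 8, 5, 4, 3]))  # 4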








The value of h will depend on the database used to calculate it.2
Thomson Reuters' Web of Science and Elsevier's Scopus (neither is
available at IC) offer automated tools for calculating this value. In
November 2011, Google Scholar Citations became generally available; it calculates h based on the Google Scholar database. An add-on for Firefox called the Scholar H-Index Calculator is also based on Google Scholar data.

Google Scholar Metrics includes lists of top-ranked journals by index in a variety of subject areas.



Comparisons of h are only valid within a discipline, since
standards of productivity vary widely between fields. Researchers in the
life sciences, for instance, will generally have higher h values than those in physics.1



A large number of modifications to the h-index have been proposed, many attempting to correct for factors such as length of career and co-authorship.



Researchers at the National Institutes of Health have developed
a method to quantify the influence of a research article by making novel
use of its co-citation network to field-normalize the number of
citations it has received. The beta version of iCite can be used to calculate the Relative Citation Ratio of articles listed in PubMed.
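For bulk lookups, iCite also exposes a public web API. The sketch below shows the general pattern; the endpoint and field names reflect the iCite documentation at the time of writing and should be verified against the current docs, and the PubMed ID shown is only a placeholder.

    import requests

    def relative_citation_ratios(pmids):
        """Return {pmid: RCR} for a list of PubMed IDs via the iCite API."""
        response = requests.get(
            "https://icite.od.nih.gov/api/pubs",
            params={"pmids": ",".join(str(p) for p in pmids)},
            timeout=30,
        )
        response.raise_for_status()
        return {
            record["pmid"]: record.get("relative_citation_ratio")
            for record in response.json().get("data", [])
        }

    # Placeholder PubMed ID; substitute the articles you want to check
    print(relative_citation_ratios([23456789]))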



Rating Journals

Rightly or wrongly, the quality of a paper is sometimes judged by the
reputation of the journal in which it is published. Various metrics
have been devised to describe the importance of a journal.



Impact Factor

The Impact Factor (IF) is a proprietary measure calculated annually by Thomson Reuters
(formerly by ISI). This figure is based on how often papers published
in a given journal in the preceding two years are cited during the
current year. This number is divided by the number of "citable items"
published by that journal during the preceding two years to arrive at
the IF. Weaknesses of this metric include sensitivity to inflation
caused by extensive self-citation within a journal and by single,
highly-cited articles. For more information about the IF, see the essays of Dr. Eugene Garfield,
founder of ISI. Determining a journal's IF requires access to Thomson
Reuters Journal Citation Reports, not available at IC Library.
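As a concrete illustration of that arithmetic (the journal and figures below are invented, not real data):

    # 2016 IF = citations received in 2016 to items published in 2014-2015,
    # divided by the number of citable items published in 2014-2015.
    citations_2016_to_2014_2015_items = 1200
    citable_items_2014_2015 = 400

    impact_factor_2016 = citations_2016_to_2014_2015_items / citable_items_2014_2015
    print(impact_factor_2016)  # 3.0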



Eigenfactor

The Eigenfactor is a more recent, freely available metric
devised at the University of Washington by Jevin West and Carl
Bergstrom.3
Where the IF counts all citations to a given article as being equal,
the Eigenfactor weights citations based on the impact of the citing
journal. Its creators assert that it can be viewed as "a rough estimate
of how often a journal will be used by scholars." Eigenfactor values are
freely available at eigenfactor.org.



SCImago Journal Rank Indicator

The SCImago Journal Rank indicator (SJR) is another freely available metric.4
It uses an algorithm similar to Google's PageRank. Currently, this
metric is only available for journals covered in Elsevier's Scopus
database. Values may be found at scimagojr.com.
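To give a feel for what a PageRank-style weighting does, here is a toy sketch in which citations from influential journals count for more. It is only an illustration of the principle; it is not the actual Eigenfactor or SJR formula, and the three-journal citation matrix is invented.

    import numpy as np

    # C[i, j] = citations from journal j to journal i (self-citations excluded)
    C = np.array([[0, 3, 1],
                  [2, 0, 4],
                  [1, 1, 0]], dtype=float)
    P = C / C.sum(axis=0)            # normalise each column to sum to 1

    d = 0.85                         # damping factor, as in PageRank
    n = P.shape[0]
    w = np.ones(n) / n
    for _ in range(100):             # power iteration until the weights settle
        w = d * (P @ w) + (1 - d) / n

    print(w)                         # relative influence weights of the journals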


 

Journal Article Acceptance Rates

Locating acceptance rates
for individual journals or for specific disciplines can be difficult,
yet this information is often needed for promotion and tenure. Journals with
lower article-acceptance rates are frequently considered to be more prestigious and more “meritorious.”



The method of calculating acceptance rates varies among journals.
Some journals use all manuscripts received as the base for computing this
rate. Other journals allow the editor to screen submissions and calculate the
acceptance rate only on the papers sent to reviewers, a base that is smaller
than the total number of manuscripts received. Also, many editors do not
maintain accurate records on this data and can provide only a rough estimate.
Furthermore, the number of people working in a particular area of
specialization influences the acceptance rate: if only a few people can write
papers in an area, the journal's acceptance rate tends to be higher. Some
journals include the acceptance rate in the “information for authors” section of the print journal or on the journal's home pages.



Some sources to find journal acceptance rates are as follows:



Cabell's Directories of Publishing Opportunities - Ithaca College School of Business has a subscription which covers the following areas: management, marketing, accounting, economics and finance. Go to Your Access, where you can either browse or search for a specific journal(s). Note: on-campus access only, as an Ithaca College IP address is required.



MLA International Bibliography - Choose Advanced Search, then Directory of Periodicals. You can then look up the periodical you are interested in.



American Psychological Association (APA) Journal Statistics and Operations Data
- These PDFs provide information about manuscript rejection rates,
circulation data, publication lag time, and other journal statistics.



Association for the Advancement of Computing in Education (AACE) - submission review policy, acceptance rate and indices.









Cited Reference Searching

 How to Find Cited References.

 

References



1. Hirsch, J.E. An index to quantify an individual’s scientific research output. Proceedings of the National Academy of Sciences of the United States of America 102, 16569-16572 (2005).


2. Meho, L.I. & Yang, K. Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar. Journal of the American Society for Information Science and Technology 58, 2105-2125 (2007).


3. Bergstrom, C. Eigenfactor: Measuring the value and prestige of scholarly journals. College & Research Libraries News 68, (2007).


4. González-Pereira, B., Guerrero-Bote, V.P. & Moya-Anegón, F. The SJR indicator: A new indicator of journals' scientific prestige.

Altmetrics

Altmetrics are non-traditional, social web-based measures of the reach and
influence of scholarly output. Some examples of altmetrics are downloads,
bookmarks, blog posts, shares, and citation indexes. Altmetrics data can be
used to explain the amount and type of online attention that items receive.
According to the authors of Meaningful Metrics, these measures show the
impact of a wider range of items beyond traditional research journals
(software, data sets, and slide decks, for example) and can show areas of
influence outside of disciplinary boundaries and formal academic communities
(through citations in public policy documents and references in patents,
learning objects, and press coverage, for instance).



Wiley now makes altmetrics available for their fully open access
journals. Other scholarly publishers such as Elsevier and Sage also
offer article-level altmetrics, including comments and shares made by
readers via social media channels, blogs, newspapers, etc. Here's an
example from the ScienceDirect database:









When used with traditional, citation-based bibliometrics, altmetrics can
form a more complete and detailed picture of the impact of
scholarship. They can sometimes indicate the potential impact of
research on society or a field of study. Altmetrics can be especially
useful in disciplines where journal articles are not the primary output.



Faculty can use altmetrics to:
  • Determine who, where, and how their research is being used
  • Quantify how their research is being used
  • Spot potential collaborators in their field
  • Show the influence of their work inside and outside their field of study and academia
  • Learn what others think of their research
  • Demonstrate successful outreach
  • Pinpoint and filter other sources of importance
  • Provide a context for their work
  • Solicit research funding
  • Identify publications to submit to


For Tenure and Promotion



The Leiden Manifesto for Research Metrics



Guide to Preparing a Dossier for Promotion or Tenure

University of Colorado Denver Medical School



Guidelines for Preparing and Reviewing Promotion and Tenure Dossiers 2017-18 

Indiana University-Purdue University Indianapolis



Guidelines for the Evaluation of Digital Scholarship in Art and Architectural History

College Art Association and the Society of Architectural Historians



Guidelines for Evaluating Work in Digital Humanities and Digital Media

Modern Language Association



In a CV

Recommendations for including altmetrics in a CV suggest providing
contextual information, such as percentiles, maps, and qualitative data.
Here's one example from the CV of Trevor A. Branch, Biology, University of Washington:













Additional examples can be found at What Are Altmetrics? and in the University of Maryland Bibliometrics and Altmetrics: Measuring the Impact of Knowledge guide.



Altmetrics Tools



Impactstory

Provides a profile of the online impact of a researcher's work.



Altmetric Bookmarklet

Enables viewing article-level metrics.



PLOS Article Level Metrics

Tracks item-level views, saves, citations, recommendations, and discussions of scholarly output.
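For programmatic access, Altmetric.com (whose "donut" was mentioned earlier) also offers a free public details API. The sketch below follows its publicly documented v1 endpoint and field names as of writing; heavier or commercial use requires an API key, so treat this as an illustration and check the current terms and documentation.

    import requests

    def altmetric_summary(doi):
        """Return a few headline attention figures for a DOI from Altmetric.com."""
        response = requests.get("https://api.altmetric.com/v1/doi/" + doi, timeout=30)
        if response.status_code == 404:
            return None  # no attention data recorded for this DOI
        response.raise_for_status()
        data = response.json()
        return {
            "altmetric_score": data.get("score"),
            "tweeters": data.get("cited_by_tweeters_count"),
            "news_stories": data.get("cited_by_msm_count"),
            "mendeley_readers": (data.get("readers") or {}).get("mendeley"),
        }

    # Example DOI from the first article in this compilation
    print(altmetric_summary("10.1002/asi.23559"))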



Increase the Visibility of Your Research - Bibliometrics and Altmetrics: Measuring the Impact of Knowledge - LibGuides at University of Maryland Libraries

 Source: http://lib.guides.umd.edu/bibliometrics/visibility


Methods for increasing visibility vary by discipline. 



Suggested strategies:



  1. Include publications in an open repository so Google will track when you've been cited.
  2. Publish in an Open Access journal or self-archive it (if the publisher allows).
  3. Publish/share data associated with your research - for more information see 
  4. Publish in an online journal with search features allowing users to find articles that cite it. For example, see the "cited by" features in Highwire Press journal articles.
  5. Share publications using social networking tools such as Mendeley, ResearchGate, CiteULike, getCITED, Twitter, Slideshare, blogs, etc.
  6. Create an online presence utilizing tools such as an ORCID iD, ResearcherID, Google Scholar Citations profile, or LinkedIn, and link to your profile on university webpages, vitae, and/or within email signatures.
  7. List/link publications on personal websites or university webpages that are crawled by Google Scholar - specifically not behind a login screen such as that of Canvas, WebCT, Blackboard, or Moodle.
  8. List publications as recommended reading on a course website (but not buried behind a login).
  9. Bone up on how to influence Google page rankings. Facebook shares, backlinks, and tweets are the top ways to increase page visibility in search engine result pages.
  10. Keywords and abstracts play a vital role in researchers retrieving an article, especially for indexes or search engines that do not have the full text of the article available. Be sure to identify numerous synonyms and use terms that you used in conducting your own literature review.
  11. Publish thought-provoking, critical pieces or literature reviews - these traditionally have higher citation rates, as do pieces dealing with hot topics.
For additional information specific to a given discipline, we recommend consulting senior faculty in your department.



Source: Promotion & Tenure Resource Guide. Iowa State University. Authors: Jeff Alger, Jeff Kushkowski, and Lorrie Pellack. [Accessed June, 2014].


Further reading




Create Audio/Video Slides for Your Research



