Monday, 30 May 2016

The ultimate guide to staying up-to-date on your articles’ impact - Impactstory blog




Source: http://blog.impactstory.org/ultimate-guide-for-articles/ 

The ultimate guide to staying up-to-date on your articles’ impact









You
published a paper–congrats!  Has anyone read it?  Cited it?  Talked
about it on Twitter?  How can you find out–as it happens?
Automated alerts!  Email updates that matter come right to you.
We’ve compiled a two-part primer on the services that
deliver essential research impact metrics straight to your inbox, so you
can stay up to date without having to do a lot of work.
In this post, we’ll share tips for how to automagically
track citations, altmetrics and downloads for your publications; in our
next post, we’ll share strategies for tracking similar metrics for your
data, code, slides, and social media outreach.

Citations

Let’s start with citations: the “coin of the realm” to
track scholarly impact. You can get citation alerts in two main ways:
from Google Scholar or from traditional citation indices.

Google Scholar Citations alerts

Google Scholar citations track any citations to your work
that occur on the scholarly web. These citations can appear in any type
of scholarly document (white papers, slide decks, and of course journal
articles are all fair game) and in documents of any language. Naturally,
this means that your citation count on Google Scholar may be larger
than on other citation services.
To get Google Scholar alerts, first sign up for a Google Scholar Citations account and add all the documents you want to track citations for. Then, visit your profile page and click the blue “Follow” button at the top of your profile. You’ll see a drop-down like this:

[Screenshot of a Google Scholar profile, showing the blue "Follow" button]
Enter your preferred email address in the box that appears, then
click “Create alert.” You’ll now get an alert anytime you’ve received a
citation.

Citation alerts via Scopus & Web of Knowledge

Traditional citation indices like Scopus and Web of
Knowledge are another good way to get citation alerts delivered to your
inbox. These services are more selective in scope, so you’ll be notified
only when your work is cited by vetted, peer-reviewed publications.
However, they only track citations for select journal articles and book
chapters–a far cry from the diverse citations that are available from
Google Scholar. Another drawback: you have to have subscription access
to set alerts.

Web of Knowledge

Web of Knowledge offers article-level citation alerts.
To create an alert, you first have to register with Web of Knowledge by
clicking the “Sign In” button at the top right of the screen, then
selecting “Register”.

Then, set your preferred database to the Web of Science
Core Collection (alerts cannot be set up across all databases at once).
To do that, click the orange arrow next to “All Databases” to the right
of “Search” in the top-left corner. You’ll get a drop-down list of
databases, from which you should select “Web of Science Core
Collection.”
Now you’re ready to create an alert. On the Basic Search
screen, search for your article by its title. Click on the appropriate
title to get to the article page. In the upper right hand corner of the
record, you’ll find the Citation Network box. Click “Create citation
alert.” Let Web of Knowledge know your preferred email address, then
save your alert.

Scopus

In Scopus,
you can set up alerts for both articles and authors. To create an alert
for an article, search for it and then click on the title in your
search results. Once you’re on the Article Abstract screen, you will see
a list of papers that cite your article on the right-hand side. To set
your alert, click “Set alert” under “Inform me when this document is
cited in Scopus.”
To set an author-level alert,
click the Author Search tab on the Scopus homepage and run a search for
your name. If multiple results are returned, check the author
affiliation and subjects listed to find your correct author profile.
Next, click on your author profile link. On your author details page,
follow the “Get citation alerts” link, then name your saved alert, set an
email address, and select your preferred frequency of alerts. Once
you’re finished, save your alert.
With alerts set for all three of these services, you’ll now
be notified when your work is cited in virtually any publication in the
world! But citations only capture a very specific form of scholarly
impact. How do we learn about other uses of your articles?
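If you prefer to pull citation counts on demand rather than wait for an email, Crossref's free public REST API reports how many Crossref-indexed papers reference any DOI. Below is a minimal sketch in Python; the endpoint and the "is-referenced-by-count" field are part of Crossref's public API, but keep in mind these counts will differ from Google Scholar, Scopus, and Web of Knowledge, since each service indexes a different corpus.

```python
# Minimal sketch: looking up a paper's Crossref citation count by DOI.
# Uses only the Python standard library. Counts come from Crossref's
# index, so they will not match Google Scholar, Scopus or Web of
# Knowledge, each of which indexes a different corpus.
import json
from urllib.request import urlopen

CROSSREF_WORKS = "https://api.crossref.org/works/"

def crossref_url(doi):
    """Build the Crossref works URL for a given DOI."""
    return CROSSREF_WORKS + doi

def parse_citation_count(payload):
    """Pull the citation count out of a decoded Crossref works response."""
    return payload["message"]["is-referenced-by-count"]

def citation_count(doi):
    """Fetch the current Crossref citation count for a DOI (network call)."""
    with urlopen(crossref_url(doi)) as resp:
        return parse_citation_count(json.load(resp))
```

Run something like `citation_count("10.xxxx/your-doi")` on a schedule (a weekly cron job, say) and compare against the previous value, and you have a bare-bones citation alert of your own.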

Tracking article pageviews & downloads

How many people are reading your work? While you can’t be
certain that article pageviews and full-text downloads mean people are
reading your articles,  many scientists still find these measures to be a
good proxy. A number of services can send you this information via
email notifications for content hosted on their sites. Impactstory can
send you pageview and download information for some content hosted
elsewhere.

Publisher notifications

Publishers like PeerJ and Frontiers send notification emails as a service to their authors.
If you’re a PeerJ author, you should receive notification
emails by default once your article is published. But if you want to
check if your notifications are enabled, sign into PeerJ.com, and click
your name in the upper right hand corner. Select “Settings.” Choose
“Notification Settings” on the left nav bar, and then select the
“Summary” tab. You can then choose to receive daily or weekly summary
emails for articles you’re following.
In Frontiers journals, it works like this: once logged in,
click the arrow next to your name on the upper left-hand side and select
“Settings.” On the left-hand nav bar, choose “Messages,” and under the
“Other emails” section, check the box next to “Frontiers monthly impact
digest.”
Both publishers aggregate activity for all of the
publications you’ve published with them, so no need to worry about
multiple emails crowding your inbox at once.
Not a PeerJ or Frontiers author? Contact your publisher to find out
if they offer notifications for metrics related to articles you’ve
published. If they do, let us know by leaving a comment below, and we’ll
update this guide!

ResearchGate & Academia.edu

Some places where you upload free-to-read versions of your papers, like ResearchGate and Academia.edu, will report how many people have viewed your paper on their site.
You can turn on email notifications for pageviews,
downloads, comments, bookmarks, and citations by other papers on
ResearchGate by visiting “Settings” (on both sites, click the triangle
in the upper right-hand corner of your screen). Then, click on the
“Notifications” tab in the sidebar menu, and check off the types of
emails you want to receive. On Academia.edu, the options to receive new
metrics notifications for pageviews, downloads, and bookmarks are under
“Analytics” and “Papers”; on ResearchGate, they’re under “Your
publications” and “Scheduled updates”.

PLOS article metrics via Impactstory

Impactstory now offers alerts, so you’re notified any time
your articles get new metrics, including pageviews and downloads.
However, we currently only offer these metrics for articles published in
PLOS journals. (If you’d like to see us add similar notifications for
other publishers, submit an idea to our Feedback site!) We describe how to get Impactstory notifications for the articles that matter to you in the Social Media section below.

Post-publication peer review

Some articles garner comments as a form of post-publication
peer review. PeerJ authors are notified any time their articles get a
comment, and any work that’s uploaded to ResearchGate can be commented
upon, too. Reviews can also be tracked via Altmetric.com alerts.

PeerJ

To make sure you’re notified when you receive new PeerJ
comments, log in to PeerJ and go to “Settings” > “Notification
Settings”  and then click on the “Email” tab. There, check the box next
to “Someone posts feedback on an article I wrote.”

ResearchGate

To set your ResearchGate notifications, login to the site and navigate to “Settings” > “Notifications.” Check the boxes next to “One of my publications is rated, bookmarked or commented on” and “Someone reviews my publication”.

Altmetric.com

Post-publication peer reviews from Publons and PubPeer are included in Altmetric.com notification emails,
and will be included in Impactstory emails in the near future.
Instructions for signing up for Altmetric and Impactstory notifications
can be found below.

PubChase

Article recommendation platform PubChase
can also be used to set up notifications for PubPeer comments and
reviews that your articles receive. To set it up, first add your
articles to your PubChase library (either by searching and adding papers
one-by-one, or by syncing PubChase with your Mendeley account).
Then, hover over the Account icon in the upper-right hand corner, and
select “My Account.” Click “Email Settings” on the left-hand navigation
bar, and then check the box next to “PubPeer comments” to get your
alerts.

Social media metrics

What are other researchers saying about your articles
around the water cooler? It used to be that we couldn’t track these
informal conversations, but now we’re able to listen in using social
media sites like Twitter and on blogs. Here’s how.

Social media metrics via Altmetric.com

Altmetric.com allows you to track altmetrics and receive
notifications for any article that you have published, no matter the
publisher.

First, install the Altmetric.com browser bookmarklet (visit this page
and drag the “Altmetric It!” button into your browser menu bar). Then,
find your article on the publisher’s website and click the “Altmetric
it!” button. The altmetrics for your article will appear in the upper
right-hand side of your browser window, in a pop-up box similar to the
one at right.
Next, follow the “Click for more details” link in the
Altmetric pop-up. You’ll be taken to a drill-down view of the metrics.
At the bottom left-hand corner of the page, you can sign up to receive
notifications whenever someone mentions your article online.
The only drawback of these
notification emails is that you have to sign up to track each of your
articles individually, which can cause inbox mayhem if you are tracking
many publications.
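If per-article email signups get unwieldy, Altmetric.com also exposes a free public API you can poll yourself across all of your DOIs at once. The sketch below assumes the v1 DOI endpoint (`https://api.altmetric.com/v1/doi/<doi>`) and a "score" field in its JSON response; a 404 means Altmetric has tracked no attention for that article yet.

```python
# Sketch: polling Altmetric.com's free public API for one article's
# metrics, as an alternative to signing up for per-article email alerts.
# Assumptions (see lead-in): the v1 DOI endpoint URL and the "score"
# field of its JSON response. HTTP 404 means no attention tracked yet.
import json
from urllib.error import HTTPError
from urllib.request import urlopen

ALTMETRIC_V1_DOI = "https://api.altmetric.com/v1/doi/"

def altmetric_url(doi):
    """Build the v1 DOI lookup URL."""
    return ALTMETRIC_V1_DOI + doi

def parse_score(payload):
    """Read the Altmetric score from a decoded response; 0 if absent."""
    return payload.get("score", 0)

def altmetric_score(doi):
    """Fetch the current Altmetric score for a DOI (network call).

    Returns None when Altmetric has tracked no attention (HTTP 404)."""
    try:
        with urlopen(altmetric_url(doi)) as resp:
            return parse_score(json.load(resp))
    except HTTPError as err:
        if err.code == 404:
            return None
        raise
```

Loop `altmetric_score` over a list of your DOIs and you sidestep the one-signup-per-article drawback entirely.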


Social media metrics via Impactstory

Here at Impactstory, we recently launched similar notification emails. Our emails differ in that they alert you to new social media metrics, bookmarks, and citations for all of your articles, aggregated into a single report.
To get started, create an Impactstory profile
and connect your profile to ORCID, Google Scholar, and other
third-party services. This will allow you to auto-import your articles.
If a few of your articles are missing, you can add them one by one by
clicking the “Import stuff” icon, clicking the “Import individual
products” link on the next page, and then providing links and DOIs. Once
your profile is set up, you’ll start to receive your notification
emails once every 1-2 weeks.
When you get your first email, take a look at your “cards”.
Each card highlights something notable about your new metrics for that
week or month: for example, that you’re in a top percentile relative to
other papers published that year, or that your PLOS paper has topped
1,000 views or gained new Mendeley readers. You’ll get a card for each
type of new metric one of your articles receives.
Note that Impactstory notification emails also contain
alerts for metrics that your other types of outputs–including data, code
and slide decks–receive, but we’ll cover that in more detail in our
next post.

Now you’ve got more time for the things that matter

No more wasting your days scouring 10+ websites for
evidence of your articles’ impact; it’s now delivered to your inbox, as
new impacts accumulate.
Do you have more types of research outputs, beyond journal
articles? In our next post, we’ll tell you how to set up similar
notifications to track the impact of your data, software, and more.
Updates:

12/17/2014: 
Updates to describe the revamped Impactstory interface and new notification options for ResearchGate and Academia.edu

5/27/2014: Added information about PubChase notification emails.







Sunday, 29 May 2016

7 Step to Do Academic Research Using Digital Technologies ~ Educational Technology and Mobile Learning

Source: http://www.educatorstechnology.com/2016/05/7-step-to-do-academic-research-using.html



May 29, 2016

Research is a process. It is a continuum of stages that together make up
a research plan. Below is a tentative sketch of what we think are the
seven important steps of a research plan. For each of these stages we
featured a short collection of web tools to help you carry it out. We
have also created a poster capturing the steps and tools we covered
here. You can download, share and use the poster the way you want as
long as you use it for educational purposes.



The first stage you embark on after you have decided on your research
topic and defined your research question is to start collecting data
(e.g. journal articles, blog posts, books, documents, PDFs, etc.) related
to your topic. This process involves searching, curating and organizing
your materials. Here are some tools to help you do that:









1- Search for data

2- Curate your data

3- Save and organize your data

4- Review your reading materials

After you have collected the data related to your topic, the reading
marathon starts. Here are some good tools to help you with your reading:

5-Synthesize and take notes

After you have familiarized yourself with the reading materials at
hand (first reading), you will start taking notes and synthesizing your
information. Here are some tools for this purpose:

6- Write-up

These are some excellent word editors you may want to use to prepare the write-up or report of your findings:

7- Bibliography, citations and references

In the editing and proofreading process and before you share your
report, you will have to pay close attention to your bibliography and
check your references and citations. These are some tools to help you
better manage your bibliography:


Saturday, 28 May 2016

ScHARR Information Resources Blog: An introduction to Altmetrics for Librarians, Researchers and Academics

 Source: http://scharrlibrary.blogspot.my/2016/05/an-introduction-to-altmetrics-for.html






















Friday, 27 May 2016





An introduction to Altmetrics for Librarians, Researchers and Academics


Andy Tattersall
Andy Tattersall has an edited book coming out in June on the topic of
altmetrics. Altmetrics: A Practical Guide for Librarians, Researchers
and Academics is published by Facet Books. As part of the book launch,
Andy has created a short video explaining altmetrics, in addition to
writing a blog post for CILIP, which can be read in full below.




The book can be pre-ordered and purchased from various outlets. 

Facet 

Waterstones

Amazon








Altmetrics: What they are and why they should matter to the library and information community

Altmetrics is probably a term that many readers of this blog will have
heard of without being quite sure what it means or what impact it could
have on their role. The simple answer is that altmetrics stands for
alternative metrics. By alternative we mean alternative to the
traditional metrics used in research and by libraries, such as citations
and journal impact factors. Altmetrics are by no means a replacement for
traditional metrics, but rather a way to draw out more pertinent
information tied to a piece of academic work. A newer way of thinking
about altmetrics is to refer to them as alternative indicators.
https://www.youtube.com/watch?v=VP7ND5PBcbA&feature=youtu.be

Scholarly communication is instrumental to altmetrics

There is also the focus on scholarly communication as altmetrics are
closely tied to established social media and networks. Scholarly
communication is instrumental to altmetrics and much of what it sets out
to measure. These include tools such as Twitter, LinkedIn and blogs as
well others including Mendeley and Slideshare.
The main protagonist of the altmetrics movement is ImpactStory, which
was set up by Jason Priem, who coined the term ‘altmetrics’. It is
joined by Figshare, Altmetric.com, Mendeley, PLOS and Kudos, amongst
others. These were mostly established by young researchers who were
concerned that research was being measured on the grounds of just a few
metrics: metrics that gave an unbalanced view of research and did not
take into account the technologies that many academics were using to
share and discuss their work.
Altmetrics is not just about bean counting, though obviously the more
attention a paper gets, whether citations or tweets, the more
interesting it may be to a wider audience of academics, students or the
general public. More tweets do not necessarily mean a paper is better
quality than one that is not tweeted as much, but the same applies to
traditional metrics: more citations do not always mean a great piece of
research, and can occasionally highlight the opposite.

Altmetrics provide an insight into things we have not measured before

What altmetrics set out to do is provide an insight into things we have
not measured before, such as social media interaction, media attention,
global reach and the potential to spot hot topics and future pieces of
highly cited work. In addition, altmetrics allow content to be tracked
and measured that in the past was wholly ignored, such as datasets,
grey literature, reports, blog posts and other content of potential
value.
The current system recognises a slim channel of academic content in a
world that is diversifying constantly, at a much faster pace than ever.
The academic publishing model has struggled to catch up with the modern
world of Web 2.0 and social media and therefore academic communication
has been stunted. Tools such as Twitter, blogs and Slideshare have
allowed researchers to get their content onto the Web instantly, often
before they have released the content via the formal channels of
conferences and publications.
Tools such as ImpactStory, Figshare and Altmetric.com look at the
various types of scholarly content and communication and provide metrics
to help fund holders, publishers, librarians, researchers and other
aligned professionals get a clearer picture of the impact of their
work. 
Fundholders can see where their funded research is being discussed and
shared, as can researchers, who may discover that their research is not
being talked about at all; that at least gives them reason to act on it.
Publishers can see, in addition to existing paper citations, how else
their papers are being discussed and shared. Library and information
professionals have an important part to play in all of this.

What is the role of the library and information professional?

There
are certain roles in the library and information profession that have
plenty to gain by becoming involved with altmetrics. Firstly those that
deal with journal subscriptions and hosting content in repositories can
gain a new insight into which journals and papers are being shared and
discussed via altmetrics. This becomes increasingly important when
making yearly subscription choices when journal and book funds are being
constantly squeezed. Obviously this is not a solution or get-out clause
for librarians when deciding which subscriptions to cancel, as you
should not always pick the most popular journals at the expense of
minority, niche journal collections, but altmetrics do offer a new set
of identifiers when making those tough budgetary decisions. 
LIS professionals are often technically proficient and for those who
deliver outreach services and support for academics and students there
is much they can do to help explain the new forms of scholarly
communication and measurement. Many library and information staff are
expert users of social media and of tools such as SlideShare, Mendeley
and blogs. Because library and information professionals often occupy a
neutral role, they can make informed decisions about the best ways to
help staff discover and communicate research. These skills are starting
to spread slowly within the academic community, and LIS professionals
are in an ideal position to capitalise on altmetrics.

The future

Certainly how academic outputs are measured in the future is anyone’s
guess. We could move away from metrics to something that focuses on case
studies, or move more towards open public peer review of research.
Certainly the impact factor and citation indexes are with us for the
foreseeable future. It’s likely we will see an amalgamation of systems
with some regarded as more uniform and formal than others. 
As each month passes we see another set of tools appear on the Web that
promises to aid researchers share, communicate and discover research, so
we could be at risk of information overload and decision fatigue when
it comes down to choosing the right tools for the job. The reality is
that we are unlikely to discover a magic silver bullet solution for how
we measure scholarly work. All of the options offer something and if
they can be designed and coerced to work together better; scholarly
communication and measurement could reach a plateau of productivity.
Yet this requires an awful lot more engagement from the academic
community, one that is already under pressure from various angles to
deliver research and extract from it examples of impact. Nevertheless,
altmetrics clearly look like they are here to stay for the mid-term at
the very least and are gaining acceptance in some parts of the research
and publishing sphere. 
For now I suggest you investigate Figshare, ImpactStory, Mendeley and
Altmetric.com, to name but a few, in addition to signing up for an
Altmetric.com librarian account and installing their web bookmarklet.
To summarise, if we were to draw a Venn diagram with social media in one
bubble and metrics in another, we would clearly see librarians in the
overlapping area, alongside altmetrics. It’s really down to whether you
want a share of that space.





Thursday, 26 May 2016

New guides for researchers – Altmetric

 Source: https://www.altmetric.com/blog/new-guides-for-researchers/


New guides for researchers


Cat Williams, 25th May 2016


We get asked a lot how researchers can best make
use of altmetrics and the data we provide. Our thinking around this
splits the potential use cases into 3 main categories:


  • Reputation management: because Altmetric tracks attention sources
    in real time to provide a collated record of the mentions and shares of
    your research as soon as it’s published, it’s much easier to see who is saying what about it.
    This gives you the opportunity to step in early to engage or clarify
    details of your research where necessary and helps ensure your findings
    are being communicated and interpreted correctly.
  • Showcasing reach and influence: if you’re looking
    for ways to better demonstrate the reach and influence of your work or
    engagement beyond just academic audiences, altmetrics can be really
    useful. You might like to consider including examples of where your
    publications have been covered in the media, discussed amongst different
    communities, or have achieved international reach in things like your
    CV, funding applications or annual reviews. You could also use the
    altmetrics data as part of the discussion of the aims and outcomes of
    your research with your line or program manager – a great way to start
    getting and analysing some feedback if your research has only recently
    been published and does not yet have any formal academic citations.
  • Discovery: Altmetric has tracked attention to about
    5 million outputs to date. That’s a mixture of journal articles, books,
    datasets, images, reports, and thousands of other research items that
    have received mentions or shares in the sources we track. Altmetric badges can now be found on the abstract pages of over 6,000 journals, and tools such as the free Bookmarklet and Explorer for Institutions
    can be used to gather context for new publications in your field –
    making it easy to see what’s trending and why it’s popular. Some
    researchers have even found new collaborators by checking the Altmetric details page for their work to see who is discussing it!
To support researchers in understanding and making use of altmetrics
we’ve created some brand new guides that we hope will be useful:





Using altmetrics in your NIH Biosketch

Showcasing the value of your previous work to funders is becoming
increasingly important. They want to see what the broader impacts of
your research are, how you’ve engaged with different audiences, and to
gather as much context as possible in relation to the grant you are
applying for.


Making use of Altmetric (and other altmetrics) tools and selecting
highlights to include in your personal summary can really help support
your case.


Download this guide
to find out more about why these kinds of insights are likely to be of
interest to your funder, how to find the data, and some examples of what
including it might look like.











Tips and tricks for promoting your research

At Altmetric we assign each research output we find online attention for an ‘Altmetric score‘.
This score is a weighted count of the online attention an item has
received and is intended to reflect the reach of that engagement (the
higher the score the bigger the reach and volume of attention). We know
that researchers are always keen to make their score as high as
possible, and whilst that in itself should never be the end goal, it’s
great if you want to get out there and make your research more visible
to audiences who might have an interest in it.
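The "weighted count" idea can be sketched in a few lines. The weights below are purely illustrative assumptions chosen for the example (Altmetric's actual weighting is more involved and source-specific); the point is only that a news story moves the score more per mention than a tweet does.

```python
# Illustrative sketch of a weighted attention score. The per-source
# weights here are assumptions for the example, NOT Altmetric's actual
# weighting; they just show that different sources contribute different
# amounts per mention.
WEIGHTS = {"news": 8, "blog": 5, "tweet": 1, "facebook": 0.25}

def weighted_score(mentions):
    """Sum mention counts per source, scaled by that source's weight."""
    return sum(WEIGHTS.get(source, 0) * count
               for source, count in mentions.items())

# One news story, two blog posts and ten tweets:
# 1*8 + 2*5 + 10*1 = 28
```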


This doesn’t need to be big; it can be as simple as making sure your
department is aware that you’ve published the work, including a link to
it in your social media profile or email signature, or making sure you
share it online when presenting at conferences.


We’ve created this guide
that offers some top tips for making your research more discoverable –
take a look and see which of the techniques would be most suitable for
attracting the audiences you want to reach, and what you might be able
to easily integrate into your existing workflows.








A researcher guide to getting started with Altmetric Explorer for Institutions

Many institutions all around the world are now using the Altmetric Explorer for Institutions
platform to monitor and report on the online attention surrounding
their research outputs at the author, department and institutional
level. For researchers, this is an opportunity to get to grips with a
lot of altmetrics! The database contains not only Altmetric data for
your institution’s publications, but also the millions of other research
outputs Altmetric have tracked attention to – making it possible to
compare and contrast the attention your research is receiving with that
of your peers from other organisations, identify trends, and discover
interesting new content to read.


Our new getting started guide
covers the basics: an overview of what the score and Altmetric donut
show; how to find out what attention your research is receiving;
registering to export data and set up email alerts to be notified of new
mentions; pulling out interesting mentions or coverage of your work for
your CV, annual review or funding application; and how to use the
platform to find recent publications in your field that are attracting
attention from a broader audience.













We hope you find these guides useful – let us know what you think and feel free to send us suggestions (via email, Twitter, or in the comments below) for others that would be good to have!






What If Academic and Scholarly Publishers Paid Research Authors? | The Scholarly Kitchen

Source: https://scholarlykitchen.sspnet.org/2016/05/25/what-if-academic-and-scholarly-publishers-paid-research-authors/

What If Academic and Scholarly Publishers Paid Research Authors?

Pay It Forward (Photo credit: Wikipedia)
A recent article in the Chronicle of Higher Education sought to explore what the authors of the 10 most-downloaded Sci-Hub articles think of Sci-Hub.
Aside from cherry-picking its facts (the journalist interviewed three
authors out of dozens involved in the papers), the question itself is a
red herring. After all, because publishers assume financial risk for scholarly and academic authors, the economic answer is obvious a priori
— authors should have little problem with piracy of their material if
said piracy might increase the number of people reading their work.
After all, they suffer no harm from this.
It’s like asking publishers if they care that academics are having their bank accounts hacked — the question is not relevant because it’s not their money at stake.


Money has been a central and simmering issue in the access debates, including the fact that research authors are unpaid. In the Chronicle article, the perceived unfairness of unpaid research authors is referenced in two sentences regarding an author named Pober:


Mr. Pober says he doesn’t mind that many people download
his paper free since he didn’t make money from its publication. In fact,
like most academics, he paid to submit his article.
(Two small corrections for the journalist — Pober is an MD/PhD, and I doubt Dr. Pober paid to submit his article as submission charges are rare. Instead, he most likely paid some fee after acceptance.)


The issue came up again recently in a Bloomberg Views overview of Elsevier’s acquisition of SSRN, as the writer tried to explain the economy of research publication:


The university-professor authors, editors and referees of
the journals, meanwhile, usually receive no monetary compensation for
their work.
It was also raised in a comment this week on a Kitchen post.


The implicit complaint each time someone mentions paying research
authors in passing is that not paying them is unfair. To generate this
effect, the idea of paying research authors is presented as if it makes
perfect sense and would be normal. But I’ve never seen the idea and its
potential consequences explored at length, and normal is a relative
measure.


Of course, most of the readers here know it’s not unfair or abnormal.
Authors have the economic relationship to publishers in which they are
not paid, and in many situations pay fees prior to publication (color
charges, data charges, submission fees, page charges, and/or APCs),
because their financial risk is eliminated and their rewards for
publication are indirect but significant — by
publishing, they lay claim to their findings, making them rivalrous; by
publishing, they claim priority over potential competitors; and by
publishing, they show their employers and authorities in their field
they are not shirking. Publish a paper in the right journal or earn a
stellar publication reputation, and you can go far. Many careers have
been made in this way.


Putting aside the fact that Pober and his co-authors likely benefited
indirectly from a strong research effort and well-cited and popular
paper in a very good journal — getting more grants, more budget, better
postdocs, better facilities, more speaking invitations, more press
coverage, more influence, a greater reputation — let’s pursue a scenario
in which they also would make money directly from
their paper’s publication.


The strawman question
for today: What if publishers paid research authors to publish, on a
widespread basis, and in amounts that would be meaningful?
For payments like this to make sense in the long-term, the positives
would have to outweigh the negatives. The question becomes: Do they?


There are some potential
positives — for instance, authors might be more willing to defend
publishers against piracy sites like Sci-Hub, and might understand
publishing economics more completely. Authors might be greater advocates
of driving traffic to their papers (although incentives and
abilities might remain mismatched here). The PR problems publishers have
faced over the past 15-20 years might be blunted if academics saw
publishers as a path to self-enrichment. It could also quiet the voices
crying “exploitation” to some degree, as payments would appropriate
authors into the financial upside of publishing.
But now we have to look for the downsides.


For the sake of this part of the discussion, I’ll avoid a
standard royalties model, as it is overly complicated and unlikely to be
implemented on a wide scale. There is one real example. In 2008, Cold
Spring Harbor Laboratory Press began paying authors and editors
royalties for one of its journals (CSH Protocols), setting aside 10% of subscription revenues and then dividing this up based on each author’s share of usage. Authors received about $300 for the papers with the highest usage, while some received as little as $3
(publishing at the end of the year meant low usage, so low royalties
for that year). If an author earned less than $25, Cold Spring Harbor
rolled the payment over to the next year. Protocols typically have few
authors, and the program apparently didn’t generate many more papers for
the journal, according to one of its founders. The amounts certainly
fell below the $600/year threshold that would have required Cold Spring
Harbor to provide royalty recipients with 1099-MISC tax forms and
report the earnings to the IRS. The program has since been
discontinued. In its place, a one-time honorarium has been substituted,
something not unusual for commissioned works in journals (e.g., review
articles). According to those who inherited the program, the
administrative burden of a royalty program was too high given the small
amounts being paid out.


Since the royalty model is unrealistic, I’ll instead focus on a
per-article fee to authors, as this illuminates many of the downsides
that would arise in either case, while allowing us to do some
straightforward math.


The first downside — paying authors in a meaningful way would
probably kill Gold OA and CC-BY in their tracks and move the entire
industry fully back to the subscription and licensing model, with
stricter copyright enforcement and authors more directly appropriated
into copyright enforcement. Piracy would not be tolerated throughout
academia, and payment terms would probably require authors to control
distribution of their works beyond the publisher’s own channels. Authors
would be more reluctant to share reprints without payment as this could
imperil their reputation for future payments, which would mean that
ResearchGate and Academia would probably wither and die. You only need
to think a moment, and these effects become clear — publishers would
have more control over author behavior in general having paid them. (Note: Some people may view some of these effects as positives.)


But let’s continue to explore the idea of paying authors beyond this one major set of consequences.


One important baseline difference in academic and scholarly journal
publishing is that the sheer number of authors we deal with far
outstrips the number in trade publishing, either book or magazine. A
single research paper in some fields could have more authors than exist
on a book publisher’s entire author list or a magazine publisher’s
entire cadre of writers. This is a key difference.


Because of the scale generated by large author lists — lists that are growing longer with each passing year
— a major and clear negative is that aggregate expenses across the
board would rise, as more money would be needed in the system than
before in order to pay authors while leaving other aspects funded as
they are now. Because paying authors would also eliminate the payments
publishers now collect from authors, such a scheme would trigger two
sources of price increase: the cost of paying authors, and the cost
of replacing revenue no longer coming from authors. Estimates aren’t
easy to make, but to sketch a model, let’s assume payments to authors
would be large enough to be personally significant, to serve as an
incentive.


Identifying who would get paid would not be a simple matter for many
papers. There are many types of “authors” for scholarly and scientific
papers. There are contributors, first authors, last authors, and
research team participants. There are authorship groups that sometimes
number in the thousands. Creating a payment system might lead to a great
deal of negotiation around payment contracts, authorship roles, the
authorship list itself, and so forth. This could slow the production of
papers, the publication of results, and the advancement of early career
scientists who would be negotiating from a weak position to be included
on paying papers.


Let’s assume, for simplicity’s sake, that each author on a scientific
research paper receives $100 for getting a paper published in a
journal. For some articles, especially in astrophysics and epidemiology,
there can be more than 1,000 authors per article, with an upper bound
of more than 3,000 listed authors in some cases. This means that a
publisher would have to pay as much as $100,000-300,000 per
article and cut 1,000-3,000 checks. Let’s pick the midpoint, and assume
the model’s paper has 2,000 authors. The administrative costs for one of
these papers would be astronomical (pun intended), while the
consequences aren’t clear. Would authorship lists shrink to incentivize
publishers to prefer one group’s work over another’s (because it would
cost less in author payments)? Would authorship lists grow, so more
people could get on the gravy train? Would the model last long, or would
it move to a flat per-paper payment model (discussed below)?


The per-author approach would make fields with large and
collaborative authorship groups far more expensive for institutions and
individuals paying to access content, while each individual author would
receive a token amount at most. How much would prices rise in order to
support these mini-payments to authors? Right now, I would speculate
that an astrophysics article can be published for at most a few thousand
dollars in costs to the publisher. Imagine that small amount
ballooning by $200,000 or more. Imagine what carrying those costs in
author-intensive fields — astrophysics, microbiology, epidemiology,
geological sciences — would do to library budgets.


Let’s continue the scenario, where to the new $200,000 in author
payments, we add the administrative costs to deal with checks,
snafus, and disputes, which I’ll peg at a normal 30% overhead. Now we’re
adding another $60,000 in processing costs. So an article that might
have cost $2,500 in expenses for the publisher to publish now costs
$262,500 to publish. Assuming 1,000 articles per year for a robust
astrophysics journal, an expense line of $2.5 million explodes to
expenses of $262,500,000, and this does not take into account
the previous payments from authors now foregone, which may be another $1
million in a year. That’s at least $260 million in expenses the
publisher would have to absorb — costs that would certainly be passed on
to libraries, subscribers, and others. Assuming roughly 5,000
subscribing institutions, that would mean a price increase of more than
$50,000 for one title. Clearly, that is not going to work.
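The arithmetic in this scenario can be checked with a few lines of code (a toy model, using only the figures assumed above: $100 per author, a 2,000-author paper, 30% administrative overhead, a $2,500 baseline publishing cost, 1,000 articles per year, and roughly 5,000 subscribing institutions):

```python
# Toy cost model for per-author payments, using the figures assumed above.
PAY_PER_AUTHOR = 100        # dollars per author
AUTHORS = 2_000             # midpoint for a large astrophysics paper
OVERHEAD = 0.30             # administrative overhead on author payments
BASELINE_COST = 2_500       # publisher's current cost per article
ARTICLES_PER_YEAR = 1_000   # a robust astrophysics journal
INSTITUTIONS = 5_000        # rough count of subscribing institutions

author_payments = PAY_PER_AUTHOR * AUTHORS              # $200,000
admin = author_payments * OVERHEAD                      # $60,000 in processing
per_article = BASELINE_COST + author_payments + admin   # $262,500 per article
annual = per_article * ARTICLES_PER_YEAR                # $262.5 million per year

# Added cost spread across institutions, before foregone author fees:
added = annual - BASELINE_COST * ARTICLES_PER_YEAR
print(per_article)                    # 262500
print(annual)                         # 262500000
print(round(added / INSTITUTIONS))    # 52000 per institution
```

The model ignores the foregone author payments mentioned above, which would push the per-institution increase slightly higher still.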


Astrophysics has an extremely robust and collaborative authorship
community, so let’s move to an area where fewer authors typically work
together — biomedicine. Here, let’s assume an average of 12 authors for
the sake of discussion to see what problems might occur in groups this
small.


Biomedicine is also collaborative, and multi-center trials are not
unusual. First-author position and last-author position mean a great
deal. Multi-national collaborations are fairly common in this and other
fields. Ethical issues are more front and center.


For this, let’s write the story as it might occur a few years after
author payments have become the norm, involving a paper that’s slightly
above average in size and slightly more complex than average in
composition:


A savvy editor receives a multi-national,
multi-center study with 15 authors from six institutions in three
countries. She runs through a few still-unfamiliar considerations before
commencing with scientific review, now that authors are being paid.
There’s been a lot of training over the past few years, and the editors
have quickly learned the tricks of the trade that come with experience,
but it still feels strange. First, how will the authors in the other
countries want to be paid? In local currency? How will their tax
authorities deal with the payments? Do we have nexus there? Will this
create it? What about the institutions involved? One of them, the editor
knows, requires that payments go to the researcher’s department, while
one of the institutions insists that any payments go to a fund for
scholarships.
The editor also sees that there is a long
“contributor” list, which she knows now means that there are going to be
authorship disputes down the road, especially if the manuscript moves
toward acceptance and publication. Why? Because ever since publishers
started paying authors, senior academics have become notorious for
pushing younger colleagues off the author list and into this unpaid
“contributor” category to make things go more smoothly early, and then
to demand that some or all of the contributors move onto the author list
once the journal has made a decision. The big names start throwing
their weight around when they think the editor is invested in the paper.
She’s seen these battles before, and they can be bruising. The senior
academics have probably made promises they shouldn’t have, there will be
a battle between the authors and her publisher, and she’ll be caught in
the middle. How much does she want to deal with this?
Then she catches that the first author
(B.A. Payne) is actually the Payne who is notorious for negotiating
after initial decision for a higher-than-normal author payment,
sometimes 5-10x normal, as if it were a speaker’s fee for a keynote at a major
conference or meeting. He’s even been known to require that every
author be paid this amount — and he’s gotten it from time to time at
some of the glamour journals, so he’s unashamed to ask for it
everywhere. While the publisher has a contingency fund set aside
which editors can use to secure a certain number of key papers per year,
she’s used her allotment by now, and would have to ask for an exception
if she moved the paper forward. And it’s the week before Thanksgiving,
making it unlikely that she’ll get the person she needs on the phone.
There have been a lot of new people hired in Finance to handle author
payments, but they still tend to sneak away just before major holidays,
and they’re overworked the rest of the year. She sighs. The fact the
organization is nearing year-end makes it less likely she’ll get
approval, as well.
All considered, even without reading the
paper, she decides this one is too much trouble right now. It’s easier
to reject it. Let someone else deal with this payment prima donna
and his unsorted crew of collaborators. It will save her
organization money and headaches. Besides, she had a paper earlier in
the day that was different but nearly as good, and the group involved
is known to be easy to work with and to donate their author payments to
the publisher’s parent society.
This illustrates a number of potential scenarios and responses to
incentives. But did this seem like a set of decisions an editor should
be involved in? Does it help or hurt the evaluation of the science?


Let’s assume a milder model, in which there is a flat per-paper
payment that the authors themselves have to divvy up in some manner,
leaving the editor and publisher out of it. Let’s assume the payment is
$1,000/article. At approximately 1.5 million published articles per year,
the financial impact on the industry would be $1.5 billion. With the
industry estimated at between $10B and $25B in total revenues, we’re
talking about a 6-15% tax on the system to pay authors. Assuming publishers
pass along that tax, institutions would face a 6-15% addition bundled
into an annual price increase for a site license. This would not go down
well, and would rupture many library budgets if generally
applied in short order. But every publisher would have to agree to do
this, which runs into anti-trust issues immediately. For one group of
publishers to do it and others not to would create chaotic market
dynamics — potentially better papers for some, but a raft of
cancellations as the costs hit the market. We are in an era of stringent
budgeting. How this stalemate would be broken — a standoff between
author payments and the risks involved in recovering the fees — is unclear.


There is also the very real possibility that author payments would
become another mini-economy within scholarly publishing. We can see this
in other industries, where each point of the transaction trail adds
costs (credit card fees, agency fees, transaction fees, courtesy
charges, and so forth). The $1.5 billion gross estimate could increase
another 15%, easily.


In the case of the 15-author biomedical paper, each author would
receive about $67. For the 2,000-author paper, each receives 50¢.
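These flat-fee figures can be reproduced with a quick back-of-the-envelope calculation (a sketch using only the numbers assumed above: a $1,000 flat fee per article, roughly 1.5 million articles per year, and industry revenues of $10B-$25B):

```python
# Back-of-the-envelope flat-fee model, using the figures assumed above.
FLAT_FEE = 1_000                 # dollars per article, split among authors
ARTICLES_PER_YEAR = 1_500_000    # approximate annual article output
INDUSTRY_LOW, INDUSTRY_HIGH = 10e9, 25e9   # total industry revenues, dollars

total_payments = FLAT_FEE * ARTICLES_PER_YEAR   # $1.5 billion per year
print(total_payments / INDUSTRY_LOW)            # 0.15 -> 15% of a $10B industry
print(total_payments / INDUSTRY_HIGH)           # 0.06 -> 6% of a $25B industry

# Per-author share when the flat fee is split evenly:
print(round(FLAT_FEE / 15))      # about $67 each for a 15-author paper
print(FLAT_FEE / 2_000)          # $0.50 each for a 2,000-author paper
```

The even split is itself an assumption; as discussed above, the authors would have to divvy up the payment in some manner, and an even division is only the simplest case.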


Either per-paper model may encourage more “salami slicing” of
results — the practice of squeezing as many papers as possible out of
one study — which probably occurs too frequently already.
Imagine this behavior incentivized with money as well as publication.


But let’s assume the market dynamics make this an irresistible
change, and it soon sweeps the industry. At $1,000 per article, the
costs might be absorbed in a few years, if library budgets could grow to
match (we’ll ignore the effect on tuition and fees for this exercise).
But how long does the flat-fee scenario last? And what does that do
for/to authors?


Soon, a natural market might emerge, one that would defeat the
flat-fee model. Some papers are worth more than others, some authors are
worth more than others, and so forth. Rather quickly, you’d have
bidding. The bidding would have two sides — journals bidding for
authors/papers, and researchers bidding for authorship. These both
already occur as journals woo authors with promises of priority
handling, good placement, cover positioning, and bells and whistles,
while researchers ask to be added to papers (or are asked to join papers
— in medicine, some authors are already paid by sponsors to do this).


While competition for papers occurs in the current reputational
system, we find it distasteful when a senior author is paid to sign onto
a paper (as noted above, it does occur). What would a senior author ask
to receive in order to join a promising paper and lend gravitas? Would
this researcher’s participation increase the bargaining position of the
authors to drive up their payments from the publisher? How long do these
negotiations take? Is more science published more quickly, or does the
system slow to a crawl as everyone is now focused on their immediate
financial position?


The bidding war is an interesting item to examine more closely. It brings to mind the
story of Jack Andraka, who won a science prize for an approach to
pancreatic cancer screening, which he then leveraged into grants and
celebrity
. No paper was published, his approach was found wanting
(and non-novel) as word spread, and nothing substantial other than fame
and noise came from it. But at one point, he might have benefited from a
bidding system, pocketing a high fee based on unsubstantiated claims.
How would a bidding system avoid these problems? How “in the blind”
would the bidders be? Would they only be given the author names, the
grant number, and the abstract, then asked to bid? Authors would be
tempted to pimp their papers, as they would gain notoriety for being
pursued. What if the winning bidder found fatal flaws in the paper?
Could they get a refund? What kind of retraction would this be?


This scenario also brings up the ethical problems already extant in
scientific and scholarly publishing — plagiarism, exaggerated claims,
fraud, image manipulation — and seemingly amplifies them. After all,
adding an incentive to an already heady pile of incentives would promise
to only bring out more bad behaviors.


It’s entirely possible that new disclosure rules and limits akin to the Sunshine Act
would be developed. Readers may come to implicitly trust journals that
don’t pay authors to publish, an inversion of the current situation in
which some today trust subscription journals more than Gold OA journals.
To many, money is a barometer of ethical purity.


There is also the issue of US government researchers, who certainly
could not accept payments. This likely goes for other governments’
employees.


Which publishers would be better-positioned to bid for papers? Large,
multi-national publishers — they have scale, work in multiple
currencies, and have deeper pockets to withstand an extended bidding
war. Which publishers would have an easier time dealing with the
administrative overheads of such a system? Large, multi-national
publishers. There are already many forces working toward large-scale
consolidation of journals and books under the auspices of large,
multi-national publishers. Paying authors could add another.


Even if all these issues could be addressed, how much could we pay
authors before other complications and costs emerged? Remember that in
the US, any organization paying an individual more than $600 in a year
must collect a tax form from the payee before payment and issue a
1099-MISC at tax time for the recipient to file with the IRS. This
amount of paperwork and overhead, with dozens to thousands of authors
per paper, would be difficult and costly to maintain. Errors would
occur, forms would be lost, and so on. And gathering tax forms from all
the authors prior to publication, while certainly possible, seems
unlikely to speed research along the path to publication.


In fact, it may be that after exploring the potential to get paid,
some researchers would begin to prefer publishers who didn’t pay them.
After all, they would be competing only on the quality of their
research, not on distracting elements driven by the payment scheme. At
the same time, publishers and editors would start to prefer authors who
waived their payments. The mutual benefits would be real: nobody
would have tax headaches, papers would be published sooner, and
the science, rather than the complexities of paying authors (available
budget, processing), would again be the focus. Overall, not being paid
may come to be preferred by both parties. Because the strong incentives around
publication would remain, there would still be good reasons to get works
published soon and in strong journals.


We have already seen a hint of this scenario: when the Company of
Biologists paid peer reviewers $25, the program was discontinued after
reviewers themselves asked to stop being paid. It turned out that
arranging to receive the payment cost more in time and effort
than the $25 was worth to them. It’s easy to imagine researchers feeling
similarly, especially after taking the time away from their labs or
wards to publish a paper.


Indirect incentives allow authors to shift risk to publishers for
publishing their papers, and allow editors and researchers to focus on
core scientific and intellectual issues. Paying authors would add a
great deal of expense to academic publishing, while tempting authors to
game the system, play the market, push the limits, and incite bidding
wars.


Perhaps the current practice of “paying it forward” is better for everyone all around.




What If Academic and Scholarly Publishers Paid Research Authors? | The Scholarly Kitchen

EconPapers: Major trends in knowledge management research: a bibliometric study

Major trends in knowledge management research: a bibliometric study

Peyman Akhavan (peyman_akv@yahoo.com),
Nader Ale Ebrahim (aleebrahim@um.edu.my),
Mahdieh A. Fetrati (m_afetrati@yahoo.com) and
Amir Pezeshkan (apezeshkan@ubalt.edu)





Scientometrics: An International Journal for all Quantitative Aspects of the Science of Science, Communication in Science and Science Policy, 2016, vol. 107, issue 3, pages 1249-1264


Abstract:
This study provides an overview of the knowledge management
literature from 1980 through 2014. We employ bibliometric and text
mining analyses on a sample of 500 most cited articles to examine the
impact of factors such as number of authors, references, pages, and
keywords on the number of citations that they received. We also
investigate major trends in knowledge management literature including
the contribution of different countries, variations across publication
years, and identifying active research areas and major journal outlets.
Our study serves as a resource for future studies by shedding light on
how trends in knowledge management research have evolved over time and
demonstrating the characteristics of the most cited articles in this
literature. Specifically, our results reveal that the most cited
articles are from the United States and the United Kingdom. The most prolific
year in terms of the number of published articles is 2009 and in terms
of the number of citations is 2012. We also found a positive
relationship between the number of publications’ keywords, references,
and pages and the number of citations that they have received. Finally,
the Journal of Knowledge Management has the largest share in publishing
the most cited articles in this field.


Keywords: Bibliometric; Citation analysis; Knowledge management; Research productivity (search for similar items in EconPapers)

Date: 2016



Downloads: (external link)
http://link.springer.com/10.1007/s11192-016-1938-x Abstract (text/html)

Access to the full text of the articles in this series is restricted.







Persistent link: http://EconPapers.repec.org/RePEc:spr:scient:v:107:y:2016:i:3:d:10.1007_s11192-016-1938-x


Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/11192



Scientometrics: An International Journal for all Quantitative Aspects of the Science of Science, Communication in Science and Science Policy is currently edited by Wolfgang Glänzel.





Re-evaluation: Maintaining high-quality content in Scopus | Elsevier Scopus Blog

 Source: http://blog.scopus.com/posts/re-evaluation-maintaining-high-quality-content-in-scopus

Re-evaluation: Maintaining high-quality content in Scopus











on Tue, 05/03/2016 - 02:29
Almost
a year ago we announced the launch of the new Re-evaluation program for
Scopus content. This program was created as an incentive for journals
to maintain their high content quality. When a journal is originally
suggested for Scopus, it must undergo a rigorous evaluation and
selection process to ensure it meets all the high-quality title
selection criteria required for acceptance into Scopus. However,
journals must also demonstrate the ability to maintain their quality status year over year.

An
additional focus for the first year of the re-evaluation program was to
ensure all journals met the same baseline of quality standards. When
Scopus launched in 2004, content originally came from different sources
with different levels of evaluation. Over time, the evaluation criteria for
new titles have evolved to become stricter and more standardized. The
re-evaluation process sets a standard level of quality expectations and
applies it across all titles, regardless of when they were first
accepted into Scopus.

The first analysis of all journals in Scopus flags any title that did not meet at least one of the six metric benchmark
criteria and initiates the re-evaluation workflow. Publishers of a
flagged journal are notified and provided information on which metric
benchmarks were not met, along with the journal's overall performance
over time. This sets the expectation for both meeting and maintaining
quality standards.

If
the journal shows improvement in the next annual evaluation by meeting
at least one metric benchmark, coverage in Scopus will continue. In the
following year, when all journals in the database are again reviewed,
the journal will be checked to ensure improvement has been maintained. 
If the journal does not meet any of the six benchmarks for two consecutive
years, it moves into re-evaluation by the independent Content Selection and Advisory Board (CSAB).

The CSAB’s Re-evaluation process is based on the same Scopus title selection criteria
used for newly suggested titles: journal policy, quality of the
content, journal standing, regularity and quality of the homepage. If a
journal does not meet all the selection criteria, the CSAB may decide
that the journal should continue in Scopus but checked again in another
12 months (at the same time as the entire Scopus journal base is
reviewed), or that it should be discontinued and the forward flow of
content stopped. A journal that is discontinued from Scopus enters an
embargo period of 5 years before it can be re-suggested for coverage.

In this way, Scopus and the CSAB can work with journal publishers in a
fair and unbiased way to maintain an overall quality standard and
provide you with high-quality, reliable content.

If you want to check if a title is included in Scopus, you can:

  1. Go to the Scopus 'Browse sources' page, or
  2. Download the title list

If you want to learn more about content in Scopus, take a look at these resources:

  1. Webcast: Why Scopus content is relevant to you
  2. Scopus Content Coverage Guide (download)

