Rethinking the Journal Impact Factor and Publishing in the Digital Age

J Clin Aesthet Dermatol. 2020;13(1):12–17

by Mark S. Nestor, MD, PhD; Daniel L. Fischer, DO; David Arnold, DO;
Brian Berman, MD, PhD; and James Q. Del Rosso, DO

Drs. Nestor, Fischer, Arnold, and Berman are with the Center for Clinical and Cosmetic Research in Aventura, Florida. Drs. Nestor and Berman are with the Department of Dermatology and Cutaneous Surgery, with Dr. Nestor also serving in the Department of Surgery, Division of Plastic Surgery, at the University of Miami Miller School of Medicine in Miami, Florida. Dr. Del Rosso is with JDR Dermatology Research/Thomas Dermatology in Las Vegas, Nevada and Touro University Nevada in Henderson, Nevada.

FUNDING: No funding was provided.

DISCLOSURES: Dr. Nestor and Dr. Berman are members of the Journal of Clinical and Aesthetic Dermatology Editorial Advisory Board. Dr. Del Rosso serves as Editor-In-Chief (Clinical Dermatology) of the Journal of Clinical and Aesthetic Dermatology.

ABSTRACT: Clinical and experimental literature search has changed significantly over the past few decades, and with it, the way in which we value information. Today, our need for immediate access to relevant and specific literature, regardless of specialty, has led to a growing demand for open access to publications. The Journal Impact Factor (JIF) has been a long-time standard for representing the quality or “prestige” of a journal, but it appears to be losing its relevance. Here, we define the JIF and deconstruct its validity as a modern measure of a journal’s quality, discuss the current models of academic publication, including their advantages and shortcomings, and discuss the benefits and shortcomings of a variety of open-access models, including costs to the author. We have quantified a nonsubscribed physician’s access to full articles associated with dermatologic disease and aesthetics cited on PubMed. For some of the most common dermatology conditions, 23.1 percent of citations (ranging from 17.2% for melasma to 31.9% for malignant melanoma) were available as free full articles, and for aesthetic procedures, 18.9 percent of citations (ranging from 11.9% for laser hair removal to 27.9% for botulinum toxin) were available as free full articles. Finally, we discuss existing alternative metrics for measuring journal impact and propose the adoption of a superior publishing model, one that satisfies modern day standards of scholarly knowledge pursuit and dissemination of scholarly publications for dermatology and all of medical science.

KEYWORDS: Journal impact factor, impact factor, JIF, IF, hybrid model, open access, open-access, publishing, OA gold, OA bronze, OA green, hybrid gold

The Journal Impact Factor (JIF) has been a staple of the academic publishing community for nearly 60 years. It is the ratio of citations received to articles published by a journal and purports to represent the prestige and value of a scholarly journal.1,2 While initially a useful metric in the days of library-based, hard-copy learning, our movement into the digital and cross-specialty age has fundamentally changed the way we search for information, and growing reliance upon and desire for open-access and hybrid models of knowledge distribution have led to the depreciation of the JIF. Yet, inversely, academic publishers have become fixated on the JIF to the point of obsession. It is time to move beyond the trope of JIF-centric publishing in favor of a superior method of qualifying, quantifying, and sharing our search for knowledge.

The Web of Knowledge

The web of human knowledge is interconnected and growing constantly as data is gathered and conclusions are drawn. New ideas, findings, and paradigms trace the threads of their conception to build upon previously established nodes on the inner radials. Inspection of the web would reveal many nodes with few connecting threads and some nodes with hundreds or thousands of connections. Logically, these points with extensive links must be of great import in the formation of the web and our continued pursuit of knowledge. Sir Isaac Newton eloquently expressed this homage to our forebears when he admitted, “If I have seen further it is by standing on the shoulders of Giants.”3

This concept is the basis upon which Eugene Garfield proposed the development of a citation index in 1955.4 Its purpose was to measure a journal’s influence, or “Impact Factor,” based upon the number of citations its articles received.4,5 Articles that generate voluminous citations are typically seen as important or influential. Thus, journals that average a large number of citations per article are often viewed with higher regard in the eyes of the community of knowledge seekers.1,5,6 Since its establishment, the JIF has been used in this manner as a way to roughly gauge a journal’s credibility and importance.3–5 This was particularly valuable before the widespread use of the internet, when libraries had to decide on which journals to spend their limited budgets for the use of their patrons.6,7

It is fair to say that the JIF has fulfilled its purpose in this respect, as it supplies a general measure of citation rates and provides an idea of the attention given a journal. The highest quality journals in a field do typically have the highest JIF scores.1,6,8 However, over the years, it has become apparent that there are many biases and flaws inherent in the JIF score.2,6,8–12 As technology has advanced and the method of searching for information has evolved, the JIF has become less and less useful to the scientific community.

Definition and Value of the JIF

The JIF represents the mean number of citations received per article published in a given academic journal.13,14 Nearly any citation is counted in the numerator, but items counted in the denominator include only those classified as “Article,” “Review,” or “Proceedings Paper” by the Institute for Scientific Information (ISI), the purveyor of the JIF.1,6 Other types of publications, such as editorials, letters, notes, corrections, retractions, and discussions are excluded. The publication timespan used to calculate each year’s JIF score is the preceding two calendar years. Thus, the calculated score for a journal in 2018 includes citations to articles published in 2016 and 2017.
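
As a worked illustration of the two-year window described above (the notation is ours; as noted below, the exact items counted are not fully disclosed), a journal's 2018 score would take the form:

```latex
\[
\mathrm{JIF}_{2018} \;=\;
  \frac{\text{citations received in 2018 to items the journal published in 2016 and 2017}}
       {\text{citable items (articles, reviews, proceedings papers) published in 2016 and 2017}}
\]
```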

Eugene Garfield founded the ISI in 1960 to provide scientometric database services, including the indexing of journal citations and calculation of metrics, such as the JIF.4,15 Over time, the database has expanded to include over 14,000 academic journals from which citations are extracted and counted.2 A subset of these journals qualify to have their JIF calculated, although the ISI does not reveal what criteria include or exclude journals from that group.2,6 There is evidence that high self-citation rates might be a disqualifying factor, but otherwise little is known about the process and rules for selection.1,2 The specifics of the JIF’s calculation itself are opaque. Which items are actually included in the numerator and denominator is a mystery. Multiple attempts have been made to validate the JIF by journal editors and other parties using various databases, including the ISI’s own, yet none have succeeded in reaching reasonable reconciliation.1,2,6,8 The lack of reproducibility contrasts sharply with that key tenet of scientific research.

Biases and Flaws of the JIF

Many papers have detailed the key biases and flaws inherent in the JIF.

The way we search has changed. It is an understatement to say that searching for information has evolved over the last several decades. Apart from any personal subscriptions a researcher might have, performing a primary literature search used to involve a visit to the local library, sorting through each journal’s table of contents and indices, finding articles of interest, and making notes and copies for use. If a library didn’t have a local copy of a particular journal, waiting days to weeks after requesting one was the next step. The JIF was quite useful during that era, as a library could best utilize its limited budget to keep a selection of journal subscriptions likely to meet most of the needs of its patrons.

With the proliferation of computers and the internet, we now can generate thousands of relevant results in a matter of milliseconds. Filtering by year of publication, keyword, authors, and various other options allows for fine-tuned querying. With a few clicks, nearly any article can then be downloaded and saved, although payment for access is often required. The granularity and breadth afforded by the modern literature search have shifted the search mechanics from journal-oriented to article-oriented, and with that shift, the JIF has diminished in value. A growing concern now is accessibility to the full article without the searcher paying a significant fee.

The most widely used platform for primary literature searching is PubMed, which contains over 30.1 million records dating back to 1800 and represents over 7,000 journals.19–21 Even so, of the 1.87 million search queries run against PubMed’s database each day, only about 46 percent were found to return relevant citations, of which only 4.7 percent were available as full-text articles.22,23 Google Scholar provides nearly three times more links to full-text documents than PubMed, while also filtering for relevance; these features are some of the main reasons why Google Scholar has become a formidable competitor to PubMed.23,24 This further attests to the increasing demand for more accessible literature.

The fields of dermatology and cosmetics are certainly not exempt from restrictions to full access. We performed a literature search in PubMed’s search engine and filtered for full-text citations on 11 clinical dermatology topics and found that only 23.1 percent of citations were available as full text (ranging from 17.2% for melasma to 31.9% for malignant melanoma) (Table 2). In another search for three cosmetic dermatology topics, we found that only 18.9 percent of citations were available as full text (ranging from 11.9% for laser hair removal to 27.9% for botulinum toxin) (Table 2).
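
Proportions of this kind can, in principle, be approximated by any nonsubscribed reader. The sketch below (our illustration, not the methodology behind Table 2) queries the public NCBI E-utilities API and uses PubMed's "free full text" subset filter; the topic terms are examples only, and counts will differ by date and exact query wording.

```python
# Illustrative sketch only: estimates, for a few example search terms, what share of
# PubMed citations are flagged as free full text. Not the authors' original method.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    query = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json", "retmax": 0})
    with urllib.request.urlopen(f"{EUTILS}?{query}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

for topic in ["melasma", "malignant melanoma", "laser hair removal", "botulinum toxin"]:
    total = pubmed_count(topic)
    free = pubmed_count(f'{topic} AND "free full text"[sb]')  # PubMed's free full-text subset
    pct = 100 * free / total if total else 0.0
    print(f"{topic}: {free}/{total} citations available as free full text ({pct:.1f}%)")
```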

The cross-specialty age. A benefit of medical specialization is the provision of care from highly experienced and trained physicians. This is particularly tangible in the surgical subspecialties, where caseload and hours in the operating room correlate strongly with provider confidence and patient outcomes.25–27 As we have become more specialized, however, significant overlap in the management of human disease has developed. The body is a unit, and diverse effects from illness and dysfunction are felt across the specialties, from the skin to the internal organs to the psyche. As a result, management of a particular disease is frequently handled by several different subspecialists, either collaboratively in the case of complicated disease, or individually as a natural outcome of skill set overlap, and there is often referral among providers.28,29

A common example of this overlap is in antibiotic use for bacterial respiratory infections, where the same illnesses are treated by family doctors, internists, and geriatricians. Keeping abreast of the latest data and practices regarding treatment regimens and antibiotic stewardship requires cross-specialty dissemination and access.30 In dermatology, many conditions are also managed by rheumatology, immunology, oncology, allergy, infectious disease, or psychiatry. In caring for our patients, we might need to access studies and clinical information from many allied specialties for optimal patient care.

In the realm of aesthetics, the “Core Four” specialties of dermatology, plastic surgery, oculoplastic surgery, and facial plastic surgery have significant overlap in procedures and surgeries performed for patients. Cross-specialty collaboration and literature access are even more vital to these fields given their focused scope of practice and relatively fewer practicing physicians, the benefits of which can already be seen internationally.31–33 Unfortunately, the traditional model of subscription-based or toll-gated publication practices inhibits medical advancement and barricades many doctors from gaining the pearls of wisdom provided by their peers across the specialty aisle.

Subscription and open-access models. Historically, access to scholarly journals has been through paid subscriptions, largely by libraries and universities, or through individual memberships in various organizations and societies. The costs of these subscriptions are high and were originally based upon the significant overhead of managing, printing, and distributing physical copies of the journals. Yet, while the shift toward online access has reduced overhead costs, subscription prices have climbed higher year after year. This has been exacerbated as many journals have been bought by large corporations focused on profit generation. In 2012, Elsevier was reported to have a profit of $3.2 billion with a 38-percent margin.2 Average annual subscription prices run from $2,000 to $4,000, with some reaching $20,000 and above.2,34 Many of these subscription services bundle journals, which raises prices even further. Without a subscription to a given journal, the cost of access to individual articles is exorbitant: viewing a single article from Elsevier currently costs $35.95.35 This restrictive method of distribution is cost-prohibitive for individuals and many smaller entities, and it negatively impacts both clinicians and researchers, especially as we move into the cross-specialty age.

The open-access (OA) model, in which journals are freely accessible online with limited restrictions, eliminates costly and exclusive subscription services as a barrier to readers. OA publications are more likely to be viewed by a wider audience, reaching both academics and the general public, including those in developing countries who would otherwise not have access to scientific literature. Various subtypes of OA exist, differing slightly in the rights provided for viewing, using, and sharing the published articles, with “Gold OA” generally representing the ideal of fully open. “Green OA” and “Bronze OA” are two other major subtypes, although several other variations exist as well. In this model, submitting authors typically pay a publication fee ranging from a few hundred to several thousand dollars.2 Although this cost might be offset by the author’s employer or grants, the high cost limits who can afford to publish their findings or opinions in an OA format.

Despite the advantages of the OA model, the conventional culture of the scientific community continues to favor publishing in traditional journals that have an established, recognizable brand and high JIF scores.36 There is some merit to this, as a number of OA journals are neither peer-reviewed nor indexed in PubMed, raising doubts about the quality and accuracy of the work they publish. However, PLOS and BMC Biology are examples of well-known OA journals that have been ranked first and fourth in JIF within their field, respectively.2

Perhaps the most telling example of the burden of restrictive access to scientific literature was the Aaron Swartz case in 2011, in which Swartz, a research fellow at Harvard, was prosecuted for wire fraud after connecting a computer to the network at the Massachusetts Institute of Technology (MIT) and downloading millions of academic journal articles.2,37 After being charged with federal offenses carrying a maximum penalty of $1 million in fines and up to 35 years in prison, Swartz committed suicide.37,38 This event was significant in spurring the “crisis of conscience for open access,” as many institutions and politicians applauded his efforts in fighting for open access and began advocating for the OA model in the wake of his death.2,37,38 Harvard and other universities now urge their academics to submit to OA journals.3,38 In 2013, the White House Office of Science and Technology Policy issued a directive requiring that the published results of certain federally funded research studies be made available to the public within one year.37,39 The movement has made strides as many traditional journals transition to hybrid OA models, in which individual authors can choose to make certain articles open to the public.40

The Hybrid Model

Today there is a strong movement toward a hybrid OA model as a preferred means of distributing scholarly literature, combining OA and traditional subscription-based plans. In this model, some articles are OA while the remainder are available only to subscribers. The author or their funder has the option of paying an article-processing charge (APC) in order to make their article available as OA.41 This is not a novel idea. The concept for a hybrid model was first proposed and adopted by Thomas Walker, who created the first hybrid journal, Florida Entomologist, in 1988.42 Since then, it has gained increasing popularity, as it offers many benefits, such as allowing authors to make their articles available to a wider audience while still maintaining the perceived prestige of publishing in a well-recognized or high-impact-factor journal. A study from Springer Nature estimated that OA articles in hybrid journals generate 1.6 times more citations, 4 times more downloads, and 2.4 times more attention than those not freely available.2,43

We live in an era in which literature searches are no longer done at local libraries through individual journals, but rather through digital search queries that span hundreds of journals and cross-specialty literature in milliseconds. The hybrid model caters to this by providing open and cross-specialty access that conventional subscription-based plans do not provide.44 This is done through cluster-searching, which groups OA articles based on subject, a much more effective method of searching that has made PubMed the most popular search engine for scholarly literature.45

One major disadvantage to authors, however, is that APCs in hybrid journals average around $2,700, about double the cost of publishing in a full OA journal from a purely OA publisher (around $1,400) and still more costly than a full OA journal from a subscription publisher (around $2,100). Examples of hybrid APCs include Elsevier ($500 to $5,000), Springer Open Choice ($3,000), Oxford Open ($3,000), and Cambridge University Press ($2,700).46 Cost is most likely the main factor restricting hybrid OA models from becoming the standard at this time. The cost factor may also bias what can be published as OA in these journals, as few independent researchers and fewer scientists with concerns or alternative viewpoints can afford the cost. Another problem with this model is the possibility of “double-dipping,” in which publishers profit from the same article twice, once from the APC and again through subscriptions, further increasing costs to those paying hybrid charges.46,47 Despite the cost of APCs, hybrid journals have continued to grow. From 2013 to 2017, at least 6.4 million OA articles were published in hybrid journals, and at least 4,500 subscription journals from 59 publishers adopted a hybrid OA option. Even so, as of 2018, only about five percent of the articles in hybrid journals were available as open access.48 While not quite the ideal of fully open access, the hybrid model makes strides toward making otherwise inaccessible literature more public.49

Gold Hybrid: A New Model

In the midst of this movement towards open-access publishing, a new model for hybridization has emerged. Taking the best of both worlds, some journals, such as the Journal of Clinical and Aesthetic Dermatology (JCAD), provide a peer-reviewed, PubMed-indexed publication platform wherein every article published is available as full-text and free via PubMed, but with all costs covered by advertising and subscriptions, rather than APCs. This allows for a business model to run with the intent of publishing high-quality work while eliminating the cost barrier and bias for potential authors and achieving the ideal of open-access publishing. JCAD is not alone in providing this type of publication model. For example, a similar model is employed by the Anais Brasileiros de Dermatologia, the official publication of the Brazilian Society of Dermatology. Their articles are provided via subscription to their members, but also available in English as free, full-text articles in SciELO and PubMed.50 Borrowing from the established categorization of OA subtypes—Gold OA, Bronze OA, Green OA, and the like—we propose that this model be given the name Gold Hybrid and recommend this as the current best-practice method of scholarly publication.

Next Steps, Time for Change

It is not clear when we will achieve the ideal of a fully open-access scholarly world, but strides are being made toward that goal. New initiatives and tools have been introduced to encourage the scientific community to publish more OA articles. Plan S, short for “Plan Shock” owing to its attempt to disrupt conventional standards of publishing, is one such initiative.51 Launched in September 2018 by a consortium of major European national research funders, the plan calls for state-funded research institutions to publish their research in journals that offer OA by 2021.52,53 It also asks journals to make article-processing charges more transparent so that authors can make more informed decisions on where to publish rather than relying on the “prestige factor.”51

As the scientific community begins to adjust to a more open-access method of information sharing, it is inevitable that the JIF will need to be reconciled or replaced. Currently, there exist a number of alternate metrics, although none have become popular enough to replace the JIF. Some are essentially variations of the JIF with improved calculation parameters, while others branch out into different measurements entirely. These are called Altmetrics, and they often take into consideration variables beyond citation frequencies, such as article downloads, citations in news, social media, blogs, social bookmarking, reference management services, and others.53,54

The Eigenfactor, for example, is one such metric designed as an alternative.55,56 It is based on the same ratio used to calculate the JIF with three main differences: it includes citations in social sciences, discounts self-citations, and gives greater weight to citations from highly-ranked journals. To simplify the distinction, JIF is generally viewed as an answer to the question, “how many people will read my article?” while the Eigenfactor answers the question, “how many people will read the journal in which my article is published?”57
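
The weighting idea, whereby a citation from a highly ranked journal counts for more than one from a poorly ranked journal, can be sketched with a simple iterative calculation over a toy citation matrix (our illustration, with fabricated numbers; the published Eigenfactor algorithm is considerably more elaborate):

```python
# Toy sketch of citation weighting: each journal's influence is redistributed along its
# outgoing citations, so citations from influential journals carry more weight.
# Self-citations are zeroed out, echoing the Eigenfactor's discounting of self-citation.
import numpy as np

journals = ["A", "B", "C"]
# cites[i, j] = citations from journal i to journal j (fabricated numbers)
cites = np.array([[0, 30, 10],
                  [20, 0, 5],
                  [5, 10, 0]], dtype=float)
np.fill_diagonal(cites, 0.0)                       # discount self-citations
shares = cites / cites.sum(axis=1, keepdims=True)  # each journal's outgoing citation shares

rank = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):                               # power iteration until the ranking settles
    rank = shares.T @ rank

for name, score in zip(journals, rank):
    print(f"Journal {name}: {score:.3f}")
```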

The Article Influence score is another metric based on the Eigenfactor; it divides the Eigenfactor by the number of articles the journal publishes to estimate the influence of an average article over its first five years after publication. A score greater than 1.00 indicates above-average influence, while a score less than 1.00 indicates below-average influence.55 The h-index, or Hirsch index, attempts to measure the influence of a particular author based on a set of their most cited publications and the variety of journals in which they are cited. Just like the JIF, the h-index can also be used to estimate the influence of a scholarly journal.58 Google Scholar Metrics (GSM) includes the h-index in its formula to gauge an article’s visibility.58 The SCImago Journal Rank (SJR indicator) derives its metric from a broader array of journals supplied by Scopus and uses a three-year window of published articles, rather than the two preceding years used by the JIF.58 Finally, the Immediacy Index looks at the speed with which an article is cited, taking the average number of times an article is cited within the year it is published.59

All of these metrics have been proposed as alternatives or complements to the JIF for measuring journal impact, but none have attained the JIF’s level of widespread use and acceptance.2 Altmetric toolkits and services, such as Metrics Toolkit, ImpactStory, Acumen, Mendeley, Altmetric.com, PeerEvaluation, and Plum Analytics, continue to provide alternative methods of determining journal and article influence that incorporate more comprehensive and detailed metrics.58
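
As a small, self-contained illustration of the h-index mentioned above (fabricated citation counts, not real data): an author or journal with an h-index of h has h items that have each been cited at least h times.

```python
# Minimal h-index sketch: sort citation counts in descending order and find the largest
# rank h at which the h-th most cited item still has at least h citations.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # -> 4 (four items cited at least 4 times each)
print(h_index([25, 8, 5, 3, 3]))   # -> 3
```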

Conclusion

The JIF is an unreliable, biased, and inherently flawed method of measuring the quality, accessibility, and value of a research journal. While it has played an important and valuable role in helping scientists find and acquire knowledge over the last six decades, our movement into the digital and cross-specialty age has depreciated the value of the JIF as the manner in which we seek and obtain knowledge has fundamentally changed. With this evolution has come a growing demand for open access to scholarly work. Unfortunately, the transition to a fully open-access world has been slow, due to continued fixation on profit generation and JIF-centric publishing motivation. While we continue to work toward an open-access ideal, the hybrid publishing model has become a favored stepping stone in that direction and has gained traction with both publishers and authors. However, expensive APCs charged to authors for open-access publishing create a significant barrier to the adoption of OA and might introduce bias. Ultimately, as with everything in this information age, immediate open access to important clinical research is fundamental to the advancement of science. Perhaps, as with our ability to navigate the streets with precision and without a fee, a Google-like company (or Google itself) will provide the means for the free flow of critical scientific knowledge. Until that time, a new type of hybrid publishing model, which we term Gold Hybrid, has emerged and provides the best of both worlds by supplying full open-access to every article via PubMed while charging no APCs to the authors. Our advancement into the digital and cross-specialty age requires moving beyond the paradigm of closed and JIF-centered publishing to a better way of discovering and sharing knowledge, and this new model might be the best way to do it.

References

  1. Citrome L. Impact factor? Shmimpact factor!: the journal impact factor, modern day literature searching, and the publication process. Psychiatry (Edgmont) 2007;4(5):54–57.
  2. Cope B, Kalantzis M. The future of the academic journal, 2nd edition. Elsevier; 2014: 9–83, 85–112.
  3. Newton I. Letter from Sir Isaac Newton to Robert Hooke. Historical Society of Pennsylvania. Feb 5, 1675.
  4. Garfield, E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90–93.
  5. Garfield, E. Journal impact factor: a brief review. Can Med Assoc J. 1999;161(8):979–980.
  6. Saha S, Saint S, Christakis DA. Impact Factor: a valid measure of journal quality? J Med Libr Assoc. 2003; 91(1):42–46.
  7. Holbrook J, Britt. Philosopher’s corner: Open science, open access, and the democratization of knowledge. Issues in Science and Technology. 2019;35(3): 26–28.
  8. Seglen P. Why the impact factor of journals should not be used for evaluating research. Education and Debate. BMJ. 1997;314:498–502.
  9. Elliott, DB. The impact factor: a useful indicator of journal quality or fatally flawed? Ophthalmic Physiol Opt. 2014;34:4–7.
  10. Favaloro EJ. Measuring the quality of journals and journal articles: the impact factor tells but a portion of the story. Semin Thromb Hemost. 2008;34(1):7–25.
  11. Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016;535(7611):210–211.
  12. Todd PA, Ladle RJ. Hidden dangers of a ‘citation culture.’ Ethics in Science and Environmental Politics. 2018;8:13–16.
  13. Natarajan, S. The impact factor story: Part I. Indian J Ophthalmol. 2016;64(9):619.
  14. The Hong Kong Polytechnic University. Journal impact: journal impact factor (JIF) and other metrics. Guides & Tutorials. Oct 31, 2018. https://libguides.lb.polyu.edu.hk/journalimpact. Accessed Jan 17, 2020.
  15. Hicks D, Wouters P. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520:429–431.
  16. Brumback RA. Worshiping false idols: the impact factor dilemma. J Child Neurol. 2008;23: 365–367.
  17. Larivière V, Kiermer V, MacCallum CJ, et al. A simple proposal for the publication of journal citation distributions. BioRxiv. 2016.
  18. Hemmingsson A, Mygind T, Skjennald A, et al. Manipulation of impact factors by editors of scientific journals. Am J Roentgenol. 2002;178:767.
  19. Williamson PO, Minter C. Exploring PubMed as a reliable resource for scholarly communications services. J Med Libr Assoc. 2019;107(1):16–29.
  20. Note: To see the current size of the database, type “1800:2100[dp]” into the search bar at https://www.ncbi.nlm.nih.gov/pubmed/ and click “search”. Retrieved on Oct 7, 2019.
  21. Johnson R, Watkinson A, Mabe M. Open Access. The STM Report: an overview of scientific and scholarly publishing. 2018;5e:97–147.
  22. Islamaj Dogan R, Murray GC, Névéol A, Lu Z. Understanding PubMed user search behavior through log analysis. Database (Oxford). 2009;2009:bap018.
  23. Shariff SZ, Bejaimal SA, Sontrop JM, et al. Retrieving clinical evidence: a comparison of PubMed and Google Scholar for quick clinical searches. J Med Internet Res. 2013;15(8):e164.
  24. Yoo I, Mosa AS. Analysis of PubMed user sessions using a full-day PubMed query log: a comparison of experienced and nonexperienced PubMed users. JMIR Med Inform. 2015;3(3):e25.
  25. Yen TW, Laud PW, Sparapani RA, Nattinger AB. Surgeon specialization and use of sentinel lymph node biopsy for breast cancer. JAMA Surg. 2014;149(2):185–192.
  26. Barbas A, Turley R, Mantyh C, Migaly J. Advanced fellowship training is associated with improved lymph node retrieval in colon cancer resections. J Surg Res. 2011;170(1):e41–e46.
  27. Sorensen MJ. Surgical subspecialization: escape route for surgeons or added benefit for patients? J Grad Med Educ. 2014;6(2):215–217.
  28. Heinskou T, Maarbjerg S, Rochat P, et al. Trigeminal neuralgia–a coherent cross-specialty management program. J Headache Pain. 2015;16:66.
  29. Grimes PE. A closer look at the role of the dermatologist in championing total women’s health through the dermatology gateway. Int J Womens Dermatol. 2018;4(4):189–192.
  30. Rawson TM, Moore LS, Gilchrist MJ, Holmes AH. Antimicrobial stewardship: are we failing in cross-specialty clinical engagement?. J Antimicrob Chemother. 2016;71(2):554–559.
  31. Association of American Medical Colleges. Number of People per Active Physician by Specialty, 2017. Physician Specialty Data Report. AAMC site. 2018. https://www.aamc.org/data-reports/workforce/interactive-data/number-people-active-physician-specialty-2017. Accessed Jan 17, 2020.
  32. Casas LA. Why should we foster core specialty collaboration in cosmetic medicine? Aesthetic Surg J. 2013;33:171–173.
  33. Long-fei F, Xue-ping O, Xiang-yi W. Core specialty collaboration and integrated subject formation of cosmetic medicine. Aesthet Surg J. 2014;34(2):328–330.
  34. Noorden RV. Open access: the true cost of science publishing. Nature. 2013;495(7442):426–429.
  35. ‘Get Access’ Link. Elsevier. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S1748681514002496.
  36. Dodds F. The future of academic publishing: revolution or evolution? Learned publishing. 2018;31(2): 163–168.
  37. Skaggs, P. Aaron Swartz remembered as internet activist who changed the world. Patch. Jan 15, 2013. https://patch.com/illinois/evanston/aaron-swartz-remembered-as-internet-activist-who-chanf229b36e26. Accessed Jan 17, 2020.
  38. Seidman, B. Internet activist charged with hacking into MIT network. PBS. July 22, 2011. https://www.pbs.org/wnet/need-to-know/the-daily-need/internet-activist-charged-with-hacking-into-mit-network/. Accessed Jan 17, 2020.
  39. Publishers Weekly. White House Issues Public Access Directive. Feb 2, 2013. https://www.publishersweekly.com/pw/by-topic/digital/copyright/article/56076-in-historic-act-obama-administration-issues-public-access-directive.html. Accessed Jan 17, 2020.
  40. Piwowar H, Priem J, Larivière V, et al. The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles. PeerJ. 2018;6: e4375.
  41. Hybrid open access. Common Ground Research Networks site. 2019. https://cgnetworks.org/journals/hybrid-open-access. Accessed Jan 17, 2020.
  42. Walker T. Free internet access to traditional journals. American Scientist. 1998;86(5):463.
  43. Draux H, Lucraft M, Walker J. Assessing the open access effect for hybrid journals. Springer Nature. 2018.
  44. Chumbe S, Kelly B, MacLeod R. Hybrid journals: ensuring systematic and standard discoverability of the latest open access articles. The Serials Librarian. 2015;68:1–4,143–155.
  45. Belikov AV, Belikov VV. A citation-based, author- and age-normalized, logarithmic index for evaluation of individual researchers independently of publication counts. F1000 Research. 2015;4:884.
  46. Björk BC, Solomon D. Developing an effective market for open access article processing charges. Open access publishing. March 2014. https://wellcome.ac.uk/sites/default/files/developing-effective-market-for-open-access-article-processing-charges-mar14.pdf
  47. Open Access Authors Fund. University of Colorado Boulder. 2019. https://www.colorado.edu/libraries/research-assistance/open-access/open-access-fund#targetText=Open%20Access%20Fund&targetText=The%20Open%20Access%20Fund%20is,by%20full%20open%20access%20publishers. Accessed Jan 17, 2020.
  48. Jahn, N. About the hybrid OA dashboard. Jan 2020. https://subugoe.github.io/hybrid_oa_dashboard/about.html. Accessed Jan 17, 2020.
  49. Swan A, Willmers M, King T. Costs and benefits of open access: a guide for managers in southern african higher education. Scholarly communication in Africa Programme. Paper 2. 2014.
  50. Anais Brasileiros de Dermatologia. http://www.anaisdedermatologia.org.br/.
  51. Rabesandratana T. Radical open-access plan is delayed a year. Science. 2019;364(6444):919.
  52. Plan S: Accelerating the transition to full and immediate Open Access to scientific publications. Science Europe. Sept 4, 2018. https://www.scienceeurope.org/our-priorities/open-access. Accessed Jan 17, 2020.
  53. Else, H. Radical open-access plan could spell end to journal subscriptions. Nature 2018;561:17–18.
  54. Careless, J. Altmetrics 101: Primer. Information Today, Inc. 2013;30(2). Retrieved from: http://www.infotoday.com/it/feb13/Careless–Altmetrics-101-A-Primer.shtml. Accessed Jan 17, 2020.
  55. West JD, Bergstrom, CT, Bergstrom TC. The Eigenfactor Metrics: A network approach to assessing scholarly journals. UC Santa Barbara: Department of Economics site. 2010. https://escholarship.org/uc/item/41h94387. Accessed Jan 17, 2020.
  56. Noorden RV. Controversial impact factor gets a heavyweight rival. Nature. 2016;540(7633): 325–326.
  57. Fersht, A. The most influential journals: Impact Factor and Eigenfactor. PNAS. 2009;106 (17):6883–6884.
  58. Assessing Journal Quality: AltMetrics. Boston College Libraries. Sept 5, 2019. https://libguides.bc.edu/journalqual/metrics. Accessed Jan 17, 2020.
  59. Diaz-Ruiz A, Orbe-Arteaga U, et al. Alternative bibliometrics from the web of knowledge surpasses the impact factor in a 2-year ahead annual citation calculation: Linear mixed-design models’ analysis of neuroscience journals. Neurol India. 2018;66: 96–104.

Recent Articles:

Effect of Topical Microencapsulated Benzoyl Peroxide on the Skin Microbiome in Rosacea: A Randomized, Double-Blind, Crossover, Vehicle-Controlled Clinical Trial

Read More

Clindamycin: A Comprehensive Status Report with Emphasis on Use in Dermatology

Read More

Ergonomics in Dermatologic Laser Procedures

Read More

Synchronizing the Nomenclature Surrounding Synchronous Primary Cutaneous Melanomas: A Systematic Review

Read More

Read More

Pathological and Immunohistochemical Assessment of Aging of the Abdominal Skin Treated with Carboxytherapy: A Randomized, Split-body Trial

Read More

Review of Statistical Considerations and Data Imputation Methodologies in Psoriasis Clinical Trials

Read More

Selected Abstracts from Elevate-Derm East Conference

Read More

Dermatological Conditions in Skin of Color—Overburdened and Undertreated: Hidradenitis Suppurativa in Skin of Color

Read More


J Clin Aesthet Dermatol. 2020;13(1):12–17

by Mark S. Nestor, MD, PhD; Daniel L. Fischer, DO; David Arnold, DO;
Brian Berman, MD, PhD; and James Q. Del Rosso, DO

Drs. Nestor, Fischer, Arnold, and Berman are with the Center for Clinical and Cosmetic Research in Aventura, Florida. Drs. Nestor and Berman are with the Department of Dermatology and Cutaneous Surgery, with Dr. Nestor also serving in the Department of Surgery, Division of Plastic Surgery, at the University of Miami Miller School of Medicine in Miami, Florida. Dr. Del Rosso is with JDR Dermatology Research/Thomas Dermatology in Las Vegas, Nevada and Touro University Nevada in Henderson, Nevada.

FUNDING: No funding was provided.

DISCLOSURES: Dr. Nestor and Dr. Berman are members of the Journal of Clinical and Aesthetic Dermatology Editorial Advisory Board. Dr. Del Rosso serves as Editor-In-Chief (Clinical Dermatology) of the Journal of Clinical and Aesthetic Dermatology.

ABSTRACT: Clinical and experimental literature search has changed significantly over the past few decades, and with it, the way in which we value information. Today, our need for immediate access to relevant and specific literature, regardless of specialty, has led to a growing demand for open access to publications. The Journal Impact Factor (JIF) has been a long-time standard for representing the quality or “prestige” of a journal, but it appears to be losing its relevance. Here, we define the JIF and deconstruct its validity as a modern measure of a journal’s quality, discuss the current models of academic publication, including their advantages and shortcomings, and discuss the benefits and shortcomings of a variety of open-access models, including costs to the author. We have quantified a nonsubscribed physician’s access to full articles associated with dermatologic disease and aesthetics cited on PubMed. For some of the most common dermatology conditions, 23.1 percent of citations (ranging from 17.2% for melasma to 31.9% for malignant melanoma) were available as free full articles, and for aesthetic procedures, 18.9 percent of citations (ranging from 11.9% for laser hair removal to 27.9% for botulinum toxin) were available as free full articles. Finally, we discuss existing alternative metrics for measuring journal impact and propose the adoption of a superior publishing model, one that satisfies modern day standards of scholarly knowledge pursuit and dissemination of scholarly publications for dermatology and all of medical science.

KEYWORDS: Journal impact factor, impact factor, JIF, IF, hybrid model, open access, open-access, publishing, OA gold, OA bronze, OA green, hybrid gold

The Journal Impact Factor (JIF) has been a staple of the academic publishing community for nearly 60 years. It is the ratio of citations received to articles published by a journal and purports to represent the prestige and value of a scholarly journal.1,2 While initially a useful metric in the days of library-based, hard-copy learning, our movement into the digital and cross-specialty age has fundamentally changed the way we search for information, and growing reliance upon and desire for open-access and hybrid models of knowledge distribution have led to the depreciation of the JIF. Yet, inversely, academic publishers have become fixated on the JIF to the point of obsession. It is time to move beyond the trope of JIF-centric publishing in favor of a superior method of qualifying, quantifying, and sharing our search for knowledge.

The Web of Knowledge

The web of human knowledge is interconnected and growing constantly as data is gathered and conclusions are drawn. New ideas, findings, and paradigms trace the threads of their conception to build upon previously established nodes on the inner radials. Inspection of the web would reveal many nodes with few connecting threads and some nodes with hundreds or thousands of connections. Logically, these points with extensive links must be of great import in the formation of the web and our continued pursuit of knowledge. Sir Isaac Newton eloquently expressed this homage to our forebears when he admitted, “If I have seen further it is by standing on the shoulders of Giants.”3

This concept is the basis upon which Eugene Garfield proposed the development of a citation index in 1955.4 Its purpose was to measure a journal’s influence, or “Impact Factor,” based upon the number of citations its articles received.4,5 Articles that generate voluminous citations are typically seen as important or influential. Thus, journals that average a large number of citations per article are often viewed with higher regard in the eyes of the community of knowledge seekers.1,5,6 Since its establishment, the JIF has been used in this manner as a way to roughly gauge a journal’s credibility and importance.3–5 This was particularly valuable before the widespread use of the internet, when libraries had to decide on which journals to spend their limited budgets for the use of their patrons.6,7

It is fair to say that the JIF has fulfilled its purpose in this respect, as it supplies a general measure of citation rates and provides an idea of the attention given a journal. The highest quality journals in a field do typically have the highest JIF scores.1,6,8 However, over the years, it has become apparent that there are many biases and flaws inherent in the JIF score.2,6,8–12 As technology has advanced and the method of searching for information has evolved, the JIF has become less and less useful to the scientific community.

Definition and Value of the JIF

The JIF represents the mean number of citations received per article published in a given academic journal.13,14 Nearly any citation is counted in the numerator, but items counted in the denominator include only those classified as “Article,” “Review,” or “Proceedings Paper” by the Institute for Scientific Information (ISI), the purveyor of the JIF.1,6 Other types of publications, such as editorials, letters, notes, corrections, retractions, and discussions are excluded. The publication timespan used to calculate each year’s JIF score is the preceding two calendar years. Thus, the calculated score for a journal in 2018 includes citations to articles published in 2016 and 2017.

Eugene Garfield founded the ISI in 1960 to provide scientometric database services, including the indexing of journal citations and calculation of metrics, such as the JIF.4,15 Over time, the database has expanded to include over 14,000 academic journals from which citations are extracted and counted.2 A subset of these journals qualify to have their JIF calculated, although the ISI does not reveal what criteria include or exclude journals from that group.2,6 There is evidence that high self-citation rates might be a disqualifying factor, but otherwise little is known about the process and rules for selection.1,2 The specifics of the JIF’s calculation itself are opaque. Which items are actually included in the numerator and denominator is a mystery. Multiple attempts have been made to validate the JIF by journal editors and other parties using various databases, including the ISI’s own, yet none have succeeded in reaching reasonable reconciliation.1,2,6,8 The lack of reproducibility contrasts sharply with that key tenet of scientific research.

Biases and Flaws of the JIF

Many paperskey points

The way we search has changed. It is an understatement to say that searching for information has evolved over the last several decades. Apart from any personal subscriptions a researcher might have, performing a primary literature search used to involve a visit to the local library, sorting through each journal’s table of contents and indices, finding articles of interest, and making notes and copies for use. If a library didn’t have a local copy of a particular journal, waiting days to weeks after requesting one was the next step. The JIF was quite useful during that era, as a library could best utilize its limited budget to keep a selection of journal subscriptions likely to meet most of the needs of its patrons.

With the proliferation of computers and the internet, we now can generate thousands of relevant results in a matter of milliseconds. Filtering by year of publication, keyword, authors, and various other options allows for fine-tuned querying. With a few clicks, nearly any article can then be downloaded and saved, although payment for access is often required. The granularity and breadth afforded by the modern literature search have shifted the search mechanics from journal-oriented to article-oriented, and with that shift, the JIF has diminished in value. A growing concern now is accessibility to the full article without the searcher paying a significant fee.

The most widely-used platform for primary literature search is PubMed, which contains over 30.1 million records dating back to 1800 and represents over 7,000 journals.19–21 Even so, out of the 1.87 million daily search queries on PubMed’s database, about 46 percent of queries were found to result in relevant citations, of which only 4.7 percent were available as a full-text article.22,23 Google Scholar provides nearly three times
more links to full-text documents than PubMed, while also filtering for relevance; these features are some of the main reasons why Google Scholar has become a formidable competitor to PubMed.23,24 This further attests to the increasing demand for more accessible literature.

The fields of dermatology and cosmetics are certainly not exempt from restrictions to full access. We performed a literature search in PubMed’s search engine and filtered for full-text citations on 11 clinical dermatology topics and found that only 23.1 percent of citations were available as full text (ranging from 17.2% for melasma to 31.9% for malignant melanoma) (Table 2). In another search for three cosmetic dermatology topics, we found that only 18.9 percent of citations were available as full text (ranging from 11.9% for laser hair removal to 27.9% for botulinum toxin) (Table 2).

The cross-specialty age. A benefit of medical specialization is the provision of care from highly experienced and trained physicians. This is particularly tangible in the surgical subspecialties, where caseload and hours in the operating room correlate strongly with provider confidence and patient outcomes.25–27 As we have become more specialized, however, significant overlap in the management of human disease has developed. The body is a unit, and diverse effects from illness and dysfunction are felt across the specialties, from the skin to the internal organs to the psyche. As a result, management of a particular disease is frequently handled by several different subspecialists, either collaboratively in the case of complicated disease, or individually as a natural outcome of skill set overlap, and there is often referral among providers.28,29

A common example of this overlap is in antibiotic use for bacterial respiratory infections, where the same illnesses are treated by family doctors, internists, and geriatricians. Keeping abreast of the latest data and practices regarding treatment regimens and antibiotic stewardship requires cross-specialty dissemination and access.30 In dermatology, many conditions are also managed by rheumatology, immunology, oncology, allergy, infectious disease, or psychiatry. In caring for our patients, we might need to access studies and clinical information from many allied specialties for optimal patient care.

In the realm of aesthetics, the “Core Four” specialties of dermatology, plastic surgery, oculoplastic surgery, and facial plastic surgery have significant overlap in procedures and surgeries performed for patients. Cross-specialty collaboration and literature access are even more vital to these fields given their focused scope of practice and relatively fewer practicing physicians, the benefits of which can already be seen internationally.31–33 Unfortunately, the traditional model of subscription-based or toll-gated publication practices inhibits medical advancement and barricades many doctors from gaining the pearls of wisdom provided by their peers across the specialty aisle.

Subscription and open-access models. Historically, access to scholarly journals has been through paid subscriptions, largely by libraries and universities, or individual memberships in various organizations and societies. The costs for these subscriptions are high and originally based upon the significant overhead of managing, printing, and distributing physical copies of the journals. Yet, while the shift toward online access has allowed for declining overhead costs, subscription prices have climbed higher year after year. This has been exacerbated as many journals have been bought by large corporations focused on profit generation. In 2012, Elsevier was reported to have a profit of $3.2 billion with a 38-percent margin.2 Average annual subscription prices run from $2,000 to $4000, with some reaching $20,000 and above.2,34 Many of these subscription services bundle journals, which raises prices even further. Without a subscription to a given journal, the cost of access to individual articles is exorbitant–viewing a single article from Elsevier currently costs $35.95.35 This restrictive method of distribution is cost prohibitive for individuals and many smaller entities, and it negatively impacts both clinicians and researchers, especially as we move into the cross-specialty age.

The open-access (OA) model, in which journals are freely accessible online with limited restrictions, eliminates costly and exclusive subscription services as a barrier to readers. OA publications are more likely to be viewed by a wider audience, reaching both academics and the general public, including those in developing countries who would otherwise not have access to scientific literature. Various subtypes of OA exist, differing slightly in the rights provided for viewing, using, and sharing the published articles, with “Gold OA” generally representing the ideal of fully open. “Green OA” and “Bronze OA” are two other major subtypes, although several other variations exist as well. In this model, submitting authors typically pay a publication fee from a few hundred to several thousand dollars.2 Although this cost might be offset by the author’s employer or grants, the high costs limits who can afford to publish their findings or opinions in an OA format.

Despite the advantages of the OA model, conventional culture of the scientific community continues to exert a preference for publishing in traditional journals that have an established, recognizable brand and high JIF scores.36 There is some merit to this, as there are a number of OA journals that are neither peer-reviewed nor indexed in PubMed, leading to doubts of the quality and accuracy of the work. However, PLOS and BMC Biology are examples of well-known OA journals that have been ranked first and fourth in JIF within their field, respectively.2

Likely, the most demonstrative example of the burden of restrictive access to scientific literature was the Aaron Swartz case in 2011, in which Swartz, a research fellow at Harvard, was prosecuted for wire fraud after connecting a computer to the network at the Massachusetts Institute of Technology (MIT) and downloading millions of academic journals.2,37 After being charged with a federal offense with a maximum penalty of $1 million and up to 35 years in prison, Swartz committed suicide.37,38 This event was significant in spurring the “crisis of conscience for open access” as many institutions and politicians applauded his efforts in fighting for open access and began advocating for the OA model in the wake of his death.2,37,38 Harvard and other universities now urge their academics to submit to OA journals.3,38 In 2013, the White House Office of Technology and Policy passed legislation to make published results of certain federally funded research studies available to the public within one year.37,39 The movement has made strides as many traditional journals are transitioning to hybrid OA models, in which individual authors can choose certain articles to be open to the public.40

The Hybrid Model

Today there exists a strong movement toward a hybrid OA model as a preferred means of distributing scholarly literature, combining both OA and traditional subscription-based plans. In this model, some articles are OA while the remainder are available only to subscribers. The author or their funder has the option of paying an article-processing charge (APC) in order to make their article available as OA.41 This is not a novel idea. The concept for a hybrid model was first proposed and adopted by Thomas Walker who created the first hybrid journal, Florida Entomologist, in 1988.42 Since then, it has gained increasing popularity as it offers many benefits, such as allowing authors to make their articles available for a wider audience while still maintaining the perceived prestige of publishing in a well-recognized or high-impact-factor journal. Based on a study from Springer Nature, it was estimated that OA articles in hybrid journals generate 1.6 times more citations, 4 times more downloads, and 2.4 times more attention than those not freely available.2,43

We live in an era in which literature searches are no longer done at local libraries through individual journals, but rather through digital search queries that span hundreds of journals and cross-specialty literature in milliseconds. The hybrid model caters to this by providing open and cross-specialty access that conventional subscription-based plans do not provide.44 This is done through cluster-searching, which groups OA articles based on subject, a much more effective method of searching that has made PubMed the most popular search engine for scholarly literature.45

One major disadvantage to authors, however, is that APCs in hybrid journals average around $2,700, about double the cost of publishing in a non-subscription full OA journal (around $1,400), and still more costly than a subscription, full OA journal (around $2100). Some of these include Elsevier (APC: $500 to $5,000), Springer Open Choice (APC: $3,000), Oxford Open (APC: $3,000), and Cambridge University Press (APC: $2,700).46 Cost is most likely the main factor restricting hybrid OA models from becoming the standard at this time. The cost factor may also bias what can be published as OA in these journals, as few independent researchers and fewer scientists with concerns or alternative viewpoints can afford the cost. Another problem with this model is the possibility of “double-dipping,” in which publishers receive profits from the same article twice—once from the APC, and again through subscription—further increasing costs to those paying for hybrid charges.46,47 Despite the cost of APCs, hybrid journals have continued to grow. From 2013 to 2017, at least 6.4 million OA articles were published in hybrid journals and at least 4,500 subscription journals from 59 publishers have adopted a hybrid OA option. Even so, as of 2018, only about five percent of the articles in hybrid journals were available as open access.48 While not quite as ideal as fully open access, the hybrid model makes strides towards making otherwise inaccessible literature more public.49

Gold Hybrid: A New Model

In the midst of this movement towards open-access publishing, a new model for hybridization has emerged. Taking the best of both worlds, some journals, such as the Journal of Clinical and Aesthetic Dermatology (JCAD), provide a peer-reviewed, PubMed-indexed publication platform wherein every article published is available as full-text and free via PubMed, but with all costs covered by advertising and subscriptions, rather than APCs. This allows for a business model to run with the intent of publishing high-quality work while eliminating the cost barrier and bias for potential authors and achieving the ideal of open-access publishing. JCAD is not alone in providing this type of publication model. For example, a similar model is employed by the Anais Brasileiros de Dermatologia, the official publication of the Brazilian Society of Dermatology. Their articles are provided via subscription to their members, but also available in English as free, full-text articles in SciELO and PubMed.50 Borrowing from the established categorization of OA subtypes—Gold OA, Bronze OA, Green OA, and the like—we propose that this model be given the name Gold Hybrid and recommend this as the current best-practice method of scholarly publication.

Next Steps, Time for Change

It is not clear when we will achieve the ideal of a fully open-access scholarly world, but strides are being made toward that goal, and new initiatives and tools have been introduced to encourage the scientific community to publish more OA articles. One such initiative is Plan S, whose "S" has been said to stand for "shock" because of its attempt to disrupt conventional publishing standards.51 Launched in September 2018 by a consortium of major European national research funders, the plan calls for research supported by those funders to be published in journals or platforms that offer OA by 2021.52,53 It also asks journals to make article-processing charges more transparent so that authors can make informed decisions about where to publish rather than relying on the "prestige factor."51

As the scientific community adjusts to more open methods of information sharing, the JIF will inevitably need to be revised or replaced. A number of alternative metrics already exist, although none has become popular enough to displace the JIF. Some are essentially variations of the JIF with improved calculation parameters, while others branch out into different measurements entirely. The latter are often called Altmetrics, and they take into consideration variables beyond citation frequency, such as article downloads and mentions in news outlets, social media, blogs, social bookmarking, and reference-management services.53,54

The Eigenfactor, for example, is one such alternative.55,56 It is based on the same ratio used to calculate the JIF, with three main differences: it includes citations from the social sciences, discounts self-citations, and gives greater weight to citations from highly ranked journals. To simplify the distinction, the JIF is generally viewed as answering the question, "How many people will read my article?" while the Eigenfactor answers the question, "How many people will read the journal in which my article is published?"57
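
The published Eigenfactor algorithm is more involved, but its core idea, that a citation counts for more when it comes from a journal that is itself highly cited, can be shown with a toy calculation. The sketch below uses hypothetical citation counts and a simplified power iteration (not the actual Eigenfactor method): self-citations are removed and influence flows recursively through a small cross-citation matrix.

```python
# A toy illustration (not the published Eigenfactor algorithm) of recursive
# journal weighting: citations from influential journals count for more.
import numpy as np

journals = ["A", "B", "C"]
# cites[i, j] = citations from journal j's articles to journal i (invented data)
cites = np.array([[ 0, 30, 10],
                  [20,  0, 40],
                  [ 5, 15,  0]], dtype=float)

np.fill_diagonal(cites, 0.0)          # discount self-citations
transfer = cites / cites.sum(axis=0)  # normalize each column: citation "flow"

influence = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):                  # power iteration: a journal's weight depends
    influence = transfer @ influence  # on the weight of the journals citing it

for name, score in zip(journals, influence):
    print(f"Journal {name}: influence {score:.3f}")
```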

The Article Influence score is another metric derived from the Eigenfactor: dividing the Eigenfactor by the number of articles a journal publishes estimates the average influence of its articles over the first five years after publication. A score greater than 1.00 indicates above-average influence, while a score below 1.00 indicates below-average influence.55 The h-index, or Hirsch index, attempts to measure the influence of an individual author: an author has an h-index of h when h of their publications have each been cited at least h times. Like the JIF, the h-index can also be used to estimate the influence of a scholarly journal.58 Google Scholar Metrics (GSM) incorporates the h-index to gauge a journal's visibility.58 The SCImago Journal Rank (SJR) indicator draws on the broader set of journals indexed in Scopus and uses a three-year citation window rather than the two preceding years used by the JIF.58 Finally, the Immediacy Index captures how quickly articles are cited, calculated as the average number of times a journal's articles are cited within the year they are published.59 All of these metrics have been proposed as alternatives or complements for measuring journal impact, but none has attained the widespread use and acceptance of the JIF.2 Altmetric toolkits and services, such as Metrics Toolkit, ImpactStory, Acumen, Mendeley, Altmetric.com, PeerEvaluation, and Plum Analytics, continue to provide alternative methods of determining journal and article influence that incorporate more comprehensive and detailed metrics.58
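
For readers who prefer to see the arithmetic, the short sketch below works through two of the metrics described above using invented numbers; the citation counts are illustrative only.

```python
# Minimal sketches (toy data, not from the article) of two metrics described above:
# an author's h-index and a journal's Immediacy Index.

def h_index(citations: list[int]) -> int:
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def immediacy_index(same_year_citations: int, articles_published: int) -> float:
    """Average citations a journal's articles receive in their publication year."""
    return same_year_citations / articles_published

print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers each cited at least 3 times)
print(immediacy_index(90, 120))      # -> 0.75 citations per article in year one
```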

Conclusion

The JIF is an unreliable, biased, and inherently flawed method of measuring the quality, accessibility, and value of a research journal. While it has played an important role in helping scientists find and acquire knowledge over the last six decades, our movement into the digital and cross-specialty age has eroded its value because the way we seek and obtain knowledge has fundamentally changed. With this evolution has come a growing demand for open access to scholarly work. Unfortunately, the transition to a fully open-access world has been slow, owing to a continued fixation on profit generation and JIF-centric publishing incentives. While we continue to work toward an open-access ideal, the hybrid publishing model has become a favored stepping stone in that direction and has gained traction with both publishers and authors. However, the expensive APCs charged to authors for open-access publishing create a significant barrier to the adoption of OA and might introduce bias. Ultimately, as with everything in this information age, immediate open access to important clinical research is fundamental to the advancement of science. Perhaps, as with our ability to navigate the streets with precision and without a fee, a Google-like company (or Google itself) will provide the means for the free flow of critical scientific knowledge. Until that time, a new type of hybrid publishing model, which we term Gold Hybrid, provides the best of both worlds by supplying full open access to every article via PubMed while charging no APCs to authors. Our advancement into the digital and cross-specialty age requires moving beyond the paradigm of closed, JIF-centered publishing to a better way of discovering and sharing knowledge, and this new model may be the best way to do it.

References

  1. Citrome L. Impact factor? Shmimpact factor!: the journal impact factor, modern day literature searching, and the publication process. Psychiatry (Edgmont). 2007;4(5):54–57.
  2. Cope B, Kalantzis M. The Future of the Academic Journal. 2nd ed. Elsevier; 2014:9–83, 85–112.
  3. Newton I. Letter from Sir Isaac Newton to Robert Hooke. Historical Society of Pennsylvania. Feb 5, 1675.
  4. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295(1):90–93.
  5. Garfield E. Journal impact factor: a brief review. Can Med Assoc J. 1999;161(8):979–980.
  6. Saha S, Saint S, Christakis DA. Impact factor: a valid measure of journal quality? J Med Libr Assoc. 2003;91(1):42–46.
  7. Holbrook JB. Philosopher’s corner: open science, open access, and the democratization of knowledge. Issues in Science and Technology. 2019;35(3):26–28.
  8. Seglen P. Why the impact factor of journals should not be used for evaluating research. Education and Debate. BMJ. 1997;314:498–502.
  9. Elliott DB. The impact factor: a useful indicator of journal quality or fatally flawed? Ophthalmic Physiol Opt. 2014;34:4–7.
  10. Favaloro EJ. Measuring the quality of journals and journal articles: the impact factor tells but a portion of the story. Semin Thromb Hemost. 2008;34(1):7–25.
  11. Callaway E. Beat it, impact factor! Publishing elite turns against controversial metric. Nature. 2016;535(7611):210–211.
  12. Todd PA, Ladle RJ. Hidden dangers of a ‘citation culture.’ Ethics in Science and Environmental Politics. 2018;8:13–16.
  13. Natarajan S. The impact factor story: part I. Indian J Ophthalmol. 2016;64(9):619.
  14. The Hong Kong Polytechnic University. Journal impact: journal impact factor (JIF) and other metrics. Guides & Tutorials. Oct 31, 2018. https://libguides.lb.polyu.edu.hk/journalimpact. Accessed Jan 17, 2020.
  15. Hicks D, Wouters P. Bibliometrics: The Leiden Manifesto for research metrics. Nature. 2015;520:429–431.
  16. Brumback RA. Worshiping false idols: the impact factor dilemma. J Child Neurol. 2008;23:365–367.
  17. Larivière V, Kiermer V, MacCallum CJ, et al. A simple proposal for the publication of journal citation distributions. BioRxiv. 2016.
  18. Hemmingsson A, Mygind T, Skjennald A, et al. Manipulation of impact factors by editors of scientific journals. Am J Roentgenol. 2002;178:767.
  19. Williamson PO, Minter C. Exploring PubMed as a reliable resource for scholarly communications services. J Med Libr Assoc. 2019;107(1):16–29.
  20. Note: To see the current size of the database, type “1800:2100[dp]” into the search bar at https://www.ncbi.nlm.nih.gov/pubmed/ and click “search”. Retrieved on Oct 7, 2019.
  21. Johnson R, Watkinson A, Mabe M. Open access. In: The STM Report: An Overview of Scientific and Scholarly Publishing. 5th ed. 2018:97–147.
  22. Islamaj Dogan R, Murray GC, Névéol A, Lu Z. Understanding PubMed user search behavior through log analysis. Database (Oxford). 2009;2009:bap018.
  23. Shariff SZ, Bejaimal SA, Sontrop JM, et al. Retrieving clinical evidence: a comparison of PubMed and Google Scholar for quick clinical searches. J Med Internet Res. 2013;15(8):e164.
  24. Yoo I, Mosa AS. Analysis of PubMed user sessions using a full-day PubMed query log: a comparison of experienced and nonexperienced PubMed users. JMIR Med Inform. 2015;3(3):e25.
  25. Yen TW, Laud PW, Sparapani RA, Nattinger AB. Surgeon specialization and use of sentinel lymph node biopsy for breast cancer. JAMA Surg. 2014;149(2):185–192.
  26. Barbas A, Turley R, Mantyh C, Migaly J. Advanced fellowship training is associated with improved lymph node retrieval in colon cancer resections. J Surg Res. 2011;170(1):e41–e46.
  27. Sorensen MJ. Surgical subspecialization: escape route for surgeons or added benefit for patients? J Grad Med Educ. 2014;6(2):215–217.
  28. Heinskou T, Maarbjerg S, Rochat P, et al. Trigeminal neuralgia–a coherent cross-specialty management program. J Headache Pain. 2015;16:66.
  29. Grimes PE. A closer look at the role of the dermatologist in championing total women’s health through the dermatology gateway. Int J Womens Dermatol. 2018;4(4):189–192.
  30. Rawson TM, Moore LS, Gilchrist MJ, Holmes AH. Antimicrobial stewardship: are we failing in cross-specialty clinical engagement? J Antimicrob Chemother. 2016;71(2):554–559.
  31. Association of American Medical Colleges. Number of People per Active Physician by Specialty, 2017. Physician Specialty Data Report. AAMC site. 2018. https://www.aamc.org/data-reports/workforce/interactive-data/number-people-active-physician-specialty-2017. Accessed Jan 17, 2020.
  32. Casas LA. Why should we foster core specialty collaboration in cosmetic medicine? Aesthetic Surg J. 2013;33:171–173.
  33. Long-fei F, Xue-ping O, Xiang-yi W. Core specialty collaboration and integrated subject formation of cosmetic medicine. Aesthet Surg J. 2014;34(2):328–330.
  34. Noorden RV. Open access: the true cost of science publishing. Nature. 2013;495(7442):426–429.
  35. ‘Get Access’ Link. Elsevier. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S1748681514002496.
  36. Dodds F. The future of academic publishing: revolution or evolution? Learned Publishing. 2018;31(2):163–168.
  37. Skaggs P. Aaron Swartz remembered as internet activist who changed the world. Patch. Jan 15, 2013. https://patch.com/illinois/evanston/aaron-swartz-remembered-as-internet-activist-who-chanf229b36e26. Accessed Jan 17, 2020.
  38. Seidman B. Internet activist charged with hacking into MIT network. PBS. July 22, 2011. https://www.pbs.org/wnet/need-to-know/the-daily-need/internet-activist-charged-with-hacking-into-mit-network/. Accessed Jan 17, 2020.
  39. Publishers Weekly. White House Issues Public Access Directive. Feb 2, 2013. https://www.publishersweekly.com/pw/by-topic/digital/copyright/article/56076-in-historic-act-obama-administration-issues-public-access-directive.html. Accessed Jan 17, 2020.
  40. Piwowar H, Priem J, Larivière V, et al. The state of OA: a large-scale analysis of the prevalence and impact of Open Access articles. PeerJ. 2018;6:e4375.
  41. Hybrid open access. Common Ground Research Networks site. 2019. https://cgnetworks.org/journals/hybrid-open-access. Accessed Jan 17, 2020.
  42. Walker T. Free internet access to traditional journals. American Scientist. 1998;86(5):463.
  43. Draux H, Lucraft M, Walker J. Assessing the open access effect for hybrid journals. Springer Nature. 2018.
  44. Chumbe S, Kelly B, MacLeod R. Hybrid journals: ensuring systematic and standard discoverability of the latest open access articles. The Serials Librarian. 2015;68(1–4):143–155.
  45. Belikov AV, Belikov VV. A citation-based, author- and age-normalized, logarithmic index for evaluation of individual researchers independently of publication counts. F1000 Research. 2015;4:884.
  46. Open Access Authors Fund. University of Colorado Boulder. 2019. https://www.colorado.edu/libraries/research-assistance/open-access/open-access-fund#targetText=Open%20Access%20Fund&targetText=The%20Open%20Access%20Fund%20is,by%20full%20open%20access%20publishers. Accessed Jan 17, 2020.
  47. Jahn N. About the hybrid OA dashboard. Jan 2020. https://subugoe.github.io/hybrid_oa_dashboard/about.html. Accessed Jan 17, 2020.
  48. Swan A, Willmers M, King T. Costs and benefits of open access: a guide for managers in Southern African higher education. Scholarly Communication in Africa Programme, Paper 2. 2014.
  49. Anais Brasileiros de Dermatologia. http://www.anaisdedermatologia.org.br/.
  50. Rabesandratana T. Radical open-access plan is delayed a year. Science. 2019;364(6444):919.
  51. Plan S: Accelerating the transition to full and immediate Open Access to scientific publications. Science Europe. Sept 4, 2018. https://www.scienceeurope.org/our-priorities/open-access. Accessed Jan 17, 2020.
  52. Else H. Radical open-access plan could spell end to journal subscriptions. Nature. 2018;561:17–18.
  53. Careless J. Altmetrics 101: a primer. Information Today. 2013;30(2). http://www.infotoday.com/it/feb13/Careless–Altmetrics-101-A-Primer.shtml. Accessed Jan 17, 2020.
  54. West JD, Bergstrom CT, Bergstrom TC. The Eigenfactor Metrics: a network approach to assessing scholarly journals. UC Santa Barbara, Department of Economics. 2010. https://escholarship.org/uc/item/41h94387. Accessed Jan 17, 2020.
  55. Noorden RV. Controversial impact factor gets a heavyweight rival. Nature. 2016;540(7633):325–326.
  56. Fersht A. The most influential journals: Impact Factor and Eigenfactor. PNAS. 2009;106(17):6883–6884.
  57. Assessing Journal Quality: AltMetrics. Boston College Libraries. Sept 5, 2019. https://libguides.bc.edu/journalqual/metrics. Accessed Jan 17, 2020.
  58. Diaz-Ruiz A, Orbe-Arteaga U, et al. Alternative bibliometrics from the web of knowledge surpasses the impact factor in a 2-year ahead annual citation calculation: linear mixed-design models’ analysis of neuroscience journals. Neurol India. 2018;66:96–104.
