Published at MetaROR

March 11, 2026


Cite this article as:

Font-Julián, C. I., Orduna-Malea, E., Lopezosa, C., & Aguillo, I. (2025, October 22). SEO Metrics for Enhanced Academic Research Visibility: A Comparative Study of Global and Country-Level Traffic. https://doi.org/10.31235/osf.io/j4ryx_v1


SEO Metrics for Enhanced Academic Research Visibility: A Comparative Study of Global and Country-Level Traffic

Cristina I. Font-Julián1, Enrique Orduna-Malea1, Carlos Lopezosa2, Isidro Aguillo3
  1. The iMetrics Lab. Department of Audiovisual Communication, Documentation and History of Art, Universitat Politècnica de València, Valencia (Spain)
  2. Departament de Biblioteconomia, Documentació i Comunicació Audiovisual, Universitat de Barcelona, Barcelona (Spain)
  3. Cybermetrics Lab, Institute of Public Goods and Policies (IPP), Spanish National Research Council (CSIC), Madrid (Spain)

Originally published on October 22, 2025 at: https://doi.org/10.31235/osf.io/j4ryx_v1

This article was presented at the STI-ENID 2025 conference. 

Abstract

Search engines play a crucial role in discovering and disseminating academic work, making academic search engine optimisation (A-SEO) vital for enhancing research visibility. Consequently, the collection of robust A-SEO data is essential. This study compares four leading SEO tools—Ahrefs, Semrush, Serpstat, and Ubersuggest—to evaluate their performance in measuring the organic traffic of gold open access academic publishers, using MDPI and Frontiers as case studies. The findings reveal significant discrepancies in the web traffic metrics reported by each platform, likely attributable to their diverse and often opaque traffic estimation methodologies. These differences may lead to divergent interpretations, thereby limiting the replicability and reproducibility of studies, and hindering the development of standardised web traffic indicators. This study highlights the need for greater methodological rigour and standardisation in academic SEO research, offering both theoretical insights and practical guidance to improve the online visibility of research within a platform-driven scholarly ecosystem.

1.  Introduction

Online visibility has rapidly become a critical concern for researchers, journals, and universities as digital platforms transform the way in which scholarly outputs are published, disseminated and discovered (Beel et al., 2010; Rovira et al., 2018). In parallel, the emergence of search engines such as Google and Google Scholar, along with other specialized scholarly databases, has dramatically reshaped how scientific content is indexed, discovered and retrieved (Van Dijck, 2010). Moreover, new AI-driven technologies, including large language models (LLMs) and generative chatbots, are poised to further alter the way in which academic research is accessed (Urbano, 2024).

To address these developments, academic search engine optimization (A-SEO) has emerged as a means of ensuring that scholarly literature is indexed and ranked in such a way that it can be readily found via organic (unpaid) searches (Beel et al., 2010; Font-Julian et al., 2024). Although the techniques of SEO may appear to be more closely tied to commercial marketing strategies, they are increasingly being recognized as legitimate activities in library science, repository management, research dissemination, and science communication—especially given the exponential growth in digital scholarly outputs (Schilhan et al., 2021).

Today, A-SEO occupies a critical space within the broader field of science studies, reflecting the interplay of scientific production, distribution, and reception, as knowledge circulates through increasingly powerful digital market-based infrastructures (Van Dijck, 2010). These online platforms enhance ‘attention economies’, wherein reputation, citation, and broader societal engagement hinge on visibility (Franck, 2002). In such a setting, the emergence of SEO data providers (e.g., Semrush and Ahrefs) enables the collection of a wide variety of web metrics related to the visibility and use of online research objects, thereby fuelling the growth of A-SEO studies.

Notwithstanding the work of these data providers, authors generally select their data sources based on convenience, institutional license availability, or cost considerations and rarely conduct a thorough cross-comparison (Reyes-Lillo et al., 2023). This is especially problematic because coverage, accuracy, nomenclature, and data acquisition methods may vary markedly from one platform to another, resulting in inconsistent findings that can hamper the cumulative study of academic web visibility (Orduña-Malea and Font-Julian, 2024).

Although evaluating the reliability of SEO data providers is vital for ensuring methodologically robust research on research visibility (Orduña-Malea and Font-Julian, 2024), there has been limited attention to the variability and reliability of the different tools that generate these metrics (Jansen et al., 2022).

One of the most widely used SEO-based metrics, often referred to as web statistics (Jana and Chatterjee, 2004), is the volume of organic web traffic arriving at a website from a search engine. This traffic can be further segmented by such variables as the search term, the destination URL, or the country from which the search query was made.

If significant differences do exist in the calculation of web traffic from one data provider to another, then the choice of tool could directly influence the results and conclusions obtained, thereby affecting the reliability of scientific research. Similar challenges are faced when conducting bibliometric analyses, where differences in sources can produce divergent results due to varying coverage (Visser et al., 2021). Although the bibliometric literature has addressed this problem, it has yet to be studied systematically from the perspective of A-SEO.

2.  Objectives

The overarching aim of this study is to identify potential discrepancies in organic traffic estimates made by four widely used SEO tools—Ahrefs, Semrush, Serpstat, and Ubersuggest—that may lead to divergent conclusions in academic SEO research. To this end, this study seeks to answer the following research questions:

Q1. Does global organic traffic differ from one SEO tool to another?

Q2. Does organic traffic by country differ from one SEO tool to another?

By examining two large open-access academic mega-publishers—MDPI and Frontiers—the study aims to provide insights and recommendations for researchers, librarians, journal editors, and technology developers interested in measuring or enhancing scholarly visibility.

3.  Methodology

3.1 Research Design and Data Sources

The following four SEO tools were selected based on their popularity, comprehensive feature sets, and professional reputation within the SEO industry (the figure given for each edition is the approximate size of its keyword database):

(a) Ahrefs (https://ahrefs.com). Lite edition: ~28.7 billion keywords.
(b) Semrush (https://semrush.com). Guru edition: ~26.4 billion keywords.
(c) Serpstat (https://serpstat.com). Trial edition: ~8.07 billion keywords.
(d) Ubersuggest (https://app.neilpatel.com/en/ubersuggest). Business edition: ~1.25 billion keywords (inferred from external sources as it leverages Moz’s API).

Each tool provides estimates of organic web traffic, both global and country-level, for a given domain name.

Consistent with previous A-SEO research designs (Beel et al., 2010; Nanda et al., 2021), this study adopts a comparative case study approach considering two major academic publishers: MDPI (mdpi.com) and Frontiers (frontiersin.org). These publishers were selected because they host vast and varied sets of scientific articles across multiple disciplines, generating millions of potential keywords and thousands of URLs. This diversity is crucial for robust cross-comparisons, as narrower-scope publishers might yield simpler, less conclusive findings. In addition, these publishers offer full-text access to their publications, and use extensive promotional and online marketing strategies on their journal websites (Orduña-Malea and Aguillo, 2022).

3.2 Data Collection Procedure

A general search for each domain name (mdpi.com and frontiersin.org) was conducted using the four SEO tools selected. On platforms requiring a language/country combination (e.g., Ubersuggest), English/USA was selected, while on those requiring a search engine market (e.g., Serpstat), google.com was selected. Regardless of these parameters, the results collected were those offered on the results page for each domain name at a general level.

Web scraping techniques were used to capture both global and country-level web traffic data from each SEO tool’s results page. These were then exported to a spreadsheet for further analysis. Since Ubersuggest reports traffic data disaggregated by country/language combinations, these data were manually consolidated.
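As an illustration of the consolidation step (a minimal sketch, not the authors’ actual pipeline), the following Python snippet aggregates Ubersuggest-style country/language rows into per-country totals. The file name and column names (“country”, “language”, “traffic”) are assumptions, as the real export format may differ.

    # Minimal consolidation sketch; file and column names are hypothetical.
    import pandas as pd

    raw = pd.read_csv("ubersuggest_mdpi_countries.csv")  # assumed export

    # Ubersuggest reports one row per country/language pair (e.g., US/English,
    # US/Spanish); sum over languages so each country appears once.
    by_country = (
        raw.groupby("country", as_index=False)["traffic"]
        .sum()
        .sort_values("traffic", ascending=False)
    )

    by_country.to_csv("ubersuggest_mdpi_by_country.csv", index=False)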

Using the capabilities of the tools analysed, global traffic data were collected retrospectively for the five-month period (February to June 2024, both inclusive), while country-level traffic data corresponded specifically to June 2024. All data were extracted on the same day (July 5, 2024) to minimize temporal variability.

4.  Results

4.1 Global traffic counts

Figure 1 compares the monthly organic traffic that Google drives to each publisher’s website (MDPI on the left and Frontiers on the right) based on Ahrefs, Semrush, Serpstat, and Ubersuggest estimates. Each data point on the chart corresponds to a specific tool’s traffic estimate for that domain and month. The y-axis indicates the approximate number of monthly visits, and the x-axis shows the months for which data were collected from the four SEO tools.

Figure 1: Monthly organic traffic from Google Search to MDPI (left) and Frontiers (right) domain names. Note: The y-axis uses a logarithmic scale.

The results indicate that Ahrefs and Semrush provide coverage of a similar order of magnitude, while Serpstat and Ubersuggest provide significantly higher traffic estimates. This pattern holds for both the MDPI and Frontiers analyses. An analysis of the ratios between each pair of SEO tools (Table 1) further confirms these results, highlighting order-of-magnitude differences between the tools.

Table 1: Monthly organic traffic ratios between SEO tools for MDPI (top) and Frontiers (bottom).
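As a minimal sketch of this ratio analysis (not the authors’ code), the following snippet computes the pairwise ratios between tools from a table of monthly estimates; the traffic figures below are placeholders, not the study’s data.

    # Pairwise tool-to-tool traffic ratios, as in Table 1 (illustrative data).
    from itertools import combinations

    import pandas as pd

    traffic = pd.DataFrame(
        {
            "Ahrefs": [1.0e6, 1.1e6],       # placeholder values
            "Semrush": [1.2e6, 1.3e6],
            "Serpstat": [3.1e6, 2.9e6],
            "Ubersuggest": [4.0e6, 4.2e6],
        },
        index=["2024-02", "2024-03"],
    )

    # One ratio column per ordered pair of tools, computed per month.
    ratios = pd.DataFrame(
        {f"{a}/{b}": traffic[a] / traffic[b]
         for a, b in combinations(traffic.columns, 2)}
    )
    print(ratios.round(2))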

Focusing on longitudinal data allows us to observe divergent trends between the tools. For example, traffic to MDPI, as measured by Semrush and Ahrefs, grew from February to March, whereas Serpstat data indicated a slight decrease in the same period (from 3,127,915 to 2,944,571). Thus, drawing definitive conclusions about the domain’s traffic evolution over time is challenging.

This discrepancy is likely attributable to Serpstat’s search filter, which selects a specific version of Google Search (e.g., google.com), unlike Semrush and Ahrefs, which filter by country. Although all the tools provide general search results regardless of the filter initially chosen, these methodological differences may influence the traffic figures reported by Serpstat.

4.2 Traffic per country

Figure 2 compares the volume of traffic that each SEO tool attributes to specific countries for a given domain. Here, we report the results for the MDPI domain only. The x-axis shows the four SEO platforms, while the y-axis shows the estimated monthly traffic from a particular country. For clarity, the top 10 countries according to Semrush data serve as the baseline for comparison.

Figure 2: Country traffic counts per SEO tool for MDPI domain name

As observed in Figure 2, the differences in the volume of organic traffic for each country across the tools are significant. For example, in the case of the United States, Ahrefs estimates 274,000 visits, while Serpstat reports 6.1 million, making direct comparisons of absolute traffic values by country across platforms highly unreliable.

If we consider the relative rankings of countries by traffic, the United States consistently emerges as the country generating the most traffic to MDPI (Figure 3). However, we find notable inconsistencies in the case of the remaining countries. For example, India ranks second for three of the SEO tools, but falls to tenth position in Serpstat’s data. Likewise, Australia holds a lower ranking in Ubersuggest, while the Philippines ranks high in Semrush (5th) and Ahrefs (4th), but much lower in Ubersuggest (13th) and Serpstat (15th).

Figure 3: Country traffic counts ranking per SEO tool for MDPI domain name
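A ranking comparison of this kind can be sketched as follows, assuming a table of country-level estimates per tool (the values are placeholders, not the study’s data); Spearman correlations between the per-tool rankings quantify how strongly the tools disagree.

    # Rank countries within each tool and compare rankings (illustrative data).
    import pandas as pd

    country_traffic = pd.DataFrame(
        {
            "Ahrefs": [274_000, 150_000, 90_000],       # placeholder values
            "Semrush": [1_000_000, 600_000, 300_000],
            "Serpstat": [6_100_000, 900_000, 500_000],
            "Ubersuggest": [2_000_000, 700_000, 400_000],
        },
        index=["United States", "India", "Philippines"],
    )

    ranks = country_traffic.rank(ascending=False)  # 1 = most traffic
    print(ranks)
    print(ranks.corr(method="spearman").round(2))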

5. Discussion

The results obtained allow us to answer both research questions in the affirmative: that is, global organic traffic and traffic by country differ substantially from one SEO tool to another. At the country level, the data discrepancies concern not only absolute traffic volumes but also the relative rankings of countries, making robust country-based analyses especially problematic.

As such, this study confirms previous concerns about the methodological challenges encountered when comparing analytics tools (Jansen et al., 2022). However, this work represents the first attempt to date at collecting and comparing web traffic from multiple sources when undertaking an analysis of academic websites.

A plausible explanation for the inconsistencies found is that each SEO provider varies in its coverage, filtering criteria, and data-sampling methods. In addition, the variation in their respective capacities to detect and filter out bots might cause data discrepancies. However, as many of these data-collection mechanisms remain proprietary and opaque to researchers (Ma, 2023), it is not currently possible to confirm the precise origin of these discrepancies.

Further complicating matters, each tool varies in its definition and labelling of ‘traffic’. Thus the metric employed can be the estimated number of monthly visits (the case of Ahrefs and Ubersuggest), the number of users expected to visit the domain (the case of Semrush), or the number of visitors per month (the case of Serpstat). These definitional differences seriously hinder attempts at standardization.

For meta-researchers and A-SEO scholars, these outcomes highlight the need to triangulate data from multiple tools whenever possible. In a similar way to cross-bibliographic database comparisons, no single data provider offers comprehensive coverage.

Journals and repositories seeking to track their ‘most visited’ articles may also consider using first-party analytics (e.g., Google Analytics) to validate or contextualize third-party estimates. Likewise, librarians might adopt a standardized pipeline for A-SEO measurement, including disclaimers about each tool’s limitations and methodological differences.

A pressing need for metric standardization initiatives emerges from this discussion. Indeed, SEO tool developers should consider collaborating with the academic community to define and adopt standard metrics. Such cooperation would help enhance web data literacy and improve reproducibility (Orduña-Malea and Font-Julian, 2024).

Finally, as LLM-driven chatbots begin to transform how users discover academic content, new metrics and methods will be needed to measure ‘visibility’ in generative AI results (Font-Julian et al., 2024; Urbano, 2024). Academic generative engine optimisation (A-GEO) tools will soon become essential to ensure a broader and more accurate measure of research visibility.

6.  Conclusions

We have presented a comparative analysis of four widely used SEO tools—Ahrefs, Semrush, Serpstat, and Ubersuggest—as applied to two major academic mega-publishers—MDPI and Frontiers. The results reveal substantial discrepancies in their traffic estimates, suggesting that SEO tool selection can significantly determine findings.

Based on the results obtained, we propose the following practical recommendations:

Use multiple tools for triangulation: Researchers seeking robust web visibility metrics should use at least two, preferably more, SEO platforms to mitigate the risk of relying on skewed or incomplete data.

Interpret traffic as a relative measure: Absolute values vary considerably across tools; therefore, researchers should treat these figures as relative indicators rather than definitive counts (see the sketch after this list).

Document the data source: Researchers should clearly state which tool was used, the date of measurement, the search filter applied (e.g., google.com vs. US/English), and the definition of each metric provided by the tool, so as to enable future replication.
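As a minimal sketch of the second recommendation (not part of the study’s pipeline), the following snippet converts each tool’s country-level estimates into within-tool shares, which are more comparable across platforms than absolute counts; the values are placeholders.

    # Convert absolute estimates into within-tool shares (illustrative data).
    import pandas as pd

    country_traffic = pd.DataFrame(
        {
            "Ahrefs": [274_000, 150_000],       # placeholder values
            "Serpstat": [6_100_000, 500_000],
        },
        index=["United States", "India"],
    )

    # Shares sum to 1 within each tool, so tools with very different
    # absolute scales become directly comparable.
    shares = country_traffic / country_traffic.sum()
    print(shares.round(3))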

This study underscores the importance of methodological transparency and calls on researchers to critically evaluate SEO metrics, ensuring that measures of ‘visibility’ are reproducible and verifiable. As knowledge systems continue to evolve, bridging the gap between SEO industry practices and academic research standards will be critical for the long-term credibility and utility of A-SEO studies.

Finally, the problems highlighted in this study echo challenges encountered in other quantitative research domains, in which the choice of tools must be declared and justified, and their limitations described and contextualized. Each SEO tool is different for a variety of reasons—primarily commercial—and their methods and implementation are often opaque. Consequently, there is an obvious need for the development of tools created by and for the scientific community that can guarantee greater rigour and transparency.

Open science practices

Available at https://riunet.upv.es/handle/10251/223019

Author contributions

CIFJ: Data curation, Investigation, Writing – review & editing. CL: Data curation, Investigation, Writing – original draft. EOM: Conceptualization, Formal analysis, Funding acquisition, Visualization, Writing – original draft. IA: Conceptualization, Supervision, Writing – review & editing.

Competing interests

The authors have no competing interests.

Funding information

Grant PID2022-142569NA-I00, funded by MCIN/AEI/ 10.13039/501100011033 and by “ERDF – A way of making Europe”

References

Beel, J., Gipp, B., & Wilde, E. (2010). Academic Search Engine Optimization (ASEO): Optimizing Scholarly Literature for Google Scholar & Co. Journal of Scholarly Publishing, 41(2), 176–190.

Franck, G. (2002). The scientific economy of attention: A novel approach to the collective rationality of science. Scientometrics, 55, 3–26.

Font-Julian, C. I., Orduña-Malea, E., & Codina, L. (2024). ChatGPT Search as a tool for scholarly tasks: evolution or devolution? Infonomy, 2(5).

Jana, S., & Chatterjee, S. (2004). Quantifying Web‐site visits using Web statistics: an extended cybermetrics study. Online Information Review, 28(3), 191–199.

Jansen, B. J., Jung, S. G., & Salminen, J. (2022). Measuring user interactions with websites: A comparison of two industry standard analytics approaches using data of 86 websites. PLoS ONE, 17(5), e0268212.

Ma, L. (2023). Information, platformized. Journal of the Association for Information Science and Technology, 74(2), 273–282.

Nanda, J. K., Hay, J. L., & Marchetti, M. A. (2021). Analysis of keywords used in Internet searches for melanoma information: observational study. JMIR Dermatology, 4(1), e25720.

Orduña-Malea, E., & Aguillo, I. F. (2022). Are link-based and citation-based journal metrics correlated? An Open Access megapublisher case study. Quantitative Science Studies, 3(3), 793–814.

Orduña-Malea, E., & Font-Julian, C. I. (2024). Research on Research Visibility. BiD: Textos universitaris de biblioteconomia i documentació, 53(1).

Reyes-Lillo, D., Morales-Vargas, A., & Rovira, C. (2023). Reliability of domain authority scores calculated by Moz, Semrush, and Ahrefs. Profesional de la información, 32(4).

Rovira, C., Guerrero-Solé, F., & Codina, L. (2018). Received citations as a main SEO factor of Google Scholar results ranking. Profesional de la Información, 27(3), 559–569.

Schilhan, L., Kaier, C., & Lackner, K. (2021). Increasing visibility and discoverability of scholarly publications with academic search engine optimization. Insights, 34(1).

Urbano, C. (2024). Los chatbots de IA generativa como fuentes de tráfico web: aproximación desde la analítica web de un portal de revistas. Anuario ThinkEPI, 18.

Van Dijck, J. (2010). Search engines and the production of academic knowledge. International Journal of Cultural Studies, 13(6), 574–592.

Visser, M., Van Eck, N. J., & Waltman, L. (2021). Large-scale comparison of bibliographic data sources: Scopus, Web of Science, Dimensions, Crossref, and Microsoft Academic. Quantitative Science Studies, 2(1), 20–41.

Editors

Ludo Waltman
Editor-in-Chief

Ludo Waltman
Handling Editor

Editorial Assessment

by Ludo Waltman

DOI: 10.70744/MetaROR.276.1.ea

In this article the authors compare four search engine optimization tools for measuring web traffic for academic work, focusing on two scientific publishers, MDPI and Frontiers. The article has been reviewed by two reviewers. The reviewers emphasize the relevance of the research for librarians, publishers, journal editors, researchers, and other stakeholders. The reviewers point out that the inclusion of only two publishers in the study is a significant limitation. They recommend justifying the choice of these publishers. Reviewer 1 also suggests the article could be strengthened by clarifying definitions of key concepts, providing more detailed explanations of the algorithms used by the different search engine optimization tools, and deepening the empirical analysis. Reviewer 2 suggests the authors could improve how they describe the value of the research for readers.

Recommendations for enhanced transparency

  • All analytical code used to produce results, figures, and tables should be made publicly available in an appropriate repository (e.g., OSF, Zenodo, Dryad), and the article should contain a link to the code files.

For more information on these recommendations, please refer to our author guidelines.

Competing interests: None.

Peer Review 1

Zohreh Zahedi

DOI: 10.70744/MetaROR.276.1.rv1

This paper compares discrepancies in traffic tracking among four different SEO tools across two publishers. The topic is interesting and highly relevant, as understanding inconsistencies in SEO metrics is important for publishers, researchers, and other stakeholders who rely on such data for evaluation and/or strategic decisions. However, there are some limitations and areas that require further clarification.

First, the study includes only two publishers. While this allows for an in-depth comparison, the limited sample size restricts the generalizability of the findings. The authors should acknowledge this limitation more explicitly and, if possible, justify the selection of these publishers or discuss how representative they are of broader publishing contexts.

Second, the paper would benefit from a more detailed explanation of the mechanisms and algorithms used by the four SEO tools. Many readers may not be familiar with how these tools estimate traffic, what data sources they rely on, or how their approaches differ.

Additionally, key concepts such as “organic traffic” should be clearly defined.

Finally, a more in-depth analysis of the results would provide greater insight. For example, the authors could explore patterns in discrepancies across countries and discuss possible structural or methodological reasons behind these differences.

Overall, the study addresses an important issue, but expanding the methodological explanations and clarifying conceptual definitions would significantly strengthen the manuscript.

Competing interests: None.

Peer Review 2

Angelo Salatino

DOI: 10.70744/MetaROR.276.1.rv2

This study investigates potential discrepancies in organic traffic estimates among four leading SEO tools—Ahrefs, Semrush, Serpstat, and Ubersuggest—and how these differences might impact academic SEO research conclusions. By analysing global and country-specific organic traffic for two major academic publishers (MDPI and Frontiers), the research provides actionable insights and recommendations for researchers, librarians, journal editors, and technology developers on measuring and enhancing scholarly visibility.

Please note I was also the reviewer of the same manuscript for STI 2025. I can see that the authors took into account some of my early feedback in preparing this manuscript. Specifically, Figure 1 now includes Ubersuggest, and they have included a table accounting for the ratio between sources rather than simply focusing on their magnitude.

I encourage the authors to justify the choice of MDPI and Frontiers. Why these two publishers?

Additionally, I encourage the authors to describe what value researchers and librarians can gain from A-SEO. Adding this will help readers contextualise the paper and appreciate its value.

Finally, I took a look at the repository (in the previous review I couldn’t). The data are there. As you have done some data pre-processing, do you think you could also provide some code to support researchers in replicating your analysis on other publishers?

Competing interests: None.
