The State of Tech Journalism in 2024

By Rex Freiberger and Christen da Costa

Updated Oct 10, 2024 12:11 AM

Introduction

Our three-year investigation of trust in tech journalism reveals a thriving for-profit fake-review industry dominating the web (and Google), with a shocking 45% of corporate-owned and small publishers producing fake product tests and 85% labeled untrustworthy.

Our annual State of Tech Reviews Report examines the trustworthiness of 496 tech journalists across 30 categories (electronics and appliances) using 55 indicators to measure expertise, transparency, data science, and authenticity.

Our Trust Rating creates a score from zero to 100 for each publication, classifying them into six categories: Highly Trusted, Trusted, Mixed Trust, Low Trust, Not Trusted and Fake Reviewer. The Trust Ratings power our True Score, the web’s most accurate quality score for a product.

Every reviewer receives a Trust Rating for each category, such as TVs, soundbars, and computer monitors, which is then averaged for an overall Publication Trust Rating.

This year, our State of Tech Reviews ranks Top 10 VPN as the most trusted in the Americas, with a score of 102.2%. RTINGs is second at 99.58% and VPN Mentor is third with 97.45%. HouseFresh finishes fourth with a score of 95.95%, while Air Purifier First earns a fifth-place finish with 88.65%.

Our report finds that the top 3 most trusted reviewers are independent journalists, and the 10 most trusted are heavily skewed towards independents, with just one publication owned by “Big Media”.

Cool Material and Turbo Future rank lowest among the fake reviewers, with Trust Ratings of 0% and 2.5% respectively. After them are LoLVV, Antenna Junkies and Reliant.co.uk.

Our report includes the Fake Five – the tech journalists with the most web traffic who fake their reviews. Forbes ranks as the biggest Fake Reviewer with a Trust Rating of 34.96% and monthly traffic of 65.5M visitors. The Guardian and CNET rank second and third with monthly traffic of 30.8M and 22.4M respectively, along with Trust Ratings of 38.51% and 61.70%. CBS and Business Insider, scoring 9.00% and 55.14% respectively, round out our Fake Five.

Key Takeaways

Our investigations uncovered the following key takeaways:

  1. Almost half of all online tech reviews are fake: 45% of the nearly 500 publications in our dataset, across 30 categories, are faking their product tests.
  2. 85% of online tech review publications are untrustworthy: Among the nearly 500 publications in our dataset, 45% are faking reviews and an additional 39.7% are generally unreliable. This means there is no reason to trust 84.8% of review publications.
  3. When looking at monthly traffic numbers, four out of five of the highest traffic sites in our dataset are fake reviewers. Together, these four publications alone bring in almost 176M monthly views – that’s about 18% of the total traffic in our entire dataset.
  4. The majority of corporate-owned publications suffer from fake testing. 54% of all the corporate-owned tech reviewers in our dataset have been labeled fake reviewers.
  5. Fake testing is a problem so significant that it makes your chances of finding a publication that isn’t faking things into a coin flip. Of the nearly 500 publications we researched, fake testing was a notable problem in 45% of them. 
  6. Fake reviews are alarmingly common on the first page of a Google search. For terms like “best office chair” or “best computer monitors”, 22% of the results will link directly to websites that claim to test products but provide no proof of their testing or even fake it.
  7. The only Highly Trusted publications (the highest trust classification a publication can earn) are independent, and there are not a lot of them. Just 4 publications – 0.8% – earned our “Highly Trusted” classification.
  8. Household brand names traditionally trusted by people, including Good Housekeeping, Forbes, and Consumer Reports, showed clear and distressing signs of unreliable reviews and fake testing.
  9. Not one of the 30 categories we researched has more trusted reviewers than untrusted ones.
  10. Projectors are the least trustworthy category in our entire dataset. 66.7% of the tech reviewers we analyzed in this category are faking their testing.
  11. While routers are the most trustworthy category in our dataset, only 19.5% of tech reviewers in this category were rated as “Trusted” or “Highly Trusted.” This highlights a generally low level of trust across all categories, despite routers leading the pack.

Why We Created This Report:

The findings of this report highlight a widespread issue, but our intention is not to simply expose and dismiss these companies.

Instead, we aim to engage them constructively, encouraging a return to the fundamental purpose of journalism: to speak truth to power and serve the public.

Our goal is to hold powerful corporations and brands accountable, ensuring that consumers don’t waste their money and time on low-quality products.

We view this as part of a broader issue stemming from a decline in trust in media over recent decades, and we are committed to being part of the solution by helping restore that trust.

Methodology

Reliable statistical insights depend on a solid, transparent methodology that serves as the foundation for every conclusion drawn.

Here, we outline the rigorous processes behind our numbers, beginning with data collection methods designed to capture relevant, high-quality information. Each source undergoes assessment through our Trust Rating System, ensuring that only credible publications inform our analyses. Finally, by applying a range of analytical methods—including statistical, regression, and quantitative techniques—we transform raw data into key takeaways and important insights on the nature of tech reviews.

1. Data Collection

This study utilized a comprehensive data collection approach to analyze the trustworthiness of tech journalism.

The data was gathered over a three-year period, from June 24, 2021, to June 21, 2024, encompassing a wide range of sources and methods.

Sources and Methods:

  • Web Scraping and Manual Review: Data was collected through web scraping techniques, Google Keyword Planner, Ahrefs, and supplemented with manual reviews to ensure accuracy and completeness. The 55 indicators were manually assessed by human researchers (more on this below).
  • Publications and Reviews: The study analyzed 497 tech publications and 3,088 reviews across 30 product categories. These categories included popular tech products such as air conditioners, air purifiers, blenders, coffee makers, gaming chairs, gaming headsets, and more. While ~200 publications tend to be more prominent in traffic, we believe the full 497 captures smaller publishers in more niche verticals, and thus the majority of the sites that receive tech review traffic on the web.

Criteria for Selection:

  • Publications: Selected based on their presence in Google, relevance and prominence in tech journalism, covering a wide spectrum of both well-known and lesser-known sites.
  • Reviews: Included based on their availability, detail, and the presence of indicators relevant to the trust rating system.

2. Trust Rating System

The trustworthiness of each publication and review was assessed using a detailed Trust Rating System.

Our Trust Rating is a proprietary system that analyzes the credibility and trustability of a product reviewer.

This system comprises 55 indicators categorized into 8 subcategories to provide a comprehensive evaluation of expertise on a specific product category and overall. 

The rating exists on a logarithmic scale of 0 to 100 for each publication, classifying them into six categories: Highly Trusted, Trusted, Mixed Trust, Low Trust, Not Trusted and Fake Reviewer.

Our Trust Ratings power our True Score, which we call the web’s most accurate quality score for a product since it filters out all the fake reviews on the web.

Our expertise as product testers allows us a unique perspective to evaluate the findings of our Trust Ratings.

Scoring and Classification:

  • Scoring: Each indicator was scored on a predefined scale, with higher scores indicating higher trustworthiness. The scores for individual indicators were aggregated to produce an overall trust rating for each publication.
  • Classification: Based on their trust ratings, publications were classified into 6 categories: Highly Trusted, Trusted, Mixed Trust, Low Trust, Not Trusted, and Fake Reviewer.
    • Highly Trusted: 90 – 100+ 
    • Trusted: 80 – 89
    • Mixed Trust: 60 – 79
    • Low Trust: 50 – 59
    • Not Trusted: 0 – 49
    • Fake Reviewer: 0 – 49, and either 30% of covered categories or at least 3 covered categories show signs of faked testing; fulfilling either criterion triggers this classification (a sketch of this logic follows the list)
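To make the thresholds and the Fake Reviewer trigger concrete, here is a minimal Python sketch of the classification logic as stated above. The function and variable names are our own illustration, not the actual scoring code, and note that some publications later in this report carry the Fake Reviewer label at Trust Ratings above 49%, so in practice the faked-testing trigger appears to outweigh the score band; the sketch follows the definition as written.

def classify_publication(trust_rating, faked_categories, covered_categories):
    """Map a Trust Rating (0-100+) plus faked-testing flags to a trust class."""
    faked_share = faked_categories / covered_categories if covered_categories else 0.0
    # Fake Reviewer trigger: score in the 0-49 band AND either 30% of covered
    # categories or at least 3 covered categories show signs of faked testing.
    if trust_rating < 50 and (faked_share >= 0.30 or faked_categories >= 3):
        return "Fake Reviewer"
    if trust_rating >= 90:
        return "Highly Trusted"
    if trust_rating >= 80:
        return "Trusted"
    if trust_rating >= 60:
        return "Mixed Trust"
    if trust_rating >= 50:
        return "Low Trust"
    return "Not Trusted"

# Example: a publication scoring 39.01% with faked testing in 3 of 6 covered categories.
print(classify_publication(39.01, faked_categories=3, covered_categories=6))  # Fake Reviewer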

Indicators and Categories:

  • Indicators: The 55 indicators encompass aspects such as review authenticity, evidence of testing, reviewer expertise, transparency, and consistency.
  • Categories and Subcategories: Indicators are grouped into categories like Review Authenticity, Testing Evidence, Reviewer Expertise, and Transparency. Each category is further divided into subcategories for a more granular analysis.
Trust Rating Category | Weight | Definition
1. Human Authenticity | 9% Total Score | General Trust Grouping – Publication staff are real humans.
2. Review System | 1.95% Total Score | General Trust Grouping – Publication uses a thorough, numerical scoring system to differentiate products from each other.
3. Integrity | 4% Total Score (+0.4% Bonus) | General Trust Grouping – Publication promotes editorial integrity and prioritizes genuinely helping consumers.
4. Helpfulness | 4.05% Total Score (+0.8% Bonus) | General Trust Grouping – Content is structured to effectively communicate product information to consumers.
5. Category Qualification | 4% Total Score | Category Trust Grouping – The publication is actually claiming to test the category, whether directly or through implication.
6. Category Expertise | 8% Total Score | Category Trust Grouping – The reviewer and publication are experienced experts in the category.
7. Visual Evidence | 24% Total Score (+4% Bonus) | Category Trust Grouping – The publication provides visual evidence to show they’re testing and using products in real-world scenarios or testing labs.
8. Data Science | 44% Total Score | Category Trust Grouping – The reviewer tested the product and provided their own quantitative measurements from their testing.

The point distribution was carefully calibrated to reflect the relative importance of different factors in establishing a publication’s trustworthiness. For example:

  • Human Authenticity accounted for 9% of the total score
  • Integrity contributed 4% to the total score
  • Visual Evidence was weighted heavily at 24% of the total score
  • Data Science, representing the core of testing practices, constituted 44% of the total score

This weighting system allowed us to create a nuanced trust rating that prioritized the most crucial aspects of reliable tech reviews, such as demonstrable testing practices and transparency. While it may seem unusual that Human Authenticity and Integrity occupy so little of the scoring compared to Visual Evidence and Data Science, this is intentional for a very simple reason: producing visual evidence and data points is difficult and time-consuming, and by doing both, good scores in authenticity and integrity more strongly confirm the validity and expertise of a publication.
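As a rough illustration of how these weights combine, here is a minimal Python sketch that rolls per-category performance (expressed as a fraction of the points available in each category) into a single Trust Rating using the weights from the table above. The structure and the sample numbers are hypothetical, not our production scoring sheet.

# Category weights (percent of total score) and bonus caps from the table above.
WEIGHTS = {
    "Human Authenticity":     (9.00, 0.0),
    "Review System":          (1.95, 0.0),
    "Integrity":              (4.00, 0.4),
    "Helpfulness":            (4.05, 0.8),
    "Category Qualification": (4.00, 0.0),
    "Category Expertise":     (8.00, 0.0),
    "Visual Evidence":        (24.00, 4.0),
    "Data Science":           (44.00, 0.0),
}

def trust_rating(category_scores, bonus_scores=None):
    """Combine per-category scores (0.0-1.0 fractions of points earned) into a 0-100+ rating."""
    bonus_scores = bonus_scores or {}
    total = 0.0
    for name, (base, bonus) in WEIGHTS.items():
        total += base * category_scores.get(name, 0.0)
        total += bonus * bonus_scores.get(name, 0.0)
    return round(total, 2)

# Hypothetical publication: strong on authenticity and visuals, weak on data science.
example = {
    "Human Authenticity": 0.9, "Review System": 0.5, "Integrity": 0.8,
    "Helpfulness": 0.7, "Category Qualification": 1.0, "Category Expertise": 0.8,
    "Visual Evidence": 0.6, "Data Science": 0.2,
}
print(trust_rating(example))  # 48.71 - lands in the Not Trusted band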

3. Analysis Methods

With our trust ratings established, we employed a variety of analytical techniques to explore the data and uncover meaningful insights:

  1. Statistical Analysis: We utilized descriptive statistics to understand the distribution of trust ratings across publications. This included measures of central tendency (mean, median, mode) and dispersion (standard deviation, range) to characterize the overall landscape of tech review trustworthiness.
  2. Regression Analysis: To explore relationships between various factors, we conducted regression analyses. For instance, we examined the correlation between a publication’s trust rating and its traffic data to determine if there was any relationship between a site’s credibility and its popularity.
  3. Quantitative Analysis: We performed in-depth quantitative analyses on several key aspects:
    • a) Trust Ratings: We analyzed the distribution of trust ratings across different types of publications (e.g., independent vs. corporate-owned) and across different product categories.
    • b) Traffic Data: We examined traffic data in relation to trust ratings and other factors to understand if there were any patterns in user engagement with more or less trustworthy sites.
    • c) Covered Categories: We quantified the breadth of coverage for each publication, analyzing how the number and type of product categories covered related to overall trustworthiness.
    • d) Testing Claims: We scrutinized the claims made by publications about their testing processes, cross-referencing these claims with our visual evidence and data science findings to verify their authenticity.
    • e) Parent Companies: We investigated the ownership structure of publications, analyzing how corporate ownership versus independent status correlated with trust ratings and testing practices.
  4. Comparative Analysis: We conducted comparative analyses across different subgroups of publications, such as comparing the top 10 most trustworthy publications against the 10 least trustworthy to identify key differentiating factors.
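To illustrate the first two techniques above, here is a minimal Python sketch (standard library only; the regression helpers need Python 3.10+) run on a handful of Trust Rating and monthly traffic pairs taken from the publication tables later in this report. It shows the kind of calculation involved, not our actual analysis pipeline.

import statistics as stats

# (Trust Rating %, monthly traffic) pairs sampled from the tables in this report.
publications = [
    (78.10, 88_300), (65.66, 17_303_587), (57.06, 16_098_158),
    (40.95, 254_915), (59.98, 2_700_000), (11.90, 3_115),
]
ratings = [r for r, _ in publications]
traffic = [t for _, t in publications]

# Descriptive statistics: central tendency and dispersion of Trust Ratings.
print("mean:", round(stats.mean(ratings), 2), "median:", stats.median(ratings))
print("stdev:", round(stats.stdev(ratings), 2), "range:", (min(ratings), max(ratings)))

# Simple linear regression of traffic on Trust Rating, plus their correlation,
# to check whether credibility and popularity move together in this tiny sample.
slope, intercept = stats.linear_regression(ratings, traffic)
print("slope:", round(slope), "correlation:", round(stats.correlation(ratings, traffic), 3))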

This multifaceted analytical approach allowed us to not only quantify the trustworthiness of individual publications but also to uncover broader trends and patterns in the tech review landscape.

By combining rigorous data collection with comprehensive analysis, we were able to provide a detailed, data-driven picture of the state of tech reviews and the factors that contribute to review integrity and consumer trust.

To analyze the collected data and derive meaningful insights, various tools were employed.

Tools and Software:

Good tools and software are essential in producing reliable statistical reports, as they directly influence the accuracy and integrity of the data collected.

Advanced software streamlines data gathering and processing, minimizing errors and ensuring consistency throughout the analysis.

By leveraging data analysis tools and data visualization software, we can efficiently manage large datasets, apply complex analytical methods, and visualize trends with precision.

These resources lay the groundwork for trustworthy insights, making it possible to draw meaningful conclusions that stand up to scrutiny. 

  • Data Analysis Tools: AirTable and Google Sheets were the primary tools used for data capture and analysis.
  • Data Visualization: Tools like Google Sheets and ChartGPT were used to create visual representations of the data, aiding in the interpretation and presentation of findings.

Data Quality and Missing Data:

Ensuring data quality is foundational to any statistical analysis, as accurate and complete data allows for credible results and valid conclusions. Verifying data quality involves checking for consistency, accuracy, and relevance, all of which safeguard against misleading insights.

Missing data, however, is a common issue and must be carefully managed; unaddressed gaps can skew results and obscure real trends.

  • Handling Missing Data: Missing data was addressed by flagging entries for further review, ensuring the integrity of the analysis.
  • Data Quality Assurance: Rigorous quality assurance procedures were implemented to verify the accuracy and consistency of the collected data, including cross-validation and manual checks, along with several audits that helped confirm, prune, and adjust the scores, data points, and cited reviews and buying guides used to execute the analysis of the publications.

By employing these methodologies, this study provides a robust and comprehensive analysis of the trustworthiness of tech journalism, offering valuable insights into the prevalence of fake reviews and unreliable testing.

The Broader Issue/Problem

To understand the depth of this issue, we focused on five key areas: the overall decline in reviewer quality, the fake review industry on Google, the significant role of corporate-owned media in disseminating misinformation, the trust gap between corporate and independent publishers, and the issues with various product categories.

While corporate giants are often more manipulative, independents also struggle with credibility.

This report underscores the urgent need for a renewed commitment to transparent and honest journalism, ensuring that tech reviews genuinely serve consumers and restore public trust.

Google Is Serving Up Fake Reviews

Every day, Google processes around 8.5 billion searches. That’s a mind-blowing number. And with that amount of influence, Google plays a huge role in what we see online.

Despite their efforts to remove fake reviews from search results, our investigation into 30 tech categories shows that Google is still serving up a whole lot of fake reviews. These untrustworthy reviews are sitting right at the top of the search results on page 1, where most of us click without thinking twice.

Big names like CNN, Forbes, WIRED, Rolling Stone, and the most popular tech reviewers like Consumer Reports, TechRadar, and The Verge, along with independent reviewers, are all part of this huge problem of fake reviews.

Key Takeaways

  • Half of Google search results for tech reviews are untrustworthy. 49% of the results on page one of a Google search for terms like “best tv” will direct you to an unhelpful site with low trust, no trust, or even outright fake testing. Meanwhile, only 51% of the results will be some degree of trustworthy.
  • A quarter of the results on the first page of Google are fake. 24% of the results on the first page of a search for terms like “best office chair” will link directly to websites that claim to test products but provide no proof of their testing or even fake it.
  • More than half of the top results for computer keyboard searches are fake reviews. A staggering 58% of page one results belong to sites that fake their keyboard testing.
  • Google provides mostly helpful results when looking for 3D printers. An impressive 82% of page one results lead to trusted or highly trusted sites, with zero fake reviews in sight.

Trust Rating Classifications in Google Search Results

To figure out the key takeaways above, we had to first calculate the Trust Ratings across publications. Then, we Googled popular review-related keywords across the categories and matched the results with their respective Trust Ratings.

This allowed us to see, for example, how many of the results for “best air conditioners” on page 1 were fake, trusted, highly trusted, etc. We pulled it all together into the table below to make it easy to visualize. 

In the table, the percentages in the Classification columns are the share of results that fall into each trust class. They’re calculated by dividing the number of results in that class by the total results we found in each category (Total Results in Category column).
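As a minimal sketch of that calculation (with made-up classifications for a single hypothetical keyword category), each share is simply the count for that trust class divided by the category’s total classified results:

from collections import Counter

# Hypothetical page-one results for one keyword category, already matched
# to each publication's trust classification.
results = ["Trusted", "Fake Reviewer", "Trusted", "Highly Trusted",
           "Not Trusted", "Trusted", "Mixed Trust", "Fake Reviewer"]
counts = Counter(results)
total = len(results)

for cls in ("Fake Reviewer", "Not Trusted", "Low Trust",
            "Mixed Trust", "Trusted", "Highly Trusted"):
    # Share of results in this class = count in class / total results in category.
    print(f"{cls}: {counts.get(cls, 0) / total:.2%}")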

As you can see, these categories are swamped with fake or untrustworthy reviews. High-traffic, transactional keywords—where people are ready to buy—are overrun with unreliable reviews.

Category | Fake Reviewer | Not Trusted | Low Trust | Mixed Trust | Trusted | Highly Trusted | Total Results in Category | Average Trust Rating
3D Printers | 0.00% | 9.09% | 4.55% | 4.55% | 56.82% | 25.00% | 44 | 73.24%
Air Conditioners | 25.00% | 13.89% | 13.89% | 0.00% | 47.22% | 0.00% | 36 | 57.79%
Air Purifiers | 26.53% | 26.53% | 2.04% | 14.29% | 8.16% | 22.45% | 49 | 51.47%
Blenders | 32.43% | 6.76% | 8.11% | 12.16% | 21.62% | 18.92% | 74 | 62.88%
Cell Phone Insurance | 0.00% | 100.00% | 0.00% | 0.00% | 0.00% | 0.00% | 4 | 18.20%
Coffee Maker | 48.28% | 8.62% | 3.45% | 6.90% | 32.76% | 0.00% | 58 | 46.82%
Computer Monitors | 19.51% | 10.98% | 2.44% | 2.44% | 46.34% | 18.29% | 82 | 69.79%
Drones | 37.50% | 0.00% | 21.43% | 7.14% | 21.43% | 12.50% | 56 | 63.07%
Electric Bikes | 16.95% | 32.20% | 0.00% | 13.56% | 30.51% | 6.78% | 59 | 58.40%
Electric Scooters | 12.50% | 0.00% | 21.88% | 12.50% | 46.88% | 6.25% | 32 | 67.72%
Fans | 42.86% | 17.86% | 0.00% | 3.57% | 35.71% | 0.00% | 28 | 45.92%
Gaming Chairs | 0.00% | 60.00% | 8.57% | 8.57% | 22.86% | 0.00% | 35 | 53.11%
Gaming Headsets | 0.00% | 36.51% | 7.94% | 26.98% | 12.70% | 15.87% | 63 | 61.58%
Gaming Mouse | 0.00% | 20.93% | 16.28% | 13.95% | 18.60% | 30.23% | 43 | 68.02%
Headphones | 4.05% | 21.62% | 16.22% | 24.32% | 10.81% | 22.97% | 74 | 66.52%
Keyboards | 58.18% | 16.36% | 1.82% | 1.82% | 0.00% | 21.82% | 55 | 56.75%
Mouse | 10.67% | 56.00% | 6.67% | 9.33% | 6.67% | 10.67% | 75 | 37.31%
Office Chairs | 27.59% | 20.69% | 10.34% | 20.69% | 20.69% | 0.00% | 29 | 50.84%
Printers | 0.00% | 23.64% | 3.64% | 9.09% | 49.09% | 14.55% | 55 | 65.75%
Projectors | 30.36% | 0.00% | 7.14% | 44.64% | 14.29% | 3.57% | 56 | 58.54%
Router | 16.13% | 3.23% | 0.00% | 6.45% | 67.74% | 6.45% | 31 | 67.48%
Soundbars | 51.90% | 5.06% | 0.00% | 0.00% | 16.46% | 26.58% | 79 | 63.11%
Speakers | 56.06% | 10.61% | 1.52% | 10.61% | 10.61% | 10.61% | 66 | 50.29%
TVs | 14.52% | 23.39% | 0.00% | 11.29% | 34.68% | 16.13% | 124 | 58.96%
Vacuum Cleaners | 25.42% | 3.39% | 5.08% | 27.12% | 6.78% | 32.20% | 59 | 70.07%
VPNs | 4.69% | 1.56% | 9.38% | 31.25% | 50.00% | 3.13% | 64 | 69.07%
Webcams | 45.90% | 4.92% | 4.92% | 6.56% | 37.70% | 0.00% | 61 | 54.65%
Total Results in Classification | 352 | 255 | 92 | 192 | 395 | 205 | 1,491 | –

This is how we concluded that nearly half of the results (46.88%) are fake or untrustworthy. It’s disappointing that nearly a quarter are outright fake reviews, with no quantitative data backing up their testing claims.
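Those aggregate figures fall straight out of the totals row of the table above: combine the Fake Reviewer, Not Trusted, and Low Trust counts and divide by the 1,491 classified results. A quick check in Python:

totals = {"Fake Reviewer": 352, "Not Trusted": 255, "Low Trust": 92,
          "Mixed Trust": 192, "Trusted": 395, "Highly Trusted": 205}
classified = sum(totals.values())  # 1,491 results with a Trust Rating

untrustworthy = totals["Fake Reviewer"] + totals["Not Trusted"] + totals["Low Trust"]
print(f"{untrustworthy / classified:.2%}")             # 46.88% fake or untrustworthy
print(f"{totals['Trusted'] / classified:.2%}")         # 26.49% trusted
print(f"{totals['Highly Trusted'] / classified:.2%}")  # prints 13.75%; cited as 13.74% above due to rounding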

Now, to Google’s credit, they do show some trustworthy reviews—26.49% of the results are trusted. However, only 13.74% of the results come from Highly Trusted publications. We wish that number was higher.

In reality, the split between trusted and untrusted results is almost even, but it’s concerning how much of the content is unreliable or fraudulent. With so many fake reviews dominating the top spots, it’s clear there’s a serious trust issue in results that serve roughly 1.5 million searches every month.

Our Dataset of Keywords

To accurately reflect what a shopper is facing on Google for each of the 30 tech categories, we analyzed 433 transactional keywords that total 1.52 million searches per month.

These types of keywords are:

Type of Keyword | Definition | Examples
Buying Guide | These keywords help shoppers find the best product for their needs based on a guide format, often used for comparisons. | best tv for sports, best gaming monitor, best drone with cameras
Product Review | Connect the user to reviews of individual products, often including brand or model names, targeting users seeking detailed product insights. | dyson xl review, lg 45 reviews
Additional Superlatives | Highlight specific features or superlative qualities of a product, helping users find products with specific attributes. | fastest drone, quietest air conditioner

Here’s a sample of the 433 keywords. We used a mix of keywords from each of the three types.

Type of Keyword | Keyword | Monthly Search Volume
Buying Guide | best free vpn | 57,000
Buying Guide | best cordless vacuum | 48,000
Buying Guide | best bluetooth speakers | 6,700
Buying Guide | best portable monitor | 6,600
Buying Guide | best gaming tv | 4,900
Product Review | dyson am07 review | 1,200
Product Review | unagi scooter review | 1,000
Product Review | apple studio display review | 900
Product Review | dji mini 2 review | 800
Product Review | windmill air conditioner review | 600
Additional Superlative | fastest electric scooter | 5,900
Additional Superlative | fastest electric bike | 5,600
Additional Superlative | quietest blender | 700
Additional Superlative | quietest air purifier | 500
Additional Superlative | largest 3d printer | 450

The 433 keywords gave us a total of 5,184 search results.

Out of the 5,184 results, 1,491 were actual reviews from publications with Trust Ratings. The rest? They were mostly e-commerce pages, videos, and forums. Since those types need a different system to measure trust, we excluded them from our analysis to keep things accurate.
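Here is a minimal sketch of that filtering step, assuming each search result has already been matched against our Trust Rating database; the field names and example entries are illustrative only.

# Hypothetical result records after matching against the Trust Rating dataset.
results = [
    {"url": "https://example-reviewer.com/best-tvs", "kind": "review", "trust_rating": 61.2},
    {"url": "https://example-shop.com/tvs", "kind": "ecommerce", "trust_rating": None},
    {"url": "https://example-forum.com/t/best-tv", "kind": "forum", "trust_rating": None},
]

# Keep only actual reviews from publications with a Trust Rating; e-commerce
# pages, videos, and forums need a different trust model, so they are excluded.
reviews = [r for r in results if r["kind"] == "review" and r["trust_rating"] is not None]
print(f"{len(reviews)} of {len(results)} results kept for analysis")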

Diving Deeper into The Google Fake Review Problem

Researching products online has become a lot harder in recent years. Google’s constantly shifting search results and a steady drop in the quality of reviews from big outlets haven’t helped. Now, many reviews make bold testing claims that aren’t supported by enough, or any, quantitative test results.

We believe the 30 categories we analyzed paint a strong picture of tech journalism today. Sure, there are more categories out there, but given our timelines and resources, these give us a pretty accurate view of what’s really going on in the industry.

Nearly half the time, you’re dealing with unreliable reviews. And while some publications are faking tests, others may just be copying from other sites, creating a “blind leading the blind” effect. It’s almost impossible to tell who’s doing what, but it seriously undercuts the entire landscape of tech reviews.

So why are fake reviews such a big problem? Money. Real testing is expensive, and for some publishers, it’s easier—and cheaper—to cut corners or just fake it to get a better return on investment.

It’s not just small players doing this. The biggest names in the industry are guilty, too. These corporate giants are leveraging their influence and authority to flood the web with fake reviews, all in the name of bigger profits. Let’s break down how they’re fueling this problem.

The Corporate-Media Problem

While our dataset features hundreds and hundreds of publications, there’s another layer to them that’s often invisible: the parent company.

While it might seem like individual websites are the primary offenders, the reality is more complex. Many of these websites are often owned by the same corporation. Imagine picking fifteen publishers out of a bucket—despite their different names, several of them might belong to the same company.

For example, Future PLC owns 29 sites in our database; 17 have been designated as “Fake Reviewers”, while another 8 are labeled as untrustworthy. These sites aren’t tiny, either – on the contrary, Future owns extremely high-traffic sites that are full of fake reviews, like TechRadar, GamesRadar and What Hi-Fi?

This is troubling, because it means that the benefits of being owned by a single parent company (consistency in branding and management) aren’t facilitating what really matters: rigorous, objective testing of products.

Instead, we’ve noticed that publications are more likely to share information and photos, amplifying the spread of half-baked work – or outright misinformation – across multiple platforms.

In the worst cases, it means the parent company has the opportunity to be unscrupulous to push products, viewpoints, or worse. Most publishers, instead of holding companies and brands accountable, are calling them clients and selling them ad space – so they are incentivized to avoid negative, unbiased reviews.  

Corporate media outlets are more likely to manipulate their reviews than independent sources (though independents still have significant problems.) In fact, every “Big Media” company has more fake reviewers under their belts than any other classification.

Key Takeaways

  1. Corporate publications are overwhelmingly unhelpful, untrustworthy, or outright fake. 89% of the corporate publications in our dataset are fake reviewers or labeled untrustworthy.
  2. You’re more likely to run into a fake reviewer when reading corporate-owned publications. 54% of the corporate-owned publications in our dataset have been classified as fake reviewers.
  3. No corporate publication is “Highly Trusted” according to our data and trust ratings. Out of the 202 corporate publications, there isn’t a single one that manages a Trust Rating of 90%. The highest Trust Rating a corporate publication earns is 82.4%.
  4. Corporate publications dominate web traffic despite being extremely unhelpful. Of the 973 million total monthly visits every site in our dataset sees combined, corporate publications receive 85% of that traffic.
  5. You have a better chance of getting useful information from an independent publication – but not by much. 6.8% of the independent sites we researched are trustworthy or highly trustworthy, while just 4.9% of corporate sites are trustworthy (and not a single one is highly trusted.)
  6. Future PLC
    • Future owns the most publications out of any of the corporate media companies in our dataset, and has the most fake reviewers as well. Of the 27 publications they have, 17 of them are fake reviewers, including major outlets like TechRadar, GamesRadar, What HiFi? and Windows Central.
    • The two highest traffic publications that Future PLC owns are Tom’s Guide and TechRadar, and together account for 63% of the traffic that Future PLC publications receive. Unfortunately, TechRadar is classified as a Fake Reviewer (57.06% Trust Rating), while Tom’s Guide is stuck with Mixed Trust (65.66% Trust Rating.)
  7. Dotdash Meredith
    • Dotdash Meredith features some of the highest aggregate traffic numbers, and leads all of the parent companies in combined traffic. Unfortunately, 9 of the 13 publications they own in our dataset are Fake Reviewers.
    • A staggering 35% of the total traffic that Dotdash Meredith’s publications receive goes directly to Fake Reviewers, including websites like Lifewire, The Spruce and Better Homes & Gardens.
    • The two highest traffic publications Dotdash Meredith owns, People and All Recipes, are classified as Not Trusted. This is troubling – despite not claiming to test products, they fail to establish what limited trustworthiness they can.
  8. Hearst
    • The 9 Hearst publications we analyzed attract a hefty 38.8 million monthly visitors (source: Hearst), yet their average trust rating is just 32.99%. High traffic, but low trust.
    • Example: Good Housekeeping has a troubling amount of fake testing, plaguing 59% of the categories it claims to test.
    • Only one publication owned by Hearst manages a Trust Rating higher than 50%, and that’s Runner’s World. Unfortunately, even it can’t break 60%, with a Trust Rating of just 59.98%.
  9. Conde Nast
    • 6 of the 7 premium brands we analyzed from billion-dollar conglomerate Conde Nast faked their product tests, including WIRED, Vogue and GQ.
    • Wired is faking their reviews with a 32.36% trust rating across the 26 categories it covers. Worse still, of the 21 categories they claim to test in, 15 of them have faked testing.
  10. Valnet
    • Valnet is the only major parent company to have fewer publications faking their testing than not. Unfortunately, the 5 they own that aren’t faking their testing often aren’t testing at all, and none of them are classified higher than Not Trusted.
    • Valnet also earns the very bizarre accolade of receiving the least amount of traffic to publications labeled Fake Reviewers, meaning consumers are less likely to be served fake testing – unfortunately, the rest of the traffic is going entirely to publications that are Not Trusted.

In our dataset, corporate-owned publications are those marked with “No” in the independent column. These are publications owned by larger media conglomerates, publicly traded companies, or those that have raised external capital.

Below is a table of the top 15 media conglomerates that dominate the product and tech review industry. For our study, we analyzed publications from the biggest one, DotDash/Meredith, as well as Hearst (the oldest), Conde Nast (the most well known), Future (almost $1 billion in revenue), and Valnet ($534.1 million).

Parent Company | Number of Publications | Estimated Annual Revenue | Monthly Estimated Traffic (Similar Web)
DotDash/Meredith | 40 (13 analyzed) | $1.6 billion | –
ZiffDavis | – | – | –
Red Ventures | – | – | –
Hearst | 176 (9 analyzed) | $12 billion (source) | –
Internet Brands | – | – | –
Valnet | 25+ (9 analyzed) | $534.1 million | –
Conde Nast | 37 (6 analyzed) | $1.7 billion (source) | –
PMC | – | – | –
Arena Group | – | – | –
Future | 250+ (27 analyzed) | $986.1 million (source) | –
Vox Media | – | – | –
Accelerate360 | – | – | –
Digital Trends | – | – | –
Recurrent | – | – | –
BDG | – | – | –

Trust Rating Distribution 

We analyzed 70 publications across 5 corporate media giants, and their Trust Ratings ranged from a low of 2.5% to a high of just 65.66%. That’s right – not one parent company in our dataset scored above 66%. Here are some descriptive statistics of Trust Ratings for corporate-owned publications:

n = 70 // Mean = 35.47% // Median = 34.38% // Range = 2.50% to 65.66% // Standard Deviation = 15.61%

The graph below serves to illustrate just how entrenched the problem of fake reviews is in the largest parent companies in our dataset. Every single corporation has more fake reviewers under their belts than any other Trust classification.

Let’s dive into our 5 shortlisted parent companies, starting with the biggest fake reviewer of them all, Future PLC.

Future PLC

You’ve likely encountered Future PLC’s sites, even if the company name doesn’t ring a bell. They own some of the biggest names in tech and entertainment, like TechRadar, Tom’s Guide, and Laptop Mag—popular destinations for phone, TV, and gadget reviews. With over 250 brands under their umbrella, Future is the largest parent company we investigated.

On the surface, Future’s brands appear trustworthy, but a deeper look tells a different story. Despite their massive reach—having over 100 million monthly visitors and generating about $800 million a year—Future’s trustworthiness crumbles, earning a low 44.5% Trust Rating across 29 of their publications. 

The issue? Their reviews often lack the quantitative test results that prove the true performance of a product. For instance, you need a sophisticated luminance meter to measure the brightness of a TV in nits (cd/m²). Readers need those numbers to make informed decisions—especially from sites like TechRadar and Laptop Mag, where you expect detailed performance figures.

What’s even worse? Future has scaled its less trustworthy sites like TechRadar because it’s more profitable to do so, to the point where 17 of their sites earned the “Fake Tester” label. Skipping proper testing cuts costs, and the Fake Testers not only minimize overhead but still draw massive traffic and profits, proving that fakery pays off.

Meanwhile, their smaller sites we do trust like Mountain Bike Rider and AnandTech—which was recently shut down—receive far less traffic and attention. Why? Building genuine trust is more expensive and harder to scale.

The bottom line? Future is chasing profits at the expense of their readers. To win back trust, they need to stop prioritizing quick cash and scale, and instead focus on real testing, transparency, and putting reader trust ahead of shareholder demands.

Look at all of their Trust Ratings in the list below. See the huge group of sites labelled as Fake Testers versus the few that we actually trust. And to make it worse, the most trusted sites barely get any traffic, while the ones publishing fakery are raking in millions of visitors.

Classification | Website | Monthly Traffic | Trust Rating
Trusted | Mountain Bike Rider | 88,300 | 78.10%
Trusted | Home and Gardens | 981,600 | 73.31%
Trusted | AnandTech (Shut Down) | 79,400 | 71.40%
Trusted | Live Science | 2,700,000 | 70.10%
Mixed Trust | Tom’s Hardware | 1,781,824 | 67.79%
Mixed Trust | Tom’s Guide | 17,303,587 | 65.66%
Mixed Trust | Fit & Well | 32,865 | 64.35%
Low Trust | Ideal Home | 408,563 | 52.45%
Not Trusted | Guitar World | 227,600 | 43.38%
Not Trusted | Marie Claire | 1,031,088 | 40.55%
Not Trusted | PCGamer | 40,647 | 38.38%
Not Trusted | IT Pro | 59,000 | 37.77%
Fake Reviewer | TechRadar | 16,098,158 | 57.06%
Fake Reviewer | Windows Central | 648,223 | 51.41%
Fake Reviewer | Laptop Mag | 455,637 | 50.22%
Fake Reviewer | Real Homes | 9,782 | 49.09%
Fake Reviewer | Top Ten Reviews | 39,100 | 41.16%
Fake Reviewer | T3 | 254,915 | 40.95%
Fake Reviewer | iMore | 295,802 | 40.06%
Fake Reviewer | Gamesradar | 6,546,054 | 39.01%
Fake Reviewer | Digital Camera World | 484,902 | 31.82%
Fake Reviewer | What Hifi | 1,261,923 | 31.44%
Fake Reviewer | Android Central | 671,700 | 31.25%
Fake Reviewer | Twice | 1,200 | 28.45%
Fake Reviewer | Creative Bloq | 635,732 | 24.43%
Fake Reviewer | Music Radar | 253,185 | 23.77%
Fake Reviewer | Cycling Weekly | 378,172 | 19.97%
Fake Reviewer | Livingetc | 127,100 | 19.53%
Fake Reviewer | Louder | 3,115 | 11.90%

n=29 // Mean: 44.65% // Median: 40.95% // Range: 11.90 – 78.10% // Standard Deviation: 18.03%

The mean being a failing Trust Rating is disappointing. The wide range of trust ratings and high standard deviation of 18.03% show how inconsistent Future’s reliability is.

As for patterns, we noticed how Future brands score better in certain Trust Rating categories versus others.

They’re transparent about their staff and authors, who are award-winning tech journalists, which is why they score high in Authenticity and Expertise. And they actually use the products they review.

But the big problem is that they rarely provide quantitative test results, units of measurement, and testing equipment.

Their scoring systems also lack precision, so they scored poorly in the Scoring System and Testing & Data Science categories.

For more detail on how Future does in each Category of Performance, check out the following table.

Millions of readers are coming to these sites expecting trustworthy reviews to guide important purchasing decisions, but the Trust Ratings say otherwise. When you’re serving content at that scale, missing the mark this often is a huge red flag.

Let’s take a look at an example from one of their publications to see the lack of test results in action.

TechRadar

TechRadar is one of the most popular tech sites in the world. They earned an overall Fake Reviewer classification with a 57.06% Publication Trust Rating. 

We investigated 27 of their product categories, and 8 received a Fake Reviewer classification, including gaming monitors and drones. We trust their coffee maker (85.27% Trust Rating) and VPN (81.00%) reviews, but we steer clear of their fan (11%) and cell phone insurance (15%) reviews.

TechRadar’s review style often gives the impression of thoroughness—they definitely use the products they review. They almost always score well on Test Indicator 8.5, which looks for the reviewer using the product in a realistic scenario. But they tend to stop short of real performance testing, leaving out the test results and benchmarks needed to back up their testing claims.

They provide units of measurement only half the time and barely include quantitative test results. So sometimes they test, but it’s not consistent enough.

Here’s what we found in their Gaming Monitor category, for example. They earned a 39% Trust Rating in this category, and we found their claim to test to be untruthful (Test Indicator 8.11).

We investigated their HyperX Armada 27 review, and right off the bat, you’ll notice that at the top of all their product reviews, TechRadar displays a message saying they test every product or service they review for hours.

So they’re setting our expectations immediately that we should see test results on the Armada 27 in this review.

We know that the author (John) definitely had this monitor in front of him at one point and used it thanks to all of the real photos. But we couldn’t find any quantitative test results in the review–only specifications, which the manufacturer already provides.

For monitors, you need to be measuring things like brightness, input lag, color accuracy, and response time.

If they were actually testing color gamut and brightness, they would be mentioning equipment like luminance meters, colorimeters, and calibration software.

They also didn’t get any points on Indicator 8.4, where we look for correct units of measurement on the test results they provide. There weren’t any mentions of nits or cd/m² (for brightness), milliseconds (input lag), etc., aside from in the specs.

The How We Test section at the bottom of the review isn’t helpful either. There’s no dedicated Gaming Monitor Testing Methodology to be found on TechRadar. That’s another indicator (8.1) that they never get points for across their categories.

The Armada 27 is actually a great gaming monitor, so John gives a Performance score that makes sense. But without units of measurement or test results to back up their claim to test, the review is unreliable by itself.

Again, this is a disappointing pattern across many of TechRadar’s other categories where they also end up labelled as Fake Reviewers. You can dig into more in the table at the bottom of this section.

But TechRadar doesn’t have bad Trust Ratings all around. They still get some credit for testing in certain categories, like coffee makers (85.27% Trust Rating). They’re the third most Trusted publication for coffee maker reviews behind CNET and TechGear Lab.

Our team investigated their Zwilling Enfinigy Drip Coffee Maker review by Helen McCue. She definitely used the coffee maker to brew a full carafe plus provided her own quantitative test results.

In the screenshot below, she measured brewing speed by brewing a full carafe in about nine minutes. Notice how she included the unit of measurement (Indicator 8.4).

While this section is already helpful, it would be even more helpful if she named the equipment she used like a timer.

She also measured the temperature of the coffee (°F or °C) immediately after brewing and 30 minutes after sitting on a warming plate.

Again it’d be helpful to know the model of thermometer she used to obtain the temperatures.

She answered two out of three Test Criteria Indicators, so the only test result missing was the flavor of the coffee brew (measured in pH or with total dissolved solids).

Helen’s review is still very helpful thanks to her test results and experience using the Zwilling coffee maker. Her review contributed to TechRadar’s great coffee maker Trust Rating.

For more reliable reviews from TechRadar, like VPNs and printers, check out the table below.

[insert airtable embed of other categories]

GamesRadar

If you’re a gaming enthusiast or someone who loves all things entertainment, you’ve likely come across the popular GamesRadar. They were given the Fake Tester label with a mediocre 39.01% Publication Trust Rating.

We looked at 6 of their product categories, and half of them contain fake testing claims. We only trust one category of theirs – TVs (73.80% Trust Rating) – which had enough test results and correct units of measurement to pass. We’re definitely avoiding their router (23.00%) and office chair reviews, though.

GamesRadar’s reviews look legit at first. They’re usually written by expert journalists with at least 5 years of writing experience, and they definitely use the products. They take tons of real photos, so GamesRadar tends to score well on Test Indicators 7.1 and 7.2.

But they skimp on testing, providing no hard numbers or units of measurement despite claiming to test. They also barely have any category-specific test methodologies published (Test Indicator 8.1).

We found this fake testing claim and lack of evidence in their Router category, for example, hence its Fake Tester label.

We looked into their ASUS ROG Rapture GT-AX11000 Pro review, and right away, you’ll notice that while they claim to test their routers, there’s little evidence of test results.

At the bottom of the review, you’ll notice how the author Kizito Katawonga claims to test the router.

[screenshot]

He explains his process for “testing”: he set up the ASUS as his main router, connecting 16 to 20 household devices and dividing them into different network channels. He then tested it through regular usage, including gaming and streaming, but this approach lacks the objective, data-driven testing needed for a comprehensive review.

If you scroll up to the Performance section, he admits he doesn’t have the equipment to properly test the router’s performance objectively.

[screenshot]

So, if he isn’t able to test it properly… why is he saying he tested it? Now we’ve lost confidence in the reliability of this review.

If you scroll further up, he mentions the specifications in detail, but when it comes to actual performance data, nothing is provided. He simply lists specs like maximum speeds and talks about how the router should perform.

[screenshot]

But he says nothing about how the router actually did perform in terms of quantitative test results. The author should have tested the router’s download/upload speeds, latency, and range using tools like browser speed tests, ping tester apps and heat map software.

This lack of real testing isn’t just limited to routers—it’s a recurring issue in several of GamesRadar’s other categories with low trust scores. You’ll find similar patterns across the board in the table below.

However, not all of GamesRadar’s reviews are unreliable. We trust their TV category, for instance, which got a passing Trust Rating of 73.80%.

A team member investigated this insightful LG OLED C1 review.

The author, Steve May, found the HDR peak brightness to be 750 nits, so he got GamesRadar points for one Test Criteria indicator and the correct units of measurement (Indicator 8.4).

[screenshot]

The only thing missing for that measurement is what kind of luminance meter he used.

Same story with his input lag measurement of 12.6ms.

[screenshot]

These measurements are helpful, and for even better transparency, we’d like to know the input lag tester and/or camera he used.

He provides some real photos of the TV’s screen and back panel.

[screenshot]

So there’s even more evidence that he used this TV.

We’re overall more confident in the reliability of this review on the LG OLED C1 TV versus that ASUS router review. These test results are why GamesRadar’s claim to test in TVs was found to be truthful.

If you want to dig into the other 4 categories we investigated on GamesRadar, check out the table below.

[insert airtable embed of other categories]

What’s the future hold for Future PLC?

As you see, Future’s reviews lack test results, units of measurement, and clear methodologies needed to back up their test claims. They’re still written by expert journalists who definitely use the products. But without the testing evidence, their credibility takes a big hit. Ultimately, this reveals a huge problem—Future is prioritizing profits over readers.

To rebuild trust, they need to make some changes. If they have the hard data, equipment, and methodologies, then they should simply show the work.

If Future can’t provide the evidence to back up the testing claims, it’s time to adjust the language in their reviews. Rather than saying products are “tested,” they should call these reviews “hands-on”, meaning that they’ve used the products without rigorous testing.

They should also remove the “Why Trust Us” banners at the top of every review on fraudulent sites like TechRadar and Digital Camera World.

These changes would eliminate any perception of fakery and bring a level of transparency that could help restore trust. Future still publishes valuable reviews, but they need to align with what they’re actually doing.

Transparency is key, and Future has the potential to lead with honest, hands-on reviews, even if they aren’t conducting full-on tests.

Dotdash Meredith

Google receives over one billion health-related searches every day—and you’ve probably been one of them at some point. Health.com often tops the list of results when people look for advice. It’s one of 40 brands under Dotdash Meredith, a media giant founded in 1902, now generating $1.6 billion annually. While Dotdash has fewer publications than Future, it’s the biggest parent company we investigated in terms of revenue.

Given its legacy, you’d expect all their content to be trustworthy. While their home and wellness advice is generally solid, their product reviews often fall short.

The parent company scored a mediocre 40.53% Trust Rating across 13 of their publications. Why? Most of their sites are publishing fake “tested” reviews because they’ve shifted to creating faster, more profitable content.

Once focused on offering more educational content, these brands have now expanded into affiliate marketing, treating product reviews as another way to drive revenue, even though thoroughly testing products, as we know, is both expensive and time-consuming.

Take Health.com, for example, which falls under the YMYL (Your Money or Your Life) category. Content that affects health or finances should be held to the highest standards—well-researched and verified by experts. Yet their “Best Air Purifier for Mold” article claims to test products, but there’s no sign of quantifiable results like ppb or µg/m³—essential measurements for air quality. At best, it seems like they’re just copying information from the back of the box. This is a huge red flag for a site that deals with health-related topics.

Fraudulent reviews are a trend across several Dotdash publications. While Serious Eats remains trustworthy, it only receives a fraction of the traffic compared to sites like Health.com and The Spruce, which present misleading “How We Test” sections. These sections claim to test important criteria but fail to provide real measurements or data, deceiving readers.

Then there’s People.com, where they test simple things, like vacuum battery life, just to say a product was “tested”. But it’s still unhelpful not to test the important factors, like debris pick-up on carpet and hard surfaces.

What does this mean for you? Dotdash Meredith’s focus is on driving sales over providing trustworthy content for millions of readers a month. For health and home advice, it’s important to question what you read—because it’s your life and your money on the line.

The publications DotDash owns break down as follows:

Classification | Website | Monthly Traffic | Trust Rating
Trusted | Serious Eats | 967,000 | 70.87%
Not Trusted | Allrecipes | 115,400,000 | 48.70%
Not Trusted | People | 186,400,000 | 38.42%
Not Trusted | Health.com | 12,910,000 | 25.55%
Fake Tester | Lifewire | 14,906,047 | 54.32%
Fake Tester | The Spruce | 31,882,371 | 51.73%
Fake Tester | Better Homes & Gardens | 16,360,000 | 50.50%
Fake Tester | The Spruce Eats | 21,727,019 | 37.67%
Fake Tester | Real Simple | 12,510,000 | 37.15%
Fake Tester | The Spruce Pets | 1,000,000 | 36.35%
Fake Tester | Trip Savvy | 5,103,544 | 30.74%
Fake Tester | Very Well Health | 27,430,000 | 24.70%
Fake Tester | Food and Wine | 1,300,000 | 20.15%

DotDash has this pattern where their sites have a How We Tested section towards the bottom of the review. They usually say they test all these criteria, like noise level (dB) and air filtration rate (ACH or CADR) in Very Well Health’s air purifier buying guide. But the truth is that they never provide the actual test results in their content. The same goes for The Spruce’s air conditioner buying guides. It’s misleading, and in the case of Health.com, it means they’re outright faking the testing on their air purifiers.

Some of their expert journalists go all out with the visuals and tests, but they’re not testing the correct things – People.com, for example, measures irrelevant criteria in its vacuums category.

Lifewire is another example. Their expert journalists use the products and take tons of real photos, and occasionally they’ll provide test results. But showing this evidence is far from consistent, and we never see mentions of the testing equipment they used.

• Dotdash Meredith – The Spruce – https://www.thespruce.com/best-roombas-4693405 – A good example of a site that clearly is doing something with the products but isn’t providing any data.
• Dotdash Meredith – Very Well Health – https://www.verywellhealth.com/best-air-purifiers-for-allergies-4170072#toc-how-we-tested-air-purifiers-for-allergies – They have a clear testing method and note what measurements they take but provide basically none of them; even with their sound levels, they’re missing on some products.

Valnet

Out of the publishing juggernauts we focused on, Valnet is easily the youngest. Formed in 2012, the Canadian company owns publications covering everything from comics to smartphones, and has its fingers in popular media with sites like CBR and MovieWeb.

In our dataset, Valnet has 9 publications under its belt, which account for 20.5M monthly visitors. Unfortunately, the company has a poor overall average Publication Trust Rating of 36.56%. With sites like Make Use Of and Android Police in its fold, this shouldn’t be the case – both are known stalwarts for tech reviews, but neither is doing their due diligence.

Despite clear testing guidelines from both MUO and AP, our investigations into several of the categories produced some troubling results. We get it, testing TVs and soundbars is difficult, but if you can’t do so thoroughly, it’s best not to claim you are.

The breakdown of the publications it owns is as follows:

Classification | Website | Trust Rating | Monthly Traffic
Not Trusted | XDA Portal | 45.71% | 743,000
Not Trusted | Hot Cars | 43.93% | 288,500
Not Trusted | Game Rant | 25.99% | 13,644,902
Not Trusted | How to Geek | 25.52% | 2,000,000
Not Trusted | Pocketnow | 14.60% | 27,900
Fake Reviewer | Pocket Lint | 47.86% | 262,611
Fake Reviewer | Make Use Of | 44.65% | 2,037,372
Fake Reviewer | Android Police | 40.92% | 1,560,422
Fake Reviewer | Review Geek | 39.87% | 12,539

One of Valnet’s juggernauts, Game Rant, has the lion’s share of the traffic, but it’s Not Trusted, which sets the tone for what to expect from the company’s publications. The average reader opening a Valnet-owned pub is going to find content rife with serious trust concerns, and on the off chance it isn’t one of the Fake Reviewers, it’ll still be untrusted.

Valnet’s primary issue has less to do with Fake Reviewers (though that’s still a significant concern) and more that if they’re not serving the public fake reviews, they’re simply not trusted.

It’s one thing to claim you test and lie about it; it’s another thing altogether to not do enough in our 55-point inspection to manage even a Low Trust score. Not Trusted publications are flat out not providing useful information, and the Trust Ratings we see reflect that.

Game Rant, for example, falls flat even when they DO test. Their gaming mice reviews mention testing, but the testing they do is the bare minimum – checking customization software. This is the easiest part of testing a gaming mouse, and provides only a nominal amount of useful information for the user: what does the software let you configure?

More important data, like click latency and sensitivity, matter much more to the consumer when it comes to making an informed decision.

Game Rant isn’t doing too much that’s wrong in this category – they’re just not doing enough. That’s the position Valnet finds itself in: when they’re not faking it, they’re not doing enough.

All told, Valnet owns more than 25 publications, including the 9 we covered, making them the fourth largest parent company in this analysis.

• Valnet – Make Use Of – https://www.makeuseof.com/creative-soundstage-v2-review/ – No testing data.
• Valnet – Android Police – https://www.androidpolice.com/best-vpn – No speed testing data.


Hearst Digital Media

Hearst is a truly massive publishing entity with a very long history that stretches all the way back to 1887.

Not all of it is good. The founder, William Randolph Hearst, is infamous for his leverage of yellow journalism to gain prominence, and in the modern day, Hearst Digital Media (their digital publishing arm) hasn’t fallen far from the tree.

Fake Reviewers run rampant in Hearst’s stable of publications. Even though we covered only 9 Hearst-owned publications (they own hundreds, by the way), six of them were plagued with fake reviews.

Combined, the 9 sites Hearst owns in our dataset pull in 35.7M monthly views. Don’t let the small number of sites fool you, though – Hearst is massive, and owns over 175 online publications, making it the second-largest parent company (for online publications) in our dataset.

According to Hearst themselves, they see 146 million visitors monthly and have 254 million social followers.

Trust Classification | Website | Monthly Traffic | Publication Trust Rating
Fake Reviewer | Best Products | 353,189 | 33.52%
Fake Reviewer | Bicycling | 405,853 | 43.65%
Fake Reviewer | Gear Patrol | 398,813 | 25.80%
Fake Reviewer | Good Housekeeping | 13,861,413 | 38.62%
Fake Reviewer | Men’s Health | 7,600,000 | 9.70%
Fake Reviewer | Popular Mechanics | 1,017,018 | 28.27%
Low Trust | Runner’s World | 2,700,000 | 59.98%
Not Trusted | Car And Driver | 1,492,568 | 45.05%
Not Trusted | Harper’s Bazaar | 789,847 | 29.30%

Nearly all of Hearst’s traffic comes directly from Good Housekeeping, Men’s Health and Harper’s Bazaar.

Unfortunately, Good Housekeeping and Men’s Health have been classified as Fake Reviewers. Harper’s Bazaar hardly fares any better – while it’s not a fake reviewer, it is labeled (rather firmly) as Not Trusted, partly owing to its 29.30% Publication Trust Rating, signaling that the publication offers very little to readers.

The failures of Good Housekeeping are particularly bothersome, because consumers have long used it to help confirm a product was going to be worth it. It’s a publication that’s older than Hearst itself, founded back in 1885.

Unfortunately, they simply fail to live up to their testing claims across a troubling number of categories, which casts serious doubt on their product recommendations. 

Consider their approach to testing air purifiers in this review. They constantly make mention of testers and testing, but their testing method is simply to see how their testers react to the air purifier being on.

One of their “tests” was simply to turn the air purifier on and see if a tester’s hayfever symptoms went away; they noted that the symptoms improved but did not entirely go away.

This is an excellent example of real-world use, and it’s certainly helpful, but it doesn’t offer anything quantitative.

How much did the air quality improve? Why didn’t the symptoms fully go away? Was the air purifier struggling to fully clean the air?

We’ll never know, because the product wasn’t properly tested.

It’s examples like that that have landed Hearst Digital Media in an extremely poor position: they have the lowest average Publication Trust Rating of the parent companies we covered, at 32.65%. This means that, as a parent company, it has the greatest amount of ground to cover.

| Parent Company | Publication | Example URL | Issue |
|---|---|---|---|
| Hearst Digital Media | Good Housekeeping | https://www.goodhousekeeping.com/uk/product-reviews/tech/g25317749/best-tvs/ | No testing data. |
| Hearst Digital Media | Popular Mechanics | https://www.popularmechanics.com/home/g1549/best-window-air-conditioners/ | Easy smoking gun – tons of “test” mentions, literally zero data. |


Condé Nast

Note: Condé Nast is owned by Advance Publications

Formed in 1909, Condé Nast has its roots in glamorous publications like Vogue, but has expanded its reach into a variety of spaces, including tech and food with publications like Bon Appétit and Wired.

Our dataset covered 6 of the publications they own, and that adds up to 18.8M monthly visitors. The majority of that traffic, however, goes to Wired and Bon Appétit, both of which suffer heavily from fake reviews.

Wired suffering from fake reviews is particularly troubling, because it’s a long-running publication that readers have traditionally relied on for useful tech reviews backed by meaningful testing.

For example, Wired simply doesn’t back up their testing claims in many of their reviews. Between a lack of any kind of meaningful data, spotty use of pictures that would show they actually had and used the products, and an overreliance on specification data, they earn the Fake Reviewer label multiple times across several categories.

Consider this webcam review, which is filled to the brim with claims of testing but contains no custom imagery. Nothing shows the webcam they supposedly tested in use. There are no screenshots of the video quality it outputs, to say nothing of actual video footage captured on it.

Simply put, there’s no data to lean on. No real measurements, nothing that suggests the use of actual testing equipment. Just qualitative assessment of a product.

Real-world use? Maybe, though even that’s hard to confirm. But actual testing? Nothing in the review gives their claim any weight.

It’s issues like these that bring Condé Nast to an average Publication Trust Rating of 34.09%.

| Trust Classification | Website | Monthly Traffic | Publication Trust Rating |
|---|---|---|---|
| Fake Reviewer | Architectural Digest | 3,020,481 | 36.55% |
| Fake Reviewer | Bon Appétit | 4,800,000 | 16.15% |
| Fake Reviewer | Epicurious | 2,159,930 | 27.95% |
| Fake Reviewer | GQ | 1,600,000 | 32.07% |
| Fake Reviewer | Wired | 6,306,775 | 32.36% |
| Low Trust | Ars Technica | 910,725 | 59.48% |

Similarly to Hearst, Condé Nast has almost 95% of its traffic going directly to publications that have been labeled as Fake Reviewers (17.8M of the 18.8M monthly traffic, to be specific), so most readers of publications owned by this parent company have a high likelihood of reading guides and product reviews that aren’t trustworthy.

At best this means these guides and reviews don’t offer much insight or useful information about a product, and at worst, they can push a person to make a purchase that isn’t right for them.

| Parent Company | Publication | Example URL | Issue |
|---|---|---|---|
| Condé Nast | Wired | https://www.wired.com/review/vilo-mesh-router | Some obvious smoking guns here. |
| Condé Nast | Epicurious | https://www.epicurious.com/shopping/spinn-coffee-maker-review-this-machine-will-revolutionize-your-morning-cup | No temperature data or other measurements present. |


Parent Company Conclusion: Fakery is Prominent and Concerning

Parent companies overall do not have a good Average Trust Rating. None of them break 45%, let alone 50%, which is a sign of major trust issues. If you come upon a site belonging to one of these parent companies, trusting their scores and reviews is ill-advised.

Trusted Publishers are Faking Product Tests


Key Takeaway: When looking at monthly traffic numbers, four out of five of the highest traffic sites in our dataset are classified as fake reviewers. Together, these four publications alone bring in almost 176M monthly views – that’s about 18% of the total traffic in our entire dataset.

The Fake Five

  1. Consumer Reports
  2. Good Housekeeping
  3. Popular Mechanics
  4. WIRED
  5. Forbes

The top five publications by trust and traffic paint a very troubling picture. Here’s some fast information on them:

| Website | Trust Classification | Parent Company | Revenue | Monthly Traffic | Publication Trust Rating |
|---|---|---|---|---|---|
| Forbes | Fake Reviewer | Forbes Media LLC | TBD | 65,586,355 | 34.96% |
| Good Housekeeping | Fake Reviewer | Hearst Digital Media | TBD | 13,861,413 | 38.62% |
| Consumer Reports | Fake Reviewer | Consumer Reports | TBD | TBD | 45.49% |
| Popular Mechanics | Fake Reviewer | Hearst | TBD | TBD | 28.27% |
| Wired | Fake Reviewer | Condé Nast | TBD | TBD | 32.36% |

Notice a trend? All five of these publications carry the Fake Reviewer classification, meaning they are faking their testing claims. In practice, that means roughly 138M monthly visitors – about 14% of the 974M total monthly traffic in our dataset – flow directly to fake reviewers.

Let’s dig into these fakers and find out why they were labeled this to begin with:

Consumer Reports

Many of you can remember a time when Consumer Reports was the trusted name in product reviews. Back then, if Consumer Reports gave a product the thumbs-up, you could buy with confidence. But these days, their reviews aren’t what they used to be.

As a nonprofit, they generate over $200 million annually, supported by nearly 3 million print magazine members and more than a dozen special-interest print titles covering autos, home, health, and food.

With over 14 million monthly online visitors and 2.9 million paying members, they’ve built a massive audience—and they’re taking advantage of it.

Their content is distributed across multiple platforms, including mobile apps and social media channels.

Nowadays, there’s substantial circumstantial evidence indicating that Consumer Reports hides their test results and duplicates their reviews across different products. Their disappointing 45.49% Publication Trust Rating reflects that.

While their car reviews are still pretty reliable, other product reviews and buying guides in other categories lack the in-depth test evidence that once distinguished Consumer Reports.

We reached out to Consumer Reports in December 2023, and we learned that they’ll give the actual test results if you contact them for the data. So the testing is happening, but getting that information is very inconvenient.

Many reviews are templatized, repeating the same sentences across different products’ reviews. And with little to no visible test results to back up their claims, their reviews are unreliable.

Subscribers shouldn’t be receiving these basic reviews nor have to jump through hoops to see the test results—especially when Consumer Reports used to set the standard for transparency and detailed product reviews.

18 of the 24 categories we investigated earned the Fake Reviewer class. There’s repeated circumstantial evidence across categories that Consumer Reports is concealing their test results and duplicating their reviews.

| Classification | Category | Trust Rating |
|---|---|---|
| Trusted | best coffee maker | 82.13% |
| Trusted | best printers | 69.20% |
| Mixed Trust | best office chair | 65.20% |
| Not Trusted | best headphones | 45.73% |
| Not Trusted | best vpn | 33.20% |
| Not Trusted | best cell phone insurance | 21.20% |
| Fake Reviewer | best electric scooter | 61.20% |
| Fake Reviewer | best electric bike | 57.20% |
| Fake Reviewer | best gaming chair | 57.20% |
| Fake Reviewer | best fans | 49.20% |
| Fake Reviewer | best router | 45.20% |
| Fake Reviewer | best air purifier | 45.20% |
| Fake Reviewer | best blender | 45.20% |
| Fake Reviewer | best keyboard | 45.20% |
| Fake Reviewer | best mouse | 45.20% |
| Fake Reviewer | best vacuum cleaner | 43.20% |
| Fake Reviewer | best tvs | 35.40% |
| Fake Reviewer | best projectors | 29.20% |
| Fake Reviewer | best computer monitors | 25.20% |
| Fake Reviewer | best gaming headset | 21.20% |
| Fake Reviewer | best air conditioners | 21.20% |
| Fake Reviewer | best webcams | 21.20% |
| Fake Reviewer | best soundbar | 17.20% |
| Fake Reviewer | best speakers | 13.20% |

Despite their clear claims of testing and a long-running reputation for “tested” product reviews, Consumer Reports earns a spot in the Fake Five because of a total lack of transparency and because many of their reviews are duplicates of one another.

The most glaring issue this lack of transparency creates is a complete absence of test results to support the claims they make in their reviews.

Additionally, their reviews often follow a cookie-cutter format, with the same sentences and nearly identical paragraphs repeated across different products.

Let’s take a look at their TV reviews as an example – a category that earned a failing 35.40% Trust Rating.

TVs

TVs are the most difficult product category to test. The proper test equipment is also expensive, though that probably isn’t an issue for Consumer Reports. So why did they earn that terrible 35.40% Trust Rating?

Immediately in the subheadline, you see the author James K. Wilcox state that Consumer Reports tests a huge amount of TVs every year.

By seeing that subheadline, a reader expects to see test results in this TV buying guide from the best product testers in the world.

However, that’s not the case, as you can see in the screenshot below.

The Results section shows that CR rates TVs based on multiple criteria, many of which are very important, like picture quality, sound quality, and viewing angle.

At first glance, you’d think this review of an LG TV looks pretty thorough and reliable. But a score out of 5 is pretty basic.

You may want to find out more. How did this LG TV get a perfect score on Picture Quality? Luckily, there’s a tooltip that should go further in depth and show the actual test results, right?

Unfortunately, when you mouse over the tooltips to get more information on their bizarrely simple scoring, there isn’t much beyond additional claims about the various criteria they tested.

Testing produces actual hard data, but there isn’t any here, just an explanation about what they were testing for.

Where’s the test data? You may try scrolling down the review to find those results. Then you’ll spot a Detailed Test Results section that must surely contain the test results to back up their scores. Right?

Even in the section that would seem most likely to provide “detailed results” there’s just qualitative language with no quantitative data. There are no color gamut graphs, no contrast screens, and no brightness measurements. Instead, we get statements like “brightness is very good” or “contrast is excellent.” 

If they were really testing contrast ratio (which they say is excellent), they’d provide a quantitative test result in an x:y format (Indicator 8.4, where we look for units of measurement). Consumer Reports never earned points for Indicator 8.4 because they hide their test results.

There’s also no mention of testing equipment like a luminance meter to measure the contrast or a colorimeter to measure color accuracy.
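To make the gap concrete, here is a minimal sketch (in Python, using hypothetical luminance readings rather than anything Consumer Reports has published) of how measured values turn into the kind of x:y result Indicator 8.4 looks for:

```python
# Minimal sketch: turning luminance-meter readings into a quantitative contrast
# ratio in the conventional x:1 format. The numbers below are hypothetical
# examples, not Consumer Reports data.

def contrast_ratio(white_nits: float, black_nits: float) -> str:
    """Full-on/full-off contrast, reported as x:1."""
    if black_nits <= 0:
        return "inf:1"  # effectively perfect blacks (e.g., an OLED panel)
    return f"{round(white_nits / black_nits):,}:1"

# Hypothetical readings from a full-white and a full-black test pattern:
print(contrast_ratio(white_nits=320.0, black_nits=0.04))  # -> "8,000:1"
```

Any review that actually ran the measurement could report a number like this in a single line.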

Even after you pay for a membership, you still don’t get any actual test results–you only get vague scores out of 5 for various criteria. You have to contact Consumer Reports to see the test results.

This is why they lost so many points in the Testing & Data Science category. Since we couldn’t see any of their quantitative test results, we had to mark their claim to test as “untruthful”. Their poor performance in that scoring category is what ultimately brought their Trust Score down below 60% in TVs. It’s the same story in many of their other categories. 

And even worse, their reviews are presented in a cookie-cutter format that uses templatized language.

Blenders was the worst case of these templatized reviews that we came across. Check out the reviews for the Vitamix Professional Series 750 and the Wolf Gourmet High Performance WBGL100S below. The “Detailed Test Results” sections? Exact duplicates, as you can see from the highlighted parts.

It seems whoever put these reviews together didn’t even try to change the wording up between the reviews. 

Still don’t believe us? Take a look at these other reviews’ sections and see for yourself.

Let’s also take a look at two different reviews for an LG OLED77G4WUA TV and a Samsung QN77S90C, both 77 inch OLED TVs.

The two TV reviews’ Detailed Test Results paragraphs above are identical except for one sentence. Both reviews lean on the same statements – “picture quality was excellent,” “color accuracy was excellent” – without offering any distinctive insights or data points for either model. These cookie-cutter reviews deliver generic product praise rather than meaningful analysis.

And again, TVs aren’t the only problem area at CR. Let’s take a look at some other categories where they claim to test despite publishing vague product reviews with no test results. The claim that they test products stands out most on pages where they ask for donations or memberships.

Routers

Routers is another problematic category with hidden test results and duplicated reviews. This category earned a pretty bad 45.20% Trust Rating.

Let’s take a look at the top of a single product review this time.

There aren’t any bold testing claims at the top of the review page for CR, unlike other sites.

But as you scroll, you’ll see the same Ratings Scorecard with the basic 5-point scoring system that they call “test results”.

Like we saw with how CR handled televisions, the test results section for their router reviews follows a very similar structure. Lots of different criteria are examined and supposedly tested and that’s how an item receives scores out of five per criteria. Unfortunately, the tooltips contain no useful information – just further explanation on what makes up any given test criteria without actually providing test results.

If you keep scrolling, you’ll see that the detailed test results for routers are even more anemic than they were for televisions.

There isn’t much to go off of here beyond qualitative explanations of how the router performed. There’s no information about actual download speeds, upload speeds, latency, or range testing.
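For contrast, here is a rough sketch (with invented numbers, not anything CR measured) of the kind of quantitative summary a genuine router test could publish – throughput and latency at a few distances, averaged over repeat runs:

```python
# Rough sketch of the quantitative data a router review could report.
# Every number below is invented for illustration only.
from statistics import mean

runs = {
    "same room (5 ft)":    {"download_mbps": [941, 933, 948], "latency_ms": [2.1, 2.3, 2.0]},
    "one floor up":        {"download_mbps": [612, 598, 621], "latency_ms": [3.4, 3.1, 3.6]},
    "far bedroom (40 ft)": {"download_mbps": [208, 195, 223], "latency_ms": [6.8, 7.2, 6.5]},
}

for location, data in runs.items():
    print(f"{location:21s}  {mean(data['download_mbps']):5.0f} Mbps avg  "
          f"{mean(data['latency_ms']):4.1f} ms avg latency")
```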

Membership and Donation Pages

To top it all off, Consumer Reports promotes their product testing across their website, including pages where they solicit donations or memberships.

This emphasis on rigorous testing is out of sync with their anemic review content and lack of test data.

The Correction We Demand

For a brand that built its name on transparency, having to jump through hoops to get actual test results is frustrating and bizarre.

People expect real, tested insights—not vague claims or recycled templates. And when that trust cracks, it’s hard to rebuild.

If Consumer Reports doesn’t change course, they risk losing what made them different: the confidence readers felt knowing they were getting honest, thorough advice. Without that trust, what’s left? Just another review site in a crowded field.

The corrections we demand? Show the test results and stop copying and pasting the same basic paragraphs across different product reviews. Give users access to real numbers, side-by-side product comparisons, and actually helpful reviews.

That’s how Consumer Reports can reclaim its place as a reliable source and boost their Trust Ratings. Because trust comes from transparency, not just from a good reputation.

Good Housekeeping

If you’ve ever grabbed a product off the shelf with the Good Housekeeping Seal on it, you know the feeling. That seal wasn’t just a logo—it was a promise.

It meant the product had been rigorously tested by the experts at the Good Housekeeping Institute, giving you peace of mind, right there in the store aisle. But things aren’t the same anymore.

What used to be a symbol of trust now feels like it’s losing its edge. Since 1885, Good Housekeeping has been a trusted name in home appliances, beauty products, and more.

With 4.3 million print subscribers and 28.80 million online visitors every month, they’ve built a reputation that millions have relied on. And now they’re taking advantage of that trust and cutting corners in their reviews.

They barely passed in vacuums (61.40%), which is unexpected since they’re known for testing appliances – it should be much higher.

They also barely passed in e-bikes (61.53%), and routers (61.40%) thanks primarily to their actual use of the products, though there aren’t any test results.

In the majority of their categories, however, it’s hard to tell if the products were actually put to the test due to a lack of quantitative test results.

Their worst categories are air purifiers (13.40%) and drones (20.20%).

Good Housekeeping has a similar story to Consumer Reports–it seems they’re hiding the test results, which is a big reason for their awful 38.62% Trust Rating.

For a brand that once set the gold standard in product testing, this shift hits hard.

Without the transparency they were known for, it’s hard to trust their recommendations. And that’s a tough pill to swallow for a name that’s been synonymous with reliability for over a century.

| Classification | Category | Trust Rating |
|---|---|---|
| Mixed Trust | best electric bike | 61.53% |
| Mixed Trust | best router | 61.40% |
| Mixed Trust | best vacuum cleaner | 61.40% |
| Low Trust | best webcams | 53.40% |
| Low Trust | best printers | 53.00% |
| Not Trusted | best blender | 49.40% |
| Not Trusted | best office chair | 41.00% |
| Not Trusted | best headphones | 36.60% |
| Fake Reviewer | best coffee maker | 45.00% |
| Fake Reviewer | best computer monitors | 41.00% |
| Fake Reviewer | best soundbar | 33.40% |
| Fake Reviewer | best projectors | 29.40% |
| Fake Reviewer | best air conditioners | 25.40% |
| Fake Reviewer | best tvs | 23.80% |
| Fake Reviewer | best keyboard | 21.40% |
| Fake Reviewer | best fans | 20.20% |
| Fake Reviewer | best drones | 20.20% |
| Fake Reviewer | best air purifier | 13.40% |

The way that Good Housekeeping (54.8M monthly views) handled their TV reviews is part of what spurred the intense analysis we now perform on product review testing: their testing claims weren’t reflected in the text they were publishing. But the problems don’t stop at TVs – out of the 16 categories they claim to test, 11 were found to have faked testing:

Soundbars

Good Housekeeping makes an immediate claim about their testing in the title of the post, and has an additional blurb about it down below the featured image. The expectation is clear: these 9 soundbars have been tested.

Another paragraph dedicates itself to assuring the reader that testing is being performed by dedicated tech analysts who cover a variety of home entertainment equipment. The implication, of course, is obvious: soundbars are also covered.

The actual review portion, however, leaves a lot to be desired. There’s no data despite clear mention of testers giving feedback, including direct quotes. No maximum sound levels are recorded, frequency response isn’t noted, and there’s no indication they tried to measure total harmonic distortion. Ultimately, the testing claims fall flat without anything to back them up – instead, we just have qualitative language assuring us that the soundbar sounds good and gets loud. A small spec box accompanies the text, but specifications data isn’t testing data.
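To show what even one of those missing measurements looks like in practice, here is a hedged sketch of estimating total harmonic distortion from a recorded 1 kHz test tone. It assumes NumPy and a mono recording; the synthetic tone at the end stands in for a real capture from a measurement microphone.

```python
# Sketch: estimating total harmonic distortion (THD) from a recorded test tone.
# Assumes a mono recording of a 1 kHz sine played through the soundbar; the
# synthetic tone below is a stand-in for a real measurement-mic capture.
import numpy as np

def thd_percent(signal, sample_rate, fundamental_hz, n_harmonics=5):
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    def peak(f):  # magnitude of the FFT bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]
    fundamental = peak(fundamental_hz)
    harmonics = [peak(fundamental_hz * k) for k in range(2, n_harmonics + 2)]
    return 100.0 * np.sqrt(sum(h * h for h in harmonics)) / fundamental

sr = 48_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.sin(2 * np.pi * 2000 * t)  # 1% 2nd harmonic
print(f"THD ≈ {thd_percent(tone, sr, 1000):.2f}%")  # ≈ 1.00%
```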

The dedicated “how we test” blurb isn’t any better. There’s plenty of mention of important performance criteria that’s supposedly being tested, but there’s no data or mention of tools being used to help facilitate this testing. It’s just a bunch of claims with no supporting data to show they did what they claimed to do.

TVs

The title doesn’t make any claims, so there’s nothing particularly out of place or unusual here.

The “How We Test” blurb makes a mountain of promises. Everything from measuring brightness with industry-standard patterns to investigating sound quality to look for “cinema-like” sound is mentioned. GH also notes they care a lot about qualitative performance criteria, in addition to the hard and fast numbers of things like brightness. Ease of use in day-to-day interactions with the TV is also part of their testing process. This is nice, but a TV that is great to use and extremely dim is not a particularly good television.

There isn’t much of use when you get to the actual review text, though. Beyond explanations of how good the TV looks (which are purely qualitative), there’s no data suggesting they actually tested. Mentioning how wide the color space is implies they measured the gamut – but nothing backs that up: no coverage percentages are given and no reference gamuts are named. Bright whites, deep blacks – there’s no data to support any of it, and no images either.
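As a concrete illustration of the missing number, here is a minimal sketch using hypothetical colorimeter readings and a simple triangle-area ratio (a rough proxy; true coverage uses the intersection of the two gamut triangles) to show how measured primaries become a percentage a reader can actually compare:

```python
# Minimal sketch: turning measured CIE 1931 xy primaries into a relative gamut
# size versus DCI-P3. Area ratio is a rough proxy for coverage, and the
# "measured" values below are hypothetical.

def triangle_area(points):
    (x1, y1), (x2, y2), (x3, y3) = points
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

DCI_P3   = [(0.680, 0.320), (0.265, 0.690), (0.150, 0.060)]  # red, green, blue
measured = [(0.675, 0.322), (0.268, 0.682), (0.152, 0.062)]  # hypothetical readings

print(f"Gamut area ≈ {100 * triangle_area(measured) / triangle_area(DCI_P3):.1f}% of DCI-P3")
```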

The Correction We Demand

The trust Good Housekeeping has spent generations building is at risk here. With a 38.62% Trust Rating, there’s a clear gap between the testing they claim to do and the evidence they provide.

If they can’t start showing their work, they need to get real about where thorough testing happens and where it doesn’t.

In some categories, like e-bikes and routers, their testing holds up. But in others—like air purifiers and Bluetooth speakers—it’s hard to tell if the products were actually put to the test.

If they can’t provide hard data, they need to state that their review is based on “research” instead of “testing.”

Like Consumer Reports, Good Housekeeping has spent decades earning consumer trust. But leaning on that trust without delivering transparency is a risky move.

They could jeopardize the reputation they’ve built over the last century. And once trust is broken, it’s hard to win back.


Popular Mechanics

Popular Mechanics has been a staple in science and tech since 1902, known for its no-nonsense, hands-on advice and practical take on how things work.

With a total reach of 17.5 million readers in 2023—split between 11.9 million digital readers and 5.69 million print subscribers—it’s clear that they’ve got a loyal following. Every month, their website pulls in 15.23 million visitors, all eager for insights on the latest tech, from 3D printers and gaming gear to electric bikes and home gadgets.

But lately, there’s been a shift.

Despite their history and resources, many of Popular Mechanics’ product reviews don’t quite measure up. They often skip the in-depth testing data that today’s readers are looking for, leaving a gap between their testing claims and the proof behind them.

This has landed them a disappointing Publication Trust Rating of 28.27%, raising doubts about the depth of their reviews. For a brand with over a century of credibility, this shift makes you wonder if they’re still delivering the level of rigor that their readers expect.

Popular Mechanics claims to test products across 13 categories, but there is strong evidence suggesting that thorough testing may not have been conducted in 11 of those categories:

| Category | Category Trust Rating |
|---|---|
| best air conditioners | 41.35% |
| best vacuum cleaner | 32.55% |
| best computer monitors | 21.35% |
| best tvs | 21.35% |
| best gaming monitor | 20.95% |
| best electric bike | 19.35% |
| best fans | 17.35% |
| best drones | 17.35% |
| best soundbar | 13.35% |
| best mouse | 13.35% |
| best webcams | 12.55% |

What does this mean?

A pretty severe breach of trust. Coasting on your reputation as a trusted source and just letting your reviews begin to decay is exploitative, and means you’re just cashing in public trust and goodwill for an easy paycheck.

Claims to test need to be backed up, but it’s rare for Popular Mechanics to even offer you units of measurement in their reviews to indicate they’ve done actual testing.

Fixing the problem doesn’t have to be difficult: Popular Mechanics could simply change their reviews so they don’t claim to test when they aren’t. It won’t dramatically improve their scores, but they won’t be misleading the public anymore at least.

Alternatively, they could live up to their history and start producing good reviews with thoroughly tested products, offering the kinds of insights and data that made them famous to begin with. The choice is theirs.

Air Conditioners

Let’s take a look at their reviews and product tests for air conditioners. To test this category well, it’s important to measure how long a unit takes to cool a space, in seconds or minutes.

Popular Mechanics makes claims to test right in the subheadline of their air conditioner buying guide and has a dedicated “Why Trust Us?” blurb that covers their commitment to testing.

The buying guide itself even has a small “How We Tested” segment promising that the air conditioners covered were tested in a real-life setting, with multiple important measurements taken, like cooling throw and temperature drops. There’s a minor red flag in the blurb, however: they note that some models weren’t tested, and to compensate, they “consulted engineers” and “scoured the specs.” The former is interesting – the general public usually can’t speak to engineers, but what an engineer says and what a product does aren’t necessarily aligned. The latter, “scouring” specs, is something anyone can do and doesn’t involve testing: it’s just reading a spec sheet, often included with the air conditioner itself.

Unfortunately, the segments dedicated to how each product performed don’t include many quantitative measurements to support the testing claims. There’s no data showing the temperature readings they supposedly took, nor any information about how long it takes an A/C to cool a room of a certain size. This is why they received a No for our AC testing question 8.5: Does the publication run quantitative tests for cooling and/or dehumidification speed (seconds/minutes)? There aren’t even noise level measurements, just qualitative language saying, in effect, “Yeah, it’s decently quiet.”

For all the claims of measurement, there aren’t any actual measurements to be found. The spec data is the one place we can find hard numbers, but spec data is freely available to anyone and doesn’t involve any testing.
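A measurement like that doesn’t require anything exotic. Here is a hypothetical sketch – the readings are invented – of how logged room temperatures become the “minutes to cool” figure the guide promises but never delivers:

```python
# Hypothetical sketch: turning logged room temperatures into a cooling-speed
# result. The readings below are invented, not Popular Mechanics data.
readings = [  # (minutes elapsed, room temperature in °F)
    (0, 84.0), (5, 81.6), (10, 79.8), (15, 78.1), (20, 76.9), (25, 75.8), (30, 75.0),
]

target_drop_f = 8.0
start_temp = readings[0][1]
for minutes, temp in readings:
    if start_temp - temp >= target_drop_f:
        print(f"Cooled the room by {target_drop_f:.0f}°F in roughly {minutes} minutes")
        break
else:
    print(f"Did not reach an {target_drop_f:.0f}°F drop within {readings[-1][0]} minutes")
```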

Vacuums

Popular Mechanics doesn’t make a testing claim this time, but their dedicated “Why Trust Us?” blurb leads to a page about their commitment to testing.

This time around, Popular Mechanics dedicates space to the expert reviewing the vacuum cleaners, who claims to have tested a wide variety of them. The list includes “several” vacuums the expert personally tested. “Several” is an important word here, because it means not every vacuum on the list was personally tested – in fact, “several” doesn’t even mean most of them were. Using research data to help make picks isn’t inherently bad, but it isn’t testing.

There’s a lot in this image, but it all comes down to one thing: there’s no test data. Despite claims of personally testing the vacuum, nothing in the actual guide suggests they did. Where’s the data on how much noise the vacuum makes? How much debris it picks up or leaves behind? Even the battery life, something that requires nothing more than a stopwatch, isn’t given an exact measurement – just a rough approximation. Sure, battery life can vary, but tests can be run multiple times to get an average, and we don’t see that here.
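The fix is trivial. Here is a minimal sketch, with made-up stopwatch times, of the repeated-run battery test that turns a rough approximation into an actual measurement:

```python
# Minimal sketch of a repeated-run battery test; the times are made up.
from statistics import mean, stdev

runtimes_minutes = [41.5, 39.8, 42.2]  # hypothetical full-charge runs at max power

print(f"Average runtime: {mean(runtimes_minutes):.1f} min "
      f"(±{stdev(runtimes_minutes):.1f} min across {len(runtimes_minutes)} runs)")
```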

So what’s next for Pop Mech?

Pop Mech does a poor job of living up to their testing claims because there isn’t anything on the page that really hammers home that they did their homework and tested everything they said they tested. This ultimately damages Pop Mech’s authority in practical, hands-on advice.

To regain trust, they should commit to sharing clear, quantitative results from their product testing, making sure that their readers know exactly how a product performs. Otherwise, we ask them to change their “testing” claims to “researched”.

WIRED

If you’ve searched for the latest tech trends, you’ve probably run into Wired. Since 1993, Wired has made a name for itself as a go-to source for everything from the newest gadgets to deep dives into culture and science.

With 21.81 million visitors a month, Wired reaches a huge audience, shaping opinions on everything from the latest gadgets to where artificial intelligence is headed.

They cover everything from laptops and gaming gear to electric bikes and smart home devices, making them a go-to for curious tech fans.

The problem is, despite their influence, a lot of Wired’s reviews don’t dig as deep as today’s readers expect. With a 32.36% Trust Rating across 29 categories, their recommendations don’t exactly inspire confidence.

Their reviews often skip key testing data or even real-world images of the products they claim to test, leaving us wondering just how thoroughly these products were reviewed.


Over time, Wired’s focus has changed. Their writing is still engaging, but their product reviews have become more surface-level, leaning heavily on impressions instead of detailed metrics.

That’s why we included them in the Fake Five—their massive online reach gives their reviews weight, but without solid testing data to back them up, it’s hard for readers to fully trust what they recommend.

For a brand that once set the bar in tech journalism, this shift has been frustrating for readers who expect more.

The path to reclaiming the reputation they built their name on is the same as it is for many of the Fake Five: show your work. When you claim to test, provide proof – photos, units of measurement, real-world, undeniable evidence that you’re actually testing products.

It’s difficult to test thoroughly, but everyone wins when proper tests are performed. Alternatively, they could simply stop claiming to test. Their Trust Rating won’t improve much, but it’s better than lying, and that’s worth something.

| Category | Category Trust Rating |
|---|---|
| best speakers | 38.40% |
| best 3d printer | 38.00% |
| best gaming monitor | 34.40% |
| best electric scooter | 34.00% |
| best webcams | 30.40% |
| best soundbar | 30.00% |
| best projectors | 30.00% |
| best tvs | 30.00% |
| best router | 26.40% |
| best vpn | 26.40% |
| best office chair | 26.40% |
| best robot vacuums | 26.40% |
| best vacuum cleaner | 26.40% |
| best printers | 22.40% |
| best computer monitors | 22.40% |
| best electric bike | 18.40% |
| best gaming mouse | 18.00% |

Gaming Mouse

Wired makes a testing claim right in the subheadline of their gaming mouse buying guide.

These are specs, which are always great to include, but specs don’t indicate you tested. You can get them from a product description or the back of the box, so we can’t consider this testing.

This is close to a test – after all, battery life is extremely important for wireless mice. However, the fact that the language around the time is so evasive leads us to have doubts. If you’re measuring battery life as a proper test, you’d have not only the time in hours but also minutes. This is what constitutes a good, rigorous test: getting good data that is concrete, and not so approximate. Unfortunately, this is also the battery life claimed by the manufacturer, which is commonly a “best case” estimate under specific conditions that the user might not be able to replicate. Including information like the response time but no proof you tested it also doesn’t help build a case for having actually performed any real testing.

The problem with Wired’s buying guide is that there’s nothing to support their testing claims. If these mice were indeed tested, they’d have concrete battery life, click latency, DPI tests, and software customization covered – but we don’t get test data for these criteria, just specs. Mentioning that the wireless receiver boosts the polling rate just doesn’t cut it, because there’s no explanation of how they got this data. Buying guides tend to be brief, but they can link out to reviews or test data to support their claims without sacrificing brevity.
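For what it’s worth, even a rough sanity check of a polling-rate claim is possible without lab gear. The sketch below assumes the third-party pynput package and continuous mouse movement while it runs; it estimates the report rate from the spacing of movement events, and OS event handling can distort it, so treat it as an approximation rather than a substitute for proper latency hardware.

```python
# Rough sketch: estimating a mouse's report (polling) rate from the spacing of
# movement events. Assumes the third-party "pynput" package and continuous
# mouse movement while sampling; OS event coalescing makes this approximate.
import time
from statistics import median
from pynput import mouse

timestamps = []

def on_move(x, y):
    timestamps.append(time.perf_counter())
    if len(timestamps) >= 2000:
        return False  # returning False stops the listener

with mouse.Listener(on_move=on_move) as listener:
    listener.join()

intervals = [b - a for a, b in zip(timestamps, timestamps[1:]) if b > a]
print(f"Estimated report rate ≈ {1 / median(intervals):.0f} Hz")
```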

Webcams

The top of the page doesn’t show us any testing claims, just a basic subheadline, and a nice opening image.

The testing claims roll in once we hit the opening paragraph, and they’re even comparative in nature, with the testing claims suggesting the camera is among the nicest of the ones they’ve tested. The pros section suggests testing too, with key performance criteria mentioned. Despite what the “test webcams” link might have you think in context, it’s actually a link to a “best webcam” guide, not to a testing methodology.

Multiple highlighted portions in this paragraph suggest not only usage but clear testing. The autofocus in particular – one of the most important aspects of a webcam – is said to have been put through its paces, doing an excellent job of maintaining focus up to 4 inches away. Or so the reviewer claims; there’s no imagery to showcase this.

Instead of imagery that gives us an idea of how the camera performs (for example, showing off the excellent autofocus), we instead just get stock images from the manufacturer. This doesn’t build a convincing case for the testing claims that Wired is making in this review, and it casts serious doubt over the whole review. While the text suggests that the webcam may have been used, the total lack of any kind of testing imagery and data makes it very hard to believe.

Conclusion

For a brand that’s all about tech deep-dives, Wired has some serious issues in how it backs up its product reviews, and with its 32.36% Trust Rating, the gaps are starting to show. To get back on track, Wired should focus on bringing more transparency into their testing—think real photos, hard data, and detailed results. This would help them reconnect with readers who want the facts to make smarter buying decisions.

Forbes

Forbes, founded in 1917 by B.C. Forbes, has been a cornerstone of financial journalism for over a century, renowned for its trustworthy coverage of finance, industry, and business.

The magazine earned credibility with business professionals through initiatives like the “Forbes Richest 400” and by maintaining high standards of factual accuracy.

However, over the past five years, Forbes made a calculated move to leverage its brand equity and trust to gain prominent placement in Google’s search results, expanding into areas like “best CBD gummies” and “best gaming TVs.”

This strategy, known as “parasite SEO,” has led to reviews that lack the depth and expertise Forbes is known for, earning them a terrible Publication Trust Rating of 34.96% that raises concerns about the credibility of their non-financial content.

With 65M monthly visitors, Forbes lands a Fake Reviewer Classification with a shocking 10 categories featuring faked testing out of the 21 they claimed to test in:

| Category | Category Trust Rating |
|---|---|
| best computer monitors | 42.20% |
| best vacuum cleaner | 42.20% |
| best speakers | 33.00% |
| best soundbar | 26.60% |
| best gaming chair | 26.60% |
| best keyboard | 26.60% |
| best projectors | 26.60% |
| best blender | 18.60% |
| best tvs | 18.20% |
| best gaming monitor | 14.60% |

Their Trust Rating reflects this poor performance: at 34.96%, even if they cleaned up their act, they’d still be deep in the red and would have a long way to go to reach a more respectable classification.

Too often, Forbes claims to test without actually backing up those claims, and if they want their product reviews to carry the same authority as their financial coverage, there’s a lot of work to be done.

Testing, providing imagery to prove the testing, real measurements, solid analysis – it’s a hard road to legitimacy, but one worth walking when everyone stands to gain from it.

Forbes has a lot of ground to cover, though – not only are they faking a substantial number of categories, they simply aren’t testing in the majority of the categories they cover, which means they have to ramp up testing across the board.

Alternatively, they could simply stop claiming to test altogether and note that they’re publishing “first impressions” reviews, or research-based reviews built on data from around the web. They’d have stopped lying, but they won’t earn a substantially better Trust Rating for doing so.

TVs

This testing claim is right in the title, front and center, so everyone can see it.

The text here is worrying. Testing is said multiple times, but in the same breath, Forbes also says that LG doesn’t share what the actual nits measurement is. Part of testing is to do that yourself – grab a colorimeter and measure the actual brightness. Publications that perform rigorous testing have the equipment to test, but it is clear from this text that Forbes isn’t actually testing – they’re just using the TV.

Methodology is huge for testing – it’s how you get consistent results and provide meaningful information to the reader. The lack of numbers offered here is the first red flag. While the inclusion of a photo of the distortion is extremely helpful and absolutely a step in the right direction, the important thing about viewing angles is the angle. Showing that distortion can occur is good – but not giving the actual angle this happens at is a major misstep.

We get more claims about testing and expertise here, as well as a few notes on what the reviewer finds important: great picture, solid refresh rate and console compatibility, along with good audio quality. The issue is that “great picture” is a multi-faceted thing. It’s an entire category of performance, with multiple criteria under it that all have an impact on the overall picture quality. Brightness, color gamut, contrast… the list goes on, and it’s a very important list for getting a complete “picture” on picture quality.

The highlighted segments above are huge red flags. There’s simply no data earlier in the review to support the claims being made. Things like color gamut, brightness, contrast ratio, and EOTF are all testable, measurable, and important, but none of them are mentioned. Instead, we get confirmation that specs were cross-checked and warranties were looked at, neither of which constitutes an actual test. Arguably the most concerning part is the claim that not only is this testing, but that it’s enough to recommend these televisions will last. Longevity tests can’t be done in an afternoon – they take months at a minimum, and ideally years. There’s no doubt Forbes actually used the TVs they got – there’s plenty of photo evidence of that. But they didn’t test them at all.

Gaming Headsets

The testing claim from Forbes is (once again) right in the title of the post. Forbes even claims they’ve put these headsets through their paces. Unfortunately, using a headset for hours isn’t quite the same as testing it – at least not entirely. There’s plenty you can learn about a headset by using it, especially given how much things like sound are a matter of taste, but other things, like maximum volume and latency, are firmly in the realm of measured testing, not simply wearing a headset through a few matches of CoD.

The first red flag comes up with the battery life. After stating how important battery life is, we simply get estimates instead of actual test data showing how long the battery lasted. And for all the reviewer’s talk about how important these factors are, getting only an approximation of battery life and no further concrete numbers (like maximum volume) is disheartening.

Ultimately, the biggest issue is the lack of measurements from Forbes. For all the subjective qualities a headset has (microphone quality, sound quality, comfort, feel) there are multiple objective qualities that require testing to determine. Battery life, latency, and range are all objectively measurable, but no data is provided. How far can you go with the headset before it cuts out? Without the data, it’s hard to say that Forbes actually tested any of the headsets it claims to have spent so much time on – though they definitely did use them. Even when Forbes seems to be providing an actual measurement (like charge time) the numbers match up one to one with what’s listed in a spec sheet from HyperX.

Conclusion

Simply put, they’re not worth trusting until they can put real testing results in front of readers and actually deliver on their claims of testing. They could also simply remove references to testing from categories where they were found to be faking it – though the less you test, the less your Trust Rating can climb.

Most Trusted Publications

The most trusted corporate-owned publication is still less trusted than the fifth most trusted Independent publication.

There is an interesting trend among the most trusted publications: they cover very few categories. Eight of the ten covered just one category, which suggests that focusing on a single category can improve overall Trust Rating. However, it is also possible that covering a few categories simply gives a publication fewer chances to “mess up.”

The 10 Most Trusted Publications

| Website | Number of Categories Covered | Average Trust Rating |
|---|---|---|
| Aniwaa | 1 | 81.42% |
| AV Forums | 1 | 82.15% |
| Electric Scooter Guide | 1 | 87.05% |
| Food Network | 1 | 81.85% |
| HouseFresh | 1 | 95.95% |
| Rtings | 16 | 99.35% |
| Sound Guys | 5 | 76.37% |
| Top 10 VPN | 1 | 102.20% |
| TVfindr | 1 | 82.00% |
| VPN Mentor | 1 | 97.45% |

[Graphs: Average Trust Rating by number of categories covered]

The data, however, suggests that this isn’t the case. Low coverage appears to be more strongly correlated with poor Trust Ratings than with strong ones. In fact, the data suggests the strongest overall number of categories to cover is 16. However, that is how many categories RTings covered – a publication notable for an average Trust Rating of nearly 100% – and it is one of only five publications to cover that many categories, so its outsized effect on the average for that bucket is obvious.
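The check behind a claim like this is straightforward. Here is an illustrative sketch – the five data points are hypothetical stand-ins, not our actual dataset – of correlating category count with average Trust Rating:

```python
# Illustrative sketch: correlating categories covered with average Trust Rating.
# The five data points are hypothetical stand-ins, not the real dataset.
from statistics import correlation  # Python 3.10+

categories_covered = [1, 1, 3, 16, 25]
avg_trust_rating   = [95.9, 5.8, 35.6, 99.4, 32.4]

print(f"Pearson r = {correlation(categories_covered, avg_trust_rating):.2f}")
```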

When it comes specifically to independent publications, the five most trusted are noted below:

Meanwhile, the five most trusted corporate publications are:

On the whole, the five most trusted independent publications have higher Publication Trust Ratings (which we derive by averaging a publication’s performance across all the categories it covers) than the five most trusted corporate-owned publications. In fact, the most trusted corporate publication is still less trusted than the fifth most trusted independent publication.
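For clarity, the Publication Trust Rating described here is simply the mean of a publication’s category-level ratings; a minimal sketch with hypothetical values:

```python
# Minimal sketch of the averaging described above; category ratings are hypothetical.
category_ratings = {"best tvs": 35.4, "best router": 45.2, "best soundbar": 17.2}

publication_trust_rating = sum(category_ratings.values()) / len(category_ratings)
print(f"Publication Trust Rating: {publication_trust_rating:.2f}%")  # 32.60%
```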

Least Trusted Publications

As stated in the previous section, the number of categories covered does not seem to strongly align with higher average Trust Ratings. As the graph above illustrates, the least trusted publications in our dataset with significant traffic cannot break even 10% Average Trust Rating across the categories they cover. Many of these publications cover just one category, too.

| Website | Number of Categories Covered | Average Trust Rating |
|---|---|---|
| Architizer | 1 | 5.75% |
| Good Morning America | 1 | 6.45% |
| HackerNoon | 1 | 3.60% |
| LolVVV | 1 | 2.50% |
| Reliant.co.uk | 1 | 4.20% |
| Slant | 1 | 7.30% |
| Tech Junkie | 1 | 6.10% |
| The Economic Times | 2 | 5.30% |
| The Sacramento Bee | 3 | 5.57% |
| TurboFuture | 1 | 2.50% |

The Independent Testers

The independent testers are a mixed bag. It’s possible to achieve high trust without raising outside capital – independent publishers like Rtings.com prove it – but independence is no guarantee of trustworthiness.

Key Takeaways

  1. You have a better chance of getting useful information from an independent publication – but not by much. 6.8% of the independent sites we researched are trustworthy or highly trustworthy, while just 4.9% of corporate sites are trustworthy (and not a single one is highly trusted).
  2. The only Highly Trusted publications are independent, and there are not a lot of them. Just 4 publications managed a “Highly Trusted” classification – 0.8%.
  3. Among the aspects of trust ratings, Data Science contributes most significantly to independent publishers’ scores, followed by Visual Evidence. This suggests that independent publishers may excel in providing data-driven, evidence-based reviews.
  4. The wide range of trust ratings among independent publishers (0.00% to 102.20%) indicates substantial variation in trustworthiness. This variation might be explained by factors such as resources, expertise, niche focus, and individual publication practices.

Definitions

For this analysis, independent publishers are defined as tech review publications that have not raised external capital, have not acquired other sites, are not a division of a larger conglomerate, and are not publicly traded companies. In our dataset, these publishers are marked with a “Yes” in the independent column.

Note: many of these independents may be faking tests simply to survive and compete – the corporate publishers that dominate search traffic may share the blame.

  • Takeaway: The chances of independent publications being fake reviewers are distressingly high. 39% of the independents in our dataset are fake reviewers, with testing claims that are not supported by testing data and custom imagery.

We have roughly a 60/40 split between independent and corporate publishers, skewed slightly towards indies. Here are some fast facts:

  • Of the 294 independent publishers we have in our dataset, 115 of them are Fake Reviewers. That means 39.1% of indies are fake testers.

By contrast, of the 202 corporate publishers in our dataset, 109 of them are Fake Reviewers. That means 53.9% of corporate-owned publishers are fake testers.

Key Takeaway: The only Highly Trusted publications are independent, and there are not a lot of them. There are just 4 publications that managed a “Highly Trusted” classification – 0.8%.

They perform exceptionally well, however, with an average Trust Rating of 98.8%.

Takeaway: Corporate publications dominate web traffic despite being extremely unhelpful. Of the 973 million total monthly visits every site in our dataset sees combined, corporate publications receive 85% of that traffic.

The 202 corporate sites we covered pull in 827M monthly visits combined; by contrast, the 294 independent sites manage only 145M. In total, corporate sites see about 5.7x more traffic – and, on a per-site basis, roughly 8x more.
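The arithmetic behind those figures, using the totals quoted above:

```python
# Arithmetic behind the corporate vs. independent traffic comparison above.
corporate_sites, corporate_traffic = 202, 827_000_000
indie_sites, indie_traffic = 294, 145_000_000

print(f"Total traffic ratio:    {corporate_traffic / indie_traffic:.1f}x")  # ~5.7x
print(f"Average per-site ratio: {(corporate_traffic / corporate_sites) / (indie_traffic / indie_sites):.1f}x")  # ~8.3x
```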


Limitations: no video reviewers were included in this analysis.

Product Categories in Focus

Trying to do research on products has become more difficult over the last several years. In addition to the ever-changing landscape of search results on the part of Google’s search empire, there has been a continuous shift (mostly a decline) in the quality of reviews being published by outlets. This isn’t to say they’re being written more poorly, but rather that many reviews now make testing claims that are either poorly supported or entirely unsupported.

Assumptions: we assume that the 30 categories we assessed are a good representation of tech journalism. We recognize that there are many other categories to cover, but given the timetables and cost involved, we felt these 30 accurately represent what is occurring in the industry.

Implications of our findings: almost half the time, you’ll find a fake review when researching products on the web, whether via Google or another source. We also found that even when publications fake their own tests, there is always the potential that a publisher is copying another website’s tests. The problem is that this can quickly become the blind leading the blind, and it’s very difficult to quantify who is doing it – which undermines the entire ecosystem.

Impetus – why fake reviews exist: the economics of the matter push some publications toward faking. Testing is pricey, and to get a return on that investment as a publication (or a better one), it’s cheaper to copy or fake tests.

Gaming 

For this analysis, we focus on gaming hardware and peripherals, specifically gaming chairs, gaming headsets, gaming monitors, and gaming mice. These products form a crucial part of the gaming experience and are frequently reviewed by tech publications.

Key Takeaways

  1. Gaming-focused products have serious problems with untrustworthy reviews. Nearly 60% of gaming tech reviews are untrustworthy, including the 18% of gaming tech reviews that show signs of faked testing. The biggest culprits are IGN, GamesRadar, and PCGamer, all earning low Trust Ratings.
  2. Trusted publications that cover gaming products are only decent. On average, a trusted reviewer earns a Trust Rating of 76%, which puts it in the bottom half of the range Trusted publications can sit in (70–89%).
  3. Gaming monitors are disproportionately plagued with fake testing and untrustworthy publications. Nearly 69% of tech reviewers who covered gaming monitors were either guilty of faking their testing or simply not trustworthy and did not offer useful information.

Home Office

The home office segment of our analysis focuses on a few important home office products. These include computer monitors, keyboards, mice, office chairs, printers, routers and webcams. We feel these products are the cornerstone of any home office when it comes to tech, and they’re the most likely products to be reviewed by any tech publication.

  1. Computer monitors and keyboards have huge problems with fake reviews and fake testing. 42% of the reviewers covering computer monitors and keyboards showed evidence of faked testing.
  2. There is an alarming amount of untrustworthiness surrounding keyboard reviews. 92% of all of the reviewers in our dataset that covered keyboards were either faking their testing or were untrustworthy. Only a single publication earned a trust rating greater than 90%.
  3. Almost half of router reviewers are worth trusting to some extent. While reviewers with more mixed Trust Ratings make up more than half of these reviewers (34 of the 57) this still leaves routers as the category a consumer is most likely to find reviews and reviewers worth trusting on some level. 

Small Appliances

This product category is pretty wide as far as the types of products it can cover, but we focused on a small set that are frequently covered by tech publications. The categories include air conditioners, air purifiers, blenders, coffee makers, fans, robot vacuums and vacuum cleaners.

  1. It’s difficult to trust publications for useful information about health-sensitive devices like air purifiers. Three out of every four publications covering air purifiers produce fake reviews or untrustworthy ones, either because they fake their tests or do not perform any at all. 23% of the sites we analyzed that covered air purifiers faked their testing, meaning you have a nearly 1 in 4 chance of being outright misled.
  2. A little over one in three vacuum reviewers have serious issues with faking testing. 36% of the reviewers analyzed exhibited clear signs of faked testing and fake reviews.
  3. Air conditioners are the most likely place to find trustworthy reviewers, but none of them are “Highly Trusted”. Almost 27% of the reviewers covering A/Cs are worth listening to, though almost half of those are only “Mixed Trust”.

Audio & Video

The audio and video category is much more focused, with far fewer products in it, though it also features some of the most difficult-to-test products across all the categories we researched. Televisions, headphones, Bluetooth speakers, projectors, soundbars, and speakers all live in this product category, and many of them require specialized equipment to test properly.

  1. Problems with fake testing and fake reviewers run rampant in audio/video tech reviews. Of the tech reviewers that covered audio & video equipment, 42% of them were faking their testing.
  2. Fake reviews are a major problem when it comes to projector reviews. Over 66% of the tech reviewers who covered projectors showed signs of faked testing.
  3. You can’t trust 4 out of every 5 TV reviews. Over 82% of the tech reviewers covering televisions have serious trust issues, either because they’re untrustworthy or show clear signs of fake testing and fake reviews.

Emerging Tech

The “Emerging Tech” category covers drones, e-scooters, 3D printers, and e-bikes. As the name suggests, the products here are “emerging” in nature. Many are new technologies still seeing major refinements, like 3D printers, while others are becoming more accessible and affordable to the average consumer (3D printers again, but also drones, e-bikes, and e-scooters). These categories are also unique in that they have very diverse testing methods, many of which are still being pioneered. Drones and 3D printers, for example, are still being iterated on and see dramatic improvements with each “generation” of the product, be it flight time and stability for drones, or resolution and complexity for 3D printers.

  1. Products considered “emerging” tech have the most trustworthy reviewers, but that still only comes to one in three. Almost 59% of the reviewers covering products like e-bikes and drones aren’t trustworthy or exhibit signs of fake testing. In fact, 15% of the tech reviewers in the dataset are classified as fake reviewers.
  2. You’re most likely to run into good reviewers and good data when reading reviews about electric bikes. 36% of the reviewers in our dataset were at least “Mixed Trust” or higher.


Next Steps

  • Using the True Score to buy the best tested product for your needs.
  • Is AI content a problem?
  • Can the industry clean itself up?





At Gadget Review, our guides, reviews, and news are driven by thorough human expertise and use our Trust Rating system and the True Score. AI assists in refining our editorial process, ensuring that every article is engaging, clear and succinct. See how we write our content here →