Comparison Methodology
This page explains how we build and maintain VS comparison pages so readers can verify our assumptions and reuse the same framework.
How We Compare
- Use one shared test prompt for both tools in each comparison.
- Use a consistent baseline for duration, format, language, and export settings.
- Capture differences in a structured matrix so the same template can be reused across VS pages (see the sketch after this list).
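As a rough illustration of that matrix, here is one possible row shape. The names and example values (MatrixRow, sourceUrl, checkedAt, and so on) are assumptions for this sketch, not our production schema.

```typescript
// Hypothetical shape for one row of the reusable comparison matrix.
// All names and example values are illustrative only.
interface MatrixRow {
  dimension: string;                 // what is being compared, e.g. "Export formats"
  toolA: string;                     // observed value for the first tool
  toolB: string;                     // observed value for the second tool
  sourceUrl?: string;                // primary source backing the row, if any
  checkedAt?: string;                // ISO date the row was last verified
  status: "verified" | "estimated";  // see Scoring Dimensions below
}

const exampleRow: MatrixRow = {
  dimension: "Max export resolution (free plan)",
  toolA: "720p",
  toolB: "1080p",
  sourceUrl: "https://example.com/pricing",
  checkedAt: "2024-01-15",
  status: "verified",
};
```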
Source Policy
- Primary source preference: official pricing pages and official product documentation.
- Secondary sources: help center documentation and established review sources when official pages are incomplete.
- Wherever possible, each source-backed claim carries a source link and the date it was last checked.
Scoring Dimensions
VS pages use an internal score (our in-house model), not a third-party authority rating. Default weights: pricingValue (25%), ease (20%), speed (20%), output (20%), customization (15%).
External ratings (when shown elsewhere) are treated as separate third-party signals and are not the same as this internal score.
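As a minimal sketch of how the weighted total could be computed from those default weights (the function name and example scores below are illustrative, not our actual implementation):

```typescript
// Default weights from the methodology above; they sum to 1.0.
const WEIGHTS = {
  pricingValue: 0.25,
  ease: 0.20,
  speed: 0.20,
  output: 0.20,
  customization: 0.15,
} as const;

type Metric = keyof typeof WEIGHTS;
type Scores = Record<Metric, number>; // each metric scored on a shared scale, e.g. 0-10

// Weighted total that feeds the overall recommendation.
function weightedTotal(scores: Scores): number {
  return (Object.keys(WEIGHTS) as Metric[]).reduce(
    (sum, metric) => sum + scores[metric] * WEIGHTS[metric],
    0,
  );
}

// Illustrative scores only.
const toolA: Scores = { pricingValue: 8.5, ease: 7.0, speed: 9.0, output: 8.0, customization: 6.5 };
console.log(weightedTotal(toolA).toFixed(2)); // "7.90"
```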
Each metric is tagged as verified (linked to primary sources) or estimated (derived from structured product data when row-level links are still limited); a single page can mix both.
Every score block includes a method note plus per-metric rationale so readers can see why the score moved and what evidence backs it.
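Under the same assumptions, a score block's metadata could be represented roughly like this; the interfaces and field names are hypothetical.

```typescript
// Hypothetical per-metric record inside a score block.
interface MetricEvidence {
  metric: "pricingValue" | "ease" | "speed" | "output" | "customization";
  score: number;                     // value that feeds the weighted total
  status: "verified" | "estimated";  // verified = backed by a linked primary source
  sourceUrl?: string;                // present when status is "verified"
  rationale: string;                 // why the score moved and what evidence backs it
}

// Hypothetical score block: a method note plus one record per metric.
interface ScoreBlock {
  methodNote: string;                // short description of how the score was produced
  metrics: MetricEvidence[];         // a single page may mix verified and estimated entries
}
```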
Winner cards are derived from internal-score metrics: Price uses pricingValue, Speed uses speed, Quality uses output, and the overall recommendation uses the weighted total score. If the gap between weighted totals is under 0.2, we mark the result as a close call.
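A sketch of the overall winner-card logic under those rules, reusing the weighted totals from the earlier example; apart from the 0.2 threshold, the names here are assumptions.

```typescript
// Close-call threshold from the rule above.
const CLOSE_CALL_GAP = 0.2;

interface OverallVerdict {
  winner: "A" | "B" | "tie";
  closeCall: boolean;
}

// Overall recommendation from the two weighted totals. Per-category winner
// cards compare a single metric instead (pricingValue for Price, speed for
// Speed, output for Quality) rather than the weighted total.
function overallVerdict(totalA: number, totalB: number): OverallVerdict {
  const gap = Math.abs(totalA - totalB);
  return {
    winner: totalA === totalB ? "tie" : totalA > totalB ? "A" : "B",
    closeCall: gap < CLOSE_CALL_GAP,
  };
}

// Illustrative usage: a 0.15 gap is under 0.2, so the result is flagged as a close call.
console.log(overallVerdict(7.9, 7.75)); // { winner: "A", closeCall: true }
```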
For featured tools, we apply a small internal-score calibration to keep recommendations consistent across the site.
Date Rules
- Updated: the date the page structure/content was last revised.
- Pricing checked: the date pricing and source-backed factual rows were last verified.
- We avoid conflicting date labels on the same page.
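One way the two date labels could be stored so each tracks its own event (field names and dates here are assumptions):

```typescript
// Hypothetical page metadata for the two date labels described above.
interface PageDates {
  updatedAt: string;        // "Updated": last revision of page structure/content (ISO date)
  pricingCheckedAt: string; // "Pricing checked": last verification of pricing and source-backed rows
}

const example: PageDates = {
  updatedAt: "2024-03-01",
  pricingCheckedAt: "2024-02-20",
};
```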