Advertising Disclosure: Some links on this site are affiliate links. We may earn a commission when you make a purchase — at no extra cost to you. This never influences our rankings. Read our methodology.

Omellody Research Hub

Evidence, methodology, scoring criteria, and the test data behind every Omellody recommendation. This is where our reviews come from.

Omellody Research is the evidence layer of the site. Every ranking, verdict, and score we publish is backed by a documented research process, a scoring rubric, and primary sources readers can verify for themselves. This hub is how we make that work visible.

5 active research programs
500+ products tracked
5,000+ documented test hours
Quarterly re-evaluation cycle

Research programs by category

Each program has its own scoring rubric, evidence log, and lead analyst. Click a category to see the criteria, the tested products, and the primary sources we reviewed.

What counts as evidence

For every scored product, our researchers maintain a working evidence bundle. Readers can see summarized versions of these inputs on each review page under the Evidence Box. The raw inputs include:

  • Primary vendor documentation — security whitepapers, privacy policies, terms of service, pricing pages, and support articles retrieved with a fixed date.
  • Independent third-party material — published security audits, regulator filings, reputable news reporting, app-store changelogs, and court or breach disclosures.
  • Hands-on test logs — setup, daily use, performance measurements, support ticket interactions, and cancellation flow screenshots.
  • Reader-submitted reports that we were able to corroborate against at least one primary source.

How our scoring model works

Scores are produced by the category rubric, not by a single analyst's preference. Each rubric weights dimensions that matter to that decision — for example, VPN scoring weights jurisdiction and leak behavior more heavily than visual design, while tax software weights covered-form breadth more heavily than cosmetic UI changes. Rubric weights and dimension definitions are published on How We Score. When we change a weight, that is disclosed in the relevant category research program page and in the product's update history.
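As a rough illustration of how a weighted rubric produces a single score, the sketch below computes a weighted average of per-dimension scores. The dimension names and weights here are invented for the example and are not Omellody's actual rubric; the published weights live on the How We Score page.

```python
def weighted_score(dimension_scores: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Combine per-dimension scores (0-10) into one overall score.

    Each dimension's score is multiplied by its rubric weight, then
    normalized by the total weight so the result stays on a 0-10 scale.
    """
    total_weight = sum(weights.values())
    weighted_sum = sum(dimension_scores[dim] * w for dim, w in weights.items())
    return weighted_sum / total_weight

# Hypothetical VPN rubric: jurisdiction and leak behavior weigh more
# than visual design, mirroring the priority described in the text.
weights = {"jurisdiction": 0.30, "leak_behavior": 0.30,
           "performance": 0.25, "design": 0.15}
scores = {"jurisdiction": 9.0, "leak_behavior": 8.0,
          "performance": 7.0, "design": 6.0}

print(weighted_score(scores, weights))  # 7.75
```

Changing a single weight shifts the overall score, which is why the page notes that weight changes are disclosed in the category program page and the product's update history.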

Update cadence

Every scored product is on a quarterly re-evaluation cycle at minimum. We also re-open a review out-of-cycle when any of the following happens:

  • The vendor ships a breaking change, a major feature, or a material price change.
  • An independent audit, regulator action, or confirmed security incident affects the product.
  • A reader reports a factual issue and our team can verify it from a primary source.

Corrections, transparency, and conflicts

Affiliate relationships do not influence rankings. Display advertising is separated from editorial decisions per our Advertising Policy. When we make a factual mistake, we log it publicly on the Corrections page.

Research FAQs

Who does the research?

Named analysts are responsible for each category. Their bios, credentials, and areas of expertise are on the editorial team page, and every review page links to its reviewer's profile.

Do brands pay to be included?

No. Inclusion is based on reader demand and market coverage, not on affiliate status. Products without affiliate programs appear regularly in our comparison tables.

Can I see the raw test data?

Category research program pages publish summary evidence and methodology. Readers or researchers who need deeper verification for a specific claim can request clarification, and we will point to the relevant primary source.