The AI Growth Stack
Independent Research & Reviews
AI-Powered B2B Software
Thursday, March 12, 2026  ·  The independent source for AI-powered B2B tools
Editorial Methodology

How we test and rate software tools

Every review on The AI Growth Stack follows the same process — hands-on testing, structured evaluation, and honest scoring. Here's exactly what that looks like.

Our analysis combines hands-on product use with deep synthesis of product documentation, verified user data, pricing structures, and feature comparisons across the market. We don't rely on vendor demos or press releases. Every article reflects structured research against the specific use cases our readers are actually navigating. This page explains our methodology in full.

1. Tool selection
We select tools based on reader demand, market relevance, and the significance of their AI features. We don't accept payment to review a tool, and we don't guarantee positive coverage to anyone. Tools can be submitted for consideration at maya@theaigrowthstack.com — we evaluate all submissions but can't commit to covering everything.
2. Deep research & data synthesis
For every tool we cover, we conduct structured research across multiple sources: official product documentation, verified user reviews on G2, Capterra, and Reddit, pricing pages, published case studies, and direct product walkthroughs. We cross-reference vendor claims against what real users report experiencing — not just what the homepage says.
3. AI feature analysis
We specifically focus on evaluating AI claims critically. We ask: Is this feature genuinely AI-powered, or is it rule-based automation relabeled? Does the AI require significant data or setup to work? What do real users report about accuracy and time savings? We cut through vendor marketing to surface what the AI layer actually does in practice.
4. Competitive positioning
Standalone analysis doesn't mean much without context. Every review situates the tool relative to its main alternatives — we explicitly address who should choose this tool and who shouldn't, and which competing product is a better fit for different use cases, team sizes, or budgets.
5. Scoring & publication
We apply our scoring framework (below), write the analysis, and publish with a clear date. Articles are refreshed when a product releases significant changes — we mark the last-updated date on every piece so you always know how current the information is.

How we score

Tools are rated on a 10-point scale. The score is a weighted composite across five categories:

AI Feature Quality (30%): Genuineness and usefulness of AI capabilities. Does the AI actually work? Does it save real time or improve real outcomes?

Core Feature Set (23%): Completeness and quality of non-AI features. AI is a layer; the underlying product has to be solid.

Ease of Use (17%): Setup experience, UI clarity, onboarding quality, and how long it takes to get to first value.

Pricing & Value (17%): Whether the pricing is fair relative to what you get, especially for AI features, which are often gated to higher tiers.

Support & Reliability (13%): Quality of documentation, customer support responsiveness, and product stability during our testing period.

What the scores mean

9.0–10: Exceptional
8.0–8.9: Excellent
7.0–7.9: Good
6.0–6.9: Average
Below 6.0: Below par
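The arithmetic behind a published score can be sketched in a few lines of Python. This is an illustrative sketch, not our actual tooling: the function names and example scores are hypothetical, while the category weights come from the scoring table above and the band thresholds reflect one straightforward reading of the rating scale.

```python
# Weights from the scoring table (they sum to 1.0).
WEIGHTS = {
    "ai_feature_quality": 0.30,
    "core_feature_set": 0.23,
    "ease_of_use": 0.17,
    "pricing_value": 0.17,
    "support_reliability": 0.13,
}

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted average of per-category scores on a 10-point scale."""
    if set(category_scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the five categories")
    return round(sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS), 1)

def band(score: float) -> str:
    """Map a composite score to its verbal rating band."""
    if score >= 9:
        return "Exceptional"
    if score >= 8:
        return "Excellent"
    if score >= 7:
        return "Good"
    if score >= 6:
        return "Average"
    return "Below par"

# Hypothetical example: strong AI layer, middling pricing value.
example = {
    "ai_feature_quality": 8.5,
    "core_feature_set": 7.0,
    "ease_of_use": 9.0,
    "pricing_value": 6.5,
    "support_reliability": 8.5,
}
print(composite_score(example), band(composite_score(example)))  # prints: 7.9 Good
```

Because the AI Feature Quality weight is the largest single factor, a tool with weak AI claims cannot score highly on the back of a polished core product alone.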
A note on vendor relationships: Some tools we review provide free access to paid tiers for testing purposes. When this is the case, we disclose it in the review. Free access does not influence our scores or recommendations — we have published critical reviews of tools that provided complimentary access, and we will continue to do so. Our affiliate relationships are disclosed separately on each article and in our full affiliate disclosure.

Quick takes vs. full reviews

Not every piece of content on this site is a full review. We publish two types of content:

Full reviews follow the complete methodology above — minimum 2 weeks of hands-on testing, structured scoring, and competitive context. These are labeled "Review" in our category tags.

Quick takes are shorter-form opinions based on initial testing, public information, or notable product updates. These are clearly labeled "Quick Take" and do not carry a numerical score. They're our way of covering fast-moving news in the AI software space without waiting for a full review cycle.

If you have questions about our methodology or want to flag an inaccuracy in a review, contact us at maya@theaigrowthstack.com.