
Semantic HTML Analyzer

A technical SEO tool that grades your rendered HTML for semantic compliance, accessibility, and Schema.org coverage. Paste HTML or analyze a live URL. Get a 0–100 score, an issues list with concrete fixes, and a clear picture of whether AI crawlers can actually parse your page.

Launch the App

The score dashboard — semantic score, structural composition, at-a-glance metrics, summary, issue breakdown, document outline, and Schema.org coverage in a single view.

Why I Built It

AI Search retrieval is brutal on bad markup. When Gemini, ChatGPT, or Perplexity reach for a passage to cite, they're walking the DOM the same way assistive tech does — looking for landmarks, headings, and proper interactive elements. A page built out of <div onclick> soup with five H1s and no <main> can rank fine on classic Google but vanish from AI Overviews because nothing in the structure tells the model what's actually content.

I needed a tool that could grade a page on those signals at a glance. Lighthouse audits some of this, but it doesn't connect the dots — semantic HTML, schema coverage, meta tags, alt text, heading hierarchy — into a single AI-Search-readiness verdict. So I built one.

The other goal: make the suggestions actually trustworthy. LLMs love to invent plausible-sounding HTML rules. To prevent that, the analyzer is grounded in a documented rule set derived from web.dev, W3Schools, and other trusted sources — with explicit decisions recorded for the cases where sources disagree.

What It Catches

A handful of the most common violations on real pages — the ones that show up in nearly every audit.

  1. Div soup with click handlers. A <div onclick="…"> looks like a button visually but isn't keyboard-focusable, screen readers ignore it, and the browser doesn't fire activation on Enter or Space. Use <button> for actions; <a href> for navigation.
  2. Skipped heading levels. An <h2> followed by an <h4> with no <h3> in between breaks the document outline. Most retrieval systems and assistive tech walk the heading tree to understand section hierarchy.
  3. Multiple H1s — or zero. Browsers and screen readers don't implement the HTML5 document-outline algorithm in practice; they treat the first H1 as the page title. Multiple H1s muddy the signal. Zero is broken.
  4. Spans styled as headings. A <span class="section-heading">Work</span> looks like an H4 visually, but it doesn't appear in the heading tree. Crawlers and AI scrapers miss the section structure entirely.
  5. Missing image alt attributes. Decorative images take alt="" (empty but present). Missing entirely is broken. alt="image" or alt="photo" is worse than empty — describe what the image actually conveys.
  6. No <meta name="viewport">. Google's index is mobile-first; a missing viewport tag is one of the strongest negative signals you can ship.
  7. JSON-LD that drifts from the visible page. Schema entity names that don't match what's rendered can be flagged as spam. Worse, an article's headline that doesn't match the <h1> trains AI retrieval on the wrong title.

Each violation gets a severity tier — Critical (page is broken), Warning (real semantic issue), Info (defense-in-depth or polish) — and a concrete fix in the dashboard.
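
To make the tiers concrete, here is a minimal sketch (not the analyzer's actual code) of how the first violation above can be detected deterministically with Python's standard-library parser. The class name and message strings are illustrative:

```python
from html.parser import HTMLParser

class ClickableDivCheck(HTMLParser):
    """Flag generic elements that carry an onclick handler."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        if tag in ("div", "span") and any(name == "onclick" for name, _ in attrs):
            self.issues.append({
                "severity": "Critical",
                "issue": f"<{tag} onclick> is not keyboard-focusable",
                "fix": "Use <button> for actions, <a href> for navigation",
            })

checker = ClickableDivCheck()
checker.feed('<div onclick="save()">Save</div><button>OK</button>')
# The div is flagged Critical; the real <button> passes.
```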

How It Works Under the Hood

Four stages. The first two are setup. The last two are the analysis.

1. Read the Page

Two input modes: hand it a URL, or paste rendered HTML. URL mode renders the page with JavaScript executed, so single-page apps are analyzed against the DOM users actually see — not the empty shell that ships from the server. Paste mode is for staging environments, gated pages, or pasting document.documentElement.outerHTML from DevTools.

2. Parse the DOM

A deterministic pre-pass tallies tag counts, walks the heading tree, extracts every JSON-LD @type, counts images and missing alt attributes, and computes the text-to-HTML ratio. Same input, same output — these numbers don't depend on a model.
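
A sketch of what such a pre-pass can look like with Python's stdlib parser. This is an assumed structure for illustration, simplified to a single top-level JSON-LD object, not the tool's actual implementation:

```python
import json
from html.parser import HTMLParser

class PrePass(HTMLParser):
    """Deterministic tallies: tag counts, heading order, JSON-LD @type
    values, images missing alt, and visible-text length."""
    def __init__(self):
        super().__init__()
        self.tags = {}
        self.headings = []
        self.jsonld_types = []
        self.imgs = self.imgs_missing_alt = 0
        self.text_chars = 0
        self._in_jsonld = self._in_skip = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        self.tags[tag] = self.tags.get(tag, 0) + 1
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.headings.append(tag)
        if tag == "img":
            self.imgs += 1
            self.imgs_missing_alt += "alt" not in a
        self._in_jsonld = tag == "script" and a.get("type") == "application/ld+json"
        self._in_skip = tag in ("script", "style")  # script/style text is invisible

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._in_jsonld = self._in_skip = False

    def handle_data(self, data):
        if self._in_jsonld:
            try:
                t = json.loads(data).get("@type")
                if t:
                    self.jsonld_types.append(t)
            except json.JSONDecodeError:
                pass
        elif not self._in_skip:
            self.text_chars += len(data.strip())

doc = '<main><h1>Title</h1><img src="a.png"><p>Body text.</p></main>'
p = PrePass()
p.feed(doc)
ratio = p.text_chars / len(doc)  # text-to-HTML ratio
```

Same input, same tallies, every run; no model is involved at this stage.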

3. Grade Against the Rule Set

The markup and tallies are then scored against the documented rule set. An LLM produces structured output: every issue is tagged with severity (Critical, Warning, Info) and a concrete fix; every strength lands in the "Doing well" list. The output schema is fixed. No free-form text. No hallucinated rules.
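
The exact schema isn't published here, but the shape of such a fixed structured-output contract can be sketched with dataclasses. All field names below are assumptions for illustration:

```python
from dataclasses import dataclass, field, asdict
from typing import Literal

Severity = Literal["Critical", "Warning", "Info"]

@dataclass
class Issue:
    severity: Severity
    rule: str   # which documented rule was violated
    fix: str    # the concrete remediation shown in the dashboard

@dataclass
class Report:
    score: int                          # 0-100 semantic score
    issues: list[Issue] = field(default_factory=list)
    doing_well: list[str] = field(default_factory=list)

# A structured response is parsed into this shape; anything that
# doesn't validate is rejected, so free-form text never reaches
# the dashboard.
report = Report(
    score=93,
    issues=[Issue("Info", "text-to-html-ratio",
                  "Move inlined CSS to an external file")],
    doing_well=["Single <h1>", "Valid JSON-LD"],
)
```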

4. Render the Dashboard

Score ring, structural composition chart, at-a-glance metrics, issue list, heading tree, Schema.org chips. The whole report is one screen on desktop. For URL analyses, a Pre-render vs Post-render comparison flags content that exists in the rendered DOM but not in the server response — important when AI scrapers walk the non-JS version of your page.

What It Reports

Semantic score (0–100)

One number, color-banded.

Structural composition

The ratio of semantic tags (<article>, <section>, <header>, <nav>, <main>, <aside>, <footer>, <figure>, etc.) to generic <div>/<span> tags. Modern React or Vue apps can ship with 5% semantic tags and 95% divs — that's the signal you want to catch.
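
As a rough sketch of the metric (a regex tally over opening tags; the tool's real parser is more careful):

```python
import re
from collections import Counter

SEMANTIC = {"article", "section", "header", "nav", "main",
            "aside", "footer", "figure"}

def structural_composition(html: str) -> float:
    """Share of structural tags that are semantic rather than generic."""
    tags = Counter(m.group(1).lower()
                   for m in re.finditer(r"<([a-zA-Z][a-zA-Z0-9]*)", html))
    semantic = sum(tags[t] for t in SEMANTIC)
    generic = tags["div"] + tags["span"]
    total = semantic + generic
    return semantic / total if total else 0.0

# One <main> against a div/span shell: 25% semantic
assert structural_composition(
    "<main><div><div><span>x</span></div></div></main>") == 0.25
```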

Issue list with severity tiers

Every flagged item lands in one of three buckets (Critical, Warning, or Info), with a one-line fix suggestion.

Pre-render vs post-render comparison plus the issues list. Every issue cites the exact line and offers a concrete fix.

Heading hierarchy outline

The full H1 → H6 tree, indented by level. Skipped levels (H2 → H4 with no H3 in between) are flagged. Empty headings are flagged. Multiple H1s are flagged.
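
All three checks reduce to a short walk over (level, text) pairs. A sketch, assuming headings arrive in document order:

```python
def audit_headings(headings: list[tuple[int, str]]) -> list[str]:
    """headings: (level, text) pairs in document order."""
    flags = []
    levels = [lvl for lvl, _ in headings]
    if levels.count(1) == 0:
        flags.append("no <h1>")
    elif levels.count(1) > 1:
        flags.append("multiple <h1> elements")
    for (prev, _), (lvl, text) in zip(headings, headings[1:]):
        if lvl > prev + 1:  # e.g. h2 -> h4 with no h3 in between
            flags.append(f"skipped level: h{prev} -> h{lvl} at '{text}'")
    for lvl, text in headings:
        if not text.strip():
            flags.append(f"empty <h{lvl}>")
    return flags

assert audit_headings([(1, "T"), (2, "A"), (4, "B")]) == \
    ["skipped level: h2 -> h4 at 'B'"]
```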

Schema.org coverage

Every @type in every JSON-LD block on the page, rendered as chips. Organization, WebSite, BreadcrumbList, Article, FAQPage — at a glance you see whether the page is feeding rich-results pipelines or coasting on default rendering.
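
Extracting those chips is a deterministic walk over every JSON-LD block. A sketch with the stdlib parser (class name is illustrative; real pages also nest types under @graph, which the recursive walk handles):

```python
import json
from html.parser import HTMLParser

class JsonLdTypes(HTMLParser):
    """Collect every @type from every JSON-LD block on the page."""
    def __init__(self):
        super().__init__()
        self.types = []
        self._active = False

    def handle_starttag(self, tag, attrs):
        self._active = (tag == "script" and
                        dict(attrs).get("type") == "application/ld+json")

    def handle_endtag(self, tag):
        self._active = False

    def handle_data(self, data):
        if self._active and data.strip():
            self._walk(json.loads(data))

    def _walk(self, node):
        if isinstance(node, dict):
            t = node.get("@type")
            if isinstance(t, str):
                self.types.append(t)
            for v in node.values():
                self._walk(v)
        elif isinstance(node, list):
            for item in node:
                self._walk(item)

p = JsonLdTypes()
p.feed('<script type="application/ld+json">'
       '{"@context":"https://schema.org","@graph":'
       '[{"@type":"Organization"},{"@type":"WebSite"}]}</script>')
# p.types now holds ["Organization", "WebSite"]
```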

Meta coverage

Five-checkbox summary: viewport, canonical, description, Open Graph (any og: tag present), Twitter Cards (any twitter: tag present). Missing viewport is the single most damaging fail here — Google's index is mobile-first.
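
The five checkboxes are simple presence tests over `<meta>` and `<link>` tags. A sketch, simplified (e.g. it treats `rel` as single-valued):

```python
from html.parser import HTMLParser

class MetaCoverage(HTMLParser):
    """Five-checkbox coverage: viewport, canonical, description, OG, Twitter."""
    def __init__(self):
        super().__init__()
        self.found = {"viewport": False, "canonical": False,
                      "description": False, "og": False, "twitter": False}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta":
            name = a.get("name", "")
            if name == "viewport":
                self.found["viewport"] = True
            if name == "description":
                self.found["description"] = True
            if name.startswith("twitter:"):      # any Twitter Card tag counts
                self.found["twitter"] = True
            if a.get("property", "").startswith("og:"):  # any OG tag counts
                self.found["og"] = True
        if tag == "link" and a.get("rel") == "canonical":
            self.found["canonical"] = True

m = MetaCoverage()
m.feed('<head><meta name="viewport" content="width=device-width, initial-scale=1">'
       '<link rel="canonical" href="https://example.com/"></head>')
missing = [k for k, v in m.found.items() if not v]
```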

Image alt and text-to-HTML ratio

Total <img> count and how many are missing alt. Plus the visible-text-to-HTML-bytes ratio — a heuristic, not a hard rule. Below 5% on a content page is suspicious; React apps routinely sit at 10–15% and rank fine.

Pre-render vs post-render

For URL analyses, the analyzer can compare what the server returns to what JavaScript renders. If your SSR shipped 12 headings but your client-side hydration adds another 18, that's a flag — non-JS crawlers (and a lot of AI scrapers) only see the first set.
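
The comparison itself can be as simple as counting headings on both sides; a sketch (the function name and return shape are illustrative):

```python
import re

def heading_delta(server_html: str, rendered_html: str) -> dict:
    """Count headings in server HTML vs JS-rendered HTML.
    A large gap means non-JS crawlers miss hydrated content."""
    count = lambda html: len(re.findall(r"<h[1-6]\b", html, re.IGNORECASE))
    server, rendered = count(server_html), count(rendered_html)
    return {"server": server, "rendered": rendered,
            "hydration_only": rendered - server}

# SSR shipped one heading; hydration added a second
delta = heading_delta("<h1>Title</h1>", "<h1>Title</h1><h2>Loaded later</h2>")
assert delta["hydration_only"] == 1
```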

What It Doesn't Do

When to Use It

Try It Now

Launch the App

Working With Me on This

The Semantic HTML Analyzer is free to use. The harder part is the rebuild — taking a Figma-driven div soup and refactoring it into something AI crawlers can actually parse without breaking the visual design. That's the kind of work the AI SEO consulting service handles. If you want me to grade your site and propose the fixes, start a conversation.
