The State of AI Web Design in 2026: Data from Scanning 1,000 Sites
We scanned 1,000 websites built with AI coding tools and measured them across 7 design dimensions. The results reveal a web that is converging toward a single aesthetic. Here is the data.
Between January and March 2026, we ran Sailop's scanner against 1,000 websites that were publicly identified as built with AI coding tools. The sites came from "built with AI" showcases, Hacker News "Show HN" posts tagged with AI tools, Product Hunt launches that credited AI in their stack, and developer portfolios that mentioned Cursor, Bolt, v0, or Claude Code in their build process.
We scored each site across Sailop's seven design dimensions: color, typography, spacing, layout, component patterns, decoration, and content structure. This article presents the raw data, the patterns we found, and what it means for anyone building with AI tools.
For background on the seven dimensions, read "What Is AI Slop: The 7 Dimensions of Generic Design". For the patterns we detected, see our catalog of 90+ AI design patterns to avoid.
Methodology
We collected URLs from four sources:
+----------------------------+-------+---------+
| Source | Count | % Total |
+----------------------------+-------+---------+
| "Built with AI" showcases | 412 | 41.2% |
| Hacker News Show HN | 287 | 28.7% |
| Product Hunt launches | 198 | 19.8% |
| Developer portfolios | 103 | 10.3% |
+----------------------------+-------+---------+
| Total | 1,000 | 100.0% |
+----------------------------+-------+---------+
Each site was scanned using sailop scan --url, which downloads the page, extracts all CSS (inline, embedded, and linked), parses the DOM structure, and scores each dimension from 0 to 100 (lower is better -- 0 means fully unique, 100 means maximally generic).
We excluded sites that were clearly templates (identical HTML to known template products) and sites that were behind authentication walls.
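The 0-100 scale maps onto letter grades. Here is a minimal sketch of that mapping, with the bands inferred from the grade labels in the distribution chart below (the function and exact cutoffs are our reading of the published bins, not Sailop's internal code):

```python
def grade(score: float) -> str:
    """Map a 0-100 Sailop score (lower is better) to a letter grade.
    Bands inferred from the published distribution chart; this is
    illustrative, not Sailop's internal implementation."""
    bands = ["A+", "A", "B+", "B", "C", "D+", "D", "F+", "F", "F-"]
    return bands[min(int(score // 10), 9)]

print(grade(61.4))  # 'D'  -- the study-wide average
print(grade(24.8))  # 'B+' -- the human-built average (see below)
```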
Overall Score Distribution
The average Sailop score across all 1,000 sites was 61.4 out of 100 (Grade D). For context, a hand-designed site from a professional agency typically scores between 15 and 30 (Grade A to B).
Score distribution (1,000 AI-built sites):
0-10 (A+) ██ 1.2% (12 sites)
10-20 (A) ███ 2.8% (28 sites)
20-30 (B+) █████ 4.9% (49 sites)
30-40 (B) ████████ 7.7% (77 sites)
40-50 (C) █████████████ 12.6% (126 sites)
50-60 (D+) ████████████████████ 19.8% (198 sites)
60-70 (D) ██████████████████████████ 25.7% (257 sites)
70-80 (F+) █████████████████ 16.4% (164 sites)
80-90 (F) █████████ 7.2% (72 sites)
90-100 (F-) ██ 1.7% (17 sites)
Mean: 61.4
Median: 63.0
Std Dev: 16.2
The distribution is roughly normal with a slight left skew. The peak is in the 60-70 range, which corresponds to sites that hit most of the common AI patterns: blue color scheme, Inter font, three-column grid, rounded corners, and card-based layouts.
The 89 sites scoring below 30 (Grade B+ or better) typically had one of two characteristics: they were built by experienced designers who used AI as a coding assistant rather than a design tool, or they used Sailop or similar constraint tools during development.
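As a sanity check on the binned chart, the summary statistics can be roughly recomputed from bin midpoints. This is only an approximation -- midpoints understate the true per-site mean of 61.4 because scores are not uniform within each bin:

```python
# Site counts per 10-point bin, taken from the distribution chart above.
bins = {5: 12, 15: 28, 25: 49, 35: 77, 45: 126,
        55: 198, 65: 257, 75: 164, 85: 72, 95: 17}

n = sum(bins.values())  # total sites scanned
approx_mean = sum(mid * count for mid, count in bins.items()) / n
print(f"{n} sites, bin-midpoint mean ~ {approx_mean:.1f}")
# Lands below the true per-site mean (61.4), since scores cluster
# toward the top of the middle bins rather than at their midpoints.
```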
Scores by Dimension
Not all dimensions are equally problematic. Here is the average score per dimension:
Average score by dimension (lower is better):
Color ███████████████████████████████████ 68.3
Typography ██████████████████████████████████ 66.7
Component ████████████████████████████████ 63.2
Layout ██████████████████████████████ 59.8
Decoration ███████████████████████████ 54.1
Spacing █████████████████████████ 50.6
Content Struct ██████████████████████ 44.3
0 10 20 30 40 50 60 70 80
Color and typography are the worst offenders. This aligns with what we know about how AI agents generate CSS: token probability bias is strongest for color and font tokens. Content structure scores best because AI tools are actually decent at generating semantic HTML -- the problem is visual, not structural.
Most Common Patterns
We detected and ranked every pattern from our catalog of 90+ AI design patterns. Here are the 20 most frequent:
Top 20 AI design patterns (% of sites exhibiting the pattern):
1. Inter font family 78.3%
2. Blue primary color (hue 200-240) 72.1%
3. rounded-lg / rounded-xl on cards 69.4%
4. shadow-md / shadow-lg on cards 67.8%
5. Three-column feature grid 64.2%
6. Hero with centered text + CTA button 61.7%
7. White/gray-50 backgrounds 59.3%
8. text-gray-600 body text 57.8%
9. py-24 section padding 55.1%
10. Gradient hero (blue to purple) 52.4%
11. Check icon feature lists 49.7%
12. max-w-7xl centered container 47.3%
13. Card hover:shadow-xl effect 45.6%
14. "Get Started" primary CTA text 43.2%
15. Terminal mockup with 3 colored dots 41.8%
16. Badge/pill above hero heading 39.4%
17. Testimonial cards in 3-col grid 37.1%
18. Footer with 4-column link grid 35.6%
19. Dark gradient navbar 33.2%
20. FAQ accordion section 31.4%
78.3% of AI-built sites use Inter. Nearly three out of four use a blue primary color. The combination of these two patterns alone creates a strong visual similarity that users notice instinctively, even if they cannot articulate why.
The terminal mockup with three colored dots appeared on 41.8% of sites -- a pattern that has become a reliable marker of AI generation.
Color Palette Analysis
We extracted the primary color from each site and mapped it to HSL hue:
Primary color hue distribution:
0-30 (Red/Orange) ███ 3.4%
30-60 (Orange/Yellow) ██ 2.1%
60-90 (Yellow/Green) █ 1.4%
90-120 (Green) ████ 3.8%
120-150 (Green/Teal) ██ 2.2%
150-180 (Teal/Cyan) ███ 3.1%
180-210 (Cyan/Blue) ████████████ 11.4%
210-240 (Blue) █████████████████████████████████████████ 40.7%
240-270 (Blue/Indigo) ██████████████████████ 21.8%
270-300 (Purple) ████████ 5.2%
300-330 (Magenta/Pink) ███ 3.1%
330-360 (Pink/Red) ██ 1.8%
AI Band (200-290): 72.1% of all sites
The AI band (hue 200-290) accounts for 72.1% of primary colors. The single most common hue range (210-240, pure blue) accounts for 40.7% on its own. Pick any two AI-built sites at random and the odds are better than even (0.721^2 ≈ 52%) that both fall inside the AI band.
Warm hues (0-90) account for only 6.9% of primary colors. Sites in this range tend to score significantly lower (better) on the Sailop scale because the color alone is enough to create differentiation.
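The hue banding above can be reproduced with a few lines of standard-library Python. A minimal sketch (the helper names are ours, and a real scanner would first have to extract the primary color from the site's CSS):

```python
import colorsys

def hue_degrees(hex_color: str) -> float:
    """HSL hue of a hex color, in degrees (0-360)."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    hue, _lightness, _saturation = colorsys.rgb_to_hls(r, g, b)
    return hue * 360

def in_ai_band(hex_color: str) -> bool:
    """True if the hue falls in the 200-290 'AI band' (cyan-blue through purple)."""
    return 200 <= hue_degrees(hex_color) <= 290

print(in_ai_band("#3b82f6"))  # Tailwind's blue-500: True
print(in_ai_band("#f97316"))  # Tailwind's orange-500: False
```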
Font Family Analysis
Font family usage (heading / body):
Inter / Inter █████████████████████████████ 56.2%
Inter / system-ui ████████ 15.1%
system-ui / system-ui ████ 7.4%
Poppins / Inter ███ 4.8%
DM Sans / Inter ██ 3.2%
Inter / Roboto ██ 2.1%
Geist / Geist █ 1.8%
Custom serif / sans-serif █ 1.4%
Other combinations ████████ 8.0%
Sites using Inter (any role): 78.3%
Sites with same font heading + body: 67.1%
Sites with serif anywhere: 8.4%
67.1% of sites use the same font for both headings and body text. This eliminates one of the most basic typographic tools for creating hierarchy and visual interest. Read more about why this matters in our typography problem article.
Only 8.4% of AI-built sites use a serif font anywhere. In contrast, roughly 35% of professionally designed sites use serifs for at least headings. This is another strong differentiator.
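Detecting these pairings is mostly string work. A toy version of the check (regex-based, so it ignores @font-face, custom properties, and computed styles that a real scanner would have to handle):

```python
import re

css = """
h1   { font-family: "Inter", sans-serif; }
body { font-family: Inter, system-ui, sans-serif; }
"""

# Pull every font-family declaration, then keep each one's first choice.
declarations = re.findall(r"font-family\s*:\s*([^;}]+)", css)
first_choices = [d.split(",")[0].strip().strip("\"'") for d in declarations]
same_everywhere = len(set(first_choices)) == 1

print(first_choices)    # ['Inter', 'Inter']
print(same_everywhere)  # True -- the 67.1% case: one font for everything
```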
Framework Breakdown
We identified the build framework for each site where possible:
Framework distribution:
Next.js █████████████████████████████████████████ 48.3%
Vite + React █████████████████ 19.7%
Astro ████████ 8.4%
Nuxt.js █████ 5.6%
SvelteKit ████ 4.2%
Plain HTML ███ 3.1%
Remix ██ 2.3%
Other/Unknown ████████ 8.4%
CSS framework:
Tailwind CSS ██████████████████████████████████████████████ 89.2%
Vanilla CSS ████ 4.3%
CSS Modules ██ 2.8%
styled-comp. ██ 1.9%
Other ██ 1.8%
Next.js dominates at 48.3%, which is not surprising given that most AI coding tools have extensive Next.js training data. Tailwind CSS is used by 89.2% of sites, confirming its near-monopoly as the CSS framework of choice for AI-generated code.
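Framework identification relies on fingerprints left in the served HTML. A toy heuristic along those lines -- illustrative only, not Sailop's actual detection logic -- exploits the fact that Next.js embeds a __NEXT_DATA__ script tag and Tailwind leaves characteristic utility classes in the markup:

```python
def guess_stack(html: str) -> dict:
    """Crude stack fingerprinting -- illustrative only, not Sailop's detector."""
    tailwind_markers = ("max-w-7xl", "rounded-lg", "shadow-md", "text-gray-600")
    return {
        "nextjs": "__NEXT_DATA__" in html,  # Next.js hydration payload
        "tailwind": any(m in html for m in tailwind_markers),
    }

sample = '<div class="max-w-7xl shadow-md"><script id="__NEXT_DATA__"></script></div>'
print(guess_stack(sample))  # {'nextjs': True, 'tailwind': True}
```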
Scores by Framework
Average Sailop score by framework (lower is better):
Next.js ████████████████████████████████ 63.8
Vite + React ██████████████████████████████ 60.2
Nuxt.js █████████████████████████████ 58.4
SvelteKit ████████████████████████████ 56.1
Remix ███████████████████████████ 54.7
Astro ████████████████████████ 48.3
Plain HTML ████████████████████ 40.1
0 10 20 30 40 50 60 70
Plain HTML sites score significantly better, likely because they tend to be built by developers who are making more intentional design decisions. Astro sites also score better than average, possibly because the Astro community emphasizes content-driven design over component libraries.
Next.js sites score worst, which may be explained by the high correlation with Tailwind and shadcn/ui -- a combination that produces very uniform output.
AI-Built vs Human-Built Comparison
To contextualize our data, we also scanned 200 sites that we identified as professionally designed (award-winning agencies, established brands, design-led startups):
Score comparison (AI-built vs human-built):
AI-Built Human-Built Delta
(n=1000) (n=200)
+------------------+-----------+------------+---------+
| Overall Score | 61.4 | 24.8 | +36.6 |
| Color | 68.3 | 22.1 | +46.2 |
| Typography | 66.7 | 19.4 | +47.3 |
| Component | 63.2 | 28.7 | +34.5 |
| Layout | 59.8 | 26.3 | +33.5 |
| Decoration | 54.1 | 24.6 | +29.5 |
| Spacing | 50.6 | 27.2 | +23.4 |
| Content Struct. | 44.3 | 25.1 | +19.2 |
+------------------+-----------+------------+---------+
The gap is largest in typography (+47.3) and color (+46.2), confirming that these are the dimensions where AI tools perform worst relative to human designers. The gap is smallest in content structure (+19.2) and spacing (+23.4), where AI tools are closer to human performance.
Trends: What Is Getting Worse, What Is Improving
We compared Q1 2026 data to a smaller sample we collected in Q3 2025 (n=300):
Trend comparison (Q3 2025 vs Q1 2026):
Q3 2025 Q1 2026 Trend
(n=300) (n=1000)
+------------------+-----------+------------+---------+
| Overall Score | 58.2 | 61.4 | worse |
| Color | 64.1 | 68.3 | worse |
| Typography | 61.3 | 66.7 | worse |
| Component | 59.7 | 63.2 | worse |
| Layout | 57.4 | 59.8 | worse |
| Decoration | 52.8 | 54.1 | stable |
| Spacing | 51.2 | 50.6 | stable |
| Content Struct. | 46.1 | 44.3 | better |
+------------------+-----------+------------+---------+
The overall trend is concerning: AI-built sites are getting more generic over time, not less. The average score increased from 58.2 to 61.4 in six months. Color and typography scores worsened significantly.
The likely explanation is a feedback loop. AI tools are trained on web data. As more AI-generated sites enter the training data, the models become even more biased toward the patterns they already favor. The training data becomes more homogeneous, which makes the output more homogeneous, which makes the next round of training data more homogeneous.
Content structure is the one bright spot, improving from 46.1 to 44.3. This may reflect improvements in AI models' understanding of semantic HTML and accessibility.
Specific Pattern Trends
Some patterns are growing rapidly:
Pattern frequency change (Q3 2025 -> Q1 2026):
Growing patterns:
Terminal mockup w/ 3 dots 31.2% -> 41.8% (+10.6)
Gradient hero (blue/purple) 42.1% -> 52.4% (+10.3)
Badge/pill above heading 30.7% -> 39.4% (+8.7)
"Get Started" CTA 35.8% -> 43.2% (+7.4)
Bento grid layout 12.3% -> 19.1% (+6.8)
Animated gradient border 8.4% -> 14.2% (+5.8)
Declining patterns:
Bootstrap grid classes 18.4% -> 11.2% (-7.2)
jQuery animations 9.7% -> 4.3% (-5.4)
Material Design components 14.2% -> 9.8% (-4.4)
Stable patterns:
Inter font 76.8% -> 78.3% (+1.5)
Blue primary 70.4% -> 72.1% (+1.7)
Three-column grid 63.8% -> 64.2% (+0.4)
The terminal mockup and gradient hero patterns are growing fastest, suggesting these are becoming the new "default" elements for AI agents. The bento grid is emerging as a trendy alternative to the three-column grid, but it is rapidly becoming just as generic.
Recommendations
Based on this data, here are our recommendations for anyone building with AI tools:
1. Fix color first. Color has the highest score (most generic) and the largest gap from human-designed sites. Switching from blue to a warm hue outside the AI band immediately drops your score by 15-20 points. Use Sailop's design system generator to get a unique palette.
2. Use a serif font somewhere. Only 8.4% of AI sites use serifs. Adding a serif heading font instantly differentiates your site from the other 91.6%. Our typography article covers this in detail.
3. Break the three-column pattern. 64.2% of sites use it. Use asymmetric grids, prose layouts, or table-based comparisons instead. Read our guide to alternatives.
4. Drop shadows and rounded corners. 69.4% use rounded corners on cards, 67.8% use shadows. Try flat designs with borders, or no visible containers at all. The complete guide to anti-AI design has a full section on this.
5. Scan before you ship. Run sailop scan in your CI/CD pipeline. If your score is above 50, you look like two-thirds of AI-built sites. Our CI/CD integration guide shows how to set this up in five minutes.
6. Consider a different framework. Astro and plain HTML sites score significantly better than Next.js sites. This is not about the framework itself but about the ecosystem of components and defaults that come with it.
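Recommendation 5 can be wired up with a small gate script. The sketch below is hypothetical: it assumes the scan result has been written to a JSON file with an "overall" key, which is our invention -- adapt the path and key to whatever sailop scan actually emits in your setup:

```python
import json
import sys

THRESHOLD = 50  # above this, you look like two-thirds of AI-built sites

def gate(report_path: str) -> int:
    """Return a CI-friendly exit code: 0 if the score passes, 1 if it fails.
    The report format read here is hypothetical -- adjust the file name
    and key to match your scanner's real output."""
    with open(report_path) as f:
        score = json.load(f)["overall"]
    print(f"Sailop score: {score} (threshold {THRESHOLD})")
    return 0 if score <= THRESHOLD else 1

# In CI: sys.exit(gate("sailop-report.json"))
```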
The Data Is Clear
The web is converging. AI tools are accelerating that convergence. In six months, the average AI-built site has become measurably more generic. The feedback loop between training data and output means this trend will continue unless developers actively resist it.
The good news is that differentiation is not hard. The bar is so low that a few intentional design choices -- a warm palette, a serif font, an asymmetric layout -- put you in the top 15% of AI-built sites.
The data does not lie. And neither does your Sailop score.
npm install -g sailop
sailop scan ./src --full
# See where you stand among the 1,000 sites we scanned
# Anything below 40 puts you in the top 15%
Run it. See your number. Then fix it.
Try Sailop
Scan your frontend for AI patterns. Generate a unique design system. Ship code that looks intentional.