Hi 👋🏻
Have you ever looked at a messy keyword list and felt like there’s a hidden order—like the topics are begging to line up into clean, reusable patterns? That’s not wishful thinking; it’s the EAV model (Entity–Attribute–Variable) nudging you toward a more semantic, scalable way to research user queries.
When you treat keywords as expressions of real-world things (entities), their properties (attributes), and the specific values of those properties (variables), your research stops being a flat spreadsheet and becomes a structured map you can build on.
What Is the EAV Model (for Keyword Research)?
EAV = Entity → Attribute → Variable.
- Entity: A distinct thing/concept (e.g., dog food, iPhone, Barack Obama, influencer).
- Attribute: A property of that entity (e.g., food type, breed, location, niche, marital status).
- Variable: A concrete value of the attribute (e.g., kibble, Labrador, US, beauty, married with kids).
Applied to queries, EAV turns “keywords” into composable blocks. Instead of chasing phrases, you model the world your users care about—and then assemble queries from that model.
How It Works (and Why It’s Different from Plain Keywords)
Traditional keyword-based thinking treats a query like “tokens with intent.” EAV treats it like “concepts with relationships.”
- Identify entities present or implied in your market (products, people, orgs, places, ideas).
- List attributes users actually compare/decide on (type, size, brand, use case, price, location, etc.).
- Enumerate variables for each attribute (finite or infinite sets).
- Compose patterns that mirror real search behavior:
- Single-variant:
[Entity] + [Attribute:Variable] → “dog food kibble”
- Multi-variant:
[Entity] + [Attribute A:Variable] + [Attribute B:Variable] → “best dog food (kibble) for (Labrador)”
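Here is a minimal sketch of what that composition can look like in code. The entity, attributes, and variables below are illustrative placeholders, not research data:

```python
from itertools import product

# Illustrative EAV model for one entity (values are examples, not real research data)
entity = "dog food"
attributes = {
    "food type": ["kibble", "canned", "raw"],
    "breed": ["Labrador", "Beagle"],
}

# Single-variant pattern: [Entity] + [Attribute:Variable]
single_variant = [f"{entity} {value}" for value in attributes["food type"]]

# Multi-variant pattern: [Entity] + [Attribute A:Variable] + [Attribute B:Variable]
multi_variant = [
    f"best {entity} ({food_type}) for ({breed})"
    for food_type, breed in product(attributes["food type"], attributes["breed"])
]

print(single_variant)    # ['dog food kibble', 'dog food canned', 'dog food raw']
print(multi_variant[0])  # 'best dog food (kibble) for (Labrador)'
```

Because the patterns are generated from the model rather than typed by hand, adding one new variable automatically yields every valid combination it participates in.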
Why EAV Matters
- Matches how modern search understands content. Search engines rely on entities/relationships (knowledge graphs, semantic signals) to interpret queries and rank.
- Reveals programmatic scale—without junk. You can map thousands of valid combinations and decide which deserve a page vs. a block vs. a facet.
- Surfaces business-first opportunities. Low or zero search-volume (SV) combos can still be high-value if they align with product fit or sales workflows (e.g., influencers married with kids for specific brands).
- Exposes information gaps. If the web covers kibble and canned but ignores raw or semi-moist, EAV makes those gaps obvious (and winnable).
From Model → Patterns → Pages
- Research the landscape. Compile terms, tag sources, extract entities (e.g., Google NL API or other NLP tooling). We have a no-code template for this in Sheets; you can also do it in Python (see the first sketch after this list).
- Find co-occurring terms. Use n-grams (bi-/tri-grams), fuzzy matching, and clustering to spot repeatable patterns. KeyBERT is great for this (a second sketch follows the list).
- Draft pattern libraries. Examples (pet niche):
- Can dogs eat [food]?
- [Brand] dog food
- Dog training in [location]
- Best [product] for [breed]/[need]
- Map EAV combos.
- Entity: dog food
- Attributes: breed, food type, need
- Variables (sample): 356 breeds × 8 food types × 8 needs → ~23k theoretical combos
- Reality check: prune by usefulness, competitiveness, and your ability to produce unique value.
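For the entity-extraction step, a minimal Python sketch with the Google Cloud Natural Language client might look like the following. It assumes the google-cloud-language package is installed and credentials are already configured; the sample text is purely illustrative:

```python
from google.cloud import language_v1

def extract_entities(text: str):
    """Return (name, type, salience) tuples for the entities Google NL finds in the text."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(document=document)
    return [
        (entity.name, language_v1.Entity.Type(entity.type_).name, entity.salience)
        for entity in response.entities
    ]

# Illustrative usage on a single snippet; in practice you would loop over your corpus
print(extract_entities("Best kibble dog food for Labrador puppies in the US"))
```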
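And for the co-occurring-terms step, a hedged KeyBERT sketch (assumes keybert is installed; the document string stands in for your aggregated query or content corpus):

```python
from keybert import KeyBERT

# Stand-in corpus: in practice this is your aggregated queries or scraped page content
doc = (
    "Best kibble dog food for Labradors. Can dogs eat raw food? "
    "Grain-free canned dog food for puppies with sensitive stomachs."
)

kw_model = KeyBERT()

# Extract bi-grams and tri-grams so repeatable patterns (not single tokens) surface
keyphrases = kw_model.extract_keywords(
    doc,
    keyphrase_ngram_range=(2, 3),
    stop_words="english",
    top_n=10,
)
print(keyphrases)  # [(phrase, similarity score), ...]
```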
Quality & Governance (So You Don’t Spam the Web)
- Business purpose first. Prioritize combos that help users decide or buy (not just “because we can”).
- SERP fit. Validate page types and competition; decide article vs. directory vs. tool vs. filterable list.
- Differentiation. Add proprietary data, expert POV, visuals, or calculators to avoid thin duplicates.
- Workflow. Treat EAV as your CMS schema: the model powers briefs, internal linking, and on-page modules.
Practical Workflow You Can Reuse
Step 1 — Build your EAV keyword universe
- List core entities (products, audiences, problems).
- For each, capture attributes and mark finite vs. infinite.
- Enumerate variables (seed from catalogs, standards, public lists).
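One way to capture Step 1 in a reusable structure is sketched below; the entity, attribute, and variable names are placeholders you would swap for your own research:

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    finite: bool  # finite sets can drive programmatic pages; infinite ones become filters/metadata
    variables: list[str] = field(default_factory=list)

@dataclass
class Entity:
    name: str
    attributes: list[Attribute] = field(default_factory=list)

# Illustrative EAV universe for the pet niche
dog_food = Entity(
    name="dog food",
    attributes=[
        Attribute("food type", finite=True, variables=["kibble", "canned", "raw", "semi-moist"]),
        Attribute("breed", finite=True, variables=["Labrador", "Beagle", "Poodle"]),
        Attribute("need", finite=True, variables=["weight management", "sensitive stomach"]),
        Attribute("price", finite=False),  # continuous/infinite → treat as a filter, not pages
    ],
)
```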
Step 2 — Mine natural language
- Aggregate queries; extract entities; score salience/sentiment.
- Generate n-grams and cluster to reveal pattern templates.
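A small sketch of the n-gram part of Step 2, using nothing beyond the standard library (the query list is illustrative; clustering and fuzzy matching would sit on top of these counts):

```python
from collections import Counter
from itertools import islice

queries = [
    "best dog food for labrador",
    "best dog food for beagle puppies",
    "can dogs eat raw food",
]

def ngrams(tokens, n):
    """Yield successive n-grams from a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

counts = Counter()
for q in queries:
    tokens = q.lower().split()
    for n in (2, 3):  # bi-grams and tri-grams
        counts.update(" ".join(g) for g in ngrams(tokens, n))

# The most frequent n-grams hint at repeatable pattern templates, e.g. "best dog food for ..."
print(counts.most_common(5))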
Step 3 — Compose & score
- Generate single- and multi-variant combos.
- Score by: business fit, SERP viability, effort to create original content, and linking potential.
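A hedged sketch of Step 3 follows; the scoring weights and the four inputs (business fit, SERP viability, content effort, linking potential) are placeholders for whatever signals you actually collect:

```python
from itertools import product

entity = "dog food"
breeds = ["Labrador", "Beagle"]
food_types = ["kibble", "raw"]
needs = ["sensitive stomach", "weight management"]

def score(business_fit, serp_viability, content_effort, linking_potential):
    """Toy scoring: weight each 0-1 signal; creation effort counts against the combo."""
    return round(0.4 * business_fit + 0.3 * serp_viability
                 - 0.2 * content_effort + 0.1 * linking_potential, 2)

combos = []
for breed, food_type, need in product(breeds, food_types, needs):
    query = f"best {food_type} {entity} for {breed.lower()}s with {need}"
    # In practice these four inputs come from your own data (CRM, SERP analysis, content audit)
    combos.append((query, score(business_fit=0.8, serp_viability=0.6,
                                 content_effort=0.5, linking_potential=0.4)))

print(len(combos))  # 2 breeds × 2 food types × 2 needs = 8 combos
print(combos[0])
```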
Step 4 — Plan architecture
One hub per entity; spokes per high-value attribute; variables as:
- dedicated pages (if decision-critical),
- subheads/blocks (if supportive),
- or filters (if purely navigational).
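If it helps to make Step 4 concrete, here is a tiny, assumption-laden sketch of a hub-and-spoke URL scheme; the slugs and structure are illustrative, not a recommendation:

```python
def _slug(text):
    return text.lower().replace(" ", "-")

def url_for(entity, attribute=None, variable=None):
    """One hub per entity, a spoke per attribute, and (optionally) a page per variable."""
    parts = [_slug(p) for p in (entity, attribute, variable) if p]
    return "/" + "/".join(parts) + "/"

print(url_for("dog food"))                      # /dog-food/               (entity hub)
print(url_for("dog food", "food type"))         # /dog-food/food-type/     (attribute spoke)
print(url_for("dog food", "food type", "raw"))  # /dog-food/food-type/raw/ (variable page)
```

Variables that don't earn a dedicated page would instead surface as subheads/blocks on the spoke or as filters, per the rules above.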
Step 5 — Brief & ship
Use the EAV table directly in briefs: required attributes, variable coverage, FAQs per pattern, internal links to sibling variables.
Watch & Apply: Practical Takeaways
✨ Model first, keywords second. Let entities/attributes decide scope; let keywords tune titles and format.
✨ Favor finite attributes for programmatic pages; demote infinite ones to filters/metadata.
✨ Pattern libraries (e.g., best [type] for [use/breed]) keep briefs consistent and scale quality.
✨ Gap hunt with EAV. Compare covered vs. uncovered variables to find easy wins.
✨ Measure beyond Search Volume. Track leads/revenue per combo; some “zero-SV” pages are whales for the business.
✨ Interlink by entity. Connect variables within an attribute and across attributes to mirror real decisions.
Bottom Line
The EAV model turns keyword research into a structured, semantics-first system you can scale responsibly. By mapping entities, attributes, and variables—and composing them into proven query patterns—you’ll uncover gaps, prioritize high-intent experiences, and build an information architecture that matches how people actually decide. Do this well, and your site behaves like a skilled digital salesperson: the right detail, at the right time, in the right format.
Watch this video for a full breakdown of this concept 💜
more on Brand entity optimization for the LLM and AI Search era in the upcoming module of our Featured course 🌟
We've just given you the primer on entities and their usefulness for keyword research, but as you might have guessed from watching the video or reading this email, we've not touched upon how modern AI search systems (think AI Mode, Perplexity, ChatGPT) use entities and why that matters as a concept for brand development.
One of the two new modules in development (by the incredible Beatrice Gamba) will do just that - help you use entity linking, knowledge graphs, and brand building concepts to expand your traditional and AI search visibility. 🔥
SEMANTIC AI-powered keyword research
Two new modules are in development, to be released in November. ✨
This course features 🔥 10+ hours of content already, with a ton of practical exercises, tools, checklists, scripts, and more. We cover dozens of patents to show you exactly what you need at the keyword research phase to ensure you create meaningful content.
Heads-up 💸 the price of this course will increase when the new modules drop in November, so lock in now.
✨ Use code COMMUNITY30 for 30% off at checkout - that's our little thank you for being part of the MLforSEO community
75+ forward-thinking marketers are already taking our courses 💜
we recently started uploading lesson snippets to our YouTube channel
We have started uploading lesson snippets from our courses to our YouTube channel as a way to democratise some of the concepts we discuss and the knowledge we share.
It's also a great way to get a feel for our teaching style, even if only via snippets rather than the full lessons on the concepts featured.
More videos and practical tutorials are coming soon, so subscribe to stay in the loop with the content we publish. 💜
recent discussions from our slack community 💬
Here are some of the recent discussions there:
Or check out a small sample of the resources shared:
Join over 650 AI/ML-interested marketers on our Slack community to stay up to date with discussions on AI/ML automation in SEO and marketing.
Happy learning! ✨
Lazarina