Vibe Coding an SEO App

“Vibe coding” is the hot new term in the software engineering space and refers to using AI tools to do the majority of the coding. The phrase is attributed to Andrej Karpathy, a co-founder of OpenAI. When I heard the term recently, I was surprised to find out that Silicon Valley software engineers were using the same janky coding methods I’ve been using to build my own SEO app. Does this make me a software engineer now? After all, I’m just a career marketing guy with some front-end web skills and zero software engineering experience. Yet, according to TechCrunch, a quarter of this year’s Y Combinator startups built 95% of their codebase using AI. 95% of my codebase is also AI-generated. Despite all of the bugs, vibe coding works because the efficiency gains are so extreme.

How does vibe coding work?

Vibe coding starts with an idea. You have to think like a product manager and have a vision for the end goal first. From there, pick the AI tool of your choice based on your needs. I’ve been using a combination of ChatGPT’s o3-mini-high and Gemini 2.0 from Google AI Studio. Then I’ll write a prompt that describes conceptually what I’d like to achieve and roughly what type of output I’m expecting. Once the tool has provided the Python code, I’ll read the summary of what was supposedly done, and if it makes sense I’ll paste the code into a Google Colab notebook or into my GitHub repository and try it out. I initially worked from the command line on my Mac, but I didn’t want to be messing around with the expensive workstation I use for my professional work, so I resorted to free online tools instead.

More often than not, the first version is not quite on the mark, and sometimes it’s very far off. From there, I review the error log to see what happened. If it’s something I can fix quickly, I make the edits manually. If the issue is beyond my technical ability or too complex to be worth addressing, I’ll re-prompt the LLM or try another tool until it gets closer to what I’m hoping to achieve.

Though I’m not an engineer by trade, I’ve actually been learning Python by osmosis as I vibe code my way through building this semantic SEO app. I’m sure Python is more accessible than some other languages, which helps, but it also perfectly suits my needs as a marketer, since many Python libraries specialize in data visualization and natural language processing (NLP).

Vibe Coding Example Workflow

  1. Come up with an over-arching idea
  2. Pick your coding tools of choice (ChatGPT, Gemini, Claude, Lovable, Cursor Composer, etc…)
  3. Write a prompt that describes your idea and what you expect to see
  4. Do a quick review of the code or the summary
  5. Deploy the code (I push it to Google Colab or my GitHub repository)
  6. Test the app
  7. Review error log
  8. Ask AI to summarize the error log
  9. Fix easy bugs manually or re-prompt if the results are way off the mark
  10. Rinse & repeat until it’s production-ready
  11. Prompt different LLMs to review the production code and ask for additional ideas to improve the app
  12. Rinse & repeat

Prompting AI For Code

Here’s an example of a series of prompts that I initiated on Google AI Studio when building out the most recent feature on my SEO app.

Prompt: What is the purpose of an entity relationship graph like this, and can it help my SEO strategy? (Insert some code I was previously fiddling around with.)

Google AI Studio: Let’s break down the purpose of an Entity Relationship Graph (ERG) in the context of SEO and how the provided code example relates.

Purpose of an Entity Relationship Graph (ERG)

In the broadest sense, an ERG visually represents relationships between entities. Entities are distinct objects or concepts of interest. In the context of SEO and content, entities can be:

  • People: Authors, experts, photographers (e.g., “Rich Wong”).
  • Places: Locations, landmarks, regions (e.g., “Banff National Park,” “Canadian Rockies”).
  • Things: Products, services, objects (e.g., “Fine Art Prints,” “Camera Lens”).
  • Organizations: Companies, groups, institutions.
  • Concepts: Abstract ideas, topics (e.g., “Mountain Photography,” “Sustainability,” “Travel Bucket List”).
  • Events: Occurrences, happenings (e.g., “Photography Workshop,” “Sunrise at Lake Louise”).
  • Dates/Times: Specific points or ranges in time.

The relationships between these entities show how they are connected. These relationships can be of various types:

  • Co-occurrence: Entities appearing together in the same context (e.g., a sentence, paragraph, or webpage). This is the most basic type of relationship and is often used as a starting point.
  • Semantic Similarity: Entities that have related meanings, even if they don’t always appear together (e.g., “Lake Louise” and “Moraine Lake” are both famous lakes in Banff).
  • Hierarchical: Entities that have a parent-child relationship (e.g., “Canadian Rockies” is a parent entity to “Banff National Park”).
  • Associative: Entities that are related in some other way (e.g., “Mountain Photography” is associated with “Post-Processing Techniques”).
  • Part of: Representing a part-to-whole relationship (e.g., “Engine” is part of a “Car”).
  • Type of: Representing an is-a relationship (e.g., “Car” is a type of “Vehicle”).

How ERGs Help SEO Strategy

  • Competitive Analysis: You can create ERGs for competitor websites to understand their content strategy, identify their strengths and weaknesses, and find opportunities to create better, more comprehensive content.
  • Content Idea Generation: An ERG can reveal gaps in your content. If you see a cluster of related entities with few connections to other important concepts, it suggests an opportunity to create content that bridges those gaps. This creates a more comprehensive and interconnected body of work.
  • Internal Linking: ERGs make internal linking much more strategic. Instead of just randomly linking pages, you can link pages that represent strongly related entities. This:
    • Improves User Experience: Users can easily navigate to related content, increasing engagement and time on site.
    • Boosts Topical Authority: Strong internal linking between semantically related pages signals to search engines that your site is an authority on a particular topic. This is a major ranking factor.
    • Distributes Link Equity: Link juice flows more effectively through your site, boosting the ranking potential of individual pages.
  • Schema Markup: ERGs provide a clear structure for implementing Schema.org markup. Schema markup is structured data that you add to your HTML to help search engines understand the meaning of your content, not just the words. You can use the entities and relationships from your ERG to define Schema entities (e.g., Place, Person, Product) and their properties. This helps with:
    • Rich Snippets: Schema can enable rich snippets in search results (e.g., star ratings, event dates, product prices), making your results more visually appealing and increasing click-through rates.
    • Knowledge Graph: Schema helps Google understand your content’s relationship to other entities in its Knowledge Graph, potentially leading to better visibility in knowledge panels and other SERP features.
  • Topical Authority: By comprehensively covering a topic and demonstrating clear relationships between its subtopics (as represented in the ERG), you build topical authority. Search engines favor sites that are seen as experts on a particular subject.
  • Content Planning: An ERG can serve as a visual roadmap for your content strategy. It helps you identify core topics, related subtopics, and potential content clusters.

How to Apply This to Your Photography Website

  1. Entity Extraction: Instead of manually defining entities, you would ideally use Natural Language Processing (NLP) techniques to automatically extract entities from your website content (text, image captions, alt text, etc.). Libraries like spaCy and NLTK can help with this.
  2. Relationship Determination: You’ll need to define the rules for determining relationships. Co-occurrence is a good starting point, but you can also incorporate:
    • Semantic Similarity: Use word embeddings (like we did in the previous content idea generator) to measure the similarity between entities.
    • Internal Links: Existing internal links on your site already represent relationships.
    • User Behavior: Analyze user navigation patterns to identify relationships between pages.
  3. Graph Construction: Use networkx (or a similar library) to build the graph, using the extracted entities and relationships.
  4. Weighting: The weights show the importance of relationships. These can be based on many things, like co-occurrence frequency, internal links, or user behavior.
  5. Visualization: Use networkx and matplotlib (or a dedicated graph visualization library like Gephi or Cytoscape) to visualize the graph.
  6. Actionable Insights: Use the graph to:
    • Identify content gaps.
    • Plan new content.
    • Optimize internal linking.
    • Implement Schema markup.
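
To make those steps concrete, here is a stripped-down sketch of what that pipeline can look like in Python. This isn’t my production code or the AI’s output, just the general shape of it: the sample sentences and the set of entity types to keep are placeholders, and it assumes the spaCy en_core_web_lg model is already installed.

import spacy
import networkx as nx
import matplotlib.pyplot as plt
from collections import Counter
from itertools import combinations

# Placeholder input: in the real app these sentences come from scraped page content
sentences = [
    "Rich Wong photographs Lake Louise in Banff National Park.",
    "Banff National Park sits in the Canadian Rockies.",
]

nlp = spacy.load("en_core_web_lg")  # assumes the model has been downloaded
keep_types = {"PERSON", "ORG", "GPE", "LOC", "FAC", "PRODUCT", "EVENT", "WORK_OF_ART"}

entity_counts = Counter()
cooccurrence_counts = Counter()

for sentence in sentences:
    doc = nlp(sentence)
    ents = {ent.text for ent in doc.ents if ent.label_ in keep_types}
    entity_counts.update(ents)
    # Co-occurrence: every pair of entities appearing in the same sentence gets an edge
    for a, b in combinations(sorted(ents), 2):
        cooccurrence_counts[(a, b)] += 1

graph = nx.Graph()
for entity, count in entity_counts.items():
    graph.add_node(entity, count=count)
for (a, b), weight in cooccurrence_counts.items():
    graph.add_edge(a, b, weight=weight)

# Size nodes by frequency and thicken edges by co-occurrence weight
plt.figure(figsize=(12, 8))
sizes = [300 * graph.nodes[n]["count"] for n in graph.nodes]
widths = [graph.edges[e]["weight"] for e in graph.edges]
nx.draw_networkx(graph, node_size=sizes, width=widths)
plt.show()

Sentence-level co-occurrence is the crudest possible relationship signal, but as the response above notes, it’s a reasonable starting point before layering in semantic similarity or internal-link data.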

Prompt: Please update the code then to do this.

Google AI Studio: Gives me the code along with the summary below.

Key improvements and explanations:

  1. spaCy for NER: The code now uses spaCy’s Named Entity Recognition (NER) capabilities to automatically extract entities from the scraped text. This is a major improvement over manually defining entities.
    • en_core_web_lg Model: It uses the en_core_web_lg model, which is larger and more accurate than the default small model. The code includes error handling with a download mechanism if this model is not present.
    • extract_entities_and_relationships Function: This function takes the list of sentences and the loaded spaCy model as input. It iterates through the sentences, processes each with nlp(sentence), and then extracts entities using doc.ents.
    • Entity Filtering: It filters out entity types that are typically not relevant for SEO (dates, times, numbers, etc.). This is very important for getting a clean and meaningful graph. It keeps entity types like PERSON, ORG, GPE (geopolitical entity), LOC (location), PRODUCT, etc.
    • Entity Attributes: Entities are stored with their type (e.g., ‘ORG’, ‘GPE’) and their occurrence count.
    • Relationship Extraction (Co-occurrence): Relationships are created based on co-occurrence within the same sentence. This is a simple but effective way to identify relationships.
    • Counter for Counts: The collections.Counter class is used to efficiently count entity and relationship occurrences.
  2. Graph Creation (create_entity_graph):
    • Node Attributes: Nodes (entities) are added to the graph with attributes: type (the entity type from spaCy) and count (the number of times the entity appears).
    • Edge Weights: Edges (relationships) are added with weights based on the frequency of co-occurrence. More frequent co-occurrence means a stronger relationship.
  3. Visualization (visualize_graph):
    • Larger Figure: The plot size is increased for better readability.
    • Node Sizing: Node size is now based on the entity’s count attribute – more frequent entities are larger.
    • Node Coloring: Node color is now based on the entity’s type attribute. This makes it easier to visually distinguish different types of entities. The colors are defined for common entity types.
    • Edge Thickness: The width of edges is based on the relationship’s weight.
    • Title with URL: The plot title includes the website URL.
    • st.pyplot(plt): This is used to display the Matplotlib plot within the Streamlit app.
    • k Parameter: The spring layout’s k parameter is adjusted. This parameter controls the optimal distance between nodes. Adjusting it can improve the visual layout of the graph, making it less cluttered.
  4. Streamlit Integration:
    • Title: The app has a clear title.
    • URL Input: The user provides the website URL.
    • Spinners: Spinners are used to indicate progress during scraping, entity extraction, and graph visualization.
    • Conditional Execution: The entity extraction and graph generation are only performed if the scraping is successful.
    • Main Function: The main logic is encapsulated in a main() function, called when the script runs.
  5. Error Handling: The scrape_website_content function includes error handling for network requests and other exceptions.

How to Run and Interpret:

  1. Install Libraries: Make sure you have the required libraries installed: pip install streamlit networkx matplotlib spacy requests beautifulsoup4
  2. Run the Script: Save the code as a Python file (e.g., erg_app.py) and run it from your terminal: streamlit run erg_app.py
  3. Enter URL: Enter the website URL in the Streamlit app and click “Generate Ideas.”
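
Putting those pieces together, the overall shape of the Streamlit wrapper looks roughly like this. The scraping, extraction, and graph functions here are stubbed out with placeholder implementations and simplified signatures; only the structure (title, URL input, spinners, conditional execution, and a main() entry point) mirrors what the generated app does.

import networkx as nx
import matplotlib.pyplot as plt
import streamlit as st

def scrape_website_content(url):
    # Placeholder stub: the real function fetches the page with requests,
    # parses the text with BeautifulSoup, and returns a list of sentences.
    return ["Banff National Park sits in the Canadian Rockies near Lake Louise."]

def extract_entities_and_relationships(sentences):
    # Placeholder stub: the real function runs spaCy NER over each sentence.
    entities = {"Banff National Park": 1, "Canadian Rockies": 1, "Lake Louise": 1}
    relationships = {("Banff National Park", "Canadian Rockies"): 1,
                     ("Banff National Park", "Lake Louise"): 1}
    return entities, relationships

def create_entity_graph(entities, relationships):
    graph = nx.Graph()
    for entity, count in entities.items():
        graph.add_node(entity, count=count)
    for (a, b), weight in relationships.items():
        graph.add_edge(a, b, weight=weight)
    return graph

def visualize_graph(graph, url):
    plt.figure(figsize=(10, 8))                      # larger figure for readability
    nx.draw_networkx(graph)
    plt.title(f"Entity Relationship Graph: {url}")   # plot title includes the URL

def main():
    st.title("Entity Relationship Graph Generator")
    url = st.text_input("Enter a website URL")
    if st.button("Generate"):
        with st.spinner("Scraping website content..."):
            sentences = scrape_website_content(url)
        if sentences:  # only continue if scraping succeeded
            with st.spinner("Extracting entities and building the graph..."):
                graph = create_entity_graph(*extract_entities_and_relationships(sentences))
            with st.spinner("Visualizing..."):
                visualize_graph(graph, url)
                st.pyplot(plt)  # display the Matplotlib figure inside Streamlit
        else:
            st.error("Could not scrape any content from that URL.")

if __name__ == "__main__":
    main()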

Then I go through several rounds of feedback prompts. Sometimes the AI will give me the entire updated code for the app, but as a best practice I really don’t want to replace 100% of my code with every update, as that can cause my previously built tools to break.

Prompt: I have some of these libraries already in my Streamlit app. Can you just give me what needs to be updated?

Limiting the response to only what I need saves the time I would otherwise spend manually figuring out which code needs updating, and it keeps me from inadvertently replacing things I shouldn’t. I made a lot of rookie mistakes like this initially until I figured out a workflow that works for me.

Managing AI Code Errors

Inevitably there will be mistakes in the code that result in the app breaking or not functioning as expected. Sometimes I’ll understand the error log, while other times I have no clue, so I’ll have to prompt the LLM to diagnose the issue for me. Here is an example:

ValueError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you’re on Streamlit Cloud, click on ‘Manage app’ in the lower right of your app).
Traceback:
  File "/mount/src/entity-recognition/cosine-similarity-tools.py", line 2480, in <module>
    main()
  File "/mount/src/entity-recognition/cosine-similarity-tools.py", line 2471, in main
    entity_relationship_graph_page()
  File "/mount/src/entity-recognition/cosine-similarity-tools.py", line 2273, in entity_relationship_graph_page
    visualize_graph(graph, urls)  # Pass list of URLs
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mount/src/entity-recognition/cosine-similarity-tools.py", line 2391, in visualize_graph
    layout=go.Layout(
           ^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.12/site-packages/plotly/graph_objs/_layout.py", line 7196, in __init__
    self._process_kwargs(**dict(arg, **kwargs))
  File "/home/adminuser/venv/lib/python3.12/site-packages/plotly/basedatatypes.py", line 5755, in _process_kwargs
    super(BaseLayoutHierarchyType, self)._process_kwargs(**unknown_kwargs)
  File "/home/adminuser/venv/lib/python3.12/site-packages/plotly/basedatatypes.py", line 4378, in _process_kwargs
    raise err

I had no idea what this error log was indicating, so I pasted it back into the LLM to get a response.

Google AI Studio: The problem is titlefont_size=16. There is no titlefont_size property directly within go.Layout. The title’s font properties (including size) are controlled through a nested title dictionary, and within that, a font dictionary.

The Fix:

Replace the line titlefont_size=16 with the following nested structure:
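
In Plotly, the nested structure it’s describing looks something like this (with placeholder title text; the other layout arguments from my app are omitted):

import plotly.graph_objects as go

# titlefont_size is gone; the size now lives in the nested title.font dictionary
layout = go.Layout(
    title=dict(
        text="Entity Relationship Graph",  # placeholder title text
        font=dict(size=16),
    ),
)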

Results From AI Coding

Semantic SEO tool created through vibe coding

To date, I have created 15 SEO & SEM tools on my Semantic Search App, all through the use of AI vibe coding. I have a lot of ideas for things I would like to build, but I previously had neither the skills to build them myself nor the willingness to spend money on software engineers to do it for me; that’s why there are expensive SaaS tools out there in the market. By leveraging cutting-edge concepts here, not only do I have tools that can analyze things that weren’t previously possible, but I’m also able to save money in the long run that would otherwise have gone to expensive annual SaaS subscriptions.

Let’s Discuss Your Data-Driven Performance Marketing Needs.
