Research Question
What are experts doing to generate traffic and grow an audience around their content? What’s working, what’s not, and how is that changing?
Background
It’s clear that what works is changing
At each point in the funnel (traffic, subscription, sales), things are changing.
On the one hand, traffic is harder and more expensive to come by, users’ inboxes are inundated with offers, and competition among paid solutions is only growing.
On the other hand, technology and monetization approaches are evolving, with patron subscriptions, pay-what-you-can pricing, bundles, influencer sponsorships, niche ad monetization platforms, and better tech-enabled workflows.
Three-Pronged Approach
Right now I’m taking a three-pronged approach of survey, interview, and website data analysis.
Series of Five-Question Interviews
The interviews were roughly 20-minute calls loosely structured around five questions:
- Can you tell me what last week looked like for you? I’m looking for an idea of how you spend your time.
- What’s your biggest focus this year from a marketing standpoint?
- Compared to last year, what about your approach to generating traffic, subscriptions, and sales has stayed the same, and what has changed?
- Is SEO something you care about?
- If you could wave a magic wand and be able to do anything that you can’t do today, what would it be?
The goal was to better understand sentiment and challenges around generating consistent traffic.
Initial Surveys
Goal: Inform hypotheses, create a resource to pull from for coding dimensions of marketing efforts, and gauge willingness for deeper interviews.
Disseminated how? Email outreach, Twitter, relevant Facebook groups, and LinkedIn.
Email: I’m doing some research to better understand how experts serving large audiences online approach increasing website traffic. I’d love to get your input. Would you be willing to take a 5-minute survey?
Optional opt-in: Would it be okay if I reach out to you for an interview? It would be 100% confidential.
Website audits at scale
Graph-based approach with Screaming Frog + Neo4j
I’ve spent a fair amount of time developing a workflow to semi-automate crawling websites; extracting relevant data about content, technologies used, and navigation items; and combining it with other relevant information, like third-party data from ahrefs.com and semrush.com.
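Here’s a minimal sketch of the loading step, assuming a Screaming Frog “Internal: All” CSV export saved as internal_all.csv, a local Neo4j instance, and the column names and credentials shown; all of those depend on your export settings and setup.

```python
# Minimal sketch: load a Screaming Frog crawl export into Neo4j.
# Assumptions: an "Internal: All" CSV export saved as internal_all.csv,
# a local Neo4j instance, and the column names below; adjust all of
# these to match your own export and credentials.
import csv

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_pages(tx, rows):
    # MERGE keeps the load idempotent, so re-running after a fresh
    # crawl updates existing Page nodes instead of duplicating them.
    tx.run(
        """
        UNWIND $rows AS row
        MERGE (p:Page {url: row.url})
        SET p.statusCode = toInteger(row.status),
            p.title = row.title
        """,
        rows=rows,
    )

with open("internal_all.csv", newline="", encoding="utf-8") as f:
    rows = [
        {"url": r["Address"], "status": r["Status Code"], "title": r.get("Title 1", "")}
        for r in csv.DictReader(f)
    ]

with driver.session() as session:
    session.execute_write(load_pages, rows)
driver.close()
```

A second pass over an outlinks export can then add LINKS_TO relationships between Page nodes, which is where the graph shape starts to pay off.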
Layering in additional data
Real-life relationships are discoverable just by crawling the web. This is what Google does: it maps relationships between entities, assigns value to those relationships based on a host of factors, and then uses them to determine which sites should rank for which queries on which topics.
I’m hoping to combine correlational data (like rankings, website characteristics, and link-based relationships) with self-report data from people intentionally working to generate traffic to their sites. In the future, I’ll move toward more formalized experimental designs (changing sites and tracking the effects).
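As a sketch of what that combination could look like in the graph: suppose survey answers are coded into Tactic nodes attached to Site nodes, and a referring-domains count from a backlink-tool export has been merged onto each site. The Site and Tactic labels, the REPORTS relationship, and the referringDomains property are all illustrative assumptions, not a final schema.

```python
# Illustrative sketch: compare a backlink metric across self-reported
# tactics. The Site/Tactic labels, the REPORTS relationship, and the
# referringDomains property are assumptions, not a final schema.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (s:Site)-[:REPORTS]->(t:Tactic)
RETURN t.name AS tactic,
       count(s) AS sites,
       avg(s.referringDomains) AS avgReferringDomains
ORDER BY avgReferringDomains DESC
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["tactic"], record["sites"], record["avgReferringDomains"])
driver.close()
```

Anything a query like this surfaces is still correlational, which is exactly why the experimental step matters.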
Project Updates / Related Posts
Technical posts on the research project and related product development
- Early notes on visualizing a website with Neo4j
- Preprocessing data with Python for NLP
- Prepping Website Content Data for Graphing
- Neo4j from the command line – a walkthrough on using cypher-shell to work with a series of website data load scripts
- Visualizing a website with Neo4j and Screaming Frog data (Loading Screaming Frog Website Crawl into Neo4j Tutorial)
- Conducting a website audit with Screaming Frog and Neo4j – basically a series of Cypher queries that act as a rough clone of a Moz Site Crawl-style SEO audit report
- Prioritizing internal redirects to fix – a more specific example of a query that should probably get rolled into the above guide at some point (a sketch of this kind of query appears after this list)
- Less related, but a look at how to slice and aggregate issues by size from a SEMrush crawl report master CSV export in Google Sheets
- Crawl and scrape sites that require your login (ConvertKit example)
- Edit and write Neo4j Cypher load scripts for website data (dev plan)
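As referenced in the redirect-prioritization item above, here’s a sketch of the kind of audit query these posts build up to: find internal links pointing at redirecting pages and rank the targets by how many pages link to them. The Page label, statusCode property, and LINKS_TO relationship are assumptions carried over from the loading sketch earlier.

```python
# Sketch of an audit query: rank redirecting pages by internal inlinks
# so the highest-impact redirects get fixed first. Labels and property
# names are assumptions carried over from the loading sketch above.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

query = """
MATCH (src:Page)-[:LINKS_TO]->(dst:Page)
WHERE dst.statusCode >= 300 AND dst.statusCode < 400
RETURN dst.url AS redirectTarget,
       count(src) AS inlinks,
       collect(src.url)[..5] AS sampleLinkingPages
ORDER BY inlinks DESC
LIMIT 25
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["inlinks"], record["redirectTarget"])
driver.close()
```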