Google Search Console (GSC) is the single most valuable source of truth for any SEO. It tells you exactly what users are searching for and how Google sees your site.
But the GSC user interface is… limited. It allows you to filter by query or page, but it fails at complex analysis. You cannot easily ask: “Show me all non-branded keywords ranking between position 11 and 20 that have high impressions but low CTR.”
To get that answer, you typically have to:
Export 10,000 rows to Excel.
Write complex VLOOKUPs and Pivot Tables.
Spend 3 hours cleaning data.
Or, you can do it in 3 minutes using an LLM code interpreter (like ChatGPT's Advanced Data Analysis or Claude 3.5 Sonnet's built-in analysis tool).
At kōdōkalabs, we believe in Agentic Analytics. We don’t spend hours crunching numbers; we prompt machines to do it.
This guide will teach you how to turn your GSC exports into immediate revenue opportunities using the “Striking Distance” method.
Part 1: The "Striking Distance" Opportunity
Before we open the tool, let’s define the strategy.
Striking Distance Keywords are queries where your site ranks on Page 2 (Positions 11–20).
You are relevant enough to rank.
You are authoritative enough to be indexed.
But you are invisible to the vast majority of users (almost nobody clicks through to Page 2).
Moving a keyword from Position 12 to Position 8 is often 10x easier than moving a new keyword from Position 100 to Position 10. This is the highest ROI activity in SEO.
The problem? Manually identifying these opportunities across 5,000 pages is impractical. The solution? Code Interpreter.
Part 2: Getting the Raw Data
You have two ways to get the data.
Method A: The Simple CSV Export (Good for <1k Pages)
Log into Google Search Console.
Go to Performance > Search Results.
Set the Date Range to Last 3 months (or 6 months for more data).
Click Export > Download CSV.
You will get a zip file. Extract it. You need Queries.csv and Pages.csv.
Method B: The API (Better for >1k Pages)
The GSC UI caps exports at 1,000 rows, which is insufficient for larger sites. Instead, use a connector (like the “Search Analytics for Sheets” extension or a Python script) to pull 25,000+ rows via the GSC API; a minimal Python sketch follows the Pro Tip below.
Pro Tip: Ensure your export includes Clicks, Impressions, CTR, and Position.
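If you go the script route, here is a minimal sketch using google-api-python-client. The service-account key path, site URL, and date range are placeholders; the service account must be added as a user on your GSC property.

```python
# pip install google-api-python-client google-auth
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumption: a service-account JSON key with read access to the property
creds = service_account.Credentials.from_service_account_file(
    "service_account.json",  # hypothetical path
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

response = service.searchanalytics().query(
    siteUrl="https://www.example.com/",  # your verified property
    body={
        "startDate": "2025-01-01",   # placeholder date range
        "endDate": "2025-03-31",
        "dimensions": ["query"],
        "rowLimit": 25000,  # the API maximum per request
    },
).execute()

rows = response.get("rows", [])
print(f"Pulled {len(rows)} queries")
```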
Part 3: The Code Interpreter Workflow
Now, we feed the machine.
Open ChatGPT (Plus) or Claude 3.5 Sonnet. Ensure “Advanced Data Analysis” (or Claude's analysis tool) is enabled.
Step 1: Upload and Prime
Upload your Queries.csv file.
Prompt:
“I am uploading my Google Search Console query data. The columns are Top Query, Clicks, Impressions, CTR, and Position. Please load this dataset into a Pandas DataFrame and display the first 5 rows to confirm you understand the structure. Do not analyze yet, just confirm the data type of the ‘Position’ column is numeric.”
Why this prompt? LLMs sometimes load the “Position” column as text, usually because of decimal values or locale-specific separators in the export. This forces an explicit check before any filtering happens.
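Under the hood, the interpreter will typically generate something like this (a sketch, assuming the column headers from the standard GSC export):

```python
import pandas as pd

df = pd.read_csv("Queries.csv")

# GSC exports CTR as a string like "3.5%" -- normalize it to a float
if df["CTR"].dtype == object:
    df["CTR"] = df["CTR"].str.rstrip("%").astype(float) / 100

# Force Position to numeric; errors="coerce" turns any garbled value into NaN
df["Position"] = pd.to_numeric(df["Position"], errors="coerce")

print(df.head())
print(df["Position"].dtype)  # should print float64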
Step 2: The "Striking Distance" Filter
Now we find the money.
Prompt:
“I want to find ‘Striking Distance’ opportunities. Create a new dataframe called striking_distance that meets these specific criteria:
Position: Greater than 10 AND Less than 21 (Page 2 rankings).
Impressions: Greater than 100 (Ensure there is enough volume to care).
Brand Filter: Exclude any query containing ‘[Your Brand Name]’.
Sort the list by Impressions (Descending). Display the top 20 opportunities in a markdown table.”
The Output: The LLM will write and execute Python code to filter your messy CSV instantly. You will see a clean list of high-potential keywords that you are almost ranking for.
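Behind the scenes, the generated Python usually amounts to a few lines of Pandas filtering. A sketch (the brand term is a placeholder):

```python
BRAND = "acme"  # hypothetical brand term -- replace with yours

striking_distance = df[
    (df["Position"] > 10)
    & (df["Position"] < 21)
    & (df["Impressions"] > 100)
    & ~df["Top Query"].str.contains(BRAND, case=False, na=False)
].sort_values("Impressions", ascending=False)

# to_markdown() requires the `tabulate` package to be installed
print(striking_distance.head(20).to_markdown(index=False))
```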
Step 3: Semantic Clustering (The Advanced Move)
A list of 500 keywords is overwhelming. We need to group them by topic.
Prompt:
“Take the striking_distance dataframe. Use a lightweight NLP technique (like N-gram analysis or simple string matching) to group these queries into ‘Topic Clusters’ based on common words.
Output a summary table showing:
Cluster Name (e.g., ‘Integration’, ‘Pricing’, ‘Error Codes’)
Total Impressions in Cluster
Average Position in Cluster
Number of Queries in Cluster”
Strategic Insight: If you see that the “Integration” cluster has 50,000 impressions sitting on Page 2, you know exactly what content you need to upgrade next.
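One lightweight way the interpreter might implement this is frequency-based token labeling rather than true semantic clustering. A sketch, reusing the striking_distance dataframe from Step 2:

```python
import re
from collections import Counter

# Count non-stopword tokens across all striking-distance queries
STOPWORDS = {"the", "a", "an", "for", "to", "how", "what", "why", "is", "in", "of", "and"}
tokens = Counter(
    w
    for q in striking_distance["Top Query"]
    for w in re.findall(r"[a-z]+", q.lower())
    if w not in STOPWORDS
)
top_terms = {w for w, _ in tokens.most_common(15)}

def label(query: str) -> str:
    # Tag each query with the first high-frequency token it contains
    for w in re.findall(r"[a-z]+", query.lower()):
        if w in top_terms:
            return w
    return "other"

striking_distance["Cluster"] = striking_distance["Top Query"].map(label)

summary = (
    striking_distance.groupby("Cluster")
    .agg(
        Total_Impressions=("Impressions", "sum"),
        Avg_Position=("Position", "mean"),
        Queries=("Top Query", "count"),
    )
    .sort_values("Total_Impressions", ascending=False)
)
print(summary)
```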
Part 4: Actioning the Data (The Fix)
You have the list. Now, what do you do with it? Here is the kōdōkalabs Optimization Protocol for Striking Distance keywords.
Scenario A: The "Thin Content" Problem
Diagnosis: You rank #14 for “Enterprise CRM Benefits,” but the ranking page is a thin 500-word blog post from 2021.
The Fix:
Update Content: Add 3-4 new H2s covering related entities (use the Entity Salience guide).
Add Data: Inject a comparison table or recent statistic to boost Information Gain.
Refresh Date: Update the “Last Modified” date.
Scenario B: The "Cannibalization" Problem
Diagnosis: You see two different pages ranking #18 and #19 for the same query. Google is confused about which page is relevant.
The Fix:
Consolidate: Pick the stronger URL.
Redirect: 301 Redirect the weaker page to the stronger one.
Merge Content: Move any unique value from the weak page to the strong page.
Scenario C: The "Internal Link" Deficit
Diagnosis: The page is perfect, but stuck at #11.
The Fix:
Link Injection: Find 3-5 existing, high-authority pages on your site.
Anchor Text: Add links from those pages to the Striking Distance page using the exact match keyword as the anchor text.
Part 5: Automating the Analyst
Doing this once is good. Doing it monthly is profitable. You can ask the Code Interpreter to write a Reusable Python Script for you.
Prompt:
“This analysis was perfect. Please write a standalone Python script that I can run locally on my machine. The script should:
Take Queries.csv as an input.
Perform the Striking Distance filtering (Position 11-20, Impressions > 100).
Exclude my brand name ‘[Brand]’.
Save the result to a new file called high_priority_opportunities.csv. Provide the full code block.”
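What it hands back will look roughly like this (a sketch, not the one true output; the brand term is a placeholder and the filenames follow the prompt above):

```python
#!/usr/bin/env python3
"""Find Striking Distance opportunities in a GSC Queries.csv export."""
import pandas as pd

INPUT_FILE = "Queries.csv"
OUTPUT_FILE = "high_priority_opportunities.csv"
BRAND = "acme"  # hypothetical -- replace with your brand term

def main() -> None:
    df = pd.read_csv(INPUT_FILE)

    # Normalize columns that GSC exports as strings
    if df["CTR"].dtype == object:
        df["CTR"] = df["CTR"].str.rstrip("%").astype(float) / 100
    df["Position"] = pd.to_numeric(df["Position"], errors="coerce")

    mask = (
        (df["Position"] > 10)
        & (df["Position"] < 21)
        & (df["Impressions"] > 100)
        & ~df["Top Query"].str.contains(BRAND, case=False, na=False)
    )
    result = df[mask].sort_values("Impressions", ascending=False)
    result.to_csv(OUTPUT_FILE, index=False)
    print(f"Wrote {len(result)} opportunities to {OUTPUT_FILE}")

if __name__ == "__main__":
    main()
```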
Now you have a piece of proprietary software. You can hand this script to your junior “SEO Systems Architect” to run every Monday morning.
Part 6: Beyond Keywords (Opportunity Mining)
Striking Distance is just the start. Here are three other “Code Interpreter” plays we use at kōdōkalabs.
1. The "Low CTR" Hunter
Find pages that rank well but nobody clicks (a sign of bad Title Tags).
Prompt: “Filter for queries ranking in Position 1-3 but having a CTR below 3%. Sort by Impressions. These are likely ‘Title Tag Optimization’ opportunities.”
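In Pandas terms, the filter is two conditions (assuming CTR was normalized to a fraction, as in Step 1):

```python
# CTR of 0.03 = 3% after normalization
low_ctr = df[(df["Position"] <= 3) & (df["CTR"] < 0.03)]
print(low_ctr.sort_values("Impressions", ascending=False).head(20))
```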
2. The "Question" Extractor
Find PAA (People Also Ask) opportunities.
Prompt: “Filter the dataset for queries that start with ‘How’, ‘What’, ‘Why’, or ‘Can’. Sort by Impressions. These are candidates for a new FAQ Schema section.”
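The equivalent Pandas sketch is a single anchored regex:

```python
# Case-insensitive match at the start of the query string
questions = df[df["Top Query"].str.match(r"(?i)(how|what|why|can)\b", na=False)]
print(questions.sort_values("Impressions", ascending=False).head(20))
```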
3. The "Decay" Detector (Requires 2 CSVs)
Upload this month’s data and last month’s data.
Prompt: “Compare the two datasets. Identify queries that have lost more than 3 positions in ranking month-over-month, despite having high impressions. List these as ‘At Risk’ content.”
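A sketch of the comparison, assuming last month's export is loaded as df_prev with the same columns:

```python
merged = df.merge(df_prev, on="Top Query", suffixes=("_now", "_prev"))

# A higher Position number means a lower ranking, so a drop is a positive delta
at_risk = merged[
    ((merged["Position_now"] - merged["Position_prev"]) > 3)
    & (merged["Impressions_now"] > 100)
]
print(at_risk.sort_values("Impressions_now", ascending=False).head(20))
```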
Conclusion: Data Without Action is Vanity
Tools like ChatGPT Code Interpreter democratize data science. You no longer need a Data Analyst on your payroll to find enterprise-level insights.
But the AI cannot log into your CMS and update the content (yet). The competitive advantage belongs to the agency that can shorten the time between Insight and Action.
Run the analysis (3 minutes).
Identify the Top 10 Striking Distance pages.
Assign them to your “Content Pilots” for an Information Gain refresh.
Stop staring at the GSC dashboard. Start mining it.