A Google Ads audit is a structured review of an account's configuration, spend behavior, and conversion setup. Done properly, it covers six categories: campaign structure, keywords, bidding, ad copy, conversion tracking, and landing page alignment. The 40 checks in this list are the same ones used in COREPPC's automated audit tool, which runs across thousands of Google Ads accounts.

Most accounts have 3-5 structural issues on the first audit. The most common: broad match keywords consuming 20-35% of budget on irrelevant search queries, Smart Bidding strategies starved of conversion data (under 30 conversions per 30 days), and conversion tracking overlap that inflates reported conversions by 1.5-2x.

This checklist walks through each check in sequence. It is organized to run top-down: campaign structure first, because structural problems compound through every layer below; conversion tracking near the end, because most teams fix visible performance issues before finding the data integrity problems underneath.
How to Use This Checklist
Work through the checks in order, top-down. Structure problems flow into keyword problems, keyword problems flow into bidding problems. Fixing a keyword issue without addressing the structure underneath it is a partial fix.
Each check carries a severity rating: Critical (fix before spending another dollar), Important (fix within 7 days), or Recommended (best practice, fix when bandwidth allows). Start with Critical items. Run through all six categories. Then address Important items in order of estimated spend impact.
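The triage order above can be sketched in a few lines of Python. The severity labels come from this checklist; the finding names and the `est_spend_impact` field are illustrative, not from any real export.

```python
# Triage sketch: Critical first, then remaining findings ranked by
# estimated monthly spend impact, descending. Field names are assumptions.

SEVERITY_RANK = {"Critical": 0, "Important": 1, "Recommended": 2}

def triage(findings):
    """Sort audit findings: severity first, then spend impact descending."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f["severity"]], -f["est_spend_impact"]),
    )

findings = [
    {"check": "RSA ad strength", "severity": "Recommended", "est_spend_impact": 200},
    {"check": "Broad match waste", "severity": "Important", "est_spend_impact": 1800},
    {"check": "Duplicate conversion tags", "severity": "Critical", "est_spend_impact": 0},
    {"check": "tCPA miscalibrated", "severity": "Important", "est_spend_impact": 900},
]

for f in triage(findings):
    print(f["severity"], f["check"])
# Critical comes first even with zero estimated impact: data integrity
# issues invalidate the spend-impact estimates of everything below them.
```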
The COREPPC Google Ads audit tool runs these same 40 checks automatically via OAuth in under 5 minutes. For a manual audit, use the checklist below.
Run the same 40 checks automatically. Connect your account via OAuth and get a scored report in under 5 minutes.
Run the Audit Automatically

Campaign Structure

Campaign structure is where most account problems start. A poorly structured account cannot be fixed at the keyword or bidding level. The issues upstream determine what is possible downstream.
Campaigns should be named by platform, goal, and audience: for example, "Search | Lead Gen | Branded" or "Shopping | Revenue | Retargeting." Unnamed or inconsistently named campaigns make budget analysis and reporting impossible at scale. If you cannot tell what a campaign does from its name, you cannot analyze it reliably.
Each ad group should cover one theme. More than 20 keywords per ad group typically signals mixed themes, which lowers Quality Scores and raises CPCs. A single-theme ad group allows the ad copy to match keyword intent precisely, which is the fastest path to better Quality Score.
Search campaigns should have "Search partners" and "Display Network" unchecked unless there is a specific, documented reason. Google enables both by default. Display expansion from a Search campaign routes budget to untargeted placements with fundamentally different intent levels than search traffic.
No single campaign should consume more than 60% of total account budget unless it is the verified highest-performing campaign. Concentration risk is the fastest path to month-end budget overruns and underdelivery in other campaigns. Check the budget distribution and document why each allocation exists.
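The concentration check above reduces to one comparison per campaign. A minimal sketch, assuming a simple mapping of campaign names to daily budgets:

```python
# Flag campaigns over the 60% concentration threshold described above.
def concentration_flags(budgets, threshold=0.60):
    """Return campaign names consuming more than `threshold` of total budget."""
    total = sum(budgets.values())
    if total == 0:
        return []
    return [name for name, b in budgets.items() if b / total > threshold]

# Hypothetical budget distribution (names follow the convention above).
budgets = {
    "Search | Lead Gen | Branded": 7000,
    "Shopping | Revenue | Retargeting": 2000,
    "Display | Awareness | Prospecting": 1000,
}
print(concentration_flags(budgets))  # → ['Search | Lead Gen | Branded']
```

A flagged campaign is not automatically wrong; the check surfaces it so the allocation gets a documented justification.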
The account should have a documented reason for each campaign type in use: Search, Shopping, Performance Max, Display, Video. Campaigns added without a clear strategy tend to cannibalize each other's auction participation. If you cannot explain why a campaign type is running, audit it for overlap before keeping it.
PMax campaigns need asset coverage above 80%: headlines, descriptions, images, and videos. Under-resourced PMax campaigns default to text-only ads, which consistently show lower ad strength and reduced auction eligibility compared to fully built-out asset groups. If PMax is in the account, check the asset strength score before any other optimization.
Two campaigns targeting the same audience with the same keyword themes split impressions and inflate CPCs. Flag any campaigns with overlapping targeting. The fix is consolidation, not optimization: merging campaigns builds conversion data volume and reduces auction competition with yourself.
Keywords

Keyword issues are the most common source of wasted spend. Most show up in one of two places: the match type settings and the negative keyword lists. Check how budget waste compounds for context on why keyword problems escalate quickly.
Broad match keywords without a strong negative keyword list typically waste 20-35% of search budget. Check what share of impressions come from broad match. Over 40% broad match with under 200 negatives is a red flag. The search term report will show where the budget is going.
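The red-flag condition above (over 40% broad match impressions with under 200 negatives) is easy to compute from a keyword report export. A sketch, with assumed column names:

```python
# Hypothetical rows from a keyword report export; field names are assumptions.
keywords = [
    {"keyword": "crm software", "match_type": "BROAD", "impressions": 52000},
    {"keyword": '"crm software"', "match_type": "PHRASE", "impressions": 31000},
    {"keyword": "[best crm]", "match_type": "EXACT", "impressions": 17000},
]

def broad_match_red_flag(rows, negatives_count,
                         share_threshold=0.40, negatives_floor=200):
    """Flag the pattern above: >40% broad match impressions, <200 negatives."""
    total = sum(r["impressions"] for r in rows)
    broad = sum(r["impressions"] for r in rows if r["match_type"] == "BROAD")
    share = broad / total if total else 0.0
    return share > share_threshold and negatives_count < negatives_floor

print(broad_match_red_flag(keywords, negatives_count=45))  # → True (52% broad, 45 negatives)
```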
Every campaign needs at least a campaign-level negative keyword list. Accounts with zero negatives are funding competitor brand queries, irrelevant job-search traffic, and navigational queries. This is not an edge case. It is the default state of most unmanaged accounts.
Shared negative lists block categories of waste across the entire account. Common entries: "free," "jobs," "salary," "review," "how to," and competitor brand names (if not actively targeting them). An account-level list prevents obvious waste from entering any campaign.
Pull the last 30 days of the search term report. Filter for queries with more than $50 in spend and zero conversions. Sort by spend descending. Any query in the top 20 that is not aligned with your product or service is budget bleed. Accounts with over 100 entries in that filtered view have a structural broad match problem that cannot be fixed with a few negatives.
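The filter above takes three lines once the report is exported. A minimal sketch over a CSV-style export, with illustrative row fields:

```python
# Budget bleed filter: queries over $50 spend with zero conversions,
# sorted by spend descending. Field names are assumptions.
def budget_bleed(rows, min_spend=50.0, top_n=20):
    """Queries over `min_spend` with zero conversions, highest spend first."""
    wasted = [r for r in rows if r["cost"] > min_spend and r["conversions"] == 0]
    return sorted(wasted, key=lambda r: r["cost"], reverse=True)[:top_n]

rows = [
    {"query": "crm jobs remote", "cost": 312.40, "conversions": 0},
    {"query": "best crm for agencies", "cost": 540.10, "conversions": 12},
    {"query": "free crm download", "cost": 128.75, "conversions": 0},
    {"query": "crm pricing", "cost": 44.20, "conversions": 0},
]

for r in budget_bleed(rows):
    print(f'{r["query"]}: ${r["cost"]:.2f}')
# crm jobs remote: $312.40
# free crm download: $128.75
```

Queries that convert ("best crm for agencies") and queries under the spend floor ("crm pricing") drop out, leaving only the spend with nothing to show for it.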
Two ad groups bidding on the same query split Quality Score signals. Check search impression share by ad group to find cannibalization. When two ad groups compete for the same query, neither builds sufficient Quality Score history, and CPCs rise for both. Consolidate or use campaign-level exclusions to assign each query to one ad group.
Keywords scoring below 5/10 are paying a premium per click. Flag all sub-5 Quality Score keywords. The fix is usually ad relevance (the ad copy does not match keyword intent) or landing page experience (the landing page does not satisfy the query). Raising Quality Score from 4 to 7 on a $3 CPC keyword can reduce actual CPC by $0.40-0.60.
High-volume head terms get most of the attention. Check whether the account has keyword coverage for high-converting long-tail variants. These typically carry lower CPCs and higher purchase intent because the searcher has already narrowed down what they want. Keyword planning tool data from the last 3 months shows what you are missing.
Bidding

Bidding issues are often misdiagnosed as targeting or creative problems. The account looks like it is not working when the actual issue is a bid strategy operating on the wrong signal or starved of data.
Lead gen campaigns should use Target CPA or Maximize Conversions. Revenue campaigns should use Target ROAS or Maximize Conversion Value. Campaigns using Maximize Clicks with no conversion goal are optimizing for the wrong signal entirely. Clicks are not conversions. An account running Maximize Clicks on a lead gen campaign is paying to drive traffic, not leads.
Smart Bidding strategies need at least 30 conversions per 30 days to exit the learning phase. Accounts with fewer than 30 conversions per month on Smart Bidding show CPAs 40-60% above campaigns with adequate data. This is the most common cause of "Smart Bidding is not working" complaints. The fix is consolidation, not a new bid strategy.
On Smart Bidding data starvation: accounts running tCPA or tROAS below the 30-conversion threshold operate in a permanent learning state, and learning-phase CPAs run 40-60% above campaigns with adequate conversion data. The fix is not to abandon Smart Bidding. It is to consolidate conversion actions until a single campaign or portfolio has enough data volume to train the algorithm properly. Accounts spending $5K-$15K/month on Google Ads with multiple campaigns each starved below the threshold should consolidate into fewer campaigns, or switch to Maximize Conversions (no target) until data volume improves.
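The consolidation logic above can be made concrete: find the campaigns below the threshold, then check whether their combined volume would clear it. A sketch with hypothetical campaign data:

```python
# Data starvation check for the 30-conversion Smart Bidding threshold above.
def starved_campaigns(campaigns, threshold=30):
    """Campaigns below the monthly conversion threshold for Smart Bidding."""
    return [c["name"] for c in campaigns if c["conversions_30d"] < threshold]

def consolidation_helps(campaigns, threshold=30):
    """True if merging the starved campaigns would clear the threshold."""
    starved = [c for c in campaigns if c["conversions_30d"] < threshold]
    return len(starved) > 1 and sum(c["conversions_30d"] for c in starved) >= threshold

campaigns = [
    {"name": "Search A", "conversions_30d": 14},
    {"name": "Search B", "conversions_30d": 11},
    {"name": "Search C", "conversions_30d": 9},
]
print(starved_campaigns(campaigns))    # → ['Search A', 'Search B', 'Search C']
print(consolidation_helps(campaigns))  # → True (14 + 11 + 9 = 34 ≥ 30)
```

When `consolidation_helps` is False even after merging, Maximize Conversions with no target is the fallback described above.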
The target CPA set in the campaign should be within 20% of the account's actual average CPA over the last 30 days. Targets set too aggressively force the algorithm into over-restriction. Targets set too loosely allow unconstrained overspend. Neither is a bidding strategy. Both are misconfigurations that masquerade as performance issues.
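The 20% calibration rule above is a one-line comparison. A minimal sketch:

```python
# tCPA calibration check: target within 20% of the 30-day actual CPA.
def tcpa_calibrated(target_cpa, actual_cpa_30d, tolerance=0.20):
    """True if the target CPA is within `tolerance` of the 30-day actual CPA."""
    if actual_cpa_30d <= 0:
        return False  # no baseline to calibrate against
    return abs(target_cpa - actual_cpa_30d) / actual_cpa_30d <= tolerance

print(tcpa_calibrated(target_cpa=40.0, actual_cpa_30d=48.0))  # → True  (~16.7% gap)
print(tcpa_calibrated(target_cpa=25.0, actual_cpa_30d=48.0))  # → False (~47.9% gap)
```

The same comparison applies to tROAS targets against the 30-day actual ROAS, per the check that follows.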
Same logic as tCPA. A tROAS target of 400% on an account averaging 250% ROAS means the algorithm bids aggressively for conversions it will rarely find. Recalibrate to actual performance before optimizing. A target that cannot be achieved is not a target. It is a constraint that limits delivery.
Portfolio strategies that group campaigns with different goals or different conversion types produce mixed signals. A portfolio covering a lead gen campaign and a brand awareness campaign gives the algorithm conflicting objectives. Each portfolio should cover campaigns with the same conversion action type and similar performance baselines.
Campaigns using manual CPC should have documented bid logic by keyword and ad group. Undocumented manual bids drift over time. Six months after setup, no one knows why a keyword is bid at $2.40 vs. $1.80. Document the logic so bid audits are meaningful.
Device, location, audience, and dayparting adjustments should be based on actual performance data from the last 90 days. Set-and-forgotten adjustments from account setup often work against current performance patterns. Check whether the adjustments still reflect how the account performs today, not how it performed when it launched.
Ad Copy

Ad copy problems rarely kill performance outright. They degrade it gradually. A low-strength RSA does not fail. It just delivers fewer impressions, at worse positions, against stronger competitors.
Pinning all headlines removes Google's ability to test combinations. The best practice is to pin the most critical headline to position 1 only and leave all others unpinned. Over-pinned RSAs consistently show lower ad strength and reduced auction eligibility. Pin when necessary. Not as a default.
Flag all RSAs rated "Poor" or "Average." The threshold for acceptable performance is "Good" or better. Poor ad strength correlates with reduced auction eligibility, meaning the ad competes in fewer auctions than it should. This is a hidden impression share problem that does not show up in impression share metrics.
RSAs should use all 15 headline slots and all 4 description slots. Unused slots reduce combination testing and lower ad strength. A common gap: accounts use 8-10 headlines and leave 5-7 slots empty. The fix takes 30 minutes and typically produces a measurable improvement in ad strength within the next reporting period.
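The slot-utilization gap above is simple to quantify per RSA. A sketch against Google's published RSA limits of 15 headlines and 4 descriptions:

```python
# RSA slot utilization check against the 15-headline / 4-description limits.
MAX_HEADLINES, MAX_DESCRIPTIONS = 15, 4

def rsa_slot_gaps(headlines, descriptions):
    """Count unused headline and description slots in an RSA."""
    return {
        "unused_headlines": MAX_HEADLINES - len(headlines),
        "unused_descriptions": MAX_DESCRIPTIONS - len(descriptions),
    }

# The common gap described above: 9 headlines, 3 descriptions.
gaps = rsa_slot_gaps(
    headlines=[f"Headline {i}" for i in range(1, 10)],
    descriptions=["Description 1", "Description 2", "Description 3"],
)
print(gaps)  # → {'unused_headlines': 6, 'unused_descriptions': 1}
```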
Every campaign targeting lead generation should have at least one call extension if the business accepts phone inquiries. Callout extensions, with 3 or more per campaign, consistently improve CTR by 10-15%. Extensions are free real estate. Accounts without them are ceding ad space to competitors who have them.
Minimum 4 sitelinks per campaign. Sitelinks with duplicate destination URLs provide no testing value and may confuse searchers. Each sitelink should go to a different, relevant page. Sitelinks with meaningful distinctions (pricing vs. features vs. case studies) give the searcher information before they click.
Use structured snippets for services, product categories, or features. Accounts without structured snippets give up free ad real estate on non-brand queries. The setup takes 10 minutes. The value is incremental CTR on competitive queries.
Each ad group should have at least 2 active ads or 2 RSA variants. Single-ad groups provide no testing signal and are flagged as under-resourced by Google's optimization score. If there is only one ad in an ad group, there is no way to know whether a better version exists.
Conversion Tracking

Conversion tracking errors are the most underdiagnosed category in Google Ads audits. The most common: importing conversions from GA4 and firing a separate Google Ads conversion tag for the same event. This creates double-counting, inflating reported conversions by 1.5-2x. Smart Bidding treats the inflated count as accurate and optimizes aggressively for a phantom signal.

The second most common error: setting secondary conversion actions (page views, scroll events, session duration) as primary conversion actions. When a bidding algorithm targets a scroll event as its primary signal, it optimizes for engagement instead of revenue.

Both errors are invisible to anyone looking only at conversion counts. They only surface when auditing the conversion action configuration directly in the Google Ads interface and comparing it against the actual tag firing behavior.
The primary conversion action should be the highest-intent action in the funnel: purchase, lead form submit, phone call. Secondary actions (page views, scroll depth, session duration) should never be set as primary. When secondary actions are primary, Smart Bidding optimizes for the wrong signal and performance degrades without any visible error.
A 30-day conversion window makes sense for a SaaS trial. A 7-day window is appropriate for most ecommerce. Mismatch between the window and the actual sales cycle understates or overstates attribution. Check whether the conversion window reflects how long your buyer actually takes to decide.
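One hedged heuristic for the check above, not a Google rule: pick the smallest standard window that covers roughly twice the observed median time to convert, so late deciders still attribute. The headroom factor and the window set are assumptions for illustration.

```python
# Heuristic sketch: smallest standard window covering ~2x the median
# decision time. The 2x headroom factor is an assumption, not a rule.
STANDARD_WINDOWS = (7, 30, 90)  # days

def suggested_window(median_days_to_convert, headroom=2.0):
    """Smallest standard window covering `headroom` x the median decision time."""
    needed = median_days_to_convert * headroom
    for w in STANDARD_WINDOWS:
        if w >= needed:
            return w
    return STANDARD_WINDOWS[-1]

print(suggested_window(2))   # → 7   (fast ecommerce decision)
print(suggested_window(12))  # → 30  (SaaS trial-length decision)
```

If the suggested window disagrees with what the account is configured to use, that mismatch is the finding; the actual median time to convert comes from GA4 or CRM data, not from this sketch.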
Google's auto-applied recommendations can silently change match types, add keywords, adjust bids, and edit ad copy without human review. This should be turned off on all accounts unless there is a specific, documented reason to enable specific recommendation types. Check the status under Recommendations in the Google Ads interface.
Importing conversions from GA4 AND firing a separate Google Ads conversion tag for the same event creates double-counting. This inflates conversion totals by 1.5-2x and causes Smart Bidding to optimize for a phantom signal. Check for duplicate conversion actions tracking the same event under Tools and Settings > Conversions.
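The duplicate check above amounts to grouping conversion actions by the underlying event and flagging any event with more than one source. A sketch with illustrative field names:

```python
# Duplicate conversion tracking check: events tracked by more than one
# conversion action (e.g. GA4 import + Google Ads tag). Fields are assumptions.
from collections import defaultdict

def duplicated_events(conversion_actions):
    """Map each multiply-tracked event to the list of sources tracking it."""
    by_event = defaultdict(list)
    for a in conversion_actions:
        by_event[a["event"]].append(a["source"])
    return {event: sources for event, sources in by_event.items() if len(sources) > 1}

actions = [
    {"event": "purchase", "source": "GA4 import"},
    {"event": "purchase", "source": "Google Ads tag"},
    {"event": "lead_form_submit", "source": "Google Ads tag"},
]
print(duplicated_events(actions))
# → {'purchase': ['GA4 import', 'Google Ads tag']}
```

Each flagged event needs one source demoted to "secondary" or removed so Smart Bidding counts the conversion once.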
Accounts where the conversion path spans devices (research on mobile, purchase on desktop) should have cross-device conversions enabled. Not a critical fix but frequently unset, and the attribution gap understates mobile's contribution to final conversions.
Pull the conversion tag diagnostic from the Google Ads interface or Google Tag Assistant. Tags that fire on the wrong page (homepage vs. thank-you page) or fire multiple times per session inflate counts artificially. A tag that fires on the homepage records a conversion on every visit. This is not an edge case.
Accounts where phone calls are a lead source should use Google's call conversion tracking with a forwarding number or call duration threshold. A click-to-call event measures tap intent, not actual calls. If phone leads are part of the funnel, tracking taps instead of calls understates actual conversion volume and misrepresents lead quality.
Landing Page Alignment

The audit does not stop at the Google Ads interface. Four of the most impactful issues exist on the page the ad sends traffic to.
The landing page headline should match the keyword intent and ad copy message. If the ad promises a "free Google Ads audit" and the landing page opens with "Grow Your Business," Quality Score drops and conversion rate follows. The searcher expected to land somewhere specific. Mismatch between ad and page breaks that expectation in the first 2 seconds.
Google's threshold for acceptable LCP is under 2.5 seconds on mobile. Pull the landing page URL through PageSpeed Insights. LCP above 4 seconds correlates directly with higher bounce rates and lower Quality Scores on mobile traffic. This is one of the few landing page fixes that improves both paid and organic performance simultaneously.
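The thresholds above match Google's published Core Web Vitals bands for LCP, which a small classifier makes explicit:

```python
# LCP classification per Google's published Core Web Vitals thresholds:
# good ≤ 2.5s, needs improvement ≤ 4.0s, poor above 4.0s.
def lcp_rating(lcp_seconds):
    """Classify an LCP measurement into Core Web Vitals bands."""
    if lcp_seconds <= 2.5:
        return "good"
    if lcp_seconds <= 4.0:
        return "needs improvement"
    return "poor"

print(lcp_rating(1.9))  # → good
print(lcp_rating(3.2))  # → needs improvement
print(lcp_rating(4.6))  # → poor
```

The measurement itself comes from PageSpeed Insights as described above; this only encodes the pass/fail bands so the audit output is consistent across pages.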
The primary call to action should be visible without scrolling on both desktop and mobile. Pages where the CTA requires scroll lose 30-50% of conversions compared to above-the-fold CTA placement. Check on a phone screen, not a desktop preview.
If the landing page has both a form and a phone number, both should have conversion tracking in place. Accounts that only track form fills miss phone leads. The split between form and phone conversions varies by industry and audience. Without tracking both, optimization decisions are based on partial data.
Check where users land after form submit or purchase. The confirmation page should fire the conversion tag exactly once. Broken thank-you pages (that redirect or error out) cause tag misfires and are a common source of inflated or missed conversion counts. Test the full post-conversion flow in Tag Assistant before any bid strategy changes.
Run the Full Audit in Under 5 Minutes
This 40-check list takes 2-4 hours to run manually on a moderately complex account. COREPPC's audit tool runs the same checks automatically via OAuth in under 5 minutes. For agencies auditing client accounts before QBRs or onboarding calls, the time difference is significant.
The Google Ads audit tool reads live data from your account, scores each category, ranks issues by severity, and exports a white-label report with your agency branding.
For a manual audit, work through the checklist above in order. For agencies managing 5 or more accounts, the tool version handles the same framework at scale without the 2-4 hour per-account time cost.
Check Google Ads anomaly detection for how to monitor accounts continuously between audits. For a summary of the issues that surface most often, see common PPC audit findings.
Run the Audit Automatically
Connect your account via OAuth and get a scored report in under 5 minutes. No spreadsheets, no manual data entry.
Frequently Asked Questions
What does a complete Google Ads audit cover?

A complete Google Ads audit covers six categories: campaign structure, keywords, bidding, ad copy, conversion tracking, and landing page alignment. COREPPC's 40-check audit scores each category separately and ranks issues by severity: critical, important, or recommended. Most accounts have 3-5 critical issues on the first audit.
How long does a Google Ads audit take?

A manual audit using this checklist takes 2-4 hours for a moderately complex account (5-10 campaigns, 50-100 ad groups). An automated audit via a tool like COREPPC runs the same 40 checks in under 5 minutes. Both approaches produce the same findings. The tool version scales to multiple accounts in the same session.
How often should you audit a Google Ads account?

Quarterly at minimum. Monthly for accounts spending over $30,000/month. After any major platform update: Google's broad match behavior changes, Smart Bidding algorithm updates, and new campaign type releases all shift what needs to be checked. New client accounts should be audited before any optimization work begins.
What is the most important part of a Google Ads audit?

Conversion tracking configuration. Every other optimization decision, including bid strategy and keyword expansion, depends on accurate conversion data. If the account is double-counting conversions or tracking the wrong actions as primary, every optimization built on that data is wrong. Start with the conversion tracking category before touching anything else.
What is the difference between a Google Ads audit and an account review?

In practice, the terms are used interchangeably. A formal audit implies a structured, scored framework with documented findings. An informal review is typically a senior practitioner scanning the account for the most obvious issues. The difference is depth, not intent. This checklist represents the structured audit standard: 40 specific checks across six categories with severity ratings.
Can you audit a Google Ads account without a tool?

You can run the full checklist manually using the Google Ads interface, the Search Terms report, the Conversion Actions section, and PageSpeed Insights for landing pages. A tool like COREPPC automates the same checks and adds scoring by category. For agencies auditing multiple client accounts, automation saves 2-4 hours per account per audit cycle.
By Dror Aharon | April 28, 2026