<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
  <title>PPC Red Flag — Red Flags</title>
  <subtitle>Neutral, practitioner-built advisory for businesses paying a Google Ads agency. Spot the tactics that hurt you, ask the right questions, get a straight answer.</subtitle>
  <link href="https://ppcredflag.com/feed.xml" rel="self"/>
  <link href="https://ppcredflag.com/"/>
  <updated>2026-05-11T00:00:00.000Z</updated>
  <id>https://ppcredflag.com/</id>
  <author>
    <name>Alex Langton</name>
  </author>
  <entry>
    <title>They optimise to the cheapest cost per conversion, ignoring which products actually matter</title>
    <link href="https://ppcredflag.com/red-flags/cheapest-cpa-ignores-priorities/"/>
    <updated>2026-05-11T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/cheapest-cpa-ignores-priorities/</id>
    <summary>Not every conversion is worth the same. If your agency is steering budget toward whatever produces the lowest cost per lead, regardless of which product line carries margin or strategic priority, they&#39;ve handed your portfolio strategy to Google&#39;s auction.</summary>
    <content type="html"><![CDATA[<p>You tell your agency that Product A is the strategic focus this year. It’s the higher-margin line. It’s the one sales is staffed to support. It’s the one your CEO told the board about. Three months later, the report shows that Product B and Product C are getting the lion’s share of the spend — because they happen to convert at a lower cost per lead. The agency presents this as performance. It is, in fact, an abdication.</p>
<p>Cost per conversion (or cost per lead, or cost per acquisition — same idea, slightly different metrics) is a useful number, but it is one input into a real budgeting decision, not the entire decision. A $400 lead for a $40,000 product is a wildly better outcome than a $90 lead for a $4,000 product, even though the per-lead number favors the second. An agency that optimises to lowest cost per conversion in the absence of revenue or margin signal is not running your strategy. They are running Google’s default heuristic, and your strategic priorities are silently losing the budget allocation argument every week.</p>
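<p>To put numbers on that, a back-of-envelope sketch. The figures are the illustrative ones from the paragraph above, and it assumes the same close rate on both products; swap in your own.</p>
<pre><code># Which lead is actually the better buy?
# Illustrative numbers; the close rate is an assumption, not a benchmark.

def revenue_per_ad_dollar(deal_value, cost_per_lead, close_rate=0.10):
    """Expected revenue produced by one dollar of ad spend."""
    return (deal_value * close_rate) / cost_per_lead

product_a = revenue_per_ad_dollar(deal_value=40_000, cost_per_lead=400)  # 10.0
product_b = revenue_per_ad_dollar(deal_value=4_000, cost_per_lead=90)    # ~4.4

print(f"A: ${product_a:.2f} per ad dollar, B: ${product_b:.2f} per ad dollar")
</code></pre>
<p>The “expensive” $400 lead out-earns the “cheap” $90 lead better than two to one.</p>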
<h2>Why agencies do it</h2>
<p>It’s the easiest target to hit and the easiest to defend.</p>
<p>Cost per conversion is countable and immediate. It shows up in the platform within days. It is the number Google’s automated bidding strategies are designed to minimise out of the box. The agency can hit it without conversation, without context, without understanding what your business actually values. As long as the per-lead number trends down, the report looks good.</p>
<p>The strategic alternative — weighting conversions by product, by margin, by deal size, by sales-cycle stage, by lifetime value — requires three things most retainers don’t fund. It requires the client to tell the agency what each conversion is worth (which most clients haven’t formalised). It requires conversion-value tracking inside Google Ads, with values feeding through from your forms or your CRM (technical setup). And it requires the agency to optimise toward target ROAS or value-based bidding instead of the simpler cost-per-conversion targets, which means more variability in the early weeks and more explanation in the report meeting.</p>
<p>So the easy version wins. The agency hits cost-per-lead targets. Budget flows toward whichever product happens to be cheapest to lead-gen. Your strategy is a footnote on slide twelve.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>Spend allocation does not match the priority order you’ve communicated. The product line you’ve told the agency matters most is getting middling or low budget share. The campaign manager defends this with “that’s where the conversions are coming from.”</li>
<li>Conversion actions in Google Ads are configured without values, or with placeholder values like “1,” meaning every conversion is treated as equivalent.</li>
<li>Bidding strategies are set to “Maximise Conversions” or “Target CPA,” not “Maximise Conversion Value” or “Target ROAS.”</li>
<li>The report celebrates “efficient lead generation” without referencing which product the leads were for.</li>
<li>When you ask for a margin-weighted view of spend — “show me dollars spent and dollars produced by product line” — the agency takes a week to produce something that should be a five-minute pull, because the underlying conversion structure doesn’t support it. (A sketch of that five-minute pull follows this list.)</li>
</ul>
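<p>For reference, the five-minute version of that pull looks roughly like this, assuming a campaign export with hypothetical <code>product_line</code>, <code>cost</code>, and <code>conv_value</code> columns (rename to match your actual report):</p>
<pre><code>import pandas as pd

# "Dollars spent and dollars produced by product line" from a campaign export.
# Column names are assumptions; align them with your own export.
df = pd.read_csv("campaign_export.csv")

by_line = (df.groupby("product_line")[["cost", "conv_value"]].sum()
             .assign(roas=lambda d: d["conv_value"] / d["cost"])
             .sort_values("roas", ascending=False))
print(by_line)

# If conv_value is zero everywhere, value tracking was never configured,
# and that absence is itself the finding.
</code></pre>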
<aside class="prf-callout"><span class="prf-callout__label">Concrete example</span>
A B2B equipment manufacturer told their agency that the strategic line for the year was a new high-margin service contract, with leads worth roughly $18,000 in average lifetime value. The agency ran the account for six months optimising to a $220 cost-per-lead target across all conversions equally. Spend skewed heavily toward a legacy parts category that converted at $80 per lead but produced $1,200 in average value. The cost-per-lead number looked great. Margin-weighted return on ad spend was a fraction of what it should have been. The fix &mdash; configuring conversion values, switching to value-based bidding, and reweighting the campaign mix &mdash; took about three weeks of focused work and produced a 2.4x improvement in revenue per ad dollar over the next quarter, with the same total budget.
</aside>
<h2>What to ask your agency</h2>
<p>Two questions, in this order.</p>
<p>First: <em>“What conversion value is currently configured for each of our conversion actions in Google Ads, and how was that value derived?”</em></p>
<p>Second: <em>“If I told you Product A is twice as strategically valuable to us as Product B, how would the campaigns reflect that — specifically, what changes?”</em></p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;Each conversion action carries a value derived from your average deal size by product line, refreshed quarterly with your finance team &mdash; A is at $14,200, B at $3,800, C at $1,100. We&rsquo;re running Maximise Conversion Value across the relevant campaigns and Target ROAS where we have enough data. If A becomes twice as valuable, we&rsquo;d update the value at the conversion-action level, and the bidding algorithm would shift budget within a few weeks. We&rsquo;d also rebalance budgets at the campaign level immediately to accelerate the shift. Here&rsquo;s a model of what the next 60 days would look like under that change.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;We&rsquo;re currently optimising to a blended cost per lead target across all campaigns. Conversion values aren&rsquo;t configured at the action level &mdash; we manage priority by adjusting budgets manually based on performance. Right now, Product B is producing the most efficient cost per lead, so it&rsquo;s receiving more of the budget. Happy to discuss reallocating if you&rsquo;d like to push more budget toward A.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means your strategic priority is not encoded anywhere the algorithm can see. The agency is using one number — cost per lead — as a stand-in for the entire economic decision about where money should go. As long as that one number looks good in the deck, the underlying allocation is invisible. The agency has, without saying so out loud, decided that the auction’s view of efficiency outranks your view of strategy.</p>
<p>The fix is genuinely technical and requires both sides to do work. You owe the agency a written ranking of conversion value by product, line of business, or deal type, with rough numbers attached — even imperfect numbers, because $14,000 vs. $3,000 is a directional signal the algorithm can use. The agency owes you proper conversion-value tracking, the right bidding strategy, and a campaign structure that can be rebalanced when your priorities change. That conversation is what a real partnership looks like.</p>
<p>If the conversation produces shrugs on the agency side — “we just optimise to cost per lead, that’s our methodology” — you’ve learned where the ceiling is on what the agency is willing to do. Your portfolio strategy should not be downstream of an account manager’s methodology. If you want help putting numbers on this for your own account, that’s a good free question.</p>
]]></content>
  </entry>
  <entry>
    <title>The report regurgitates the KPIs without saying what changed or why</title>
    <link href="https://ppcredflag.com/red-flags/reports-regurgitate-kpis/"/>
    <updated>2026-05-10T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/reports-regurgitate-kpis/</id>
    <summary>A report is supposed to interpret. If it just reads the numbers back to you with adjectives like &quot;strong&quot; and &quot;consistent&quot; attached, it isn&#39;t a report; it&#39;s a screenshot with narration. The interpretation is the entire point.</summary>
    <content type="html"><![CDATA[<p>A monthly report has three jobs: tell you what happened, tell you why it happened, and tell you what the team is going to do about it. The first one is description. The second one is interpretation. The third one is commitment. The first one is the cheapest of the three by an order of magnitude, and an enormous number of agency reports stop there — they describe the numbers and call it a report.</p>
<p>A description is not insight. “Click-through rate is up 8% month over month, with strong performance across our consideration campaigns” is a sentence that contains zero information you couldn’t get by reading the dashboard yourself. A real report sentence is: “Click-through rate is up 8%, driven almost entirely by ad-copy variant C in the consideration campaign, which we shipped on April 11th. The variant emphasises ROI calculator language, and the next step is rolling that messaging into our prospecting campaign by the 25th.” The first one tells you the score. The second one tells you the game.</p>
<h2>Why agencies do it</h2>
<p>Interpretation is exposing. Description is safe.</p>
<p>Saying “X happened because of Y” commits the agency to a causal explanation. If that explanation is wrong, it’s embarrassing two months later when something else turns out to be the real driver. Most account managers, especially junior ones, learn quickly that the safe move is to describe what the numbers did and let the client draw their own conclusions. The deck looks confident. The author is shielded. Nobody can be wrong about a number that’s simply restated.</p>
<p>There’s also a labor problem on top of the safety problem. Real interpretation requires understanding cause. To say why CTR moved, the analyst has to know what changed in the account, what changed in the market, what your competitors are doing, what the seasonal pattern looks like in your category, and what your own product cycle is. That’s real thinking, every month, on every account. A template that just renders the numbers takes ten minutes. A template that interprets them takes two hours. Across a book of fifty clients, that math doesn’t work for an agency optimising for margin.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>Every chart is followed by a sentence that paraphrases the chart. CTR went up. CPC went down. Conversions held steady. The sentence doesn’t add anything.</li>
<li>Adjectives carry the weight: “strong,” “consistent,” “solid,” “in line with expectations,” “continued momentum.” None of them are anchored to a target.</li>
<li>The narrative makes no reference to specific changes the agency made. No dates, no campaign names, no “we did X on the 11th, here’s the impact.”</li>
<li>The “next month’s priorities” slide is generic enough that it could apply to any client of theirs — “continue optimising,” “refine targeting,” “test ad creative.”</li>
<li>When you ask “why did this metric change?” you get a description of the metric’s movement, not a cause.</li>
</ul>
<aside class="prf-callout"><span class="prf-callout__label">The two-sentence test</span>
Open the most recent report. Find any month-over-month change of more than 5% on a meaningful metric. The agency should have written, in the deck, two sentences about it: one for &ldquo;why did this happen&rdquo; and one for &ldquo;what we&rsquo;re doing about it.&rdquo; If those two sentences aren&rsquo;t there for the three biggest movers in the deck, the report is a description, not a report.
</aside>
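<p>If you want to run the test mechanically rather than by eyeball, a minimal sketch, assuming a hypothetical export with one row per month and one column per metric:</p>
<pre><code>import pandas as pd

# Surface every metric that moved more than 5% month over month.
df = pd.read_csv("monthly_metrics.csv", index_col="month")

current, previous = df.iloc[-1], df.iloc[-2]
delta = (current - previous) / previous

movers = delta[delta.abs().gt(0.05)].sort_values(key=lambda s: s.abs(),
                                                 ascending=False)
for metric, change in movers.items():
    print(f"{metric}: {change:+.1%}  -- the deck owes this one two sentences")
</code></pre>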
<h2>What to ask your agency</h2>
<p>Pick one specific metric movement in the most recent deck and ask: <em>“Walk me through what specifically caused that, what changed in the account or the market, and what we’re doing in response.”</em></p>
<p>Don’t accept the description. Push for the cause and the response.</p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;CPL on the consideration campaign jumped from $310 to $385 in the second half of the month. Two things changed. We expanded match types on April 12th, which broadened the query mix faster than we expected. And a competitor came back into the auction on April 18th &mdash; impression share data confirms it. The match-type change is the larger factor; we&rsquo;re reverting that this week and will rebuild the broader coverage with more curated phrase-match sets over the next three weeks. We expect CPL to return to roughly $320 within ten business days. If it doesn&rsquo;t, I&rsquo;ll flag it before the next call.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;CPL ticked up slightly month over month, which is consistent with broader market trends in your category. We&rsquo;re continuing to monitor and optimise. The fundamentals of the account remain strong and we expect performance to normalise.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means the report is doing its safe job and nothing else. The agency is describing the weather, not telling you what to wear. There is no commitment to any specific action, because no specific cause has been named, because naming a cause is exposing. The cumulative effect over six months is that you have no idea what your agency is doing, what they’ve learned, or what they’ll do differently — you just have a stack of decks that all sound vaguely positive.</p>
<p>The fix is reframing the report itself. Tell your agency the next deck needs a “what / why / so what” section for every meaningful change, with specific cause attribution and a specific response. Two sentences per movement. If that gets pushback, you’ve learned something more important than any individual number: you’ve learned that the agency would rather report safely than report usefully. That’s the conversation the renewal meeting should be about.</p>
]]></content>
  </entry>
  <entry>
    <title>Most of your paid traffic is getting dumped onto the homepage</title>
    <link href="https://ppcredflag.com/red-flags/homepage-as-landing-page/"/>
    <updated>2026-05-09T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/homepage-as-landing-page/</id>
    <summary>A homepage is a navigation hub built for everyone. A landing page is a single-purpose conversion surface built for the search you just paid for. Sending paid clicks to a homepage tells me your agency does not own the part of the funnel that converts.</summary>
    <content type="html"><![CDATA[<p>If I had to pick the single biggest reason a B2B paid-search account underperforms its potential, it wouldn’t be the bidding strategy, the keyword structure, or the audience overlay. It would be the landing page. Specifically, the absence of a real one. On at least half the accounts I audit, the bulk of paid clicks — sometimes 70% or more — are being sent to the homepage. The homepage was designed for everyone. It is, by definition, the worst possible match for a paid click that came in searching for a specific thing.</p>
<p>A landing page is a single-purpose page built for the keyword cluster that brought the visitor there. It has one headline, one promise, one form, and no navigation menu inviting the visitor to wander off. It can be tested, iterated, and scored. A homepage is a navigation hub built for fifteen different audiences at once, optimised by committee, and almost impossible to attribute conversion changes to. Sending paid traffic there is the digital equivalent of paying for a sales lead and then handing them a stack of brochures.</p>
<h2>Why agencies do it</h2>
<p>Three reasons, in order of frequency.</p>
<p><strong>Landing pages aren’t the agency’s job, technically.</strong> A standard PPC retainer covers in-platform work: campaigns, keywords, bidding, reporting. Building landing pages requires a CMS, design, copywriting, and developer time. The agency’s scope ends at the click. The client’s scope, in theory, picks up at the click. In practice nobody owns the click-to-form-fill conversion because both sides have decided it’s the other side’s problem.</p>
<p><strong>The agency doesn’t want to be measured on what happens after the click.</strong> As long as the destination is the homepage and the conversion rate is whatever the homepage produces, the agency can hide behind “that’s a website issue, not a campaign issue.” The moment you build a dedicated landing page, the agency is on the hook for landing-page conversion rate — and the gap between “could be 6%” and “currently 1.4%” suddenly has their fingerprints on it.</p>
<p><strong>Nothing has been tested.</strong> Even on accounts that do have landing pages, those pages are usually built once and never touched. Same headline, same form, same hero image for fourteen months. No A/B testing (running two versions of a page against each other to see which converts better). No iterative changes. The page might as well be a homepage for all the dynamic optimisation it’s getting.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>Pull the destination URL by spend in the Google Ads landing page report. If your domain root or your homepage URL is in the top three by spend, you have a problem. (A quick spend-share check is sketched after this list.)</li>
<li>Visit the actual landing pages. Do they have a navigation menu at the top, links to your About page, your Careers page, your Blog? Then they’re not landing pages. They’re just web pages with form fills attached.</li>
<li>Ask your agency how many landing-page tests have been run in the last 90 days. Tests, plural — A vs. B, headline vs. headline, form length vs. form length. If the answer is zero, the page is a static asset, not a conversion surface.</li>
<li>Look at the conversion rate on paid traffic. If it’s under 2% on a B2B account that’s using the homepage, the homepage is the cap. You can’t bid your way out of a 1.4% conversion rate; the math doesn’t work.</li>
</ul>
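<p>The spend-share check from the first bullet takes a few lines, assuming a landing-page report export with hypothetical <code>landing_page</code> and <code>cost</code> columns:</p>
<pre><code>import pandas as pd

# What share of paid spend lands on the homepage?
df = pd.read_csv("landing_page_report.csv")

is_homepage = df["landing_page"].str.rstrip("/").str.endswith("yourdomain.com")
share = df.loc[is_homepage, "cost"].sum() / df["cost"].sum()
print(f"{share:.0%} of paid spend goes to the homepage")
</code></pre>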
<aside class="prf-callout"><span class="prf-callout__label">Concrete pattern</span>
A B2B services client was sending ~80% of paid traffic to their homepage. Conversion rate was 1.7%. We built three dedicated landing pages, one per primary service line, with a single form, no nav, and copy that mirrored the keyword cluster driving traffic. Conversion rate on those pages stabilised at 5.2&ndash;6.4% within four weeks. Same campaigns, same budget, roughly 3x the leads. The cost of the landing pages, including iteration, was about one month&rsquo;s media spend.
</aside>
<h2>What to ask your agency</h2>
<p>Two questions.</p>
<p>First: <em>“What percentage of paid spend is currently routed to dedicated landing pages versus the homepage or other site pages?”</em></p>
<p>Second: <em>“What landing-page tests have you run in the last 90 days, and what did each one teach us?”</em></p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;78% of paid spend is on dedicated landing pages, one per major campaign theme. The homepage gets brand search only. We tested three headlines on the consideration page over the last 60 days; variant B lifted form-fill rate from 4.1% to 5.6%. Next test is form length &mdash; we&rsquo;re hypothesising that dropping two fields will lift fills another 10&ndash;15% without hurting lead quality. Results in three weeks.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;Landing page strategy is owned on the client side. We can recommend best practices but we don&rsquo;t have direct control over the website. Happy to flag opportunities for your team to act on.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means nobody owns the most important conversion lever in the entire account, and the agency is comfortable with that. The bad answer is technically accurate — a PPC retainer often doesn’t include landing-page production. It is also a tell, because the right response from a real partner is “here’s the gap, here’s how we close it, here’s who needs to do what.”</p>
<p>The fix is one of three things. Either the agency expands scope to include landing-page production and testing (and the retainer adjusts to match), or you bring in a landing-page specialist who works alongside the PPC agency, or you assign internal resources to it. All three are real answers. “Not our scope” without a follow-up suggestion is not a real answer; it’s the agency telling you they’d rather optimise the part of the funnel they get credit for than the part of the funnel that actually limits performance.</p>
<p>If you want a quick sanity check on whether your landing pages are the cap on your account, that’s a good free question.</p>
]]></content>
  </entry>
  <entry>
    <title>They don&#39;t ask, in any depth, who your customers actually are</title>
    <link href="https://ppcredflag.com/red-flags/no-customer-discovery/"/>
    <updated>2026-05-08T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/no-customer-discovery/</id>
    <summary>A campaign built without understanding your customer is a campaign built on Google&#39;s idea of your customer. The hour-long discovery call at month one isn&#39;t enough. If your agency couldn&#39;t describe your buyer in three sentences right now, they&#39;re guessing every day.</summary>
    <content type="html"><![CDATA[<p>A Google Ads campaign is, at its core, a series of bets about who your customer is, what they search for, what they read before they buy, and what makes them ready to act. Every keyword is a bet. Every audience overlay is a bet. Every landing page is a bet. The accuracy of those bets is bounded by how well the people running the account understand your customer.</p>
<p>Most agencies do roughly an hour of customer discovery during onboarding, fill in a one-page ICP (ideal customer profile) document, file it in a Google Drive folder, and never look at it again. Twelve months later they’re still running campaigns based on assumptions made in a forty-five-minute call with someone who isn’t at your company anymore. The campaigns work to the extent they happen to be correct by accident.</p>
<h2>Why agencies do it</h2>
<p>Customer understanding is the highest-leverage thing an account team can do, and the lowest-billable.</p>
<p>Spending two hours talking to your sales team about which deals close fast and which ones stall is unbillable on a fixed retainer. Reading the transcripts of three of your closed-won discovery calls is unbillable. Sitting in on a customer success call to hear what your existing customers actually say about why they bought is unbillable. All of it pays off — in better keywords, better ad copy, better audiences, better landing pages — but the payoff is diffuse and the cost is concentrated. So it doesn’t happen.</p>
<p>There is also a competence problem. Asking deep questions about a customer requires curiosity, time, and a willingness to admit you don’t already know. Agency analysts are often optimised for technical platform skill rather than commercial intuition. They can build a tightly structured campaign in six hours; they can’t tell you what your buyer is afraid of. Both skills exist on every senior team. Only one of them tends to be staffed at the level your retainer is paying for.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>Ad copy talks about features, not outcomes. Every variant says “leading provider” and “industry-leading” and lists product attributes. None of it says what your customer is trying to accomplish or what they’re afraid will happen if they get the buying decision wrong.</li>
<li>Keyword targeting is built around your product’s category nouns, not the language your customers actually use. Your customers say “machine that does X.” Your campaigns target “industrial X-ing equipment.” Different searches, different intent, different conversion rates.</li>
<li>Audiences are demographic, not behavioural. “CFOs at companies with 500–5,000 employees” instead of “people who’ve hit a pricing page in the last 30 days,” or “people who’ve read three pieces of competitor content.”</li>
<li>Nobody on the agency team has spoken to one of your customers, ever. When you ask, “Have you sat in on any of our discovery or sales calls?” the answer is no, and the suggestion that they should is treated as scope creep.</li>
<li>The original onboarding ICP document, if you can find it, is generic enough to apply to four other companies in your industry.</li>
</ul>
<aside class="prf-callout"><span class="prf-callout__label">The diagnostic question</span>
On the next call, ask: &ldquo;In your own words, who is the ideal customer for our business, what do they search for in the moment they&rsquo;re ready to buy, and what makes them stall?&rdquo; A real account team has thought about this and can answer in two minutes. A guessing account team will pivot to platform mechanics within the first sentence.
</aside>
<h2>What to ask your agency</h2>
<p><em>“Walk me through who you think our customer is. Not from the onboarding doc — from your own understanding now, ten months in. Who buys, why, and what stops them?”</em></p>
<p>If they don’t have a confident answer, that is the answer.</p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;Your buyer is a plant manager or operations director at a mid-sized manufacturer, typically replacing equipment after a downtime event has cost them more than the new system. They search by problem, not category &mdash; queries like &lsquo;reducing changeover time&rsquo; convert at 4x the rate of generic equipment terms. They stall on internal capex approval, which is why our remarketing copy hammers ROI calculators rather than features. I sat in on three of your sales calls last quarter; this is partly informed by that. We have a discovery refresh scheduled with your sales team next month.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;Based on the onboarding documentation, we&rsquo;re targeting decision-makers at mid-market B2B accounts in your verticals. We&rsquo;ve seen good engagement from those audiences and our campaigns are aligned to that ICP. Happy to revisit if there&rsquo;s additional customer information you&rsquo;d like to share.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means the campaigns are built on the platform’s defaults plus an outdated one-page summary, and the agency hasn’t done meaningful customer thinking since the kickoff call. Performance is whatever the algorithm happens to find. The agency’s value reduces to platform mechanics, which is real value but a fraction of what you’re paying for.</p>
<p>The fix is uncomfortable for both sides. Either the agency commits to a quarterly customer-discovery cadence — sales call shadowing, customer interviews, ICP refresh — and treats it as part of the retainer, or you start treating customer briefing as your own work and feed them the answers each quarter. The first version produces better campaigns. The second produces an agency that is functionally an executor of your strategy rather than a thinking partner. Both can work. The version that doesn’t work is the one where neither side is doing it and the campaigns drift.</p>
]]></content>
  </entry>
  <entry>
    <title>Every flat or declining month is blamed on a &quot;learning period&quot;</title>
    <link href="https://ppcredflag.com/red-flags/learning-period-excuse/"/>
    <updated>2026-05-07T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/learning-period-excuse/</id>
    <summary>A learning period is a real two-to-three-week phase that happens after a meaningful change. It is not a six-month explanation for why nothing has improved. When &quot;learning&quot; becomes the standing excuse, you&#39;ve got a process problem dressed up as a platform problem.</summary>
    <content type="html"><![CDATA[<p>There is a real thing in Google Ads called a learning period. When you make a meaningful change — a new bidding strategy, a significant budget shift, a structural campaign change — the algorithm needs roughly two to three weeks of data to recalibrate. During that window, performance can be unstable. After it, performance should stabilise and you should be able to read the new normal. This is real, and on a healthy account it gets cited maybe once or twice a quarter.</p>
<p>The red flag is when “we’re still in the learning period” becomes the standing answer for every month that doesn’t go the right way. It is one of the most reliable signals on this list because it requires no looking at the account to detect. You just have to count how many months in a row you’ve heard it.</p>
<h2>Why agencies do it</h2>
<p>It is the most defensible-sounding rhetorical shield in the whole vocabulary.</p>
<p>A learning period is a real concept, documented by Google, with real mechanics. It cannot be dismissed by a non-practitioner without sounding like the non-practitioner doesn’t understand the platform. So an account manager who has nothing useful to say about why a month went badly can reach for “learning period” the way a politician reaches for “it’s complicated” — technically correct, almost always insufficient, and effective at ending the conversation.</p>
<p>The mechanic underneath is usually one of three things. Either the agency is making frequent small changes that keep retriggering the learning state (which is itself a process problem), or the agency made a single change two months ago and is still claiming we’re calibrating (which is a credibility problem), or the agency has made no change at all and is using “learning” as a generic term for “we don’t want to commit to why this happened” (which is the worst version).</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>The phrase “learning period” appears in two or more consecutive monthly meetings.</li>
<li>When you ask, “Learning from what change, made on what date?” you don’t get a specific date or a specific change.</li>
<li>The campaign change history (visible in Google Ads under Tools &gt; Change History) shows either constant micro-edits or no meaningful changes for the past 30+ days. (A cadence check on that history is sketched after this list.)</li>
<li>Performance stays flat or declines but the explanation doesn’t evolve. Three months in a row of “still calibrating” is not three different explanations.</li>
<li>Smart Bidding strategies (Maximise Conversions, Target CPA, Target ROAS) are being changed every four to six weeks, which keeps the account perpetually in the learning state by design.</li>
</ul>
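<p>The cadence check is simple enough to script, assuming you export the meaningful changes (bid strategy, budget, structure) from Change History into a hypothetical CSV with a <code>date</code> column:</p>
<pre><code>from datetime import timedelta
import pandas as pd

# Does the change cadence support a "learning period" story at all?
changes = pd.read_csv("change_history.csv", parse_dates=["date"]).sort_values("date")

gaps = changes["date"].diff().dropna()
if gaps.empty:
    print("One meaningful change or none: nothing is currently 'learning'.")
elif gaps.median() &lt; timedelta(days=21):
    print("Median gap under ~3 weeks: the account is being perpetually reset.")
else:
    print(f"Median gap between meaningful changes: {gaps.median().days} days")
</code></pre>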
<aside class="prf-callout"><span class="prf-callout__label">A diagnostic move</span>
On the next call, ask: &ldquo;What was the last change made to a bidding strategy on this account, on what date, and what was the rationale?&rdquo; If the answer is more than 30 days old, the &ldquo;learning&rdquo; excuse is dead. If the answer is less than seven days old and there have been multiple recent changes, you have an agency that can&rsquo;t leave the account alone long enough for it to actually stabilise.
</aside>
<h2>What to ask your agency</h2>
<p>Two questions, in this exact order. The order is the trap.</p>
<p>First: <em>“What specific change triggered the current learning period, and on what date?”</em></p>
<p>Second, regardless of what they answer: <em>“What does the trailing-90-day trend look like? If we’re still calibrating from a change made over three weeks ago, that’s longer than the platform documents. What’s actually happening?”</em></p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;The most recent meaningful change was a switch from Maximise Conversions to Target CPA on the consideration campaign on April 14th. We&rsquo;re fifteen days in. The first two weeks looked unstable, which is normal. Last week stabilised at a CPA of $284, which is 8% over our $260 target. We&rsquo;ll let it run another seven to ten days to confirm and then either tighten the target or roll back. We&rsquo;re not making other changes during that window so the data stays clean.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;Smart Bidding takes time to find its footing, especially in a competitive market. We&rsquo;re continuing to monitor and we expect performance to normalise as the algorithm finishes calibrating. Patience is critical with machine learning systems &mdash; pulling levers too aggressively just resets the learning.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means the agency is using a real Google Ads concept as a fog of jargon, in the hope that you don’t know enough to notice. It is the verbal equivalent of slowing down at a yellow light forever. Calling it out by name — “the platform documents learning periods at two to three weeks; we’re past that, so what’s the actual issue?” — collapses the rhetoric. Either you get a real answer or you get the next layer of fog, and the next layer of fog tells you everything you need to know about whether to keep going.</p>
<p>In my own auditing work, this is the single most common phrase I see used as cover. When I get into the account and pull change history, the “learning” that’s allegedly happening is one of two things. It’s either nothing — no real changes for weeks — or it’s churn, where the agency keeps tweaking small settings that keep restarting the algorithm. Both explanations are real. Neither is the explanation the client got.</p>
]]></content>
  </entry>
  <entry>
    <title>Nobody is tracking what happens after the lead form gets submitted</title>
    <link href="https://ppcredflag.com/red-flags/no-post-form-tracking/"/>
    <updated>2026-05-06T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/no-post-form-tracking/</id>
    <summary>A form fill is not a customer. If your agency optimises to leads but has no idea which of those leads close, the algorithm is being trained on the wrong outcome and you&#39;re paying to scale up junk.</summary>
    <content type="html"><![CDATA[<p>A form fill is not a customer. It is the opening of a conversation that may or may not become one. The cost-per-lead number on slide three of your monthly report is the cost of opening that conversation, not the cost of acquiring revenue. If your agency reports cost per lead and stops there, they are reporting half a metric — the half that’s convenient to optimise toward, and the half that has the loosest connection to whether the campaign actually works.</p>
<p>The honest version is closed-loop tracking. The lead leaves Google Ads, lands in your CRM, gets qualified, gets disqualified or progresses, and either becomes a customer or doesn’t. That outcome flows back into Google Ads as an offline conversion (data fed back from your CRM into the ad platform) and the bidding algorithm starts optimising toward the leads that actually close, not just the leads that fill out forms. Setting this up takes a half-day to a day depending on your CRM. The fact that most agencies don’t insist on it tells you what they actually optimise for.</p>
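<p>For a sense of the shape of the work, a minimal sketch of the export step, assuming your forms captured the <code>gclid</code> (the click ID Google appends to landing-page URLs) and your CRM can report it next to the closed deal. The headers follow Google’s offline-conversion import template; verify against the current template before uploading anything.</p>
<pre><code>import csv

# CRM rows to a Google Ads offline conversion upload (sketch).
crm_rows = [
    {"gclid": "EAIaIQ...", "closed_at": "2026-04-28 14:30:00", "revenue": 18000},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for row in crm_rows:
        writer.writerow([row["gclid"], "Closed Deal", row["closed_at"],
                         row["revenue"], "USD"])
</code></pre>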
<h2>Why agencies do it</h2>
<p>It is the cleanest example on this list of agency incentives diverging from client outcomes.</p>
<p>A lead form fill is fast feedback. It happens within a click of the ad. It is countable, attributable, and easy to put on a slide. The agency’s monthly report can show “142 leads at $87 cost per lead” on the third of every month, the moment the previous month closes. That number renews retainers.</p>
<p>A closed lead is slow feedback. The B2B sales cycle for the kind of business that pays an agency $5,000–$30,000 a month is usually 30 to 120 days. The agency would have to wait two to four months to know whether a given month’s campaign actually produced revenue. By the time the truth surfaces, three more renewal cycles have happened. The structural incentive is to optimise for fast feedback even when fast feedback isn’t the right outcome.</p>
<p>Add the technical work involved — CRM integration, conversion mapping, value assignment, ongoing data quality checks — and you have a feature that is genuinely useful to the client, costly to set up, and quietly de-prioritised by an agency triaging where to spend its hours.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>The monthly report shows leads, cost per lead, and conversion rate. It does not show closed leads, customer acquisition cost, or revenue.</li>
<li>When you ask, “Of the 142 leads from last month, how many closed?” the answer is “That’s a question for your sales team” or “We don’t have visibility into the CRM.”</li>
<li>Google Ads conversion settings show conversions firing on form submit only — no offline conversion imports, no CRM integration via Zapier or HubSpot or Salesforce.</li>
<li>The campaign optimisation strategy in the platform is “Maximise Conversions” or “Target CPA,” based on form fills as the conversion event. The algorithm has no idea which of those form fills became customers.</li>
<li>Performance Max is running “by conversions” without conversion values that reflect actual revenue.</li>
</ul>
<aside class="prf-callout"><span class="prf-callout__label">What the data usually reveals</span>
On most B2B accounts I audit, when we get closed-loop data flowing in, we discover that 60&ndash;80% of the campaigns producing the cheapest cost per lead were producing the worst-quality leads. The bidding algorithm had been trained for months or years to scale up traffic that closed at a fraction of the average rate. Real customer acquisition cost was 2&ndash;4x what the agency was reporting. Reallocating budget toward the campaigns that actually closed often recovers 20&ndash;35% of efficiency in a quarter.
</aside>
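<p>The gap between the two numbers is worth working out once by hand. An illustrative run, using the report line from earlier and an assumed close count:</p>
<pre><code># The same month's spend, scored two ways. The close count is an assumption.
spend = 142 * 87                  # $12,354: "142 leads at $87 cost per lead"
closed_deals = 6                  # what the CRM reports a month or two later

reported_cpl = spend / 142        # $87, the slide-three number
real_cac = spend / closed_deals   # ~$2,059, the number that matters
print(f"Reported CPL: ${reported_cpl:.0f}  Real CAC: ${real_cac:.0f}")
</code></pre>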
<h2>What to ask your agency</h2>
<p><em>“What’s the closed-loop tracking setup, and which campaigns are producing leads that actually close?”</em></p>
<p>That question has a direct answer or it doesn’t. There is no middle.</p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;We import qualified-lead and closed-deal events from your CRM into Google Ads daily, with revenue values attached. The bidding algorithm optimises to qualified leads, not raw form fills. Our monthly report shows cost per qualified lead and customer acquisition cost by campaign. Last month, brand search closed at 18% and our consideration campaigns at 6%; we&rsquo;ve shifted budget accordingly. Here&rsquo;s the breakdown.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;We track conversions at the form-fill level, which is industry-standard. Closed-loop attribution depends on your CRM data and is typically owned by the client side. We focus on what we can directly influence within the platform.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means the campaigns are being optimised toward the wrong target and have been the entire time. Every dollar of spend has trained the bidding algorithm’s machine learning on “produce more form fills,” not “produce more customers.” In a healthy market this looks fine because the lead-to-customer rate is roughly stable. In a soft market, or a market where lead quality has dropped, it’s catastrophic and invisible — the agency keeps reporting “steady cost per lead” while your sales team complains the leads are unworkable, and nobody on either side has the data to connect the two.</p>
<p>The fix is technical and not particularly hard. A half-day with a competent agency, a conversation with whoever owns your CRM, a working integration via Zapier or a native connector, and a thirty-day calibration period before the algorithm has enough closed-loop signal to optimise on. If your agency tells you it’s “not their lane,” the conversation about whose lane it is becomes the actual conversation about whether to keep this agency.</p>
]]></content>
  </entry>
  <entry>
    <title>The monthly report has no real month-over-month comparison on conversions</title>
    <link href="https://ppcredflag.com/red-flags/monthly-reports-no-mom-comparison/"/>
    <updated>2026-05-05T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/monthly-reports-no-mom-comparison/</id>
    <summary>A report without month-over-month conversion comparison isn&#39;t a report. It&#39;s a marketing brochure for the agency. The single most diagnostic format costs nothing to produce, which is exactly why some agencies don&#39;t.</summary>
    <content type="html"><![CDATA[<p>A monthly PPC report has exactly one job: tell you what changed in the last month, why it changed, and what the team is going to do about it. To do that job, the report has to have month-over-month comparison on the metrics that matter — specifically on conversions, cost per conversion, and conversion rate, broken down by campaign or campaign group. If those columns aren’t in the deck, the report is doing a different job. The job it’s doing is making the agency feel renewable.</p>
<p>This is the single most diagnostic red flag on this list, because it costs nothing to produce a real report. The numbers exist in the platform. Pulling them takes ten minutes. Choosing not to put them in front of you is, almost always, a deliberate choice.</p>
<h2>Why agencies do it</h2>
<p>Three reasons, with the third being the one that should worry you.</p>
<p><strong>Pre-formatted templates.</strong> Most agencies use a standard report template across all clients. The template was built once, by someone who optimised for “looks good in a presentation,” and it leads with totals because totals are easy to make look healthy. The agency doesn’t custom-build a template per client unless asked. Asking is enough to fix it on a half-decent agency.</p>
<p><strong>Inertia.</strong> The report goes out. The client doesn’t ask for changes. The report keeps going out in the same format. Months go by. Years go by. Nobody is hiding anything, but nobody is showing you anything either. The cure is also asking.</p>
<p><strong>Selection.</strong> This is the bad one. The numbers being shown are the numbers that look good. The numbers being omitted are the numbers that don’t. Month-over-month conversion comparison gets left out specifically because it would tell you that conversions have been slowly trending down for four months, while clicks and impressions have been trending up. The aggregate report shows a healthy account. The MoM trend on conversions tells the truth.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>The deck shows current-month totals (clicks, conversions, spend, ROAS) but no comparison column to last month or the same month last year.</li>
<li>Charts span only the current month, so trend direction is invisible.</li>
<li>Campaign-level breakdown exists but doesn’t include MoM deltas (changes from month to month).</li>
<li>The narrative paragraph at the top of the deck uses words like “steady,” “consistent,” or “in line with expectations” without telling you which expectations.</li>
<li>When you ask for a quarter-over-quarter view, you get a custom one-off pull and it takes three days to arrive. (That’s the tell — it should take ten minutes.)</li>
</ul>
<aside class="prf-callout"><span class="prf-callout__label">Concrete pattern</span>
On a recent audit, the agency&rsquo;s monthly report showed a healthy 4.2x ROAS for the current month. When I pulled the trailing six months in five minutes inside the platform, ROAS had been: 6.8, 6.1, 5.4, 4.9, 4.5, 4.2. Each individual monthly report had said the account was &ldquo;performing in line with expectations.&rdquo; Cumulatively the account had lost 38% of its efficiency in half a year, and the reporting format was structurally incapable of saying so.
</aside>
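<p>That six-month pull is trivial to reproduce, which is the point. Using the numbers from the audit above:</p>
<pre><code># The trailing-six-month view the monthly deck never showed.
roas = [6.8, 6.1, 5.4, 4.9, 4.5, 4.2]

mom = [(b - a) / a for a, b in zip(roas, roas[1:])]
total = (roas[-1] - roas[0]) / roas[0]

print("MoM:", ", ".join(f"{d:+.1%}" for d in mom))
print(f"Cumulative: {total:+.1%}")  # roughly -38%, invisible one month at a time
</code></pre>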
<h2>What to ask your agency</h2>
<p>The simplest possible request: <em>“Going forward, the monthly report needs a column for last month and a column for the same month last year, on conversions, cost per conversion, conversion rate, and ROAS. Per campaign. With percentage deltas.”</em></p>
<p>You don’t need their permission. You’re telling them what the report looks like now.</p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;Yes, we can have that in the next deck. We&rsquo;ll add the trailing-six-month chart by default too &mdash; it&rsquo;s a more honest read than month-over-month alone because it filters out single-month noise. The narrative will lead with the trend on conversions and CPL, and we&rsquo;ll explicitly note any month where the trend is moving in the wrong direction.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;We can look into customising the template. Our standard reporting framework was built around best-practice metrics. Adding too many comparison points can make the deck harder to read and may not tell the right story month to month given normal seasonality.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>The bad answer is the agency telling you, gently, that they don’t want to put MoM comparisons in writing. There is no other reason to push back on a request this small. The work is ten minutes. The data is in the platform. The only cost is that some months the numbers will tell an inconvenient story and the agency will have to explain it on the call instead of glossing over it.</p>
<p>Some version of this conversation is the test of whether the relationship is worth keeping. If you ask for a real reporting format and the agency says yes and delivers it within one cycle, you have an agency that can be partnered with even if other things on this list are also true. If you ask and you get pushback, soft language, or a quarter of delays, the reporting format isn’t the actual problem. The reporting format is the symptom of an agency-client relationship in which the agency has decided what you’re allowed to see.</p>
<p>The hardest part of fixing this isn’t technical. It’s the moment in the next report meeting when the new MoM column shows red on conversions and the agency has to actually explain why — and you, sitting in your seat, have to be ready to ask the follow-up question. That’s the moment this whole site exists for. If you want help preparing for it, send me a question.</p>
]]></content>
  </entry>
  <entry>
    <title>Performance Max is running with zero search-term transparency</title>
    <link href="https://ppcredflag.com/red-flags/pmax-search-term-blackbox/"/>
    <updated>2026-05-04T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/pmax-search-term-blackbox/</id>
    <summary>Performance Max can be a perfectly good campaign type. But if your agency runs it without tight category exclusions, brand exclusions, and regular search-term-insight reviews, you&#39;ve handed Google a budget and asked it not to tell you where the money went.</summary>
    <content type="html"><![CDATA[<p>Performance Max (Google’s campaign type that bundles Search, Display, YouTube, Discover, Gmail, and Shopping into a single automated campaign) is not inherently a red flag. Used carefully, with the right exclusions and the right oversight, it can outperform manually structured campaigns for the right business. The red flag is when it is running with no exclusions, no brand carve-out, and no human review of where the spend is actually going — which is, in my experience auditing accounts, the default state on most retainers.</p>
<p>The problem with Performance Max is that it deliberately reduces transparency. You don’t see ad placements the way you would in a Display campaign. You don’t see the full search terms report the way you would in a Search campaign. You see what Google chooses to show you, and Google’s incentives are not perfectly aligned with yours. If your agency hasn’t built guardrails around that fact, you’re paying for a blank check.</p>
<h2>Why agencies do it</h2>
<p>Two reasons.</p>
<p><strong>It is the path of least resistance.</strong> Performance Max is what Google’s account reps push during their quarterly check-ins with agencies. The pitch is real and it works: less manual labor, broader inventory access, automated creative. For an agency managing a hundred accounts, an automated campaign type that produces decent-looking aggregate numbers with minimal touch is a profitability lifeline. The implicit trade is that some accounts get under-served because the optimisation is delegated to Google.</p>
<p><strong>It produces conversions that look attributable.</strong> Performance Max is aggressive about claiming credit for conversions, including ones that would have happened through brand search anyway. If your brand is searchable and PMax is running without a brand exclusion, the campaign will report a strong cost per acquisition number that is partly your existing brand demand cannibalised. The agency reports a healthy CPA. The CPA is, in fact, partially fictional.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>A Performance Max campaign exists and consumes 25% or more of total spend.</li>
<li>There is no brand exclusion list applied at the account or campaign level — meaning PMax is allowed to bid on searches for your own company name.</li>
<li>There is no negative keyword list applied at the account level (Google now allows account-level negatives for PMax; many agencies haven’t set them).</li>
<li>The category exclusions inside Performance Max settings are blank.</li>
<li>When you ask for the search themes that drove conversions, your agency tells you Google doesn’t share that data — which used to be true and is now mostly false. Search-term insights and asset-group-level performance are available; they have to be requested and reviewed.</li>
</ul>
<aside class="prf-callout"><span class="prf-callout__label">Tell-tale phrasing</span>
&ldquo;Performance Max is a black box, that&rsquo;s how it&rsquo;s designed.&rdquo; Half-true two years ago, mostly false today. The right phrasing is &ldquo;PMax is less granular than Search, but here&rsquo;s what we do see and here&rsquo;s how we&rsquo;re bounding it.&rdquo;
</aside>
<h2>What to ask your agency</h2>
<p>Three questions, and the order matters.</p>
<p>First: <em>“What brand-name searches is PMax allowed to bid on, and how do you know?”</em></p>
<p>Second: <em>“Show me the account-level negative keyword list and the category exclusions inside the PMax campaign settings.”</em></p>
<p>Third: <em>“What does the most recent search-term insights review tell us, and when is the next one scheduled?”</em></p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;Brand search is excluded at the account level using the brand exclusion list &mdash; here are the eight variants we&rsquo;ve included. We pulled out competitor terms and three irrelevant categories last quarter. Account-level negatives have 62 entries. Search-term insights are reviewed monthly; last review surfaced two new themes that were eating budget without converting and we&rsquo;ve excluded them. PMax represents 31% of spend and is running 18% below target CPA on incremental conversions, with brand-cannibalisation modeled out.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;PMax is fully automated &mdash; the algorithm decides where to show ads. We monitor performance at the campaign level and the CPA is currently strong. Adding too many exclusions can limit the algorithm&rsquo;s ability to find new conversions.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means PMax has been turned on, the surface-level numbers look fine, and nobody is doing the work to figure out whether the surface-level numbers are real. The phrase “adding too many exclusions can limit the algorithm” is the rhetorical move that signals an agency has decided not to bound the campaign — which is convenient for them and expensive for you.</p>
<p>The first thing I do on any audit involving Performance Max is check the brand exclusion. If it’s missing on a brandable business with any organic search demand, I can usually find that ten to thirty percent of the campaign’s reported “conversions” were going to happen regardless. Strip those out and the actual incremental cost per acquisition is wildly different from what the agency has been reporting. That conversation tends to change the renewal meeting.</p>
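<p>The arithmetic behind that claim is short. An illustrative case, with the non-incremental share set inside the range above:</p>
<pre><code># Reported vs. incremental CPA once brand cannibalisation is stripped out.
spend, reported_conversions = 30_000, 100  # illustrative figures
non_incremental = 0.25                     # inside the 10-30% range above

reported_cpa = spend / reported_conversions                               # $300
incremental_cpa = spend / (reported_conversions * (1 - non_incremental))  # $400
print(f"Reported CPA: ${reported_cpa:.0f}  Incremental CPA: ${incremental_cpa:.0f}")
</code></pre>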
<p>The fix is rarely killing PMax. The fix is bounding it: brand exclusion, account-level negatives, category exclusions, monthly search-term insights review with the findings shared in writing. Twenty minutes of setup, an hour a month of review. If your agency won’t agree to that, you’re not really running Performance Max — you’re running “whatever Google felt like buying that month.”</p>
]]></content>
  </entry>
  <entry>
    <title>Every site visitor is treated as the same audience</title>
    <link href="https://ppcredflag.com/red-flags/no-audience-segmentation/"/>
    <updated>2026-05-03T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/no-audience-segmentation/</id>
    <summary>A first-time visitor and a returning prospect who already requested a quote should not be bid on the same way. If your agency is treating them identically, your budget is being averaged into mediocrity.</summary>
    <content type="html"><![CDATA[<p>A first-time visitor who has never heard of you and a returning prospect who already filled out your contact form last week are two completely different commercial situations. Bidding on them with the same strategy — same budget, same ad copy, same landing page, same expected conversion rate — is one of the easiest ways to throw money away in a Google Ads account, and one of the most common.</p>
<p>Audience segmentation (splitting your traffic into groups based on what they’ve already done with your business) and using those groups as bid modifiers, exclusion lists, or dedicated campaigns is not advanced. It is table stakes. The fact that it isn’t happening on most accounts I audit is a tell about how the work is actually being done.</p>
<h2>Why agencies do it</h2>
<p>Same root cause as the broad match story: it’s labor that doesn’t show up in the report deck.</p>
<p>Setting up proper audience structure means defining segments (recent buyers, abandoned cart users, contact form submitters, customers, lookalike of customers, prospects who’ve hit your pricing page, prospects who haven’t), making sure each one is wired correctly to your tag setup or your CRM, building bid modifiers or separate campaigns for each, and revisiting the structure every quarter as your business evolves. It’s a half-day of work to set up cleanly and an hour a quarter to maintain. On a $5,000-a-month retainer, an agency that takes their margin seriously has to triage what gets done. Audience segmentation often loses the triage.</p>
<p>There’s also a competence issue. Doing this well requires the analyst to ask uncomfortable questions about your funnel: where do prospects drop off, what is the typical sales cycle, what does intent actually look like in your business. Junior analysts without account-management training don’t ask those questions, so the campaigns get built without that input and every visitor lands in the same bucket.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>Open Audiences in your Google Ads account. A mature search account typically carries at least four to six applied audiences; if you see fewer, that’s a starting point for concern. (If you have API access, this check and the next can be scripted; see the sketch after this list.)</li>
<li>Look for whether audiences are applied as “Observation” (Google reports on them but doesn’t change bidding) or “Targeting” (campaigns actually use them). Observation-only across the board on a mature account means the data has been collected but never acted on.</li>
<li>Check whether you have campaigns dedicated to remarketing (showing ads to people who’ve already visited your site) versus prospecting (cold traffic). If everything is mixed in one campaign, you can’t allocate budget intelligently between the two.</li>
<li>Ask whether your customer list is uploaded as an audience (for exclusion or for lookalike modeling). If the answer is “not currently” or “we haven’t set that up,” the agency is leaving money on the table that takes ten minutes to claim.</li>
</ul>
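<p>A minimal sketch of that scripted check, assuming read access through Google’s official Python client; it lists remarketing (user-list) audiences attached at the campaign level with their bid modifiers. Audiences applied at the ad-group level, and the Observation-versus-Targeting setting itself, live elsewhere in the account and aren’t shown here.</p>
<pre><code># Sketch: list user-list (remarketing) audiences attached to campaigns,
# with any bid modifiers. The customer ID is a placeholder. Assumes the
# google-ads Python library and a configured google-ads.yaml.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()
ga_service = client.get_service("GoogleAdsService")

query = """
    SELECT
      campaign.name,
      campaign_criterion.user_list.user_list,
      campaign_criterion.bid_modifier,
      campaign_criterion.negative
    FROM campaign_criterion
    WHERE campaign_criterion.type = 'USER_LIST'
"""

for batch in ga_service.search_stream(customer_id="1234567890", query=query):
    for row in batch.results:
        crit = row.campaign_criterion
        role = "excluded" if crit.negative else f"bid x{crit.bid_modifier or 1.0}"
        print(f"{row.campaign.name}: {crit.user_list.user_list} ({role})")
</code></pre>
<p>An empty result on a mature search account is itself the finding: nothing is attached, so nothing is being segmented.</p>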
<aside class="prf-callout"><span class="prf-callout__label">Concrete example</span>
A manufacturing client was bidding the same way on every search campaign visitor. We split their structure into three audiences: existing customers (excluded entirely &mdash; they don&rsquo;t need to see paid ads to buy again), prospects who&rsquo;d been to the pricing page in the last 30 days (bid +40%), and everyone else (default bid). Cost per lead dropped 22% in five weeks with no other changes. The work took about six hours.
</aside>
<h2>What to ask your agency</h2>
<p><em>“Show me the audience segmentation strategy currently running on our search campaigns. I want to see the segments, whether each one is set to Observation or Targeting, and what the bid adjustments are.”</em></p>
<p>A real agency will send you a one-page document within a day. A struggling agency will send you a screenshot of the Audiences tab and call it a strategy.</p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
&ldquo;Five segments running in Targeting: existing customers (excluded), 30-day pricing-page visitors (+35%), 90-day site visitors (+15%), customer-list lookalikes (+10%), and a general prospecting bucket at default. We review the bid adjustments quarterly against the observed cost per lead for each segment. The customer list refreshes weekly from your CRM.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;We&rsquo;re using Smart Bidding, which automatically optimises for conversions across all audiences. The algorithm handles segmentation for us. Adding manual audience layers can actually limit machine learning performance.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>The bad answer is half-true, which is what makes it a red flag instead of a lie. Smart Bidding (Google’s machine-learning auction bidding) does optimise across audiences if you give it audience signals to optimise on. But it can only optimise for what it can see, and the difference between an existing customer and a cold prospect — or between someone who hit your pricing page yesterday and someone who came in via a content marketing search a month ago — is not visible to the algorithm unless someone has actually wired those segments in.</p>
<p>What the bad answer reveals is that the agency has either delegated thinking to Google’s automation or is using “Smart Bidding” as the rhetorical justification for not having done the segmentation work. Either way, it’s the same outcome for you: cold traffic and warm traffic getting bid the same, your existing customers getting served paid ads they don’t need, and a budget that’s averaging itself into mediocrity.</p>
<p>The fix is the half-day of work I described above. If the agency won’t do it as part of the existing retainer, that’s a conversation about scope. If they will but it doesn’t actually get done within three weeks, that’s a conversation about whether the retainer is staffed at all.</p>
]]></content>
  </entry>
  <entry>
    <title>Broad match consumes most of your budget and there&#39;s no negative keyword strategy</title>
    <link href="https://ppcredflag.com/red-flags/broad-match-no-negatives/"/>
    <updated>2026-05-02T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/broad-match-no-negatives/</id>
    <summary>When 60-80% of your spend goes to broad match keywords and the negative keyword list hasn&#39;t grown in months, you&#39;re paying Google to show your ads to people who will never buy from you.</summary>
<content type="html"><![CDATA[<p>Every keyword in a Google Ads account carries a match type that determines how loosely Google interprets it: broad match (Google decides what counts as related) versus phrase and exact match (you decide). Broad match can be useful in disciplined hands. It can also be the single biggest waste of money in your account, and most clients have no way to tell which version they’re getting.</p>
<p>The way you tell is by looking at the search terms report (the list of actual queries people typed before clicking your ad) and at your negative keyword list (the list of queries you’ve told Google not to match against). If the search terms report is full of queries that have nothing to do with what you sell, and the negative list hasn’t had a meaningful update in three months, you’re funding the long tail of Google’s imagination.</p>
<h2>Why agencies do it</h2>
<p>Three reasons, in roughly this order.</p>
<p><strong>It scales.</strong> Broad match plus Google’s automated bidding produces volume on demand. Ask your agency for more clicks or more leads and broad match is the lever they pull, because it’s a setting change rather than the slow work of writing better ads, building tighter campaigns, or improving landing pages. The volume comes fast. The relevance is the cost, and the cost is invisible to you unless you go looking for it.</p>
<p><strong>It hides labor.</strong> Maintaining a negative keyword list is unglamorous, ongoing work that has to happen every week or two on a real account. It takes ten to forty minutes per pass depending on volume. Multiply that across an agency’s book of clients and you’re looking at significant unbillable hours. Broad-match-without-pruning is what happens when an account is too small to justify that labor, but the agency doesn’t want to renegotiate the retainer downward.</p>
<p><strong>It looks fine in the report.</strong> The numbers Google shows you for broad match campaigns — clicks, conversions, ROAS — are real. They just leave out the half of the spend that bought traffic that will never convert. As long as the agency reports on aggregate performance and not on search-term-by-search-term spend, the bad half is statistically invisible.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>Pull a 90-day search terms report (the list of actual queries that triggered your ads). Sort by spend. If more than ten of the top fifty queries are obviously off-topic for your business, you have a problem. (This pull, and the match-type split two bullets down, can be scripted; see the sketch after this list.)</li>
<li>Look at the negative keyword list. If it has fewer than 30–50 entries on an account that’s actively spending, or hasn’t been updated in 60+ days, you have a different version of the same problem.</li>
<li>Pull spend by match type. If broad match is over 60% of total spend with no carve-outs by intent or audience, ask why.</li>
<li>Watch for query patterns like job titles (“careers,” “jobs,” “salary”), DIY-intent terms (“how to,” “free,” “tutorial”), competitor-adjacent fluff that doesn’t convert, and product categories you don’t actually sell.</li>
</ul>
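<p>A minimal sketch of those two pulls, assuming read access through Google’s official Python client; the customer ID, dates, and fifty-row cutoff are placeholders to adjust.</p>
<pre><code># Sketch: top search terms by spend, plus broad match's share of
# keyword spend. Customer ID and dates are placeholders. Assumes the
# google-ads Python library and a configured google-ads.yaml.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()
ga_service = client.get_service("GoogleAdsService")
CUSTOMER_ID = "1234567890"
DATES = "segments.date BETWEEN '2026-02-01' AND '2026-04-30'"

terms_query = f"""
    SELECT search_term_view.search_term, metrics.cost_micros,
           metrics.conversions
    FROM search_term_view
    WHERE {DATES}
    ORDER BY metrics.cost_micros DESC
    LIMIT 50
"""
for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=terms_query):
    for row in batch.results:
        cost = row.metrics.cost_micros / 1e6
        print(f"${cost:,.2f}  {row.metrics.conversions:.1f} conv  "
              f"{row.search_term_view.search_term}")

match_query = f"""
    SELECT ad_group_criterion.keyword.match_type, metrics.cost_micros
    FROM keyword_view
    WHERE {DATES}
"""
spend = {}
for batch in ga_service.search_stream(customer_id=CUSTOMER_ID, query=match_query):
    for row in batch.results:
        match_type = row.ad_group_criterion.keyword.match_type.name
        spend[match_type] = spend.get(match_type, 0) + row.metrics.cost_micros

total = sum(spend.values()) or 1
print(f"Broad match share of keyword spend: {spend.get('BROAD', 0) / total:.1%}")
</code></pre>
<p>Off-topic queries at the top of the first list, or a broad-match share past 60% in the second, is the same evidence the checklist above describes, just in a form you can re-run monthly without asking anyone.</p>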
<aside class="prf-callout"><span class="prf-callout__label">Concrete example</span>
A B2B SaaS account I audited in 2024 was spending $11,000 a month, of which roughly $4,200 was going to broad match queries containing the words &ldquo;jobs,&rdquo; &ldquo;careers,&rdquo; and &ldquo;remote.&rdquo; The agency&rsquo;s monthly report showed a healthy aggregate cost per lead. The line item that produced 87% of the leads was a single brand-search ad group; everything else was burning. The negative keyword list had not been updated in nine months.
</aside>
<h2>What to ask your agency</h2>
<p>Two questions, in order. The second one is the test.</p>
<p>First: <em>“Send me the top fifty search terms by spend over the last 90 days, with conversion rate and cost per lead next to each.”</em></p>
<p>Then, after they send it: <em>“Walk me through the negative keyword cadence. How often is the list reviewed, who does it, and what was added in the last 60 days?”</em></p>
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;Negatives are reviewed every two weeks by our SEM lead. Last 60 days we added 47 negatives, mostly job-intent terms and a competitor cluster that was bleeding budget. Here&rsquo;s the audit log. We pulled broad match share from 71% to 48% over the last quarter; the carve-outs are running on phrase and exact for the highest-intent queries.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;Broad match is the recommended setting and lets Google&rsquo;s machine learning find the right traffic. We monitor performance at the campaign level. Negatives are added when we see issues. The system is working as designed.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means nobody is doing the work. The phrase “negatives are added when we see issues” is functionally equivalent to “we don’t look unless something breaks badly enough to show up in the aggregate report,” which on a smaller account can be never. “The system is working as designed” is the part where you should pay attention — whose design, exactly?</p>
<p>The fix is rarely as dramatic as switching agencies. It is almost always a 90-minute conversation in which you ask for a 30-day negative-keyword cleanup, a written cadence going forward (“every two weeks, list goes to you in writing”), and a target for broad-match share that comes down by ten or fifteen percentage points over the next quarter. A real agency will say yes and have it done in three weeks. An agency that says yes and then nothing happens is telling you something more important than any report ever will.</p>
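<p>For scale on what the cleanup actually involves: adding negatives is a small mutate, not a project. A minimal sketch, again with placeholder IDs; it adds a handful of job-intent negatives at the campaign level, though most accounts would put them in a shared negative keyword list instead (same idea, one more service call).</p>
<pre><code># Sketch: add a few job-intent negative keywords to one campaign.
# Customer and campaign IDs are placeholders; a real cleanup would
# usually target a shared negative keyword list applied account-wide.
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage()
service = client.get_service("CampaignCriterionService")
campaign = client.get_service("CampaignService").campaign_path(
    "1234567890", "9876543210"
)

operations = []
for text in ["jobs", "careers", "salary", "how to", "free"]:
    op = client.get_type("CampaignCriterionOperation")
    criterion = op.create
    criterion.campaign = campaign
    criterion.negative = True
    criterion.keyword.text = text
    criterion.keyword.match_type = client.enums.KeywordMatchTypeEnum.PHRASE
    operations.append(op)

response = service.mutate_campaign_criteria(
    customer_id="1234567890", operations=operations
)
print(f"Added {len(response.results)} negative keywords")
</code></pre>
<p>The point of showing it isn’t that you should run it yourself; it’s that “we don’t have time for negatives” is a claim about priorities, not about effort.</p>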
]]></content>
  </entry>
  <entry>
    <title>Your monthly report leads with clicks and impressions, not leads</title>
    <link href="https://ppcredflag.com/red-flags/reporting-clicks-not-leads/"/>
    <updated>2026-05-01T00:00:00.000Z</updated>
    <id>https://ppcredflag.com/red-flags/reporting-clicks-not-leads/</id>
    <summary>When your business pays for sales-qualified leads, but the report meeting opens with click volume and impression share, you&#39;re being measured on the wrong scoreboard.</summary>
    <content type="html"><![CDATA[<p>The first slide of your agency’s monthly report is where they tell you what they want you to think mattered last month. If that slide leads with click volume, impressions, click-through rate, or impression share — and you are paying for leads — you are being measured on a scoreboard your agency picked, not the one your business runs on.</p>
<h2>Why agencies do it</h2>
<p>It is rarely malicious. It is structural.</p>
<p>Click and impression metrics are abundant, immediate, and almost always trending up if budget is steady. Lead metrics are scarcer, slower, and brutally honest. Account managers are trained to lead with the metrics that pattern-match to growth, because the report meeting is the one moment of the month where renewal risk gets calibrated. If the first slide looks like a win, the rest of the meeting is graded against a forgiving baseline.</p>
<p>There is also a competence ladder underneath this. Senior practitioners are comfortable opening with leads and conversion rate (the percentage of clicks that become leads — the only number that connects ad spend to your business). Junior practitioners aren’t, because if leads are flat the senior practitioner has a thoughtful explanation and the junior one has “results take time.” Leading with clicks lets a junior team look fluent.</p>
<h2>What it looks like in your report or account</h2>
<ul>
<li>The first three to five slides are click volume, impressions, CTR (click-through rate — the percentage of people shown the ad who actually clicked it), and impression share, in that order.</li>
<li>Lead volume, cost per lead, and conversion rate appear somewhere on slide eight or later, often without month-over-month comparison.</li>
<li>Pipeline and revenue contribution are not in the deck at all — or appear once a quarter as a one-line summary.</li>
<li>The narrative paragraph at the top of the deck uses words like “visibility,” “awareness,” and “reach” on a lead-generation account.</li>
<li>When you ask about leads, you get a redirect to “quality of traffic” or “intent signals.”</li>
</ul>
<aside class="prf-callout"><span class="prf-callout__label">Tell-tale phrasing</span>
&ldquo;Impression share is up 14% month over month, putting us in a strong position heading into Q3.&rdquo; In a position for what? If the answer isn&rsquo;t &ldquo;more leads,&rdquo; the sentence is fluff.
</aside>
<h2>What to ask your agency</h2>
<p>Ask the question that puts the scoreboard back on your business: <em>“Walk me through leads, cost per lead, and conversion rate for the last 90 days. What changed and why?”</em></p>
<p>Listen for what comes after “and why.” That is the entire test.</p>
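<p>If you want the answer in hand before they give theirs, the three numbers fall out of any 90-day campaign export. A minimal sketch; the column names and filename are assumptions, so match them to the headers in your actual CSV.</p>
<pre><code># Sketch: leads, cost per lead, and conversion rate from a 90-day
# campaign export. The filename and column names ("Clicks", "Cost",
# "Conversions") are assumptions; adjust to your export, and note
# that some exports prepend title rows above the header.
import csv

clicks = cost = leads = 0.0
with open("campaign_export_90d.csv", newline="") as f:
    for row in csv.DictReader(f):
        clicks += float(row["Clicks"])
        # Strip currency formatting some exports include, e.g. "$1,234.56"
        cost += float(row["Cost"].replace("$", "").replace(",", ""))
        leads += float(row["Conversions"])

print(f"Leads: {leads:.0f}")
if leads:
    print(f"Cost per lead: ${cost / leads:,.2f}")
if clicks:
    print(f"Conversion rate: {leads / clicks:.1%}")
</code></pre>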
<div class="prf-answers">
  <div class="prf-answer prf-answer--good">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&check;</span> Good answer</div>
    <div class="prf-answer__body">
      &ldquo;Leads are down 11% over the trailing 90. Two campaigns drove it: brand search held flat at $42 CPL, but our non-brand consideration campaigns went from $310 to $440 CPL after we expanded match types in March. We&rsquo;re reverting that change this week, here&rsquo;s the rollback plan, and we expect to see CPL recover within ten business days. If it doesn&rsquo;t I&rsquo;ll flag it before the next call.&rdquo;
    </div>
  </div>
  <div class="prf-answer prf-answer--bad">
    <div class="prf-answer__label"><span class="prf-answer__icon" aria-hidden="true">&times;</span> Bad answer</div>
    <div class="prf-answer__body">
      &ldquo;Lead volume can be a noisy metric month to month. We&rsquo;re focused on quality over quantity right now and we&rsquo;re seeing strong intent signals in the traffic mix. The fundamentals are healthy and we expect the funnel to catch up.&rdquo;
    </div>
  </div>
</div>
<h2>What it means if you get the bad answer</h2>
<p>It means one of two things, and they are about equally common.</p>
<p>The first: the person on the call doesn’t actually know what happened. Either they didn’t look before the meeting, or they did look and don’t understand what they saw. This usually points to a junior analyst running the account with thin oversight from above. Not catastrophic, but you are paying senior rates for junior work.</p>
<p>The second: the person on the call does know, and the news is bad enough that they are managing the moment instead of telling you. This is more serious. If your agency reflexively softens bad news instead of naming it, every problem you have is going to surface a month or two later than it should — which is the difference between a recoverable quarter and an unrecoverable one.</p>
<p>Either way, the move is the same: ask for the leads-first version of the report, in writing, before the next meeting. If they push back on producing it, that’s the answer.</p>
]]></content>
  </entry>
</feed>
