Waddup y'all! Hope everyone had a great week.
Last week I broke down how to figure out whether your content or your funnel is the real problem. (If you missed it, go read "Stop Killing Ads That Are Actually Working." It'll change how you look at your ad account.)
This week I want to talk about the other side of that coin. What do you do when you want to find MORE winners?
Because here's the thing. Most founders test creative the wrong way. And it's costing them.
The mistakes I see over and over
When most founders "test creative," here's what actually happens.
They throw a bunch of different ads into a campaign. Different hooks, different formats, different messaging, different audiences. They let it run for a few days, look at the results, pick a winner, and kill everything else.
That's not testing. That's gambling.
Here's what's wrong with that approach:
They test too many variables at once. If you change the hook AND the format AND the messaging AND the audience all at the same time, you have no idea which variable actually made the difference. Did the ad win because of the hook? The format? The angle? You'll never know. And if you don't know WHY something won, you can't replicate it.
They kill ads too early. I've seen founders turn off ads after 48 hours because the CPA looked high. That's not enough data. You need statistical significance before you make decisions, and most early-stage budgets don't generate that in two days. (I'll show you what "enough data" looks like right after this list.)
They don't have a baseline. If you're testing 5 new ads against each other, cool. But which one are you comparing to your PROVEN winner? Without a control, you're just picking the best of an unknown batch. That's not the same as finding something better than what's already working.
They test the wrong things. Changing the background music or the text color isn't a creative test. Those are cosmetic tweaks. The variables that actually move the needle are the hook, the format, and the messaging angle. That's it.
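On the "killing ads too early" point, here's what "enough data" actually means. Below is a back-of-the-envelope sketch using a standard two-proportion z-test. This isn't proprietary tooling, just the textbook check; the numbers are made up, and you'd plug in clicks and conversions from your own ad exports.

```python
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is ad B's conversion rate really different from ad A's?
    conv_* = conversions, n_* = clicks (or impressions) for each ad."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return p_a, p_b, p_value

# Hypothetical 48-hour numbers: the challenger "looks" worse at 1.5% vs 2.0%.
p_a, p_b, p = z_test_two_proportions(conv_a=12, n_a=600, conv_b=9, n_b=600)
print(f"control {p_a:.1%}, challenger {p_b:.1%}, p-value {p:.2f}")  # p is about 0.51
```

A p-value around 0.5 means that gap is indistinguishable from noise. Kill the challenger at hour 48 and you may have just shot a winner.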
How I actually test creative
Here's the framework I use with every client. It's not complicated, but it is disciplined. And discipline is where most teams fall apart.
Step 1: Start with what's already winning.
This is the part most people skip. Before you test anything new, you need to know what's working right now. What's your best performing ad? What format is it? What's the hook? What's the messaging angle? What audience is it running to?
That ad becomes your control. Your baseline. Everything you test gets measured against it.
If you don't have a winner yet, look at your landing page. Look at your best performing organic content. Look at your reviews. Start with whatever has already proven it resonates with your customer.
Step 2: Test one variable at a time.
Take your winning hook and test it across different formats. The same hook, shot as a founder talking head. The same hook as UGC. The same hook with B-roll over a voiceover. The same hook in a podcast-style clip.
Keep the messaging the same. Keep the audience the same. The ONLY thing that changes is the format.
Once you find which format wins with that hook, you move to the next variable. Now test different hooks within that winning format. Then test different messaging angles.
It's slower. It's less exciting. But now when something wins, you know exactly WHY it won. And you can replicate that over and over.
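If it helps to see Step 2 concretely, here's a toy sketch of a one-variable test plan expressed as data. Every name in it is a placeholder, not something from a real account. The hook, angle, and audience are locked to the control; the only column that moves is format.

```python
# Control: the proven winner. Every test below holds hook/angle/audience
# constant and changes exactly one variable: format.
control = {"hook": "hook_A", "format": "founder_talking_head",
           "angle": "angle_core", "audience": "aud_broad"}

formats_to_test = ["ugc", "broll_voiceover", "podcast_clip"]

test_plan = [
    {**control, "format": fmt, "name": f"TEST_{control['hook']}_{fmt}"}
    for fmt in formats_to_test
]

for ad in test_plan:
    print(ad["name"], "| only variable changed: format ->", ad["format"])
```

When one of those wins, you lock format too and repeat the exact same exercise with hooks.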
Step 3: Keep 80%+ of budget on winners.
This is the risk management piece that nobody talks about.
You're not putting your whole budget into test creative. You're keeping the vast majority of spend on the ads that are already working while carving out a small portion for testing. This keeps your blended CAC low while you hunt for the next winner.
If a founder is spending $10K/month on ads, I'm not going to risk half of that on unproven creative. We run $8K+ on the proven stuff and test with the rest. If a test hits, we scale it into the winner pool. If it doesn't, we didn't blow the budget finding out.
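Here's the quick math on why that split matters, as a sketch you can rerun with your own numbers. The CAC figures are invented for illustration: winners converting at $50, unproven tests at a painful $150.

```python
def blended_cac(winner_spend, winner_cac, test_spend, test_cac):
    """Blended CAC = total spend / total customers across both buckets."""
    customers = winner_spend / winner_cac + test_spend / test_cac
    return (winner_spend + test_spend) / customers

# Hypothetical: $10K/month total, winners at $50 CAC, tests at $150 CAC.
print(f"80/20 split: ${blended_cac(8_000, 50, 2_000, 150):.2f} blended CAC")
print(f"50/50 split: ${blended_cac(5_000, 50, 5_000, 150):.2f} blended CAC")
```

Even with tests converting at triple the cost, the 80/20 split only drags blended CAC from $50 to about $58. The 50/50 split pushes it to $75. Same total budget, very different month.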
Step 4: Keep it organized.
Every test needs to be trackable. Same naming conventions. Same reporting structure. If you can't look at your data and immediately see which hook/format/angle combination produced which result, your testing framework is useless.
I keep each test within the same campaign line so we can compare apples to apples. The goal is to build a library of what works, not just find one winner and pray.
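To show what I mean by trackable, here's a minimal sketch of a naming convention. The exact fields and separators are yours to pick; the point is that every variable is encoded in the ad name itself, so any export can be grouped by hook, format, or angle without guesswork. The field values here are placeholders.

```python
def ad_name(hook: str, fmt: str, angle: str, version: int) -> str:
    """Encode the test variables directly into the ad name."""
    return f"{hook}__{fmt}__{angle}__v{version}"

def parse_ad_name(name: str) -> dict:
    """Recover the variables from any reporting export."""
    hook, fmt, angle, version = name.split("__")
    return {"hook": hook, "format": fmt, "angle": angle, "version": version}

name = ad_name("hook-painpoint", "ugc", "angle-parents", 1)
print(name)                 # hook-painpoint__ugc__angle-parents__v1
print(parse_ad_name(name))  # which combo produced which result, instantly
```

That's the whole library-building mechanism: if the name carries the variables, every result teaches you something.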
The test that changed everything for a client
Let me tell you a story about what happens when this framework actually works.
I was working with a DTC brand that had a proven winner. One specific format, one specific hook, one specific angle. It was the bread and butter. It's how the business grew.
But here's the thing about bread and butter. If that's all you have, you're one audience fatigue cycle away from a serious problem. So we kept that winner running and started testing new angles.
Through customer calls (the founder literally called customers and recorded the conversations, with consent obviously), we uncovered a completely different angle. Instead of the messaging that had always worked, we positioned the product for parents who could use it for both themselves AND their kids.
We tested it. New format. New angle. New hook.
It hit.
But here's the part that blew my mind. We didn't just find a new ad that worked. We uncovered an entirely new audience segment for the brand that nobody had been targeting.
Parents buying for their whole family was a completely untapped ICP. And now we had creative that spoke directly to them.
Remember what I talked about last week? Once you know the content is working, you build the landing page to match. So we built an LP positioned specifically to that parent audience. Now you don't just have one winner running to one audience. You have MULTIPLE winners running across completely different subsets of people.
The brand didn't just double. We're talking 3x, 4x, 5x what it was doing before. Because multiple ads are scaling across multiple ICPs simultaneously. That's multiplicative growth, not linear.
And it all started with a disciplined creative test.
The framework, simplified
If you take nothing else from this newsletter, here's the cheat sheet:
Find your winner. If you don't have one, start with what's working organically or on your landing page.
Test one variable at a time. Hook, format, or messaging angle. Never all three at once.
Protect your budget. 80%+ on winners, the rest on tests. Keep your blended CAC healthy.
Track everything. If you can't explain why something won, the test was wasted.
When you find a new winner, build the infrastructure to support it. New audience? Build the LP. New angle? Update the funnel. Don't just throw it into the same campaign and hope it works.
One last thing
The difference between brands that plateau and brands that scale isn't luck. It's not even budget. It's having a system for consistently finding new winners without blowing up what's already working.
If you're running ads and your entire business depends on one or two winning creatives, you're sitting on a ticking time bomb. And if you're "testing" by throwing spaghetti at the wall and hoping something sticks, you're wasting money.
I help brands build this exact system. If you want me to look at how you're testing creative and tell you what I'd change, just reply to this email. No pitch. No sales call. Just reply and tell me what you're working on.
Talk soon,
Chase
