In talking with prospective customers on a daily basis, I field a lot of questions about the science of bid optimization. Over the years, though, the nature of those questions has shifted from “can you” to “how do you,” and they have become increasingly complex as search marketers grow savvier. And even though we’ve been optimizing search campaigns using predictive portfolio methodology since 2005, the marketplace has only recently adopted the approach widely.
Taking a trip down memory lane to when I first started working with IgnitionOne (then SearchIgnite), I thought I’d share my personal recollection of how bid management questions have gotten more sophisticated, tougher, and longer over the years.
But is there a point at which the evolution of our questions and understanding (or lack thereof) of bid science will actually come at the expense of our campaign performance? Are we relying too much on automated bid science to drive results? Can bid and campaign optimization be completely automated by algorithms? Is an algorithm going to put me out of a job? ARE ALGORITHMS TAKING OVER THE WORLD?!
Okay – maybe I slipped too far down the slope on that last one, but hopefully you see where I’m going with this. As far as we’ve come in our understanding of rules versus portfolio logic, I’ve noticed a surprising backlash against portfolio optimization tools lately. More and more marketers have been coming to me saying that they have portfolio tools, and they understand how to use them (theoretically, anyway), but they…
- Can’t customize them for specific business rules or campaign goals
- Aren’t getting results as advertised
- (and my personal favorite) Just don’t trust them
These are all valid points if you’re relying on a black-box algorithmic portfolio model to handle your bid optimization in a vacuum. So the question becomes: when it comes to bid optimization, are you book smart or street smart?
If you’re book smart, you can probably whiteboard a visualization of how your bidding technology handles keyword clustering for optimization. You have every bid completely automated, with multiple portfolios across millions of keywords organized thematically, geographically, and by device (well, for now – thanks, Google…). You haven’t touched a bid in years, and you trust your little black box because it contains a revenue-crunching, bid-busting algorithm. And it’s smarter than you (right?). Wrong!
But if you’re street smart, you know that you possess two forms of logic that even the best algorithms never will: common sense and foresight. You know that the best-performing campaigns are optimized by modeling marginal cost against marginal return, but you also demand transparency into the decision science. So when your algorithm recommends bidding down a keyword based on its rank return profile, but you know you’re about to launch a promotion against that particular category of products, you can opt out of its recommendation without jeopardizing the relative performance of the portfolio.
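To make that concrete, here is a minimal sketch of the idea in Python. Everything here is illustrative and hypothetical – the function names, the fixed 5% bid step, and the override flag are my own inventions, not any vendor’s actual model – but it shows the shape of the logic: bid toward the point where marginal revenue meets marginal cost, and let a human veto the recommendation when they know something the data doesn’t.

```python
def recommend_bid(current_bid, marginal_cost, marginal_revenue, step=0.05):
    """Nudge the bid toward the point where marginal revenue equals marginal cost.

    If the last increment of spend still returned more than it cost, bid up;
    if it returned less, bid down. The 5% step is an arbitrary placeholder.
    """
    if marginal_revenue > marginal_cost:
        return round(current_bid * (1 + step), 2)  # profitable at the margin: bid up
    return round(current_bid * (1 - step), 2)      # unprofitable at the margin: bid down


def apply_recommendation(current_bid, recommended_bid, override=False):
    """A human override (say, an upcoming promotion) keeps the current bid
    instead of accepting the algorithm's recommendation."""
    return current_bid if override else recommended_bid
```

So a keyword earning $0.80 of marginal revenue per $0.50 of marginal cost gets bid up, one earning $0.50 per $0.80 gets bid down – unless the street-smart marketer flips the override because a promotion is about to change that keyword’s return profile entirely.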
So is it better to be book smart or street smart? Duh – both, of course! You don’t need to write your own algorithms, but do remember there’s no such thing as a silver bullet when it comes to SEM, and even the smartest algorithms require human intervention every once in a while.
So ask yourself (and your technology provider):
“Do I have…?”
- Transparency (into keyword level bid optimizations)?
- Input (over what data set(s) the algorithm is incorporating into the model)?
- Control (over individual keyword bid optimizations within the portfolio)?