24 Top Paid Search Metrics Explained

We typically focus on more advanced marketing tactics and deeper analysis of the advertising business, but every now and then it's nice to take a step back and address more fundamental topics.  In our discussions of paid search, we're often quick to throw around abbreviations and terminology that may not be instantly discernible to all audiences, so we thought we'd offer a primer on paid search metrics.

While much of the jargon of paid search overlaps with that of other marketing channels and business in general, there are a few items here that are PPC-specific.  For those who are new to or have just a casual connection to any of these fields, there may be only a few items here that are familiar.  To those who are well acquainted with these metrics, hopefully we can offer a few tips here and there that will make this worth your while.

Without further ado...

Traffic Metrics:

Impression - An impression occurs when your paid search ad appears on the search engine results page (SERP).  Impression data is provided by the engines via their online interface or through API reports.

Click - An easy one: clicks are when the user clicks on your ad and visits your landing page.  Again, this data is provided by the engines, but it can also be determined by on-site analytics, often with greater detail about the user.

Click-through Rate (CTR) - CTR is the ratio of the number of clicks your ad receives to the number of impressions it receives (Clicks/Impressions).  Click-through rate is strongly influenced by the position of your ad on the SERP and your company's name recognition, but compelling ad copy also provides a boost.  High-level CTR trends can be deceiving, so take caution in your analysis.

Average Position - This is the average position where your ad appeared on the SERP, with the top position on the page being 1.  Because of the nature of the auction and increasingly personalized results pages, it can be difficult to interpret the average position metric in isolation.  For example, an ad with 10 impressions in position 1 and 1 impression in position 10 will have an average position of 1.8.  Also, it's possible that increasing an ad's bid will end up lowering its average position.  Google has recently added a new data segmentation option that provides a bit more insight on ad position.
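To see where that 1.8 comes from, average position is simply an impression-weighted mean: (10 × 1 + 1 × 10) / (10 + 1) = 20 / 11 ≈ 1.8.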

Cost Per Click (CPC) - CPC is the average amount the advertiser pays for a click.  This should be distinguished from the advertiser's bid or max CPC, as actual CPCs will typically come in lower than the bid due to the nature of the PPC ad auction.

Marginal/Incremental Cost Per Click - Since CPC is just an average and there are diminishing returns to additional ad spend, the advertiser may find they are spending a great deal more per click than their average CPC for their last dollars spent.  Viewing this marginal CPC in data from a tool like Google's Bid Simulator can reveal opportunities to shift spend across one's keyword portfolio.

Cost Per Mille (CPM) - CPM is the cost per thousand ad impressions.  Though not commonly associated with paid search, some advertisers may wish to compare their effective CPM for PPC to other channels where pricing is determined by impressions rather than click costs.

Impression Share (IS) - Impression Share is the ratio of the impressions your ad received to the number of possible impressions it could have received.  Budget restrictions and low ad rank will decrease your impression share, but having a high IS shouldn't be the goal in and of itself.  If your ads are of high quality, budget is not restricting their display and you are bidding what you can afford, IS speaks more to the level of competition you face than anything else.
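If you prefer to see the arithmetic in one place, here is a minimal Python sketch of the traffic metrics above.  All of the input numbers and variable names are invented for illustration; in practice the engines report impressions, clicks, cost, and eligible impressions for you.

```python
impressions = 120_000            # times the ad was shown
clicks = 3_000                   # times the ad was clicked
cost = 1_500.00                  # total spend, in dollars
eligible_impressions = 200_000   # impressions the ad could have received

ctr = clicks / impressions                              # click-through rate = 2.5%
avg_cpc = cost / clicks                                 # average cost per click = $0.50
cpm = cost / impressions * 1_000                        # effective cost per thousand = $12.50
impression_share = impressions / eligible_impressions   # IS = 60%

print(f"CTR {ctr:.2%}, CPC ${avg_cpc:.2f}, CPM ${cpm:.2f}, IS {impression_share:.0%}")
```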

Conversion Metrics:

Revenue/Sales - Used interchangeably, the terms revenue and sales refer to the value of orders placed; depending on how the program is tracked, that value may or may not include discounts and shipping & handling.

Margin - Margin, or Gross Profit, is expressed in dollar terms and is defined as (Revenue - Cost of Goods Sold).  Running a paid search program based on Margin figures can provide a more direct impact on profitability by taking into account variable margin percentages across products and product lines.

Leads - For some advertisers, the goal of an advertising campaign is to reach qualified individuals to pursue for a long-term commitment (insurance, bank account, etc.) or purchase farther down the road (B2B, high ticket items).  In these cases, rather than Orders, the key conversion measure is a Lead.  This can include an email signup, application completion or request for information among others.

Conversion Rate (CR) - CR is the ratio of the number of Orders or Leads to the number of ad Clicks.  Aggregate conversion rates in paid search depend heavily on the competitiveness of your product offering as well as your ability to determine value across your keyword portfolio.  Conversion rates do not vary significantly by ad position.

Revenue Per Click (RPC) - Or Sales Per Click (SPC), RPC is the average amount of revenue generated per click.  For advertisers looking to hit a revenue-based efficiency goal, predicting RPC accurately will determine what CPCs can be afforded and how to set bids.

Revenue Per Impression (RPI) - A useful measure for copy tests, Revenue Per Impression accounts for both the CTR and RPC differences one might see between two copy versions.

Average Order Value (AOV) - AOV is Revenue/Orders.  It can be useful for determining how promotions should be structured to incentivize shoppers to spend more than average, and as a contrast to conversion rates in analyses (are shoppers purchasing at the same rates, but spending more or less per order over time or seasonally?).
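Carrying the same invented numbers forward, here is a short sketch of the conversion metrics:

```python
impressions = 120_000
clicks = 3_000
orders = 90
revenue = 9_450.00          # tracked sales, in dollars

conversion_rate = orders / clicks    # CR = 3.0%
rpc = revenue / clicks               # revenue per click = $3.15
rpi = revenue / impressions          # revenue per impression ~ $0.08
aov = revenue / orders               # average order value = $105.00

print(f"CR {conversion_rate:.1%}, RPC ${rpc:.2f}, RPI ${rpi:.3f}, AOV ${aov:.2f}")
```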

Efficiency Metrics:

Ad Costs to Sales (A/S) - Ad Costs/Sales is a ratio used as an efficiency target for many paid search programs.  It is often used as a proxy for more direct measures of profitability, but it can be useful for those seeking to maximize top-line revenue over bottom line considerations.

Return on Ad Spend (ROAS) - Most commonly, ROAS is simply the inverse of A/S or Revenue/Ad Costs.

Return on Investment (ROI) - In the paid search world, ROI is frequently used synonymously with ROAS, but it is best tied as directly as possible with profit measures.  A typical ROI calculation is: (Gross Margin - Ad Costs - Variable Expense)/Ad Costs

Ad Costs to Margin (A/M) - The Ad Costs/Margin ratio is another common efficiency target metric that provides a more direct view of profitability than A/S.

Cost Per Lead/Order (CPL/CPO) - Leads may not ultimately pay off for months or even years, so Lead generating advertisers need an efficiency metric they can steer by today.  If the advertiser can estimate the value of a Lead they can aim for a Cost/Lead target that meets their desired profitability goals.
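And the efficiency metrics, again with illustrative numbers (the COGS and variable expense figures are made up):

```python
ad_cost = 1_500.00
revenue = 9_450.00
cogs = 5_670.00             # cost of goods sold for those orders
variable_expense = 945.00   # shipping subsidy, payment fees, etc.
orders = 90                 # or leads, for a CPL view

a_to_s = ad_cost / revenue                                     # A/S ~ 15.9%
roas = revenue / ad_cost                                       # ROAS = 6.3
gross_margin = revenue - cogs
a_to_m = ad_cost / gross_margin                                # A/M ~ 39.7%
roi = (gross_margin - ad_cost - variable_expense) / ad_cost    # ROI ~ 0.89
cpo = ad_cost / orders                                         # cost per order ~ $16.67

print(f"A/S {a_to_s:.1%}, ROAS {roas:.1f}, A/M {a_to_m:.1%}, ROI {roi:.2f}, CPO ${cpo:.2f}")
```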

Other Considerations:

Quality Score (QS) - QS is a ranking the engines assign to your ad based on their view of its quality.  It is used along with your bid to determine where your ad will rank on the SERP.  Though all details of the QS assessment are not known, it is largely a function of historical CTR among other relevancy factors.  A view of QS is available for keywords via the engine interfaces and API.

Cost of Goods Sold (COGS) - COGS is the direct cost to the advertiser for the products they are selling.  Used for determining Margin, it does not include variable costs for labor, distribution, etc.

Revenue Per Search (RPS) - RPS is the amount the engines make for each search and reflects how well they are monetizing their traffic.  A higher RPS is not necessarily beneficial to advertisers, but a relatively low RPS, as seen since the Search Alliance, suggests valuable traffic is not being fully reached.

Lifetime Value (LTV) - Lifetime Value estimates the full value of a customer by forecasting future revenues they will generate.  Incorporating LTV into efficiency calculations allows the advertiser to be more aggressive with their bidding and reach a greater audience.

Well, there you have it.  I hope this is helpful for those new to paid search or those that just need a refresher.  Did we miss any important ones or personal favorites?

Detecting Significant Changes In Your Data

For statisticians, significance is an essential but often routine concept. For those who don’t remember the details of college statistics courses, significance is a nebulous concept that lends magical credence to whatever data it describes. Sometimes you make a change in your paid search program, watch the data come in, and want to claim that numbers are improving because of your initiative.

How can you support this claim?  Can you discredit the possibility that the apparent improvement is just noise? How can you apply that authoritative label of “significant”?

Here I’d like to walk you through a basic test of significance that you can use to de-mystify changes in your paid search data.

If you’d like to skip the math, click here.

Let’s start with a situational example… say you’ve added Google Site Links to your brand ads and you want to show that brand click-through rate (CTR) has improved as a result.

  1. First, you need to know what value brand CTR is potentially improving from.  Let’s call this value mu (pronounced myoo), and you can choose it in a variety of ways: the average or median CTR over the past month, the average or median CTR from this time of year last year, etc. It should really be whatever value you believe CTR to truly center around.
  2. Next, you need data points. That is, you need several days of CTR data since the Site Links have been running. How many days is up to you. Generally, more is better, but I’ll touch on that later. The number of days you have is n. Take the average of the CTRs from those days; this is called xbar. Lastly, take the standard deviation (the Excel function STDEV) of these CTRs and call it s.
  3. Now we can compute a t-score, and with it, the probability that the change in CTR you’re seeing is or isn’t attributable to chance. Set t = |xbar - mu| / (s / sqrt(n)). Then use the TDIST function in Excel, and for the arguments, plug in t, n-1, and 1. The number that this function returns is the probability that the change in CTR is simply due to chance, aka noise. If this probability is very small, then we say CTR has changed significantly.
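If you'd rather script this than work in Excel, here's a minimal Python sketch of step 3.  It assumes scipy is installed, and the function name is my own invention rather than any standard API:

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t as t_dist

def noise_probability(observations, mu):
    """Probability that the shift away from mu is just chance:
    a one-sample, one-tailed t-test, mirroring Excel's TDIST(t, n-1, 1)."""
    n = len(observations)
    xbar = mean(observations)
    s = stdev(observations)                  # sample standard deviation, like Excel's STDEV
    t_score = abs(xbar - mu) / (s / sqrt(n))
    return t_dist.sf(t_score, n - 1)         # upper-tail probability with n-1 degrees of freedom
```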

Enough Math! Is The Change In My Data Significant?

I’ve prepared an Excel spreadsheet that handles the arithmetic. In this model, change the gray shaded cells to reflect your data. Enter the data that you think has fundamentally changed in column C. Only include data points since the change began. Then, in cell G2, enter the value from which you believe the data to have changed. That is, the average value of the data before the change.

The value p, produced in cell G7, is the probability that the change you’re seeing is only due to chance, and thus meaningless. Typically, a p-level must be below 5% to be considered significant. (If you want to be super, super sure, you can use 1% or 0.1% instead.) In other words, if your p-value is 5% or less, you can confidently say that the change in your data is real, definite, and due to something other than statistical noise. It’s a pretty safe bet that whatever initiative you took – whether it was switching landing pages, altering ad copy, or refining your bidding – was the catalyst for the improvement instead.

Allow me to fill in the spreadsheet with an example. For an imaginary online retailer, brand CTR hovers around 4.4%, so I fill in cell G2 with the value 4.4. The retailer enables Google Site Links, and CTRs for the 3 days afterward are 4.3, 5.2, and 5. So I enter those three data points into column C. And voila… the p-level comes back as 12.66%. This says that there is a 12.66% chance that the rise in CTR was due only to noise.

Not significant. Sorry, click-through-rates haven’t really increased, or at least, we can't be very confident that the observed change is anything more than random noise.

But… three days is not much data. As smart analysts, we are cautious when examining trends over only a few days, and this significance test incorporates such wisdom. As the number of data points (n) you use increases, p-levels fall. For example, if all the numbers in the above example were the same except that you used 7 days instead of 3 (so n=7), the corresponding probability drops to 2.6%. In this instance, it’s very unlikely (2.6% unlikely) that the increase in CTR was due to noise, so here you can rather confidently say, “Yes, CTR has increased, and it wasn’t due to chance. It was probably due to the site links.”
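As a quick check on those two probabilities, you can reproduce them with the same calculation (scipy assumed; the numbers are the ones from the example above):

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t as t_dist

mu = 4.4                     # the brand CTR (%) we believe the data centers around
ctr = [4.3, 5.2, 5.0]        # three days of CTR after enabling Site Links

xbar, s = mean(ctr), stdev(ctr)
t3 = abs(xbar - mu) / (s / sqrt(len(ctr)))
print(t_dist.sf(t3, len(ctr) - 1))   # ~0.127, the 12.66% above: not significant

# Same mean and standard deviation, but seven days of data instead of three:
t7 = abs(xbar - mu) / (s / sqrt(7))
print(t_dist.sf(t7, 7 - 1))          # ~0.026, the 2.6% above: significant at the 5% level
```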

GreaseMonkey: Hacking Web Apps So They Work The Way YOU Want

GreaseMonkey is a Firefox extension that lets you run arbitrary Javascript code against selected web pages.

What this means is that GreaseMonkey lets you change how other people's web pages look and how they function (but just in your own browser, of course).

I last played with GreaseMonkey (GM) about four years ago. Back then, I didn't find the idea compelling. Today, with ever more applications going online, GM is worth a serious look.

GM can increase productivity by making web apps easier to use.

Even more interesting, GM also lets you add functionality to web pages. Here's a small example. I use Delicious to bookmark sites. I use Google Reader to read blogs, and I tag interesting posts with Reader's "gold star" button. Via a few lines of GM code, Google Reader now sends my starred items to Delicious automatically. This improvement keeps all my interesting links in one place.

GM works on intranets, too.

Suppose you're an online retailer and your merchants use an intranet app to enter product information for your site. Suppose that app had some annoying UI issues, like extra confirmation screens after entering each product ("Are you sure you want to add the following?"). If your vendor or your internal IT folks can't (or won't) change the app, you could use GM to skip the unnecessary page.

Or perhaps your call center staff uses an intranet app for order entry. If they're retyping or pasting telephone data from a phone screen pop into the order entry app on each call, perhaps the phone app could write its data to a local file (maybe), which GM could then use to prepopulate fields in the order app.

A GM script could even prepopulate web app fields from database lookups (you'd need to expose the necessary data via some simple RESTful URL, behind the firewall).

Certainly hacky, certainly not 'beautiful' engineering, but GM opens up interesting possibilities.

Here are the GM pros and cons as I see them.

Pros:

  • Javascript. GM code is just Javascript code. Any developers familiar with Javascript and the DOM can write GM scripts.
  • User scripts. If you're seeking a common sense improvement to a popular site, someone probably has already written a GM script to do it. For example, here are popular scripts tweaking Google sites.
  • Cross-site scripting. Unlike the security model in common AJAX, GM code can access the entire web: "Unlike the XMLHttpRequest object, GM_xmlhttpRequest is not restricted to the current domain; it can GET or POST data from any URL" (from DiveIntoGreaseMonkey). This is very powerful.

Cons:

  • Scraping stinks. Fundamentally, GM is screen-scraping. Yuck. If your target site changes their design, your GM script probably croaks.
  • Firefox preferred. GM runs best in Firefox. Some GM scripts run on Chrome, but some do not. (Specifically, Chrome does not support the GM_ functions, including GM_xmlhttpRequest.) Here's info on the "--enablegreasemonkey" flag in Chrome, and here's info on GreaseMetal. I've not used it, but IE has GM4IE.
  • Reality warp. If you forget you have GM turned on, or if you assume GM is on when it isn't, or if you switch to a computer without GM, you can get confused when a familiar web page behaves "strangely."
  • Debugging. Sometimes it is hard to understand why a GM script isn't working. Firebug is essential.
  • Local install. The GM extension and scripts are installed locally, not in the cloud. Installing them on your laptop doesn't put them on your desktop, etc. Script updates need to be maintained on each machine.

And here are some GM links I found useful.


Any readers out there using GreaseMonkey for business purposes?


Correcting the History of Search Engine Optimization

Does the history of search engine optimization (SEO) even matter? I believe it does. In an industry that suffers from lack of structure and standardization, having a fuzzy history only turns the blinds another notch dimmer. Without a clear frame of reference for where we originated, it is more difficult to chart progress and locate the path to where we're going. SEO as an industry has a lot of growing up to do, surely. It's partly a reflection of the wild west mentality that permeated the early days of the Internet, and partly the result of a lot of geeks with keys to the castle. SEO can make companies a lot of money, and with no real law in effect (only rules as set by search engine guidelines), pretty much anything goes provided you're willing to take the risk. But this post isn't about SEO standards or the white hat vs. black hat debate. This post is about the history of search engine optimization; a history that will now need to be corrected.

The Current History

Who invented the term search engine optimization? I may have a definitive answer - or at least definitive proof that the term was being used prior to the Usenet spam posting that is now referenced as the earliest known web mention in the Wikipedia entry. Wikipedia says the following about search engine optimization:
According to industry analyst Danny Sullivan, the earliest known use of the phrase "search engine optimization" was a spam message posted on Usenet on July 26, 1997.
The Wikipedia entry references a Danny Sullivan comment where he points out a Usenet spam post using the term search engine optimization. Here's a link to the Usenet posting with the term search engine optimization highlighted. Directly below is a screenshot of the post:
[Screenshot: Usenet spam post mentioning "search engine optimization," July 1997]
Up until now this has been the earliest known record of the term search engine optimization being used online.

Did John Audette Invent the Term Search Engine Optimization?

The Wikipedia page needs to be corrected. John Audette was using the term search engine optimization at least five months prior to the earliest reference on Wikipedia. Not only that, it was being offered as a legitimate service, a huge departure from the intentions of the spam message on Usenet. Check this web page, published on February 15, 1997, which proves John Audette was using the term search engine optimization at least five months prior to the Wikipedia reference: http://web.archive.org/web/19970801004204/www.mmgco.com/campaign.html
[Screenshot: February 1997 page from MMG on the Wayback Machine]
Multimedia Marketing Group (MMG) was John Audette's online agency, founded in 1995 and the starting place for some of the pre-eminent SEOs in the industry, such as Bill Hunt, Detlev Johnson, Marshall Simmonds, Derrick Wheeler, Jeremy Sanchez, Andre Jensen and Adam Sherk. Bend, Oregon - MMG's location prior to its purchase by Tempus Group in 1999 - is still home to many of the world's best SEO shops and consultants (including AudetteMedia). It truly is the birthplace of SEO.
[Screenshot: Search engine optimization listed as a service from MMG on 2/15/97]
Danny Sullivan visited MMG's Bend offices to train the fledgling team on the fundamental tactics of search engine optimization. John wanted to reach out to the premier expert in the field (Danny) to learn all he could. That was in the Fall of 1997, months after MMG began using the term SEO on its site. Here's a services detail page that again mentions SEO and meta tags:

[Screenshot: MMG was offering SEO in early 1997]

I urge you to play around on the old MMG archive from early 1997 and review the several mentions of search engine optimization: http://web.archive.org/web/19970215062722/http://www.mmgco.com/index.html

It's time to update the Wikipedia entry on SEO!

Little Known Way To Determine How Much To Advertise

Ever ask, "How much more or how much less should I be spending on advertising?" Here's a simple model based on the square root rule to help answer that question, along with an Excel spreadsheet to help with the arithmetic. (To skip equations, click here for the applied section.) The square root "rule" says sales, S, increases with the square root of advertising, A, like so:


In symbols:

S(A) = k * sqrt(A)

Where k is a calibration constant, determined from an actual observation of sales and advertising (S0, A0).  If your effective profit margin (defined as 1 minus cost-of-goods percentage minus variable selling expense percentage) is m, we get this formula for profit P as a function of advertising spend A:

P(A) = m * k * sqrt(A) - A,  where  k = S0 / sqrt(A0)

Of course, this toy model is not correct. The only way to determine the actual relationship between sales and advertising is empirically, through testing. Nonetheless, assuming the relationship follows these equations provides some insight. Strengths of this model:

  • Decreasing returns to scale. The slope of the curve gets less steep (formally, its second derivative is negative). This means the more you advertise, the more you suffer decreasing returns to scale. Because you buy the best advertising first (or at least try your hardest to do so), the more advertising you buy, the less effective each incremental advertising chunk is at producing sales. Real life works this way, and the model captures that.
  • Self-consistency. The model's predictions are internally consistent for any A_0.
  • Theoretical basis. This equation is a specific case of the well-studied Cobb-Douglas production function from economics.
  • Simplicity. Because of the model's simple form, we can solve for the optimal advertising level which maximizes profit.

Weaknesses of this model:

  • Smoothness. The model says the relationship between sales and advertising is smooth. Not so in real life -- you buy advertising in chunks (channels, media, spots, insertions, campaigns, keywords) with different inherent degrees of "chunkiness" and differing performance. For example, you can't buy 4/5ths of a Superbowl ad.
  • Scope. The model says you can always buy more sales with more advertising, on up to infinity. Not so in real life -- at some point, you've tapped out the market of potential buyers, and additional advertising returns nada.
  • Objective. Optimizing P(A) will maximize operating profits. While most catalogers and direct marketers run their advertising so as to generate maximum profit, many other marketers do not, instead intentionally choosing to "over-advertise" to increase brand awareness or drive top line sales. All valid and good, just be aware this model and spreadsheet adopt the direct marketing perspective and aim to maximize the bottom line.

We can maximize profits, P(A), by differentiating and setting the derivative equal to zero.  This gives us the optimal level of advertising, Amax, which, coincidentally, also equals the maximum profit, Pmax.

Amax = (m * k / 2)^2 = (m * k)^2 / 4

Pmax = m * k * sqrt(Amax) - Amax = (m * k)^2 / 4 = Amax

Under this model, the optimal A/S is

A/S = Amax / S(Amax) = m / 2

To offer some intuition for this, if your effective profit margin is m cents per sales dollar, you maximize your operating profit (in dollar terms, not percentage terms) by sinking half of your effective profit margin (in percentage terms) in marketing, keeping the other half as contribution towards profit.
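Here is a minimal Python sketch of the same toy model, so you can see where those formulas come from and plug in your own numbers.  The function names are my own; this is only the simplified square root model described above, not a substitute for testing:

```python
from math import sqrt

def calibrate_k(A0, S0):
    """Fit the constant k from one observed (ad spend, sales) pair: k = S0 / sqrt(A0)."""
    return S0 / sqrt(A0)

def sales(A, k):
    """Sales as a function of ad spend under the square root rule."""
    return k * sqrt(A)

def profit(A, k, m):
    """Operating profit, with effective margin m = 1 - COGS% - variable%."""
    return m * k * sqrt(A) - A

def optimal_ad_spend(k, m):
    """The A that maximizes profit; it also equals the maximum profit itself."""
    return (m * k / 2) ** 2
```

Under these definitions, optimal_ad_spend(k, m) divided by sales at that spend works out to m / 2, which is the half-your-margin rule of thumb above.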

Again, this is a highly simplified model for advertising returns to scale. Use this only as a starting point. The only certain way to determine your optimal advertising level is through careful testing of the A vs. S elasticity curve for your business.

Enough Math! How Much Should I Advertise?

We've prepared an Excel spreadsheet that handles all this arithmetic. To use this model, change the gray shaded cells. Enter your ad cost for a recent campaign in cell C8 and the corresponding tracked sales in C9. Enter your average cost-of-goods sold, expressed as a percentage of net sales, in C10. Enter your other variable selling expenses (credit card discount, shipping subsidy, pick and pack cost, dunnage, etc), again as a percentage of net sales, in C11.

The model will give you a base P&L for your campaign. It will also estimate what would have occurred had your advertising been 30% higher or 30% lower. And it will estimate the advertising level that would have maximized your operating profit. For an example, I loaded the model with the numbers from Kevin Hillstrom's December post on the square root rule. Kevin gave a (made-up?) example about about a CFO who stormed in on the online marketing team demanding they cut their budget by 10%.

With the P&L Kevin provided ($10mil sales, $2mil advertising, 60% COGS, 13% variable), Kevin notes the CFO was likely right -- the last 10% of budget wasn't pulling in enough sales to justify it. Cutting $200k from the ad budget might bump the bottom line up by $60k. The spreadsheet goes further, suggesting that, in this made-up scenario, profits would be maximized by cutting the budget in half. (Slow down there, cowboy!) Take some care before slashing (or, alternatively, before heavily increasing) an ad budget by such a large factor based on a simple model.

The model could be wrong several ways. The assumptions of the model might be incorrect: your actual sales vs. advertising curve might not be a smooth square root curve. Your advertising team might not be able to identify the worst performing corners of the advertising budget to cut. (If that's the case, you may have other problems.) And your goal might not be maximum online sales: you could be advertising to drive your top line, to build brand, or to increase store traffic.

On the other hand, suppose the online marketing team from Kevin's example was generating the same $10mil in sales (still with 60% COGS and 13% other variable) but was achieving that with a $1.2mil budget (12% A/S), rather than the $2mil budget described (20% A/S). In that case, the model suggests the team is underadvertising to the tune of $300k, and that increasing the ad spend to $1.5mil would increase operating profits by $20k and sales by $1.25 mil.

In this case, the CFO could rant that the marketing team was being too conservative. Before making significant changes to your ad spend, use careful testing to determine your true elasticity between advertising and sales. This simple model can start that ball rolling by giving you some sense of the direction and scale to test.

Thanks to Kevin Hillstrom for posts on the "square root rule" in December and in January, to Roger Cortesi for his LaTeX to image rendering page, and to Peter Newbury and Joost Winne for LaTeX cheat sheets.

Demand Channel vs. Transaction Channel

Al Bessin over at Lenser presents a case study this week looking at customer value by channel. His data are from one specific anonymous catalog client. (An old-fashioned client at that: the firm Al describes still receives 18% of their orders by postal mail.) Al doesn't represent the specific findings of this study as universally applicable, as they aren't. But two of Al's points do apply to all multichannel retailers:

  1. "How you first met a customer" is different from "how the customer first purchased from you." Al uses the phrases "Channel of Demand Generation" (eg catalog, email, paid search, natural search, etc) vs. "Channel of Transaction" (web, phone, mail, fax). My former colleague Dave Dierolf used to use more succinct tag for this key concept: "Web as Media" vs. "Web as Medium". As Al states,
    All too frequently, we multichannel merchants refer to channels as Catalog, Web, and Storefront to describe marketing efforts. In fact, this is really a mix of marketing and transaction terms, and any generalization made with respect to this mix of channels can be somewhat misleading.
    Too true.
  2. All multichannel merchants should be calculating customer value (12 month, 24 month), repurchase rates, average order values, and channel preferences by both demand channel and transaction channel. Calculate these metrics each month. Look for changes. Look for differences. When you find differences, ask "why?". Ask "what's the marketing implication?" And look beyond simple averages to distributions (that is, plot histograms).
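Here is a minimal pandas sketch of that monthly analysis.  The file and column names are hypothetical; the point is simply to cut every metric both ways and to look at distributions rather than just averages:

```python
import pandas as pd

# Hypothetical order-level export: one row per order, tagged with both channel views.
orders = pd.read_csv("orders.csv")  # assumed columns: demand_channel, transaction_channel, order_value, customer_id

summary = (
    orders
    .groupby(["demand_channel", "transaction_channel"])
    .agg(order_count=("order_value", "size"),
         aov=("order_value", "mean"),
         customers=("customer_id", "nunique"))
)
print(summary)

# Look beyond averages: order-size histograms by transaction channel (requires matplotlib).
orders.hist(column="order_value", by="transaction_channel", bins=40)
```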

When I was marketing VP for a large consumer catalog, we noticed higher average order values on the phone than on the web. (That was a few years back and may no longer be the case there.) Why might this be, we asked.

Looking more closely at the order size distributions by transaction channel, we noticed that above a certain threshold order size, a buyer was almost certain to order by phone versus the web. (Al describes this same phenomenon with his client.) Chopping off the right tail of the web order size distribution necessarily drives up the phone AOV.

There's also a selection bias: customers with simple orders -- often lower ticket items, often single item orders -- can self-serve by buying online, and many choose to do so.

Customers with complicated orders -- multiple items, pricier and more complex items -- want the assistance of a human, and so opt to call in. Again, a higher phone AOV results. Such analysis generates ideas, which in turn lead to marketing tests. Compare the number of items per order between the web and call center. Who's better at adding accessories or additional items to an order -- the machines or the humans?

If the web lags, consider better upsell and recommendations on the site. If the call center lags, consider more training or better this-goes-well-with-that call center apps. If you're not tracking key customer metrics by both demand channel and transaction channel, make that analysis a priority for this month. It will certainly spark ideas and likely reveal possible opportunities.