KISSmetrics vs Mixpanel

While trying to decide between KISSmetrics and Mixpanel, I figured I'd write a blog post about the process, since I'm guessing other people are asking the same question. I am not in any way affiliated with either of them.

Analytics Impact is all about converting data into actionable insights. But in order to find good insights you need the right data, and you need to be able to easily slice and dice it.

Google Analytics can usually give you 90% of the “right data” for most sites, but it has a few major shortcomings that seriously limit it when you're trying to gain insight into a SaaS site:

  • It does not allow you to track data down to the individual visitor across visits
  • It doesn't have time-based cohort analysis

As I am now in charge of a SaaS site, I found myself needing answers to questions Google Analytics just couldn't answer. I know there are free add-ons and workarounds that could handle most of my needs with Google Analytics alone, but I would rather pay a reasonable monthly fee than spend hours gluing everything together, and even then I wouldn't have an easy-to-use reporting solution. I know because I've done it in the past.

What I need is a system to fully understand what visitors are doing on my website and then continue to track them when they sign up for a free account and ultimately become customers. Once they are customers I need to understand how they are using my SaaS site (what features they are or aren’t using) and why we lose customers.
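
To make this concrete, here's a minimal sketch of the tracking pattern I'm after. This is not either vendor's actual API (they each have their own libraries); it's just the general idea of recording events against an anonymous ID and then tying that history to a real user ID at signup. All the names here are made up:

```python
import uuid
from collections import defaultdict

class EventTracker:
    """Toy event store showing anonymous-to-identified visitor tracking."""

    def __init__(self):
        self.events = defaultdict(list)  # canonical id -> [(event, properties)]
        self.aliases = {}                # anonymous id -> user id

    def track(self, visitor_id, event, properties=None):
        # Resolve the anonymous cookie id to a known user id if we've linked them.
        canonical = self.aliases.get(visitor_id, visitor_id)
        self.events[canonical].append((event, properties or {}))

    def identify(self, anonymous_id, user_id):
        # At signup, merge the pre-signup history into the new account.
        self.aliases[anonymous_id] = user_id
        self.events[user_id].extend(self.events.pop(anonymous_id, []))

tracker = EventTracker()
anon = str(uuid.uuid4())                        # cookie set on first visit
tracker.track(anon, "viewed pricing page")
tracker.track(anon, "signed up", {"plan": "free"})
tracker.identify(anon, "user_42")               # tie history to the account
tracker.track("user_42", "upgraded", {"plan": "pro", "mrr": 49})
print(len(tracker.events["user_42"]))           # 3 events, one continuous history
```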

I've been using web analytics for a while (since before Urchin became Google Analytics), so I already had a shortlist for my needs:

KISSmetrics or Mixpanel

Let me start by saying that both of them are excellent choices. Neither is “better” in the absolute sense, but I need to pick one, so I started looking deeper into which would better meet my needs.

I found an excellent blog post on this exact topic by Sacha Greif: http://sachagreif.com/analytics-showdown-kissmetrics-vs-mixpanel/

It's a great read, but with one major problem: it's from March 2012. I know that's just 8 months ago, but a lot has changed since then.

Here's a request for both KISSmetrics and Mixpanel: please provide a simple “changes.txt” type page that easily shows me what's changed over time. That way, when I read an old product review (like this one will be in a year), I'll be able to easily see what's changed since. Mixpanel kinda has something like this for major changes on their about page.

Back to the comparison. I personally don’t need real-time data so I’m fine with KISSmetrics not being real time (though debugging can be a pain).

Since I really need to be able to easily look at individual user history, I was originally leaning towards KISSmetrics, as I thought Mixpanel didn't support this feature. I soon found that they do, but they only introduced it in July 2012, as a paid add-on.

I wonder why the “people feature” https://mixpanel.com/people/ isn't linked from the main site. If anything it makes the pricing page a bit confusing, since they talk about the people plan add-on but don't provide any further details.

As an ex-coder I must say the online documentation for KISSmetrics seems more comprehensive than Mixpanel's. I was also surprised that Mixpanel doesn't even link to their documentation from the main site (it's at https://mixpanel.com/docs/). KISSmetrics links to theirs from the footer at http://support.kissmetrics.com/

Next I wanted to look deeper into revenue reporting. I'm guessing that you can store revenue just like any other number in Mixpanel, though I'm a bit concerned that revenue isn't mentioned anywhere on their site or in their docs (I searched).
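
If my guess is right, revenue reporting is then just aggregation: attach a number to each purchase event and sum it per user. A toy illustration (the event and property names are mine, not Mixpanel's):

```python
# Hypothetical purchase events; in event-based analytics, revenue is just
# a numeric property you attach to an event and aggregate later.
purchases = [
    {"user": "user_42", "event": "purchase", "revenue": 49.00},
    {"user": "user_42", "event": "purchase", "revenue": 49.00},
    {"user": "user_7",  "event": "purchase", "revenue": 199.00},
]

lifetime_value = {}
for p in purchases:
    lifetime_value[p["user"]] = lifetime_value.get(p["user"], 0) + p["revenue"]

print(lifetime_value)  # {'user_42': 98.0, 'user_7': 199.0}
```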

KISSmetrics, on the other hand, talks about lifetime value on their homepage and, as I found in their docs, even has a dedicated revenue report.

At this point I was just about to go with KISSmetrics when I stumbled across Mixpanel's new Engage feature: http://blog.mixpanel.com/2012/10/19/insights-are-just-the-start/ Basically, you can now send targeted emails or notifications using Mixpanel's targeting criteria.

This is the kind of feature that was science fiction (for an analytics service) a few years ago. It’s interesting to see analytics and marketing automation services like Marketo or Eloqua really start to overlap.

I'm betting that in a few years we'll see content targeting as an additional feature, so you'll also be able to easily show dynamic content based on user behavior (though this has existed for a while in stand-alone products).

BTW, I came across https://www.klaviyo.com/ which seems very similar to Mixpanel and KISSmetrics, though they heavily promote their email integration as one of the main features (rightfully so). They are pretty new (April 2012) but I'd keep an eye on them.

I also wanted to mention http://customer.io/ which seems like a no-brainer if all you want is very smartly targeted emails.

UPDATE:

I just wanted to include some other services that look interesting and worth looking into for SaaS-based analytics:

http://totango.com looks interesting as well. It's laser-focused on SaaS sites, which I like, and it's very strong at natively identifying the type of real-world data I'd want to look at (i.e. customers at risk of leaving). It does seem a bit behind in terms of reporting (I didn't see any time-based cohort analysis). There's also no pricing info on their site, though they were very responsive when I contacted them (a good indicator that they value good customer service).

I’d love to hear your thoughts – KISSmetrics or Mixpanel and why!

Should You Test or Target?

Recently I've been hearing more and more online buzz about the benefits of delivering targeted content to your visitors. In simple terms, this means a customized message based on information you know about the visitor (as opposed to a generic message which all visitors see).

A simple example would be adding a message for international visitors that your site ships to their country. Something more complex would be a 20% discount on ink cartridges for customers that purchased a printer in the past year but have not purchased any ink in the past 90 days (and of course the message would include the name of the printer they already purchased).
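
That second rule is easy to express in code. Here's a sketch of what the targeting logic might look like; the customer data model is invented purely for illustration:

```python
from datetime import date, timedelta

def ink_offer(customer, today):
    """Targeted message for recent printer buyers who haven't bought ink lately."""
    printer = customer.get("printer_purchase")
    last_ink = customer.get("last_ink_purchase")
    owns_recent_printer = printer and (today - printer["date"]) <= timedelta(days=365)
    ink_overdue = last_ink is None or (today - last_ink) > timedelta(days=90)
    if owns_recent_printer and ink_overdue:
        return f"20% off ink cartridges for your {printer['model']}!"
    return None  # show the generic message instead

customer = {
    "printer_purchase": {"model": "LaserJet 400", "date": date(2012, 3, 1)},
    "last_ink_purchase": date(2012, 5, 15),
}
print(ink_offer(customer, today=date(2012, 11, 1)))
```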

Serving up targeted content is indeed a valuable tool which I have used for many of our clients (I work for Adobe), though I invite you to take a step back and look at the greater question:

What content on my website will bring me the best results?

Intuitively it makes sense that targeted content will resonate better with visitors, and ultimately get more sales (or leads, etc).

On the other hand, you can simply test changes on your site that affect everyone, in order to try to improve your conversion rates.

Both are valid methods for optimizing your site and in an ideal world your company would be doing both.

In reality, though, you have limited resources for improving your online marketing efforts, and you'll need to prioritize how much targeting you do versus how much user experience (common content) testing you do.

Based on my personal experience, most websites still have huge room for improvement by simply optimizing the user experience through split testing. I’ve discussed this with a few other conversion rate professionals who agree. Just look at the case studies out there and you’ll see dozens of examples of how making relatively simple changes to your website can increase conversion rates by double digits.

In other words, you should initially focus on improving the common user experience and then test and test and test and then test some more. Only then does it make the most sense to start targeting (and of course test to see what targeted message performs best).

If your site sucks, it will still suck with targeted messaging.

I will add, though, that some targeting opportunities are such low-hanging fruit that I would implement them without even testing. For example, for any traffic you send to your web site where you know what visitors clicked on to get there (search, display, email, etc.), make sure the main message on the landing page matches the message they clicked on.
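
In code, this kind of message matching can be as simple as keying the landing page headline off the campaign tag in the URL. A sketch, with made-up parameter values and copy:

```python
from urllib.parse import urlparse, parse_qs

HEADLINES = {
    "ink_discount":  "Save 20% on Ink Cartridges Today",
    "free_shipping": "Free Shipping on All Printers",
}
DEFAULT_HEADLINE = "Everything for Your Home Office"

def headline_for(url):
    # Echo back the message the visitor clicked on (e.g. ?utm_campaign=ink_discount)
    params = parse_qs(urlparse(url).query)
    campaign = params.get("utm_campaign", [None])[0]
    return HEADLINES.get(campaign, DEFAULT_HEADLINE)

print(headline_for("http://example.com/landing?utm_campaign=ink_discount"))
```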

I'd love to hear about your targeting successes and failures (and I'll even provide feedback if you want).

Thanks
Ophir

Test Fatigue – Why it Happens

First of all, super thanks for all of the great comments on my previous post about Test Fatigue. If you didn't read that post, or you don't know what I mean by Test Fatigue, then please go ahead and read it now. I'll wait.

Now, to the point – why do we often see the lift from a challenger in a split test decrease after it seems to be going strong and steady?

Statistical significance is for the winner, not the lift.
First and foremost, most split testing tools (I've only used Test&Target and Google Website Optimizer extensively) will provide a confidence level for your results. If the control has a conversion rate of 4% and the challenger a conversion rate of 6% (a 50% lift) at a 97% confidence level, the tool is NOT telling you that there is a 97% chance of a 50% lift. The confidence level refers to the confidence that the challenger will outperform the control.
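
You can see the difference with a quick back-of-the-envelope calculation using a normal approximation (the sample sizes below are invented to roughly match this scenario):

```python
import math

def lift_confidence(conv_a, n_a, conv_b, n_b):
    """Normal-approximation confidence that B beats A, plus a 95% CI on relative lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = (p_b - p_a) / se
    prob_b_wins = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(challenger > control)
    lo = (p_b - p_a) - 1.96 * se  # 95% CI on the absolute difference,
    hi = (p_b - p_a) + 1.96 * se  # converted to relative lift below
    return prob_b_wins, lo / p_a, hi / p_a

conf, lift_lo, lift_hi = lift_confidence(conv_a=34, n_a=850, conv_b=51, n_b=850)
print(f"confidence the challenger wins: {conf:.0%}")            # ~97%
print(f"95% CI on the lift: {lift_lo:+.0%} to {lift_hi:+.0%}")  # ~-2% to +102%
```

In other words, at the very moment the tool reports ~97% confidence that the challenger wins, the plausible range for the lift itself still runs from roughly zero to double the observed 50%.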

You don’t have enough data and there are many variables outside of your control.
We tend to think that in a split test, all variables are identical other than whether the visitor sees the control or the challenger. In reality there are many external variables outside of our control, some of which we aren't even aware of. Even with everything held as equal as we can make it, we often see fluctuations in conversion rates when we haven't changed anything on our site. Meta Brown provided some excellent points in her comments on my previous post.
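
It's easy to convince yourself of this with a simulation: serve the exact same page to equal-sized groups of visitors week after week and watch the measured conversion rate wobble. The true rate and traffic volume here are arbitrary:

```python
import random

random.seed(1)
TRUE_RATE = 0.04          # the "real" conversion rate; the page never changes
VISITORS_PER_WEEK = 2000

for week in range(1, 9):
    conversions = sum(random.random() < TRUE_RATE for _ in range(VISITORS_PER_WEEK))
    print(f"week {week}: {conversions / VISITORS_PER_WEEK:.2%}")
```

With 2,000 visitors a week and a true rate of 4%, weekly readings anywhere from roughly 3.2% to 4.8% are completely normal, even though nothing changed.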

Results aren’t always reproducible. Learn to live with it.
Lisa Seaman pointed out an excellent article from The New Yorker about this very same phenomenon in other sciences. It's a must-read for anyone doing any type of testing in any field. Read it. Now: The Truth Wears Off

What was especially eye-opening for me was this part of the article (on page 5). Here is a shortened version of it:

In the late nineteen-nineties, John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of.

The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.

The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise.

So there you have it. While I know you really want a silver bullet that will make your positive results always stay the same, reality isn’t so simple.

They say that conversion optimization is part art and part science, but I think we have to accept that it’s also part noise :)

Ophir

Test Fatigue – Conversion Optimization’s Dirty Little Secret

I'm going to expose a phenomenon that's fairly common in split testing, but that no one seems to be talking about (other than veteran split testers), and that I don't think has ever been blogged about (please add a comment if I'm wrong).

It has to do with the question:
“Will the lift I see during my split test continue over time?”

Let’s start by looking at a scenario commonly used by practically everyone in the business of split testing.

Your web site is currently generating $400k a month in sales, and that number has been steady for the past few months. You hire a conversion optimization company, which runs a split test on your checkout page.

After running the test for 3-4 weeks, the challenger version provides a 10% lift in conversion and RPV at a 99% statistical confidence level. The conversion optimization company turns off the test and you hard-code the winning challenger.

First of all – Wooohoo!!! (Seriously, that’s an excellent win.)

A 10% lift on $400k a month is an extra $40k a month. Annualized, that amounts to an extra $480k a year. So your potential increased yearly revenue from using the winning checkout page is almost half a million dollars. Sounds pretty good to me.

Here’s the problem.

All things being equal, even though you're now using the winning version of the checkout page instead of the old one, there is a good chance you won't be making an extra $480k over the next 12 months.

Don't get me wrong. You will indeed be making more money with the winning checkout page than with the old one, but in all likelihood it will be less than what you'd get by simply annualizing the lift measured during the test.

The culprit is what I like to call “Test Fatigue” (a term I think I just coined).

Here's what often happens if, instead of stopping your split test after 3-4 weeks, you let it run for an entire year. There is a phenomenon that I've often, but not always, seen with very long-running split tests: after a while (this might be 3 weeks or 3 months), the performance of the winning version and the control (original) version start to converge.

They usually won't totally converge, but that 10% lift which was going strong for a while at full statistical confidence is now a 9% lift, or an 8% lift, or a 5% lift, or maybe even less.
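
To put numbers on what that erosion does to the annualized projection from the scenario above (the decay curve here is purely illustrative, not a model of anything):

```python
MONTHLY_BASE = 400_000    # monthly sales with the old page
INITIAL_LIFT = 0.10       # lift measured during the test

# Purely illustrative fatigue curve: the lift slowly erodes toward 5%.
monthly_lift = [0.10, 0.10, 0.09, 0.09, 0.08, 0.08,
                0.07, 0.06, 0.06, 0.05, 0.05, 0.05]

naive = MONTHLY_BASE * INITIAL_LIFT * 12
actual = sum(MONTHLY_BASE * lift for lift in monthly_lift)
print(f"naive annualized gain: ${naive:,.0f}")   # $480,000
print(f"gain with fading lift: ${actual:,.0f}")  # $352,000
```

Still a big win, just noticeably smaller than the headline $480k.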

As I mentioned before, this doesn't always happen and the time frame varies, but it is a very real phenomenon.

Why does this happen?

Please read my next post, Why Test Fatigue Happens, where I provide some explanations for why this happens.

Also, I’d love to hear if you have also seen this phenomenon with your own tests and what your personal theories are as to why it happens.

Thanks
Ophir

How can I help you with conversion optimization?

I just realized it's been almost six months since I last posted on this blog. While I have plenty of ideas for posts, I figured it might be best to ask you, my readers (all three of you), how I can help you. Specifically, there are two major ideas I've had in my head for a while, and I'm debating which one to write about next.

The first idea is a technical overview of how the web works, going into detail on web analytics and split testing. Everything a non-techie needs to know in order to gain a better understanding of what the data really means from a technical perspective, as well as how technical decisions impact business decisions.

The second idea is making conversion rate optimization more of a science and less of an art. I've read just about every book out there that deals with site and page optimization. I've also conducted countless split tests and have analyzed more sites than I can remember. What I've found is that there's a major gap in the process, where deciding what to do next and how to do it becomes more art than science.

Plenty of smart marketers can look at a web page and know intuitively that it won't convert well. Often it's even easy to identify specific elements which are “broken” and need to be fixed, but more often than not (at least for me), it's not so simple to explain the internal thought process of converting an OK page into a great one. This is something I'd like to address.

So, my loyal readers, please let me know what I should write about. Even if it's something other than the two topics I'm thinking about, let me know.

Thanks,
Ophir

Conversion Tips from a Crying Toddler

I can’t help but think about conversion rate optimization in my day to day life outside of work.

A friend of mine who studied film (making movies) in college once told me that he can no longer watch a movie without dissecting it in his head.

Every now and then I have a similar experience (usually as part of an “a-ha” moment) where I realize something about human behavior, specifically how to get someone to do something.

I have quite a few of these in my head which I’ll start sharing on my blog.

Today it’s a lesson I learned from a crying toddler.


David is a 3-year-old boy. He has a cat called Muffins. David and Muffins get along very well, but if Muffins bothers him at night while he's sleeping, it scares him and he wakes up.

Last night when David started to cry, I saw Muffins run out of his room and immediately understood what happened. I went into his room and tried to calm him down.

David: Crying
Me: It’s OK David, Muffins won’t hurt you
David: Crying
Me: I’ll close the door so Muffins won’t come back. Daddy’s here.
David: Crying
Me: Did Muffins scare you?
David: Yes (and stops crying)

David calmed down at that point and quickly fell back to sleep.

As you can see, I was only able to get through to David once I acknowledged his feelings. This is Parenting 101, but the point here is:

Acknowledging someone’s feelings is a very powerful way to get through to them.

Some examples off of the top of my head:

  • Do you suffer from high blood pressure?
  • Are you frustrated by your child’s behavior?
  • Worried about your debt?

Personally, I'm not crazy about using “infomercial” type headlines, but the reason they are so widely used is that they usually work. Of course, like any good marketer, you are testing your copy, so ultimately you know what works best for your audience.

The Future of Split Testing and Conversion Rate Optimization

I've been fortunate enough to see and experience first-hand the evolution of the Internet, from before the web until today.

I’ll spare you a lengthy history lesson explaining how we’ve gone from brochureware sites to where we are today, but I do want to share some thoughts and perspective on where I think things are going.

When marketers started to understand the potential of dynamic web sites, there were two terms everyone was throwing around:

Personalization & Customization.

Fast forward to today (2011). The user experience is still exactly the same for all visitors (other than on a handful of sites such as Amazon.com).

For the most part, web site Personalization has failed. Sure, it sounds good in theory, but tailoring the web site experience at the individual level is extremely difficult. Part of that is technological, but the bigger challenge is trying to create an optimal user experience based on data from a single individual.

There is no doubt in my mind that in the future (and to some extent today) the user experience of visiting a web site will be created dynamically based on what gets the best results, driven by “anonymous” information common to large groups of visitors rather than by data about any single person.

This reminds me of the concept of Psychohistory from the science fiction series “Foundation” by Isaac Asimov.
Wikipedia explains it better than I can:

The premise of the series is that mathematician Hari Seldon spent his life developing a branch of mathematics known as psychohistory, a concept of mathematical sociology (analogous to mathematical physics). Using the law of mass action, it can predict the future, but only on a large scale; it is error-prone on a small scale. It works on the principle that the behaviour of a mass of people is predictable if the quantity of this mass is very large. The larger the number, the more predictable is the future.

I also like to think of this in terms of what usually happens at (successful) brick and mortar stores.

When you walk into a store, the salesperson probably doesn't know you personally, but will probably try to help you based on certain public traits such as gender, age, and whether you're by yourself or with someone else.

Which brings me back to what actually prompted me to write this article in the first place :)

While I've been split testing since 2005 in order to improve conversion rates, the majority of the time it's still about what works best for the site as a whole, as opposed to split testing together with segmentation (which is what we really want).

Until recently there haven't been many options out there to achieve this level of targeting and testing (at least not priced for small to mid-sized businesses), but over the past few months I've been starting to see more and more startups trying to bring this level of sophistication to the masses.

While I haven't had a chance to use any of these services first-hand, there is no doubt in my mind that businesses that truly embrace this level of targeting and split testing will eventually lead the pack and leave most one-size-fits-all web sites in the dust.

Split Testing and Return Visitors

Just a quick post about a phenomenon I’ve personally seen happen but don’t recall ever seeing mentioned in split testing articles.

I’ll start by saying that ideally you should always look at the results from any split test by segmenting your visitors.

It’s not enough to know that overall version X did better than version Y. Ideally you should check how the different versions performed for various visitor segments. For example, users from organic search might behave differently than visitors from a referring site or direct traffic.

There is one segment, though, where the mere fact that you're doing a split test can have an impact on the results:

New vs. Return visitors

Even if you weren’t doing a split test, you would probably see a difference between the two segments based purely on the fact that return visitors already know something about your product, service or site.

I’m talking about a different phenomenon though. The “something has changed” effect.

For new visitors, your site will be new regardless of which version of a test page they see.

For return visitors who have some level of familiarity with your site, if they see something new or changed on the site, they’ll probably pay more attention to it – merely because it’s different.

For example, if your homepage does not currently have any video on it and you test a new version with video, return visitors who get the version with the video might watch it simply because it's something new.

Conclusion: Always segment visitors by new and return visitors.

If both groups show the same preference, it's safe to say that you have a winner. If you're seeing a large variance between new and return visitors, it might be worth letting the test run for a while to see if the variance changes over time, as a growing share of your return visitors will have first visited the site after the split test started.

[UPDATE]
I was just thinking that this is a feature that split testing tools can / should support: segmenting not only new vs. return visitors, but also return visitors whose first visit was before the split test started vs. return visitors whose first visit was after it started.

Heck – you should be able to include only visitors whose first visit was after the test started, if you want to. Are you listening, Optimizely, Visual Website Optimizer, and the rest of the gang?
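
For what it's worth, the bookkeeping for this is trivial. Here's a sketch of how a testing tool could bucket visitors, assuming it stores a first-visit timestamp per visitor (the field names are invented):

```python
from datetime import datetime

TEST_START = datetime(2011, 6, 1)  # when the split test launched

def segment(first_visit, visit_count):
    """Bucket a visitor the way I'd want a testing tool to report them."""
    if visit_count == 1:
        return "new visitor"
    if first_visit < TEST_START:
        return "return visitor, first visited before the test"
    return "return visitor, first visited after the test started"

print(segment(datetime(2011, 5, 1), visit_count=4))
print(segment(datetime(2011, 6, 20), visit_count=2))
```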

AdWords Search Funnel Update

I just noticed a tweet from analyticspros:

Jeff gillis from @googleanalytics announces new updates to adwords search funnels: up to 90 days back, actual query, unique paths #emetrics

This is GREAT news!

Previously, search funnels only showed data going back 30 days, which is adequate for many sites, but if your conversion event often happens after 30 days (as is often the case with large purchases and B2B), you weren't getting the full picture.

I have not seen this mentioned anywhere else, so it's probably hot off the “press” at eMetrics.

Update [Oct 6]: I just noticed that the official AdWords blog has this update

– Ophir

Selling Web Analytics

While I was at the Google Analytics Certified Partner summit this year, someone came up to me and asked:

How do you sell web analytics?

This is a question I deal with quite often, both internally within the agency I work for and when trying to convey the value of web analytics to clients.

The short answer is that you shouldn’t be trying to sell web analytics.

Web analytics is a tool, a means to an end.

It has no inherent value by itself. It's only through analyzing the data and providing actionable insights that it creates value.

You should be selling the value that can be gained by a proper web analytics implementation and actionable analysis.

Marketing people talk about selling the benefits, not the features.

Web analytics is a feature.
Improving the bottom line is a benefit.

A simple analogy is HTML (the code used to create web pages).
Imagine if you tried to pitch HTML services to a business. They would probably be scratching their heads as to why they need your services.

Now imagine pitching web site creation services, providing real-world examples of how businesses have improved their bottom line with their web sites.
Now they’re listening.

When selling services that include web analytics, try starting with the end result and then working your way backwards. If you start with the prize, people will usually pay more attention.

Here’s an example:

  1. I can help you make an additional $80,000 a year
  2. It will cost you a one-time investment of $25,000 and $1,000 a month
  3. You’re currently doing 3,000 sales a year
  4. I will get you 400 additional sales a year (average sale value is $200)
  5. The additional sales will come from improving your overall conversion rate by 13.3% (the arithmetic is sanity-checked after this list)
  6. Improving the overall conversion rate will come from decreasing bounce rates, increasing “add to cart” rates, and decreasing cart abandonment.
  7. The above improvements will come from making changes on the web site
  8. We’ll test a few options until we find something that works (split testing)
  9. We’ll know what changes to test based on data that tells us what people are doing on your web site
  10. In order to get the data that we need, we have to install web analytics on the site and then analyze the data
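
As a quick sanity check, the numbers in steps 1-5 do tie out:

```python
# Quick check that the pitch's numbers hang together
current_sales = 3000          # sales per year today
extra_sales = 400             # promised additional sales
avg_sale_value = 200          # dollars

print(f"additional revenue: ${extra_sales * avg_sale_value:,} / year")  # $80,000
print(f"conversion lift required: {extra_sales / current_sales:.1%}")   # 13.3%
```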

The problem is that many web analytics businesses start their pitch with step 10 and then work their way to step 1.

When people ask me what I do for a living, the answers have changed over the years, but now I usually answer along the lines of:

I help web sites do better at whatever they're trying to do.
Specifically, I learn the business objectives and then do some detective work: I find issues that can be improved, and then I improve them.

As always, comments and suggestions are appreciated.

Ophir