You are likely to see differences between the statistics presented by Google Analytics and Klevu Analytics. On this page, we explain how Klevu processes the submitted search terms and presents the counts on its dashboard.
Klevu is a search-as-you-type technology where search results are presented on almost every keystroke. One advantage of this approach is that shoppers do not have to press enter for search results to appear, which reduces the time they spend searching for products. Usually, the "Search As You Type" feature is accompanied by "Wildcard Search": as shoppers type in the search box, the system guesses, behind the scenes, the possible terms they might be searching for and shows results for them.
Imagine a shopper has just typed "iph" and the system is already showing results for terms such as "iphone", "iphone cover" and "iphone charger", and in fact for any product starting with the letters "iph".
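As a rough illustration only, the sketch below matches a partially typed query against a small product list by prefix. The product names and the prefix_matches function are hypothetical and are not part of Klevu's actual engine, which uses far more sophisticated matching.

```python
# A minimal, hypothetical sketch of "search as you type" with wildcard-style
# prefix matching; a real search engine uses far more sophisticated indexing.
PRODUCTS = ["iphone", "iphone cover", "iphone charger", "ipad stand"]

def prefix_matches(partial_query: str, products: list[str]) -> list[str]:
    """Return the products whose name starts with the partially typed query."""
    partial = partial_query.lower().strip()
    return [name for name in products if name.startswith(partial)]

print(prefix_matches("iph", PRODUCTS))
# ['iphone', 'iphone cover', 'iphone charger']
```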
Normalising Search Terms
When these search terms are submitted to our analytics backend, we normalise them. Below, we highlight some of the normalisation steps taken by our backend.
Normalising Long Tail Searches
liu
liuk
liuku
liukue
liukues
liukuest
liukueste
liukueste k
liukueste ke
liukueste ken
liukueste kenk
liukueste kenkä
liukueste kenkää
liukueste kenkään
As you can see above, the system may have submitted 14 terms to the backend, but our normalisation process counts them as just 1 term (i.e. the last one, "liukueste kenkään").
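Conceptually, this step can be thought of as collapsing each run of keystroke prefixes into its final term. The sketch below is a simplified, hypothetical model of that idea; the collapse_prefix_runs function and its exact rules are illustrative only, not Klevu's actual implementation.

```python
def collapse_prefix_runs(terms: list[str]) -> list[str]:
    """Keep only the last term of each run in which every term is a prefix of the next."""
    collapsed: list[str] = []
    for term in terms:
        if collapsed and term.startswith(collapsed[-1]):
            collapsed[-1] = term    # still typing the same query: keep only the longest form
        else:
            collapsed.append(term)  # a new query has started
    return collapsed

keystrokes = ["liu", "liuk", "liuku", "liukueste kenkä", "liukueste kenkään"]
print(collapse_prefix_runs(keystrokes))  # ['liukueste kenkään']
```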
Discarding Deletions
liukueste kenkään
liukueste kenkää
liukueste kenkä
liukueste k
liuk
tas
...
Here, after searching for the term "liukueste kenkään", the shopper appears to search for something starting with the letters "tas". Before doing so, however, they deleted characters from the end of the previously typed term. In this case, our system would have received 4 additional terms (i.e. "liukueste kenkää", "liukueste kenkä", "liukueste k" and "liuk"). Our normalisation process discards these four terms from the count.
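A simplified way to picture this, purely for illustration, is to drop any term that is a prefix of the term immediately before it, since that pattern indicates the shopper was deleting characters. The discard_deletions function below is a hypothetical sketch, not Klevu's actual logic.

```python
def discard_deletions(terms: list[str]) -> list[str]:
    """Drop terms produced only by deleting characters from the previous term."""
    kept: list[str] = []
    for term in terms:
        if kept and kept[-1].startswith(term):
            continue           # a shorter prefix of the previous term: the shopper was backspacing
        kept.append(term)
    return kept

submitted = ["liukueste kenkään", "liukueste kenkää", "liukueste kenkä",
             "liukueste k", "liuk", "tas"]
print(discard_deletions(submitted))  # ['liukueste kenkään', 'tas']
```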
Discarding Spell Errors
tailjek
tailjer
Here, a shopper entered "k" instead of the letter "r" at the end of the word "tailje". As soon as they realised the mistake, they deleted the letter "k" and entered the correct letter "r". In this case, even though two terms were submitted to our backend, we count them as a single term.
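One simple way to model this, again only as an illustration, is to treat two consecutive terms that differ only in their final character as a single search. Klevu's real rules for spotting corrections are more involved; the count_with_trailing_corrections function below is hypothetical.

```python
def count_with_trailing_corrections(terms: list[str]) -> int:
    """Count searches, treating a pair that differs only in the last character as one term."""
    count = 0
    previous = None
    for term in terms:
        is_correction = (
            previous is not None
            and len(term) == len(previous)
            and term[:-1] == previous[:-1]
            and term[-1] != previous[-1]
        )
        if not is_correction:
            count += 1
        previous = term
    return count

print(count_with_trailing_corrections(["tailjek", "tailjer"]))  # 1
```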
Dealing with Duplicated Searches
tailjek
tailjek
If a customer searches for the same term again and again, with no other term in between, our normalisation algorithm counts them as a single term.
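For illustration, consecutive repeats of the same term can be collapsed with a simple grouping step, as in the hypothetical sketch below. Note that the same term searched again later, with other terms in between, would still be counted.

```python
from itertools import groupby

def collapse_consecutive_duplicates(terms: list[str]) -> list[str]:
    """Count a term once when it is repeated with no other term in between."""
    return [term for term, _ in groupby(terms)]

print(collapse_consecutive_duplicates(["tailjek", "tailjek", "tailjek"]))  # ['tailjek']
print(collapse_consecutive_duplicates(["bag", "shoe", "bag"]))             # ['bag', 'shoe', 'bag']
```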
Normalisation based on the Store's Timezone
We normalise data on a daily basis, according to the store's timezone. The timestamps received on our servers are converted to the timezone of the store.
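As an illustration of this bucketing, the sketch below converts a UTC timestamp into a calendar day in the store's timezone. The Europe/Helsinki timezone is just an example; a real dashboard would use the timezone configured for your store.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

STORE_TZ = ZoneInfo("Europe/Helsinki")  # example only; use your store's configured timezone

def search_day_in_store_tz(utc_timestamp: datetime) -> str:
    """Bucket a UTC timestamp into a calendar day in the store's timezone."""
    return utc_timestamp.astimezone(STORE_TZ).date().isoformat()

# 23:30 UTC on 1 March is already 2 March in Helsinki (UTC+2).
print(search_day_in_store_tz(datetime(2024, 3, 1, 23, 30, tzinfo=timezone.utc)))  # 2024-03-02
```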
Impact of the Normalisation Process on Search Results
At Klevu, we feel that every single keyword entered by a customer is important. However, mistakes made by customers should not affect or bias the self-learning algorithm. The normalisation steps mentioned above help us keep our data clean and therefore achieve the best results through our self-learning algorithm.
Klevu vs Google Analytics
A term is submitted to Google Analytics when a pause is observed in the user's typing and/or when the customer has pressed enter to visit the landing page.
When the data is submitted to Google Analytics, we cannot guarantee that the same normalisation process described above takes place at their end. This results in differences between what Google Analytics shows and what you see in Klevu Analytics.