Embrace The Mobile Mind Shift

Originally posted via Forrester

Interesting quick vid clip:

The mobile mind shift is the expectation that I can get what I want in my immediate context and moments of need. Your customers and employees are making this shift, now. This shift means the battle for your customer’s attention will be waged in mobile moments — anytime that customer pulls out a mobile device.


Mobile Respondents: Always Better?

I ran a TV commercial awareness study. 100 completes, mobile only. We launched the survey 10 minutes after the ad aired on a national morning talk show, with a rolling time-zone launch. Distinctive ad. Depressing imagery. Anyhoo, all 100 respondents (after indicating they viewed it) rated the ad as good, bad, indifferent, etc. I insisted we add a final question, an open end: ‘what was the ad about?’

Out of all 100 responses claiming to have viewed the ad a few minutes earlier, we achieved inverse perfection: not a single person could identify anything about the ad: the company, the imagery (a commuter-packed subway), the message, etc.

What’s the lesson? Don’t always assume mobile is better. And…add that last question.

What Is a Beacon, Anyway?

Originally published on Phunware

Beacons play an important role in mobile location detection.

When you think of a beacon, what comes to mind? A fire on a remote mountain? A blinking light at the top of a building? A lighthouse? Just like these traditional beacons, wireless beacons are designed to communicate an important message to everyone within a certain proximity.

Beacons are small Bluetooth-enabled devices that transmit a continuous signal. The signal is “heard” by devices within a certain distance (from a few inches up to about a hundred feet) of the beacon. If a device has the relevant app installed, then when it recognizes the beacon, the app communicates with its server to “ask” whether there is any relevant information to share.
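In rough pseudocode terms, that detect-then-ask flow looks something like the sketch below. The beacon identifiers, content keys, and offer text are all hypothetical; real iBeacon-style deployments identify each beacon by a proximity UUID plus major/minor numbers:

```python
# Hypothetical sketch of the beacon → app → server flow described above.
# The UUID, major/minor values, and messages are invented for illustration.
REGISTERED_BEACONS = {
    # (proximity UUID, major, minor) → server-side content key
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 101): "welcome_offer",
    ("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 102): "shoe_dept_coupon",
}

def on_beacon_detected(uuid, major, minor, server_content):
    """App-side handler: when a known beacon is heard, ask the
    server whether it has anything relevant to push to the user."""
    key = REGISTERED_BEACONS.get((uuid, major, minor))
    if key is None:
        return None  # unknown beacon; ignore it
    return server_content.get(key)  # the server's "answer"

# Pretend server-side content store
server = {"welcome_offer": "Welcome! 10% off today.",
          "shoe_dept_coupon": "BOGO socks in the shoe department."}
msg = on_beacon_detected("f7826da6-4fa2-4e98-8024-bc5b71e0893e", 1, 102, server)
```

The key point is that the beacon itself carries no content; it is a dumb, repeating identifier, and everything interesting happens in the app and on the server.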

Many companies today are using this location awareness as an opportunity to send contextually relevant push notifications to users’ devices—things like special offers, reminder and welcome messages, and other alerts.

In other words, beacons help people connect in context.

What Role Do Beacons Play in the Network?

Beacons are invaluable tools for mobile interaction, but they are only one element in the location technology universe. It’s important to understand the role of the different location technologies, and where beacons fit in.

GPS is accurate, but because it depends on satellite communication, it doesn’t work well inside a building or anywhere without a clear line of sight to the sky. Additionally, because GPS location detection is persistent, it can increase the load on network traffic and drain the battery of a mobile device.

Wi-Fi networks can be very precise at location detection, especially with multiple routers and add-on software to triangulate their signals—essentially turning the Wi-Fi network into a mini-GPS. Unfortunately, Wi-Fi users are required to consent to network connection, which adds an additional step to the interaction and may reduce the frequency of engagement.

Beacons provide the greatest accuracy, and with the introduction of Bluetooth Low Energy (BLE) technology, beacons can be deployed almost anywhere. Whereas once the battery consumption of Bluetooth technology limited its applications, the advent of BLE has opened up a new world of location-enabled possibilities.

Each BLE beacon has a very small footprint and is designed to operate for years on standard coin cell batteries. BLE also supports an enhanced range of up to 200 feet. Beacons are not only easy to deploy; because they interact only with in-range mobile devices, they also limit costly network traffic and reduce the drain on mobile battery life.
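For the curious, here is roughly how an app turns a beacon’s signal strength into a distance estimate. This log-distance model and its default numbers are standard textbook approximations, not figures from the article or any specific vendor:

```python
def estimate_distance_m(rssi, measured_power=-59, path_loss_n=2.0):
    """Rough distance (meters) to a beacon, from received signal strength.

    measured_power: calibrated RSSI at 1 m (iBeacon-style beacons broadcast
    this value in their advertisement). path_loss_n: ~2.0 in free space,
    higher indoors due to walls and bodies. Defaults are illustrative."""
    return 10 ** ((measured_power - rssi) / (10 * path_loss_n))

# At the calibrated power the device is ~1 m away; 20 dB weaker ≈ 10 m
near = estimate_distance_m(-59)   # 1.0 m
far = estimate_distance_m(-79)    # 10.0 m
```

In practice RSSI is noisy indoors, so apps usually smooth several readings and report coarse buckets (immediate / near / far) rather than exact distances.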

Beacons Help Drive User Engagement

Beacons enable businesses to send customers or visitors push notifications in real time, a practice that has been shown to deliver a 45 percent interaction rate (five times higher than that of traditional push messages).1

  • Retail locations use beacons to provide their most loyal customers (those with the store’s app on their smartphones) with customized offers while they are in the store. Beacons placed in a certain department or on a certain display can be detected by in-range mobile devices, and thereby trigger a message containing a coupon, special offer or additional product information.
  • Schools can use beacons to keep track of district-owned mobile devices. If a student moves a device beyond a specific threshold, like a classroom door, an alert can be triggered.
  • Similarly, hospitals can use beacons for asset tracking, thereby reducing time spent hunting down pieces of equipment (not to mention theft and loss).
  • Anyone in a large building or campus environment—hospitals, universities, municipalities and businesses—can use beacons to help people find points of interest or to trigger “you are here” notifications in a wayfinding application.

Beacons Deliver Valuable Interactions

Businesses are using beacons today to deliver engaging, interactive experiences. World Wrestling Entertainment (WWE) is recognized for its intense fan interaction, so expectations for its mobile application were high. During WrestleMania 30, the organization’s flagship event, WWE implemented a highly interactive application using iBeacons. Beacon technology helped WWE deliver 44 campaigns resulting in 30 percent engagement, blowing traditional marketing campaign engagement out of the water.

Retail giant Macy’s recently announced plans to implement beacons in all of its stores after a successful flagship test run. The retail implementation will be the largest to date, involving more than 4,000 beacons and creating new expectations for the in-store experience.

The benefits of beacon technology aren’t reserved for large organizations, however. Several companies have offerings that help small businesses and even individuals use beacon technology in daily life, and they’re barely scratching the surface of possible beacon use cases.


Geofencing – Anyone using it?

Given how many conversations I have, and questions I receive, regarding geofencing, I’m curious how often it is actually used in practice. Specifically, I’m referring to the oft-discussed scenario of getting ‘pinged’ as you enter (or exit) a restaurant, cinema, shopping mall, etc. A survey is then launched, with location-specific questions to follow.

A few months ago, a consultant whose client is a small food company relayed a question from the client’s MR team lead: can we ‘ping’ a shopper as they round a corner into a store aisle? I’ve thought often about that question. Not whether it can be done (maybe, with beacons), but rather how unrealistic this scenario is. We’re talking about someone moving through a store, who’s there to shop, not take a survey. As they walk past an end cap, or down the potato chip aisle, they get some type of phone alert. Assuming someone notices the alert (they won’t), are they going to stop in their tracks and take a 2, 3, or 7 minute survey (they won’t)?

Another curve ball: can geofencing tell the difference between the first and second level of a mall? Could it think you’re in the store directly above or below where you’re standing? My hunch is that it can’t tell the difference, as geofencing works on X and Y axes (not Z). Perhaps if you’re accessing in-store wifi it might know the difference, maybe. Nevertheless, are these the questions we should be considering regarding its viability?
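To make the X/Y point concrete, here’s a minimal sketch of a standard circular geofence check (the coordinates and radius below are made up). Notice there is no altitude term anywhere in the math, which is exactly why floor one and floor two of a mall look identical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    """True if (lat, lon) is within radius_m of the fence center.
    There is no Z axis: a shopper one floor up at the same lat/lon
    is indistinguishable from one at ground level."""
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# Hypothetical 100 m fence around a mall entrance (invented coordinates)
at_entrance = inside_geofence(44.9778, -93.2650, 44.9778, -93.2650, 100)  # True
across_town = inside_geofence(45.0000, -93.2650, 44.9778, -93.2650, 100)  # False
```

A device one floor directly above the entrance would produce the same True result as one standing in the doorway, which is the crux of the multi-level-mall question.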

More recently, I was discussing a retailer exit interview scenario with several fieldwork suppliers. In fact, I secured bids from several, and whilst leaning towards one offer, I decided to test their claim that yes, they can handle geofencing. By coincidence, that very eve I was attending a mobile tech Meetup with a speaker who works with beacons (the newish Bluetooth Low Energy (BLE) technology…stay tuned for a separate post on this). Anyhoo, I shared the location of the event (a big restaurant with a banquet room near downtown Mpls) with the potential fieldwork supplier. They in turn geofenced the location, tested it, and told me it was good to go. I attended the event, checked my phone often…crickets. I’ll paraphrase the next day’s debrief into this: ‘turns out we don’t actually have geofencing, I thought we did.’

The silver lining in this scenario is that it prompted me to circle back and give several suppliers the third degree on how they define geofencing. The gist of those conversations was: most don’t have real geofencing beyond an alpha dev stage. No harm came my way, or more importantly my client’s way, in the form of a DOA project; however, I think this story goes beyond a simple ‘buyer beware’ cautionary tale. The number of mobile fieldwork suppliers with fully functional geofencing, as I write this in early 2014, is amazingly low.

Which isn’t the point. The point to me now is: apparently geofencing isn’t in much demand…OR…these types of projects are monopolized by the few suppliers who have invested in the technology. The latter is self-evident, and I’m leaning towards the former as well. How about you?

Raise your hand if the truth starts at .05

Originally posted on GreenBook

My first day of graduate school began with the instructors telling me and my fellow first-year classmates, “there are two acceptable reasons for being late with an assignment: hospitalization and incarceration.” Welcome to grad school, kid. We had three core instructors for my I/O Psych track, and all were newly minted PhDs under the age of 30. If you’ve ever had a new PhD for an instructor, you know they are the toughest. They just went through heck, and now you are too. They told us they were going to cram as much PhD material into us as they could for the two years they had us in captivity. Good times.

Under these conditions, one tends to retain a few things, some of which I’ve been reminded of from time to time relative to the market research space and its residents. I’m going to throw a few out here and see what happens.

The what is easy. The why is not.

I recall two years of 700-level statistics coursework, always at 8am. Stats are always taught at 8am. I recall a quote from my textbook, “if I only had one day left to live, I would spend it in a statistics class, because the day would seem so much longer.” Working in the MR space, I’ve met many clients and colleagues, as we all have. I notice how many people new to the industry are taught how to do things, but not the why behind them. For example, we rotate concepts because it ‘reduces bias’ (actually it’s due to phenomena called primacy and recency effects). Or, we ask these particular questions for all concept tests regardless of category because that’s how we do things here at (insert Honomichl name). Or, we’re not shrinking our 25-minute survey because we know people enjoy shaping the products of the future. Or, we can run t-tests and ANOVAs on any data set, regardless of how the sample was recruited and drawn, or considerations for compounding error…

So about this .05…

Let’s consider bread-and-butter significance testing: crosstabs. How often are insights created via PowerPoint by looking for the asterisks and mini-font letters indicating a significant difference? Anyone want to bet the word ‘significant’ is never misunderstood? More to the point: why .05? Ever wonder what is so magical about that particular threshold? Based on what I was taught, .05 is an arbitrarily agreed-upon compromise that balances the chances of making a Type 1 and a Type 2 error.

Lest we forget, a Type 1 error is rejecting the null hypothesis when it is in fact true (i.e., believing you have a difference in samples when there isn’t one), and a Type 2 error is the opposite (i.e., there is a difference in samples but your test isn’t detecting it). Ergo, there is nothing special about .05. Could be .04 or .06 or .08, etc. Sometimes you’ll see .01, a more stringent threshold, but the point I’m trying to make is this: please don’t assume ‘the truth’ magically kicks in at .05. It doesn’t. Yes, it helps to have a threshold; however, the specific boundary holds no inherent path to insights.
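To see how arbitrary the boundary is, here’s a quick stdlib sketch of a two-proportion z-test on invented counts: the very same data come out ‘significant’ at .05 and ‘not significant’ at .01.

```python
import math

def two_prop_p_value(x1, n1, x2, n2):
    """Two-tailed p-value for a pooled two-proportion z-test
    (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))  # P(|N(0,1)| >= z), both tails

# Made-up crosstab cells: 70 of 200 in one group vs. 50 of 200 in another
p = two_prop_p_value(70, 200, 50, 200)   # ≈ 0.029
significant_at_05 = p < 0.05             # True
significant_at_01 = p < 0.01             # False — same data, different “truth”
```

Nothing about the data changed between those last two lines; only the line in the sand did.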

Non-parametrics, where art thou?

Are analyses which originate via online and similar convenience samples making a fundamental assumption that the population is distributed normally? I believe yes. Is this in fact the case? I argue: not likely. I’m not going to deep dive into the reasons, and this isn’t a quality discussion (I addressed that in my prior post). Rather, from a statistical point of view, when we run crosstabs and other common tests of significance, these tests assume normally distributed populations and samples drawn randomly. I argue this scenario is a rarity in real-world conditions. More to the point, how many of us are implementing chi-square tests and similar? Non-parametric tests are tests of significance that don’t assume a normal distribution, making them better suited to ‘real-world’ sampling. I find them both fascinating and apparently invisible. Is anyone out there using them for your analyses?
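For anyone who’d like to see one in action, here’s a bare-bones chi-square test of independence on a made-up 2x2 crosstab (stdlib only, no Yates continuity correction):

```python
def chi_square_2x2(table):
    """Chi-square statistic for independence in a 2x2 crosstab."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    obs = [[a, b], [c, d]]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = rows[i] * cols[j] / n
            stat += (obs[i][j] - expected) ** 2 / expected
    return stat

# Invented crosstab: aware of the ad (yes/no) by sample cell (A/B)
stat = chi_square_2x2([[30, 20], [20, 30]])
# With df = 1, the .05 critical value is 3.841; here stat = 4.0 > 3.841
```

No normality assumption anywhere in there; the test works directly on the category counts, which is the whole appeal for convenience-sampled crosstabs.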

In case you’re curious…


I think what’s amazing about our profession is the abundance of learning opportunities and continuing education. From the MRA and similar organizations, Research Rockstar, the many groups on LinkedIn, the streaming Research Business Daily Report, to this very blog, we enjoy convenient, accessible, expert instruction, on demand. In particular I hope the managers out there encourage and support their younger employees to devote a few hours a week to participate in these opportunities. Thank you for reading and I hope you found this worthwhile. 

Mobile Research Quality: Absolute vs. Relative

Originally posted on GreenBook

I’ve found myself in an intriguing position, having both bought and sold mobile research studies, as a client broker and as a supplier. These are interesting times, no? I look around and see a buffet of webinars, whitepapers, and similar musings, mostly by authors who have never once been in a mobile research study as a participant. The occasional RoR pops up, and of course the endless procession of ubiquity and adoption metrics. What I see little of are frank discussions of mobile research ‘quality.’ This is a broad term, so let’s define it.

Defining Mobile Quality

In this post I’m referring to mobile research, not mobile surveys. Essentially, my primary definition here is how much we can ‘trust’ mobile research results. I view this topic in absolute (i.e., as a new methodology) and relative (i.e., compared to other quant/qual fieldwork) contexts. Also, some of this overlaps with security issues, which I’ll touch on.

In the Absolute

When I think of mobile research quality in absolute terms, it’s hard for me not to lapse into relativistic comparisons, but I will table that for now. Focusing on this as a new methodology, we all know a few items: it’s a recent entrant into our world, the devices are seemingly everywhere, and people have them nearby at all times. And as tempting as it is to give this a blanket endorsement as ‘automatically’ having quality, ‘because these things are so common,’ that would be unwise. I’ve participated in quite a few of these studies (I have 11 research apps running on my G4); my guess would be over 50, though I’m not sure of the exact number. And yes, the usual design issues are in effect: test it so it’s not buggy or looping, shorter is better, etc. Most of the studies I’ve participated in are actually quite thoughtful in their respondent experience. Mobile panelists are quite precious, and the ease with which one can give a 1-star savaging in the app stores is on suppliers’ minds.

Regardless of the survey design, UX, etc., what is the key issue regarding mobile research quality? It is this: I’m standing in the (insert_name_here) aisle at Target, I’m taking a barcode scan of the correct or incorrect product with instant validation, I’m taking a picture of my receipt, or maybe I’m using the product at my home. I have provided evidence that I have indeed purchased said product, or been in the aisle examining the signage, etc. Moreover, an implementation of geofencing or geovalidation ensures I’m indeed inside the store during the study and/or when the submit button is reached. Am I sharing the ‘right’ answers re what I think of the product, signage, etc.? No way to ever know that from any respondent, but why wouldn’t I share the truth? There are no social desirability effects, and my incentive is arriving whether I’m yea or nay on the product. Same goes for OOH ad-recall/awareness studies.

In the Relative

Let’s exit the vacuum and compare this methodology to traditional quant techniques. Having spent many (too many) years inside online panel suppliers, I can attest to the enormous reliance on these panels to power primary market research. The sheer volume of panel-sourced survey completes is staggering.

Frankly, I think comparing mobile research quality to online panel quality is laughable. There is no comparison. This is a slam dunk in favor of mobile. Maybe you think I’m being glib…but if you’ve seen what I’ve seen you would be nodding in agreement. With the exception of invite-only panels, the amount of fraud in this space is greater than you’ve heard or read about. I’m not going to deep dive as it’s off topic but it goes beyond satisficing, identity corroboration, recruiting sources and other supplier sound bites used to reduce hesitation when buying thousands of targeted completes for $2.35.

Yes, these apps are in the app stores; ergo, anyone with a compatible device can install (and rate) them. Some do allow (or require) custom/private recruiting for ethnography, qual & b2b, but the bulk are freely available to the mobilized masses. Isn’t this then like online panels, in that anyone can sign up? Yes, pretty close. So what’s the difference? One difference is that organized (yes, organized) fraud hasn’t infiltrated this space yet. So there’s that. Another difference is that because this space is app powered, the security architecture is entirely different, and stronger relative to online Swiss Cheese firewalls. Yet another difference is the effort required to secure an incentive; specifically, the requirement of being in a physical location helps.

Effort = Good

There is effort required with these studies. You’re not sitting on your couch mouse clicking dots on a screen. Effort makes the respondent invest in the experience with their time and candor. There is also multi-media verification. For example, I’ve listened to OE audio comments, and I would encourage you to do the same if you need any convincing that these studies are not ‘real’ somehow (I can play some for you). Once you hear the tone, the frustration, interest, happiness, etc; your doubts about the realness of these studies will dry up. Incidentally, once you’ve heard OE audio, your definition of the phrase ‘Voice of the Customer’ is about to get quite a lot more stringent.

I’ll wrap this up and save more for future posts. Thank you for reading, I hope I gave you food for thought and we can enjoy watching this fascinating technology unfold together.

Mobile Musings: Have you been a respondent yet?


Scott Weinberg, Tabla Mobile LLC
Immediate Past President, MN / Upper Midwest MRA Chapter

I’ve been noticing how few market researchers and advertisers have participated in even a single mobile research study. Specifically, I’m referring to an app-based experience, usually using a form of geo-validation and multi-media data capture. I’m not referring to opening a url on your phone and taking a survey, any survey.

Rather, I’m referring to an actual ‘mobile research’ experience, the kind where you’re notified walking into a movie theatre, Best Buy, Target, grocery store, gas station, etc. Alternately, you may be pre-screened and invited to participate, e.g. an out of home ‘assignment.’ The reason I’m curious about this is because of the (profoundly?) unique and different respondent experience these studies entail. Let me give you a few examples.

I took an in-store study, or attempted to, inside a Super Target. I’m not affiliated with this supplier; I have several survey research apps running on my phone (and I never stray far from an electric outlet). Essentially the assignment entailed taking 1 photo of each of 11 variations of a food product, and responding to a few questions on each. Not difficult; tedious, but not difficult. When I uploaded the first pic, my phone timed out/went into lock mode (set at 1 minute). I tried it three times. I was on an iPhone back then, where pic file sizes range from 1–2 MB, depending on the detail (Androids are similar). This isn’t an issue on a home Wi-Fi or similar network, but inside a big box store, via your cellular carrier, pic uploads (or any uploads) can be a pickle.
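Some back-of-envelope arithmetic shows why a one-minute screen lock is fatal here. The uplink speeds below are illustrative guesses, not measurements:

```python
def upload_seconds(file_mb, uplink_mbps):
    """Seconds to upload file_mb megabytes at uplink_mbps megabits/second."""
    return (file_mb * 8) / uplink_mbps

# A 2 MB photo on a weak in-store cellular uplink vs. a decent Wi-Fi link
on_cellular = upload_seconds(2, 0.25)   # 64 s — longer than a 60 s lock timeout
on_wifi = upload_seconds(2, 10)         # 1.6 s
```

At a quarter of a megabit per second, the upload simply cannot beat the lock screen; on any reasonable Wi-Fi connection it’s a non-event.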

So what did I do? I was calculating that even if I could get the upload to work, I was looking at a 15+ minute boring repetitive survey, while standing in this food aisle. Not much intrigue to this. I was wondering how many others around the country were having this same frustrating experience. I decided to try an experiment of my own: I took 11 random product photos outside the survey (just using my camera) and exited the survey. The survey told me I had an hour to finish up from when I started. I drove home. Resumed the survey on my home wi-fi. At the first upload sequence, I randomly uploaded one of the pics, in about 2 seconds. Answered those questions. Went to the next sequence. Rinse and repeat. Finished in a few minutes. My experiment was to determine if this particular app had any kind of lockout or detection protocols for what I was doing. This supplier is a major player, one of the largest out there. Submitted fine, and my incentive showed up after a few days.

I’ve also noticed recently that Target stores offer free Wi-Fi. You need to actively accept their terms and log in to connect, i.e., it’s not an ‘auto-connect.’ I wonder how many people actually do this? Or how many suppliers tell their potential respondents there is free onsite Wi-Fi, and to connect to it? I’ve never seen messaging to this effect in a mobile study; have you?

Another example, this time as a project manager rather than a respondent. On a time-sensitive, DMA-specific mobile study, a phone-recruit-to-survey-app approach was in effect. Ergo, many of the respondents were ‘first-timers’ to this kind of study. I’m rather keen on these audiences actually, as they bypass the conditioned (i.e., self-select-biased) ‘panel people’ who comprise the bulk of all primary online research (and a small but growing portion of mobile respondents). During this study it became apparent that live tech support was needed (and by live I mean immediate, while they were in-aisle). I began emailing my phone number to the potential respondents, and my phone quickly started ringing with confused respondents. They weren’t doing anything wrong, the app was working fine, the survey was loading fine; they were just unsure how all this works. Happily, however, they were motivated to participate (a healthy incentive didn’t hurt).

So, what are the lessons here? First, suppliers approach signal strength issues differently, with some using offline versions of the app experience (data are uploaded later); others minimize the amount of data uploaded via design. Ask what your options are. Second, when the sampling universe is small, e.g., with specific DMAs, age groups and such, and each potential response is critical, it’s wise to plan for tech support in advance and have live people ready and on-call to answer questions or take feedback. A confused user may not return to the study if they can’t access the content correctly.

Most importantly, experiencing activities like these makes more of an impact than reading about them; I always encourage interested parties to experience this methodology as a respondent. It doesn’t matter whether you’re new to the mobile research space or are versed in various fieldwork methods; the technology is rapidly changing, and our assumptions regarding how we should interact are best learned empirically.

Privacy and the Digital Lifestyle

In light of the PRISM leaks, and the revelations about the National Security Agency, we’ve all been hearing about the modern concerns regarding privacy, specifically within our digital lives. To those who may not have been paying attention for the past decade, these revelations may come as a surprise, but not for millennials. I’ve only ever been taught to keep my social security number private and secure, which may come as a surprise to some people. My address, my phone number, my likes and dislikes, my tastes and my turn-ons can all be found online by anyone with a reasonable enough aptitude (and I do mean Google-search-level reasonable here). It’s all voluntary, and so far, it has all been to my benefit.

I don’t create a new login, password, or username every time I want to create a new account online. Instead, every time I log onto a site that requests my Facebook permissions, Google, Twitter, or whatever, I’m signing over the information I keep with those sources for the sake of convenience. Now, that isn’t so much of a problem when I can easily remove my permission and take control of my data. For most privacy advocates I bump into, that’s all they really want: the ability to say ‘enough.’ I only have three or four accounts which are my keys to the web, thanks to embedded Google / Facebook / Twitter sign-in protocols. Does that mean I’m signing away my information? It would, if I had believed it was private in the first place.

For the majority of us, we are content to share our information with those companies or persons who may be interested, because most of it we have already written off as public. Addresses aren’t secret; they’ve been published in phone books and directories for decades. My phone number isn’t a secret, as it’s on my business cards, and any social network or web application that supports two-factor authentication already has it. As the recent PRISM leaks have shown us, digital privacy may be an all but impossible goal in the long run, and if we are to live digitally, it will be within a digital panopticon of surveillance.

So, what is the value of digital privacy anyway? All of my information is already available on the internet, and if I choose to release bits and pieces, I am rewarded with convenient login tools, websites that can translate my content across different mediums, and even coupons for lunch. My generation is less and less apt to demand privacy. Instead, we want control of our data. To us, social networks online can be just as real as those connections made in real life, sitting in front of another person. In fact, as anyone who’s ever tried online dating can attest, sometimes it’s just easier to be honest with a computer screen than with another face.

And that’s just it. For all of the hoaxes, phishing scams and fears placed into us by the media or those who came before us, we are more willing and more able to trust someone over a computer screen than someone in front of us. Maybe it’s just because we can examine them from the comfort of our own homes.

Mobile Musings: Disintermediation

My prior blog post ‘What is Mobile Research’ was a simple primer in the form of a Q & A. I thought I would stir the waters a bit more with this submission. I’ve been discussing, and in fact advocating, mobile research (not mobile surveys, which are different) as a methodology with both market research firms and end clients since early 2012. I haven’t kept count but certainly over 50 separate discussions/presentations, roughly split between the ‘channel’ (MR firms) and client siders. I developed two different talk tracks, based on viability with the company type and also awareness of a new business model:

  1. Within the channel, the positioning is more of a VAR (value-added reseller) model, in which the fieldwork is outsourced to a specialty firm that is equipped with both the technology (i.e., a mobile research smartphone app) and the traffic (i.e., engaged people who have installed said app). The MR firm then adds its ‘secret sauce,’ hence adding the value, to the fieldwork. Same model as with online panel sourcing, for example.
  2. When I’m with end clients (for me, typically CPGs and big box retailers), the people I’m chatting with have comparatively more open minds about the methodology. The difference can be dramatic. I’ve seen it over and over. Mobile research can (virtually) take them into their stores, in front of their products and their competitors’ products, all from the consumer’s perspective. It’s more a case of ‘how do we get started’ as opposed to ‘what about representativeness?’ (I’ll address representativeness in a later post).

Well, is there a point to this? I think there is, and it’s this: end clients are calling upon their MR firms of record to engage in ‘in the moment’ research with today’s mobilized consumer, and the channel has been caught unprepared. And whatever interest is there now will be dwarfed a year from now. And two years from now. In defense of the channel, the rise of mobile research technologies has been so quick that it’s not fair to expect MR firms, who, after all, are hypothesis generators and insight strategists, to have sophisticated smartphone technologies and a mobile panel in-house and ready to go. Noted.

However…the reaction I receive from my channel discussions, and I’m usually in the room with the senior execs, is one of four:

  1. I’m going to ignore this and hope it goes away
  2. We’ll take a wait and see approach; perhaps my clients will magically request a mobile study
  3. Hmmm…maybe this is a paradigm shift, maybe not, but I should get up to speed
  4. This would have been perfect for a qual/ethnography/mystery shop/shelf set sim study we ran last month, let’s add this to our competencies

I think if you’ve spent time observing mobile research as a business model, you’ve become aware that the end clients are leapfrogging, or disintermediating, the channel and going straight to the ‘OEMs,’ i.e., those firms who have been quietly developing the apps and promoting them to audiences (I addressed recruiting in my prior post). These OEM suppliers, be they online panel firms or pure mobile plays, have naturally jumped into the void and are assertively getting in front of these end client study sponsors.

“But what about the secret sauce?” I’m often asked (I’m paraphrasing) by the channel. Well, two items of note: raw, or semi-raw, data feeds from inside stores, restaurants, etc., with pics, vids, and open-end audio clips (what I consider the real Voice of the Customer, finally!) from mobilized consumers are as pure and non-biased a data feed as this industry has seen. Moreover, the OEMs are hiring research pros to diversify their service offerings as well. Check out their press releases or attend their conference sessions if you don’t take my word for it.

If I were running an MR firm and planning to be around a while, I would hedge my bets and initiate frank chats with other industry insiders; specifically, about the implications of smartphone ubiquity, disintermediation, and how to maintain our perceived value as insight strategists and domain experts with these new technologies swirling around us.

Scott Weinberg, Tabla Mobile LLC

Immediate Past President, MN / Upper Midwest MRA Chapter