Geofencing – Anyone using it?

Given how many conversations I have, and questions I receive, regarding geofencing, I’m curious how often this is used in practice? Specifically, I’m referring to the oft-discussed scenario of getting ‘pinged’ as you enter (or exit) a restaurant, cinema, shopping mall…etc. A survey is then launched, with location-specific questions to follow.

A few months ago, a consultant whose client is a small food company relayed a question from the client’s MR team lead: can we ‘ping’ a shopper as they round a corner into a store aisle? I’ve thought often about that question. Not whether it can be done (maybe, with Beacons), but rather how unrealistic this scenario is. We’re talking about someone moving through a store who’s there to shop, not take a survey. As they walk past an end cap, or down the potato chip aisle, they get some type of phone alert. Assuming someone notices the alert (they won’t), are they going to stop in their tracks and take a 2-, 3-, or 7-minute survey? (They won’t.)

Another curve ball: can geofencing tell the difference between the first and second level of a mall? Could it think you’re in the store directly above or below where you’re standing? My hunch is that it can’t tell the difference, as geofencing works on the X and Y axes (not Z). Perhaps if you’re accessing in-store wifi it might know the difference. Nevertheless, are these the questions we should be considering regarding its viability?
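To make the 2-D limitation concrete, here is a minimal sketch (plain Python, hypothetical coordinates) of how a circular geofence check typically works: latitude and longitude only, no altitude, so two stores stacked on top of each other in a mall resolve to the same point.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(lat, lon, center_lat, center_lon, radius_m):
    """True if the device's 2-D fix falls within the circular fence.
    Note: no Z axis anywhere in this check."""
    return haversine_m(lat, lon, center_lat, center_lon) <= radius_m

# Hypothetical store center (downtown Mpls-ish coordinates)
store = (44.9778, -93.2650)
first_floor = (44.9779, -93.2651)
second_floor = (44.9779, -93.2651)  # identical 2-D fix, different altitude
print(inside_geofence(*first_floor, *store, 100))   # True
print(inside_geofence(*second_floor, *store, 100))  # True — the Z axis is invisible
```

A fence defined this way simply cannot distinguish floors; that disambiguation would have to come from another signal (in-store wifi, Beacons, etc.).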

More recently, I was discussing a retailer exit interview scenario with several fieldwork suppliers. In fact I secured bids from several, and whilst leaning towards one offer, I decided to test their claim that yes, they can handle geofencing. By coincidence, that very eve I was attending a mobile tech Meetup with a speaker who works with Beacons (the newish Bluetooth Low Energy (BLE) technology…stay tuned for a separate post on this). Anyhoo, I shared the location of the event (a big restaurant with a banquet room near downtown Mpls) with the potential fieldwork supplier. They in turn geofenced the location, tested it, and told me it’s good to go. I attended the event, checked my phone often…crickets. I’ll paraphrase the next day’s debrief as: ‘turns out we don’t actually have geofencing, I thought we did.’

The silver lining in this scenario is that it prompted me to circle back and give several suppliers the 3rd degree on how they define geofencing. The gist of those conversations was: most don’t have real geofencing beyond an alpha dev stage. No harm came my way, or more importantly my client’s, in the form of a DOA project; however, I think this story goes beyond a simple ‘buyer beware’ cautionary tale. The number of mobile fieldwork suppliers with fully functional geofencing, as I write this in early 2014, is amazingly low.

Which isn’t the point. The point to me now is: apparently geofencing isn’t in much demand…OR…these types of projects are monopolized by the few suppliers who have invested in the technology. The latter is self-evident, and I’m leaning towards the former as well. How about you?

Raise your hand if the truth starts at .05

Originally posted on GreenBook. My first day of graduate school began with the instructors telling me and my fellow first-year classmates, “there are two acceptable reasons for being late with an assignment: hospitalization and incarceration.” Welcome to grad school, kid. We had three core instructors for my I/O Psych track, and all were newly minted PhDs under the age of 30. If you’ve ever had a new PhD for an instructor, you know they are the toughest. They just went through heck, and now you are too. They told us they were going to cram as much PhD material into us for the two years they had us in captivity. Good times.

Within these conditions, one tends to retain a few things, some of which I’ve been reminded of from time to time relative to the market research space and its residents. I’m going to throw a few out here and see what happens.

 The what is easy. The why is not.

I recall two years of 700-level statistics coursework, always at 8am. Stats are always taught at 8am. I recall a quote from my textbook: “if I only had one day left to live, I would spend it in a statistics class, because the day would seem so much longer.” Working in MR I’ve met many clients and colleagues in this space, as we all have. I notice how many people new to the industry are taught how to do things, but not the why behind them. For example, we rotate concepts because it ‘reduces bias’ (actually it’s due to phenomena called the primacy and recency effects). Or, we ask these particular questions for all concept tests regardless of category because that’s how we do things here at (insert Honomichl name). Or, we’re not shrinking our 25-minute survey because we know people enjoy shaping the products of the future. Or, we can run t-tests and ANOVAs on any data set, regardless of how the sample was recruited and drawn, or considerations for compounding error…

 So about this .05…

Let’s consider bread-and-butter significance testing: crosstabs. How often are insights created in PowerPoint by looking for the asterisks and mini-font letters indicating a significant difference? Anyone want to bet the word ‘significant’ is never misunderstood? More to the point: why .05? Ever wonder what is so magical about that particular threshold? Based on what I was taught, .05 is an arbitrarily agreed-upon compromise that balances the chances of making a Type 1 and a Type 2 error.

Lest we forget, a Type 1 error is rejecting the null hypothesis when it is in fact true (i.e., believing you have a difference between samples when there isn’t one), and a Type 2 error is the opposite: failing to reject the null when it is in fact false (i.e., there is a real difference, but your test isn’t detecting it). Ergo, there is nothing special about .05. It could be .04 or .06 or .08, etc. Sometimes you’ll see .01, a more stringent threshold, but the point I’m trying to make is this: please don’t assume ‘the truth’ magically kicks in at .05. It doesn’t. Yes, it helps to have a threshold; however, the specific boundary holds no inherent path to insights.
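To see that the threshold is just a dial, here is a quick simulation sketch (plain Python, normal-approximation two-proportion z-test, illustrative only): both ‘cells’ are drawn from the same population, so the null is true by construction, every rejection is a Type 1 error, and the false-positive rate simply tracks whatever alpha you pick.

```python
import math
import random

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * P(Z > |z|)

random.seed(42)
n, trials = 200, 5000
# Two cells of n=200 drawn from the SAME 50/50 population each trial,
# so any 'significant' difference is a false positive by construction.
pvals = [
    two_prop_pvalue(sum(random.random() < 0.5 for _ in range(n)), n,
                    sum(random.random() < 0.5 for _ in range(n)), n)
    for _ in range(trials)
]
for alpha in (0.01, 0.05, 0.10):
    rate = sum(p < alpha for p in pvals) / trials
    print(f"alpha={alpha:.2f}  false-positive rate ~ {rate:.3f}")
```

Whatever alpha you choose, roughly that fraction of null-true comparisons comes back ‘significant.’ Nothing kicks in at .05 that wasn’t already happening at .06.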

Non-parametrics, where art thou?

Are analyses which originate via online and similar convenience samples making a fundamental assumption that the population is distributed normally? I believe yes. Is this in fact the case? I argue: not likely. I’m not going to deep dive into the reasons, and this isn’t a quality discussion (I addressed that in my prior post). Rather, from a statistical point of view, when we run crosstabs and other common tests of significance, these tests assume normally distributed populations and samples drawn randomly. I argue this scenario is a rarity in real-world conditions. More to the point, how many of us are implementing chi-square tests and similar? Non-parametrics are tests of significance that don’t require those distributional assumptions, which makes them a better fit for ‘real-world’ sampling. I find them both fascinating and apparently invisible. Is anyone out there using them for your analyses?
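For the curious, a chi-square test of independence on a crosstab needs nothing exotic. Here is a minimal sketch for the 2x2 case (hypothetical data, pure Python; for df = 1 the p-value follows directly from the normal tail):

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table.
    Returns (statistic, p-value). With df = 1, P(chi2 > x) = erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = 0.0
    for i, obs_row in enumerate(((a, b), (c, d))):
        for j, obs in enumerate(obs_row):
            expected = rows[i] * cols[j] / n  # expected count under independence
            stat += (obs - expected) ** 2 / expected
    return stat, math.erfc(math.sqrt(stat / 2))

# Hypothetical crosstab: 'would recommend' (yes/no) by store format (A/B)
stat, p = chi_square_2x2([[30, 70], [45, 55]])
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

On this made-up table the statistic works out to 4.8 with p just under .03, i.e., ‘significant’ at .05 but not at .01, which is the whole point of the section above: the verdict depends on the threshold you chose.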

 In case you’re curious…

 

I think what’s amazing about our profession is the abundance of learning opportunities and continuing education. From the MRA and similar organizations, Research Rockstar, the many groups on LinkedIn, and the streaming Research Business Daily Report, to this very blog, we enjoy convenient, accessible, expert instruction on demand. In particular, I hope the managers out there encourage and support their younger employees to devote a few hours a week to participating in these opportunities. Thank you for reading, and I hope you found this worthwhile.

Mobile Research Quality: Absolute vs. Relative

Originally posted on GreenBook

I’ve found myself in an intriguing position, having both bought and sold mobile research studies, as a client broker and as a supplier. These are interesting times, no? I look around and see a buffet of webinars, whitepapers, and similar musings, mostly by authors who have never once been in a mobile research study as a participant. The occasional RoR pops up, and of course the endless procession of ubiquity and adoption metrics. What I see little of are frank discussions of mobile research ‘quality.’ This is a broad term, so let’s define it.

Defining Mobile Quality

In this post I’m referring to mobile research, not mobile surveys. Essentially, my primary definition here is how much we can ‘trust’ mobile research results. I view this topic in absolute (i.e., as a new methodology) and relative (i.e., compared to other quant/qual fieldwork) contexts. Also, some of this overlaps with security issues, which I’ll touch on.

In the Absolute

When I think of mobile research quality in absolute terms, it’s hard for me not to lapse into relativistic comparisons, but I will table that for now. Focusing on this as a new methodology, we all know a few items: it’s a recent entrant into our world, the devices are seemingly everywhere, and people have them nearby at all times. And as tempting as it is to give this a blanket endorsement as ‘automatically’ having quality ‘because these things are so common,’ that would be unwise. I’ve participated in quite a few of these studies (I have 11 research apps running on my G4); my guess would be over 50, though I’m not sure of the exact number. And yes, the usual design issues are in effect: test it so it’s not buggy or looping, shorter is better, etc. Most of the studies I’ve participated in are actually quite thoughtful in their respondent experience. Mobile panelists are quite precious, and the ease with which one can give a 1-star savaging in the app stores is on suppliers’ minds.

Regardless of the survey design, UX, etc., what is the key issue regarding mobile research quality? It is this: I’m standing in the (insert_name_here) aisle at Target, taking a barcode scan of the correct or incorrect product with instant validation, taking a picture of my receipt, or maybe using the product at my home. I have provided evidence that I have indeed purchased said product, or been in the aisle examining the signage, etc. Moreover, an implementation of geofencing or geovalidation ensures I’m indeed inside the store during the study and/or when the submit button is reached. Am I sharing the ‘right’ answers regarding what I think of the product, signage, etc.? There’s no way to ever know that from any respondent, but why wouldn’t I share the truth? There are no social desirability effects, and my incentive is arriving whether I’m yea or nay on the product. The same goes for OOH ad-recall/awareness studies.

In the Relative

Let’s exit the vacuum and compare this methodology to traditional quant techniques. Having spent many (too many) years inside online panel suppliers, I can attest to the enormous reliance on these panels to power primary market research. The sheer volume of panel-sourced survey completes is staggering.

Frankly, I think comparing mobile research quality to online panel quality is laughable. There is no comparison. This is a slam dunk in favor of mobile. Maybe you think I’m being glib…but if you’d seen what I’ve seen, you would be nodding in agreement. With the exception of invite-only panels, the amount of fraud in this space is greater than you’ve heard or read about. I’m not going to deep dive, as it’s off topic, but it goes beyond satisficing, identity corroboration, recruiting sources, and other supplier sound bites used to reduce hesitation when buying thousands of targeted completes for $2.35.

Yes, these apps are in the app stores, ergo anyone with a compatible device can install (and rate) them. Some do allow (or require) custom/private recruiting for ethnography, qual, and B2B, but the bulk are freely available to the mobilized masses. Isn’t this then like online panels, in that anyone can sign up? Yes, pretty close. So what’s the difference? One difference is that organized (yes, organized) fraud hasn’t infiltrated this space yet. So there’s that. Another difference is that because this space is app powered, the security architecture is entirely different, and stronger relative to online panels’ Swiss-cheese firewalls. Yet another difference is the effort required to secure an incentive; specifically, the requirement of being in a physical location helps.

Effort = Good

There is effort required with these studies. You’re not sitting on your couch mouse-clicking dots on a screen. Effort makes the respondent invest in the experience with their time and candor. There is also multi-media verification. For example, I’ve listened to OE audio comments, and I would encourage you to do the same if you need any convincing that these studies are not ‘real’ somehow (I can play some for you). Once you hear the tone, the frustration, interest, happiness, etc., your doubts about the realness of these studies will dry up. Incidentally, once you’ve heard OE audio, your definition of the phrase ‘Voice of the Customer’ will get quite a lot more stringent.

I’ll wrap this up and save more for future posts. Thank you for reading, I hope I gave you food for thought and we can enjoy watching this fascinating technology unfold together.

Mobile Musings: Have you been a respondent yet?

Scott Weinberg, Tabla Mobile LLC
Immediate Past President, MN / Upper Midwest MRA Chapter

I’ve been noticing how few market researchers and advertisers have participated in even a single mobile research study. Specifically, I’m referring to an app-based experience, usually using a form of geo-validation and multi-media data capture. I’m not referring to opening a url on your phone and taking a survey, any survey.

Rather, I’m referring to an actual ‘mobile research’ experience, the kind where you’re notified walking into a movie theatre, Best Buy, Target, grocery store, gas station, etc. Alternately, you may be pre-screened and invited to participate, e.g. an out of home ‘assignment.’ The reason I’m curious about this is because of the (profoundly?) unique and different respondent experience these studies entail. Let me give you a few examples.

I took an in-store study, or attempted to, inside a Super Target. (I’m not affiliated with this supplier; I have several survey research apps running on my phone, and I never stray far from an electric outlet.) Essentially the assignment entailed taking 1 photo of each of 11 variations of a food product, and responding to a few questions on each. Not difficult; tedious, but not difficult. When I uploaded the first pic, my phone timed out/went into lock mode (set at 1 minute). I tried it three times. I was on an iPhone back then, where pic file sizes range from 1-2 MB, depending on the detail (Androids are similar). This isn’t an issue on a home wifi or similar network, but inside a big box store, via your cellular carrier, pic uploads (or any uploads) can be a pickle.

So what did I do? I calculated that even if I could get the upload to work, I was looking at a 15+ minute boring, repetitive survey while standing in this food aisle. Not much intrigue to this. I wondered how many others around the country were having this same frustrating experience. I decided to try an experiment of my own: I took 11 random product photos outside the survey (just using my camera) and exited the survey. The survey told me I had an hour to finish from when I started. I drove home and resumed the survey on my home wi-fi. At the first upload sequence, I randomly uploaded one of the pics, in about 2 seconds. Answered those questions. Went to the next sequence. Rinse and repeat. Finished in a few minutes. My experiment was to determine if this particular app had any kind of lockout or detection protocols for what I was doing. This supplier is a major player, one of the largest out there. It submitted fine, and my incentive showed up after a few days.

I’ve also noticed recently that Targets offer free wi-fi. You need to actively accept their terms and log in to connect, i.e., it’s not an ‘auto-connect.’ I wonder how many people actually do this? Or how many suppliers tell their potential respondents there is free onsite wi-fi, and to connect to it? I’ve never seen messaging to this effect in a mobile study; have you?

Another example, this time as a project manager rather than a respondent. On a time-sensitive, DMA-specific mobile study, a phone-recruit-to-survey-app flow was in effect. Ergo, many of the respondents were ‘first-timers’ to this kind of study. I’m rather keen on these audiences, actually, as they bypass the conditioned (i.e., self-selection bias) ‘panel people’ who comprise the bulk of all primary online research (and a small but growing portion of mobile respondents). During this study it became apparent that live tech support was needed (and by live I mean immediate, while they were in-aisle). I began emailing my phone number to the potential respondents, and my phone quickly started ringing with confused respondents. They weren’t doing anything wrong, the app was working fine, the survey was loading fine; they were just unsure how all this works. Happily, however, they were motivated to participate (a healthy incentive didn’t hurt).

So, what are the lessons here? First, suppliers approach signal strength issues differently, with some using offline versions of the app experience (data are uploaded later); others minimize the amount of data uploaded via design. Ask what your options are. Second, when the sampling universe is small, e.g., with specific DMAs, age groups, and such, and each potential response is critical, it’s wise to plan for tech support in advance and have live people ready and on-call to answer questions or take feedback. A confused user may not return to the study if they can’t access the content correctly.
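For the technically inclined, the ‘offline version’ approach in that first lesson boils down to a store-and-forward queue. Here is a hedged sketch (all names hypothetical, not any supplier’s actual design): responses are written to local storage the moment they’re captured, and a background pass retries the upload whenever connectivity allows.

```python
import json
import time
import uuid
from pathlib import Path

# Local queue directory; in a real app this would live in app-private storage.
QUEUE_DIR = Path("pending_uploads")

def save_response(response: dict) -> Path:
    """Persist a survey response locally so a weak in-store signal can't lose it."""
    QUEUE_DIR.mkdir(exist_ok=True)
    # Timestamp prefix keeps files sortable oldest-first; uuid avoids collisions.
    path = QUEUE_DIR / f"{int(time.time() * 1000)}-{uuid.uuid4().hex}.json"
    path.write_text(json.dumps(response))
    return path

def flush_queue(upload) -> int:
    """Attempt to upload every queued response, oldest first.
    `upload` is a callable returning True on success; failures stay queued
    for the next pass (e.g., once the respondent is back on home wi-fi)."""
    sent = 0
    for path in sorted(QUEUE_DIR.glob("*.json")):
        if upload(json.loads(path.read_text())):
            path.unlink()
            sent += 1
    return sent
```

The design choice is the point of the story above: nothing is lost if the first upload fails in-aisle, and the respondent isn’t left staring at a spinner next to the potato chips.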

Most importantly, experiencing activities like these makes more of an impact than reading about them; I always encourage interested parties to experience this methodology as a respondent. It doesn’t matter whether you’re new to the mobile research space or are versed in various fieldwork methods; the technology is rapidly changing, and our assumptions regarding how we should interact are best learned empirically.

Privacy and the Digital Lifestyle

In light of the PRISM leaks, and the revelations of the National Security Agency, we’ve all been hearing about the modern concerns regarding privacy, specifically within our digital lives. To those who may not have been paying attention for the past decade, these revelations may come as a surprise, but not to millennials. I’ve only ever been taught to keep my social security number private and secure, which may come as a surprise to some people. My address, my phone number, my likes and dislikes, my tastes and my turn-ons can all be found online by anyone with a reasonable enough (and I do mean Google-search-level reasonable here) aptitude. It’s all voluntary, and so far, it has all been to my benefit.

I don’t create a new login, password, or user name every time I want to create a new account online. Instead, every time I log onto a site that requests my Facebook, Google, Twitter, or whatever permissions, I’m signing over the information I keep with those sources for the sake of convenience. Now, that isn’t so much of a problem when I can easily remove my permission and take control of my data. For most privacy advocates I bump into, that’s all they really want: the ability to say ‘enough.’ I only have three or four accounts which are my keys to the web, thanks to embedded Google/Facebook/Twitter sign-in protocols. Does that mean I’m signing away my information? It would, if I had believed it was private in the first place.

For the majority of us, we are content to share our information with those companies or persons who may be interested, because most of it we have already written off as public. Addresses aren’t secret; they’ve been published in phone books and directories for decades. My phone number isn’t a secret, as it’s on my business cards, and any social network or web application that supports two-factor authentication already has it. As the recent PRISM leaks have shown us, digital privacy may be an all but impossible goal in the long run, and if we are to live digitally, it will be within a digital panopticon of surveillance.

So, what is the value of digital privacy anyway? All of my information is already available on the internet, and if I choose to release bits and pieces, I am rewarded with convenient login tools, websites that can translate my content across different mediums, and even coupons for lunch. My generation is less and less apt to demand privacy. Instead, we want control of our data. To us, social networks online can be just as real as those connections made in real life, sitting in front of another person. In fact, as anyone who’s ever tried online dating can attest, sometimes it’s just easier to be honest to your computer screen than to another face.

And that’s just it. For all of the hoaxes, phishing scams and fears placed into us by the media or those who came before us, we are more willing and more able to trust someone over a computer screen than someone in front of us. Maybe, it’s just because we can examine them from the comfort of our own homes.

Mobile Musings: Disintermediation

My prior blog post ‘What is Mobile Research’ was a simple primer in the form of a Q & A. I thought I would stir the waters a bit more with this submission. I’ve been discussing, and in fact advocating, mobile research (not mobile surveys, which are different) as a methodology with both market research firms and end clients since early 2012. I haven’t kept count but certainly over 50 separate discussions/presentations, roughly split between the ‘channel’ (MR firms) and client siders. I developed two different talk tracks, based on viability with the company type and also awareness of a new business model:

  1. Within the channel, the positioning is more of a VAR model (value-added reseller), in which the fieldwork would be outsourced to a specialty firm that is equipped with both the technology (i.e., a mobile research smartphone app) and the traffic (i.e., engaged people who have installed said app). The MR firm then adds their ‘secret sauce,’ hence adding the value, to the fieldwork. Same model as with online panel sourcing, for example.
  2. When I’m with end clients (for me, typically CPGs and big-box retailers), the people I’m chatting with have comparatively more open minds about the methodology. The difference can be dramatic. I’ve seen it over and over. Mobile research can (virtually) take them into their stores, in front of their products and their competitors’ products, all from the consumer’s perspective. It’s more a case of ‘how do we get started’ as opposed to ‘what about representativeness?’ (I’ll address representativeness in a later post).

Well, is there a point to this? I think there is, and it’s this: end clients are calling upon their MR firms of record to engage in ‘in the moment’ research with today’s mobilized consumer, and the channel has been caught unprepared. And whatever interest is there now will be dwarfed a year from now. And two years from now. In defense of the channel, the rise of mobile research technologies has come so quickly that it’s not fair to expect MR firms, which, after all, are hypothesis generators and insight strategists, to have sophisticated smartphone technologies and a mobile panel in-house and ready to go. Noted.

However…the reaction I receive from my channel discussions, and I’m usually in the room with the senior execs, is one of four:

  1. I’m going to ignore this and hope it goes away
  2. We’ll take a wait and see approach; perhaps my clients will magically request a mobile study
  3. Hmmm…maybe this is a paradigm shift, maybe not, but I should get up to speed
  4. This would have been perfect for a qual/ethnography/mystery shop/shelf set sim study we ran last month, let’s add this to our competencies

I think if you’ve spent time observing mobile research as a business model, you’ve become aware that the end clients are leapfrogging, or disintermediating, the channel and going straight to the ‘OEMs,’ i.e., those firms who have been quietly developing the apps and promoting them to audiences (I addressed recruiting in my prior post). These OEM suppliers, be they online panel firms or pure mobile plays, have naturally jumped into the void and are assertively getting in front of these end-client study sponsors.

“But what about the secret sauce?” I’m often asked (I’m paraphrasing) by the channel. Well, two items of note: raw, or semi-raw, data feeds from inside stores, restaurants, etc., with pics, vids, and open-end audio clips (what I consider the real Voice of the Customer, finally!) from mobilized consumers are as pure and unbiased a data feed as this industry has seen. Moreover, the OEMs are hiring research pros to diversify their service offerings as well. Check out their press releases or attend their conference sessions if you don’t take my word for it.

If I were running an MR firm and planning to be around awhile, I would hedge my bets and initiate frank chats with other industry insiders, specifically about the implications of smartphone ubiquity, disintermediation, and how to maintain our perceived value as insight strategists and domain experts with these new technologies swirling around us.

Scott Weinberg, Tabla Mobile LLC

Immediate Past President, MN / Upper Midwest MRA Chapter

 

I am a Millennial

I am a millennial. Twenty-one years old, I have never not known what it means to be connected. I have always had access to a computer, I haven’t had a landline since 2008, and I haven’t had cable since 2009. I spend more time on social media per day than I do in real life with my friends or coworkers, and I’m not afraid to speak my mind on those social networks where I might have been hesitant to speak up in person. And yet, my peers are the demographic with the highest unemployment in the country, and I have a brand-new private college education in a field that I doubt I will ever enter. I am the profile of your average millennial.

Why am I explaining all of this? Because, at this very moment, thousands of market research professionals are trying to figure my generation out: what makes us tick, why we shop the way we do, and how we interact with brands. We’re resistant to traditional forms of advertising (I haven’t had a magazine subscription in my life, and we miss out on all of the cable advertising by watching Netflix and Hulu), we’re aware that we are being targeted (I have two different ad-blockers built into my web browser) and yet, we leave all of our personal information, our likes and dislikes, lying around online like digital fingerprints. Our interests, our desires and our data are the elephants in the room for market researchers, and yet, there is still one component missing: mobile.

I barely use my laptop for anything but work, but I carry my smartphone with me everywhere. My cellular data moves faster and is more reliable than my home connection, so it’s my primary device for web browsing on the go or at home. Even my tablet is more desirable to use if I’m watching streaming content or simply browsing the web. If we see an email in our inboxes, we decide whether or not it’s important within a microsecond and then it’s either read immediately or sent to the trash. When it comes to work, I live on the same device, with emails and notifications coming from separate accounts all day, every day. My phone has more processing power than my home computer, and it’s surprising to find how little interest there has been in how I use my device and what I do on it. Every time you see a Millennial on their device, they’re not just goofing off or playing a game. More often than not, we’re connecting with dozens, even hundreds of our peers depending on the social network we’re engaged with at the time. Our friendships are stronger, and my brand loyalty is stronger, because of mobile devices.

Never before has there been a generation like mine: constantly connected, sharing, and so willing to trust a brand with my personal information. I do a majority of my shopping online while on my phone, whether through mobile applications or mobile sites. This stands in stark contrast with Gen-Xers and Baby Boomers, whose adoption rates are spottier. I work in the wireless industry, tackling user issues and hearing about mobile experiences every day. I meet people who have never had a smartphone, and people who have had them for the past decade, who lack the willingness to trust their information to their device, or who don’t understand how their information can be used to tailor information, advertising, or marketing to them instead of to just another Joe Smartphone.

To understand my generation is to understand the future of mobile. I now own a smartphone and a tablet, but soon, I’ll probably go out of my way to pick up a Pebble smart-watch to deliver smartphone alerts to my wrist instead of my pocket. Or, I may decide the Google Glass headset will be more preferable. Either way, my generation is more prepared to integrate mobile and wearable tech into our everyday lives, and in ignoring the opportunities we may pose for your firm, you may be missing out on a generation of sharing, willing, and mobile consumers.

—————————————————————

Cole Hanson is a recent graduate of Hamline University, and the Lead Technology Adviser with Tabla Mobile, a mobile research advisory service.

Mobile Musings: What is Mobile Research?

Since early 2012 I’ve been actively involved in the emerging methodology known as ‘mobile research.’ As a result, I often find myself in interesting conversations with market researchers who are also getting up to speed with this new fieldwork technique. I hope you find these frequently asked questions (and how I reply) of interest.

 

What is mobile research?

This question is not as simple as it appears. Some consider any survey or study activity conducted on a phone or tablet to be ‘mobile research.’ Regardless of whether the information exchange occurs via browser-based survey, email, or SMS, as long as it’s happening via a mobile device, this is ‘mobile research.’ In fact, I’ve talked to people who consider a telephone/IDI survey to be mobile research if the respondent was on their mobile phone during it.
A more stringent definition involves apps designed to leverage the native GPS and multi-media capabilities built into smartphones and tablets. The respondent usually has more ‘involvement’ with the study in using these capabilities, compared to responding to directed questions offered in a conventional survey.

Are mobile studies representative?

Another common, and intriguing, question. If I’m being asked ‘are these studies representative of the general population?’ I reply: no, they are not. However, this is what makes mobile research not only intriguing, but a strength for progressive MR professionals. Specifically, the demographic metrics of smartphone ownership should be of interest for this reason alone. Keep in mind: the percentage of the USA population that has joined online panels is 1-2%, depending on your source. An enormous amount of MR fieldwork is powered by this tiny slice of the population.
I think it is safe to assume that considerably more than 1-2% of the USA population is in possession of a smartphone. Moreover, the demographics of these owners, from teens on up, working professionals, cell-only households, etc., make this potential audience of keen interest to consumer brand firms. Certain pockets of these demographics are challenging to reach via the traditional advertising mediums of TV, radio, and print. However, these same age and demo cohorts are often in possession of a smartphone, and one only needs to look around to see how actively involved they are with them.

What about data quality?

Given the reliance on self-reported feedback through conventional methods (online surveys, phone/mail surveys), the potential of smartphone-based research should be of keen interest. There is an added layer of validation via mobile research, with shopping behavior in particular, that is capturing the interest of consumer brand firms and manufacturers. Consider the difference between asking ‘are you shopping for a new car?’ in a conventional survey, vs. receiving photographic verification and open-end commentary, in audio format, from a car shopper while they are at the dealership (and all the dealerships they visit that week or month).
Moreover, mobile research can provide in-store shopper feedback, and purchase verification, for laundry detergent, shampoo, grocery items, and other fast-moving consumer goods via barcode scanning and receipt photos. The appeal of store promotions, instant coupons, etc. can be measured in-store to gauge brand loyalty and propensity for on-the-spot brand switching.

Do I need an app for mobile research?

If one simply requires the respondent to have browser access, then the answer is no. Examples of this may include closed audiences, for example at a meeting or conference. Or, the survey may be offered as a convenience, for example as a follow-up to a customer service inquiry, where the survey device doesn’t matter. However, many ‘online surveys’ do not render properly on a mobile device, and this often-overlooked issue can quickly lead to respondent frustration and drop-offs.
Sophisticated mobile research, involving geo-validation, barcode, and multi-media validation, does require an app designed for this kind of activity. Although these apps are designed to conduct similar tasks, there is a surprising amount of variability in user experience and technical proficiency (e.g., care given to the battery-drain problem). As always, it’s good to shop around, and I encourage interested parties to try several apps via the app stores and experience them as a respondent might.
I hope you find this mini primer on mobile research helpful, and keep an eye out for further newsletter articles devoted to this topic.

 

Scott Weinberg,

Immediate-Past President of the Upper Midwest / MN MRA Chapter

Originally posted in the MN-MRA Winter 2013 Newsletter