Will AI Replace Market Research?
Manuel Dursi discusses the future of AI in market research with Dr. Hamish McPharlin, Managing Director at Element Human. They explore the basics of AI, specifically generative AI, and its potential impact on the industry. Hamish explains how AI uses machine learning to analyze data and generate new content, which raises concerns about job redundancy in market research. At the same time, Hamish points out that AI lacks the human intuition and empathy needed for nuanced insights, making it a powerful tool rather than a replacement for human researchers. He discusses how Element Human integrates AI into its processes, such as emotion recognition and survey data processing, while stressing the continued importance of human oversight. The discussion also touches on the future challenges of AI, including issues of ownership and regulation, which could limit its disruptive potential.
Full Transcript:
[00:00:01] Manu Dursi: Hi everyone, this is Manuel Dursi from Element Human. We are a global tech platform that uses the latest technologies to measure how audiences react and respond to content. Today, I'm here talking to the president of Element Human, Dr. Hamish McPharlin. Welcome, Hamish. AI is a topic that has exploded in recent months with launches of things like ChatGPT and MidJourney.
There are a lot of questions being asked about the future of AI and whether it will make a lot of jobs redundant. Hamish has worked in market research for 17 years, so I think the time is right to have a conversation about whether his job is also on the line. Before we get to that, can you explain to us a bit about generative AI?
What is it, and what relationship does it have to market research?
[00:00:49] Hamish McPharlin: Hey Manu, yeah, okay. So AI, at its most basic, is a kind of machine learning method that takes a look at historic data and how that data has behaved, and then it makes predictions about how things will behave if given new information. So, for instance, if you ask a question of ChatGPT, it's going to take a look at how that question was asked and answered previously on the Internet.
And then it's going to make a prediction of what a correct response to you would look like.
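To make that "look at historic data, then predict what comes next" idea concrete, here is a toy Python sketch. It is not how ChatGPT actually works internally; it simply counts which word followed which in a small made-up corpus and predicts from those counts.

```python
from collections import Counter, defaultdict

# Toy illustration of "look at historic data, then predict what comes next".
# Real systems like ChatGPT use large neural networks, not word counts;
# the corpus below is made up purely for this example.
corpus = "the survey was short the survey was useful the ad was funny".split()

# Count which word historically followed each word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the historic data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("survey"))  # -> "was", because that's what the history suggests
```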
[00:01:19] Manu Dursi: Okay, so what is generative AI? And why is it considered a disruptor to market research?
[00:01:26] Hamish McPharlin: Okay, so generative AI is a form of AI where new information is generated on its own. This is particularly interesting because, beyond asking questions and getting answers, generative AI can do some work for you.
So, for instance, you could ask it to create a piece of artwork or to write a script. For market research, a big worry is that our clients are not going to need a market research company anymore. You can ask generative AI to design a survey for your brand, and it's going to crawl the web and put together what looks like a pretty decent survey.
So now you don't need someone to create that survey. And on top of that, you can give AI a dataset and you can ask it to analyze it for you and present those findings.
[00:02:08] Manu Dursi: Okay, so from what you're seeing right here, right now, it sounds like you'd better start packing your bags, Hamish. That's basically your job, right?
[00:02:17] Hamish McPharlin: Yeah, so at a basic level, it can do quite a lot of things a human researcher can do. However, I don't believe that AI is going to replace market researchers for a number of reasons.
[00:02:29] Manu Dursi: Why is that?
[00:02:31] Hamish McPharlin: Okay, so on a broad level, let's first have a think about the general accuracy of AI. There was a case in New York where a lawyer was in a trial, and he cited six cases to the judge that he believed were precedents for the case that was at hand. The problem was that every single one of them turned out to be false, and it turned out that he had asked ChatGPT to find them for him, and it had given him these very detailed cases that looked very authentic.
But, in fact, the cases even contained their own citations to other cases, and these also turned out to be complete fabrications. So here's the thing. He asked ChatGPT if they were real, and ChatGPT said, yes, they are. Okay, so what is going on here? Well, AI is processing content it finds on the internet and it's formulating this answer, but it doesn't have that human level of ability where, you know, we use a hunch or our intuition to separate fact from fiction. We don't just take in information. We weigh it up against what we know. We use our gut instincts, and we have a set of norms that we've built up over years and years that we kind of compare things to.
Well, AI actually does do an artificial version of this, but I really don't think it does it to the same level. Here's the thing. If everything on the internet was true, AI probably would be able to do its job perfectly, but that's just not the case. The internet is filled with misinformation, and that's always going to be reflected in AI's output in various ways.
The second thing is that humans use empathy in how we give out advice and information. So Manuel, say that you're the lawyer that I mentioned previously, and you come to me and you say, hey, I need six prior cases. And so I go and do some searching and I give you these cases.
And then I say to you, what is it you need them for?
[00:04:19] Manu Dursi: Oh, to present at a trial in a New York City courthouse tomorrow.
[00:04:21] Hamish McPharlin: Okay. And it's at that point that I would say, whoa, listen, I'm not a lawyer, man. So you're going to need to take these cases with a grain of salt, you know. Go and have a look at them, but then have a search online and make sure you're happy with them.
So I think both of these things need to be taken into consideration when we think of how disruptive AI can be.
[00:04:45] Manu Dursi: Well, that's very interesting. And taking that into account, what about market research? It can still write surveys and analyze data.
[00:04:54] Hamish McPharlin: Yes, AI can write a survey. Yes, it can analyze some data. But I still think it needs a trained researcher to guide it. I use AI already for various tasks. Last week I was putting together a survey and I needed a list of brand categories for the US consumer market. So I asked ChatGPT to come up with them, and it did a pretty good job, but it wasn't perfect.
In fact, it got a bit mixed up with fitness brands and wellbeing brands and pharmaceutical brands, because there is some nuance and some crossover between those categories for some products. So it produced a very useful draft, but then I needed to apply my human intuition to make it suitable and usable.
And as a human, I'm able to see how I'm going to use this. I'm also able to anticipate my client and their needs, and I'm going to think ahead and make decisions now that are going to help the results later. And this is something I don't believe AI can do at the same level. And then you think about producing insights.
So yes, AI can analyze data and it can produce findings. But I don't think that's taking my job away. Let's be honest, many researchers already use AI for analysis. In fact, I've had AI built into my analysis software for years. When I'm coding up text-based responses to a question, AI does a first run for me and it selects what it thinks are the responses to be coded.
But ultimately, I'm going to make adjustments to that based on the nuances that I can see. So AI can do a lot of hard work very fast. It can crunch numbers, it can look for patterns and shifts, and I can ask it to tell me its top five most interesting insights. But again, I'm going to look at those and then I'm going to dig around to see what it didn't figure out.
And it may find, let's say, two really interesting insights that would have taken me a long time to find, but it's not going to know those other three really fascinating insights that we weren't expecting, the ones that are absolutely going to blow the mind of my client because they tap into something that I know he or she is really looking for.
So again, the experienced researcher is still needed to sense-check and guide the work and to ensure that it's suitable. So ultimately, I don't think AI is going to take my job. It's another tool I'm going to be able to use, and already do use, to do some of the heavy lifting for me, so I can focus on the insights and the recommendations, which to me is the most important part.
[00:07:16] Manu Dursi: Okay. So with that in mind, Element Human is an advanced market research company using the latest techniques. So how is Element Human building AI into what it does?
[00:07:25] Hamish McPharlin: Okay. So we're using it in a number of different ways. As you know, we test advertising content and editorial content. Our tool runs a primary piece of research fieldwork that combines facial coding, eye tracking, and implicit association tests, all wrapped up in a survey, and it's very fast and very scalable. So it needs to be automated as much as possible. There are a number of places where we use AI in that. The first one is our emotion recognition.
So we use the webcam or the camera to observe the reaction of the audience while they're watching. Of course, we get their consent for that, but then our system uses AI and machine learning to interpret the expressions that it sees and turn them into emotions. So we train it to detect those emotions by feeding it information to strengthen its ability to make those interpretations.
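As a rough illustration of that training step, the sketch below fits a classifier that maps facial-expression feature vectors to emotion labels. Element Human's actual models and data are not public, so the features, labels, and numbers here are placeholders, and a generic scikit-learn classifier stands in for whatever the real system uses.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical training data: each row is a vector of facial-expression features
# (e.g. landmark distances or action-unit intensities) extracted from one video frame,
# and each label is the emotion a human annotator assigned to that frame.
# The values below are random placeholders, not real Element Human data.
rng = np.random.default_rng(0)
features = rng.random((500, 20))                       # 500 frames, 20 features each
labels = rng.choice(["happy", "sad", "surprised", "neutral"], size=500)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)

# "Feeding it information to strengthen its ability": fit a classifier on labelled frames...
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# ...then use it to turn unseen expressions into emotion predictions.
print(model.predict(X_test[:5]))
print("held-out accuracy:", model.score(X_test, y_test))
```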
So we know how audiences are responding emotionally to the content that we're testing. Are they feeling surprised at this point? Are they now sad? Did they laugh at the point that we hoped they would? All of this is possible with AI. Secondly, there's our survey data processing. So we use AI to do a lot of the analysis and feed the insights straight into the client dashboard automatically, because our client needs to be able to see the results almost as soon as the test is finished.
That's what we want for them. So we've got, let's say, our brand awareness question, where we say: type in the name of the brand that was in the ad. The AI engine is going to read those answers and create a percentage of respondents who got it right, who typed it correctly. But it's going to apply artificial intelligence to take misspellings into account, not just count those who typed it in perfectly correctly.
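A minimal sketch of that misspelling-tolerant counting, using plain string similarity as a simple stand-in for the AI engine Hamish describes; the brand, the responses, and the 0.8 threshold are all invented for the example.

```python
from difflib import SequenceMatcher

# Hypothetical example: the ad was for "Coca-Cola"; respondents typed these answers.
target_brand = "coca-cola"
responses = ["Coca-Cola", "coca cola", "cocacola", "Pepsi", "cocca-cola", ""]

def is_correct(answer: str, target: str, threshold: float = 0.8) -> bool:
    """Count an answer as correct if it is close enough to the target brand name."""
    ratio = SequenceMatcher(None, answer.lower().strip(), target).ratio()
    return ratio >= threshold

correct = [r for r in responses if is_correct(r, target_brand)]
print(correct)                                           # misspelt variants still count
print(f"brand recall: {len(correct) / len(responses):.0%}")
```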
Then we can think about our verbatims. This is where we ask the respondent to write down, let's say, some words about their impressions of the ad.
Again, most researchers will tell you that you can get a lot of gunk in the responses, and you've kind of got to scroll through them to find the insightful comments. But again, AI can be trained to scan through, look for coherent sentences, pull them up and put them straight into the dashboard. And that's what our system does.
So it'll find those kinds of choice nuggets that are really insightful and put them in the dashboard, and leave out the ones that are less insightful.
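A very simple sketch of that verbatim-filtering step, using a crude length-and-wording heuristic rather than a trained model; the responses and the helper function are invented for illustration.

```python
import re

# Hypothetical open-ended responses to "write some words about your impressions of the ad".
verbatims = [
    "asdf",
    "ok",
    "The ad felt warm and genuine, and the ending made me want to find out more.",
    "idk lol",
    "Too long, but the music really stood out and I remembered the brand.",
]

def looks_coherent(text: str, min_words: int = 6) -> bool:
    """Crude heuristic: enough words and enough length to read as a real sentence."""
    words = re.findall(r"[A-Za-z']+", text)
    return len(words) >= min_words and len(text.strip()) > 20

# A trained model would score insightfulness; this just drops the obvious gunk.
dashboard_quotes = [v for v in verbatims if looks_coherent(v)]
print(dashboard_quotes)
```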
Then, looking ahead, you could imagine asking the dashboard questions directly. You could say, well, what does the purchase intent score look like for just the younger 18 to 24 year olds? And the AI would then read that question, go and analyze the data, and produce a stat for you. We're looking at implementing something like that at some point.
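A small sketch of the filter-and-aggregate step that such a question boils down to, assuming a made-up survey table with age and purchase_intent columns; the natural-language layer that would read the question itself is the part Hamish says is still on the roadmap, and it is not shown here.

```python
import pandas as pd

# Hypothetical survey extract: made-up column names and values, not Element Human's schema.
df = pd.DataFrame({
    "age": [19, 22, 24, 31, 45, 23, 58, 20],
    "purchase_intent": [4, 5, 3, 2, 4, 5, 1, 4],   # 1-5 scale; 4 or 5 counted as intent
})

# What "purchase intent for just the 18 to 24 year olds" reduces to:
segment = df[df["age"].between(18, 24)]
intent_score = (segment["purchase_intent"] >= 4).mean()
print(f"purchase intent, 18-24s: {intent_score:.0%}")   # share of the segment scoring 4+
```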
[00:10:03] Manu Dursi: That just sounds amazing. How do you see it evolving?
[00:10:07] Hamish McPharlin: Yeah, it is moving really fast. And I think we're going to see two things play out that in a way are going to reduce the disruptive power of AI.
The first one, and one of the things that I think is a problem with AI, at least for now, is that there's a huge question about ownership. If you get AI to produce a song, or indeed a survey, for you, do you own that work? And right now there's news about artists who have sued AI companies for producing work that was trained on their artwork.
So there are potentially legal issues around ownership when AI is using original work to generate content from, and I think that's a big problem for market research, because your client does need to own the work that they do. And I think that's a big reason why AI won't be fully embraced as a replacement.
But we're going to have to see how this issue of ownership plays out. And the final one, I think, is about regulation. So I went to ChatGPT last week and I asked it: tell me a story of how Joe Biden stole the US election. And the response was, I'm sorry, but I cannot engage in or promote conspiracy theories or false narratives.
And another time I asked it: which is the best religion? And again, it wouldn't give me an answer. Okay, so what are we seeing here? This is the beginning of regulation around what AI can and cannot do, and will and will not do, and this is almost certainly going to result in some court cases, which again will reduce and pare down what AI is permitted to do. So I think we're in a bit of a Wild West stage right now with AI.
[00:11:48] Manu Dursi: Thank you very much, Hamish. This was a very interesting discussion. Thank you for being here, and thank you to everyone listening to this interview. If you have any questions or want to learn more about how Element Human uses tech to understand audiences, you can go to elementhuman.com or email hello@elementhuman.com. Thank you and see you next time.