We’re very excited to share our Full Stack Leader Limited Series recorded at SXSW 2023. Our first episode is with Lon Taylor, the founder of First Insights.
As the Founder & Principal UX Researcher of First Insights, Lon is focused on helping clients understand their customers, optimize user experience research efforts, and answer critical product development or digital strategy questions.
Based in South Florida with roots in NYC and Chicago, he is leading a team that excels at qualitative research, usability testing, ethnographic studies, surveys, and UX consulting projects. With expertise in the pharma, packaged goods, banking, insurance, travel, retail, publishing, eCommerce, healthcare, and technology industries, Lon has moderated one-on-one interviews and focus groups using a wide variety of approaches.
An active member of the digital transformation community, he currently serves as a local leader of IxDA Chicago, planning monthly events. Some of his thoughts about the industry have been published on Medium.com, MarketingProfs.com, and in Quirk’s Market Research Magazine. Lon has also taught at General Assembly, trained internal teams, and mentored at SXSW conferences.
We hope you enjoy the episode. You can find even more Full Stack Leader episodes here.
Ryan: All right, and welcome back to this week’s episode of the Full Stack Leader Podcast. Again, we are here at South by Southwest 2023, and today my guest is Lon Taylor. He’s the founder and principal researcher at First Insights. We’re excited to have you here today, Lon.

Lon: Well, thanks, Ryan, for having me.
I really appreciate the opportunity to chat with you. Just to tell you a little bit about First Insights: we are a user experience research company focusing on things such as usability testing, for an interface, a website, or a mobile app, and lately we’ve gotten a lot more into testing products and medical devices. We also do lots of qualitative research for traditional, let’s say consumer-oriented companies: focus groups, ethnographic research, all under the mostly qualitative banner of market research.

Ryan: And how do you do that currently? What is a standard process for that?
Lon: Let’s say that someone has a website that’s in development. Typically we’ll come in at maybe the mid stage of development; that’s about the best time, before you launch. So we organize usability studies by taking care of the three or four aspects of the study.

The first thing is to make sure that you’re recruiting the exact target audience, the intended user for the website, mobile app, device, whatever it may be. So we carefully craft what’s called a screener, and that screener has maybe 20 questions to make sure that you’re recruiting the right people. We partner with outside vendors that are leaders in the industry to do that, vendors that have literally millions of people in their databases. So that’s the first step.

The second step is really getting to understand what questions our clients have. They may be looking to understand a variety of things around the brand, in addition to doing usability testing. So we’ll craft a moderator’s guide that has both task-based questions and maybe questions about the value proposition, where they may see that company providing services or products in the future.

Then we actually sit people down, one-on-one, usually via Zoom. We’re talking to people, and there’s no magic involved, not that much software involved. It’s just really talking to people, going through our moderator’s guide, and learning. Usually we interview maybe 12 to 20 people and then write up a report that really helps define what we learned.

Ryan: And how does that play out in level of value compared to bigger quantitative studies, where you’re looking at lots and lots of users but not really speaking to them?
Lon: Yeah, that’s a great question, and the response is that the work that we do on the qualitative front complements quantitative research. We’ll never discourage someone from doing quantitative research, but the example that I like to use is that a survey can tell you whether users or customers like red or green; what we do can tell you why they like red or green.

Ryan: The context behind red.

Lon: Correct. We’re adding context to the whole story so that stakeholders can make a smarter, more informed decision.
Ryan: That makes sense. So as we’re talking about some of the directions of the future of technology, looking forward to 2030, take into consideration how so many of these tools and industries are going to be affected by things like AI or, in some cases, extended reality, some of the bigger things that we’re working on right now. I really think the quantitative experience makes a lot of sense when you think about the automation front. But when I think about the qualitative research and interviewing process, how do you perceive that beginning to change over the course of time?

Lon: Yeah, that’s a great question.
I’ve attended several sessions here that are focused on AI and machine learning, and on how that’s changing the whole decision-making process, informing companies on what to do next. Ten years from now, I’m not a hundred percent sure where it’s heading. But my best guess is that even with AI and machine learning, it only gives you a percentage of the big picture, and I still feel that you’re going to have to sit down and talk to customers to add value to what you’re doing.

I was in a discussion yesterday with a woman who heads up the State of Texas effort to figure out how to help limit and solve the opioid crisis. One of the things that she talked about was how AI and information can be misleading; you can have a lot of prejudice built in. The interesting story is that when people have an opioid overdose, they try to avoid the police, number one because there’s liability. What they do instead is get in touch with EMS first, and when EMS shows up, they write things in code so as not to include information that might otherwise be used against the person who called. So the end result is that this woman, who is a leader in the medical space and a medical doctor, if she were to put all this information into some type of AI scenario, it’s not going to yield any insight, because the input is so different from what you would expect. You need context.

So she’d be well served to help the situation by sending out a team of psychologists and a team of human factors people to get more context, and to see if there are patterns in the language being used in the reporting, because again, it could be misleading, and you can wind up going down an unintended rabbit hole if you don’t have the context. So I think there’s still going to be that balance between data and really understanding it well.
Ryan: One of the interesting things I’ve been seeing, kind of riffing on this a little bit, is substantial growth in the querying pattern of things like chat programs. Not just how we query them, but also how they can query back. And to me this seems really relevant to your business, because a lot of the qualitative research and connection is about delving into things and asking the right questions. Do you think the systems are going to get good enough that they can perceive the direction we’re trying to go and query into humans, versus the opposite?
Lon: I do. I’m thinking about scenarios that are starting to come into use. We work with a big cosmetics company that sells both directly and through giant retailers. One of the things that they’ve started to employ is trying to figure out the right messaging when you leave a shopping cart: what is the incentive to get you to come back to the cart or to finalize your purchase? People are trying different messages, and sometimes you get messages that are totally unrelated to what you may have left in your shopping cart, or it just seems disjointed.

I think that’s where something like ChatGPT can come into play, where the technology knows a little bit more, is able to know your profile, and can ask you the right question so that you can complete a transaction. And I think the smarter the technology becomes, and the more information it has to craft a response, those are going to be game-changing scenarios that people will start to use.
Ryan: I guess the thing that’s interesting to me is, can it ever get to the level of curiosity that’s required in the type of business that you do? There’s a part of me that says no, there’s no way, because that’s a completely human trait. But I don’t know. I can’t say for sure. It’s interesting.
Lon: Yeah, it is. I mean, I can see that, based on responses, something can be programmed to be smart enough to formulate a good follow-up question.

Ryan: So it can look curious.

Lon: It can seem like it’s curious. And whether or not it’s actually “thinking,” we don’t really know, but it can be programmed in a way that if it hears certain keywords, it can ask a really good follow-up. Is it going to replace an actual moderator? I hope not, because I’ll be out of a job. But I think that, just like anything else that’s a piece of programming, what you put in is what you get out. If you take the time to program something well, what you get out really works well and, again, helps the big picture for customer experience and user experience, creates a world in which things move faster and more effectively, and makes it more fun to interact with companies, brands, and products.
Ryan: Do you think there are aspects of the interviewing process that are innately human and allow us to feel comfortable enough to actually come forth and expose ourselves? And if we’re doing a lot of this through robotics in the future, do we even feel comfortable enough talking to them, or do you think there are veils that go on?

Lon: Yeah. At the end of the day, what we’ve seen is COVID. COVID caused a shift in what I do and what my colleagues do.
A hundred percent of our research was in person before COVID. So the interaction between two people sitting in front of a computer or a mobile phone and having a conversation in the way that we do it was what we were used to. Even when people come in and sit in a room, there’s a one-way mirror behind them. They know there are people watching, and people can get a little nervous. It’s our job as moderators to make them feel comfortable.

The shift to Zoom and doing everything remotely created more opportunities to interview people in a broader geographic area, and people certainly feel comfortable sitting in their bedroom or living room doing work they’ve been used to. You miss some of the interaction with the person, but it became more efficient.

Now, the question you asked is that next-level question: if there is no human there, will people feel comfortable? I think they will if they know upfront that there’s no human there. So it’s almost like informed consent. As long as they know that a computer’s asking the questions, I think they’ll be fine with it, and we just have to see where the technology takes us in the next five to ten years, to see if that can actually be done and if the next question in line actually makes sense to the other person.

I think it’s a matter of trust. Once a computer-generated moderator, let’s say, asks a totally unrelated question, the participant’s going to lose trust for the rest of the interview, because they’re going to realize, okay, whoever programmed this didn’t know what they were doing, or it’s not smart enough to ask a relevant next question. They lose trust, and the rest of the interview can go downhill.
Ryan: Yeah, it’s interesting. And I think, combine that with assessments being done in a way that may or may not be contextually correct, and you can end up with probably a lot of variation in the ultimate perspective on the data you’re collecting, which maybe over the course of time can be honed, right? Maybe things shift a little bit, but it does seem like you have to have that combination of quality data input, the curiosity to pull it out, and effective assessment of the data across whatever size audience.

Lon: Yeah. There’s a lot going on. And you know, we do a little bit of quantitative research and surveys via tools like Qualtrics.
And the area where they’re incorporating AI, where the technology can come in, is where, let’s say, you’re running a survey with 2,000 people, you have a few open-ended questions, and people are typing full sentences. Taking the time to go through that manually and read 2,000 open-ended responses is really hard. It’s a lot. So the survey companies are starting to incorporate AI into that, where they’re telling you how many times a particular word was used and trying to put things in context. I’ve seen a few demos; it’s not there yet, but it’s getting there, and it speeds up the reporting and delivery process.

Ryan: Yeah, which is great. Even the transcription stuff that we’re using in the podcasting world, for instance, is significantly better. You can more quickly see text and work with it, so that’s really effective. I think those kinds of things in the very short term are going to keep getting better and better.
I think in the long run, though, that question of contextual understanding, between what a human means and what they’re actually wanting, trying to pull out those unspoken qualities of communication, seems pretty important too.

Lon: Yeah, it’s always a balance, and ultimately there’s probably not a substitute for a human being, and things can go wrong. Any of your listeners who are Star Trek fans have seen episodes where things go dramatically wrong because everybody’s relying a hundred percent on a piece of technology.

Ryan: Too much consideration given to the technology.

Lon: Yeah. Another great example is Black Mirror, where things go drastically wrong based on everybody trusting the technology to make really important decisions that you need a human to make.
Ryan: And this is a separate topic for me in general, one that I’m beginning to explore more deeply: trusting the humans to properly inform the technology. Everybody has different motivations and reasons behind how they’re trying to do things, so those will all come into play over the next little bit. But this was an incredibly insightful conversation. Thanks for sitting down and running through some of your thoughts on this. And if people want to get a lot of perspective and insight on how their products are being received, and how they’re ultimately going to be used in the marketplace, please reach out to Lon and talk to him. His team can help out.

Lon: Thank you so much, Ryan, for having me this afternoon.

Ryan: Appreciate it. Great to have you here.
Introduction to First Insights – a user experience research company focusing on usability testing for websites, mobile apps, and medical devices, and qualitative research for traditional consumer-oriented companies.
Process of usability testing – recruitment of the target audience using a screener, crafting a moderator’s guide, conducting one-on-one interviews via Zoom, and writing a report to provide insights to stakeholders.
The value of qualitative research – it complements quantitative research by providing context and adding value to stakeholders’ decision-making process.
The future of qualitative research – the impact of AI and machine learning on the decision-making process is uncertain, but Lon believes that talking to customers is still essential to add value.
Limitations of AI and information – AI and information can be misleading and can have prejudices, as demonstrated by the discussion on the opioid crisis in Texas.
E-commerce strategies – strategies that e-commerce companies use to incentivize customers to complete their purchases, specifically messaging when customers leave their shopping carts.
Chatbot technology – the potential for chatbot technology, specifically ChatGPT, to aid in completing transactions by asking the right questions and knowing the customer’s profile.
Customer experience