Transcript for:
Enhancing Design Through User Feedback

[Music] What is up, everybody? Wow, a lot of people. This is a humongous crowd out here. This is fantastic. It's great to be here at Config, the largest, most premier design conference in the world. I am Mike McDow, as Jenny said. I'm a principal solution marketing manager at UserTesting.

So, let's get a show of hands really quickly. How many people here actually know what UserTesting is? Okay, that's almost unanimous. I feel bad for the two people that didn't raise their hands. Okay, we'll do one more. How many people actually use UserTesting today as part of their jobs? Okay, not quite as many. All right, one more. How many people are using UserTesting today to test Figma prototypes? Okay, that's probably about the same amount of people. The good thing, though, is you're all here because that's what you want to be doing: you want to get more from user testing, and you want smarter designs driven by faster feedback from real people.

So we're going to do one more question, and this is a shout-it-out type question. What makes an experience exceptional? Just shout it out. I've got four kids, come on, yell at me. Nobody? No one has any ideas what makes an experience exceptional? Come on. Right here, what was it? Engagement. Engagement, okay. Anything else? Ease of use. Intuitiveness. All right, we're going to play a little match game. I've got three answers there, and someone was raising their hand on the side — I apologize I didn't get to you. We're going to see if my answers match up with your answers. So: service, that's one possibility. Then we've got luxury; luxury is important. We've got speed; depending on what you're doing, speed is important. Freshness; if you're in the supermarket, you definitely want fresh. And comfort. How many people flew here? Raise your hands if you flew here. We know our airline friends: if the seat's not comfortable, if you don't have space, that is not going to be an exceptional experience. I did have a good experience on the way here, and it was in economy, though I won't tell you who I flew. I was in travel for over 20 years, so I can always have fun with my airline friends.

What this all means, essentially, is that what makes an experience exceptional is completely subjective. I said I worked in travel: I worked at Hertz for 20 years, and I basically built the original website back in the '90s. You have customers on opposite ends of the spectrum defining exceptional. On the one hand, you might have somebody who wants to rent a Mercedes. They want low mileage, heated seats, all the bells and whistles, and they don't really care about the price. To them, that's an exceptional experience. But at the same exact time, you have another person — he looks like a family man right here in the front, and he's looking both ways now, very suspect. Let's say he's taking his family on vacation, and he just cares about having a reliable car that doesn't smell bad, at a reasonable price, from a trustworthy company, that's not going to break down. Both of them have a different definition of exceptional, but it's all with the same brand. So essentially it's all of those things on the slide, all the things you yelled out, and none of them at the same time, right?
Every individual is going to take two or three attributes that make exceptional mean something to them. And unfortunately, as designers, you have to design for all of them at the same time. That is a unique challenge, and I've certainly been there in my career, so I empathize.

When you don't really know how to balance all of these people, you end up with true misunderstandings of customer needs. There's not a lot of information about what's going on, because most companies lack a feedback loop completely. I won't ask you to raise your hands on this one. Some of you are probably saying, "No, Mike, we actually get feedback from the customer care team, or the chat, or they tell us what someone called about on the phone." But you have no context. You don't know why somebody complained about something. You didn't watch the system break in front of their eyes. So you don't know.

Now, other people will have a feedback loop that is way too late. The first person who sees it can raise their hand — it's intentional. I will fall on my sword and say that back in the day, we used to go behind a one-way mirror and test fully developed applications about twice a year. And then we would get down on our knees and pray that none of those five to ten customers pointed out anything disastrous in the thing we were supposed to launch in one to two weeks. It was way too late in the process. Does anyone see what I'm talking about? Does anyone see the thing on the slide here? No? Look at the word "feedback" real quick. Again: too late in the process. I'm up here on stage; if I didn't know that was there, that would be a real big problem, right?

All of this — the lack of a feedback loop, feedback that comes too late — leads to internal misalignment. I'm sure we've all been in that situation: in a conference room, in some kind of knockdown, drag-out brawl of a meeting, arguing about not only what should be done, but what it should look like and how it should work. (Bless you — right in the front row, everybody. I don't want to miss anything.) With all the different definitions of exceptional we've talked about, trying to figure out whose will win basically creates a lot of problems.

Some of you are also saying, "No, we have data — more than just our phone logs and our chat logs. We've got analytics." I managed data and analytics for 25 years, and one of the things about analytics is that if it's in your analytics, it already happened. There's nothing you can do about it. If you see fallout, abandonment, low purchase rates, high bounce rates — all of that already happened. You lost those customers. It is too late in the process. It is reactive.

Surveys. How many people run surveys as part of their feedback? Right. Surveys are great quantitative feedback, but they're impersonal. What I mean by that is you don't know why people are giving the responses they're giving. You might ask, "On a scale of 1 to 5, how do you feel about this?" and people say, "Oh, we averaged a 4.2." You're thinking, that's amazing — 4.2, yay, we can go home early today. But the reality is that if you added a qualitative layer, the people who gave you threes and fours would be telling you why they didn't give you a five. And more often than not, people scoring things threes and fours are giving negative feedback when they do explain. That's why those car salespeople always ask you to give them fives — anything less counts as a failure.
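To make that concrete, here's a minimal sketch (the numbers are illustrative, not from the talk) of how a healthy-looking 4.2 average can hide the fact that most respondents held something back:

```python
# Hypothetical 1-5 survey scores -- illustrative numbers only.
scores = [5, 5, 5, 4, 4, 4, 4, 4, 4, 3]

average = sum(scores) / len(scores)            # 4.2 -- looks great on a dashboard
not_fives = sum(1 for s in scores if s < 5)    # 7 of 10 respondents scored below 5

print(f"Average score: {average:.1f}")
print(f"Scores below 5: {not_fives}/{len(scores)}")
# Without a qualitative follow-up ("Why didn't you give us a 5?"),
# you never hear what those threes and fours would have told you.
```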
And then focus groups. Focus groups are great too, but they don't scale. They take a long time to set up, they're expensive, and you end up with — God forbid — someone like me, or maybe some of you in the audience, who likes to talk a lot. So you get groupthink coming out of a focus group, when we just said that what makes an experience exceptional is an individual, subjective view. We want individual feedback so we can solve these unique problems we have as designers.

So what do we want to do? I promise the slideware is going to end soon. We need to convert this into a proactive, personal, scalable approach to crafting exceptional customer experiences, through humans, AI, and integration. Humans: real people who are willing to share their thoughts and feelings and ready to answer your questions in mere minutes. AI: advanced analytics that transform that raw feedback into actionable insights that are immediately available and useful to everyone. And integration: embedding those insights directly and seamlessly into the tools you already use.

Why is "crafting exceptional customer experiences" in blue? Because at UserTesting, we're on a mission to enable organizations to craft exceptional customer experiences. How do we do that? By injecting the voice of the customer into everything you do. Show of hands again: how many people know what their company's mission statement is, and does it have "voice of the customer" in it? Okay. I think more people don't know their company's mission statement than know whether voice of the customer is in it. But a lot of companies do have it — for a while it was the phrase du jour to put in. And the reality is that unless you're actually watching customers on a regular basis, listening to them talk about your experiences as they go through them, you are not really voice-of-the-customer driven.

So why UserTesting? I mean, you've all heard of us, so that was really great. We're a recognized leader in the space: the Forrester Wave named us the number one experience research platform, G2 just gave us a bunch of awards, et cetera. It's all there. If you're any one of these customers on the bottom — that's just a sampling — and you're here in the audience, we thank you very much for your partnership.

But let's be done with this, right? How does UserTesting help drive efficiency in design? Now I'm going to show you. We're going to move over to the future vision of UserTesting and Figma. Some of this is available, some of it is coming, and it's actively being worked on. I think you're really going to like what I've got to show you.

So we're going to switch over to a live demonstration. This is Jane, everybody. Jane is a UX designer at Threadline, a fictitious modern housewares company. They sell things like sheets and linens and other kinds of household goods. Jane is on a mission to improve and optimize the conversion rate for her website, and the product details page really stands out as something she wants to improve. She wants to make a new version of it.
Right now, though, there's a lot of internal conflict about what to do and what to change — a lot of different opinions, a lot of different voices. So what she's done is create three unique versions of this product details page, and she wants to get feedback on them with UserTesting. She jumps into Figma with her three concepts laid out: concept A, B, and C. Now, quick question: who actually likes having to leave Figma to go run studies? Literally no one. That is unanimous. If any of my leaders are in here, I hope you saw that. The good thing is she doesn't have to anymore — this is going to be integrated right into Figma.

So she comes over here, clicks the little option, and picks the new UserTesting design insights widget. She's got her goal here: she can type in whatever question she wants answered, or pick from a list of popular goals, and then choose which assets she wants feedback on. She's got all three concepts in there to test. She types her question into the box — "Which concept helps customers feel most confident buying from Threadline?" — and picks "compare multiple options." She hits generate.

Now it's generated a study for her. You can see it says "tasks and questions generated," and she's got a test plan embedded right in here; she can click "view all" to see all of it. She's got an audience selected from a list of predefined personas and audience profiles, so she's going to test with Threadline customers. The system is reminding her that this is a secure prototype, so we're going to need a password for it. But instead of having to share the password with the test participants — "oh, the password for the prototype is this, just type it in" — it's embedded directly into the system. UserTesting will automatically use the password, and it will be securely accessed during the test Jane is running. No more sharing of passwords. She's happy with this, so she goes ahead and launches her study. Yay, study's launched.

Now, I'm sure some of you are getting hangry out there. So is Jane. She wants to go off and get herself a bite to eat, and she's pretty confident that when she comes back, the study will be done. How many people have ever run a user testing study that was done in less than one hour? Okay, a handful. I'll be honest: if my studies are not almost complete within an hour, I start to freak out and think I did something wrong — because yes, even people at UserTesting sometimes make studies that are a little wonky and don't fill right, like when you double up on screener requirements.

She comes back from lunch and boom, test results are ready. Prototype comparison results are there, so she clicks "view results." And what does she have right here? A findings summary generated with AI that tells her immediately that concept C gave people the most confidence when purchasing, thanks to the customer review section and the details about the product. However, several participants mentioned they would like to see more information about shipping and returns before making a purchase.

Now, you've got a bunch of options here. You've got details, and you've got the QX score. How many people are using QX score at UserTesting right now? Her hand went up in half a second back there. QX score is the Quality of Experience score — a proprietary UserTesting metric that combines both attitudinal and behavioral outcomes from a study. Behavioral: did they complete a task successfully in the prototype? Attitudinal: how do they feel about it? Do they like the design? Would they be willing to share it with other people? Those sorts of things.
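The exact QX formula is proprietary and wasn't shown in the talk, but purely as an illustration of the concept — folding behavioral outcomes (task success) and attitudinal ones (how participants feel) into a single 0-100 number — a hypothetical blend might look something like this:

```python
def blended_experience_score(task_success_rate, attitude_ratings, w_behavioral=0.5):
    """Hypothetical 0-100 blend of behavioral and attitudinal signals.

    This is NOT the real QX formula (that is proprietary to UserTesting);
    it only illustrates the idea of combining the two kinds of outcomes.

    task_success_rate: fraction of participants who completed the task (0.0-1.0)
    attitude_ratings:  per-participant 1-5 ratings (design appeal,
                       willingness to recommend, and so on)
    """
    behavioral = task_success_rate * 100                        # scale to 0-100
    avg_attitude = sum(attitude_ratings) / len(attitude_ratings)
    attitudinal = (avg_attitude - 1) / 4 * 100                  # rescale 1-5 to 0-100
    return w_behavioral * behavioral + (1 - w_behavioral) * attitudinal

# Illustrative only: 70% task success and mostly-positive attitudes
print(round(blended_experience_score(0.70, [4, 5, 4, 3, 5])))  # 75
```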
So we can see here: overall rating 77%, and that wins. We've got different breakdowns for task success, confidence, and preference, and we've got feedback on the individual items. Now, instead of just looking at the overall winner, we can come over here and look at the feedback on the individual concepts. A lot of people run A/B testing, and I think everybody knows A/B testing is fantastic — but you never know if something good was in the losing option, right? We just go, "Yeah, that one. Get rid of the losers, bring in the winner," and we're done with it. But a lot of times there are bad elements even in the winning version, and winning elements in the ones that lost. So we can see the individual feedback on, say, concept A: participants did not feel confident making a purchase, and call-to-action visibility issues undermined purchase success. Those are things she can bring in. She sees heat maps of everything here, and she can come down and look at all these various options.

So she's got her feedback, and she knows what she wants to do. She's going to jump into Figma Slides now so she can share this with the marketing team. Right here: a very simple concept comparison, with the QX score — that 77 is going to be important for the winning version. We can see the confidence rate and the preferences, and she's even embedded real clips of customers right into the slides, so anybody looking at them can watch video of people going through the experience.

And the marketing team is going to say, "Well, this is great, Jane, but why do I care about these people? How do I know these people you got feedback from are who they say they are?" Well, she can now tell marketing that UserTesting uses the new LinkedIn profile verification system — and this is live now — so she's confident that everybody who says "I'm a corporate buyer" or "I'm that kind of person" is verified through LinkedIn. That gives a lot more confidence in what was already the highest-quality first-party panel out there; it's even better now with LinkedIn verification.

So she's presented to marketing, and now she's ready to start iterating. What's she going to do next? She jumps straight into FigJam and starts collaborating on the enhanced version of concept C. You can see the notes here: have the shipping cost closer to where the price is at the top, make the actual cost darker, highlight the shipping cost, and so on — all kinds of things to collaborate on with the team for the new design. And she gets feedback: "Love where this is headed. I think adding the shipping information closer to the cost makes sense." This is perfect — everyone is collaborating right in the tools they already use. Christina actually mentioned something earlier about change: people don't want to change, and this is exactly it. We don't want you to have to change.
We want you to be able to work where you work and get the feedback right in the tools you already use, instead of having to leave for other tools. So she's collaborating now, and she jumps over into Figma with the new, final version of concept C: make the gallery smaller, highlight the shipping costs, maybe try adding an icon to catch the user's attention, or perhaps use the primary color for the buttons. She's got this new version worked up. And what's she going to do now? She runs the test again, and then she's ready to present it officially to the marketing team.

You can see that the 77 is now an 85, because even though concept C won the initial test, there were still things that could be improved — and the qualitative feedback surfaced them. So now she's got an 85 out of 100 on the QX score. Task success and usability are both great, trust is great, appearance is good; loyalty is a little dicey. So there's still room to improve these metrics if Jane wants to take another crack at it. Very simple: she's on a Zoom, she presents this to the whole marketing team, and now she's ready to roll out the brand-new version of the product details page.

So how did we get here? Remember, she started with an idea. Say she had some web analytics showing fallout — not a high click rate on the add-to-cart button on the product details page. She went into Figma with three prototype concepts and tested them using the embedded UserTesting widget. Then she went back into FigJam to take the results and collaborate on an improved concept C, then back into Figma — massage, massage, massage — and ran another test on it. Testing is iterative, and she ends up with a far higher score than she had initially. That again shows the value of iterative user testing over pure A/B testing or a single one-off test.

So where do I want you to go after this session, now that I've shown you the whole demo? Let's go back to slides, please. If you want to see more of this, or ask questions of me or anyone else, you can go over to the UserTesting booth in the Maker Space. We can talk about these concepts and show you more. There's more that UserTesting released in April, in the last release of the Figma integration, that I didn't show here — if you want to see that, go over to the booth and check it out. Outside of that, I just want to say thank you so much. It's awesome that you packed the house. Thanks to all the tech people here — there are so many people running this who are really making it easy for us as speakers. Thank you, and maybe I'll see you over at the booth. [Music]