Tag Archives: user research

Case Study: Getting consecutive +15% winning tests for software vendor, Frontline Solvers


How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar?

  • During your research sessions, your participants don’t understand what they have been asked to do.
  • The phrasing of your questions gives away the answer or biases your results.
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided.
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results.

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project.
Just like proper experiment design, investing time to critique, test, and iteratively improve your research design before the execution phase ensures that your user research runs smoothly and dramatically improves the outputs from your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings
User Research Process at WiderFunnel

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start with the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, it is the result of working with a large group of stakeholders across different departments, each trying to push their own unique agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret; thus limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from our initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment
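
To make this iterate-until-clean loop concrete, here is a minimal TypeScript sketch of how pilot rounds and their findings could be recorded against those three criteria. The types and names are hypothetical, not an actual WiderFunnel tool:

```typescript
// Hypothetical model of the pilot-testing loop described above.

type PilotCriterion = "instruction-clarity" | "tasks-and-questions" | "timing";

interface PilotFinding {
  criterion: PilotCriterion;
  issue: string; // e.g. "participants unsure what the page is for"
  fix: string;   // the design change made before the next round
}

interface PilotRound {
  participants: number;     // typically 1 to 2 per round
  findings: PilotFinding[]; // an empty array means the round ran clean
}

// The study is ready once a round surfaces no new design "bugs".
function isDesignReady(rounds: PilotRound[]): boolean {
  const last = rounds[rounds.length - 1];
  return last !== undefined && last.findings.length === 0;
}

const rounds: PilotRound[] = [
  {
    participants: 2,
    findings: [
      { criterion: "instruction-clarity", issue: "instructions lacked context", fix: "rewrite intro" },
      { criterion: "timing", issue: "5s exposure too short", fix: "extend exposure time" },
    ],
  },
  { participants: 2, findings: [] }, // second round ran clean
];

console.log(isDesignReady(rounds)); // true -> proceed to the main study
```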

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye-tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined the research tasks and workflow, reviewed the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order to not bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that the questions complemented each other, and keep them focused on achieving the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option on the Sticky platform. To overcome this, we got creative and used some free tools to test the research design.

We chose to use Keynote presentation (timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the participant pilot testing. Using these tools we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately exposed flaws in the research design, including confusion over the test instructions and issues with the timing of each task.

In this case, our initial instructions did not give participants enough context about what they were looking at, resulting in confusion about what they were actually supposed to do. Additionally, we had assumed that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content-rich, and 5 seconds did not give participants enough time to view everything on it.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a set function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.


The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!


How Copywriting Can Benefit From User Research


I’ve often heard there are four stages along the road to competence: unconscious incompetence, conscious incompetence, conscious competence, and unconscious competence. Most of us begin our careers “unconsciously incompetent,” or unaware of how much we don’t know.

User Research In Copywriting

I’ll never forget the first time I moved from unconscious to conscious incompetence. I was working as an office manager at a small software company, and the director of sales and marketing, impressed by my writing skills, asked me to throw together a press release welcoming the new CEO.


How To Moderate Effectively In Usability Research


As UX professionals, we know the value of conducting usability research. But UX research initiatives — even when designed well — are not perfect. A lab study to test a website, for example, would never perfectly capture a user’s actual behavior in the wild. This is because, inevitably, the research protocol itself will influence the findings.

The Importance Of Moderating Effectively In Usability Research

A lab environment can never replicate the natural environment of the participant, and the mere presence of a research facilitator or moderator creates a dimension of artificiality that can thwart the research goals. Moderators must not only facilitate sessions in such a way that the research goals are achieved, but also balance two challenges that are constantly at odds with each other: keeping the participant within the scope of the study, while letting the participant stay in the driver’s seat so that the experience is as realistic as possible.


Designing For Smartwatches And Wearables To Enhance Real-Life Experience

Imagine two futures of mobile technology: in one, we are distracted away from our real-world experiences, increasingly focused on technology and missing out on what is going on around us; in the other, technology enhances our life experiences by providing a needed boost at just the right time.

The first reality is with us already. When was the last time you enjoyed a meal with friends without it being interrupted by people paying attention to their smartphones instead of you? How many times have you had to watch out for pedestrians who are walking with their faces buried in a device, oblivious to their surroundings?

The second reality could be our future – it just requires a different design approach. We have to shift our design focus from technology to the world around us. As smartwatches and wearables become more popular, we need to create experiences that are still engaging, but less distracting.

Lessons Learned From A Real-Life Project

We create a future of excessive distraction by treating our devices as small PCs. Cramming too much onto a small screen and demanding frequent attention from a device that is strapped to your body means you can’t get away from the constant buzzing and beeping right up against your skin. Long, immersive workflows that are easily handled on a larger device become unbearable on a device with less screen area and physical navigation space.

I noticed this on my first smartwatch project. By designing an application based on our experience with mobile phones, we accidentally created something intrusive, irritating and distracting. The inputs and workflows demanded so much attention that people had to stop moving in order to view notifications or interact with the device. Our biggest mistake was using the vibration motor for every notification: if you had a lot of notifications, your smartwatch would buzz constantly. You can’t get away from it, and people would actually get angry at the app.

How The Real World Inspired Our Best Approach

In a meeting, I noticed the lead developer glancing down at the smartwatch on his wrist from time to time. As he glanced down, he was still engaged in the conversation. I wasn’t distracted by his behavior. He had configured his smartwatch to only notify him if he got communications from his family, boss or other important people. Once in a while, he interacted with the device for a split second, and continued on with our conversation. Although he was distracted by the device, it didn’t demand his complete attention.

I was blown away at how different his experience was from my smartphone. If my phone buzzes in my pocket or my bag, it completely distracts me and I stop focusing on what is going on around me to attend to the device. I reach into my pocket, pull out the device, unlock the screen, then navigate to the message, decide if it’s important, and then put the device back. Now where were we? Even if I optimize my device settings to smooth some of this interaction out, it takes me much longer to perform the same task on my smartphone because of the different form factor.

This meeting transformed our approach to developing our app for the smartwatch. Instead of creating an immersive device experience that demanded the user’s attention, we decided to create something much more subtle. In fact, we moved away from focusing on application and web development experiences to focusing on application notifications.

Designing With A Different Focus In Mind

Instead of cramming everything we could think of onto these smaller devices, we aimed for a lightweight extension of our digital experience into the real world. You could get full control on a PC, but on the smartwatch we provided notifications, reminders and short summaries. If something was important, and it could be done easily on a smartwatch, we also provided minimal control over that digital experience. If you needed to do more, you could access the system on a smartphone or a PC. We had a theory that we could replicate about 60% of PC functionality on a smartphone, and another 20% of that on a smartwatch.

Each different kind of technology should provide a different window on our virtual data and services depending on their technical capabilities and what the user is doing. By providing just the right information, at just the right time, we can get back to focusing on the real world more quickly. We stopped trying to display, direct and control what our end users could do with an app, and relied on their brains and imaginations more. In fact, when we gave them more control, with information in context to help solve the problem they had right then and there, users seemed to appreciate that.

Design To Enhance Real-Life Experiences

After the initial excitement of buying a device wears off, you usually discover that apps really don’t solve the problems you have as you are on the move. When you talk to others about the device, you find it difficult to explain why you even own and use it other than as a geeky novelty.

Now, imagine an app that reminds you of your meeting location because it can tell you are on the wrong floor. Or one that tells you the daily specials when you walk into a coffee shop and also helps you pay. Imagine an app that alerts you to a safety brief as you head towards a work site, or another app that alerts you when you are getting lost in an unfamiliar city. These ideas may seem a bit far off, but they are the sorts of things smartwatches and similar small-screen devices could really help with. As Josh Clark says, these kinds of experiences have the potential to amplify our humanity [1].

How is this different from a smartphone? A smartphone demands your complete attention, which interrupts your real-world activities. If your smartwatch alerts you to a new text or email, you can casually glance at your wrist, process the information, and continue on with what you were doing. This is more subtle and familiar behavior borrowed from traditional wristwatches, so it is socially acceptable. In a meeting, constantly checking your smartphone is much more visible, disruptive, irritating and perceived as disrespectful. If you glance at your wrist once in a while, that is fine.

It’s important to remember that all of these devices interrupt our lives in some way. I analyze any interruption in our app designs to see if it has a positive effect, a potentially negative effect, or a neutral effect on what the user is doing at the time. You can actually do amazing things with a positive interruption. But you have to be ruthless about what features you implement. The Pebble smartwatch design guide talks about “tiny moments of awesome” that you experience as you are out in the real world. What will your device provide?

Keep The Human In Mind

Our first smartwatch app prototype was a disaster. It was hard to use, didn’t make proper use of the user interface, and when it was tested in the real world, with real-life scenarios, it was downright annoying. Under certain conditions, it would vibrate and buzz, light up the screen and grab your attention needlessly and constantly. People hated it. The development team was ready to dump the whole app and not support smartwatches at all because of the negative testing experience. It is one thing to have a mobile device buzz in your pocket or hand. It is a completely different thing to have something buzzing away that is attached to you and right up against your skin. People didn’t just get annoyed, they got really angry, really quickly – because you can’t escape easily.

Design For The Senses

I knew we had messed up, but I wasn’t sure exactly why. I talked to Douglas Hagedorn, the founder and CEO of Tactalis, a company developing a tactile computer interface for people who are sight-impaired. Doug said that it is incredibly important to understand that different parts of the body have different levels of sensitivity. A vibration against your leg in your trouser pocket might be a mild annoyance, but it could be incredibly irritating if the device vibrates the same way against your wrist. It could be completely unbearable if it is touching your neck (necklace wearable) or on your finger (ring wearable).

Doug also advised me to take more than one sense into account. He mentioned driving a car as an example. If all you do is provide a visual simulation of driving, it doesn’t feel correct to your body, because driving a car involves several senses at once. For touch, there is the sensation of sitting in a seat, with a hand on the steering wheel and a hand on the gear shifter, as well as pedals beneath your feet. There are also sensations of movement and sound. All of these together provide the experience of driving a car.

With a smartwatch or wearable, depending on only one sense won’t make the experience immersive and real. Doug advised using different notification features on the devices to signify different things. Design so that physical vibrations are used for one type of interaction and a screen glow for another. That way, the user experiences a blend of virtual cues similar to how they experience the real world.

Figure 1: The author checking a smartwatch notification while walking past a landmark. (Image credit: Elizabeth Kohl)

Understand Context

Because the devices are attached to us, they constantly move, and are looked at and interacted with at awkward angles. Users must be able to read whatever you put on the screen, and easily interact while moving. When moving, it is far more difficult to read and input into the screen. When sitting down, the device and your body are more stable and we can tolerate far more device interaction. Ask critically:

  • What are people going to be doing when using our app?
  • Where will they be?
  • What happens when the user is moving versus sitting down?

It’s critical to understand the device interactions: taps, gestures, voice control, physical buttons and dials.

Understand Emotions

Our emotions vary depending on experiences and contexts, which can be extremely intense and intimate, or bland and public. Our emotional state at a particular point in time has an enormous impact on what we expect from technology. If we are fearful or anxious and in a rush, we have far less patience for an awkward user experience or slow performance. If we are happy or energetic, we will have more patience with areas where the app experience might be weaker.

Since these devices are taken with us wherever we go, they are used in all sorts of conditions and situations. We have no control over people’s emotions so we need to be aware of the full range and make sure our app supports them. It’s also important to provide user control to turn off or mute notifications if they are inappropriate at that time. When people have no control over something that is bothering them, negative emotions can intensify quickly.

  • Spend time on user research and create personas to help you understand your target user.
  • Create impact stories for core features – a happy ending story, a sad ending story, and an unresolved story.
  • Also create storyboards (see Figure 2) to demonstrate the fusion of your virtual solution with the real world.
Figure 2: Demonstrating real-world interaction with an activity tracker using storyboards.5
Figure 2: Demonstrating real-world interaction with an activity tracker using storyboards. (Created with StoryBoardThat6) (View large version7)

We usually spend more time on these efforts than on the visual design because we can incorporate context, emotions, and error conditions early on. We can use these dimensions to analyze our features and remove those that don’t make sense once they meet the real world.

It is incredibly important to test away from the development lab, out of your building. It is vital to try things out in the real world because its conditions are very different from those of a development lab. For each scenario, also simulate different conditions that cause different reactions, and make them realistic:

  • Simulate stress by setting impossible timelines on a task using the device.
  • Simulate fear by threatening a loss if the task isn’t completed properly.
  • Simulate happiness by rewarding warmly.

Weather conditions have an effect as well. I am far less patient with application performance when it is cold or very hot, and my fingers don’t work as well on a touchscreen in either of those situations. As devices will be used in all weathers, with all kinds of emotions and time pressure, simulating these conditions when testing your designs is eye-opening.

Minimize Interruptions

When we do need to distract people, we should make the notifications high-quality. As we design workflows, screen designs and user interactions, we need to treat them as secondary to the real world so we can enhance what is going on around people rather than detracting from their day-to-day lives.

Try to create apps for notifications and lightweight remote control, focusing on an experience that relies on quick information gathering and making the odd adjustment on the fly. Users stop, read a message, interact easily and quickly, and then move on. They spend only seconds in the app at any given time, rather than minutes.

The frequency of notifications should be minimal so the device doesn’t constantly nag and irritate the wearer. Allow the wearer to configure the timing and types of notifications and to easily disable them when needed. During a client consultation it might be completely inappropriate to get notifications, whereas it might be fine while commuting home. Also give users the final say in how they are notified: a vibration plus a screen glow is fine in some contexts, but in others just a screen glow will suffice, since it won’t disturb others.
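
As an illustration, here is a minimal TypeScript sketch of the kind of wearer-controlled notification rules described above. All names and the rule set are hypothetical; a real app would use its smartwatch platform’s own notification APIs:

```typescript
// Illustrative sketch of user-controlled notification rules for a wearable app.

type Channel = "vibration" | "screen-glow";

interface NotificationPrefs {
  muted: boolean;                             // global kill switch
  quietHours: { start: number; end: number }; // e.g. 9-17 for client meetings
  allowedSenders: string[];                   // only "important people" get through
  channels: Channel[];                        // e.g. screen glow only, no buzzing
}

function shouldNotify(prefs: NotificationPrefs, sender: string, hour: number): Channel[] {
  if (prefs.muted) return [];
  const inQuietHours = prefs.quietHours.start <= hour && hour < prefs.quietHours.end;
  if (inQuietHours) return [];
  if (!prefs.allowedSenders.includes(sender)) return [];
  return prefs.channels; // notify, but only via the channels the wearer chose
}

const commuting: NotificationPrefs = {
  muted: false,
  quietHours: { start: 9, end: 17 },
  allowedSenders: ["boss", "family"],
  channels: ["screen-glow"], // no vibration: a glow is enough and disturbs no one
};

console.log(shouldNotify(commuting, "boss", 18));       // ["screen-glow"]
console.log(shouldNotify(commuting, "newsletter", 18)); // []
```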

Design Elegant And Minimalistic Visual Experiences

One of my favorite stories of minimalism in a portable device design is from the PalmPilot project. It’s said that the founder of Palm, Jeff Hawkins, walked around with a carved piece of wood that represented the PalmPilot prototype. Any new features had to be laid out physically on the block of wood, and if there wasn’t room on it, they had to decide what to do. Could the features be made smaller? If not, what other feature had to be cut from the design? They knew that every pixel counted. We need to be just as careful and demanding in our wearable app decisions.

Figure 3: Minimalist design with color on the Apple Watch. (Apple Watch template by Fabio Basile)

Since these devices have small screens or no screens, there is a limit to the information that can be displayed. Prioritize: show only the most important information needed at that moment. Work on summaries and synthesizing information to provide just enough. Use a newspaper headline rather than a paragraph.

Small Screens

Screens on wearables are very small and the resolutions can feel tiny. These devices also come in all shapes and (small) sizes. Beyond various rectangular combinations, some smartwatch and wearable screens are round. It’s important to design for the resolution of the device as well, and these can vary widely from device to device. Some current examples are: 128×128px, 144×168px, 220×176px, 272×340px, 312×390px, 320×290px, and 320×320px.

Screen resolutions on all devices are increasing, so this is something to keep on top of as new devices are released. If you are designing for different screen sizes, it is probably useful to focus on aspect ratios, since this can reduce your design efforts if different sizes share the same aspect ratio.
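
As a quick illustration of that advice, this TypeScript sketch groups the resolutions listed above by reduced aspect ratio; devices that share a ratio can often share one layout:

```typescript
// Group the wearable resolutions mentioned above by reduced aspect ratio.

const gcd = (a: number, b: number): number => (b === 0 ? a : gcd(b, a % b));

const resolutions: [number, number][] = [
  [128, 128], [144, 168], [220, 176], [272, 340],
  [312, 390], [320, 290], [320, 320],
];

const byRatio = new Map<string, string[]>();
for (const [w, h] of resolutions) {
  const d = gcd(w, h);
  const ratio = `${w / d}:${h / d}`;
  byRatio.set(ratio, [...(byRatio.get(ratio) ?? []), `${w}x${h}`]);
}

console.log(byRatio);
// 1:1   -> 128x128, 320x320
// 6:7   -> 144x168
// 5:4   -> 220x176
// 4:5   -> 272x340, 312x390  <- one portrait layout may cover both
// 32:29 -> 320x290
```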

When working on responsive websites, you may encounter resolutions as high as 2,880×1,800px on PC displays, down to 480×320px on a small smartphone. When we designed for wearables, we believed we could simply shrink the features and visual design further. This was a huge mistake, so we started over from scratch.

We decided to sketch our ideas on paper prior to building a prototype app. This helped tremendously because we were able to analyze designs and simulate user interactions before putting a lot of effort into coding. It was difficult to reach our app ambitions with such a tiny screen. A lot of features were cut, and it was painful at first, but we got the hang of it eventually.

No Screens

Many wearables have no screens at all, or they have a minimal screen that is reminiscent of an old LCD clock radio. Many devices are limited to UIs that only contain number shapes, a limited amount of text and little else. Other devices have no screen at all, relying on vibration motors and blinking lights to get people’s attention.

App engagement with no-screen devices occurs mostly in our brains, aside from the odd alert or alarm through a vibration or blinking light. When devices are synced, a corresponding larger screen offers more details. This multiscreen experience reinforces the story narrative while users are away from a screen and relying only on a wearable. It is more of a service-based approach than a standalone-app approach: user data is stored externally (in the cloud), and display, interaction and utility differ depending on the device. The strong narrative reinforced on higher-fidelity devices helps it persist across device types. This different view of user-generated data also encourages self-discipline, a sense of completion or accomplishment, competition, and a whole host of feelings and emotions that exist outside of the actual technology experience.

Design Aesthetics

Design aesthetics are incredibly important because wearables extend a user’s personal image. Anything that we put on the screen should also be visually pleasing because it will be seen not only by the wearer but those around them. Minimalist designs are therefore ideal for smartwatches and wearables. Make good use of formatting and the limited whitespace. Use large fonts and objects that can be seen and interacted with while on the move. If you can, use a bit of color to grab attention and create visual interest.

Footnotes

  1. http://globalmoxie.com/blog/smart-watches-wearables-data-rash.shtml
  2. http://www.smashingmagazine.com/wp-content/uploads/2015/01/01-notification-on-the-go-opt.jpg
  3. http://elizabethkohl.com/
  4. http://www.smashingmagazine.com/wp-content/uploads/2015/01/01-notification-on-the-go-opt.jpg
  5. http://www.smashingmagazine.com/wp-content/uploads/2015/01/05-wearable-conversion-opt.png
  6. http://www.storyboardthat.com/
  7. http://www.smashingmagazine.com/wp-content/uploads/2015/01/05-wearable-conversion-opt.png
  8. http://www.smashingmagazine.com/wp-content/uploads/2015/01/02-apple-watch-opt.png
  9. http://www.sketchappsources.com/free-source/792-apple-watch-sketch-freebie-resource.html
  10. http://www.smashingmagazine.com/wp-content/uploads/2015/01/02-apple-watch-opt.png


Don’t Get Lost In Translation: How To Conduct Website Localization

A common mistake with localized websites is considering the translated content to be just another version of the pages in the original language. Translation isn’t everything. Of course, for the user it’s all about the content: Is the content relevant and understandable and in line with the user’s cultural context?

As pointed out in Entrepreneur:

“According to research firm IDC, web users are four times more likely to purchase from a company that communicates in their own language. Additionally, visitors to websites stay twice as long on sites that are available in their native tongue, according to Forrester Research.”

From a commercial point of view, when you decide to create and maintain a multilingual website, you have to consider many more points than just translation. We’ll explore some of the issues to think about when localizing a website.

Cultural Context Is Everything

The decision to expand one’s operation into other markets should always be preceded by deep and thorough research. Localizing a website is an important business decision that will have a great impact on how well you achieve your business goals in certain markets. Before localizing any content, partner up with local agencies to better understand the target audience, and check whether your product meets local standards and is in line with the core cultural values of the audience. Understanding the basic differences in customs and practices between nations is important.

Take the US and Poland. The US style of business is more relaxed and outgoing than in Poland, where things tend to be more formal. This is why some Polish companies find it difficult to enter the US market — because their business style, language and approach might be considered stiff and overly formal. And to Polish people, the US style of business might come off as too loose, even unprofessional at times.

If you investigate Japanese culture, you will find that the Japanese are very strict about following business protocol. So, if a spontaneous Italian wanted to do business with a rule-following Japanese citizen, they would both need to understand the nuances of and differences between their cultures. An interesting characteristic of Japanese culture is that sarcasm doesn’t exist. So, if your content is sarcastic in nature, your Japanese clients simply won’t understand it.

Hire a local consultant to compare notes and ensure that your message will be heard and understood by customers.

First Things First: The Translations

Translating a website (especially a complex one) can be a challenge. You need to decide on your core markets first and then choose the language version accordingly. If you don’t have a clue where to start, then hire a translation agency or a localization consultant. They will plan and guide you through the process.

If you’re into challenges and would like to tackle this one on your own, then a good place to start is ProZ.com, where you can find and test freelancers and agencies. Chances are you’ll find people experienced in your field who already have similar projects on their books.

Beware: Experienced translators tend to be a little pricey, and if you don’t intend to establish a long-term relationship, then they will apply a minimum charge for your project, which means you would have to pay the same amount for a small touch-up or proof as you would for a two-page translation. This can be a deal-breaker for small businesses. If your budget is small, then consider less experienced freelancers; however, expect some difficulties throughout the process — after all, both of you are learning.

ProZ.com, a community of translation professionals and a marketplace for translation services.

Tip: This is a good opportunity to try out your negotiation skills because all fees are negotiable, even if the translator’s profile states otherwise. (Been there, done that.)

The vast majority of people on ProZ.com use translation software, such as Trados, which is helpful with large projects. With translators who charge by the word, overpaying is easy. Trados will help you estimate your word count and will eliminate repetition to save you money.

SDL Trados provides software for translation memory and terminology management.

Trados is also a good base from which to maintain a website in multiple languages. It holds a database of all of your previous translations, which can be shared fairly easily with new translators. It comes at a price, but if you are seriously thinking about making your business accessible to many different markets, then at least consider trying it.

Multilingual Websites: So, What You See Is What You Get?

Well, not exactly. With any software localization project, what you see in other languages is a result of close collaboration between localization experts and programmers. Keeping your IT team in the loop is crucial if you plan to add any languages to your website — the most obvious reason being that someone needs to prepare the system for the new content and help with its daily maintenance.

The IT team doesn’t need to be experts in localization, but you do need to prepare them (and yourself) from the start for the possibility that their code will need to be updated. They (and you) will need to consider potential problems:

  • deciding how to display the default language in a locale;
  • displaying the content in a right-to-left layout;
  • determining how much space is needed to present the content in other languages;
  • displaying special characters and choosing the right encoding for them (such as UTF-8, UTF-16, etc.);
  • displaying first and last names in a culturally sensitive way;
  • removing any hardcoding of dates and currencies;
  • displaying the right calendar for the location.

This is just the beginning of challenges to consider in the back end. Smashing Magazine has a nice roundup of the technical issues to address. Consider reading it and passing it along to your programmers before planning any localization.
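
For the hardcoded dates and currencies item above, here is a minimal sketch using the standard Intl API (built into modern browsers and Node.js), which derives formatting from the locale instead of hardcoding it:

```typescript
// Locale-aware date and currency formatting with the standard Intl API.

const orderDate = new Date(2015, 0, 15); // 15 January 2015
const price = 1299.5;

for (const locale of ["en-US", "de-DE", "ja-JP"]) {
  const date = new Intl.DateTimeFormat(locale, { dateStyle: "long" }).format(orderDate);
  const money = new Intl.NumberFormat(locale, {
    style: "currency",
    currency: locale === "en-US" ? "USD" : locale === "de-DE" ? "EUR" : "JPY",
  }).format(price);
  console.log(`${locale}: ${date} | ${money}`);
}
// en-US: January 15, 2015 | $1,299.50
// de-DE: 15. Januar 2015 | 1.299,50 €
// ja-JP: 2015年1月15日 | ￥1,300  (yen has no minor unit, so the value rounds)
```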

Practical Tips

The Layout

People read in an F-shaped pattern, meaning they will scan a website from left to right and focus most of their attention on the left side. But that is only true in the West. If your core market is any of the Arabic countries, bear in mind that they read right to left, which your localization process will need to address. You can’t simply put the translated content into the same layout because it wouldn’t be in line with the cultural code of your audience.

English version of the BBC website.
Arabic version of the BBC website.
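
On the front end, one common approach is to set the document’s base direction from the locale so that CSS logical properties (such as margin-inline-start) adapt automatically. A minimal TypeScript sketch, with an illustrative list of right-to-left languages:

```typescript
// Flip the document's base direction for RTL locales instead of
// hardcoding a left-to-right layout.

const RTL_LOCALES = new Set(["ar", "he", "fa", "ur"]); // illustrative sample

function applyDirection(locale: string): void {
  const lang = locale.split("-")[0].toLowerCase(); // "ar-EG" -> "ar"
  document.documentElement.lang = locale;
  document.documentElement.dir = RTL_LOCALES.has(lang) ? "rtl" : "ltr";
}

applyDirection("ar-EG"); // results in <html lang="ar-EG" dir="rtl">
```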

The Space

Remember that every language has a set of characters that occupy a different amount of space in a layout. If your first language is English and you want to localize a website for Germany, then you might be surprised that some words are ridiculously long, even though both languages use Latin characters. Of course, it’s “ridiculous” only from the point of view of an English-speaking person, whose layout would be built to fit English text. Other languages occupy even less space than English. Do your research to fit the content in the layout.

Nomensa illustrates this variability for the simple and common words “search” and “basket”:

The word “search” takes up 10 characters in French but only two characters in Japanese. The word “basket” takes up 6 characters in English but when translated to German takes up a massive 13 characters.

Make sure your designers and front-end teams know how to handle this from the beginning, because a localized website might be visually very different from the “original.”
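
One illustrative way for those teams to catch expansion problems early is to compare translated string lengths against a budget derived from the source copy. In this TypeScript sketch, the 1.3× budget is an assumption to tune per layout, not a standard:

```typescript
// Flag translations likely to overflow a layout designed for the source language.

function flagOverflowRisks(
  source: Record<string, string>,
  translated: Record<string, string>,
  budget = 1.3, // illustrative default: allow 30% expansion
): string[] {
  return Object.keys(source).filter((key) => {
    const t = translated[key];
    return t !== undefined && t.length > source[key].length * budget;
  });
}

const en = { search: "Search", basket: "Basket" };
const de = { search: "Suche", basket: "Einkaufskorb" }; // 12 characters vs. 6

console.log(flagOverflowRisks(en, de)); // ["basket"] -> review this label's layout
```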

For instance, if you are localizing a website for the Chinese market, then you should know that, even if the translation occupies less space overall, the individual characters will occupy more space than those of a standard Western alphabet. Your team will need to address this. Also, some languages do not add contrast the way we do: Japanese, for example, does not have italics. You need to find a way around this if you want your localized content to be read naturally by all audiences.

An example of how the same text can look in various languages.

Special Characters

Almost every language in the world comes with its own set of special characters, such as unique letters and accents. In total, 110,116 characters exist among all known languages.

The problem arises when you are not prepared to display special characters. The characters might end up as weird symbols as a result. Or, if you decide not to use the special characters at all, your translations will end up meaning something different.

Solve this by properly encoding your website. Encode in UTF-8 in almost all cases, but use UTF-16 if your core markets are primarily Asian, because it reduces the bandwidth of websites that consist mostly of non-Latin characters.
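
The trade-off is easy to verify: UTF-8 encodes ASCII characters in 1 byte but typical CJK characters in 3, while UTF-16 uses 2 bytes per code unit either way. A small sketch using the standard TextEncoder API:

```typescript
// Compare the encoded size of Latin vs. Japanese text in UTF-8 and UTF-16.

const utf8Bytes = (s: string): number => new TextEncoder().encode(s).length;
const utf16Bytes = (s: string): number => s.length * 2; // 2 bytes per code unit

const latin = "Add to basket";
const japanese = "カートに入れる";

console.log(utf8Bytes(latin), utf16Bytes(latin));       // 13 vs 26 -> UTF-8 wins
console.log(utf8Bytes(japanese), utf16Bytes(japanese)); // 21 vs 14 -> UTF-16 wins
```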

Another culprit of misrendered special characters is custom fonts that are minified and embedded on websites. Stripping out unnecessary characters to reduce a font’s file size and thus speed up loading time is a common practice. Don’t use a minified font to display user-generated content; comments, for example, might come out wrong even though all other content on the website looks fine.

Changing the Locale and Language

The most obvious guideline is to automatically show users content in their native language when it is available. There are a few ways to do this, and you should plan for it from the beginning.

Will you display a language version based on the user’s IP address or on the browser’s settings? What about countries like Switzerland that have more than one national language? Consider asking visitors to set their language upon arriving on the website, and remember their choice.

Allow the user to select their preferred language.
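
A minimal sketch of this detect-then-remember logic, assuming a browser environment; the storage key and supported-language list are illustrative:

```typescript
// Pick an initial language from the browser, but let an explicit,
// remembered user choice win.

const SUPPORTED = ["en", "de", "fr", "ar"];        // illustrative
const STORAGE_KEY = "preferred-locale";            // illustrative

function resolveLocale(): string {
  const saved = localStorage.getItem(STORAGE_KEY);
  if (saved && SUPPORTED.includes(saved)) return saved; // remembered choice wins
  for (const candidate of navigator.languages ?? []) {
    const lang = candidate.split("-")[0];
    if (SUPPORTED.includes(lang)) return lang;          // first browser match
  }
  return "en"; // fallback default
}

// Called from the site's language picker so the choice persists.
function setLocale(locale: string): void {
  localStorage.setItem(STORAGE_KEY, locale);
}
```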

If your content is not 100% localized, is it better to upload a partial translation or wait until it’s all done? How do you inform visitors that it is incomplete? One strategy is just to show what you have and hide everything else — not a good option, though, if you’re running an e-commerce business. Another strategy is to be straightforward and tell potential customers what level of support they can expect in their language.

GetResponse chose to inform customers about the expected level of language support.

Market-Specific Issues (Measurements, Calendars, Holidays, General Tone of Copy)

Market-specific issues can be subtle yet make all the difference with the cultural sensitivities of your audience. Take date-pickers. In the US, date-pickers are displayed as MM-DD-YYYY, whereas in most European countries the day precedes the month (DD-MM-YYYY).

Another example is calendars. As Zack Grossbart notes:

The US starts the week on Sunday, the UK on Monday and the Maldives on Friday.

Even if a feature as simple as a calendar seems obvious to you, it might not be for customers in other markets.

Imagine users who are accustomed to a certain format for picking a delivery date. If they don’t pay attention to the instructions (which mention that the week starts on Sunday), they might just click on the calendar and expect to see Monday first. Users might feel unhappy and misled as a result (according to their cultural code). Consider implementing helpful little fixes with your IT team, like jQuery’s Datepicker.
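
Here is a small TypeScript sketch of the same idea: ordering weekday headers by the locale’s first day of the week. jQuery UI’s Datepicker exposes this via its firstDay option; the locale map below is a tiny illustrative sample:

```typescript
// Order weekday headers by a locale's first day of the week.

const WEEKDAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];

// 0 = Sunday, 1 = Monday, 5 = Friday (illustrative sample, not exhaustive)
const FIRST_DAY: Record<string, number> = {
  "en-US": 0, // the US starts the week on Sunday
  "en-GB": 1, // the UK on Monday
  "dv-MV": 5, // the Maldives on Friday
};

function orderedWeekdays(locale: string): string[] {
  const start = FIRST_DAY[locale] ?? 1;
  return WEEKDAYS.map((_, i) => WEEKDAYS[(start + i) % 7]);
}

console.log(orderedWeekdays("en-GB"));
// ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
```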

What about the tone of your message? Did you know that some languages are more formal than others? For instance, an informal tone is almost taken for granted in English, and addressing your audience as Sir and Madam might sound a little weird. German, French and Japanese are quite the opposite. A familiar tone might come off as rude with those audiences. Bear this in mind when hiring translators; ask them for advice on being culturally sensitive with your tone of voice.

Maintenance

Your website is like a living organism. You, as the owner, product manager or marketer, want it to become better and better and meet customers’ expectations. You will be adding products, running promotions and changing content — it’s a continual development process. When you localize a website, you’ll need to update that version every time you change the original.

Your IT team should be ready to jump on any localization-related issues when something is not displaying right. Cultivate relationships with translators and have them on standby to keep your content up to date.

Most importantly, set aside enough time to finish all of your language versions. Completing a website in one language by the deadline is a big enough task. Doing it in several languages will take much longer. Translating takes time; uploading content takes time; proofreading takes time. Account for this in your timeline, and plan well in advance.

Let Them Find You: Be Where Your Audience Is

Make your localized website visible to search engines. And by search engines, we don’t just mean Google. Put yourself in the customer’s shoes for a while and ask yourself (or ask locals) where you would start looking for the service your company offers. Google is a popular search engine, but a quick look at, say, China reveals that Baidu22 is king there. And if you’re thinking about expanding into Russia, you cannot afford not to be easily found on Yandex23.
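One concrete step that major search engines support is annotating each page with hreflang links, so crawlers can map every language version to the right audience. A small sketch that generates the tags (the URL scheme is invented for illustration):

```js
// Generate hreflang <link> tags for a page's language versions.
const locales = ["en", "de", "fr", "ru", "zh"];

function hreflangTags(path) {
  return locales
    .map((l) =>
      `<link rel="alternate" hreflang="${l}" ` +
      `href="https://www.example.com/${l}${path}">`)
    .join("\n");
}

console.log(hreflangTags("/pricing"));
// <link rel="alternate" hreflang="en" href="https://www.example.com/en/pricing">
// <link rel="alternate" hreflang="de" href="https://www.example.com/de/pricing">
// ...
```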

Remember that some countries have tighter control over the Internet and might censor content. This is easy to overlook when planning content for different markets. For example, China blocks certain websites and services, including Facebook, YouTube and Dropbox. So, if you have a product video that you would like to share with a Chinese audience, you will need to find an uncensored local platform, such as Tudou24.

Try to come off as professional, and build your brand through social media. Be visible on various local social networks. For example, in addition to LinkedIn, add Xing25 for German-speaking markets.

Legal Considerations

Each country has its own regulations on privacy, terms of service, complaint procedures, customer support, taxes, data protection and so on. You need to ensure that the content you translate is in line with local law. To make sure you are operating legally in another market, consider hiring a local legal specialist. Their services are pricey, but it’s much better than paying legal fees for violating local law.

Let’s say you do business across Europe and you have three German-speaking countries covered: Switzerland, Germany and Austria. Translating your documents once is not enough. These countries all have different regulations on refunds for products purchased on the Internet. So, not only do you have to translate your pages and make sure that the refund window specified for each country is correct, but you have to figure out how to make this information easily accessible in each of these countries. Do this carefully, or else you might pay for it in legal fees.

Make It Easy to Pay

No business will ever succeed if customers find paying for a product hard, or if they don’t trust the payment provider. And even the most popular payment methods are popular only in certain markets.

Residents of some countries do not trust PayPal and will expect you to support locally popular payment methods, such as wire transfer. Brazilians are accustomed to paying in instalments and will expect to be offered that option. In the Netherlands, people are used to paying through a secure system named iDeal26, which redirects them to their own bank.

Always know the regulations behind accepting payments and charging customers. Did you know that unless customers in Austria submit a printed and signed SEPA form authorizing you to charge their bank account, you are not legally allowed to charge them? In the Netherlands, on the other hand, a digital SEPA form is enough. Do your research.

Test Your Ideas on the Target Audience

Once everything is ready, the work isn’t done. As mentioned, a website is like a living organism. It needs to be continually developed to meet the public’s expectations and deliver on your business goals. Remember to test your website on target users. Targeted recruiting will give you access to valuable insider knowledge of certain countries. One useful tool for finding target users is Cint27, which provides access to the opinions of 10 million people in 60 countries.

You can save a great deal of energy by testing remotely. Methods such as click testing28 and web testing29 are perfect for learning what attracts the most attention on your website. Audiences in different countries could be attracted to different things. Always ask for feedback with a simple survey. You’d be surprised by the answers you get to just a few open questions. With this knowledge, you will be able to adjust your content strategy and localize your website with even greater precision.

Many tools will help you get the ball rolling with remote user research. SurveyMonkey30, SoGoSurvey31 and Marketizator32 are perfect for setting up simple questionnaires in no time. UsabilityTools33 and UserTesting34 enable you to set up remote tests in the back end of your website should you decide to expand your research to other methods.

Customer Expectations

This might not fall under the technical issues related to developing and maintaining a multilingual website, but from a customer’s point of view, it is crucial. If your website is in an audience’s native language, they will have certain expectations.

Visitors will want to talk to customer support in their own language. If you offer telephone support only for your local language, customers in other countries might find it unprofessional. Who wants to pay extra for an international call? Many solutions out there enable you to set up local numbers35 in other countries and redirect calls to your main location. You might also receive email in a language that your team doesn’t understand. Either prepare templates to answer common questions or hire native speakers as support advisors. Visitors want both content and support in their native language.

If you’ve decided to localize your website, then you are thinking seriously about expanding into other markets. This is great for a business of any size. What I hope you get out of this article is that localization is not as simple as straight translation. There are many parts of the equation to consider. If you forget one part, the rest won’t add up. Here’s a handy little list to remember:

  • Prepare a localization strategy. What markets do you consider to be the most important, and which languages will you need? This is the time to check with lawyers as well.
  • Have a solid talk with your team about expansion plans. What are their concerns, and what do they advise?
  • Take time to find the best translators and local advisers. They will make your business look good to potential customers with different cultural sensibilities.
  • The entire process will take time, so be prepared for it. Get an estimate of the time and cost from each party involved, and manage the process well. Don’t forget to set aside time for proofreading and testing!
  • Once you’re up and running, give your clients a voice and listen to it carefully. Run tests, and improve your business based on the feedback.

We hope this article has helped you to identify some misconceptions about creating and running a multilingual website. Do you have an experience of your own? What were your ups and downs? Feel free to share them in the comments.

(il, al)

Footnotes

  1. http://www.entrepreneur.com/article/224742
  2. http://www.smartling.com/blog/2014/06/18/challenges-translating-grammar-nuances-cultural-differences-japanese-language/
  3. http://www.proz.com/
  4. http://www.proz.com/
  5. http://www.proz.com/
  6. http://www.translationzone.com/trados.html
  7. http://www.translationzone.com/trados.html
  8. http://www.translationzone.com/trados.html
  9. http://www.smashingmagazine.com/2012/07/18/12-commandments-software-localization/
  10. http://www.bbc.com/news/world_radio_and_tv/
  11. http://www.bbc.com/news/world_radio_and_tv/
  12. http://www.bbc.co.uk/arabic
  13. http://www.bbc.co.uk/arabic
  14. http://www.omniglot.com/language/articles/multilingual_websites.htm
  15. http://www.omniglot.com/language/articles/multilingual_websites.htm
  16. http://www.smashingmagazine.com/2012/06/06/all-about-unicode-utf8-character-sets/
  17. http://www.smashingmagazine.com/2012/07/18/12-commandments-software-localization/
  18. http://www.vemma.eu/
  19. http://www.getresponse.com
  20. http://www.smashingmagazine.com/2012/07/18/12-commandments-software-localization/
  21. http://jqueryui.com/datepicker/#localization
  22. http://www.baidu.com/
  23. http://www.yandex.ru/
  24. http://www.tudou.com/
  25. https://www.xing.com/
  26. http://www.ideal.nl/en/
  27. http://www.cint.com/
  28. http://www.usability.gov/how-to-and-tools/methods/first-click-testing.html
  29. http://usabilitytools.com/features-benefits/automated-testing/
  30. https://www.surveymonkey.com/
  31. http://www.sogosurvey.com/
  32. http://www.marketizator.com/
  33. http://www.usabilitytools.com
  34. http://www.usertesting.com/
  35. http://www.callforwarding.com/

The post Don’t Get Lost In Translation: How To Conduct Website Localization appeared first on Smashing Magazine.

Taken from:  

Don’t Get Lost In Translation: How To Conduct Website Localization


How To Run User Tests At A Conference

User testing is hard. In the world of agile software development, there’s a constant pressure to iterate, iterate, iterate. It’s difficult enough to find time to design, let alone get regular feedback from real users.

For many of us, the idea of doing formal user testing is a formidable challenge. There are many reasons why: you don’t have enough lead time; you can’t find enough participants, or the right type of participant; you can’t convince your boss to spend the money.

In spite of this, user testing is the best way to improve your designs. If you rely on anecdotal data or your own experience, you can’t design a great solution to your users’ problems. User testing is vital. But how do you make the case for it and actually do it?

What Is User Testing?

Let me start by defining what user testing is, and what it is not.

User Testing Is

  • Formal
    Your goal is to get qualitative feedback on a single design iteration from multiple participants. By keeping the sessions identical (or as similar to one another as possible), you’ll be able to suss out the commonalities between them.
  • Observational
    Users don’t know what they need. Asking them what they want is rarely a winning strategy. Instead, you’re better off being a silent observer. Give them an interactive design and watch them perform real tasks with it.
  • Experimental
    At the core of any user study is a small set of three to five design hypotheses. The goal of your study is to validate or invalidate those hypotheses. The next iteration of the design will change accordingly.

User Testing Is Not

  • Ad-hoc
    Don’t accept what a single person says at face value. Until you get signal from several people that a design is flawed, withhold judgment. Once five or six participants have given consistent feedback, change the design.
  • Interrogative
    Interviews are useful for learning about users, their roles, and their experiences. But keep it brief. Interviews tend to put the focus on what people say they do, not what they actually do.
  • Quantitative
    Because the sample size is small, you can’t make strong statistical extrapolations based on numbers alone. If you care about numbers, look into surveys, telemetry, and self-guided usability tests instead.

What Is A User Study?

A user study is a research project. It starts with a small set of design questions. You take those questions, reformulate them as hypotheses, devise a plan for validating the hypotheses, and conduct five or six user tests. Once done, you summarize the results and decide on next steps. If the findings were clear, you might make improvements to the design. If the findings were unclear, you might conduct an additional study.

You won’t get it right the first time. Test your design, iterate, and repeat. (Image credit2)

A Good User Study Has Clear and Measurable Outcomes

If you have clear expectations, it will be much easier to take action on what you learn. This is often accomplished with hypotheses: testable statements you assume to be true for the purposes of validation. Examples of good hypotheses include:

  • Users can add an item to their shopping cart and check out within five minutes.
  • Users want to click on server-related error messages to see additional details.
  • Users are not frustrated by the lack of a dashboard in the product.

A Good User Study Is Easy to Facilitate

This is especially important if you are not the facilitator. If the facilitator is inexperienced with user testing, you’ll need to provide a test script which is easy to understand, keeps the test on track, and explains what you are trying to learn from the test.

A Good User Study Must Be Sufficiently Detailed and Interactive

If you want to measure a user’s reaction to an on-screen animation, you probably need a coded prototype. If you need to decide whether a particular screen can be omitted from the final design, a set of PSD mockups will do. Needless to say, this is a lot of moving pieces. Effective user studies are rigorous, and rather expensive to pull off as a result. If you cut corners, you may second-guess your results and need to run another study to be sure.

Self-Evaluation

That’s what user testing is. Now, ask yourself the following questions:

  • Do you conduct user tests?
  • Are they a regular part of your practice?
  • Would you like to do more of them?
  • What’s keeping you from doing more of them?

I ask these questions often. It’s amazing how few of us do user testing with any consistency, myself included. Everyone wishes they did more of it. That’s both a problem and an opportunity.

User Testing In An Agile World

The agile mantra is “fail fast, fail early”. The faster you fail, the faster you’ll converge on the right solution. This equates to a lot of tight iterations. Agile teams traditionally have two-week sprints, with the goal of releasing a running (read: testable) build at the end of each sprint.

Great, right? The problem is that this leaves very little time to validate a design, summarize the results, and do just-in-time design for the next iteration. Recruiting can take a week in itself, to say nothing of the testing.

And that’s not tenable. At most, you’ll have a few days to get some actionable insights before the next iteration starts. How might we solve this problem?

Let’s make a few assumptions:

  • Five iterations from the start to the end of the design process.
  • Five participants in each user test (25 participants for all iterations).
  • Four designs in flight simultaneously (five iterations each, 100 participants in total).

One way to solve the problem of getting out in front is to validate multiple iterations before any software is built. Not every design needs a live-code prototype to validate it. Sometimes, a clickable Balsamiq PDF is enough. Now, we’ve shifted the problem. The number of design iterations (and the number of test participants) is the same as before, but you can get a lot further before engineering starts building anything. You just need a lot of participants, fast.

User Testing At Conferences

Unless you’re lucky enough to design a product that millions of people use, recruiting can be a challenge. Since I design software for system administrators, the best place to get qualitative feedback in a matter of days is at an IT conference.

The basic steps are:

  1. Pick a conference
  2. Write some studies
  3. Set up your booth
  4. Analyze the results (in real time)
  5. Iterate on the design
  6. Rinse and repeat

Obviously, you’ll need help, so bring some volunteers with you. Also, don’t expect to nail this the first time you try it. Give yourself a chance to make mistakes and learn from them.

Conferences: the best place to conduct a lot of user tests in a very short amount of time. (Image credit4) (View large version5)

The number of times you can iterate depends on what you’re learning. If you’re learning a lot, keep going. If you’re running into tool limitations, it might be time to stop and have your development team build you a live-code prototype.

Bonus: if you have software development skills, you might be able to build a prototype yourself. Better yet, bring some developers with you.

Disclaimer: I’ve done conference-based user testing twice, and haven’t entirely nailed these steps (even though we’ve made great strides in the right direction). It might take a few tries to get it right.

Attempt #1: PuppetConf 2012

Once a year, Puppet Labs hosts PuppetConf, a tech conference for IT professionals. In 2012, it was held at the Mission Bay Conference Center in San Francisco and 750 people attended.

Two of us prepared five studies and set up three user testing stations in a high-traffic hallway. Each user testing station consisted of a laptop, a stack of test scripts and NDAs, and a volunteer to help facilitate the tests. We had about 16 volunteers, and ran 50 user tests.

Mission Bay Conference Center at UCSF, the site of our 2012 user testing. (Image credit7) (View large version8)

This was a great experience, but we didn’t get much actionable research out of it. Our focus was on data gathering. We didn’t bother to analyze that data until weeks after the conference, which meant it had gathered dust. In addition, the things we tested weren’t on our product roadmap, so the research wasn’t timely anyway.

Attempt #2: PuppetConf 2013

In 2013, we repeated our user testing experiment. That year, it was held in the Fairmont San Francisco hotel and 1,200 people attended.

Five of us prepared six studies and set up three user testing stations in a room adjacent to a high-traffic hallway. We added dedicated lapel mics and three-ring binders to keep our scripts organized. With the same number of volunteers (16), we ran almost twice as many user tests (95).

This year was vastly more successful than the previous year. We pulled analysis into the event itself, so we got actionable data more quickly than before.

Fairmont San Francisco, the site of our 2013 user testing. (Image credit10) (View large version11)

Unfortunately, we didn’t go the extra step of iterating on what we learned during the conference. Our product wasn’t affected until months later. It was a step in the right direction, but too slow to be considered agile.

What Did We Learn?

In 2012, we made a large number of mistakes, but we learned from those mistakes, improved our tests and testing process, and doubled both the quality and quantity of the tests in 2013. So, don’t be afraid of failing. A poor user testing experience will only help you learn and improve for next time.

Here are some of my observations from those experiences.

Conferences Let You Cut the Fat out of Recruiting

Recruiting is very time-consuming. We have a full-time position on our research team at Puppet for that very purpose. But at conferences, people are already present and willing to engage with you. All you need to do is show up.

In a typical user study, we send out a screener email to 50–100 people in our testing pool. A lot of people won’t respond, and of those who do, only some will meet the requirements for the test. It takes time to get enough valid responses, and sometimes we have to widen the net, which takes more time.

Conferences Let You Validate Your Entire Roadmap

In both years we had more interest in testing than we could facilitate. In 2013, the 95 participants who tested with us were far more than we needed.

If you decide to conduct self-guided, quantitative usability tests, you can run even more tests. In 2014, our research team had over 200 people take a single usability test.

Conferences Are Chaotic, But Process Can Help

In 2012, we had a simple four-stage process: greet, recruit, test, and swag.

  1. Greet
    Every time someone came to our booth, we had a greeter volunteer who said Hi and told them what we were doing.
  2. Recruit
    Next, we asked if they wanted to join our Puppet Test Pilot pool for testing opportunities throughout the year. If so, we scanned their badge.
  3. Test
    If we had a test station available, we asked if they wanted to take a 15–20 minute user test. If so, the greeter introduced the participant to a facilitator at one of the stations.
  4. Swag
    At the end of the testing, we thanked each participant, and gave them a limited edition T-shirt and a signed copy of Pro Puppet.

This process worked well, but there were a couple of obvious holes. First, we didn’t have a good screening process, so there was no guarantee that a participant was a good match for the tests. Second, we didn’t have a plan to quickly learn from the tests and act accordingly (see: agile).

To correct these shortcomings, we introduced two additional steps in our 2013 testing process: greet, recruit, screen, test, swag and analyze.

  • Screen
    At the beginning of the testing process, the facilitator asked the participant six questions, one for each user test. If the answer was yes, we knew they’d be a good match for the test.
  • Analyze
    At the end of the testing process, the facilitator filled out a short form. Each user test was allocated a text field, with the study hypotheses alongside. The facilitator entered their notes, and marked the validity of each hypothesis.

Conferences Allow Your Competitors to Snoop

We used NDAs to counteract this. As an unintended side-effect, they made the testing seem more exclusive and special, so participants were eager to sign them.

In 2013, we switched from paper to digital forms, via DocuSign. From a logistical standpoint, this was a great move. We didn’t have to keep track of loose stacks of paper after the conference. On the other hand, the signing workflow was rather cumbersome. People had to sign their initials three times and click multiple times to complete the NDA.

Conferences Are a Great Way to Build User Empathy

Ultimately, user testing is about people, not testing. Both years, we recruited volunteers from non-UX departments within the company: engineering, product, marketing, and sales. It was great to give these people an opportunity to engage with our users over real problems.

And it goes both ways. People love to talk about their job, their pain points, and how your product or service falls short of easing that pain. No, anecdotal data isn’t terribly useful in a design context, but it can help you build a mental model of real-world problems.

Conferences + User Testing Is a Scary Combination

As I mentioned, we recruited volunteers from non-UX teams. Many of those volunteers had never conducted user tests before. It was a nerve-wracking experience for many of them.

In 2013, we instituted a training process to get our volunteers up to speed more quickly, built around a series of training meetings.

In the first meeting, we got everyone in the same room and talked through the testing process and the tests at a high level. Next, we broke up into small groups of two or three people apiece. In these groups, we had volunteers practice facilitating the tests with each other. The test author attended these as well, to spot areas in need of improvement or clarification.

If our volunteers were still nervous about the prospect of user testing, we met with them personally. In some cases, we convinced them to push forward and run user tests anyway. In other cases, we moved them to a less demanding role, usually the role of a greeter.

Conferences Are a Black Hole for Data

In the first year, one of our three test laptops was mysteriously wiped of data. The second year, two of our laptops were stolen. We lost all of the test recordings on those machines.

The silver lining was the post-test analysis we did in 2013. Because our facilitators took such rigorous notes, and saved those notes to the cloud, we retained the data, even though the actual recordings were lost.

Process Is King, But Organization Is Queen

Keeping things digital as much as possible helps. If you must use paper, don’t use manila folders. Instead, use three-ring binders with labels to keep your papers collated.

On the digital side of things, consider having a single folder where all conference-related documents and data live. Use tools like Dropbox or Box to keep everything synchronized across machines. Having local copies is critical, in case the network goes down, which it probably will.

Use Retrospectives to Learn and Improve

After the conference, hold a meeting with the core testing team. For the first five or ten minutes, write ideas on sticky notes. These ideas should take the form of things to stop doing, keep doing, or try doing. Put these stickies on a whiteboard, under the appropriate column (keep, stop, or try).

Once everyone runs out of ideas, pick a volunteer. This person groups the stickies by theme (e.g. “communication”, “process”, “people”). Ideally, everything boils down to three to five groups. For each group, find an actionable way to improve that area, then assign each action item to a member of the group. It becomes that person’s responsibility to own it.

Should You Add Conferences To Your Toolbox?

Having done this a couple of times, it’s clear that there are pros and cons. No user testing tool or technique is a cure-all, and conference-based testing is no exception.

Pros

  • Lots of participants
    Hundreds at a small conference, thousands at a medium conference, tens of thousands at a large conference. Take your pick.
  • Easy recruiting
    Build it and they will come. It helps if you point your laptops into the room, and have the designs clearly displayed on their screens.
  • Enables rapid iteration
    You can easily complete five or six tests in an hour or two. Faster if you have multiple test stations.

Cons

  • Chaotic testing environment
    You know those quiet usability testing rooms with the mirrored glass? You won’t find those at a conference.
  • Travel required
    Unless you’re lucky enough to have a relevant conference in your city, you’ll probably need to fly somewhere. This can be expensive.
  • Difficult timing
    Remember those roadmaps I mentioned earlier? If the design phase doesn’t line up with a conference, find a different way to get the research you need.

In general, this approach works well when you have a predictable product roadmap. If you know what you’re going to be building, and when, you can time the design phase to coincide with one or more conferences.

On the other hand, if you need the flexibility to run tests at a moment’s notice, this approach won’t work well. In that situation, I recommend having a dedicated room for testing at your company, containing all the equipment you’ll need.

Tips To Make This Work For You

If you’ve read this far and think conference-based user testing is right for you, great! Here are some tips to help you succeed.

  • Pick a conference five months in advance
    You don’t have to know exactly what you’ll be testing, but it’s a good idea to have a target date and venue in mind, so you can start thinking about it.
  • Pick a conference with people who don’t know you exist
    Because we ran testing at our own conferences, everybody knew about us. This self-selection bias prevented us from getting a good cross-sample of our potential market.
  • Don’t pick a booth in the busiest hallway
As tempting as it might be to get maximum visibility, ask yourself if the additional chaos is worth it. In 2013, we picked a booth in a room separated by a half wall from a busy hallway. As a result, we had good visibility without being in the middle of the chaos.
  • Don’t write every study yourself
The first year, I wrote four of the five user studies. As a result, they were difficult to facilitate and didn’t result in actionable data. It’s time-consuming to write a good user test that validates your hypotheses and is easy to facilitate.
  • Don’t schedule people in advance
    When your testing stations fill up, it’s tempting to start a waiting list. Don’t do that. You’ll become beholden to the list and have to turn people away, even when there appear to be empty test stations. Be serendipitous about it.
  • Practice running each test on each machine before the conference
    Murphy’s law. Need I say more?
  • Go forth and user test
    The only thing worse than a poor user testing experience is not doing it at all. If you fail, at least you’ll learn how to do it better next time. If you don’t do anything, you’ve learned nothing.

And that’s it. If you have any questions, please get in touch through Twitter or leave a note in the comments below. Thank you for reading.

Resources

When I first proposed conference-based user testing to my team, I was an intern straight out of school. If I could pull this off, so can you. If you’re still intimidated, start small. You can grow your efforts, but you have to start somewhere.

Here are some of the resources we used in testing:

Tools

  • DocuSign12
  • Silverback13

Articles and examples

Since doing this, I’ve learned of others who have done user testing at events. Here’s a list of articles with slightly different takes on the process:

  • “User Testing in the Wild: Research at Conferences and Other Events,” Google Ventures16
  • “User Testing at Events,” YouEye17
  • “Run Live User Testing. We Dare You,” Joylab18
  • “User Testing at Atlassian Summit,” Atlassian19
  • “Usability Testing at Conferences,” Dexo Design20

(il, og, ml)

Front page image credits: Rosenfeld Media21

Footnotes

  1. https://www.flickr.com/photos/raneko/4204026836/
  2. https://www.flickr.com/photos/raneko/4204026836/
  3. http://www.smashingmagazine.com/wp-content/uploads/2014/10/03-conference-opt.jpg
  4. https://www.flickr.com/photos/leweb3/6498827487/
  5. http://www.smashingmagazine.com/wp-content/uploads/2014/10/03-conference-opt.jpg
  6. http://www.smashingmagazine.com/wp-content/uploads/2014/10/04-puppetconf2012-opt.jpg
  7. https://www.flickr.com/photos/greentechmedia/5730027311/
  8. http://www.smashingmagazine.com/wp-content/uploads/2014/10/04-puppetconf2012-opt.jpg
  9. http://www.smashingmagazine.com/wp-content/uploads/2014/10/05-puppetconf2013-opt.jpg
  10. https://www.flickr.com/photos/bradfordcoy/4400862442/
  11. http://www.smashingmagazine.com/wp-content/uploads/2014/10/05-puppetconf2013-opt.jpg
  12. http://www.docusign.com
  13. http://www.silverbackapp.com
  14. http://www.nngroup.com/topic/user-testing/
  15. https://puppetlabs.com/community/puppet-test-pilots-program
  16. https://www.gv.com/lib/user-testing-in-the-wild-research-at-conferences-and-other-events
  17. https://www.youeye.com/blog/user-testing-at-events/
  18. http://joylab.co.uk/blog/run-live-user-testing-we-dare-you/
  19. https://blogs.atlassian.com/2014/10/user-testing-atlassian-summit/
  20. http://web.archive.org/web/20090321153629/http://www.dexodesign.com/2007/07/29/usability-testing-at-conferences/
  21. https://www.flickr.com/photos/rosenfeldmedia/7171775806/

The post How To Run User Tests At A Conference appeared first on Smashing Magazine.

Taken from: 

How To Run User Tests At A Conference


A Guide To Conducting A Mobile UX Diagnostic

Today’s mobile users have increasing expectations: they are intolerant of faults in their mobile experiences1, and they complain about bad mobile experiences on social media and through word of mouth. How do you make sure that your mobile experience meets or exceeds users’ expectations?

One quick way to identify potential problems is to conduct a user experience diagnostic, by having a few mobile specialists look for potential problems with a mobile presence. A diagnostic can be done during design and development to ensure that the mobile website or app adheres to best practices and guidelines. It also serves as a great starting point for a redesign to identify particular opportunities for improvement.

While a diagnostic can be done by a single evaluator, it is more effective when conducted by multiple evaluators with different strengths and backgrounds. These evaluators should be practitioners well versed in principles of user experience (UX) for mobile interfaces and in mobile platform guidelines, and they should not be closely involved with the design itself. A diagnostic is not a replacement for testing with end users, but rather is a quick method in a user-centered design process2.

This article will describe a process you can follow to evaluate a mobile UX, be it for an app or a website accessed on a mobile device. The steps in this process are:

  1. identify users and scenarios,
  2. conduct an evaluation,
  3. conduct a team review,
  4. document and report.

Alongside the explanation of each step, we’ll illustrate the step using the United States Postal Service as an unwitting real-world example.

Identify Users And Scenarios

A mobile UX diagnostic is conducted by expert evaluators who may or may not be active users of the mobile product. To help the evaluators walk a mile in the user’s shoes, select one to three personas based on the target audience, along with scenarios based on common user tasks and goals. Define the boundaries of the evaluation, and make it quick and efficient by asking the following questions:

  1. What should the evaluation focus on?
    Is it a website that would be accessed on a mobile device or a mobile app? If it’s an app, which platform?
  2. Which devices do your target users use?
    One way to find out is by looking at web traffic and analytics. If that’s not available, then select popular devices based on market share.
  3. Which OS versions are being used?
    Base this on the platform and device.
  4. Who are the main competitors of the website or app?
  5. Is any relevant market research available?
    This could be industry trends, reports, etc. One example would be Forrester’s Customer Experience Index3.

We’ll evaluate the app for the United States Postal Service (USPS) — “over 2 million downloads!” — on an iPhone 5 running iOS 7.1. We’ll illustrate it through the eyes of Mary Jane, an average residential postal customer. (The persona and scenarios are made up for this article.)

Persona


Mary Jane is a 37-year-old working mother of two, married to a traveling consultant. She has a job with flexible working hours that align with her kids’ school hours, but juggling it all is no easy task. She shops online a lot and has depended on her iPhone for the past five years. Mary rarely sets foot in the post office, instead relying on USPS for her shopping deliveries, occasional bills and frequent mail-in rebates.

Scenarios

  • Track packages
    Mary frequently shops online and gets deliveries to her door. She likes being able to track her packages to make sure she receives everything as expected. She wants to be able to use her phone to check the status of pending deliveries.
  • Find location
    Mary is on her way to pick up her kids from school when she realizes that today is the deadline to postmark one of her rebates. She wants to find a nearby manned post office or a drop-off location with late pick-up hours.
  • Hold mail
    The family takes three to four mini-vacations a year, during which time she places a hold on her mail to prevent any packages from being left at her door in her absence. The family’s anniversary getaway is coming up in a few weeks, and she wants to place a hold on her mail.

Conduct The Evaluation

A best practice is to have two or more evaluators independently conduct the evaluation in three parts:

  1. scenarios and related UX,
  2. rapid competitive benchmarking,
  3. overall UX.

Scenarios and Related UX

The first part involves evaluating the UX using defined scenarios of use, followed by an inspection of other aspects of the UX.

Step 1: Pick a device and OS. Test “glanceability” with a five-second test. Launch the app or website and look at it for five seconds. Then, cover the screen and answer the following: What is being offered, and what can the user do? The app or website passes if your answer closely matches its core offering.

Step 2: Put on your “persona hat” and use the website or app to walk through the scenario. Look for and identify UX issues that the persona might face in the scenario — anything that would slow down or prevent them from completing their tasks. Document the issues by taking screenshots4 and making notes as you go. Where possible, use contextual testing in the field (i.e. outside of the office) to uncover issues that you might not have exposed otherwise (for example, spotty connectivity when using a travel or retail app, or contrast and glare).

Repeat step 2 until every scenario for each persona is completed.

Step 3: Chances are, the scenarios did not cover everything that the website or app has to offer. Switch from your “persona hat” to your “UX specialist hat” to evaluate key areas not yet covered. Use a framework such as the one detailed in “The Elements of the Mobile User Experience”5 to organize the evaluation, continuing to document issues and take relevant screenshots. I find focusing on the problems to be more valuable, unless you are using a scorecard, such as Forrester’s6, or you specifically need to document strengths as well.

For an app, repeat steps 2 and 3 for the other identified platforms and devices to ensure that the app follows the guidelines and conventions of those platforms. For a website, verify that it renders as expected across devices.

For our example, I chose the “Find Location” scenario to evaluate USPS’ app for iOS.

Find Location: Mary is on her way to pick up her kids from school when she realizes that today is the deadline to postmark one of her rebates. She wants to find a nearby manned post office or a drop-off location with late pick-up hours.

Notes for “Find Location” Scenario

Here are some notes jotted down during the evaluation of the app in the “Find Location” scenario. Testing was conducted on USPS’ iOS app, version 3.8.5 (the app was updated 18 December 2013).

  • When the app launches, a splash screen appears for varying lengths of time (from as little as a few seconds to over a minute on public Wi-Fi, simulating the guest Wi-Fi network at her children’s school).
  • The home screen does not have a logo or prominent USPS branding — just a screen with icons.
  • The screen titles do not assure Mary that she is heading down the right path. Tapping “Locations” leads to a screen titled “Search,” and the titles of subsequent screens don’t match either (one says “Enter” and then “Refine search”).
  • The “Location” screen does not have sufficient information, forcing Mary to tap “Show Details” to understand the different options. Why wasn’t this made the default view?
  • The same icon is used for “Post Offices” and “Pickup Services.”
  • Locating all services at once is not possible. Mary is forced to look them up one at a time (for example, first looking up “Post Office” locations, then going back and looking up “Approved Providers”).
  • Location services are not activated for the app, and there is no alert or reminder to turn them on to use the GPS. Mary is under the impression that the functionality does not work.
  • No option exists to enter a search radius. Results from almost 50 kilometers away are returned.
  • The location results do not indicate whether a location is open or closed.
  • When Mary selects a location to view its details, she has to expand the boxes for “Retail Hours” and “Last Collection Time” individually to view that information.
  • Going back from the “Locations” screen crashes the app. Every. Single. Time. (Even after deleting the app and reinstalling.)

Related UX Notes

  • The titles used in the app are not user-friendly, but rather oriented around features and functionality. For example, “Scan” (Scan what? Why?); and “Coupons” (Get coupons? No. What coupons can one add? No clue is given.)
  • Tapping the “Terms of Use” on the home screen results in a confirmation prompt to leave the app (taking users to the mobile website). Really?!
  • The input field for the ZIP code does not bring up the appropriate numeric keyboard. In the “Supplies” section, the keyboard that appears for the ZIP code is the alphabetical one, not even the alphanumeric one (see the sketch after this list for an easy fix).
  • Many screens do not have titles (for example, try entering an address for “Supplies”).
  • The scanning experience is inconsistent. One scan took a few minutes, but the next was much quicker.
  • The app is missing expected functionality (such as expected delivery date, app notifications and a change-of-address option). The app has fewer features than the mobile website (such as an option to change one’s address).
  • The screen to track a package has a “Scan” button, instead of the conventional camera icon.
  • Information is not shared between screens in the app, forcing the user to enter the same information in multiple places (for example, for “Next day pickup,” “Get supplies” and “Hold mail”).
  • Deleting a scheduled pickup in the app does not cancel the pickup, and no warning message is displayed either.
  • A minor issue: the “Terms of Use” link on the home screen does not align with the rest of the sentence.
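As promised above, the ZIP code keyboard issue has a small client-side fix. This is a sketch only, with an invented field id: on iOS of this era, pattern="[0-9]*" is what summons the numeric keypad, and inputmode is the newer, cross-browser hint.

```js
// Hint to mobile browsers that a ZIP code field is numeric.
const zip = document.getElementById("zip-code"); // hypothetical field
zip.setAttribute("pattern", "[0-9]*");    // numeric keypad on iOS
zip.setAttribute("inputmode", "numeric"); // modern, standards-based hint
zip.setAttribute("maxlength", "5");       // US ZIP codes are five digits
```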

Rapid Competitive Benchmarking

Rapid competitive benchmarking is a quick exercise to compare how your mobile UX stacks up against the competition’s. To do this, pick a couple of primary competitors or services that offer similar functionality, and complete similar scenarios, followed by a quick scan of their functionality. Look for areas where competitors offer a better user experience, and document with notes and screenshots. For a more detailed analysis, compare features to those of key competitors (Harvey Balls7 do a good job of showing the relative completeness of features).

Competitive Benchmarking: Notes for “Find Location” Scenario

UPS:

  • An option exists to view all types of locations, but with no way to distinguish between them.
  • Results are displayed only on a map (no list view).

FedEx has the best store-locator experience among the three:

  • When location services are turned off, the app gives clear instructions on how to turn them on.
  • A single screen contains both “Use current location” and search by ZIP code, with filters to show one or more types of locations.
  • Location results can be viewed as a list or map.
  • Location results show at a glance whether a location is open or closed.
  • Results show multiple types of locations and identify the type of each location.

Overall UX Feedback

The final step in the individual evaluation is to step back and evaluate the big picture. To do this, review the following:

  • how the user installs the app or finds the website;
  • onboarding help, if it’s an app;
  • the cross-channel experience (i.e. comparing the app to the website on different devices);
  • the cross-device experience;
  • reviews in app stores (for apps) and on social networks (for websites and apps);
  • comments and feedback received by email (if you have access to this).

Overall UX Notes

  • When the app first launches, the user is forced to accept the terms and conditions to use the app. (I’ve fought my share of battles with legal departments on this topic as well — and lost many.) However, there are no terms and conditions to accept before using USPS’ mobile website.
  • The app has no onboarding help when first launched, and no help within the app either.

Here are the notes about the cross-channel experience (i.e. between the app, mobile website and desktop website):

  • The logo on the mobile website is low in resolution, with notable pixelation on “Retina” displays.
  • Branding across the three lacks consistency in look and feel.
  • Carrying over shipment-tracking or any personal information between the three channels is not possible.
  • The main functionality is not ordered consistently across channels, nor is key functionality available in all three channels.
  • Touch targets are too close together on the mobile website.

Here are the notes about the cross-device experience:

  • Branding appears on the home screen of the Android app, but not of the iOS app (even though it is shown in Apple’s App Store).

And here are the notes about reviews in Apple’s App Store (negative feedback abounds):

  • Location services are inaccurate, and results could be more relevant.
  • Scanning doesn’t always work.
  • The app freezes and crashes.

Conduct A Team Review

Conduct a team review session to compare, validate and aggregate the findings of the individual evaluations. Evaluators with diverse skills (for example, visual designer, usability analyst) tend to have different areas of focus when conducting evaluations, even though they are using common personas and scenarios and a common evaluation framework.

During the team review, one evaluator should facilitate the discussion, bringing up each problem, verifying whether the other evaluators identified that issue and are in agreement, and then assigning a level of severity to the problem. The evaluators should also identify possible solutions for these issues. The result would be a consolidated list of problems and potential solutions.

For an extended evaluation, invite other designers to the team review session, maybe over an extended catered lunch meeting or towards the end of the day over pizza and drinks. The other designers should have spent some time prior to the session (at least 30 minutes) familiarizing themselves with the website or app. This will enable everyone to explore the website or app together as a team, identify and discuss problems as they find them, and discuss possible solutions.

One evaluator should set the stage by outlining background information and problems identified. This should be followed by a facilitated review of the website or app (often using a structure like the one outlined in “The Elements of the Mobile User Experience”8 to guide the discussion). Assign a team member to document the session, including the problems identified, ideas, questions and solutions.

Download the sample evaluation list9 (XLSX, 10 KB)

Document and Report

The evaluation spreadsheet is a nice way to capture and organize problems and recommendations, but communicating the issues visually is easier. I usually create a slide presentation organized according to the framework in the article linked to above10. One slide is dedicated to each severe problem, with screenshots and callouts to elaborate. Less severe problems are grouped together according to the screens they appear on. Along with each problem and its impact, list actionable recommendations. For detailed evaluations, also mock up key recommendations that address the problems and incorporate best practices.

Begin the presentation with slides that set the context and explain the methodology and approach. Mention that the evaluation focuses on identifying problems, so that members of the design and development team do not start passing around antidepressants when they see the laundry list of problems they have to painstakingly work on.

Conclusion

A mobile UX diagnostic is not a replacement for testing with actual users, but rather is meant to quickly identify problems with a mobile website or app using trained eyes. A diagnostic will uncover most of the top usability problems11, and because it is relatively inexpensive and quick, it can be conducted at multiple points in a user-centered design process. Diagnostics go a long way to improving a mobile experience, reducing flaws and meeting users’ expectations.

Related Resources

  • “Summary of Usability Inspection Methods,” Nielsen Norman Group12
  • “How to Conduct a Heuristic Evaluation,” Nielsen Norman Group13
  • “Conducting Expert Reviews: What Works Best?,” UXmatters14

(da, ml, al, il)

Footnotes

  1. https://econsultancy.com/blog/65041-making-the-most-of-mobile-moments-to-transform-the-customer-experience
  2. http://www.smashingmagazine.com/2011/05/02/a-user-centered-approach-to-web-design-for-mobile-devices/
  3. http://blogs.forrester.com/megan_burns/14-01-21-introducing_forresters_customer_experience_index_2014
  4. http://www.itworld.com/article/2832575/mobile/how-to-grab-a-screenshot-from-iphone–android–and-nearly-any-other-smartphone.html
  5. http://www.smashingmagazine.com/2012/07/12/elements-mobile-user-experience/
  6. http://blogs.forrester.com/adele_sage/10-01-13-announcing_forresters_web_site_user_experience_review_version_80
  7. http://en.wikipedia.org/wiki/Harvey_Balls
  8. http://www.smashingmagazine.com/2012/07/12/elements-mobile-user-experience/
  9. http://provide.smashingmagazine.com/evaluation-issue-list.xlsx
  10. http://www.smashingmagazine.com/2012/07/12/elements-mobile-user-experience/
  11. http://www.measuringusability.com/blog/effective-he.php
  12. http://www.nngroup.com/articles/summary-of-usability-inspection-methods/
  13. http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
  14. http://www.uxmatters.com/mt/archives/2014/01/conducting-expert-reviews-what-works-best.php
  15. http://www.smashingmagazine.com/wp-content/uploads/2014/08/mobile-user-experience-diagnostic-sample-slides2.pdf
  16. http://provide.smashingmagazine.com/evaluation-issue-list.xlsx

The post A Guide To Conducting A Mobile UX Diagnostic appeared first on Smashing Magazine.

See original:  

A Guide To Conducting A Mobile UX Diagnostic