Tag Archives: study


What Happens After The Conversion? How To Optimize Your Marketing Campaigns For Higher Quality Leads

How excited would you be if you doubled the number of leads your marketing campaign was generating in less than a month? What if you found out that the improvement wasn’t an improvement at all, because as lead quantity went up, lead quality was going down? That’s exactly what happened with a campaign I ran once. I can assure you – it’s not fun! One survey of B2B marketers found that their #1 and #2 challenges were generating high-quality leads and converting leads into customers. Your Landing Page Conversion Rate Is Only Half Of The Story: Converting visitors to leads…

The post What Happens After The Conversion? How To Optimize Your Marketing Campaigns For Higher Quality Leads appeared first on The Daily Egg.

Visit source: 

What Happens After The Conversion? How To Optimize Your Marketing Campaigns For Higher Quality Leads


FAST UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Zoe Dimov



Today, UX research has earned wide recognition as an essential part of product and service design. However, UX professionals still seem to face two big problems when it comes to UX research: a lack of engagement from the team and stakeholders, and constant pressure to reduce the time spent on research.

In this article, I’ll take a closer look at each of these challenges and propose a new approach known as ‘FAST UX’ in order to solve them. This is a simple but powerful tool that you can use to speed up UX research and turn stakeholders into active champions of the process.

Contrary to what you might think, speeding up the research process (in both the short and long term) requires effective collaboration, rather than you going away and soldiering on by yourself.

The acronym FAST (Focus, Attend, Summarize, Translate) wraps up a number of techniques and ideas that make the UX process more transparent, fun, and collaborative. I also describe a 5-day project with a central UK government department that shows how the model can be put into practice.

The article is relevant for UX professionals and the people who work with them, including product owners, engineers, business analysts, scrum masters, marketing and sales professionals.

1. Lack Of Engagement Of The Team And Stakeholders

“Stakeholders have the capacity for being your worst nightmare and your best collaborator.”

UIE (2017)

As UX researchers, we need to ensure that “everyone in our team understands the end users with the same empathy, accuracy and depth as we do.” It has been shown that there is no better alternative to increasing empathy than involving stakeholders to actually experience the whole process themselves: from the design of the study (objectives, research questions), to recruitment, set up, fieldwork, analysis and the final presentation.

Anyone who has tried to do this knows that it can be extremely difficult to organize and get stakeholders to participate in research. There are two main reasons for this:

  1. Research is somebody else’s job.
    In my experience, UX professionals are often hired to “do the UX” for a company or organization. Even though the title of “Lead UX Researcher” sounds great and very important in my head, it often leads to misconceptions during kick-off meetings. Everyone automatically assumes that research is solely MY responsibility. It’s no wonder that stakeholders don’t want to get involved in the project. They assume research is my and nobody else’s job.
  2. UX process frameworks are incomplete.
    The problem is that even when stakeholders want to engage and participate in UX, they still do not know *how* they should get involved and *what* they should do. We spend a lot of time selling a UX process and research frameworks that are useful but ultimately incomplete — they do not explain how non-researchers can get involved in the research process.



Fig. 1. Despite our enthusiasm as researchers, stakeholders often don’t understand how to get involved with the research process.

Further, a lot of stakeholders can find words such as ‘design,’ ‘analysis’ or ‘fieldwork’ intimidating or irrelevant to what they do. In fact, “UX is rife with jargon that can be off-putting to people from other fields.” In some situations, terms are familiar but mean something completely different, e.g., research in UX versus marketing research.

2. Pressure To Constantly Reduce The Time For Research

Another issue is the constantly growing pressure to speed up the UX process and reduce the time spent on research. I cannot count the number of times a project manager has asked me to shorten a study even further by skipping the analysis stage or the kick-off sessions.

While previously you could spend weeks on research, a 5-day research cycle is increasingly becoming the norm. In fact, the book Sprint describes how research can dwindle to just a day (from an overall 5-day cycle).

Considering this, there is a LOT of pressure on UX researchers to deliver fast, without compromising the quality of the study. The difficulty increases when there are multiple stakeholders, each with their own opinions, demands, views, assumptions, and priorities.

The FAST UX Approach

Contrary to what you might think, reducing the time it takes to do UX research does not mean that you need to soldier on by yourself. I have done this and it only works in the short term. It does not matter how amazing the findings are — there are not enough PowerPoint slides in the world to convince a team of the urgency to take action if they have not been on the research journey themselves.

In the long term, the more actively engaged your team and stakeholders are in the research, the more empowered they will feel and the more willing they will be to take action. Productive collaboration also means that you can move together at a quicker pace and speed up the whole research process.

The FAST UX Research framework (see Fig. 2 below) is a tool to truly engage team members and stakeholders in a way that turns them into active advocates and champions of the research process. It shows non-researchers when and how they should get involved in UX Research.




Fig. 2. The FAST User Experience Research framework

In essence, stakeholders take ownership of the UX research stages by carrying out four activities, each corresponding to a research stage.

Working together reduces the time it takes for UX Research. The true benefit of the approach, however, is that, in the long term, it takes less and less time for the business to take action based on research findings as people become true advocates of user-centricity and the research process.

This approach can be applied to any qualitative research method and with any team. For example, you can carry out FAST usability testing, FAST interviews, FAST ethnography, and so on. In order to be effective, you will need to explain this approach to your stakeholders from the start. Talk them through the framework, explaining each stage. Emphasize that this is what EVERYONE does, that it’s their work as much as the UX researcher’s job, and that it’s only successful if everyone is involved throughout the process.

Stage 1: Focus (Define A Common Goal)

There is broad consensus within UX that a research project should start by defining its purpose: why is this research being done, and how will the results be acted upon?




Fig. 3. Focus is about defining clear objectives and goals for the research and it’s ultimately the team’s and all stakeholders’ shared responsibility to do this.

Generally, this is expressed within the research goals, objectives, research questions and/or hypotheses. Most projects start with a kick-off meeting where those are either discussed (based on an available brief) or are defined during the meeting.

The most common problem with kick-off sessions like these is that stakeholders come up with too many things they want to learn from a study. The way to turn the situation around is to assign a specific task to your immediate team (other UX professionals you work with) and stakeholders (key decision makers): they will help focus the study from the beginning.

The way they will do that is by working together through the following steps:

  1. Identify as a group the current challenges and problems.
    Ask someone to take notes on a shared document; alternatively, ask everyone to participate and write on sticky notes which are then displayed on a “project wall” for everyone to see.
  2. Identify the potential objectives and questions for a research study.
    Do this the same way you did the previous step. You don’t need to commit to anything yet.
  3. Prioritize.
    Ask the team to order the objectives and questions, starting with the most important ones.
  4. Reword and rephrase.
    Look at the top 3 questions and objectives. Are they too broad or too narrow? Could they be reworded so that the focus of the study is clearer? Are they feasible? Do you need to split or merge objectives and questions?
  5. Commit to be flexible.
    Agree on the top 1-2 objectives and ensure that you have agreement from everyone that this is what you will focus on.

Here are some questions you can ask to help your stakeholders and team to get to the focus of the study faster:

  • From the objectives we have identified, what is most important?
  • What does success look like?
  • If we only learn one thing, which one would be the most important one?

Your role during the process is to provide expertise in order to:

  • Determine whether the identified objectives and questions are feasible for a single study;
  • Help with the wording of objectives and questions;
  • Design the study (including selecting a methodology) after the focus has been identified.

At first sight, the Focus and Attend (next stages) activities might be familiar as you are already carrying out a kick-off meeting and inviting stakeholders to attend research sessions.

However, adopting a FAST approach means that your stakeholders have as much ownership as you do during the research process because work is shared and co-owned. Reiterate that the process is collaborative and at the end of the session, emphasize that agreeing on clear research objectives is not easy. Remind everybody that having a shared focus is already better than what many teams start with.

Finally, remind the team and your stakeholders what they need to do during the rest of the process.

Stage 2: Attend (Immerse The Team Deeply In The Research Process)

Seeing first hand the experience of someone using a product or service is so rich that there is no substitute for it. This is why getting stakeholders to observe user research is still considered one of the best and most powerful ways to engage the team.




Fig. 4. Attend in FAST UX Research is about encouraging the team and stakeholders to be present at all research sessions, but also to be actively engaged with the research.

What often happens is that observers join in on the day of the research study and then they spend the time plastered to their laptops and mobile phones. What is worse, some stakeholders often talk to the note-taker and distract the rest of the design team who need to observe the sessions.

This is why it is just as important that you get the team to interact with the research. The following activities allow the team to immerse themselves in the research session. You can ask stakeholders to:

  • Ask questions during the session through a dedicated live chat (e.g. Slack, Google Hangouts, Skype);
  • Take notes on sticky notes;
  • Summarize observations for everyone (see next stage).

Assign one person per session for each of these activities. Have one “live chat manager,” one “note-taker,” and one “observer” who will sum up the session afterwards.

Rotate people for the next session.

Before the session, it’s useful to walk observers through the ‘ground rules’ very briefly. You can have a poster similar to the one GDS developed that will help you do this and remind the team of their role during the study (see Fig. 5 below).




Fig. 5. A poster can be hung in the observation room and used to remind the team and stakeholders of their responsibilities and the ground rules during observation.

Farrell (2017) provides more detail on effective ways for stakeholders to take notes together. When you have multiple stakeholders and it’s not feasible for them to physically attend a field visit (e.g. on the street, in an office, at the home of the participant), you could stream the session to an observation room.

Stage 3: Summarize (Analysis For Non-Researchers)

I am a strong supporter of the idea that analysis starts the moment fieldwork begins. During the very first research session, you start looking for patterns and interpreting what your data means.




Fig. 6. Summarize in FAST UX Research is about asking the team and your stakeholders to tell you about what they thought were the most interesting aspects of user research.

Even after the first session (but typically towards the end of fieldwork) you can carry out collaborative analysis: a fun and productive way that ensures that you have everyone participating in one of the most important stages of research.

The collaborative analysis session is an activity where you provide an opportunity for everyone to be heard and create a shared understanding of the research.

Since you’re including other experts’ perspectives, you increase the chances of identifying more objective and relevant insights, and of stakeholders acting upon the results of the study.

Even though ‘analysis’ is an essential part of any research project, a lot of stakeholders get scared by the word. The activity sounds very academic and complex. This is why at the end of each research session, research day, or the study as a whole, the role of your stakeholders and immediate team is to summarize their observations. Summarizing may sound superfluous but is an important part of the analysis stage; this is essentially what we do during “Downloading” sessions.

Listening to someone’s summary provides you with an opportunity to understand:

  • What they paid attention to;
  • What is important for them;
  • Their interpretation of the event.

Summary At The End Of Each Session

You do this by reminding everyone at the beginning of the session that at the end you will enter the room and ask them to summarize their observations and recommendations.

You then end the session by asking each stakeholder the following:

  • What were their key observations (see also Fig. 5)?
    • What happened during the session?
    • Were there any major difficulties for the participant?
    • What were the things that worked well?
  • Was there anything that surprised them?

This will make the team more attentive during the session, as they know that they will need to sum it up at the end. It will also help them to internalize the observations (and later, transition more easily to findings).

This is also the time to consistently share with your team what you think stands out from the study so far. Avoid the temptation to do a ‘big reveal’ at the end. It’s better if outcomes are told to stakeholders many times.

On multiple occasions, research has given me great outcomes that I kept to myself until the final report instead of sharing them regularly. It didn’t work well. A big reveal at the end leads to bewildered stakeholders who often cannot jump from observations to insights as quickly. As a result, there is either stubborn pushback or indifferent shrugs.

Summary At The End Of The Day

A summary of the event or the day can then naturally transition into a collaborative analysis session. Your job is to moderate the session.

The job of your stakeholders is to summarize the events of the day and the final results. Ask a volunteer to talk the group through what happened during the day. Other stakeholders can then add to these observations.

Summary At The End Of The Study

After the analysis is done, ask one or two stakeholders to summarize the study. Make sure they cover why the research was done, what happened during the study, and what the primary findings are. They can also do this by walking through the project wall (if you have one).

It’s very difficult not to talk about your research and leave someone else to do it. But it’s worth it. No matter how much you’re itching to do this yourself — don’t! It’s a great opportunity for people to internalize research and become comfortable with the process. This is one of the key moments to turn stakeholders into active advocates of user research.

At the end of this stage, you should have 5-7 findings that capture the study.

Stage 4: Translate (Make Stakeholders Active Champions Of The Solution)

“Research doesn’t have a value unless it results in decisions and actions.”

—Lang and Howell (2017).

Even when you agree with the findings, stakeholders might still disagree about what the research means or lack commitment to take further action. This is why after summarizing, ask your stakeholders to work with you and identify the “Now what?” or what it all means for the organization, product, service, team and/or individually for each one of them.




Fig. 7. Translate in FAST UX Research is about asking the team or individual stakeholders to discuss each of the findings and articulate how it will impact the business, the service or product, and their work.

Traditionally, it was the UX researchers’ job to write clear, precise, descriptive findings and actionable recommendations. However, if the team and stakeholders are not part of identifying actionable recommendations, they might be resistant to change in the future.

To prevent later pushback, ask stakeholders to identify the “Now what?” (also referred to as ‘actionable recommendations’). Together, you’ll be able to identify how the insights and findings will:

  • Affect the business and what needs to be done now;
  • Affect the product/service and what changes need to be made;
  • Affect people individually and the actions they need to take;
  • Lead to potential problems and challenges and their solutions;
  • Help solve problems or identify potential solutions.

Stakeholders and the team can translate the findings at the end of a collaborative analysis session.

If you decide to separate the activities and conduct a meeting in which the only focus is on actionable recommendations, then consider the following format:

  1. Briefly talk through the 5-7 main findings from the study (as a refresher if this stage is done separately from the analysis session or with other stakeholders).
  2. Split the group into teams and ask them to work on one finding/problem at a time.
  3. Ask them to list as many ways they see the finding affecting them.
  4. Ask one person from each group to present the findings back to the team.
  5. Ask one or two final stakeholders to summarize the whole study, together with the methods, findings, and recommendations.

Later, you can have multiple similar workshops; this is how you get to engage different departments from the organization.

FAST UX In Practice

An excellent example of a FAST UX Research approach in practice is a project I was hired to carry out for a central UK government department. The ultimate goal of the project was to identify user requirements for a very complex internal system.

At first sight, this was a very challenging project because:

  • There was no time to get to know the department or the client.
    Usually, I would have at least a week or two to get to know the client, their needs, opinions, internal pressures, and challenges. For this project, I had to start work on Monday with a team I had never met, in a building I had never worked in, in a domain I knew little about, and finish on Friday of the same week.
  • The system was very complex and required intense research.
    The internal system and the nature of work were very complex; this required gathering data with at least a few research methods (for triangulation).
  • This was the first time the team had worked with a UX Researcher.
    The stakeholders were primarily IT specialists. However, I was lucky that they were very keen and enthusiastic to be involved in the project and get their hands dirty.
  • Stakeholder availability.
    As is the case on many other projects, all stakeholders were extremely busy as they had their own work on top of the project. Nonetheless, we made it work, even if it meant meeting over lunch, or for a 15-minute wrap up before we went home.
  • There were internal pressures and challenges.
    As with any department and huge organization, there were a number of internal pressures and challenges. Some of them I expected (e.g. legacy systems, slow pace of change) but some I had no clue about when I started.
  • We had to coordinate work with external teams.
    An additional challenge was the need to work with and coordinate efforts with external teams at another UK department.

Despite all of these challenges, this was one of the most enjoyable projects I have worked on because of the tight collaboration initiated by the FAST approach.

The project consisted of:

  • 1 day of kick-off sessions and getting to know the team,
  • 2.5 days of contextual inquiries and shadowing of internal team members,
  • Half a day for a co-creation workshop, and
  • 1 day for analysis and results reporting.

In the process, I gathered data from 20+ employees, logged 16+ hours of observations, and collected 300+ photos and about 100 pages of notes. This is a great example of cramming 3 weeks’ worth of work into a mere 5-day research cycle. More importantly, people in the department were really excited about the process.

Here is how we did it using a FAST UX Research approach:

  • Focus
    At the beginning of the project, the two key stakeholders identified what the focus of the research would be, while my role was mainly to help prioritize the objectives, tweak the research questions, and check for feasibility. In this sense, I listened and mainly asked questions, interjecting occasionally with examples from previous projects or options that helped to adjust our approach.

    While I wrote the main discussion guide for the contextual inquiries and shadowing sessions, we sat together with the primary team to discuss and design the co-creation workshop with internal users of the system.

  • Attend
    During the workshop, one of the stakeholders moderated half of the session, while the other took notes and closely observed the participants. It was a huge success internally, as stakeholders felt there was better visibility for their efforts to modernize the department, while employees felt listened to and involved in the research.
  • Summarize
    Immediately after the workshop, we sat together with the stakeholders for a 30-minute meeting where I had them summarize their observations.

    As a result of the shadowing, contextual inquiries and co-creation workshop, we were able to identify 60+ issues and problems with the internal system (with regards to integration, functionality, and usability), all captured in six high-level findings.

  • Translate
    Later, we discussed with the team how each of the six major findings translated to a change or implication for the department, the internal system, as well as collaboration with other departments.

We were so perfectly aligned with the team that when we had to talk about our work in front of another UK government department, I could let the stakeholders talk about the process and our progress.

My final task (over two additional days) was to document all of the findings in a research report. This was necessary as a knowledge repository because I had to move onto other projects.

With a more traditional approach, the project could have easily spanned 3 weeks. More importantly, quickly understanding individual and team pressures and challenges was key to the success of the new system. This could not have happened within the allocated time without a collaborative approach.

A FAST UX approach resulted in tight collaboration, strong co-ownership, and a shared sense of progress; all of these allowed us to shorten the project timeline and also instilled a feeling of excitement about the UX research process.

Have You Tried It Out Already?

As UX research becomes ever more popular, gone are the days when we could soldier on by ourselves and only consult stakeholders at the end.

Mastering our craft as UX researchers means engaging others within the process and being articulate, clear, and transparent about our work. The FAST approach is a simple model that shows how to engage non-researchers with the research process. Reducing the time it takes to do research, both in the short (i.e. the study itself) and long term (i.e. using the research results), is a strategic advantage for the researcher, team, and the business as a whole.

Would you like to improve your efficiency and turn stakeholders into user research advocates? Go and try it out. You can then share your stories and advice here.

I would love to hear your comments, suggestions, and any feedback you care to share! If you have tried it out already, do you have success stories you want to share? Be as open as you can — what worked well, and what didn’t? As with all other things UX, it’s most fun if we learn together as a team.



Visit source: 

FAST UX Research: An Easier Way To Engage Stakeholders And Speed Up The Research Process

Case Study: Getting consecutive +15% winning tests for software vendor, Frontline Solvers

The post Case Study: Getting consecutive +15% winning tests for software vendor, Frontline Solvers appeared first on WiderFunnel Conversion Optimization.

Original post: 

Case Study: Getting consecutive +15% winning tests for software vendor, Frontline Solvers

7 Ways to Ensure Your Next Webinar is a Success


Webinars are one of the most popular tools used by marketers for lead generation. Not only are they great for generating demand, but they’re also a less pushy way of nurturing cold leads, because you’re offering information that your audience will value. You can also use webinars to demonstrate your expertise and showcase your knowledge of the industry and domain. However, webinars can only be truly beneficial for your company if they are planned and implemented well. Here, we’ll take a look at some of the things you need to do to ensure…

The post 7 Ways to Ensure Your Next Webinar is a Success appeared first on The Daily Egg.

View this article:  

7 Ways to Ensure Your Next Webinar is a Success


UX At Scale 2017: Free Webinars To Get Scaling Design Right

Design doesn’t scale as cleanly as engineering. It’s not enough for each element and page to be consistent with the others — the much bigger challenge lies in keeping the sum of the parts intact, too, and accomplishing that with a lot of designers involved in the same project.
If you’re working in a growing startup or a large corporation, you probably know the issues that come with this: the big picture falls from view easily as everyone focuses on the details they are responsible for, and the vision of the design might be interpreted differently, too.

See original: 

UX At Scale 2017: Free Webinars To Get Scaling Design Right

How to Get Links For (Almost) Free: SEO Hacks For Tight Budgets


SEO can be expensive, but there are plenty of promotional opportunities that are free and cost-effective if you focus on value exchange and relationships. (Never invest in low-quality spam though — make sure any link building is strategic and carefully supervised). Warnings aside, there are ways to hack around tight budgets and achieve more for less with your link building. You don’t always have to invest in expensive advertising and publishing fees to gain good SEO traction. Here are some ways to get links, mentions, and shares without spending too much money (or time). Most of these tactics will help…

The post How to Get Links For (Almost) Free: SEO Hacks For Tight Budgets appeared first on The Daily Egg.

View original: 

How to Get Links For (Almost) Free: SEO Hacks For Tight Budgets

How pilot testing can dramatically improve your user research

Reading Time: 6 minutes

Today, we are talking about user research, a critical component of any design toolkit. Quality user research allows you to generate deep, meaningful user insights. It’s a key component of WiderFunnel’s Explore phase, where it provides a powerful source of ideas that can be used to generate great experiment hypotheses.

Unfortunately, user research isn’t always as easy as it sounds.

Do any of the following sound familiar:

  • During your research sessions, your participants don’t understand what they have been asked to do?
  • The phrasing of your questions has given away the answer or has caused bias in your results?
  • During your tests, it’s impossible for your participants to complete the assigned tasks in the time provided?
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results?

If you’ve experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments, where they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments. It’s called pilot testing.

Pilot testing is a rehearsal of your research study. It allows you to test your research approach with a small number of test participants before the main study. Although this may seem like an additional step, it may, in fact, be the time best spent on any research project.
Just like proper experiment design is a necessity, investing time to critique, test, and iteratively improve your research design, before the research execution phase, can ensure that your user research runs smoothly, and dramatically improves the outputs from your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with the process

At WiderFunnel, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (Methodology, Tools, Participant Target Profile)
  2. Pilot testing of research design
  3. Recruiting qualified research participants
  4. Execution of research
  5. Analyzing the outputs
  6. Reporting on research findings
User Research Process at WiderFunnel

Each part of this process can be discussed at length, but, as I said, this post will focus on pilot testing.

Your research should always start by asking the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

WiderFunnel’s pilot testing process consists of two phases: 1) an internal research design review and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight or, in some cases, the result of working with a large group of stakeholders across different departments, each trying to push their own agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret, thus limiting the number of actionable insights generated.

To overcome this, WiderFunnel uses the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often this involves removing a large number of the questions from our initial list and reworking those that remain.

Phase 2: The second phase of WiderFunnel’s research pilot testing consists of participant pilot testing.

This follows a rapid and iterative approach, where we pilot our defined research approach on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design “bugs” have been ironed out, much like QA-ing a new experiment. There are different criteria you can use to test the research experience, but we focus on testing three main areas: clarity of instructions, participant tasks and questions, and the research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment

Let’s look at an example.

Recently, a client approached us to do research on a new area of their website that they were developing for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

In addition, we were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined the research tasks and flow and the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order not to bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that they complemented each other, and keep them focused on achieving the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. To overcome this we got creative and used some free tools to test the research design.

We chose to use a Keynote presentation (with timed transitions) and its Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the participant pilot testing. Using these tools, we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the criteria for the target participants. The pilot test immediately pointed out flaws in the research design, which included confusion regarding the test instructions and issues with the timing for each task.

In this case, our initial instructions did not provide our participants with enough information on the context of what they were looking for, resulting in confusion about what they were actually supposed to do. Additionally, we made an initial assumption that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content-rich, and 5 seconds did not give participants enough time to view all the content on the page.

With these insights, we adjusted our research design to remove the flaws, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous “bugs”.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment. Through our initial pilot tests, we realized that participants expected a set function for each page. For the landing page, participants expected a page that grabbed their attention and attracted them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.


The seemingly ‘failed’ result of the pilot test actually gave us a huge Aha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.

Nick So, Director of Strategy, WiderFunnel

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought.

By conducting pilot testing you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Pilot testing your research, just like proper experiment design, is essential. Yes, this will require an investment of both time and effort. But trust us, that small investment will deliver significant returns on your next research project and beyond.

Do you agree that pilot testing is an essential part of all research projects?

Have you had an “oh-shoot” research moment that could have been prevented by pilot testing? Let us know in the comments!

The post How pilot testing can dramatically improve your user research appeared first on WiderFunnel Conversion Optimization.

Credit: 

How pilot testing can dramatically improve your user research



How to Leverage eCommerce Conversion Optimization Through Different Channels to Maximize Growth

Note: This is a guest article written by Sujan Patel, co-founder of Web Profits. Any and all opinions expressed in the post are Sujan’s.


“If you build it, they will come” only works in the movies. In the real world, if you’re serious about e-commerce success, it’s up to you to grab the CRO bull by the horns and make the changes needed to maximize your growth.

Yet, despite the potential of conversion rate optimization to have a major impact on your store’s bottom line, only 59% of respondents to an Econsultancy survey see it as crucial to their overall digital marketing strategy. And given that what’s out of sight is out of mind, you can bet that many of the remaining 41% of businesses aren’t prioritizing this strategy with the importance it deserves.

Implementing an e-commerce CRO program may seem complex, and it’s easy to get overwhelmed by the number of possible things to test. To simplify your path to proper CRO, we’ve compiled a list of ways to optimize your site by channel.

This list is by no means exhaustive; every marketing channel supports as many opportunities for experimentation as you can dream up. Some of these, however, are the easiest to put into practice, especially for new e-commerce merchants. Begin with the tactics described here, and when you’re ready to take your campaigns to the next level, check out the following resources:

On-Page Optimization

Your website’s individual pages represent one of the easiest opportunities for implementing a conversion optimization campaign, thanks to the breadth of technology tools and the number of established testing protocols that exist currently.

These pages can also deliver some of the fastest results, thanks to the direct impact your changes can have on whether or not website visitors choose to buy.

Home Page

A number of opportunities exist for making result-driven changes to your site’s home page. For example, you can test:

  • Minimizing complexity: According to ConversionXL, “simple” websites are scientifically better.
  • Increasing prominence and appeal of CTAs: If visitors don’t like what you’re offering as part of your call-to-action (or worse, if they can’t find your CTA at all), test new options to improve their appeal.
  • Testing featured offers: Even template e-commerce shops generally offer a spot for featuring specific products on your store’s home page. Test which products you place there, the price at which you offer them, and how you draw attention to them.
  • Testing store policies: Free shipping is known to reduce cart abandonment. Implement consumer-friendly policies and test the way you feature them on your site.
  • Trying the “five-second test”: Can visitors recall what your store is about in 5 seconds or less? Attention spans are short, and you might not have longer than that to convince a person to stick around. Tools like UsabilityHub can get you solid data.

Home Page Optimization Case Study

Antiaging skincare company NuFACE made the simple change of adding a “Free Shipping” banner to its site header.

Original


Test Variation


The results of making this change alone were a 90% increase in orders (with a 96% confidence level) and a 7.32% lift in the average order value.
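A quick aside on how a “confidence level” like this is typically calculated: most testing tools run something close to a two-proportion z-test on the control and variation conversion rates. The minimal Python sketch below shows the idea; the visitor and order counts are hypothetical numbers chosen for illustration, not NuFACE’s actual data.

    from math import sqrt, erf

    def lift_confidence(visitors_a, orders_a, visitors_b, orders_b):
        """Return the observed lift and the confidence that B beats A,
        using a one-sided two-proportion z-test."""
        rate_a = orders_a / visitors_a
        rate_b = orders_b / visitors_b
        pooled = (orders_a + orders_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / se
        confidence = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z
        lift = (rate_b - rate_a) / rate_a
        return lift, confidence

    # Hypothetical traffic: 10,000 visitors per variation
    lift, conf = lift_confidence(10_000, 100, 10_000, 190)
    print(f"Lift: {lift:.0%}, confidence: {conf:.1%}")  # -> Lift: 90%, confidence: 100.0%

The exact figure a platform reports depends on the statistic it uses, but the principle holds: more visitors and a larger gap between the two rates mean higher confidence in the observed lift.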

Product Pages

If you’re confident about your home page’s optimization, move on to getting the most out of your individual product pages by testing your:

  • Images and videos
  • Copy
  • Pricing
  • Inclusion of social proof, reviews, and so on

Product Page Optimization Case Study

Underwater Audio challenged itself to simplify the copy on its product comparison page, testing the new page against its original look.

Original


Test Variation


This cleaner approach increased website sales for Underwater Audio by 40.81%.

Checkout Flow

Finally, make sure customers aren’t getting hung up in your checkout flow by testing the following characteristics:

Checkout Flow Optimization Case Study

A Scandinavian gift retailer, nameOn, reduced the number of CTAs on their checkout page from 9 to 2.

Original


Test Variation


Making this change led to an estimated $100,000 in increased sales per year.

Lead Nurturing

Proper CRO doesn’t just happen on your site. It should be carried through to every channel you use, including email marketing. Give the following strategies a try to boost your odds of driving conversions, even when past visitors are no longer on your site.

Email Marketing

Use an established email marketing program to take the steps below:

Case Study

There are dozens of opportunities to leverage email to reach out to customers. According to Karolina Petraškienė of Soundest, sending a welcome email results in:

“4x higher open rates and 5x higher click rates compared to other promotional emails. Keeping in mind that in e-commerce, average revenue per promotional email is $0.02, welcome emails on average result in 9x higher revenue — $0.18. And if it’s optimized effectively, revenue can be as high as $3.36 per email.”
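To put those per-email figures in context, here is a small, back-of-the-envelope calculation in Python. The $0.02 baseline and the 9x multiplier come from the quote above; the 5,000 monthly signups are an assumed number used purely for illustration.

    promo_rev_per_email = 0.02                       # avg revenue per promotional email (quoted above)
    welcome_rev_per_email = promo_rev_per_email * 9  # welcome emails earn ~9x more, i.e. $0.18

    monthly_signups = 5_000                          # hypothetical store
    extra_revenue = monthly_signups * (welcome_rev_per_email - promo_rev_per_email)
    print(f"Extra revenue from welcome emails: ${extra_revenue:,.0f}/month")  # -> $800/month

Even at these modest per-email values, a well-optimized welcome series adds up quickly as your list grows.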

Live Chat

LemonStand shares that “live chat has the highest satisfaction levels of any customer service channel, with 73%, compared with 61% for email and 44% for phone.” Add live chat to your store and test the following activities:

Case Study

LiveChat Inc.’s report on chat greeting efficiency shares the example of The Simply Group, which uses customized greetings to assist customers having problems at checkout. Implementing live chat has enabled them to convert every seventh greeting to a chat, potentially saving sales that would otherwise be lost.

Content Marketing

Content marketing may be one of the most challenging channels to optimize for conversions, given the long latency periods between reading content pieces and converting. The following strategies can help:

  • Tie content pieces to business goals.
  • Incorporate content upgrades.
  • Use clear CTAs within content.
  • Test content copy, messaging, use of social proof, and so on.
  • Test different distribution channels and content formats.

Case Study

ThinkGeek uses YouTube videos as a fun way to feature their products and funnel interested prospects back to their site. Their videos have been so successful that they’ve accumulated 180K+ subscribers who tune in regularly for their content.


Post-Acquisition Marketing

According to Invesp, “It costs five times as much to attract a new customer, than to keep an existing one.” Continuing to market to past customers, either in the hopes of selling new items or encouraging referrals, is a great way to boost your overall performance.

Advocacy

Don’t let your CRO efforts stop after a sale has been made. Some of your past clients can be your best sources of new customers, if you take the time to engage them properly.

  • Create an advocacy program: Natural referrals happen, but having a dedicated program turbocharges the process.
  • Test advocacy activation programs: Install a dedicated advocacy management platform like RewardStream or ReferralSaaSquatch and test different methods for promoting your new offering to customers with high net promoter scores.
  • Test different advocate incentives: Try two-way incentives, coupon codes, discounted products, and more.
  • Invest in proper program launch, goal-setting, and ongoing evaluation/management: Customer advocacy programs are never truly “done.”

Case Study

Airbnb tested its advocacy program invitation copy and got better results with the more unselfish version.


Reactivation

As mentioned above in the funnel-stage email recommendation, reactivation messages can be powerful drivers of CRO success.

Pay particular attention to these 2 activities:

  • Setting thresholds for identifying inactive subscribers (see the sketch after this list)
  • Building an automated reactivation workflow that’s as personalized as possible
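As a rough sketch of the first activity, the snippet below flags subscribers who have shown no engagement within an assumed 90-day window. The threshold, the Subscriber fields, and the sample data are hypothetical and not tied to any particular email platform’s data model.

    from dataclasses import dataclass
    from datetime import date, timedelta

    INACTIVITY_THRESHOLD = timedelta(days=90)  # assumed cutoff; tune it to your purchase cycle

    @dataclass
    class Subscriber:
        email: str
        last_open: date        # date of the last opened email
        last_purchase: date    # date of the last order

    def is_inactive(sub: Subscriber, today: date) -> bool:
        """Inactive = no email opens and no purchases within the threshold window."""
        last_activity = max(sub.last_open, sub.last_purchase)
        return today - last_activity > INACTIVITY_THRESHOLD

    subscribers = [
        Subscriber("a@example.com", date(2017, 1, 10), date(2016, 11, 2)),
        Subscriber("b@example.com", date(2017, 5, 28), date(2017, 5, 1)),
    ]
    today = date(2017, 6, 15)
    reactivation_list = [s.email for s in subscribers if is_inactive(s, today)]
    print(reactivation_list)  # -> ['a@example.com'], ready for a personalized win-back email

The resulting list is what feeds the second activity: an automated reactivation workflow that personalizes the message as much as possible.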

Case Study

RailEasy increased opens by 31% and bookings by 38% with a reactivation email featuring a personalized subject line.


Internal Efforts

Lastly, make CRO an ongoing practice by prioritizing it internally, rather than relegating it to “something the marketing department does.”

Ask CRO experts, and they’ll tell you that beyond the kinds of tactics and strategies described above, having a culture of experimentation and testing is the most important step you can take to see results from any CRO effort.

Here’s how to do it:

Have an idea for another way CRO can be used within e-commerce organizations? Leave your suggestions in the comments below.


The post How to Leverage eCommerce Conversion Optimization Through Different Channels to Maximize Growth appeared first on VWO Blog.

This article is from: 

How to Leverage eCommerce Conversion Optimization Through Different Channels to Maximize Growth

Infographic: How To Create The Perfect Social Media Post

It’s easy to get into the habit of robotically posting content to social media every day. However, how you post to social media is just as important as the content itself. You need people to click on your post to see your content. So, before you rush around doing your daily social media tasks, pause, take a step back and think about how you can improve what you’re doing. Study this infographic and use it as a cheat sheet the next time you post to social media. One last hint: It’s a good idea to measure your social media efforts…

The post Infographic: How To Create The Perfect Social Media Post appeared first on The Daily Egg.

Continued here – 

Infographic: How To Create The Perfect Social Media Post